For counting the number of trailing zeros in a number:

def cal(l):
    r = 0
    o = 0
    while r == 0:
        r = l % 10
        o += 1
        l = int(l / 10)
    return o - 1

So when the input is greater than 10^23, the output is 1. Why does this happen?
int(l / 10) uses floating-point division, which loses precision as your numbers grow larger. Change it to l // 10, which is integer (floor) division and has no accuracy issues.
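To see the rounding in action, here is a minimal sketch, assuming CPython's standard 64-bit IEEE 754 floats:

n = 10**24
# True division converts to float first; 10**23 is not exactly
# representable in 64 bits, so the result is rounded:
print(int(n / 10))   # 99999999999999991611392
print(n // 10)       # 100000000000000000000000 (exact)

After that rounded division, the next iteration sees a last digit of 2, the loop exits, and cal returns o - 1 == 1.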
Do not use arithmetic operations to count characters. Use string operations instead: for example str.count for simple cases, or re.findall when you need the power of regular expressions:
import re

for n in [1, 0.1, 10.0, 10**23, 10**24]:
    print(str(n).count('0'))
    # or, with a regular expression:
    # print(len(re.findall(r'0', str(n))))

# Output:
# 0
# 1
# 2
# 23
# 24
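Note that str.count tallies every zero in the number, whereas the original cal counts only trailing zeros. If trailing zeros are what you want, a string-based sketch (trailing_zeros is an illustrative name, not a library function):

def trailing_zeros(n):
    s = str(n)
    # Strip trailing '0' characters; the length difference is the count
    return len(s) - len(s.rstrip('0'))

print(trailing_zeros(10**24))  # 24
print(trailing_zeros(1200))    # 2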