10000000000 cannot be represented as a 32-bit integer (Integer.MAX_VALUE is 2,147,483,647), so the compiler rejects the bare literal. Hence, we have to represent it as a 64-bit integer, using the long data type and appending L at the end of the number. Note also that the constant for the billions place should be 1000000000 (nine zeros); the extra zero in 10000000000 would throw the arithmetic off by a factor of ten. With both fixes applied:
long billion  = n / 1000000000L;                                         // billions group
long million  = (n - billion * 1000000000L) / 1000000;                   // millions group
long thousand = (n - billion * 1000000000L - million * 1000000) / 1000;  // thousands group
long rest     = n - billion * 1000000000L - million * 1000000 - thousand * 1000;
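To see the literal rule by itself (the variable name here is only for illustration):

// long x = 10000000000;   // does not compile: "integer number too large"
long x = 10000000000L;     // fine: the L suffix makes it a 64-bit long literal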
By following this rule, the errors are resolved. However, lines such as result = three(billion) + " Billion"; and result += three(million) + " Million"; still produce errors, because three is not defined anywhere. Can you clarify what three is?
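If three is meant to convert a three-digit group (0-999) into words, a minimal sketch could look like the following. Everything here is an assumption on my part; the arrays and the method body are not from your code:

private static final String[] ones = {
    "", " One", " Two", " Three", " Four", " Five", " Six", " Seven",
    " Eight", " Nine", " Ten", " Eleven", " Twelve", " Thirteen",
    " Fourteen", " Fifteen", " Sixteen", " Seventeen", " Eighteen", " Nineteen"
};
private static final String[] tens = {
    "", "", " Twenty", " Thirty", " Forty", " Fifty",
    " Sixty", " Seventy", " Eighty", " Ninety"
};

// Assumed helper: turns a group of up to three digits (0-999) into words.
// Returns an empty string for 0, so empty groups add nothing to the result.
private static String three(long group) {
    StringBuilder sb = new StringBuilder();
    if (group >= 100) {
        sb.append(ones[(int) (group / 100)]).append(" Hundred");
        group %= 100;
    }
    if (group >= 20) {
        sb.append(tens[(int) (group / 10)]);
        group %= 10;
    }
    sb.append(ones[(int) group]);  // covers 0-19, including the teens
    return sb.toString().trim();
}

With such a helper, three(567) would return "Five Hundred Sixty Seven", so three(million) + " Million" reads naturally.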
Hope this helps!