# Don’t Write CENTS-less Code When Calculating Money

In my last blog, Avoid Magic Numbers in Writing Code (Medium, May 2021), I wrote about a number that you shouldn’t be using in your code. This time, I have another number, or more accurately, a number format, to avoid: expressing money as floating-point dollars when doing precise monetary calculations. Instead, when something costs $19.95, store the value as 1995 cents, and either use a formatting method or divide by 100 when you display it. The reason is that computers store numbers in binary. If you are not interested in the math, skip to the conclusion. Otherwise, keep reading.
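As a quick sketch of the cents-as-integers approach (Python is used here only for illustration; the same idea works in any language):

```python
# Store $19.95 as 1995 cents; all arithmetic stays exact because
# integers have no binary rounding error.
price_cents = 1995
quantity = 3
total_cents = price_cents * quantity  # 5985, exactly

# Divide by 100 (or use a formatting method) only at display time.
print(f"Total: ${total_cents / 100:.2f}")  # Total: $59.85
```

The key point is that the division by 100 happens once, at the output boundary, not inside the calculation.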

Consider what happens when you try to divide 1 by 10 in binary (hexadecimal is just a condensed form of binary). Because 2 is the only prime factor of base 2, a fraction terminates in binary only when its denominator, in lowest terms, is a power of 2. The denominator 10 contains a factor of 5, so 1/10 repeats infinitely and the computer has to truncate it. That truncation causes rounding errors when the computer converts the number to binary and back to decimal.
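You can see these rounding errors directly in any language with binary floating point; a small Python sketch:

```python
# Binary floating point cannot represent 0.1 exactly, so errors
# creep into arithmetic that looks exact in decimal.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# The error compounds: adding a dime ten times does not make a dollar.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```

Ten cents plus twenty cents not equaling thirty cents is exactly the kind of bug that surfaces in billing code.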

Besides the solution mentioned in the introduction, you could use an object that stores a decimal as an exact fraction for precise calculations. Depending on your programming language, there may be a built-in data type and/or library that can help you. However, converting dollars to cents (that is, decimals to integers) is easy and works in any language.

Whether you convert dollars to cents or use dedicated methods or objects to handle precise decimal computations, remember to handle them in ways that make *cents* for your purpose. Unless, of course, you want to be featured in the Error’d column of The Daily WTF.