I need help with an algorithm to map decimal values to binary integers. Here is the situation: say you have a list of decimal numbers, (0.15, 0.015, 0.85). If you were to convert these directly to binary integers, each would of course truncate to 0.
However, let's assume that we decide to use a max of two bits; the only numbers we can represent are 0, 1, 2, and 3 (00, 01, 10, 11). Since our decimal numbers are constrained between 0 and 1, we can simply divide that interval into 4 equal parts and approximate with an if block:
so all numbers from 0 up to .25 map to 00, from .25 up to .5 map to 01, .5 up to .75 map to 10, and .75 to 1 map to 11.
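The two-bit case above can be sketched directly as an if/elif chain (the function name is just for illustration):

```python
def two_bit_quantize(x):
    # Map x in [0, 1] onto the four 2-bit codes by quarter-width bins.
    if x < 0.25:
        return 0b00
    elif x < 0.5:
        return 0b01
    elif x < 0.75:
        return 0b10
    else:
        return 0b11
```

So `two_bit_quantize(0.15)` gives 0 and `two_bit_quantize(0.85)` gives 3, matching the ranges above.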
Now of course that is not very precise, but it illustrates what I want to do. To increase accuracy we just increase the number of bits used and divide the 0-to-1 space more finely. On a 32-bit processor we can have 32 1's, i.e. 2^32 = 4294967296 distinct integers (0 through 4294967295).
1 divided by 4294967295 is about 2.3283064370807974e-10; that is a pretty fine division, but I don't want to write an if statement that long. What is the best way to figure out which bin a decimal falls into and map it to the correct binary representation?
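One common way to avoid the long if chain is to multiply by the number of bins and take the floor, which computes the bin index directly in constant time. A minimal sketch (the function name and the clamping choice for x == 1.0 are my assumptions, not from the post):

```python
def quantize(x, bits):
    # Divide [0, 1] into 2**bits equal-width bins and return the
    # index of the bin containing x, as an integer of that many bits.
    levels = 1 << bits            # 2**bits bins
    i = int(x * levels)           # floor picks the bin directly
    return min(i, levels - 1)     # clamp so x == 1.0 maps to the top code
```

With `bits=2` this reproduces the if-block above (`quantize(0.15, 2)` is 0, `quantize(0.85, 2)` is 3), and `bits=32` gives the fine-grained division without any branching.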
2004-11-12 Edited by Arunbear:
- Changed title from 'Mapping Algorithm', as per Monastery guidelines
- added <p> tags to improve readability