That makes sense, although it would be an interesting exercise to compare both algorithms, written in C, with a good optimising compiler (Intel's on x86 for example).
With FP units on board every modern CPU, the cost difference between FP and integer operations has narrowed considerably, especially when pipelining can be used to good advantage.
I've read some articles that make the case for dropping the distinctions between integer, float, and double in programming languages and just using the FP processor's native size (80-bit on Intel) for all program-level numerical quantities. The slight drop in performance for heavy integer math can be more than compensated for by removing all the decision points--what type of number is this? Does it need to be extended? Will it/did it overflow? etc.
Perl threw the float away years ago; why not bin the (internal) integers as well and make full use of the hardware's FP precision, saving all the conversions that take place between 64-bit doubles and 80-bit internals?
Makes perfect sense to me.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
I do not grow strong in the language; I grow old and fade away. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.