With respect, if your floating-point code creates "a whole mess" in scientific applications, you are not writing good-quality code.
Numerical analysts have recognized the issues that arise from rounding, truncation, and quantization errors for a very long time, certainly since before the invention of electronic computers. For roughly 200 years, since the days of Gauss at least, techniques and algorithms have been developed to minimize inaccuracy in chains of computations on finite representations of real numbers.
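One classic example of such a technique is compensated (Kahan) summation, which carries a running correction term so rounding error does not accumulate across a long chain of additions. A minimal sketch in Python (the function name and test data are mine, for illustration):

```python
import math

def kahan_sum(xs):
    """Compensated (Kahan) summation over an iterable of floats."""
    total = 0.0
    comp = 0.0                   # running compensation for lost low-order bits
    for x in xs:
        y = x - comp             # apply the correction to the next addend
        t = total + y            # big + small: low-order bits of y are lost here
        comp = (t - total) - y   # algebraically recover the lost bits
        total = t
    return total

data = [0.1] * 10**7             # 0.1 is not exactly representable in binary
exact = math.fsum(data)          # correctly rounded reference sum
print(abs(sum(data) - exact))        # naive sum: noticeable drift
print(abs(kahan_sum(data) - exact))  # compensated sum: error near zero
```

Running this shows the naive left-to-right sum drifting measurably from the correctly rounded result, while the compensated sum stays essentially exact.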
Anyone who wishes to find out more on the subject should consult texts on numerical analysis. There are many good ones available.