For a .exposed domain, it's not really shocking.
The really shocking fact about floating point is that it's used at all.
It throws the most basic property operations on numbers should have, associativity, out of the window, all for a gain in dynamic range that isn't necessary most of the time.
The associativity we expect to hold is (a+b)+c == a+(b+c) and (ab)c == a(bc), and neither holds for floats, even though most math formulas and compiler optimizations rely on them. It's a sad miracle that everything somehow still works out OK most of the time.
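A quick Python check makes the point (the values 0.1, 0.2, 0.3 are just a convenient illustration, any values whose sums round differently will do):

    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                 # 0.6000000000000001
    print(a + (b + c))                 # 0.6
    print((a + b) + c == a + (b + c))  # False: addition is not associative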
You lose determinism with respect to compiler optimizations, and platform reproducibility if processors don't exactly respect IEEE 754 (or is it IEEE 854?).
The real problem comes when you want to use parallelism. With things like atomic operations and multiple processors doing things out of order, you lose determinism and reproducibility, or you add a need for synchronisation or casting operations everywhere.
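Here's a rough sketch of why, no actual threads needed (the pairwise helper is hypothetical, it just mimics how a parallel reduction splits the work into halves instead of summing left to right):

    import random

    random.seed(0)
    xs = [random.uniform(-1e10, 1e10) for _ in range(100_000)]

    # Sequential left-to-right sum, the order a single thread would use.
    left_to_right = 0.0
    for x in xs:
        left_to_right += x

    # Tree-shaped sum, the order a parallel reduction tends to produce.
    def pairwise(vals):
        if len(vals) == 1:
            return vals[0]
        mid = len(vals) // 2
        return pairwise(vals[:mid]) + pairwise(vals[mid:])

    # Same values, different grouping: the results will almost certainly differ.
    print(left_to_right == pairwise(xs))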
Even more problematic is that, because numeric operations are used so often, they are set in stone and implemented at the hardware level. And they use many more transistors, because they are more complex than integer arithmetic.
Real programmers don't use floating point; only sloppy, lazy ones do.
Real programmers use fixed-point representations and make sure the bounds don't overflow/underflow unexpectedly, something like the sketch below.
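A minimal sketch of what that looks like (the names SCALE, to_fixed, fixed_mul and the Q16.16 format are my own illustration, not a library): values are stored as plain integers scaled by 2**16, so addition is exact and associative, and the only rounding happens where you explicitly ask for it.

    SCALE = 1 << 16  # Q16.16: 16 integer bits, 16 fractional bits

    def to_fixed(x: float) -> int:
        return round(x * SCALE)

    def fixed_add(a: int, b: int) -> int:
        return a + b  # exact; it's on you to keep the result within bounds

    def fixed_mul(a: int, b: int) -> int:
        return (a * b) >> 16  # rescale; truncation is the explicit, predictable rounding

    def to_float(a: int) -> float:
        return a / SCALE

    a, b, c = map(to_fixed, (0.1, 0.2, 0.3))
    print(fixed_add(fixed_add(a, b), c) == fixed_add(a, fixed_add(b, c)))  # True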
Let's ban all hardware floating-point implementations: just imagine future alien archeologists having a laugh at us when they look at our chips and think, "No wonder they were doomed, they couldn't even do a+b right: their foundations were built on sand."