A thing that never stops infuriating me when writing C: I'm *PRETTY* sure that after it does an integer multiplication, a modern CPU knows whether an overflow occurred and sets a specific status bit somewhere, but there doesn't seem to be any way to extract that information from a high-level programming language.
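For what it's worth, GCC and Clang do expose exactly this through a non-standard builtin, and C23 standardizes the same idea as `ckd_mul` in `<stdckdint.h>`. A minimal sketch using the GCC/Clang extension (a compiler extension, not portable standard C):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int64_t a = INT64_MAX / 3, b = 5, result;

    /* GCC/Clang extension: returns true if the multiplication overflowed,
       and typically compiles down to a check of the CPU's own overflow
       flag rather than a separate wide multiply. */
    if (__builtin_mul_overflow(a, b, &result)) {
        puts("overflow detected");
    } else {
        printf("product: %lld\n", (long long)result);
    }
    return 0;
}
```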
@mcc There are lots of similar mismatches.
Lack of access to Not-a-Number (NaN) values.
Real numbers are limited to Real64 when x86 hardware has supported Real80 for decades.
Not just in C; C# has similar inherited limitations.
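For concreteness, a sketch of the usual C workarounds for both complaints: `memcpy` type-punning to reach the NaN bits, and `long double`, which maps to the x87 80-bit format on some ABIs (x86 GCC/Clang) but to plain 64-bit double on others (MSVC):

```c
#include <float.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* NaN: standard C gives you isnan() and the NAN macro, but no portable
       way at the payload bits; type-punning through memcpy is the usual
       workaround (well-defined, unlike a pointer cast). */
    double d = nan("");                 /* a quiet NaN */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    printf("isnan: %d, raw bits: 0x%016llx\n",
           isnan(d), (unsigned long long)bits);

    /* Real80: 64 here means the x87 80-bit extended format; 53 means
       long double is just a 64-bit double on this platform. */
    printf("long double mantissa bits: %d\n", LDBL_MANT_DIG);
    return 0;
}
```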
@hallam I find it more excusable in C# because C# is advertised as a virtual-machine language that makes sacrifices for portability. But the only reason we theoretically use C is that it's "close to the metal"! We pick C for its metal-closeness, and then it turns out the things that are particularly close to the metal are exactly the ones the standard leaves as UB.
OTOH, I disagree, for example, with the standard's choice to make integer overflow UB instead of implementation-defined (ID). Yes, different architectures behave differently on over/underflow depending on signedness and the hardware. But making it ID would let users leverage hardware-specific capabilities with ease, whereas making it UB introduces all kinds of subtle bugs instead.
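To make the contrast concrete, here is a sketch of what getting defined wraparound costs you under today's rules (the helper name is mine, not from the thread): you have to detour through unsigned arithmetic, whose overflow *is* defined to wrap.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Signed overflow is UB, so to get the two's-complement wraparound the
   hardware already performs, route the math through unsigned types. The
   conversion back to int32_t is itself implementation-defined in C, but
   wraps on every mainstream compiler: exactly the kind of usable ID
   behavior the post describes. */
static int32_t wrapping_add(int32_t a, int32_t b) {
    return (int32_t)((uint32_t)a + (uint32_t)b);
}

int main(void) {
    printf("%" PRId32 "\n", wrapping_add(INT32_MAX, 1)); /* -2147483648 */
    return 0;
}
```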
@ekg @oblomov @mcc @hallam It matters a lot. If something returns an implementation-defined value, for example, it is still valid code and must still produce a value. Undefined behavior is very different: writing code that has UB is unsound, because compilers can and will assume it never happens.
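A minimal sketch of that "assume it never happens" behavior; whether the check actually gets deleted depends on the compiler and optimization level:

```c
#include <limits.h>
#include <stdio.h>

/* Because signed overflow is UB, the compiler may assume x + 1 > x holds
   for every signed x. At -O2, GCC and Clang typically fold this function
   to a constant 1, silently deleting the overflow test the programmer
   thought they wrote. */
int looks_like_an_overflow_check(int x) {
    return x + 1 > x;
}

int main(void) {
    /* Often prints 1 even for INT_MAX, where wraparound "should" make it 0. */
    printf("%d\n", looks_like_an_overflow_check(INT_MAX));
    return 0;
}
```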