A thing that never stops infuriating me when writing C: I'm *PRETTY* sure that after it does an integer multiplication, a modern CPU knows whether an overflow occurred and sets a specific status bit somewhere, but there doesn't seem to be any way to extract that information from a high-level programming language.
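(For what it's worth, compiler extensions do surface this even though standard C historically didn't: GCC and Clang provide __builtin_mul_overflow, and C23 standardizes the same idea as ckd_mul in <stdckdint.h>. A minimal sketch, assuming GCC or Clang:)

```c
/* Sketch: non-standard builtin that exposes the overflow result the
 * hardware already computed (GCC >= 5, Clang). */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t a = INT64_MAX / 3, b = 5, result;

    /* Returns true if the multiplication overflowed; the (possibly
     * wrapped) product is stored in result either way. */
    if (__builtin_mul_overflow(a, b, &result)) {
        printf("overflowed\n");
    } else {
        printf("product = %lld\n", (long long)result);
    }
    return 0;
}
```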
@mcc There are lots of similar mismatches.
Lack of access to Not-a-Number (NaN) values.
Floating-point numbers are limited to Real64 when the hardware has supported Real80 for decades.
Not just in C; C# has similar inherited limitations.
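(Side note, not from the thread: C99 does give partial, platform-dependent access to both of these. Whether long double actually maps to 80-bit extended precision is up to the ABI, and <float.h>/<math.h> will tell you what you got. A small sketch:)

```c
/* Sketch: probing what the implementation actually gives you.
 * LDBL_MANT_DIG == 64 is the usual signature of x87 80-bit extended. */
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    /* long double is whatever the ABI says: 80-bit extended on most
     * x86 Linux targets, plain 64-bit double under MSVC, 128-bit on
     * some other platforms. */
    printf("long double mantissa bits: %d\n", LDBL_MANT_DIG);

    /* NaN is reachable via the NAN macro and isnan(), but the payload
     * bits are not portably accessible. */
    double not_a_number = NAN;
    printf("isnan: %d\n", isnan(not_a_number));
    return 0;
}
```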
@hallam I find it more excusable in C# because C# is advertised as a virtual-machine language that makes sacrifices for portability. But the only reason we theoretically use C is that it's "close to the metal"! We pick C for its metal-closeness, and then it turns out the only things that are particularly close to the metal are UB.
OTOH, I disagree, for example, with the standard's choice to make integer overflow UB instead of ID. Yes, different architectures behave differently on over/underflow, depending on signedness and the hardware. Making it ID would let users leverage hardware-specific capabilities with ease; making it UB instead introduces all kinds of subtle bugs.
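(The classic example of the kind of subtle bug meant here, sketched under GCC/Clang at -O2; not from the thread, just an illustration:)

```c
/* Sketch: because signed overflow is UB, the compiler may assume x + 1
 * never wraps and fold this check to "always false", silently removing
 * the programmer's guard. */
#include <limits.h>
#include <stdio.h>

static int will_increment_overflow(int x) {
    return x + 1 < x;   /* relies on wraparound: UB for signed int */
}

int main(void) {
    /* With optimization enabled, GCC/Clang typically print 0 here even
     * though INT_MAX + 1 would wrap on the actual hardware.
     * The UB-free check would be: x == INT_MAX. */
    printf("%d\n", will_increment_overflow(INT_MAX));
    return 0;
}
```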
@ekg @mcc @hallam
No. UB is for stuff that isn't supposed to happen, so if it happens “anything goes”. ID means “each implementation defines, and documents, what happens in these circumstances”. The implementation has to commit to *some* documented behavior, which is exactly what makes the two different.
Of course, if your code depends on ID behavior it's not portable, but within that implementation you can expect it to behave “correctly” (i.e. as documented).
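(A quick sketch contrasting the two categories with common standard-C examples, not taken from the thread:)

```c
/* Implementation-defined: the result is whatever the compiler documents,
 * and it stays that way on that implementation.
 * Undefined: the standard places no requirements at all. */
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Implementation-defined: right-shifting a negative signed value.
     * GCC documents an arithmetic shift (result -1); another compiler
     * may choose differently, but it must document *something*. */
    int id_shift = -1 >> 1;
    printf("-1 >> 1 = %d\n", id_shift);

    /* Implementation-defined: converting an out-of-range value to a
     * signed type (most compilers document two's-complement wrapping). */
    int id_convert = (int)(unsigned int)UINT_MAX;
    printf("converted = %d\n", id_convert);

    /* Undefined: signed overflow. No documentation obligation, no
     * guarantees; the optimizer may assume it never happens. */
    /* int ub_example = INT_MAX + 1;  -- do not rely on any outcome */

    return 0;
}
```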