A thing that never stops infuriating me when writing C is that I'm *PRETTY* sure, after it does an integer multiplication, a modern CPU knows whether an overflow occurred and sets a specific status bit somewhere, but there doesn't seem to be any way to extract that information in a high-level programming language.
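
(There are escape hatches these days, to be fair: GCC and Clang expose the flag through builtins, and C23 finally standardizes the idea as `ckd_mul` in `<stdckdint.h>`. A minimal sketch with the GCC/Clang builtin:)

```c
#include <stdbool.h>
#include <stdio.h>

/* Returns true if a * b overflowed; the (truncated, wrapped) product
   is written to *out either way. C23's ckd_mul behaves the same. */
static bool mul_checked(int a, int b, int *out)
{
    return __builtin_mul_overflow(a, b, out);
}

int main(void)
{
    int r;
    if (mul_checked(1 << 20, 1 << 20, &r)) /* 2^40 overflows int */
        puts("overflow");
    else
        printf("%d\n", r);
    return 0;
}
```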

@mcc There are lots of similar mismatches.

Lack of access to Not-a-Number values.
Real numbers are limited to Real64 when the hardware has supported Real80 for decades.

Not just in C, C# has similar inherited limitations.
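
(C does half-expose the 80-bit format through `long double`, though whether you actually get it is itself implementation-defined; x86 Linux typically gives you x87 extended precision, while MSVC aliases it to plain 64-bit double. A quick probe:)

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* 53 mantissa bits = IEEE binary64; 64 = x87 80-bit extended. */
    printf("double:      %d mantissa bits\n", DBL_MANT_DIG);
    printf("long double: %d mantissa bits\n", LDBL_MANT_DIG);
    return 0;
}
```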

@hallam I find it more excusable in C# because C# is advertised as a virtual-machine language that makes sacrifices for portability. But the only reason we theoretically use C is that it's "close to the metal"! We pick C for its metal-closeness, but then it turns out the only things that are particularly close to the metal are UB (undefined behavior).

@mcc @hallam
BTW I _seriously_ hate how many things in C are UB when they should be Implementation Defined instead.

@oblomov @mcc @hallam aren't UB an implementation defined synonymous in the standard?

@ekg @mcc @hallam
No. UB is for stuff that isn't supposed to happen, so if it happens “anything goes”. ID means “each implementation defines what happens in these circumstances”. An implementation can define such behavior to be undefined, of course, but in general they are different.
Of course if your code depends on ID-behavior it's not portable, but within that implementation you can expect it to behave “correctly” (i.e. as documented).
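
(A concrete pair of examples, assuming a typical two's-complement target:)

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Implementation-defined: right-shifting a negative value. Each
       compiler must document the result; virtually all do an
       arithmetic shift, printing -4. */
    int a = -8;
    printf("%d\n", a >> 1);

    /* Implementation-defined: converting an out-of-range value to a
       signed type. Most implementations wrap, printing -1. */
    signed char c = 255;
    printf("%d\n", c);

    /* Undefined: signed overflow. The compiler may assume the line
       below is unreachable and optimize on that basis. */
    /* int boom = INT_MAX + 1; */
    return 0;
}
```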

@oblomov @mcc @hallam yeah, but my compiler can define any UB as well. Therefore, does it matter what the standard says?

@ekg @oblomov @mcc @hallam it matters a lot. If something returns an implementation-defined value, for example, it is still valid code and must still return a value. Undefined behavior is very different: writing code that has UB is unsound, as compilers can and will assume it never happens.

@ekg @oblomov @mcc @hallam for a concrete example: signed integer overflow is undefined. This means the compiler will assume that `(i + 1) > i` always evaluates to true. If the value you got after overflow were implementation-defined instead, the compiler could not make that assumption.
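
(You can watch this happen in the generated assembly; a sketch, since exact codegen varies by compiler and flags:)

```c
#include <stdbool.h>

/* Signed: overflow is UB, so GCC/Clang at -O2 fold this to an
   unconditional `return true`; the i == INT_MAX case is assumed
   never to happen. */
bool always_true(int i)
{
    return (i + 1) > i;
}

/* Unsigned: wraparound is well-defined, so the comparison must be
   emitted for real -- this returns false when i == UINT_MAX. */
bool sometimes_true(unsigned i)
{
    return (i + 1) > i;
}
```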

@dotstdy @ekg @mcc @hallam

OTOH, I disagree with, for example, the standard's choice to make integer overflow UB instead of ID. Yes, different architectures behave differently on over/underflow depending on signedness and hardware. But making it ID would let users leverage hardware-specific capabilities with ease, whereas making it UB introduces all kinds of subtle bugs instead.
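
(Compilers already offer ID-style overflow as an opt-in, e.g. GCC and Clang's `-fwrapv` and `-ftrapv` flags; a sketch:)

```c
/* overflow.c -- what i + 1 does here depends on how you compile it:
 *   cc -O2          overflow.c   # UB: anything goes
 *   cc -O2 -fwrapv  overflow.c   # defined: wraps to INT_MIN
 *   cc -O2 -ftrapv  overflow.c   # defined: aborts at runtime
 */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int i = INT_MAX;
    printf("%d\n", i + 1);
    return 0;
}
```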

@oblomov @ekg @mcc @hallam yes I'm not really trying to argue that it's a good idea, just that it's fundamentally different from implementation defined.

@dotstdy @ekg @mcc @hallam oh yes, agreed on that. It's just that the specific example is one of my pet peeves among the debatable choices the committee made ;-)

@dotstdy @oblomov I meant specifically whether the standard calls it undefined or implementation-defined. Of course, what is undefined matters a lot.
