@jzb the problem is not Wayland, just like it is not systemd. The problem is trying to force people to adopt it or die.
I have use cases that are inimical to the fundamental assumptions Wayland has, and pretty much no 3D-accelerated hardware across my machine park, so it’s never going to be an option for me. So don’t make me "use it or die". Just keep X11 working and we’re good.
@mirabilos @jzb You can run most if not all Wayland compositors without 3D acceleration, provided your Linux kernel is new enough to provide SimpleDRM. But, given your profile picture, maybe you don't use Linux?
@datenwolf @newbyte @mirabilos @jzb I have used KiCad and created projects in it under Wayland with no troubles whatsoever.
You know that Xwayland exists and isn't going anywhere, right? It's not some temporary glue to ease the migration; it's the main still-maintained X11 implementation these days, and it's here to stay.
@datenwolf @dos @newbyte @mirabilos @jzb and yet kicad works just fine
Also Xwayland… yay, let's just glue one broken thing (the Xorg implementation of X11) to another broken thing (the Wayland protocol).
What the Phoenix guys are doing is much, much better. It's also more resource-efficient, since the Wayland design is already running into memory bottlenecks at 4k and 8k display resolutions.
@datenwolf @newbyte @mirabilos @jzb Maybe in your world dma-buf passing weighs more with higher resolutions, but in the real world, when you want high performance, you end up with things like Gamescope.
That's not what I was hinting at.
Socratic question: what are the memory requirements for a two-image swap chain of a fullscreen window at an 8k display resolution in R10G10B10A2 pixel format?
How many clients running (and compositing) at that resolution fit in your typical GPU's VRAM?
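Spelling the arithmetic out (my own back-of-envelope, assuming linear, uncompressed buffers with no tiling overhead):

```python
# Back-of-envelope VRAM cost of a 2-image swap chain at 8k UHD,
# R10G10B10A2 = 10+10+10+2 bits = 4 bytes per pixel.
# (Assumes linear, uncompressed buffers; real allocators add padding.)
width, height = 7680, 4320
bytes_per_pixel = 4
images = 2                    # double-buffered swap chain

per_image = width * height * bytes_per_pixel
total = per_image * images

print(per_image / 2**20)      # 126.5625  -> ~126.6 MiB per image
print(total / 2**20)          # 253.125   -> ~253 MiB per fullscreen client
print((8 * 2**30) // total)   # 32        -> such clients in 8 GiB of VRAM
```

So on a typical 8 GiB card, a few dozen fullscreen 8k clients exhaust VRAM on swap chains alone, before any actual workload allocates a byte.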
@datenwolf @dos @jzb @newbyte 8k? So about 100x80 pixels? That’s not a large screen.
You know what I mean:
8k UHD = 7680 × 4320
@dos @jzb @newbyte @datenwolf no, I don’t. I do not use "entertainment/consumer TV" sizes. I use computer monitors, in which the resolution is given by width and height. (And, ideally, dpi. All three* are relevant.)
*) Yes, I know dpi is strictly two values, but on all practical monitors at the moment they are sufficiently close.
@datenwolf @newbyte @mirabilos @jzb Ah, so you're not anti-Wayland, but anti-composition?
Then you'll be relieved to hear that Wayland is designed in a way that makes composition unnecessary in the case you described :)
Make the windows slightly smaller than fullscreen and slightly offset from each other, so that some pixels of each window remain visible.
Or more devious: Make it a pyramid stacking where each lower window on the Z-stack sticks out by some pixels to the side of the window on top of it.
Wayland doesn't have the concept of pixel ownership based clipping of window contents. So every window by necessity will get a complete surface.
@datenwolf @newbyte @mirabilos @jzb So first it was supposedly about bandwidth (which is not an issue in this case either thanks to damage tracking), but now it's suddenly about RAM usage which you'd need to pay up one way or another anyway if you wanted to implement features commonly expected from desktop environments these days? 🤔
Read my post again. I didn't mention bandwidth at all. I wrote bottleneck, which can also mean running out of RAM.
And which features of desktop environments would that be, that are expected?
Effects? You mean distractions.
Window content previews? Sure, let's take a whole, huge window and scale it down to a thumbnail, things will remain readable. Of course.
Client side decoration? eff those. I want my windows to have titles telling me what they are.
However, when compositing you're also paying the full memory-bandwidth cost for the parts that actually undergo compositing.
And if it's done using some 3D API, better hope that the people developing the compositor know what they're doing and don't mindlessly blend two-triangle quads on top of each other (like e.g. Hyprland does, which is full of hilariously bad patterns in its use of OpenGL).
@datenwolf @newbyte @mirabilos @jzb Yeah, let's instead have the apps OOM once you drag their windows or view them in an expose-like arrangement 😂
OOM situations due to overloading with windows are not a problem if you do graphics the old-school way:
Single display screen framebuffer, windows cut areas from that framebuffer and drawing operations undergo the pixel ownership test.
This isn't magic or rocket science; it was all figured out in the 1980s.
Every graphics system worth its salt did it that way. And it even plays nicely with 3D acceleration.
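A toy sketch of that pixel-ownership model, with made-up names and no real server code behind it: one shared framebuffer for the whole screen, a per-pixel owner map, and drawing operations that skip pixels the window doesn't own.

```python
# Toy model of old-school pixel-ownership clipping (illustrative only):
# one shared framebuffer for the whole screen, plus a per-pixel owner map.
# A window's drawing op writes a pixel only if the window still owns it.
W, H = 16, 8
framebuffer = [[0] * W for _ in range(H)]
owner = [[None] * W for _ in range(H)]   # which window owns each pixel

def map_window(win_id, x, y, w, h):
    # The window mapped last (topmost) grabs ownership of its rectangle.
    for j in range(y, y + h):
        for i in range(x, x + w):
            owner[j][i] = win_id

def fill_rect(win_id, x, y, w, h, color):
    # Drawing undergoes the pixel ownership test: obscured pixels are skipped.
    for j in range(y, y + h):
        for i in range(x, x + w):
            if owner[j][i] == win_id:
                framebuffer[j][i] = color

map_window("A", 0, 0, 10, 6)     # window A
map_window("B", 4, 2, 10, 6)     # window B, mapped later, overlaps A
fill_rect("A", 0, 0, 10, 6, 1)   # A paints itself; pixels B owns are skipped
fill_rect("B", 4, 2, 10, 6, 2)
# Total pixel storage stays one framebuffer, no matter how many windows exist.
```

The point of the sketch: storage is O(screen size), not O(sum of window sizes), and no per-window backing buffer ever exists.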
@datenwolf @newbyte @mirabilos @jzb Except it's not 1980 anymore. Buffers to be presented on screen come from all sorts of sources - CPUs, GPUs, VPUs, ISPs. Some can render directly to a shared framebuffer, some can't. Some could be cropped to save memory, some can't - and when they can, they'd need to reallocate.
Yes, you can design a system that will consume less resources if you make it special-purposed to the set of requirements you happen to care about. Go ahead. Wayland is not that thing.
@datenwolf @newbyte @mirabilos @jzb (that said, a non-composited shared-framebuffer system could be easily built on top of Wayland anyway)
Ok, show me: how would you pass a Wayland surface that has a clip region attached and let the client mmap it so that the clipped-out regions don't consume memory (without forcing clip-region row start and end addresses to fall onto page boundaries, and without implying narrow window clips spanning multiple rows)?
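To make the page-boundary constraint concrete, here's a toy calculation (4 KiB pages and a linear 4-bytes-per-pixel buffer assumed; the clip numbers are invented for illustration). Reclaiming physical memory from a mapping, madvise(MADV_DONTNEED)-style, works on whole pages only, and a clip edge almost never lands on one:

```python
# Why clipped-out regions rarely save physical memory in an mmap'd buffer:
# page-granular reclaim only frees pages that lie ENTIRELY inside the
# clipped byte range. (Illustrative arithmetic, 4 KiB pages assumed.)
PAGE = 4096
width, bpp = 7680, 4            # one 8k-wide row, 4 bytes per pixel
stride = width * bpp            # 30720 bytes per row

# Clip out columns 100..3999 of one row (hypothetical clip region):
row_start = 0                   # buffer offset of this row
clip_start = row_start + 100 * bpp    # byte 400
clip_end = row_start + 4000 * bpp     # byte 16000

first_full_page = -(-clip_start // PAGE) * PAGE   # round up:   4096
last_full_page_end = (clip_end // PAGE) * PAGE    # round down: 12288

reclaimable = max(0, last_full_page_end - first_full_page)
print(clip_end - clip_start)    # 15600 bytes clipped out
print(reclaimable)              # 8192 bytes actually reclaimable (2 pages)
```

Roughly half the clipped bytes stay resident here, and a clip narrower than a page that spans many rows reclaims nothing at all.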
@datenwolf @newbyte @mirabilos @jzb Buffer passing mechanics are extensible in Wayland. Even dma-bufs are in their own extension, the only one in the core is wl_shm (which, BTW, works by giving the client a buffer to mmap to...).
What's more - some toolkits, such as Qt, even have elaborate plugin support to handle custom buffer passing mechanisms that you could use.
I won't implement this for you as I'm not interested in it, but you can just sit down and do it!
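For reference, the wl_shm-style handoff boils down to something like this sketch (a temp file as a portable stand-in for memfd_create; the variable names are illustrative, not actual libwayland API):

```python
# Sketch of a wl_shm-style buffer handoff: the "client" allocates a
# shareable memory region, maps it, draws into it, and hands the fd over;
# the "compositor" maps the same fd and sees the same pixels.
# (Illustrative stand-in only -- not the real Wayland protocol calls.)
import mmap, os, tempfile

width, height, bpp = 64, 32, 4
size = width * height * bpp

fd, path = tempfile.mkstemp()
os.unlink(path)                 # name gone, the fd keeps the file alive
os.ftruncate(fd, size)

# "Client" side: map the buffer and draw one pixel.
client = mmap.mmap(fd, size)
offset = (3 * width + 5) * bpp            # pixel (5, 3)
client[offset:offset + 4] = b"\xff\x00\x00\xff"

# "Compositor" side: maps the same fd, sees the same bytes.
compositor = mmap.mmap(fd, size)
print(compositor[offset:offset + 4])      # b'\xff\x00\x00\xff'
```

In real clients the fd travels over the Wayland socket via SCM_RIGHTS rather than living in one process, but the mmap-both-sides mechanics are the same.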
So I do it… and then? Then only programs that actually know about these extensions and actually use them will give me their benefits.
Every program whose developers didn't care to go the extra mile will waste memory simply by creating toplevel windows on screen.
Memory that's not available for doing actual work (like visualizing data).
Old-school graphics systems give the benefits to all applications, without extra effort.
@datenwolf @newbyte @mirabilos @jzb ...and outright don't work when presented with modern challenges, unless you're willing to put in the extra effort or compromise on performance as well :)
Outright don't work?
Okay, what exactly doesn't "work" with X11? And please don't list shortcomings of Xorg that could have been addressed a long time ago and that are perfectly fixable within X11.
@datenwolf @newbyte @mirabilos @jzb So how are you going to offload surfaces to hardware planes while preserving the ability to save memory from unused portions of the window? How are you going to utilize the display engine's framebuffer compression to hit bandwidth targets on high-res screens?
X11 can do lots of things, but only once you make it move away from these "old-school" ways and explode the complexity.
Display engine framebuffer compression: first I'd ask myself why I'd want to apply that to single windows only, instead of going for the worst-case scenario of whole-screen content updates every frame. Going for that case, the whole shared screen framebuffer goes through the compression. Also – as DarkShikari so rightfully scolded me some 16 years ago – don't even bother with explicit damage regions; it doesn't matter whether you compare pixels against pixels or clip.
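A toy of that compare-pixels-instead-of-damage-regions idea (my own sketch, not anyone's actual implementation):

```python
# Toy version of implicit damage tracking: instead of clients reporting
# explicit damage rectangles, diff the new frame against the previous one
# and only "transmit" the rows that actually changed. (Illustrative only.)
def changed_rows(prev, curr):
    return [y for y, (a, b) in enumerate(zip(prev, curr)) if a != b]

W, H = 8, 6
prev_frame = [[0] * W for _ in range(H)]
curr_frame = [row[:] for row in prev_frame]
curr_frame[2][3] = 1            # one pixel changed in row 2
curr_frame[4][0] = 7            # and one in row 4

print(changed_rows(prev_frame, curr_frame))   # [2, 4]
```

The compare costs a read pass over both frames, but it catches all damage with zero protocol cooperation from clients – which is the point being argued above.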
@datenwolf @newbyte @mirabilos @jzb Well, maybe because the compressed buffer is produced by the GPU and the display engine knows how to decompress it on the fly while it's fully opaque to the CPU?
"Offload surfaces to hardware planes while saving the memory of the regions beneath."
This is a little ambiguous to me; there are at least three different ways I can read it. Please clarify.
@datenwolf @newbyte @mirabilos @jzb Modern display engines provide some in-hardware composition abilities. You're explicitly not interested in window composition, fine, but applications want to draw over buffers that are opaque to them, such as video content, which can then go straight from the VPU to the display engine without CPU or GPU involvement.
That's well-supported today with Wayland and necessary for reasonable performance on some hardware out there.
This is the cost/benefit calculation I do with regard to Wayland vs. X11:
Wayland gives me very little, but, if it wants to be resource-efficient in general-purpose use, it piles a huge combinatorial explosion of extensions onto clients and compositors, vastly inflating the complexity of the whole system.
X11 (and Win32 GDI, for that matter) gives me a reasonably well-designed resource-allocation model that everything just uses.
@datenwolf @newbyte @mirabilos @jzb I have debugged plenty of X11 and Wayland client code in my life and having to deal with "combinatorial explosion" of stuff definitely describes the experience of working with the former more than the latter.
Yes, I know.
I've been doing graphics programming since my early teenage years, some 30 years ago, and these days I'm spending most of my time developing scientific and medical imaging applications that do real-time on-GPU processing and visualization of data at bandwidths that saturate the PCIe links to the GPUs.
Let me put it this way: over the years I've developed fine-tuned instincts for what are good and bad designs in graphics. Wayland is a bad design, IMHO.
@dos @newbyte @mirabilos @jzb
Please read what the KiCad developers have to say about Wayland:
https://www.kicad.org/blog/2025/06/KiCad-and-Wayland-Support/