@jzb the problem is not Wayland, just like it is not systemd. The problem is trying to force people to adopt it or die.
I have use cases that are inimical to the fundamental assumptions Wayland makes, and pretty much no 3D-accelerated hardware across my fleet of machines, so it’s never going to be an option for me. So don’t make me "use it or die". Just keep X11 working and we’re good.
@mirabilos @jzb You can run most if not all Wayland compositors without 3D acceleration, provided your Linux kernel is new enough to provide SimpleDRM. But given your profile picture, maybe you don't use Linux?
You've entirely missed the point, or built a strawman here. The problem isn't (lack of) 3D acceleration, but that the whole conceptual design of Wayland has been broken from day 0. People were pointing out the shortcomings all along and were mostly ignored.
Wayland's design breaks things.
Certain applications, like KiCad, will not support Wayland anytime soon, because it breaks the way KiCad cooperates with window managers.
In Wayland-land there's no such cooperation.
@datenwolf @newbyte @mirabilos @jzb I have used KiCad and created projects in it under Wayland with no troubles whatsoever.
You know that Xwayland exists and isn't going anywhere, right? It's not some temporary glue to ease the migration, it's the main still maintained X11 implementation these days and it's here to stay.
Also Xwayland… yay, let's just glue one broken thing (the Xorg implementation of X11) onto another broken thing (the Wayland protocol).
What the Phoenix guys are doing is much, much better. It's also more resource-efficient, since the Wayland design is already running into memory bottlenecks at 4K or 8K display resolutions.
@datenwolf @newbyte @mirabilos @jzb Maybe in your world dma-buf passing weighs more at higher resolutions, but in the real world, when you want high performance, you end up with things like Gamescope.
That's not what I was hinting at.
Socratic question: what are the memory requirements for a two-image swap chain of a fullscreen window at 8K display resolution in the R10G10B10A2 pixel format?
How many clients running (and compositing) at that resolution fit in a typical GPU's VRAM?
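A back-of-the-envelope answer to that question (a sketch, assuming 8K UHD means 7680×4320 and that R10G10B10A2 packs into 4 bytes per pixel):

```python
# Assumed figures: 8K UHD is 7680x4320; R10G10B10A2 is 10+10+10+2 = 32 bits.
width, height = 7680, 4320
bytes_per_pixel = 4
buffers = 2  # two-image (double-buffered) swap chain

per_window = width * height * bytes_per_pixel * buffers
print(per_window / 2**20)  # 253.125 MiB per fullscreen client
```

At roughly 253 MiB per fullscreen client, four such clients already consume about 1 GiB of VRAM before any actual application data is stored.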
@datenwolf @newbyte @mirabilos @jzb Ah, so you're not anti-Wayland, but anti-composition?
Then you'll be relieved to hear that Wayland is designed in a way that makes composition unnecessary in the case you described :)
Make the windows slightly smaller than fullscreen and slightly offset from each other, so that some pixels of each window remain visible.
Or more devious: Make it a pyramid stacking where each lower window on the Z-stack sticks out by some pixels to the side of the window on top of it.
Wayland doesn't have the concept of pixel-ownership-based clipping of window contents. So every window by necessity will get a complete surface.
@datenwolf @newbyte @mirabilos @jzb So first it was supposedly about bandwidth (which is not an issue in this case either thanks to damage tracking), but now it's suddenly about RAM usage which you'd need to pay up one way or another anyway if you wanted to implement features commonly expected from desktop environments these days? 🤔
Read my post again. I didn't mention bandwidth at all. I wrote bottleneck, which can also mean running out of RAM.
And which features of desktop environments would that be, that are expected?
Effects? You mean distractions.
Window content previews? Sure, let's take a whole, huge window and scale it down to a thumbnail, things will remain readable. Of course.
Client side decoration? eff those. I want my windows to have titles telling me what they are.
@datenwolf @newbyte @mirabilos @jzb Yeah, let's instead have the apps OOM once you drag their windows or view them in an expose-like arrangement 😂
OOM situations due to an overload of windows are not a problem if you do graphics the old-school way:
a single shared screen framebuffer, windows cut out areas of that framebuffer, and drawing operations undergo the pixel-ownership test.
This isn't magic or rocket science; it was all figured out in the 1980s.
Every graphics system worth its salt did it that way. And it even plays nicely with 3D acceleration.
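To make the pixel-ownership idea concrete, here is a toy sketch (not any real system's code, just the scheme described above): one shared framebuffer, each pixel owned by the topmost window covering it, and drawing operations to unowned pixels silently discarded.

```python
# Tiny 8x4 "screen": one shared framebuffer for all windows.
W, H = 8, 4
framebuffer = [[None] * W for _ in range(H)]

# Windows as (id, x, y, width, height), listed bottom to top of the Z-stack.
windows = [("A", 0, 0, 6, 3), ("B", 3, 1, 5, 3)]

# Ownership map: later (topmost) windows overwrite earlier ones.
owner = [[None] * W for _ in range(H)]
for wid, x, y, w, h in windows:
    for row in range(y, min(y + h, H)):
        for col in range(x, min(x + w, W)):
            owner[row][col] = wid

def draw_pixel(wid, col, row, value):
    """A drawing op either passes the pixel-ownership test or is discarded."""
    if 0 <= row < H and 0 <= col < W and owner[row][col] == wid:
        framebuffer[row][col] = value

draw_pixel("A", 4, 2, "a")  # A lost this pixel to B: discarded
draw_pixel("B", 4, 2, "b")  # B owns it: written
```

No per-window backing store exists anywhere; occluded regions cost nothing, regardless of how many windows are stacked.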
@datenwolf @newbyte @mirabilos @jzb Except it's not 1980 anymore. Buffers to be presented on screen come from all sorts of sources - CPUs, GPUs, VPUs, ISPs. Some can render directly to a shared framebuffer, some can't. Some could be cropped to save memory, some can't - and when they can, they'd need to reallocate.
Yes, you can design a system that will consume less resources if you make it special-purposed to the set of requirements you happen to care about. Go ahead. Wayland is not that thing.
@datenwolf @newbyte @mirabilos @jzb (that said, a non-composited shared-framebuffer system could be easily built on top of Wayland anyway)
Ok, show me: how would you pass a Wayland surface that has a clip region attached and let the client mmap it so that the clipped-out regions don't consume memory (without forcing clip-region row start and end addresses to fall on page boundaries, and without that implying narrow window clips spanning multiple rows)?
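The alignment constraint behind that parenthetical can be sketched in a few lines (assuming 4 KiB pages): memory can only be left unbacked at whole-page granularity, and row lengths generally don't align to pages.

```python
# Assumed: 4 KiB pages, 8K-wide rows in a 4-byte-per-pixel format.
PAGE = 4096
row_bytes = 7680 * 4          # 30720 bytes per row

print(row_bytes / PAGE)       # 7.5 pages per row
print(row_bytes % PAGE)       # 2048: a row edge lands mid-page
```

Since consecutive rows drift by half a page, an arbitrary clip rectangle's left and right edges almost never coincide with page boundaries, so the clipped-out pixels still occupy backed pages.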
@datenwolf @newbyte @mirabilos @jzb Buffer-passing mechanics are extensible in Wayland. Even dma-bufs are in their own extension; the only one in the core is wl_shm (which, BTW, works by giving the client a buffer to mmap...).
What's more - some toolkits, such as Qt, even have elaborate plugin support to handle custom buffer passing mechanisms that you could use.
I won't implement this for you as I'm not interested in it, but you can just sit down and do it!
So I do it… and then? Then only programs that actually know about these extensions and actually use them will give me their benefits.
Every program whose developers didn't care to go the extra mile will waste memory, simply by creating toplevel windows on screen.
Memory that's not available for doing actual work (like visualizing data).
Old-school graphics systems give the benefits to all applications, without extra effort.
@datenwolf @newbyte @mirabilos @jzb ...and outright don't work when presented with modern challenges, unless you're willing to put in the extra effort or compromise on performance as well :)
Outright don't work?
Okay, what exactly doesn't "work" with X11? And please don't list shortcomings of Xorg that could have been addressed a long time ago and are perfectly fixable within X11.
@datenwolf @newbyte @mirabilos @jzb So how are you going to offload surfaces to hardware planes while preserving the ability to save memory from unused portions of the window? How are you going to utilize the display engine's framebuffer compression to hit bandwidth targets on high-res screens?
X11 can do lots of things, but only once you make it move away from these "old-school" ways and explode the complexity.
Display engine framebuffer compression: first I'd ask myself why I'd want to apply it only to single windows rather than design for the worst-case scenario of whole-screen content updates every frame. Going for that case, the whole shared screen framebuffer goes through the compression. Also, as DarkShikari so rightfully scolded me some 16 years ago, don't even bother with explicit damage regions; it doesn't matter whether you compare pixels to pixels or clip.
@datenwolf @newbyte @mirabilos @jzb Well, maybe because the compressed buffer is produced by the GPU and the display engine knows how to decompress it on the fly while it's fully opaque to the CPU?
@datenwolf @newbyte @mirabilos @jzb Well, these things Just Work™ in several commonly used Wayland compositors and toolkits out there today.
@datenwolf @newbyte @mirabilos @jzb Pretty much all of them these days, as this is handled by dmabuf feedbacks and buffer modifiers, so it works even with split render/display pipelines or with multiple GPUs.
@newbyte @mirabilos @jzb @dos which ones? Link please.