@pavel @datenwolf @dcz @martijnbraam @NekoCWD I played with it over the last few days and I already have 526x390 30 FPS encoding with a live viewfinder, perfectly synced audio, color correction, lens shading correction, tone mapping, AWB and AE - consuming about 40% CPU. It still needs chromatic shading correction and AF, and I've started experimenting with enabling PDAF.
I can also make the sensor output 60 FPS and RAW10. Patches incoming ;) Still only 2 MIPI lanes though, so no 13MP 30FPS yet.
I think both left- and right-masked PDAF pixels are accessible in 1:2 mode with binning disabled, though I haven't done the math yet to be 100% sure. Enabling/disabling it on demand will be somewhat clunky though. I can also read calibration data from OTP, but AFAIK there are no kernel APIs to expose that to userspace :(
https://source.puri.sm/Librem5/linux/-/issues/411#note_285719
@dos Whoa 🐱! That's awesome
@pavel @datenwolf @dcz @martijnbraam @NekoCWD It's 526x390, but properly binned (each channel's sample consists of 4 raw pixels averaged), which reduces noise. The shader got heavy though; it only does ~35 FPS at this resolution, but there should be room for optimization. I've been more concerned with its correctness than its performance so far.
Stats counting on the CPU with NEON is fast enough for the full frame with some subsampling.
I'm giving it some finishing touches and will then publish it of course 😁
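For what it's worth, the binning described above (each output sample being the mean of 4 same-colour raw pixels) boils down to something like this - plain Python as a stand-in for the shader, with the layout and names being my own illustration, not the actual code:

```python
def bin_channel(raw, oy, ox):
    """Average 2x2 blocks of same-colour Bayer samples.

    raw: the full Bayer mosaic as a list of rows; (oy, ox) selects the
    channel's offset within each 2x2 Bayer tile (e.g. (0, 0) for R in
    an RGGB layout). Same-colour samples sit 2 pixels apart, so each
    output value averages 4 raw pixels, halving the channel resolution
    and reducing noise.
    """
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(oy, h - 2, 4):
        row = []
        for x in range(ox, w - 2, 4):
            row.append((raw[y][x] + raw[y][x + 2] +
                        raw[y + 2][x] + raw[y + 2][x + 2]) / 4)
        out.append(row)
    return out
```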
@pavel @datenwolf @dcz @martijnbraam @NekoCWD I'd expect moving binning to a separate pre-pass to make it faster; we'll see.
Also, my stats are center-weighted. Millipixels annoyed me with its reluctance to overexpose the sky 😄
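A center-weighted statistic of that kind could look roughly like this - illustrative Python, not the actual NEON code, and the triangular falloff is my own choice of weighting:

```python
def center_weighted_mean(luma, step=2):
    """Center-weighted average brightness with subsampling.

    luma: 2D list of luminance values; step: subsampling stride.
    Pixels near the frame center get more weight than the edges,
    so a bright sky at the top influences exposure less.
    """
    h, w = len(luma), len(luma[0])
    total = weight_sum = 0.0
    for y in range(0, h, step):
        for x in range(0, w, step):
            # triangular falloff from the center along each axis
            wy = 1.0 - abs(y - h / 2) / (h / 2)
            wx = 1.0 - abs(x - w / 2) / (w / 2)
            weight = wy * wx
            total += luma[y][x] * weight
            weight_sum += weight
    return total / weight_sum
```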
@dos @pavel @datenwolf @dcz @martijnbraam @NekoCWD I'm an absolute noob at all this, but I have a very naive question: how come older Android smartphones can do the same thing at higher resolutions on older chips? If I compare with an old Samsung Galaxy S3, it did all this very easily. Is there some secret proprietary sauce to it, with specialized closed-source firmware? Or does the Librem 5 just have exotic hardware?
@lord @datenwolf @dcz @martijnbraam @NekoCWD @pavel Specialized hardware. Phones usually don't have to do image processing or video encoding on the CPU or GPU at all; they have hardware ISPs and encoders. The L5 does not.
On other hardware there's also the matter of whether there's driver and software support for it, so Android may be able to use that, but pmOS not necessarily.
@dos @datenwolf @dcz @martijnbraam @NekoCWD @pavel OK, so it really is due to some hardware being "missing" and not just some closed-source firmware in the case of the L5. Good to know :-)
@pavel @datenwolf @dcz @martijnbraam @NekoCWD I'm just using waylandsink at this point, but it could be passed anywhere else. That's literally the least interesting part of the thing 😂
@pavel There are plenty of apps that embed GStreamer's output to look at, and you can even skip it completely and simply import the V4L buffer into SDL's GL context without creating your own at all. This is just gluing things together at this point.
@pavel Doing it via GStreamer makes buffer queue management easier, but of course it can be done either way. With SDL you get a GL context already, so you just do it the way I showed you some time ago - you skip the context creation part and you're done.
@pavel So I was just adding autofocus to my toy code and I wanted to be able to trigger it by tapping on the viewfinder. Just replaced waylandsink with gtkwaylandsink, grabbed its GtkWidget and added it to my GtkWindow and it works 😛
@pavel Displaying the viewfinder is free with (gl)waylandsink, as the dmabuf gets passed directly to the compositor in a subsurface, so the resolution doesn't matter there - it goes straight to the GPU for compositing. Resolution does matter for encoding, and that's where uncached buffers can bite you, but since my shader is currently too heavy to handle higher resolutions anyway, it's not something I concern myself with right now.
And the code is basic, it just takes time to get familiar with the problem space 😜
@pavel Outputting AYUV is trivial and makes encoding slightly faster, but then you lose the free viewfinder, as that's not a format that can be passed to the compositor.
Ideally we would output planar YUV (perhaps with multiple render targets?) as that can be sampled from with recent etnaviv: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/3418
For now I'm limiting myself to what's possible on bookworm/crimson though :)
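For reference, the RGB→YUV mapping involved in an AYUV output is just the standard BT.601 matrix. A quick sketch in Python (the real conversion would live in the fragment shader, and the coefficients here are the usual full-range approximation, not taken from the actual code):

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 RGB (0-255) to YUV (0-255, chroma biased by 128).

    Y carries luminance, U and V carry the blue and red colour
    differences; grey inputs land at U = V = 128.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, u, v
```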
@pavel @datenwolf @dcz @martijnbraam @NekoCWD You could encode to YUV444 or even RGB, but the lack of lens corrections is much more visible than chroma subsampling :)
@pavel @datenwolf @dcz @martijnbraam @NekoCWD Both, including chromatic shading. They're not that bad in still photos, but I find them very distracting in motion.
I've got a song for you, but you'll have to translate it yourself 😁 https://www.youtube.com/watch?v=FdIid5IJEds
I'm relying on kernel changes, so I need to put everything in place first. It also took some time for me to get reasonably confident that what I'm doing is correct. I'm pretty much done though, just some cleanups left to do.
BTW, recording 30 minutes with the screen off ate about 15% of the battery, so 3 hours of recording should be possible.
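The arithmetic behind that estimate, assuming roughly linear drain:

```python
# 15% of the battery consumed in 30 minutes of recording
drain_per_hour = 0.15 / 0.5            # ~0.3 of the battery per hour
runtime_hours = 1.0 / drain_per_hour   # ~3.3 hours on a full charge
```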