@pavel @datenwolf @dcz @martijnbraam @NekoCWD I played with it over the last few days and I already have 526x390 30 FPS encoding with a live viewfinder, perfectly synced audio, color correction, lens shading correction, tone mapping, AWB and AE - consuming about 40% CPU. It still needs chromatic shading correction and AF, and I've started experimenting with enabling PDAF.
I can also make the sensor output 60 FPS and RAW10. Patches incoming ;) Still only 2 MIPI lanes though, so no 13MP 30FPS yet.
@pavel @datenwolf @dcz @martijnbraam @NekoCWD It's 526x390, but properly binned (each channel's sample consists of 4 raw pixels averaged), which reduces noise. The shader got heavy though - it only does ~35 FPS at this res - but there should be room for optimization. I've been more concerned with its correctness than its performance so far.
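Roughly the idea, as a minimal CPU sketch of one 2x2 same-channel averaging step (assuming an RGGB mosaic and dimensions divisible by 4; the real thing is a GPU shader and looks nothing like this):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: bin a w x h RGGB Bayer mosaic down to (w/2) x (h/2),
 * where each output sample is the average of the four nearest raw samples
 * of the same colour channel. Not the actual shader from this thread. */
static void bin2x2_bayer(const uint8_t *raw, uint8_t *out,
                         size_t w, size_t h)
{
    for (size_t y = 0; y < h / 2; y++) {
        for (size_t x = 0; x < w / 2; x++) {
            /* top-left raw sample of the same channel as output (x, y) */
            size_t rx = 4 * (x / 2) + (x % 2);
            size_t ry = 4 * (y / 2) + (y % 2);
            unsigned sum = raw[ry * w + rx]
                         + raw[ry * w + rx + 2]
                         + raw[(ry + 2) * w + rx]
                         + raw[(ry + 2) * w + rx + 2];
            out[y * (w / 2) + x] = (uint8_t)(sum / 4);
        }
    }
}
```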
Stats counting on the CPU with NEON is fast enough for full frame with some subsampling.
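The NEON part is conceptually just widening adds over a subsampled buffer - a sketch of that pattern with AArch64 intrinsics (hypothetical function, not the actual stats code):

```c
#include <arm_neon.h>
#include <stdint.h>
#include <stddef.h>

/* Sum 8-bit samples over every 4th row, 16 pixels per NEON iteration.
 * Illustrative sketch of subsampled stats gathering; AArch64 only. */
static uint64_t sum_subsampled(const uint8_t *buf, size_t w, size_t h,
                               size_t stride)
{
    uint64_t total = 0;
    for (size_t y = 0; y < h; y += 4) {          /* every 4th row */
        const uint8_t *row = buf + y * stride;
        uint32x4_t acc = vdupq_n_u32(0);
        size_t x = 0;
        for (; x + 16 <= w; x += 16) {
            uint8x16_t px = vld1q_u8(row + x);
            /* widen and accumulate: 16 x u8 -> 4 x u32 partial sums */
            acc = vpadalq_u16(acc, vpaddlq_u8(px));
        }
        total += vaddvq_u32(acc);
        for (; x < w; x++)                        /* scalar tail */
            total += row[x];
    }
    return total;
}
```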
I'm giving it some finishing touches and will then publish it, of course.
@dos @pavel @datenwolf @dcz @martijnbraam @NekoCWD I'm an absolute noob in all this, but I have a very naive question: how come older Android smartphones could do the same thing at higher resolutions on older chips? If I compare with an old Samsung Galaxy S3, it did all this very easily. Is there some secret proprietary sauce to it with specialized closed-source firmware? Or does the Librem 5 just have exotic hardware?
@lord @datenwolf @dcz @martijnbraam @NekoCWD @pavel Specialized hardware. Phones usually don't have to do image processing or video encoding on the CPU or GPU at all; they have hardware ISPs and encoders. The L5 does not.
On other hardware it's also a matter of whether there's driver and software support for it, so Android may be able to use it, but pmOS - not necessarily.
@dos @datenwolf @dcz @martijnbraam @NekoCWD @pavel Ok so it really is due to some hardware "missing" and not just some closed-source firmware in the case of L5. Good to know :-)
@pavel @datenwolf @dcz @martijnbraam @NekoCWD I'm just using waylandsink at this point, but it could be passed anywhere else. That's literally the least interesting part of the thing.
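For reference, the waylandsink end of it is nothing special - a minimal GStreamer sketch along these lines (videotestsrc standing in for the real source; not the actual pipeline):

```c
#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    /* Stand-in pipeline: any source producing video frames works here. */
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc ! video/x-raw,width=526,height=390,framerate=30/1 "
        "! videoconvert ! waylandsink", NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* run until an error or EOS */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                          GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}
```

With an appsrc in place of videotestsrc, the same kind of pipeline takes buffers from wherever you produce them.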
@pavel There are plenty of apps that embed GStreamer's output to look at, and you can even skip it completely and simply import the V4L buffer into SDL's GL context without creating your own at all. This is just gluing things together at this point.
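The import itself is the usual dmabuf dance - a rough sketch, assuming the V4L2 buffer has been exported as a dmabuf fd (VIDIOC_EXPBUF) and a single-plane format; extension checks and error handling are omitted, and this is generic EGL_EXT_image_dma_buf_import usage, not code from my tree:

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Wrap a dmabuf fd from V4L2 as a GL texture in an existing context
 * (e.g. the one SDL already created). fd, width, height, fourcc and
 * stride come from the V4L2 side. */
static GLuint import_dmabuf_texture(EGLDisplay dpy, int fd,
                                    int width, int height,
                                    int fourcc, int stride)
{
    const EGLint attrs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, fourcc,
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };

    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_tex =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    EGLImageKHR img = create_image(dpy, EGL_NO_CONTEXT,
                                   EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    image_target_tex(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)img);
    return tex;
}
```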
@pavel Doing it via GStreamer makes buffer queue management easier, but of course it can be done either way. With SDL you already get a GL context, so you just do it the way I showed you some time ago - you skip the context creation part and you're done.
@pavel @datenwolf @dcz @martijnbraam @NekoCWD I'd expect moving binning to a separate pre-pass to make it faster, we'll see.
Also, my stats are center-weighted. Millipixels annoyed me with its reluctance to overexpose the sky.
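Center-weighted just means each sample's contribution falls off with distance from the frame centre before averaging - something along these lines (the exact falloff here is made up, not what I actually use):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy center-weighted mean over a subsampled 8-bit luma buffer.
 * Illustrative only; the weighting function is arbitrary. */
static double center_weighted_mean(const uint8_t *luma, size_t w, size_t h,
                                   size_t step /* subsampling stride */)
{
    double sum = 0.0, wsum = 0.0;
    for (size_t y = 0; y < h; y += step) {
        for (size_t x = 0; x < w; x += step) {
            /* normalised distance from the centre: 0 at centre, ~1 at edges */
            double dx = (x - w / 2.0) / (w / 2.0);
            double dy = (y - h / 2.0) / (h / 2.0);
            double d2 = dx * dx + dy * dy;
            double weight = 1.0 / (1.0 + 2.0 * d2);   /* arbitrary falloff */
            sum += weight * luma[y * w + x];
            wsum += weight;
        }
    }
    return sum / wsum;
}
```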