Posts by Andrew_xXx

    The method I posted only works for perspective correction and some simple lens correction if needed. Fisheye is more complicated, but you can try the other filters described here https://www.danielplayfaircal.…stortion-with-ffmpeg.html or here https://stackoverflow.com/ques…-ffmpeg/40659507#40659507


    If you have a known camera with known lens parameters it is much easier, but with an individual fisheye lens it is more trial and error, so I can't really provide a ready-to-go solution.

    Hmm, I think there is some misconception here: do we have the full HD frame resized inside Hyperion, or is the grabber configured to output a lower resolution? What is the default MJPEG grabber quality, and does it look good? I don't know how the grabber does it internally; also, is there any interpolation in the Hyperion decimator? I was assuming the grabber quality is quite good. Hyperion doesn't output more colors than there are LEDs, so in theory we could use very small images, but they need to be resized with some filtering.

    As I said, your own screenshots look basically the same; do you see any meaningful difference in the LED colors?
    There is also the issue of performance: I have gaming in mind, and it needs to be at least 60 fps processing.
    Why bother buying better SPI LEDs when it will run at less than 30 fps? MT optimisation works, but only if you have the cores, and not everyone does.
    It's great work nonetheless, I'm just looking for more answers.


    Is the decimation first? I don't remember now, but is the decimation resolution-independent in Hyperion, i.e. do 4K and 1080p inputs produce the same output resolution? Why would someone use 800x600 as input when you can go as low as 192x108? Then we could maintain high fps: first decimate to as low as possible and only then do the processing. That should give a flawless 60+ fps without frame skips.
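    The decimate-first idea can be sketched like this. It's a minimal sketch, and the plain block-averaging is my assumption of what a decimator "with some filtering" would do, not Hyperion's actual code:

```python
def decimate(frame, src_w, src_h, factor):
    """Downscale a flat RGB frame by averaging factor x factor pixel
    blocks, so e.g. 1920x1080 with factor=10 becomes 192x108.
    `frame` is a flat list of 8-bit R, G, B values, row-major."""
    dst_w, dst_h = src_w // factor, src_h // factor
    out = []
    for by in range(dst_h):
        for bx in range(dst_w):
            r = g = b = 0
            for y in range(by * factor, (by + 1) * factor):
                for x in range(bx * factor, (bx + 1) * factor):
                    i = (y * src_w + x) * 3
                    r += frame[i]; g += frame[i + 1]; b += frame[i + 2]
            n = factor * factor
            out += [r // n, g // n, b // n]
    return out, dst_w, dst_h

# A 4x2 frame of solid gray decimated by 2 gives a 2x1 gray frame.
frame = [128, 128, 128] * 8
small, w, h = decimate(frame, 4, 2, 2)
# (w, h) == (2, 1), small == [128, 128, 128] * 2
```

    Everything after this step (color averaging per LED, any LUT) then touches ~20k pixels instead of ~2M, which is where the 60 fps headroom would come from.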


    And why did you say that the input scale is not linear? Do the grabbers cut off the colors non-linearly? If so, how? I did read your PR about it, where you explain that chrominance (color) is well preserved and that gamma is the issue. Gamma is about shadows and highlights, or put simply about brightness and the white/black relation, so why would it change the colors, and why so much, if the colors are in the chrominance?
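    To illustrate what "not linear" means here: stored video RGB values are gamma-encoded, so arithmetic done directly on the stored values lands on the wrong brightness. A tiny sketch, using a plain 2.2 gamma as an approximation of the real transfer curves (sRGB and BT.709 differ slightly):

```python
GAMMA = 2.2  # approximate display gamma; an assumption, not the exact BT.709 curve

def to_linear(v):
    """Decode a 0..1 gamma-encoded value to linear light."""
    return v ** GAMMA

def to_encoded(v):
    """Encode linear light back into the gamma domain."""
    return v ** (1 / GAMMA)

black, white = 0.0, 1.0
naive = (black + white) / 2                                       # average of stored values
correct = to_encoded((to_linear(black) + to_linear(white)) / 2)   # average in linear light
# naive == 0.5, correct ~= 0.73: the stored scale is not linear, so
# per-channel operations shift R, G and B by different relative amounts,
# which is how a "brightness" curve ends up tinting colors.
```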


    Still, one thing to remember: the fast-hdr project works with true HDR data as input, not broken SDR.

    I reassessed this again and there is no data to build a proper LUT, at least not one that brings back proper or even close-to-real HDR-to-SDR color. A LUT is for color mapping, and that is non-linear. The basic methods with v4l2-ctl work equally well, and the LUT correction is not noticeable on the LEDs either, so there is no point in such changes and their performance cost.
    The best thing we could have is the configurable image adjuster I mentioned, with built-in v4l2-ctl options: saturation, brightness, contrast and maybe more. That's enough.
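    A sketch of what such an adjuster could do per pixel. The formulas and ranges here are my own illustration of brightness/contrast/saturation controls, not what the v4l2 drivers actually implement:

```python
def adjust(r, g, b, brightness=0, contrast=1.0, saturation=1.0):
    """Apply brightness (offset), contrast (scale around mid-gray) and
    saturation (scale around the pixel's luma) to one 8-bit RGB pixel."""
    def clamp(v):
        return max(0, min(255, int(round(v))))
    # Luma per BT.601, the usual weighting for SD video.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    out = []
    for c in (r, g, b):
        c = y + saturation * (c - y)    # pull toward / push away from gray
        c = 128 + contrast * (c - 128)  # expand or compress around mid-gray
        c = c + brightness              # plain offset
        out.append(clamp(c))
    return tuple(out)

# Fully desaturating a pure red pixel collapses it to its gray luma value.
adjust(255, 0, 0, saturation=0.0)   # -> (76, 76, 76)
```

    A quick-switch preset would then just be a stored (brightness, contrast, saturation) triple applied per frame, cheap enough even for an RPi.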


    The Philips Ambilight patents are partially expired, so I guess no one is interested in it. They (the TV producers) could also add an HDMI out with a decimated (or not) image, but they don't bother either.

    Sorry I didn't answer, I had a rough week.


    It's just that it is not an HDR-to-SDR tone mapper and it never will be; the function name is misleading. On top of that, HDR means that everyone, depending on their setup/devices, could need different adjustments, so it needs to have some options.


    I was just analysing this and comparing with your own (did you change your login?) adjusted screenshots from the topic https://hyperion-project.org/t…er-supposedly.631/page-12 and it all looks the same, just look:


    corrected, from the HDR thread: https://postimg.cc/5jgLNDJz
    corrected, this thread, with the 3D LUT: https://postimg.cc/kVpVwf2Y


    As I said, they are very close; I might even prefer the non-3D-LUT one. So what is the point of a 3D LUT if it looks basically the same? It is a very resource-intensive option that even needs a lowered fps to work in real time.
    Based on the screenshots I don't think there will be much difference in the LED colors, at least nothing noticeable, if anything at all.
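    For scale, here is roughly what applying a 3D LUT looks like per pixel. This sketch uses nearest-neighbor on a hypothetical identity LUT; real implementations do trilinear interpolation (8 table fetches plus blending for every pixel), which is exactly where the CPU cost comes from:

```python
def identity_lut(size):
    """Build an identity 3D LUT: size^3 RGB entries with values in 0..1."""
    step = 1.0 / (size - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(size)] for g in range(size)] for r in range(size)]

def apply_lut_nearest(lut, r, g, b):
    """Map one 0..1 RGB pixel through the LUT using nearest-neighbor.
    A real LUT engine would instead fetch the 8 surrounding entries and
    trilinearly interpolate them, per pixel, per frame."""
    n = len(lut) - 1
    ri, gi, bi = (int(round(c * n)) for c in (r, g, b))
    return lut[ri][gi][bi]

lut = identity_lut(17)                 # 17x17x17 = 4913 entries, a common LUT size
apply_lut_nearest(lut, 1.0, 0.5, 0.0)  # -> (1.0, 0.5, 0.0)
```

    At 1080p60 that is ~124 million lookups per second before interpolation, which is why decimating first matters so much.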


    So it would be great to research this more before releasing anything, refine it, and not call it an HDR-to-SDR tone mapper. A more interesting option would be a universal image adjuster that everyone can calibrate for their own purposes. It should be mandatory to have a quick-switch option, since we can't detect whether the input was HDR or not, so it would need to be switched easily, maybe with presets.


    I can't remember now, but does Hyperion allow saving a v4l2 device settings configuration, like the controls below that fix the image?


    Code
    /usr/bin/v4l2-ctl --set-ctrl contrast=220
    /usr/bin/v4l2-ctl --set-ctrl saturation=255
    /usr/bin/v4l2-ctl --set-ctrl brightness=100
    /usr/bin/v4l2-ctl --set-ctrl hue=2


    So we would have a switchable image adjuster that can do it all in one: set the v4l2 controls and, if that is not enough, use the 3D LUT or other software fixes.


    Also, regarding the HDR data and the 3D LUT: a 3D LUT is a non-linear color mapping, but we don't have the HDR data, and it is impossible to know which Rec. 2020 colors were transformed into the Rec. 709 space. So we can't do the non-linear mapping; it would be pure guesswork. I'm just confused how a non-linear transformation could help here.


    A helper image



    Everything between the big triangle and the small triangle is lost without information about what it was. If, say, we had a green-ish Rec. 2020 color between the upper triangles' points and it was transformed in a broken way by a Rec. 709 device, then it could have been any Rec. 2020 color between those points; there is no way to find out which one.
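    The same argument in numbers, with a deliberately simplified model: out-of-gamut colors are represented as Rec. 709 channel values outside 0..1 that the 709 device simply clips. The real gamut geometry is more involved, but the information loss works the same way:

```python
def clip709(rgb):
    """Simulate a Rec. 709 device that clips out-of-gamut channels.
    Values are RGB in 709 space; components outside 0..1 stand in for
    colors the 709 triangle cannot reach (my simplification)."""
    return tuple(max(0.0, min(1.0, c)) for c in rgb)

# Two different saturated wide-gamut greens, expressed in 709 coordinates:
a = (-0.20, 1.10, 0.05)
b = (-0.05, 1.30, 0.05)
same = clip709(a) == clip709(b) == (0.0, 1.0, 0.05)
# same is True: both land on the identical clipped 709 color, so no LUT
# can tell afterwards which original it was.
```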


    All I'm saying is that we can't use a non-linear transformation without knowing how the conversion was done. We could do it if we knew how the wrong colors are produced, but for now I don't see how this would carry the information we need.


    I was also thinking about this: what if every device transforms the colors wrongly in the same way? If we knew that way, and if there were some data that helps revert it, then we would have something to work on.


    If we only knew how the wrong colours are made. I did some tests with the ffdshow and madVR tone mappers but couldn't get definitive results.


    I see you also did some research about this here https://github.com/hyperion-project/hyperion.ng/pull/928. I didn't have time to analyse it yet, but it's very interesting.


    And yes, I thought the same way: use a 3D LUT to revert the wrong colors. But there is no data to revert from; this is what I realised, and I felt dumb :-) The information was never there, at least to my current state of knowledge.


    On the other hand, even if we had the raw HDR data, it would be too complicated to transform on an RPi. It's not only the 3D LUT; it's all the format data: Dolby, HDR10, HDR10+, the transfer curves (SMPTE, PQ), light levels, per-scene HDR changes. That's a lot to handle, so it is much better to have a device do it for us. I read some notes from the creators of HDFury describing exactly the issues I mentioned in my previous post: there is no guarantee that the same HDR data looks the same on different devices, it needs adjusting, and they really do embed virtual display parameters in the grabber to calculate the SDR output, but it is always an approximation.


    This is probably why there are only a couple of (expensive) devices that work, and I'm not even sure they support the full Dolby Vision specification.


    So I think we need to research this better and not rush it, but there is light at the end of the tunnel for sure.

    As great as the research is, I would not add this to Hyperion. It's not real HDR-to-SDR tone mapping; it's just a needlessly resource-intensive trick to adjust an SDR image that does not contain any HDR data, and the resulting images are practically the same as those posted in the 4K HDR grabber thread. Frankly, the latter are a little better. So there are no real advantages; it's just wasted CPU cycles.


    Calling this option HDR-to-SDR tone mapping is false and misleads the users; it's more like an SDR enhancer.


    It works for the fast-hdr project guy because he has direct, true HDR video input with all the HDR data present, which Hyperion doesn't have.


    The thing with HDR is that it's way more complicated than before. HDR streams carry a lot of HDR metadata, and that data needs to be there: at least the mandatory Mastering Display Color Volume metadata (RGB primaries, white point, display maximum and minimum light levels) is needed to calibrate the end TV device. There is also MaxFALL (Maximum Frame-Average Light Level) and MaxCLL (Maximum Content Light Level) metadata.
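    Collected in one place, the static HDR10 metadata looks roughly like this. The field names are my own, and the values are just typical examples (a 1000-nit mastering display with BT.2020 primaries), not taken from any real stream:

```python
from dataclasses import dataclass

@dataclass
class HDR10StaticMetadata:
    """Static HDR10 metadata: SMPTE ST 2086 mastering display info plus
    content light levels. Illustrative field names; a real stream carries
    these values in SEI messages."""
    # Mastering Display Color Volume: primaries and white point as CIE xy
    red_primary: tuple
    green_primary: tuple
    blue_primary: tuple
    white_point: tuple
    max_display_luminance: float   # cd/m^2 (nits)
    min_display_luminance: float   # cd/m^2
    # Content light levels
    max_cll: int    # Maximum Content Light Level, nits
    max_fall: int   # Maximum Frame-Average Light Level, nits

# Example values for content mastered on a 1000-nit display with
# BT.2020 primaries and a D65 white point:
meta = HDR10StaticMetadata(
    red_primary=(0.708, 0.292), green_primary=(0.170, 0.797),
    blue_primary=(0.131, 0.046), white_point=(0.3127, 0.3290),
    max_display_luminance=1000.0, min_display_luminance=0.0001,
    max_cll=1000, max_fall=400,
)
```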


    Every HDR device, like a TV, needs this data to display HDR correctly. Every TV has its own HDR screen capabilities, calibrated in the factory and hardcoded into it; it then calculates the right values for that display from the Mastering Display metadata, the other parameters in the HDR stream, and its own calibration data.


    It gets even worse: the basic HDR10 format is static metadata for the whole stream, the easiest case. HDR10+ and Dolby Vision are dynamic HDR, per scene or even per frame; that metadata is carried in SEI headers in the HDR video, and there can be a lot of it. It is very complicated to process that data properly.


    To sum it up, it's impossible for Hyperion to get HDR and do the tone mapping correctly. The only way would be a grabber that really outputs raw HDR with its metadata, which probably doesn't exist, and even then I'm not sure it would be possible because of all the parameters I described. It's a lot of work.


    Those parameters are needed by any HDR device, including grabbers, so a working HDR-to-SDR grabber would already be doing all of this, and it looks like it would need to embed its own virtual SDR display parameters to calculate the proper output.


    So those proper grabbers probably assume low MaxFALL and MaxCLL values, like most non-HDR SDR TVs. But the colours... it's always an estimate; there is no such thing as exact HDR or SDR colours. With HDR it is always interpreted from the HDR values and the display's or grabber's embedded parameters. That also means a single 3D LUT file will not work for every HDR video, since each can have different properties that need to be taken into account when tone mapping.
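    To show why one fixed LUT can't fit every stream, here is one common tone curve (extended Reinhard, an illustrative choice on my part, not what any particular grabber uses) whose shape depends directly on the content's peak brightness, i.e. on MaxCLL:

```python
def tonemap_reinhard_extended(l_nits, peak_nits, sdr_white_nits=100.0):
    """Extended Reinhard tone curve. Input is scene luminance in nits;
    output is relative SDR luminance where 1.0 = SDR reference white.
    `peak_nits` (e.g. the stream's MaxCLL) maps exactly to 1.0, so the
    whole curve changes whenever the content's peak changes."""
    l = l_nits / sdr_white_nits           # 1.0 == SDR reference white
    l_white = peak_nits / sdr_white_nits  # content peak in the same units
    return l * (1 + l / (l_white ** 2)) / (1 + l)

# The same 500-nit pixel through a 1000-nit curve vs a 4000-nit curve:
mid_1000 = tonemap_reinhard_extended(500, 1000)   # ~= 0.88
mid_4000 = tonemap_reinhard_extended(500, 4000)   # ~= 0.84
# Same pixel, different MaxCLL -> different SDR value, so one baked-in
# 3D LUT cannot be right for both streams.
```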


    So, as hard as it is, we can't win this: too much to calculate and process, and the raw HDR data is unavailable and probably always will be. It's not an HDR-to-SDR tone mapper; I would not add it, I see no point. But if you do want it, don't give it this misleading name.

    Or find a grabber that at least downscales to 1080p while keeping the HDR data; that way it would be possible to access the HDR data and do the tone map on the RPi. I have seen some grabbers that supposedly do this, but I'm not sure what the 1080p format is then, or how to grab that HDR data. Or maybe, just maybe, ezcap is doing the same but we are unable to read that HDR data in the stream, since we don't have any docs.


    I did a raw ffmpeg screenshot dump from a real 1080p HDR video to PNG; it is about 10 MB and 48-bit, so something is there, but I can't view it properly. The HDR + WCG viewer app shows it better, but still not as well as madVR live video. I'm trying to figure out what format an HDR screenshot should have and where the HDR data sits in it; that way it would be possible to test, as it would be like a frame from a grabber. The HDR metadata probably doesn't sit with the pixels; it's separate.

    Well, I know all of this, but Hyperion works well even on an RPi Zero; it all depends on the algorithm, and you can make it perform well depending on what you want to do. The decimation can be controlled, and even half of this is enough. As for fastled, you mean the 3D LUT? Well, it is probably only a one-way HDR => SDR tone mapping.


    Only the expensive devices seem to produce proper HDR-to-SDR output, and that's a shame. I'm leaning more and more towards using a camera: it will work with anything and is immune to technical changes. Soon there will be new consoles, so 4K 120 fps will become more popular, and any current grabber without at least 4K 120 fps input support will not work. And there is more: VRR, FreeSync, G-Sync. I just want something that will always work without any hassle.

    @TPmodding well I am, but it's not a Hyperion issue here. The ezcap device is just not outputting the right colors and nothing can fix that; we can only try to tweak the wrong colors in the image, but that's it. It's impossible to revert the broken HDR-to-SDR conversion. If they at least output 1080p with HDR color, then something could be done with it; everything else is just a workaround.
    I contacted ezcap again; we'll see if they answer something else now.

    Hyperion doesn't use the full-resolution image, and they state it's a low CPU load. I'm not an expert, but I think it should somehow be possible, maybe with ffmpeg; I just need to find the right method. I know it's there, maybe using a LUT... a simple color dodge in Photoshop gives a similar result, so it can be done.

    OK, that Amazon seller answered, and his answer is completely useless. Based on it I don't think his English is that good, and I certainly have no intention of continuing the discussion with him; it would be pointless.
    On the topic of correcting the colors: I read some more and did some initial tests. It is probably impossible to correct the colors in any way other than a 3D LUT. There is no magic, and ffmpeg can't be forced to do impossible things, at least on images; I still haven't done any video tests.

    Well I know it's a stretch, I'm just trying to find some workaround; if it works, then I can think about what's next and how to make it user-friendly. In the meantime I will read more about how HDR on SDR displays works. I did ask the seller from that Amazon link for details; we'll see if he answers.
    And no one can fix the issues with ezcap but the company. There is a firmware, but they need to update it; I also searched for a custom firmware but found nothing.

    There is no point in knowing the base input, as we are working only on the output data; besides, the input can change. Only the raw 1080p output data matters, so that is the base material for me. VLC can even save the raw stream. I don't care much about the input, as I will experiment with the formats either way, and it won't work out of the box for sure; that's what the experiments are for. The SDR output is bugged one way or another, so I just don't trust anything but the output data itself.
    As for the ezcap version, just a typo. But the link to the EZ269 offer I posted on page 10 no longer points to the right version on Amazon, which is odd; I will try to ask via your link.
    Also, not sure what setup you have, but if it is an RPi with a desktop GUI, then VLC works there too.

    Will check it, but the base stream is just what you get at the 1080p output, at least the base we can get; the only question is which software and format you use to capture and save it.
    In VLC, when previewing the capture, you can take a screenshot or record it and even check the media codec information. You can even record in raw format.


    As for the Amazon route, there is an angle there, but then I can't use the advantage of owning the device and claiming it doesn't work to get more info. And now I see it is no longer on Amazon; they replaced the offer, it's not the ezcap 269 any more.

    Well, that's the part where I have to experiment, as I don't know what will come up, but it is MJPEG frames, so we can even experiment on a single frame from the 1080p output, though a video would be better. I once did some tests with a bleak SDR frame screenshot from HDR material, and even then I was able to correct the colors, even though they were clearly not there (visually). It's just that I don't trust ezcap on the parameters; I would trust only a raw source capture, be it screenshots or video.


    I just don't know exactly how the HDR/SDR display and conversion work and what data is there. But there are players that do not support HDR and yet still show you the video, just with wrong colors. Now, how is that possible? They somehow convert those HDR colors. Maybe the colors are always stored as HDR values (10 bits?), and if your player doesn't understand this it just displays wrong colors, but the data could still be there.
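    That is in fact roughly what happens: HDR10 samples are 10-bit values encoded with the PQ curve (SMPTE ST 2084), so the data is all there, but a player that treats them as ordinary gamma video decodes them wrongly. The PQ decode itself is simple:

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(code, bits=10):
    """Decode a PQ-encoded code value (e.g. a 10-bit sample) to absolute
    luminance in nits, per the ST 2084 EOTF."""
    e = code / (2 ** bits - 1)   # normalize the code value to 0..1
    p = e ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# The full 10-bit code 1023 decodes to the PQ peak of 10000 nits, while a
# "75%" code like 767 is only ~1000 nits. An SDR player that treats these
# codes as plain gamma video therefore gets brightness and colors badly wrong.
```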


    Anyway, I'm not in a hurry; it's just an idea.


    It's clear that the ezcap device must have some issues, but maybe there is a chance of a workaround. When they said the 1080p output is SDR, I replied that it has wrong colors and asked when they plan to fix it, but no answer so far :)


    Amazon would be a nice try, but I'm not sure I can ask them anything specific if I don't own the device.

    @rasp
    Hmm, I have found one more possible solution that I have wanted to test for some time now: use ffmpeg to do an HDR-to-SDR tonemap and then feed it in the same way as a camera, through a virtual device.
    The basic command to test this is:

    Code
    ffplay.exe -loop 0 -y 780 -vf "zscale=t=linear,tonemap=hable,zscale=t=bt709,format=yuv420p" "Real4Colors.mkv"


    But I have limited test files; Real4Colors.mkv is an HDR BT.2020 file from YT. The command did successfully do an HDR-to-SDR conversion, but it was not ideal.


    The support from ezcap only said that the 1080p output is SDR, but I don't trust them much, so maybe there is a way to force ffmpeg to treat it as HDR and convert those bad colors into good SDR ones.


    It probably won't be that easy, and maybe some additional options would be needed, but I would need a direct recording of the 1080p video output from that ezcap device with a 4K HDR input to even test it. It could be the Sony HDR demo.

    Well, I checked this a while ago and yes, Hyperion NG is not using multiple cores; on the other hand it wasn't that needed. Either way my goal is 60 fps performance and I will use an RPi 4; if things don't go my way I will try to build my own ambilight with .NET Core.