Posts by Andrew_xXx

    Ah, so sad :(
    I had hope, but if the VLC preview and everything else is correct, then the issue is in the internal grabber itself. That makes it false advertising though; who would need a recording or stream with broken colors? It just doesn't make sense.
    I did see they have some firmware updates, so maybe they can fix it? I will try to contact them and ask exactly what parameters the 1080p output should have, and maybe how to fix some things.
    I'm just wondering if the color data can still be used but just can't be displayed, or if the colors are beyond repair. Maybe we would need an additional HDR-to-SDR mapping, but I would need a direct recording from that device to test things.
    Also, thanks for confirming that the full resolution won't work on USB 2.0.

    @rasp Ahh, my fault, but OK, it can still be tested, just don't test it on Hyperion or the RPi.
    I'm looking for a way to determine where the issue is, as I can't verify whether it is in Hyperion, the RPi, or the device itself.


    It doesn't record anything on its own; you need to record it yourself with some software.
    So we need a way to test it without the RPi or Hyperion; that's what I'm trying to check.


    Maybe you already checked it, I'm asking just to be sure.


    The first method is to preview the 1080p stream using VLC or other capture software with HDR support (OBS etc.) and look at the colors and the parameters the app reports; MPC-HC can open devices too. But this will sometimes not work if the PC screen is not an HDR screen; it depends on whether the software has an HDR-to-SDR mapping built in. I have also heard that Linux has some issues with HDR, so Windows is a better bet for this test. You can always connect the HDR TV to the PC.
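    On Windows you can also open the device straight from the command line, something like this (just a sketch; the name after dshow-vdev is a placeholder for whatever your grabber is called in the device list):
    vlc dshow:// :dshow-vdev="USB Video Device"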


    The second and better method is to record a short sample video of the capture, a direct one without any changes, so it can be examined with other media software. Just save it exactly as it was captured; then you could even upload it so others can check its parameters too. (OBS probably can't record in HDR.)


    Those tests just need some software tinkering.


    EDIT
    It turns out ffmpeg can also record and preview in HDR, and it has many options to tinker with.
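    For example, something like this on Windows should give a direct recording plus a preview (just a sketch; the name after video= is a placeholder, the first command lists the real ones):
    ffmpeg -list_devices true -f dshow -i dummy
    ffmpeg -f dshow -i video="USB Video Device" -t 10 -c:v copy sample.mkv
    ffplay sample.mkv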

    @rasp Thank you very much.
    Can you also check whether the ezcap 1080p output, fed the same video as input, also has washed-out colors on the TV?
    There is a tiny chance it doesn't strip HDR and just downsamples to 1080p HDR; an HDR TV should be able to show those colors correctly, and if that's the case, it would mean the device works but the RPi with Hyperion has an HDR issue.
    I mean, why would anyone want a 1080p recording function with washed-out colors? Something is not right. And they advertise it exactly for that: recording 1080p without issues.

    @rasp Nice, so this is a viable device.
    So the HDR strip doesn't work; can you provide comparison screenshots?
    Maybe it is not stripping HDR, but the rest of the chain doesn't understand it and we get bleak colors. Can you do a test: do everything as before, run an HDR test video, but connect the 1080p output to an HDR TV and check the colors.

    Well, if two RPis need to be used with a wireless connection for better camera positioning, then the ffmpeg method would need tweaking; my initial guide is for one RPi that does it all. Just as Flovie said, streaming the full camera video over wireless may not be a good idea, as it introduces even more latency.


    But looking at the video, the latency seems fine.


    If using ffmpeg, it can be tweaked to be faster: first do a crop, then scale the image down to half resolution or even less, and only then correct the perspective and send the result to the second RPi; this would be faster.
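    A rough sketch of that chain (the crop/scale numbers, the perspective values, and the address of the second RPi are all placeholders; remember the perspective coordinates must be measured on the already cropped and scaled image):
    ffmpeg -re -i /dev/video0 -vf "crop=1200:700:380:120,scale=iw/2:ih/2,perspective=10:12:580:8:14:330:560:340" -f mpegts udp://192.168.0.20:1234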


    If the latency is really bad, the best option would be to send only the parameters computed after Hyperion processing and not the image itself, but I'm not sure Hyperion can work like that.


    But it does support multiple Hyperion instances and forwarding data to other instances using a JSON/Protobuf server/client.
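    If I remember the classic Hyperion config right, the forwarder section of hyperion.config.json looks roughly like this (a sketch from memory; the IP is the second instance, and the ports must match its JSON/proto servers):
    "forwarder" :
    {
        "json" : [ "192.168.0.20:19444" ],
        "proto" : [ "192.168.0.20:19445" ]
    }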


    So the RPi connected to the camera would also have Hyperion installed, but without LEDs: first the perspective correction, then it would just calculate everything and forward it to another Hyperion instance over wireless, and that second instance would control the LEDs.


    There isn't much detailed documentation, but I can see in the source that it sends image data and more, like some color data; I'm just not sure whether the image is the full one or already processed.


    Well, I was just looking for a simple way; configuring two devices needs additional testing, especially for latency.

    If your USB camera has high latency due to its hardware, then nothing will fix that, except maybe changing some of its settings; some cameras work faster at lower fps or resolution, but it's just a bet. First of all, I would check the camera latency on a PC just to be sure it's not RPi-related.
    Adding anything else will never make it faster; it will always take more time to capture the image and send it to another device than to capture it on the same device where Hyperion is installed.
    So if you absolutely need two devices for any reason other than trying to minimize latency over a wireless connection, then go for it; if not, then maybe try just one RPi 4.
    If the Pi camera is faster, then OK, why not use it; I'm just wondering why two devices.
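    A quick way to eyeball the camera latency on a Linux PC is to play the stream with buffering disabled and wave a hand in front of the lens (just a sketch):
    ffplay -fflags nobuffer -f v4l2 /dev/video0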

    Well, I did explain everything in the post above yours, but it is written with one RPi and a normal V4L2 USB camera in mind, not the Pi one; the methods are all the same though. You will need something like ffmpeg to at least crop the image and correct the perspective. With two RPis there could be a latency issue; with one, it should probably be the more powerful kind, a 3 or 4, but I currently have no RPi, so I can't test the performance.

    For those who need this


    Perspective correction using ffmpeg for Hyperion


    Do note that this will not work for fisheye or barrel lenses; you need a normal flat camera. If the lens is only slightly barrel, then it might work with a little crop. Also note that I currently do not have an RPi or a camera; everything was tested on a virtual machine with Raspberry Pi Desktop, and some things with ffmpeg on Windows and virtual cameras.


    It uses an additional virtual camera on the RPi that Hyperion uses; ffmpeg grabs the real camera stream, corrects the perspective, and sends the result to that virtual camera.


    The steps are


    1. Install the virtual camera software: v4l2loopback-dkms
    2. Then run sudo modprobe v4l2loopback; this creates an additional virtual camera. If your real camera is video0, the virtual one will be video1 (see the note after these steps if it isn't)
    3. Configure Hyperion to use video1 as the source
    4. Grab the real camera stream and use ffmpeg to correct the perspective and send it to the virtual camera; the command is
    ffmpeg -re -i /dev/video0 -vf "perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1
    5. That's it; after running this command, the virtual camera will stream a perfectly flat rectangular image for Hyperion to use, as long as the command keeps running
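
    If the virtual camera does not show up as video1, the module can pin the device number and give it a name; these options are from the v4l2loopback README:
    sudo modprobe v4l2loopback video_nr=1 card_label="hyperion"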


    Remember this is just a one-time configuration; those commands will not run automatically. If you want it to work every time the RPi boots up, you need to run commands 2 and 4 from a startup script.
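    For example, on Raspbian you could add these two lines to /etc/rc.local before the exit 0 line (just a sketch; a systemd unit is the cleaner alternative):
    modprobe v4l2loopback
    ffmpeg -re -i /dev/video0 -vf "perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1 &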


    Additional information


    How do you know what values to put in the perspective filter?


    - You can get the values using any image software that shows you the exact pixel position in the image, like XnView. Put your camera in the position it will always be used in, take a screenshot of the camera image, and read off the pixel locations of the corners of your TV. The order in the perspective filter is: top left, top right, bottom left, bottom right, so 8 numbers in total. If you change your camera position, you will need to repeat the procedure.


    What if my TV is just a small portion of the camera image?


    - If there are large portions of the image without the TV, you can cut the borders using crop before perspective, but then you need to take the screenshot with the crop alone applied and get the pixel positions for perspective from that:
    ffmpeg -re -i /dev/video0 -vf "crop=w=100:h=100:x=0:y=0,perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1


    What if I have a slight camera distortion, positive or negative?


    - If your camera has a slight distortion, you can try to remove it with the lenscorrection filter; the most popular type of distortion is barrel.



    The command is
    ffmpeg -re -i /dev/video0 -vf "lenscorrection=k1=-0.0:k2=0.0,perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1

    You need to find the k1 and k2 values by trial and error; they are in the -1 to 1 range. Lens correction won't work for 180-degree fisheye lenses; at least in my tests I couldn't find values where the image is fine. In some instances it might make things a little better, but probably still unusable.



    I have put up some example media if you want to just test it; it consists of a short sample TV-angle video and a screenshot from it:
    https://www120.zippyshare.com/v/oEKEnVCm/file.html


    The values used in the perspective filter of the ffmpeg commands in the steps above are for this sample video.
    You can quickly test values yourself, on this or your own video, with the command
    ffplay "sample tv angle loop.mp4" -loop 0 -y 980 -vf "perspective=382:127:1563:91:387:761:1495:986"
    For your own video you just need to change the perspective values and/or add a crop if needed.

    From my tests, I'm afraid that OpenCV fisheye correction could be too resource-intensive for the RPi. Also, with OpenCV you need a camera setup procedure that involves calculating the camera and distortion matrices through a calibration that isn't easy to do; it's not possible to guess the matrix values or pick universal ones. ffmpeg has only 2 parameters, so it can even be done by trial and error; it is much simpler.
    Also, not all fisheye lenses are the same; not all are ideal, high-quality 180-degree ones.

    In the current state of Hyperion, the easiest way of using a 180-degree fisheye image would be to capture the camera image, de-fisheye it with your own software, and then send it to Hyperion.
    As for the setup, the easiest would be to have a virtual camera endpoint you can send the de-fisheyed image to for Hyperion to pick up, while the same RPi captures the fisheye image from the real camera. That way everything is done on one RPi; it has two camera inputs, one real and one virtual, and it doesn't need any hack in Hyperion.
    And since OpenCV seems to be the easiest way to de-fisheye, maybe use Python with OpenCV?
    It goes like this: the real camera is video0, the virtual camera is video1, a Python OpenCV script captures from video0, de-fisheyes, and sends to video1, and video1 is configured normally in Hyperion as the capture device; no need to use the proto server or anything else.
    The only point of using the proto server and the like is when you want to send the image from a different device, but I assume a single-RPi solution is preferred if it works.
    The only harder part is putting that Python OpenCV script together.
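    A minimal sketch of what that script could look like; everything in it is an assumption, especially K and D, which have to come from a real calibration (the frames are handed to the virtual camera by piping raw video through ffmpeg):
    import subprocess
    import cv2
    import numpy as np

    W, H = 1280, 720
    # placeholder camera matrix and fisheye distortion coefficients,
    # normally obtained from a one-time chessboard calibration
    K = np.array([[600.0, 0.0, W / 2], [0.0, 600.0, H / 2], [0.0, 0.0, 1.0]])
    D = np.array([0.1, 0.0, 0.0, 0.0])
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (W, H), cv2.CV_16SC2)

    cap = cv2.VideoCapture('/dev/video0')  # real fisheye camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, W)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, H)

    # pipe raw BGR frames into the v4l2loopback device through ffmpeg
    ff = subprocess.Popen(
        ['ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-s', '%dx%d' % (W, H),
         '-i', '-', '-f', 'v4l2', '/dev/video1'],
        stdin=subprocess.PIPE)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # undo the fisheye distortion and push the flat frame to the virtual camera
        flat = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
        ff.stdin.write(flat.tobytes())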

    @andrewj88
    Well, I'm a programmer and I sometimes work with Linux, but I do not know all the available commands for Hyperion and V4L2 exactly, so I look them up on forums or GitHub when I need to.



    But look at that: it seems Hyperion v1 also supports MJPEG and even more; someone added an OpenCV grabber that can take most formats. I wasn't aware of this, and I do not know if it works out of the box or whether you need to compile and install it yourself, but you can try and check if you didn't install NG already; just add the entries to the config according to that link.


    But I think it's not in there, as the install procedure downloads the release from 25.08 and this was merged into master on 30.09; maybe Panther, the dev, can help you more.


    Still, if you have the GUI, VLC is the simplest way to check that the device actually sends some images at least, but I think the grabber just works fine.

    Well, it looks good; it is working with the system. It's under /dev/video0 and even /dev/video1, not sure why two, but OK.
    Looks like the grabber is fine, but I can see you are not using Hyperion NG but the original, and to my knowledge only Hyperion NG supports MJPEG encoding.
    That's not the first HDMI-to-USB3 grabber I've seen that uses MJPEG; it is getting more and more popular among modern HDMI-to-USB capture devices.
    Well, if it's not an RPi 4, then it would probably not work either way, but you could try to install Hyperion NG just to check the diagnostics, take a screenshot, etc.
    Or, if you have a system with a GUI, you can easily run VLC directly on the Pi and open the device as a capture card; VLC should have no issues supporting MJPEG, showing you an image from the device, and letting you easily change the configuration to at least check whether it can work at a lower resolution.
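    For example (assuming the default device node):
    vlc v4l2:///dev/video0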


    There was a similar issue on the forum with ezcap and MJPEG: https://hyperion-project.org/threads/hdmi-grabber.3302/


    You can just tinker here and there if you want and report what you find.

    I'm not sure if the lack of USB 3.0 would cause such an issue, at least at the basic connection level; it may not work for capturing, as the bandwidth is too small, but it should work for diagnostics.


    First of all, check if it is properly visible to the system, not Hyperion:
    v4l2-ctl --list-devices
    And then check the formats:
    v4l2-ctl --list-formats-ext


    Then maybe try hyperion-v4l2 --screenshot


    The default v4l2 device should be at /dev/video0


    Post all the log results here.

    Well, I try to eliminate any analog links down the pipe and keep the number of devices low, so it's not worth going HDMI-to-AV anymore; that quickly gets too complicated, and those analog AV boxes will die out eventually, so an all-digital setup is IMO much better. There are also a lot of those 4K HDMI matrixes from China with HDR support and 1080p output, but they need checking too; digital is getting cheaper and cheaper.

    No one will guarantee that, but this is the same device.

    And on the producer's website:
    http://www.ezcap.com/index.php…ap269gamecapturelive.html
    You can download the manual there
    It is a UVC USB driverless device with Linux support, and the Amazon page says V4L2 support, so IT SHOULD WORK.


    I could not find any confirmation of whether it strips the HDR data from the 1080p capture, but it is supposed to work with streaming services like Twitch, and there isn't any such service with HDR support, so it probably strips the HDR data, which would mean no color issues.