Hyperion source from cam in front of TV - prepare input region

  • Flovie, I was talking to a coworker about a similar concept on the way back from a tech conference this past Friday. I agree with most of your pros/cons.


    Since I'm not familiar with the "proto" approach, I think I'm going to explore this V4L2 virtual camera concept in combination with the OpenCV fisheye undistort. I'm completely unsure of the performance impact on the video stream. I'll play a bit with it, but I haven't purchased a true fisheye lens camera yet.


    Regarding "proto", are you talking about protocol buffer? Can you provide a hyperlink please?

  • I explored the V4L2 virtual camera concept in combination with the OpenCV fisheye undistort options. While I'm not 100% certain, they both look to be written more for a still image scenario. Digging a bit more, I ran into GStreamer, which seems to have some promise. It does look like it exposes a dewarp function based on OpenCV. I've run out of patience for today, and I'm not convinced I'll spend more time digging into this.


    Interesting:


    Those install instructions didn't work for me, and I gave up. The raspivid command looks like it's piping video through gst-launch-1.0. That gst-launch smells like it has a way to construct a pipeline... maybe that dewarp element can be used in some obtuse gst-launch command to construct a "pipeline" that surfaces a dewarped video stream originating from the camera, without writing a special GStreamer plugin or any code. I'm only 5% certain of that statement.


    Other examples:

  • I am not sure if GStreamer is even required, because ultimately you only need to calculate a dewarped image and send it via protobuffer to the Hyperion instance, so you don't need to transform the image back into a virtual camera. Here is an example from the Hyperion wiki of how to work with the protobuffer:


    https://hyperion-project.org/w…uffer-Java-client-example


    However, it is based on Java. A few years ago, I worked on a music visualizer with Hyperion. It was also written in Java and used the protobuffer to control the ambilight. It is still working pretty well. I will consider publishing the code; maybe it helps with understanding the protobuffer. A rough Python sketch of the same idea follows below.
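
    To sketch the same idea in Python (untested; the message and field names are only my recollection of Hyperion's message.proto, the module name message_pb2 assumes that file was compiled with protoc, and the 4-byte length prefix and port 19445 are how I remember the proto server framing - please verify all of this against the wiki example above):

    import socket
    import struct

    import message_pb2  # assumed: generated via "protoc --python_out=. message.proto"

    def send_image(sock, rgb_bytes, width, height, priority=50):
        # Build an IMAGE request; the extension-based layout mirrors the Java example.
        request = message_pb2.HyperionRequest()
        request.command = message_pb2.HyperionRequest.IMAGE
        image = request.Extensions[message_pb2.ImageRequest.imageRequest]
        image.imagewidth = width
        image.imageheight = height
        image.imagedata = rgb_bytes                       # raw RGB888, width * height * 3 bytes
        image.priority = priority
        payload = request.SerializeToString()
        sock.sendall(struct.pack(">I", len(payload)) + payload)   # assumed length-prefixed framing

    sock = socket.create_connection(("127.0.0.1", 19445))         # assumed proto server port
    send_image(sock, b"\x00" * (64 * 36 * 3), 64, 36)             # dummy all-black 64x36 frame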

  • Yeah, that example is pretty simplistic in that it doesn't really process a "real" image or video stream. As I don't fully understand the Hyperion API, the documentation and source code aren't exactly clear on who is responsible for obtaining an image for processing. I'd have no idea how to handle the dewarping math and would have to rely on a library such as OpenCV. This StackOverflow post seems to delve into some of this, and into the issues with dewarping a true 180 degree fisheye into a usable image. This was fun to look into, but I'm running out of motivation due to the many potential roadblocks.

  • In the current state of Hyperion, the easiest way of using a 180 degree fisheye image would be to capture the camera image, de-fisheye it with your own software, and then send it to Hyperion.
    Setup-wise, the easiest would be a virtual camera endpoint: you send the de-fisheyed image to it for Hyperion to pick up, while the same RPi captures the fisheye image from the real camera. This way everything is done on the same RPi, it has two camera inputs (one real, one virtual), and it doesn't need any hack in Hyperion.
    And since OpenCV seems to be the easiest way to de-fisheye, maybe use Python with OpenCV?
    It goes like this: the real camera is video0, the virtual camera is video1, a Python OpenCV script captures from video0, de-fisheyes, and sends to video1, and video1 is configured normally in Hyperion as the capture device. No need to use the proto server or anything else.
    The only point of using the proto server (or similar) is when you want to send the image from a different device, but I assume it is preferred if it all works on a single RPi.
    The only harder part is putting that Python OpenCV script together; a rough sketch follows below.
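
    A very rough sketch of what that script could look like (untested; it assumes the camera matrix K and distortion coefficients D are already known from a one-time cv2.fisheye calibration - the values below are placeholders - and it uses the third-party pyvirtualcam package as one possible way to push frames into the v4l2loopback device):

    import cv2
    import numpy as np
    import pyvirtualcam  # assumed: third-party package that can feed a v4l2loopback device

    # K and D must come from a real fisheye calibration; these are placeholders only.
    K = np.array([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]])
    D = np.array([0.1, 0.01, 0.0, 0.0])

    width, height = 640, 480
    cap = cv2.VideoCapture(0)                       # /dev/video0, the real fisheye camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

    # Precompute the undistortion maps once; per-frame remapping is then cheap.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (width, height), cv2.CV_16SC2)

    with pyvirtualcam.Camera(width=width, height=height, fps=25,
                             device="/dev/video1") as vcam:       # the v4l2loopback device
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            flat = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
            vcam.send(cv2.cvtColor(flat, cv2.COLOR_BGR2RGB))      # pyvirtualcam expects RGB
            vcam.sleep_until_next_frame()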

  • I have found an even easier way for fisheye 180 degree cams to work with hyperion, u can use single ffmpeg command to grab the camera image, correct the lens distortion and send it to a virtual camera. Still testing, not ideal, but it kinda works, will post the commands later.

  • This sounds like a good idea, especially when you are on the same device. I also made some progress with OpenCV and the protobuffer on a remote device for capturing the images. Since yesterday I have a working proof of concept, however, with a noticeable latency. I need to add some threading (a generic sketch of the idea is below) and will report back later with the source code.
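
    The usual shape of that threading trick looks roughly like this (a generic sketch, not the actual script): grab frames in a background thread and always process only the newest one, so slow dewarping/sending drops stale frames instead of queueing them up.

    import threading
    import cv2

    class LatestFrameGrabber:
        """Reads frames in a background thread and keeps only the most recent one."""

        def __init__(self, device=0):
            self.cap = cv2.VideoCapture(device)
            self.lock = threading.Lock()
            self.frame = None
            self.running = True
            threading.Thread(target=self._loop, daemon=True).start()

        def _loop(self):
            while self.running:
                ok, frame = self.cap.read()
                if ok:
                    with self.lock:
                        self.frame = frame   # older, unprocessed frames are simply dropped

        def latest(self):
            with self.lock:
                return None if self.frame is None else self.frame.copy()

        def stop(self):
            self.running = False
            self.cap.release()

    grabber = LatestFrameGrabber(0)
    # main loop: frame = grabber.latest(); if frame is not None: dewarp and send it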

  • I'm just afraid, from my tests, that OpenCV fisheye correction could be too resource intensive for the RPi. Also, with OpenCV you need a camera setup procedure that involves calculating the camera and distortion matrices with a calibration procedure that isn't easy to do; it's not possible to guess the matrix values or pick universal ones. ffmpeg, on the other hand, has only 2 parameters, so it can even be done by trial and error, which is much simpler.
    Also, not all fisheye lenses are the same; not all are ideal, high-quality 180 degree lenses.

  • Totally agree. I only have a perspective distortion, so the matrix is simpler and easy to handle in OpenCV. Nevertheless, I will also take a look at the ffmpeg approach because it appears very convenient and should also support IP cams, which would fulfill my requirement of an external device for capturing. Thank you for pointing in this direction.

  • It also seems that ffmpeg has more advanced methods to undistort fisheye than the basic lens correction filter; there is also a remap filter and a new v360 filter. I could not test either for now, but v360 looks easy and promising.

  • For those that need this


    Perspective correction using ffmpeg for hyperion


    Do note that this will not work for fisheye or strongly barrel-distorted lenses; you need a normal flat camera. If the lens has only a slight barrel distortion, it may work with a little crop. Also note that I currently do not have an RPi or a camera; everything was tested on a virtual machine with Raspberry Pi Desktop, and some things with ffmpeg on Windows and virtual cameras.


    It uses an additional virtual camera on the RPi that Hyperion reads from: ffmpeg grabs the real camera stream, corrects the perspective, and sends it to that virtual camera.


    Steps are


    1. Install the virtual camera software: v4l2loopback-dkms
    2. Run sudo modprobe v4l2loopback; this creates an additional virtual camera. If your real camera is video0, the virtual one will be video1.
    3. Configure Hyperion to use video1 as the source.
    4. Grab the real camera stream with ffmpeg, correct the perspective, and send it to the virtual camera. The command is:
    ffmpeg -re -i /dev/video0 -vf "perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1
    5. That's it. After running this command, the virtual camera will stream a flat, rectangular image for Hyperion to use, for as long as the command keeps running.


    Remember this is just a one-time configuration; those commands will not run automatically. If you want it to work every time the RPi boots up, you need to run commands 2 and 4 from a startup script.
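
    If you want to check that the virtual camera is really delivering frames before pointing Hyperion at it, a quick test from Python (assuming python3-opencv is installed; device index 1 should correspond to /dev/video1) could be:

    import cv2

    cap = cv2.VideoCapture(1)        # /dev/video1, the v4l2loopback device
    ok, frame = cap.read()           # only succeeds while the ffmpeg command is running
    print("got frame:", ok, None if frame is None else frame.shape)
    cap.release()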


    Additional information


    How to know what values to put in the perspective filter?


    - You can get the values using any image software that shows you the exact pixel location in the image, such as XnView. Put your camera in the position it will always be used in, take a camera screenshot, and read off the pixel locations of the corners of your TV. The order in the perspective filter is: top left, top right, bottom left, bottom right, so there are 8 numbers in total. If you change your camera position, you will need to repeat the procedure.
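
    If you don't have such image software at hand, a small Python/OpenCV helper can do the same job (a sketch only; it assumes your camera screenshot is saved as screenshot.png and that you click the corners in the same order the filter expects):

    import cv2
    import numpy as np

    img = cv2.imread("screenshot.png")   # assumed name of your camera screenshot
    points = []                          # click order: top left, top right, bottom left, bottom right

    def on_click(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN and len(points) < 4:
            points.append((x, y))
            print("corner %d: %d:%d" % (len(points), x, y))

    cv2.namedWindow("screenshot")
    cv2.setMouseCallback("screenshot", on_click)
    while len(points) < 4:
        cv2.imshow("screenshot", img)
        if cv2.waitKey(20) == 27:        # Esc aborts
            break

    if len(points) == 4:
        # Preview the correction; same corner order as ffmpeg's perspective filter.
        w, h = 1280, 720                 # any output size you like
        m = cv2.getPerspectiveTransform(
            np.float32(points), np.float32([(0, 0), (w, 0), (0, h), (w, h)]))
        cv2.imshow("corrected preview", cv2.warpPerspective(img, m, (w, h)))
        cv2.waitKey(0)
    cv2.destroyAllWindows()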


    What if my TV is just a small portion of the camera image?


    - If there are large portions of the image without the TV, you can cut off the borders using crop before applying perspective, but then you need to take a screenshot with crop alone and get the pixel positions for perspective from it: ffmpeg -re -i /dev/video0 -vf "crop=w=100:h=100:x=0:y=0,perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1


    What if I have a slight camera distortion, positive or negative?


    - If your camera has a slight distortion, you can try to remove it with the lenscorrection filter; the most common type of distortion is barrel.



    The command is
    ffmpeg -re -i /dev/video0 -vf "lenscorrection=k1=-0.0:k2=0.0,perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1

    You need to find the k1 and k2 values by trial and error; they are in the -1..1 range. Lens correction won't work for 180 degree fisheye lenses; at least in my tests I couldn't find values where the image is fine, though in some instances it might make things a little better, but probably still unusable.



    I have put up some example media if you just want to test this; it consists of a short sample TV-angle video and a screenshot from it:
    https://www120.zippyshare.com/v/oEKEnVCm/file.html


    The values put into the ffmpeg perspective filter in the steps above are for this sample video.
    You can quickly test values yourself, or with your own video, with the command:
    ffplay "sample tv angle loop.mp4" -loop 0 -y 980 -vf "perspective=382:127:1563:91:387:761:1495:986"
    For your own video you just need to change the perspective values and/or add a crop if needed.

  • Hi all


    Thank you to everyone for contributing towards this project. I stumbled across Ambilight and Hyperion a few weeks ago and wanted to implement it on my 55" 4K TV. However, I like my TV's operating system and want to use the built-in apps, so I went for the camera solution. Here is a video of my current setup so far and what I am running:


    [Embedded video: www.youtube.com]


    • Raspberry Pi Zero W
    • Arduino Uno R3
    • WS2812B 5m LED strip
    • 5V 10A power supply
    • Philips SPZ2000 USB camera
    • Hyperion NG


    This is just the beginning and still a work in progress; there are hopefully many improvements to come.


    Things to do:


    • Obtain a Raspberry Pi Zero W + Pi camera module
    • Create a video stream from the camera module
    • Send the video stream to another Raspberry Pi running Hyperion NG
    • Add the video stream as a virtual webcam for modifying settings
    • Add the virtual device to Hyperion NG as an input device
    • Write the ultimate guide to use Hyperion NG via webcam


    If anyone can contribute and assist me, I would be grateful.

  • Well, I did explain everything in the post above yours, but it is written with one RPi in mind and a normal V4L2 USB camera, not the Pi one. The methods are all the same, though: you will need something like ffmpeg to at least crop the image and correct the perspective. If it's two RPis, there could be a latency issue; if it's one, then it probably needs to be the more powerful one, a 3 or 4, but currently I have no RPi so I can't test the performance.

  • Thanks for your response. Fixing the perspective is not the issue here; the issue is cutting down the latency of using a USB webcam. Mine is currently around 1000 ms, which means the LEDs are always slightly behind. Also, the latency gets worse with longer USB extension cables. The purpose of adding another Raspberry Pi to the setup is to make the latency as low as possible and to send the stream over the wireless network instead of a cable. From my research I have found that Raspberry Pi camera modules have significantly less delay (<100 ms) compared to USB. I have both a Pi 4 and a Pi Zero W for testing if someone would like to work with me.

  • If your USB camera has high latency due to its hardware, then nothing will fix that, except maybe changing some of its settings; some cameras work faster at lower fps or resolution, but it's a gamble. First of all, I would check the camera latency on a PC just to be sure it's not RPi related.
    Adding anything else will never make it faster; it will always take more time to capture the image and send it to another device than to capture it on the same device where Hyperion is installed.
    So if you absolutely need 2 devices for any reason other than trying to minimize latency over a wireless connection, then go for it; if not, maybe try just one RPi 4.
    If the Pi camera is faster, then OK, why not use it; I'm just wondering why 2 devices.

  • OK, you have a valid point. I am now going to do the following:

    • Compile Hyperion on my RPi 4 and test it with the USB camera
    • Attach the USB camera to a Windows PC and check the latency
    • Test the RPi camera module I have ordered


    The reason I want to trial the two-RPi method is that another user on the forum has been successful with it; as seen in the video, he is sending the stream from one RPi to another and the latency seems minimal. Also, I wanted to go wireless with the camera for better positioning in the other corner of the room and minimal visibility of the camera and cables. Let me know what you think.


    [MEDIA=googledrive]1FrkTOLvQ0xAVKE_ufErxC_CLmBWwgXdk[/MEDIA]

  • How did you watch the video of your USB cam? If you streamed it via LAN, there may be some buffering in the software (like VLC). Unfortunately, the ffmpeg approach didn't work for me: streaming the complete video was too much data and induced a noticeable latency (of around half a second). I wrote a little Python script that streams and transforms using OpenCV (the biggest challenge is getting OpenCV installed on an RPi, but plenty of guides are out there). In my Python script, only a matrix of 160x90x3 is transferred via protobuffer. This reduces the latency induced by transforming the image and transferring the data to around 1/25th of a second (i.e., one frame), which is acceptable in my opinion. I would have posted the code in a git repository, but I am still struggling with the color calibration of my PS3 Eye camera and hope to find a suitable color correction. If somebody is interested in using the code before I am finished, please write me a PM or here and I'll try to publish the code as it is.
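
    (Just to illustrate the downscaling step, not the actual script: reducing a frame to that 160x90x3 matrix is a single OpenCV call, and 160*90*3 is only 43 200 bytes of raw data per frame.)

    import cv2

    cap = cv2.VideoCapture(0)                 # camera on the capturing device
    ok, frame = cap.read()
    if ok:
        # 160x90x3 uint8 = 43 200 bytes before protobuf framing
        small = cv2.resize(frame, (160, 90), interpolation=cv2.INTER_AREA)
    cap.release()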

  • Good work @Flovie. Yes, I watched the video via the network. Yesterday I got better results by doing the following:


    • Upgraded from a Raspberry Pi Zero W to a Raspberry Pi 4 (4GB)
    • Upgraded from a standard USB extension cable to a repeater USB cable (10m)
    • Tweaked the v4l2 USB camera settings to the following: --set-ctrl=power_line_frequency=1 (from 0)
    • The latency has now dropped to under roughly 400 ms, which is acceptable for a USB solution


    Today I am receiving an RPi camera (ribbon connected), so there is no harm in trying a remote camera solution. If it doesn't work out for me, I will simply return the camera :)

  • Well, if two RPis need to be used with a wireless connection for better camera positioning, then the ffmpeg method would need tweaking; my initial guide is for one RPi that does it all. Just as Flovie said, streaming the full camera video over wireless may not be a good idea, since it introduces even more latency.


    But looking at the video, the latency seems fine.


    If using ffmpeg, it can be tweaked to be faster: first do a crop, then scale down to half resolution or even less, and only at the end correct the perspective and send the result to the second RPi; this would be faster.


    If the latency is really bad, the best option would be to send only the image parameters after Hyperion processing, not the image itself, but I'm not sure Hyperion can work like that.


    But it does support multiple Hyperion instances and forwarding data to other instances using the JSON/protobuf server/client.


    So the RPi connected to the camera would also have Hyperion installed, but without LEDs: it would do the perspective correction, calculate everything, and forward it to another Hyperion instance over wireless, and the second instance would control the LEDs.


    There isn't much detailed documentation, but I see in the source that it sends image data and a few other things like some color data; I'm not sure whether the image is the full one or already processed.


    Well, I was just looking for a simple way; configuring 2 devices needs additional testing, especially for latency.

  • @Andrew_xXx thank you for breaking it down. I did not know that we could run two instances. However, I have no programming/coding experience, so I think this task would be too complicated for someone like me. I may just stick to the USB camera method and live with the delay. If you guys would like to test the two-RPi configuration, feel free to help me.

