Posts by Flovie

    OK, you have a valid point. I am now going to do the following:

    • Compile Hyperion on my RPi 4 and test with the USB camera
    • Attach the USB camera to a Windows PC and check latency
    • Test the RPi camera module I have ordered and report the results


    The reason why I want to trial the two-RPi method is that another user on the forum has been successful with it, and as seen in the video, he is sending the stream from one RPi to another and the latency seems minimal. Also, I wanted to go wireless with the camera for better positioning in the other corner of the room and minimal visibility of the camera and cables. Let me know what you think.


    [MEDIA=googledrive]1FrkTOLvQ0xAVKE_ufErxC_CLmBWwgXdk[/MEDIA]


    How did you watch the video of your USB cam? If you streamed it via LAN, there may be some buffering in the software (like VLC). Unfortunately, the ffmpeg approach didn't work for me: streaming the complete video was too much data and induced a noticeable latency of around half a second.

    I wrote a little Python script that streams and transforms using OpenCV (the biggest challenge is getting OpenCV installed on an RPi, but plenty of guides are out there). In my script, only a matrix of 160x90x3 is transferred via Protobuffer. This reduces the latency induced by transforming the image and transferring the data to around 1/25th of a second (i.e., one frame), which is acceptable in my opinion.

    I would have already posted the code in a git repository, but I am still struggling with the color calibration of my PS3 Eye camera and hope to find a suitable color correction. If somebody is interested in using the code before I am finished, please send me a PM or write here and I will try to publish the code as it is.
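    Until the repository is up, here is a trimmed-down sketch of the idea (not my final script). It assumes you have generated a Python module from the message.proto that ships with the Hyperion sources (protoc --python_out=. message.proto); the host address and priority are placeholders to adapt, and the field names should be double-checked against your Hyperion version:

    [CODE]
    import socket
    import struct

    import cv2
    import message_pb2  # generated from Hyperion's message.proto

    HYPERION_HOST = "192.168.0.10"  # adjust to your Hyperion device
    HYPERION_PORT = 19445           # default Protobuffer port

    def send_frame(sock, frame_bgr):
        # Hyperion expects raw RGB bytes, row by row
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        request = message_pb2.HyperionRequest()
        request.command = message_pb2.HyperionRequest.IMAGE
        image = request.Extensions[message_pb2.ImageRequest.imageRequest]
        image.priority = 100
        image.imagewidth = rgb.shape[1]
        image.imageheight = rgb.shape[0]
        image.imagedata = rgb.tobytes()
        image.duration = -1
        payload = request.SerializeToString()
        # every message is prefixed with its length as a 4-byte big-endian integer
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    cap = cv2.VideoCapture(0)
    with socket.create_connection((HYPERION_HOST, HYPERION_PORT)) as sock:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # only a 160x90x3 matrix goes over the wire
            small = cv2.resize(frame, (160, 90), interpolation=cv2.INTER_AREA)
            send_frame(sock, small)
    cap.release()
    [/CODE]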

    I'm just afraid, from my tests, that OpenCV fisheye correction could be too resource-intensive for the RPi. Also, with OpenCV you need a camera setup procedure that involves calculating the camera and distortion matrices via a calibration procedure that isn't easy to do; it is not possible to guess the matrix values or pick universal ones. ffmpeg, on the other hand, has only 2 parameters, so it can even be tuned by trial and error; it is much simpler.
    Also, not all fisheye lenses are the same; not all are ideal high-quality 180-degree ones.
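    To illustrate why the calibration is the hurdle: per frame, the OpenCV dewarp itself is only a cheap remap, but the camera matrix K and the distortion coefficients D feeding it have to come out of a per-camera chessboard calibration. The values below are made-up placeholders, not universal constants:

    [CODE]
    import cv2
    import numpy as np

    # K and D must come from cv2.fisheye.calibrate() for YOUR camera;
    # these numbers are placeholders for illustration only
    K = np.array([[350.0, 0.0, 320.0],
                  [0.0, 350.0, 240.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([-0.05, 0.01, 0.0, 0.0])  # fisheye model k1..k4

    # the maps are computed once; per frame only the remap remains
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (640, 480), cv2.CV_16SC2)

    def dewarp(frame):
        return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    [/CODE]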


    Totally agree. I only have a perspective distortion, so the matrix is simpler and easy to handle in OpenCV. Nevertheless, I will also take a look at the ffmpeg approach because it appears very convenient and should also support IP cams, which would fulfill my requirement of an external device for capturing. Thank you for pointing me in this direction.

    I have found an even easier way to get fisheye 180-degree cams working with Hyperion: you can use a single ffmpeg command to grab the camera image, correct the lens distortion, and send it to a virtual camera. Still testing, not ideal, but it kinda works; I will post the exact commands later.
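    Until I post the exact commands, this is the rough shape of the idea on Linux, wrapped in a small Python launcher to match the other sketches here. It assumes the v4l2loopback module is loaded to provide the virtual device; the device paths and the k1/k2 values are examples that need adjusting per lens:

    [CODE]
    import subprocess

    # grab the fisheye webcam, dewarp it with ffmpeg's lenscorrection filter
    # (only the two radial terms k1/k2 to tune) and write the result into a
    # v4l2loopback virtual camera that Hyperion can use like any capture device
    subprocess.run([
        "ffmpeg",
        "-f", "v4l2", "-i", "/dev/video0",         # physical fisheye camera
        "-vf", "lenscorrection=k1=-0.3:k2=-0.05",  # trial-and-error values
        "-f", "v4l2", "/dev/video10",              # virtual camera
    ])
    [/CODE]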


    This sounds like a good idea, especially when you are on the same device. I also made some progress with OpenCV and Protobuffer on a remote device for capturing the images. Since yesterday I have a working proof of concept, however with a noticeable latency. I need to add some threading and will report back later with the source code.

    I am not sure if GStreamer is even required, because ultimately you only need to calculate a dewarped image and send it via Protobuffer to the Hyperion instance; you don't need to transform the image back into a virtual camera. Here is an example from the Hyperion wiki on how to work with the Protobuffer:


    https://hyperion-project.org/w…uffer-Java-client-example


    However, it is based on Java. A few years ago, I worked on a music visualizer with Hyperion. It was also written in Java and used the Protobuffer to control the ambilight. It is still working pretty well. I will consider publishing the code; maybe it helps with understanding the Protobuffer.

    Since I bought a 4K TV, I have been thinking about implementing a webcam as a video capture device for my ambilight as well. During my research I found this topic. It already looks very promising. However, I was wondering why we don't just write a little (independent) program that captures the webcam stream, applies some sort of affine transformation and cropping, and delivers the new (low-resolution) image to Hyperion via proto (a sketch of the transformation follows the lists below). There are certainly some advantages and disadvantages:


    pros:
    - independent of the Hyperion device - any second device can provide the transformed image via WiFi
    - independent of Hyperion and its version (apart from the proto interface); it should also work with Hyperion classic
    - filters, driver settings, automatic border detection, and similar stuff related solely to this kind of capturing can be implemented in the software without messing with Hyperion


    cons:
    - latency may be an issue - however, since the transformation itself is pretty cheap and the resolution of the resulting image is not very important, it could be feasible
    - I fear that additional filters such as black border detection are not applied to images that are provided via proto (can someone confirm this?), so we would need to implement this ourselves.
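    As a sketch of the transformation step (the corner coordinates are made-up examples; in practice you would pick them once, e.g. by clicking the TV corners in the webcam image during a small setup step):

    [CODE]
    import cv2
    import numpy as np

    # pixel positions of the TV corners in the webcam image (tl, tr, br, bl);
    # placeholders - determine them once during setup
    src = np.float32([[102, 80], [548, 95], [560, 395], [90, 380]])
    # map onto an upright low-resolution rectangle
    dst = np.float32([[0, 0], [160, 0], [160, 90], [0, 90]])

    M = cv2.getPerspectiveTransform(src, dst)

    def tv_image(frame):
        # crop and dewarp in one warp; 160x90 is plenty for LED colors
        return cv2.warpPerspective(frame, M, (160, 90))
    [/CODE]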


    Has somebody tried this approach yet? If nobody sees more disadvantages, I would try to write this (probably) small piece of software required for a proof-of-concept.


    Edit: Of course I am willing to share it, and if someone has experience with concepts like real-time video stream processing or helpful libraries like OpenCV, I would appreciate any help ;)

    Thank you. If you stumble across it again, I would appreciate it if you could send me the link :) I haven't seen it, although I searched for possible solutions for quite a while.


    However, it is not too hard to write it on my own because the Protobuffer Java class is already available, and I use the Minim library for audio processing (it took me only 5 hours to set both up without any previous knowledge, and I learned a lot, so the time was well spent). Therefore, the whole challenge reduces to being creative in building one's own visualization.


    Yes, I did, but thank you for the link. So far this very similar piece of software is limited to Linux because it uses GStreamer. Since I have no Linux computer around for that, I want to handle a simple audio stream coming in via line-in or even a microphone. This should be more platform-independent (however, you have to get the audio source from somewhere). Apart from that, the idea is the same.

    Hi,


    recently, I started to develop a tool for music visualization via Hyperion. The calculations are performed by a Java program on a Windows computer. I will present a first alpha of it a little later.


    Anyway, I use Protobuffer for transferring the images to Hyperion. I basically took the provided Protobuffer example[1] without changing it. I experienced a considerably high latency for sending an image and receiving the reply via this program (~40 ms). Surprisingly, when I increased the amount of data to be sent (by artificially increasing the image resolution), the latency decreased significantly (~4 ms). I am not sure if this is some buffering problem of my Windows system, network device, or the Protobuffer protocol, or if it is really a bug in Hyperion. However, I wanted to start this thread to provide a possible solution for other developers facing the same problem.
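    One possible explanation I came across (just an assumption, I have not verified it): small TCP messages being held back for tens of milliseconds while larger writes go out immediately is the typical symptom of Nagle's algorithm batching small packets. Disabling it on the sending socket is cheap to try - in Java, Socket.setTcpNoDelay(true); in a Python client it would look like this:

    [CODE]
    import socket

    sock = socket.create_connection(("192.168.0.10", 19445))  # example address
    # send small Protobuffer messages immediately instead of batching them
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    [/CODE]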


    So if it is not a real bug, never mind ;) For everybody else experiencing considerably high latencies: try increasing the amount of data - maybe it helps.




    [1] https://hyperion-project.org/w…uffer-Java-client-example