Integrating custom code with an already existing Hyperion 2.0? Running two simultaneous Ambilights?

  • Hello all,


    I first discovered this forum years ago when I built my first ever Ambilight in 2017. I just recently noticed there's a new version of Hyperion that runs in the browser now instead of the Java applet, so I've been adjusting a little to that change. Here is my issue, though:


    I'd like to run a second Ambilight simultaneously alongside the first. This second Ambilight ("Ambilight X") will only look at a subsection of the screen capture and output colors according to preset logic I've written (e.g. if the health bar averages red, glow red; if it averages green, glow green). I wrote my code in Python, but I think Hyperion is written in C++? Is it feasible for me to marry the two, or will I have to either port my Python code to C++ or write my own Ambilight software?


    Also, my original Ambilight 1.0 setup uses APA102s and an RPi. For my Ambilight X I have WS2811s, so I think I'll also need an Arduino due to the lack of a clock input on these 3-pin LEDs?


    TLDR:


    1) Ambilight on the TV to extend the display

    2) Ambilight in the room that only colors based on specific "event points" on screen

    • Official Post

    Hey gigahrebic


    One option to solve this could be to leverage Hyperion's JSON API.

    With your Python program you can listen to the captured image stream, do your image evaluation and then trigger (color) effects.
    Note: use a WebSocket connection in this case.


    From the above it was not clear to me whether the effect is sent to the same LED device or a different one.

    In Hyperion NG you can define multiple LED (i.e. output) instances, so you can decide which instance your Python program sends an effect to.

    The above is just a quick outline. I suggest you explore the API details.
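
    For illustration, a minimal Python sketch along these lines might switch to a second instance and set a color over the JSON server (assuming the default JSON port 19444 and newline-delimited messages; the host address, instance id and priority are placeholders, a sketch rather than a definitive implementation):

    Code
    import json
    import socket

    HYPERION_HOST = "192.168.1.31"  # placeholder: address of the RPi running Hyperion
    JSON_PORT = 19444               # Hyperion's default JSON server port

    with socket.create_connection((HYPERION_HOST, JSON_PORT)) as sock:
        reader = sock.makefile("r")

        def send_command(payload):
            # the JSON server exchanges newline-delimited JSON documents
            sock.sendall((json.dumps(payload) + "\n").encode("utf-8"))
            return json.loads(reader.readline())

        # switch this API session to the second instance (instance ids start at 0)
        print(send_command({"command": "instance", "subcommand": "switchTo", "instance": 1}))

        # set a solid red on that instance at priority 50
        print(send_command({"command": "color", "color": [255, 0, 0],
                            "priority": 50, "origin": "python-script"}))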

  • Hi Lord-Grey, thanks for replying.


    Quote

    From the above it was not clear to me whether the effect is sent to the same LED device or a different one.

    In Hyperion NG you can define multiple LED (i.e. output) instances, so you can decide which instance your Python program sends an effect to.

    Apologies, I'll explain the situation more clearly.

    I want TWO LED systems running from ONE input signal:

    • 1) One LED system functions like a normal Ambilight: it reads zones around the border of the display signal and mirrors them on LEDs to extend the display from behind the TV. This is already fully functional from before.
    • 2) Another LED system will be used as background lighting for my entire room. I want this one to read only a specific zone. My code takes this specific zone and outputs colors based on its analysis, for example a health bar in a video game. This is my brand-new venture.

    The way my code for 2) works right now is that it outputs an RGB value based on whatever image it processed. However, I've only tested this with premade screenshots, not a live video feed. My biggest obstacles are as follows:

    • 2A) I need my code to access the same video feed that 1) is seeing. I have the older Ambilight setup with the USB grabber + HDMI2AV converter. I will heavily consider upgrading to an HDMI capture card.
    • 2B) I need to learn how to "rezone" the areas for analysis, i.e. instead of looking at the border, I want to crop a portion of the video feed and use only that cropped feed.
    • 2C) I need to then assign the colors output by the analysis of 2B)'s cropped feed to the 2) LED system (a rough sketch of 2B/2C follows this list).
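
    Here is a rough sketch of what I mean for 2B) and 2C), assuming the frame arrives as a BGR numpy array (as OpenCV delivers it) and with made-up crop coordinates for the health bar:

    Code
    import numpy as np

    # hypothetical crop box for the health bar, in pixels: (top, bottom, left, right)
    HEALTHBAR_BOX = (40, 60, 100, 500)

    def healthbar_color(frame_bgr):
        """Crop the health bar zone and return the (r, g, b) mean color of that zone."""
        top, bottom, left, right = HEALTHBAR_BOX
        zone = frame_bgr[top:bottom, left:right]
        b, g, r = zone.reshape(-1, 3).mean(axis=0)  # average over all pixels in the crop
        return int(r), int(g), int(b)

    def classify(rgb):
        """Very naive red/green decision for the room LEDs."""
        r, g, b = rgb
        return (255, 0, 0) if r > g else (0, 255, 0)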
    Quote

    One option to solve this could be to leverage Hyperion's JSON API.

    With your Python program you can listen to the captured image stream, do your image evaluation and then trigger (color) effects.

    Note: use a WebSocket connection in this case.

    I think your proposed solution takes care of all my big obstacles for now. I have some coding experience but haven't worked with APIs before, so this will be a great learning opportunity. I have only two final concerns, though they are hardware related, so feel free to defer me to the hardware forum.

    • Will I be able to get away with one RPi 3B handling all the processing, or should I use another RPi in parallel?
    • My LEDs for 2) are WS2811s and I think I'll need to use an Arduino; my current 1) Ambilight uses APA102s, so I can get away with just one RPi. I believe this is because the APA102s have a dedicated clock signal input, could you confirm?

    Thank you

    • Official Post

    Maybe I still do not get it, but what is possible out of the box with Hyperion-NG is:

    You capture one source.
    The captured image is provided to two instances in Hyperion.
    For each instance you can define which areas of the screen are mapped to the configured LEDs of a given type.

    In your case:

    1 Grabber captures the input and feeds:

    Instance 1: LED type APA102, layout: TV
    Instance 2: LED type WS2811, layout: e.g. "only top left"

    With Hyperion Classic this was not possible and you needed to have multiple installations.
    With Hyperion NG this comes out of the box.
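
    To illustrate (values are made up): in the LED layout of an instance, each LED is assigned a fractional area of the captured image, so an instance that should only react to the top-left region could map all of its LEDs to entries like:

    Code
    [
      { "hmin": 0.0, "hmax": 0.25, "vmin": 0.0, "vmax": 0.25 },
      { "hmin": 0.0, "hmax": 0.25, "vmin": 0.0, "vmax": 0.25 }
    ]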

    I did a quick sketch to outline what is easily possible:



    Maybe you can help me see where you have additional needs...

  • Nope, that's it exactly! I was also going to do a quick sketch, haha.


    I just wanted to know if it were feasible and what kind of roadblocks I'd run into. It's great to know that Hyperion NG can fully support this without multiple installations!


    I will try my hand at this over the week and get back to you with any further questions, if I have any. Thanks so much for your input.

    • Official Post

    Good to hear that it seems to cover your needs.

    On the 2nd LED instance you could use an Arduino and install an Adalight sketch.
    In case you would like more flexibility in where you put the LEDs in the room,
    you might use an ESP8266 with WLED. With the 2nd option you stream the output via WiFi to the LEDs (or use a USB connection as in 1).
    Again, WLED devices are supported out of the box...
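
    If you ever want your Python program to drive such a WLED device directly (bypassing Hyperion), WLED also accepts realtime UDP packets. A rough sketch, assuming WLED's default realtime port 21324 and the DRGB protocol, with a placeholder IP and LED count:

    Code
    import socket

    WLED_IP = "192.168.1.50"  # placeholder: address of the ESP8266 running WLED
    WLED_PORT = 21324         # WLED's default realtime UDP port
    NUM_LEDS = 50             # placeholder LED count

    def send_solid_color(r, g, b):
        # DRGB realtime packet: [protocol=2, timeout_seconds, r,g,b, r,g,b, ...]
        packet = bytes([2, 5]) + bytes([r, g, b]) * NUM_LEDS
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(packet, (WLED_IP, WLED_PORT))

    send_solid_color(255, 0, 0)  # all red; WLED resumes normal mode after the timeout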

  • Alright, it's been a few weeks and I have not had much luck. I spent a long time trying to wrap my head around the TCP socket approach but am unable to find the video stream source. I thought it'd be as simple as having my second RPi read the video stream and then execute my code, but I can't even get that first step working.


    This is all I managed to do, based on this article: https://medium.com/nerd-for-te…-with-python-6bc24e522f19


    This is what I did for my code:

    "server side" = original RPI running current Ambilight for TV Edge, Instance 1


    "client side" = second RPI to run Instance 2


    and this is the error message, after confirming a socket connection between the two RPIs:

    Code
    Connection from: ('192.168.1.31', 55162)
    [ WARN:0] global /tmp/pip-wheel-2c57qphc/opencv-python_86774b87799240fbaa4c11c089d08cc3/opencv/modules/videoio/src/cap_v4l.cpp (802) requestBuffers VIDEOIO(V4L2:/dev/video0): failed VIDIOC_REQBUFS: errno=16 (Device or resource busy)
    [ WARN:0] global /tmp/pip-wheel-2c57qphc/opencv-python_86774b87799240fbaa4c11c089d08cc3/opencv/modules/videoio/src/cap_v4l.cpp (802) requestBuffers VIDEOIO(V4L2:/dev/video0): failed VIDIOC_REQBUFS: errno=16 (Device or resource busy)
    [ WARN:0] global /tmp/pip-wheel-2c57qphc/opencv-python_86774b87799240fbaa4c11c089d08cc3/opencv/modules/videoio/src/cap_v4l.cpp (802) requestBuffers VIDEOIO(V4L2:/dev/video0): failed VIDIOC_REQBUFS: errno=16 (Device or resource busy)
    [ WARN:0] global /tmp/pip-wheel-2c57qphc/opencv-python_86774b87799240fbaa4c11c089d08cc3/opencv/modules/videoio/src/cap_v4l.cpp (802) requestBuffers VIDEOIO(V4L2:/dev/video0): failed VIDIOC_REQBUFS: errno=16 (Device or resource busy)
    [ WARN:0] global /tmp/pip-wheel-2c57qphc/opencv-python_86774b87799240fbaa4c11c089d08cc3/opencv/modules/videoio/src/cap_v4l.cpp (777) requestBuffers VIDEOIO(V4L2:/dev/video0): Insufficient buffer memory
    [ WARN:0] global /tmp/pip-wheel-2c57qphc/opencv-python_86774b87799240fbaa4c11c089d08cc3/opencv/modules/videoio/src/cap_v4l.cpp (890) open VIDEOIO(V4L2:/dev/video0): can't open camera by index


    Am I overthinking this, or missing something very obvious? I thought I could just open a specific port like 19445 and get the video stream directly from there? Any advice?

    • Official Post

    gigahrebic It might be good if you could reiterate what you would like to solve for (to better understand whether the solution approach you have chosen is fit for purpose... :) ).


    The error messages you get in the log indicate that the video device is already used by some other process ("Device or resource busy").

    Are you already running Hyperion off the same video input in parallel?

    I suggest you have a closer look at the

    Code
    cv2.VideoCapture(0)

    statement and put some more error handling there.
    Or, if your input is different from "/dev/video0", use a different index or a filename in the constructor...
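
    For example, as a sketch (this reports the failure but does not by itself free a busy device):

    Code
    import cv2

    cap = cv2.VideoCapture(0)  # or another index, or a path such as "/dev/video1"
    if not cap.isOpened():
        raise RuntimeError("Could not open the capture device - "
                           "is another process (e.g. Hyperion) already using it?")

    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("Device opened, but no frame could be read")
    cap.release()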


    The documentation of OpenCV gives you a sample and more details:
    https://docs.opencv.org/3.4/d8…sscv_1_1VideoCapture.html

  • For some reason my post was deleted? I'll try to recap again.


    Okay, I will try to explain from the beginning what it is I'm trying to achieve.

    From one video signal, I'm trying to control two instances of LEDs, just as your picture described earlier.


    1) Instance 1 - Regular Hyperion Ambilight - Takes the input signal from my HDMI device and sends one copy to the TV so it can display, and another copy to the USB grabber so that the RPi 3B can produce LED values for the LEDs attached to the back of the TV. This works perfectly and has been working fine for the past 4 years.


    2) Instance 2 - New Code - Takes the same input signal as 1), but instead of zoning around the perimeter of the signal, it's cropped to a specific zone depending on which media is loaded. For example, in a fighting game it's zoned around the health bar. I then use my code to determine its color via image recognition. I tested this code with example screenshots and a spare RPi 3A + spare APA102s I had lying around, but have not tried it with a live video signal yet. In practice I will use my Arduino Uno and 13 m of WS2811s to control this instance.


    My immediate problem is this: I cannot access or forward the video stream to my second RPi to test my code on a live video signal. I know the first RPi and Hyperion can see and use this video device, but I've no idea how to get my second RPi (or perhaps the first) to see it without having to open Hyperion's web configuration.


    Quote

    With your Python program you can listen to the captured image stream, do your image evaluation and then trigger (color) effects.

    Note: use a WebSocket connection in this case.


    I tried to make sense of this but was not able to in these past few weeks.

    If I understood correctly, it's a matter of:

    1) opening a socket connection from my Python program, and

    2) using the following command to finally get access to the video stream for the image evaluation:

    Code
    {
      "command": "ledcolors",
      "subcommand": "imagestream-start"
    }


    but I'm still stuck on even getting the socket communication established. I tried using the sample code above to set my original RPi up as a server and my second RPi as a client, so that the second RPi can access the video stream and I can test the code before moving everything over to the WS2811s, but I have had no luck.


    I used ports 19444 and 19445 but I get "OSError: [Errno 98] Address already in use", which I assume is because Hyperion already has them designated; meanwhile my client.py code for accessing the port doesn't return or display anything other than the debugging text I wrote inside the loops. So I used the default port 10050 provided in the example above, but it returned the error above as well.


    Quote

    Are you already running Hyperion off the same video input in parallel?

    Yes, I thought that was the goal here? To use the same video input in parallel for both Instances.


    PS: I had cv2.VideoCapture(0) originally and it returned the exact same error.


    Does that help describe the problem?

    • Official Post

    Quote

    I used ports 19444 and 19445 but I get "OSError: [Errno 98] Address already in use", which I assume is because Hyperion already has them designated; meanwhile my client.py code for accessing the port doesn't return or display anything other than the debugging text I wrote inside the loops. So I used the default port 10050 provided in the example above, but it returned the error above as well.

    I suggest you read the full documentation on the Hyperion API, especially on WebSockets, and you will find that the port is 8090.


    In addition, you might check, if the following might be of help:

    https://github.com/dermotduffy/hyperion-py
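
    As a rough sketch of the client side with the websockets library (the host address is a placeholder, and the exact layout of the stream replies is best taken from the API documentation):

    Code
    import asyncio
    import json

    import websockets  # pip install websockets

    HYPERION_WS = "ws://192.168.1.31:8090"  # placeholder: Hyperion's WebSocket port

    async def main():
        # connect as a *client* to Hyperion - do not bind your own server to its ports
        async with websockets.connect(HYPERION_WS) as ws:
            await ws.send(json.dumps({"command": "ledcolors",
                                      "subcommand": "imagestream-start"}))
            while True:
                reply = json.loads(await ws.recv())
                # stream updates carry the captured image (base64-encoded)
                print(reply.get("command"))

    asyncio.run(main())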

    Quote

    Yes, I thought that was the goal here? To use the same video input in parallel for both Instances.

    But you cannot read from one device with two processes in parallel.
    If you look at my picture above, only Hyperion is capturing.


    The question for me is still: why can't you define a layout on the 2nd Hyperion instance that just maps the "health bar" of the game?
    And how many LEDs would you like to map it to?
    If that works, we can discuss dynamically changing predefined layouts...

    Maybe you can ping me a game screenshot and I will try to outline what I am referring to. You can also share the details via a private conversation with me...
