Posts by mg3point

    I've had my setup for several years now. I have one instance on my TV and another built into my couch rail that gets its info forwarded from the TV instance. I've also been able to manage both manually, using the remote app on my phone and through the web extension. Recently I had a power outage that reset everything, which usually has no effect. This time, however, my couch rail stopped working and the remote no longer connects. For some reason the reset had unchecked "Forwarder" on the TV's Pi; re-checking it got the lights on the couch rail working again, but the remote still will not connect to either instance.

    I keep getting "unable to connect to server". I've double-checked the ports and IP addresses and they are correct. I've checked on the Pis themselves and the ports are open. I am able to SSH into the Pis and trigger the remote from the terminal; I just can't get the actual apps to connect. Any ideas of what happened and how to fix it?
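
    For reference, this is roughly how I'm testing reachability from another machine on the LAN (the IPs are placeholders and I'm assuming the default JSON server port of 19444 here, so adjust to whatever port the apps are actually pointed at):

        # Quick LAN reachability check against both Hyperion instances.
        # IPs below are placeholders; 19444 is the assumed JSON server port.
        import socket

        HOSTS = ["192.168.1.50", "192.168.1.51"]
        PORT = 19444

        for host in HOSTS:
            try:
                with socket.create_connection((host, PORT), timeout=3):
                    print(f"{host}:{PORT} is reachable")
            except OSError as err:
                print(f"{host}:{PORT} is not reachable: {err}")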

    Michael Rochelle Thanks for the info! I had discovered I was trying to manipulate the platform capture instead of the USB capture and was wondering why it was having no effect haha. But yes, once I reduced the resolution it's been great. I actually just now reduced it even lower, to your recommended 720x480, and it's working like butter.

    Also, that logic level shifter looks a lot cleaner than my home-built one haha. Someday, when I finally decide to clean all this up into a self-contained box, I'll have to upgrade to that.

    Yeah, that's one of the reasons I'm reluctant. When I change them out I'd likely use aluminum track similar to yours. I may experiment with beveling the edges of the channel so that they meet continuously at a 45-degree angle toward the wall. If I'm going to change them, might as well get fancy haha. Love your 3D-printed frame structure; mine is just an aluminum bar sitting on top of the TV mount, but it never moves so it works.

    I don't know much about CoreELEC, but I would agree with the above: if you have a working system you like, stick with it. From my perspective, the advantage of using a Pi like I do is that I can take the output from my stereo receiver, which has all my other inputs (Apple TV, Xbox, External Link), and send it to the Pi to process on its own through an HDMI-to-USB input.

    Nice work. My TV is still running on the original bullet-style WS2801s I started with 8 years ago. I built an aluminum C-channel and drilled holes for each bullet (only 100 LEDs). It all sits behind a 2017 55" Samsung QLED. You see it from the side when entering the room, so I had to have something mostly clean. Though I'm getting closer and closer to pulling the trigger and rebuilding it with some new APA107s or APA102s.


    Michael Rochelle Curious what your capture settings are? (Decimation, sizes, smoothing, etc.)

    I just purchased the same HDMI to USB adapters. They work great; my color has greatly improved from being washed out by RCA degradation. However, now I'm noticing extreme lag between what's on screen and the lights reacting, which was not there before. I too am running a Pi 4B and had near-zero lag previously. I'm hoping there is a setting somewhere. Otherwise, could there be a speed difference since I am running WS2801s instead of the newer APA107s like you?

    I've never been super well versed in any of this stuff, but I find solutions to hack my way through it. I do plan on looking into a Wi-Fi controller, but for now the extra Pi works ¯\_(ツ)_/¯ . I do want to create a full write-up with pics and all someday, just gotta find the time.


    Essentially....

    I started with a service someone else wrote that hit the NHL Stats API web server every 2-3 seconds checking the score; if the Blue Jackets' score changed, I could set a function to run. I would use the Hyperion remote feature and trigger a custom goal effect I created 8+ years ago. The problem I ran into was that, depending on my TV stream, sometimes the lights would go off a few seconds before we scored and sometimes 30 seconds after play had resumed from the goal. I needed to make them locally triggered based on what was on the screen.
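
    For illustration, the polling approach looked roughly like this (not the original service; the stats URL and JSON field are placeholders, and the effect name and hyperion-remote flags are from my own setup, so treat them as assumptions):

        # Rough sketch of the score-polling approach -- not the original service.
        # The stats URL and field name are placeholders; "Goal" is my custom effect.
        import subprocess
        import time

        import requests

        SCORE_URL = "https://example.com/nhl-live-feed.json"  # placeholder endpoint
        POLL_SECONDS = 3
        last_score = None

        while True:
            try:
                data = requests.get(SCORE_URL, timeout=5).json()
                score = data.get("homeScore")  # placeholder field name
                if last_score is not None and score is not None and score > last_score:
                    # Blue Jackets scored: fire the custom goal effect for 40 seconds
                    subprocess.run(
                        ["hyperion-remote", "--effect", "Goal", "--duration", "40000"],
                        check=False,
                    )
                if score is not None:
                    last_score = score
            except (requests.RequestException, ValueError):
                pass  # network hiccup or bad JSON: just keep polling
            time.sleep(POLL_SECONDS)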

    A majority of the Blue Jackets games are on Fox Sports Ohio. When they score, the broadcast throws a flag in the corner over the scoreboard saying GOAL with a Blue Jackets logo. I originally went down the path of machine learning and started training models. After some discussions with people who knew more than me that I found on YouTube, I learned I didn't need it to learn anything if I was just looking for an exact match, so I went with OpenCV (computer vision). My only hiccup now was that the flag would change slightly for each type of goal: short-handed, game-winning goal, power-play goal, regular goal. They would also change the logo for whatever jerseys they were wearing each night, and there were also the national broadcasts on NBC Sports (only 1 or 2, but I wanted it to cover everything).

    I split the RCA signal (this was already a little dirty, causing some slight color shifting): one line continued to the TV Pi and a new one went to my trigger Pi. I had considered running this all on the TV Pi, but the video signal could only be processed by one application, and OpenCV on every frame can be processor heavy. There may even be a way to send the frames from the TV Pi to the trigger Pi via flatbuffer so I wouldn't need the other USB capture device, but again, I'm hacking here lol.

    OpenCV would take each frame and search it for a particular image, and I could adjust the match threshold to account for the poor signal. This meant it only had to be an 85% match. I had to be careful though: if I set it too low it would give false positives for opponents scoring, and that's not fun. So I screen-capped each style of goal and cropped them to just the flags, including 2 frames from the NBC goal style (it moved, so I had to add an extra instance of it to improve the odds of catching it at my Pi's frame rate). This accounted for 16 different goal triggers. OpenCV would essentially analyze each frame looking for one of these images. I was able to crop to just the upper-left region of the screen to improve the frame rate. But with so many images to search for in each frame, my FPS was around 4. But it worked!
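
    The core of the matching loop was basically OpenCV template matching, something along these lines (file names, the capture index, and the crop size are placeholders; the 0.85 threshold is the 85% match mentioned above):

        # Minimal sketch of the template-matching loop.
        # Template paths, capture index, and ROI size are placeholders.
        import glob

        import cv2

        templates = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in sorted(glob.glob("flags/*.png"))]
        templates = [t for t in templates if t is not None]
        cap = cv2.VideoCapture(0)   # the USB capture device
        THRESHOLD = 0.85            # 85% match, tuned to avoid false positives

        while True:
            ok, frame = cap.read()
            if not ok:
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            roi = gray[0:200, 0:400]   # upper-left region where the flag appears
            for tmpl in templates:
                result = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED)
                _, max_val, _, _ = cv2.minMaxLoc(result)
                if max_val >= THRESHOLD:
                    print("GOAL flag detected")   # trigger the goal effect here
                    break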

    That lasted all of 10 games until Fox Sports decided to change to Bally Sports and change their whole graphics package (#@&$). But alas, it turned out to be a good thing. Even though the flag changed for each type of goal, the same Blue Jackets logo was used in the same location on the screen every time! This meant I no longer needed 16 different images, just the one! My FPS jumped into the 28 fps range. As an added bonus for a Reds fan, the same regional network carries all their games and uses the same graphics package, so I just have to throw a Reds logo in there and it will work for home runs!


    So now, once my team scores, the network throws the flag up within 3 or so seconds of the actual goal. I then tell it to play my goal effect on the TV Pi and my couch rail Pi for 40 seconds (the average time it takes after a goal to get back to dropping the puck for play again).
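
    Fanning the trigger out to both instances can be done over Hyperion's JSON server; here's a rough sketch assuming the default port 19444, with placeholder IPs, effect name, and priority rather than my actual values:

        # Sketch: fire the goal effect on both Hyperion instances via the JSON API.
        # IPs, effect name, and priority are placeholders; 19444 is the default JSON port.
        import json
        import socket

        PIS = ["192.168.1.50", "192.168.1.51"]   # TV Pi and couch rail Pi
        GOAL_CMD = {
            "command": "effect",
            "effect": {"name": "Goal"},
            "duration": 40000,   # 40 s, roughly until the puck drops again
            "priority": 50,
        }

        def fire_goal_effect():
            payload = (json.dumps(GOAL_CMD) + "\n").encode()
            for host in PIS:
                try:
                    with socket.create_connection((host, 19444), timeout=3) as sock:
                        sock.sendall(payload)
                except OSError as err:
                    print(f"could not reach {host}: {err}")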

    I still have to reset the NBC broadcast trigger, but since there are only 1 or 2 of those games a season, they are not a priority, and I could even just manually run a separate program for them. They also changed their graphics package, so I'll have to wait until the Blue Jackets score a goal on their network to capture it.

    I know that wasn't super short, but when I get a better write-up with pictures, I'll be sure to post it to the forums.

    Thanks for posting the link to the HDMI to USB adapters. I'm currently splitting the HDMI, one output to the TV and the other to a downscaler, then to RCA, which gets split yet again from there, one line to each Pi.

    The first Pi is for the lights around the TV. It's still the original 100-LED setup I made 8+ years ago, but I've got the bullet-style WS2801s (washed-out color, I know) and they are built into an aluminum frame. It creates a nice visual feature since you see the side of the TV when you enter the room.

    On the other Pi I created a script leveraging OpenCV to check for Blue Jackets hockey goals (and, since the regional sports network uses the same graphics package, it works for Reds home runs by default) and trigger a goal light effect on the TV lights and on another set of lights I built into a drink rail behind the couch.

    https://photos.app.goo.gl/YRyR1dsXdQd7mEbY9

    One problem I've run into from day one is small shifts in color due to poor signal quality. I'm hopeful that by sticking to HDMI I can reduce the transfer points and get a cleaner signal to analyze.

    I've got 100 LEDs on my original setup around my 55" Samsung QLED. I later built a couch drink rail around the back and side of my couch, into which I inlaid 253 LEDs. I was using the flatbuffer to forward from the TV Pi to a Pi connected to the couch rail. I then selected only the bottom edge and part of the left edge of the screen for the couch rail to replicate. It's a nice immersive effect.

    https://photos.app.goo.gl/YRyR1dsXdQd7mEbY9

    As a side project I also have a Pi analyzing the screen and using computer vision to detect when the Blue Jackets (hockey) score a goal, which then triggers a custom goal light effect (seen above) on both sets of lights.

    I'm working through a similar issue. I have two Pi 4s on Alpha 9: one for the lights around my TV, the other for an LED strip I inlaid into a couch drink rail. It seems that if I reboot everything, the connection works fine.


    You essentially want the Pi with the grabber to have forwarding enabled and the flatbuffer pointed at the Pi you want receiving. Then on the receiving Pi you want the flatbuffer checked so it will receive the data. This defaults to a timeout of 5 seconds and it should "soft timeout", but I have noticed that sometimes it does not come back after the signal to the grabber has been turned off and turned back on later.


    It's a bit of an intermittent issue. I'm going to try setting the timeout on the receiving Pi to 999999999 ms (that's like 11 days or something) and see if that keeps it online. If that does not correct the issue, I suspect something on the forwarding Pi disables itself when it detects no signal.

    I've attempted to create some effects via animated GIF; however, when I select them from "choose file" to test, nothing happens. Is there a size or dimension limitation I need to follow? I am using Hyperion.NG and its online configurator. Do I need to drop the GIFs onto the Pi directly? I noticed the "fire" effect is a GIF because you are not able to edit it in the configurator.


    Any help is appreciated!

    Is there a way to access the Hyperion-captured image data from a separate application on the same Pi running Hyperion? With Hyperion.NG running, I cannot access the capture device. My goal is to use OpenCV to analyze the image and react to items that appear. I could split the signal and run it all on a separate Pi, but I was hoping I could stay all on my Pi 4, or at the very least somehow connect OpenCV on another Pi to the forwarder of the Hyperion Pi.
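
    One untested route I've been eyeing is subscribing to the JSON server's image stream instead of opening the capture device a second time. The "ledcolors"/"imagestream-start" subcommand and the reply format here are my assumptions from poking around, so inspect what actually comes back before relying on any field:

        # Untested sketch: subscribe to Hyperion.NG's image stream over the JSON API.
        # Command names and reply format are assumptions; inspect the replies first.
        import json
        import socket

        REQUEST = {"command": "ledcolors", "subcommand": "imagestream-start"}

        with socket.create_connection(("127.0.0.1", 19444), timeout=5) as sock:
            sock.sendall((json.dumps(REQUEST) + "\n").encode())
            buffer = b""
            while True:
                chunk = sock.recv(65536)
                if not chunk:
                    break
                buffer += chunk
                while b"\n" in buffer:
                    line, buffer = buffer.split(b"\n", 1)
                    if not line.strip():
                        continue
                    msg = json.loads(line)
                    # Each update should carry a base64-encoded frame that OpenCV
                    # could decode and scan, if the stream works the way I expect.
                    print(list(msg.keys()))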

    Just to confirm, is that splitter converting the HDR signal to SDR for the LED video feed? If not, are you sure your TV is actually receiving the signal as HDR? I thought sending an HDR signal through Hyperion resulted in washed out LED colors, and the only HDR-strippers I’m aware of come from HDFury.


    I'm fairly certain it's getting the HDR signal and the colors are washed out a little, but since it's just accent lighting behind the TV, it's close enough for me.

    I recently updated my old setup, which was running on a Pi 2 and leveraged an HDFury Linker. The Linker gave me a ton of trouble to get working. The new setup is a Pi 4 set up with the new HyperBian; it was back up and running very quickly and super easily.


    1) Setup sounds correct to me (caveat around the HDFury).
    2) This can be accomplished with this one by EZCOOTECH. I just switched mine to this without issue. ($41 vs $200 or whatever)
    3) Should not affect the soundbar, unless your soundbar was the input before, in which case you can just split that line with the above product.
    4) Doesn't really matter too much, but newer ones will be more performant, so I would suggest a 3 or newer.
    5) It is more processor intensive on Pis to run the HDMI input. There may be people out there doing it, but my understanding is it doesn't work well.
    6) Can't really speak to the Fury, but you would most likely need to have an HDMI switch in front of it.


    Not sure what standards are needed for Dolby Atmos and Vision to work. I do know, however, that the splitter I used above can handle HDR. You may need a Dolby Atmos and Vision capable receiver that can run two HDMI outputs at different standards. I know in my case my stereo receiver requires both outputs to be the same, but it works for me because I have the splitter and downscaler and am not doing Dolby stuff. The stereo receiver also works as my device switch: everything is plugged into it, and one output gets split to the TV and the Pi.

    You are most welcome - glad it worked!


    Can't help you with the rest; no idea how Google Home works. My SmartThings hub is on my local network, so it can call the endpoint directly. You are probably going to have to open a hole in your firewall and port forward from your router to port 8080 on your Pi's internal IP address. Good luck!


    milhouse


    Yep, I've got that part working now. Now I just have to set up different recipes on IFTTT. Right now it's just set to the one effect, and I can have it set any color I specify. One thing to note is that Google does not capitalize the spoken words, so it calls /effect/candle instead of /effect/Candle. I could rename all my effects to lowercase, but I figure it's probably just as easy to set a recipe for the few I actually want to use via voice. I do want to go in and create an option to control the brightness. Then I will have to go back and redo my Pi setup on my TV to OpenELEC and run the same.
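
    Another option I've been toying with (purely hypothetical for now, nothing like my actual endpoint) would be a tiny shim in front of Hyperion that matches effect names case-insensitively, so Google's lowercasing wouldn't matter; the route, ports, and effect names below are placeholders:

        # Hypothetical shim: match effect names case-insensitively and forward
        # the request to Hyperion's JSON server. All names/ports are placeholders.
        import json
        import socket

        from flask import Flask, abort

        app = Flask(__name__)
        EFFECTS = ["Candle", "Rainbow swirl", "Knight rider"]   # whatever is installed
        HYPERION = ("127.0.0.1", 19444)

        @app.route("/effect/<name>")
        def trigger(name):
            match = next((e for e in EFFECTS if e.lower() == name.lower()), None)
            if match is None:
                abort(404)
            cmd = {"command": "effect", "effect": {"name": match}, "priority": 50}
            with socket.create_connection(HYPERION, timeout=3) as sock:
                sock.sendall((json.dumps(cmd) + "\n").encode())
            return f"started {match}\n"

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8081)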