Music-reactive visualization

  • Hello everybody,


    it would be great if a music-reactive visualization could be added (either under LibreELEC or Raspbian on an RPi3).
    I would imagine it as shown in this video:


    (Embedded video: www.youtube.com)

  • I've begun working on this.


    I'm aiming to get the audio from the Composite->USB capture device most people already have: https://www.amazon.com/gp/prod…tle_o01_s00?ie=UTF8&psc=1
    I can grab audio from it using Audacity on my RPi4, so I'm going to look at how that is accomplished.


    I want to use the exact effects from Vu Meter, essentially porting the code to fit Hyperion.
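
    The core of that effect is really just mapping the current audio level to how many LEDs are lit. Here's a minimal sketch of the idea (this is not the actual Vu Meter code, and all the names are made up):

    Code
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Hypothetical RGB type; Hyperion has its own color structures.
    struct Rgb { uint8_t r, g, b; };

    // Map the RMS level of one block of samples onto a strip of LEDs:
    // quiet -> a few LEDs lit, loud -> most LEDs lit, red at the top.
    std::vector<Rgb> vuMeter(const std::vector<int16_t>& samples, size_t ledCount)
    {
        double sum = 0.0;
        for (int16_t s : samples)
            sum += double(s) * double(s);
        double rms = std::sqrt(sum / std::max<size_t>(samples.size(), 1)) / 32768.0;

        size_t lit = size_t(rms * ledCount);
        std::vector<Rgb> leds(ledCount, Rgb{0, 0, 0});
        for (size_t i = 0; i < lit && i < ledCount; ++i)
            leds[i] = (i >= ledCount * 3 / 4) ? Rgb{255, 0, 0} : Rgb{0, 255, 0};
        return leds;
    }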


    I'm a Java/C# software engineer, so this isn't my usual territory: my C++ is sloppy, and my understanding of the web interface Hyperion NG uses is rudimentary.


    If anyone has insight, please share. I'd like info on how the other developers here configure their build environments for rapid testing, etc. Going to continue reading over the forums, thanks! :)

  • I'm currently working on this, using ALSA on Linux to capture the audio from a selected audio device.
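
    The capture side boils down to something like the following (a trimmed-down sketch with most of the error handling left out; the device name, format, and rates are just placeholders):

    Code
    #include <alsa/asoundlib.h>
    #include <cstdint>
    #include <vector>

    int main()
    {
        snd_pcm_t* pcm = nullptr;
        // "default" can be swapped for a specific device such as "hw:1,0".
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
            return 1;

        // 16-bit little-endian, stereo, 44.1 kHz, 500 ms maximum latency.
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 1, 500000);

        std::vector<int16_t> buffer(2 * 1024); // 1024 frames x 2 channels
        for (int block = 0; block < 1000; ++block)
        {
            snd_pcm_sframes_t frames = snd_pcm_readi(pcm, buffer.data(), 1024);
            if (frames < 0) // recover from overruns instead of bailing out
                snd_pcm_recover(pcm, static_cast<int>(frames), 0);
            // ... hand the PCM block to the visualizer here ...
        }
        snd_pcm_close(pcm);
        return 0;
    }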


    I'm currently having issues with the configuration showing up in Hyperion. I submitted a pull request to fix schema loading, but I think there is a deeper issue, as the config doesn't show up even after loading the schema.


    The other thing I want to do is explore using an FFT instead of averaging the raw PCM data. I need to be able to install Python modules into Hyperion, or at least modify Hyperion to find Python modules installed in the OS.
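
    For context: averaging gives a single overall loudness, while an FFT gives per-frequency energy, so bass and treble can drive different LEDs. If the Python route doesn't pan out, a handful of bands can also be computed directly in C++ with the Goertzel algorithm; a rough sketch of both approaches:

    Code
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Goertzel algorithm: the power of a single frequency bin without a
    // full FFT. Handy when only a few bands (bass/mid/treble) are needed.
    double bandPower(const std::vector<int16_t>& x, double freqHz, double sampleRate)
    {
        const double kPi = 3.14159265358979323846;
        const double coeff = 2.0 * std::cos(2.0 * kPi * freqHz / sampleRate);
        double s1 = 0.0, s2 = 0.0;
        for (int16_t sample : x)
        {
            double s0 = sample / 32768.0 + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;
    }

    // The current approach, for comparison: one number for the whole block.
    double averageLevel(const std::vector<int16_t>& x)
    {
        double sum = 0.0;
        for (int16_t sample : x)
            sum += std::abs(sample / 32768.0);
        return x.empty() ? 0.0 : sum / x.size();
    }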


    Lastly, I'm currently getting audio data from a mic and a USB audio interface. I want to explore grabbing it from the HDMI input; on my first go-round it appeared that the data may not be PCM. After I get everything else together, I'll reinvestigate.


    (Embedded video: www.youtube.com)

  • Wouldn't it be possible to use a "visualisation addon" for the analysis and use that data as the "audio source"?


    I'm not sure what you mean by "visualization addon". Currently the application has APIs that you can hit to control the LEDs, so technically you could create a visualization application and then control the LEDs via this API.
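
    For example, a minimal client could push a color to Hyperion's JSON server like this (19444 is the default port if I remember right; check the docs for the exact fields):

    Code
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    int main()
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(19444); // Hyperion's JSON server
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0)
            return 1;

        // Set all LEDs to red at priority 50; messages are newline-terminated.
        std::string msg =
            R"({"command":"color","priority":50,"origin":"test","color":[255,0,0]})"
            "\n";
        send(fd, msg.c_str(), msg.size(), 0);
        close(fd);
        return 0;
    }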


    What I'm doing is creating an Audio Capture feature, like the Screen Capture, Camera Capture, and Video Capture features.


    I've completed the Windows implementation and am now working on the Linux implementation using the ALSA libs. I don't have a Mac, so I probably won't be able to do the Mac implementation. Maybe I can use a Hackintosh on my laptop to boot macOS, but I'm not sure if the audio device will work.


    I'm also stubbing it out to enable us to use Python to process the audio data and emit the LED configuration. In the future this will allow custom audio visualization plugins.
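
    Conceptually the stub is just an interface like the one below (the names are illustrative, not the final code); the hard-coded effect, an FFT visualizer, or a Python-backed plugin would each be one implementation of it:

    Code
    #include <cstdint>
    #include <vector>

    struct Rgb { uint8_t r, g, b; };

    // Illustrative plugin interface: take one block of PCM samples and
    // return one color per LED.
    class AudioVisualizer
    {
    public:
        virtual ~AudioVisualizer() = default;
        virtual std::vector<Rgb> process(const std::vector<int16_t>& samples,
                                         size_t ledCount) = 0;
    };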

  • Sorry, my fault! I was talking about the "visualization addons" that are implemented in Kodi. They already visualize audio, so it should be easy to grab that data, shouldn't it?


    Ahh, OK. I haven't used Kodi myself, but I believe there is a way to integrate it so that Kodi can control the LEDs.


    With my current configuration, I have a little USB sound dongle connected to the pre-amp left and right outputs of my theater receiver. This allows me to run the visualizer with any audio source (even via eARC from my TV). The proof of concept above is my TV running its built-in Spotify app. Theoretically it should also be able to capture PCM audio via HDMI.

  • Update on the Audio Capture logic: I've successfully created the version for Linux using ALSA, and I'm going to update the build scripts to make it optional. I've run into some issues with Windows DirectSound and should probably recode the Windows side to use WASAPI; I will do that in the future. I'm going to compile it on the RPi soon and test it there.


    I used Ubuntu in VMware to write the Linux side, and did the DirectSound part on my PC.

  • Here's my progress so far. This is taking a little time because I work on it in my off time, and I had to study the source code a bit to learn how it was structured before coding. It works great on Ubuntu on a PC and on the Raspberry Pi.


    I need to make the audio capture automatically disable itself if it doesn't find the ALSA lib. For Windows, I need to port it to use WASAPI instead of DirectSound; DirectSound was the very first implementation. I don't have a Mac, so I haven't been able to develop the Mac version. I may create a Hackintosh on my laptop to do it.
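
    One way to handle the "disable it if ALSA is missing" part is to probe for the shared library at runtime with dlopen (assuming a runtime check is wanted rather than a build-time one):

    Code
    #include <dlfcn.h>

    // Probe for libasound at startup; if it isn't there, the audio
    // grabber can report itself as unavailable instead of failing.
    bool alsaAvailable()
    {
        void* handle = dlopen("libasound.so.2", RTLD_LAZY);
        if (handle == nullptr)
            return false;
        dlclose(handle);
        return true;
    }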


    Here is the visualizer running on two instances: one is connected to a pin on my RPi4, and the one on the floor is an ESPixelStick.


    (Embedded video: www.youtube.com)


    Here's what the configuration looks like right now. I want to make the visualizer pluggable; it is already designed in such a way as to be pluggable, but for time's sake I hard-coded the Vu Meter.


    Here is the audio hardware enumerated:


  • it would be great if a music-reactive visualization could be added (either under LibreELEC or Raspbian on an RPi3).

    In the meantime, I did it by adding a microphone sensor (MAX4466) to the microcontroller (Wemos D1 Mini) and flashing the sound-reactive fork of WLED. It does not need any additional software or hardware; all the light control is handled by the microcontroller.
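
    The principle on the microcontroller is simple. This is not WLED's actual sound-reactive code, just a rough Arduino-style illustration of the mic-to-light idea, assuming the MAX4466 output is wired to A0:

    Code
    // Illustrative sketch only: read the MAX4466 on A0 and turn the
    // signal swing into a brightness value. WLED does this far better.
    const int MIC_PIN = A0;

    void setup() {
        Serial.begin(115200);
    }

    void loop() {
        // Sample for ~20 ms and track the peak-to-peak swing.
        unsigned long start = millis();
        int lo = 1023, hi = 0;
        while (millis() - start < 20) {
            int v = analogRead(MIC_PIN);
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
        int brightness = map(hi - lo, 0, 1023, 0, 255); // louder -> brighter
        Serial.println(brightness); // WLED would drive the LEDs here instead
    }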


    (Embedded video: youtu.be)


    Maybe I'll do an "upgrade" to an ESP32, because the ESP8266 is limited in comparison.


    The sound-reactive fork of WLED can be found here: https://github.com/atuline/WLED/releases/

  • Hope that wasn't an April Fools' joke, Michael Rochelle :D


    You wouldn't happen to know the path to the system audio, would you? Here's an example of a device path, the Pi's ttyS0:

    Code
    --device=/dev/ttyS0:/dev/ttyS0 \


    Is there an equivalent path to access the system audio?



    Paulchen-Panther - you wouldn't happen to know a way, off the top of your head, to set up a named pipe that grabs the system audio and pipes it to a FIFO file, would you?

  • davieboy Nah, it wasn't a joke. My LEDs have arrived, and I've re-created my light bars on the corners using WLED. I'm having an issue with WLED which may be related to the problem I have with my Ubiquiti wireless access point. What you see above is my fork of Hyperion; I stopped development for a second because I wanted to get a lot of the latest features into my fork, and I have several things to fix as well.


    I've been really busy with work lately, but I plan on getting back to finishing up the Audio Grabber feature.
