Output color determination algorithm

  • I just saw a really nice article at Hackaday.com discussing a method and implementation for determining the perceived dominant color in an image. In a quick search I could not clearly find which algorithm/logic Hyperion currently uses to determine the output color(s), though it does a great job. This is not necessarily meant as an improvement, but the article might bring some inspiration for future development.
    If someone can point me to the code/algorithm used by Hyperion, I'd love to learn more about it --> someday I will probably build an FPGA-based setup (I'm an FPGA/VHDL developer by day).


    edit: just found some extra info in the linked articles about the Hue camera app

    Edited twice, last by René Arts ()

  • Here is my approach.
    [MEDIA=pastebin]E05dA4Un[/MEDIA]
    Replace imagetoleds.h with my file. I recommend enabling smoothing, because we can't calculate over more than one frame as suggested in the article.


    Perhaps I did something wrong, or somebody can make it better .....


    edit:
    switched back to partSize=16 and changed the "C" in line 226 from 0.2 to 1.0
    added ignoring partition 0; for some reason all "unimportant" colors get counted there (it grows too much). To do so, change i=0 to i=1 in line 233


    Now I get OK results - watching "Big Buck Bunny" looks good. Watching "Star Trek Horizon" is mostly OK, but sometimes the color switches too fast between two different colors.


    My best smoothing settings so far for that: time: 600, frq: 24, delay: 0
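    The "ignore partition 0" tweak from the edit above could be sketched roughly like this. This is not the pastebin code, just a minimal stand-in: hueMap and partSize are assumed names for the importance-weighted hue histogram the patch builds, and the point is only that the search for the dominant bin starts at i=1 instead of i=0:

    ```cpp
    #include <array>

    // Assumed stand-ins for the names used in the pastebin patch.
    constexpr int partSize = 16;
    std::array<double, partSize> hueMap{};  // importance-weighted hue histogram

    // Pick the dominant hue bin, skipping partition 0 entirely (i=1 instead
    // of i=0) so the bin where the "unimportant" colors pile up cannot win.
    int dominantBin()
    {
        int best = 1;
        for (int i = 1; i < partSize; ++i)
            if (hueMap[i] > hueMap[best])
                best = i;
        return best;
    }
    ```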

    Edited once, last by redPanther ()

  • I tested another thing: average over the whole picture, transform the resulting color to HSL space, then limit L to the range 0.4-0.6 and set S to 1, then transform back to RGB space. This looks quite nice. With smoothing I can adjust how jumpy the whole thing is.
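    A minimal sketch of that idea, assuming the frame has already been averaged down to one RGB color with channels in [0, 1]. The RGB<->HSL helpers are the standard textbook formulas, not Hyperion code:

    ```cpp
    #include <algorithm>
    #include <cmath>

    struct Rgb { double r, g, b; };  // channels in [0, 1]

    // Standard helper for HSL -> RGB conversion.
    static double hue2rgb(double p, double q, double t)
    {
        if (t < 0.0) t += 1.0;
        if (t > 1.0) t -= 1.0;
        if (t < 1.0 / 6.0) return p + (q - p) * 6.0 * t;
        if (t < 1.0 / 2.0) return q;
        if (t < 2.0 / 3.0) return p + (q - p) * (2.0 / 3.0 - t) * 6.0;
        return p;
    }

    // Take the averaged color, clamp lightness into [0.4, 0.6], force full
    // saturation, and convert back to RGB.
    Rgb normalizeColor(Rgb c)
    {
        const double mx = std::max({c.r, c.g, c.b});
        const double mn = std::min({c.r, c.g, c.b});
        double h = 0.0;
        double l = (mx + mn) / 2.0;

        if (mx != mn) {                     // chromatic: recover the hue
            const double d = mx - mn;
            if (mx == c.r)      h = std::fmod((c.g - c.b) / d, 6.0) / 6.0;
            else if (mx == c.g) h = ((c.b - c.r) / d + 2.0) / 6.0;
            else                h = ((c.r - c.g) / d + 4.0) / 6.0;
            if (h < 0.0) h += 1.0;
        }

        l = std::clamp(l, 0.4, 0.6);        // limit L to 0.4 - 0.6
        const double s = 1.0;               // set S to 1

        const double q = l < 0.5 ? l * (1.0 + s) : l + s - l * s;
        const double p = 2.0 * l - q;
        return { hue2rgb(p, q, h + 1.0 / 3.0),
                 hue2rgb(p, q, h),
                 hue2rgb(p, q, h - 1.0 / 3.0) };
    }
    ```

    Note that forcing S to 1 turns near-gray averages into a fully saturated hue, so smoothing matters even more for dark or desaturated scenes.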

  • The original article is about finding the best matching color over an entire picture. Maybe it does not work as well as we think/hope with only partial images? Or maybe noise can get too dominant when only a few pixels (compared to the entire picture) are used?

  • Just for some clarification; you are currently trying to mimic the Hue camera functionality, with all leds displaying the same color derived from the entire screen?

  • Ah yes, I do agree with your approach. It would be a nice feature indeed. Perhaps it is possible to do a dry run: just process some images on a PC with the resulting color as output, to verify and tune the algorithm and its parameters?
    The test could be enhanced by sending the resulting color (manually) to a running Hyperion instance.


    Yes, that's right. I miss functionality like in ambi-tv where all leds do the same ...


    Nice to hear that! For now I create 1 pixel with 5-10% cut off from the bottom and top for my ESP lights and copy the same hscan and vscan to all leds :)


    but it is only a "workaround" and doesn't work "smoothly"

  • I wrote a Qt program to test the stuff (easier than learning Octave ;) ). I found the error (the modulo for the histogram wasn't correct) and now it looks pretty cool. I will fix my Hyperion hack and I hope that it looks good with video too.


    here is my algorithm:
    [MEDIA=pastebin]nFwkEbN9[/MEDIA]

  • Sorry I'm late to the party. The one liner you were looking for should be this:

    Code
    hueMap[m_color.hslHue() / (360/partSize)] +=imp;


    I think it would be better to choose a partSize which divides 360 like 12 or 18 ;)
    Maybe you could reduce the pixels to increase the speed? Or just skip every other pixel (x+=2)?
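    The binning in that one-liner could be illustrated like this. hueMap, partSize, and the importance weight are assumed stand-ins for the names in the thread; with partSize dividing 360 evenly (e.g. 12 or 18), every bin spans the same number of degrees and hue 359 still lands in the last bin instead of one past the end:

    ```cpp
    #include <array>

    constexpr int partSize = 12;            // 12 bins of 30 degrees each
    std::array<double, partSize> hueMap{};  // importance-weighted histogram

    // Bin a hue in [0, 360) exactly as in the one-liner above:
    // hueMap[hue / (360 / partSize)] += importance
    void addHue(int hue, double importance)
    {
        hueMap[hue / (360 / partSize)] += importance;
    }
    ```

    One caveat when feeding this from Qt: QColor::hslHue() returns -1 for achromatic (gray) pixels, so those would need to be filtered out before binning.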
