Posts by redPanther

    In the last line of the loop there is a time.sleep. This loop generates the frames, so the fps depends on the sleep time.
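
    For illustration, a minimal sketch of such a loop; it assumes the hyperion effect module with setColor() and abort() that the bundled Python effects use:

    import hyperion
    import time

    sleepTime = 0.05                     # 0.05 s between frames -> roughly 20 fps

    while not hyperion.abort():
        # ... compute the next frame here ...
        hyperion.setColor(255, 0, 0)     # push one frame to the LEDs
        time.sleep(sleepTime)            # the sleep at the end of the loop sets the frame rate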


    Other reasons:
    In the rainbow effect the increment between colors is calculated from the rotation time and the number of LEDs you have, so the rainbow will look more or less smooth (see the small numeric sketch after these points).


    Performance: perhaps python effects are less performant and scheduling interrupts the effect too much.
    I wrote a small profiling class to measure execution times in the code (in my logging branch). When I'm done with my logger I will have a look and use the effects as my "code under test".
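
    To make the rainbow point concrete, a tiny numeric sketch with made-up values (50 LEDs, 3 s rotation time):

    rotationTime = 3.0                      # seconds for one full rotation (effect argument)
    ledCount = 50                           # number of LEDs
    sleepTime = rotationTime / ledCount     # 0.06 s per frame -> about 16 fps
    hueIncrement = 1.0 / ledCount           # hue step per frame (hue in the range 0..1)
    print(sleepTime, hueIncrement)
    # few LEDs  -> large increment, visible color steps
    # many LEDs -> small increment, smooth rotation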

    Hi,
    as I understand the code, there is no "framerate" for effects.


    When you send "setColor" or "setImage" to the hyperion core (it doesn't matter where you send it from: python effect, proto, json, grabber), it updates the LEDs according to your command once, immediately. If you don't send any new command, the LEDs won't be updated.


    The grabbers constantly have new data, which is sent directly to hyperion. Because video signals have a defined constant rate, the LEDs get updated constantly - and of course there are options to limit the fps to fit your needs.



    With effects the situation is different. They don't necessarily produce new frames at a constant rate. Example: the xmas effect switches between a red and a white pattern, with a "sleep" between the changes. If the sleep time is low, our eyes smooth the colors; if the sleep time is high, the framerate drops and we notice the hard change more strongly. The effect should not "sleep", it should produce intermediate frames (like I do in the cinema lights effect).
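
    A rough sketch of the intermediate-frames idea, again assuming the hyperion effect module with setColor() and abort(); the actual cinema lights effect may do it differently:

    import hyperion
    import time

    def fade(start, end, duration, steps=25):
        # generate intermediate frames between two colors instead of one hard switch
        for i in range(steps + 1):
            t = i / float(steps)
            r = int(start[0] + (end[0] - start[0]) * t)
            g = int(start[1] + (end[1] - start[1]) * t)
            b = int(start[2] + (end[2] - start[2]) * t)
            hyperion.setColor(r, g, b)
            time.sleep(duration / float(steps))
            if hyperion.abort():
                return

    red = (255, 0, 0)
    white = (255, 255, 255)
    while not hyperion.abort():
        fade(red, white, 1.0)   # one second from red to white, 25 intermediate frames
        fade(white, red, 1.0)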


    When smoothing is enabled, the setImage/setColor commands are decoupled from the output (we discovered that in another discussion) and smoothing generates intermediate frames at a constant rate.


    As an effect developer it is too much overhead to maintain a constant fps and intermediate frame generation.


    -> The solution is to activate smoothing; it is exactly for that situation.


    I have hardware that does smoothing, so I switch smoothing off -> that's another solution ;)


    Hope that clears things up a bit.


    cheers

    For my taste a python app would be best. From there I can read json files and control the apps. Communication with hyperiond instances can be established through the jsonserver. Because it's python, no compilation is needed and it can be platform independent.
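
    A rough sketch of what such a client could look like; port 19444 and the one-JSON-object-per-line layout follow the classic jsonserver protocol, adjust to your setup:

    import json
    import socket

    def send_command(command, host="127.0.0.1", port=19444):
        sock = socket.create_connection((host, port))
        try:
            sock.sendall(json.dumps(command).encode("utf-8") + b"\n")
            return json.loads(sock.makefile().readline())  # hyperiond answers with one JSON line
        finally:
            sock.close()

    # example: set all LEDs to red on priority channel 100
    print(send_command({"command": "color", "priority": 100, "color": [255, 0, 0]}))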


    If this works, we can delete lots of c++ code in hyperion :)

    That's why we need a hyperion-launcher. That thing should take multiple configs and start multiple hyperion instances and the needed grabbers.
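
    A hypothetical launcher sketch (paths and arguments are placeholders, not an existing tool):

    import subprocess

    # one config file per hyperion instance
    configs = ["/etc/hyperion/hyperion.tv.json", "/etc/hyperion/hyperion.desk.json"]

    processes = [subprocess.Popen(["hyperiond", cfg]) for cfg in configs]
    # grabbers (hyperion-x11, hyperion-v4l2, ...) could be started the same way

    for p in processes:
        p.wait()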


    OK, one thing we can do is kick out all other grabber sections (including v4l2) and add a "type" to the framegrabber section. This would select the framegrabber. That is the way the device section works.
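
    To illustrate the idea, a hypothetical sketch of a single framegrabber section with a "type" field selecting the grabber (the option names are made up, not the real schema):

    config = {
        "framegrabber": {
            "type": "amlogic",        # or "dispmanx", "framebuffer", "osx", "v4l2", ...
            "width": 64,
            "height": 64
        }
    }

    grabbers = {
        "amlogic": "start AmlogicGrabber",
        "dispmanx": "start DispmanxGrabber",
        "framebuffer": "start FramebufferGrabber"
    }
    print(grabbers[config["framegrabber"]["type"]])  # the "type" field selects the grabber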

    hyperion shows this:


    ssh in: ERRROR: The dispmanx framegrabber can not be instantiated, because it has been left out from the build
    ssh in: AMLOGICGRABBER INFO: [AmlogicGrabber::AmlogicGrabber(unsigned int, unsigned int)] constructed(160x160)
    ssh in: BLACKBORDER INFO: threshold set to 0 (0)
    ssh in: BLACKBORDER INFO: mode:default
    ssh in: INFO: AMLOGIC grabber created and started


    hyperion tries to start the dispmanx grabber (because of the framegrabber section), and then the amlogic grabber is clearly visible in the log.

    I recommend deleting the framegrabber section in the config when using the aml grabber, because otherwise you have 2 active screen grabbers on the same host -> this can lead to unwanted effects.


    the "framegrabber" stands for one of this grabbers
    - "dispmanx" (raspi/broadcom videocore)
    - osx
    - framebuffer (linux without x11)


    Which one you get depends on how you compile hyperion.


    some "nice" config facts:


    Instead of "framegrabber" you can also write "osxgrabber" when you use it on osx.
    On linux you can replace the section name "framegrabber" with "framebuffergrabber" when you intend to use the framebuffer grabber.



    BTW, adding the amlgrabber section for amlogic devices is not a workaround.

    It would be great to abandon qt4. It would reduce the code size of hyperion.


    But for now we can change the default to qt5 in cmake, and for precompiled bins we use qt4. That way we can get some experience with qt5.

    From my experience with network transfer of frames (the hyperion forwarder), the network is fast enough for this task. What's important is to scale down the captured frames, like e.g. the hyperion-x11 grabber does.


    For my scenarios, transfer over LAN is much faster than transfer at 115k baud to an arduino over usb.
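
    A back-of-the-envelope comparison with assumed numbers (64x64 RGB frames, 100 Mbit LAN, 8N1 serial framing):

    frame_bytes = 64 * 64 * 3        # one scaled-down 64x64 RGB frame, ~12 KB
    serial_Bps = 115200 / 10.0       # 115200 baud, 8N1 -> ~11.5 KB/s
    lan_Bps = 100e6 / 8.0            # 100 Mbit/s LAN -> 12.5 MB/s

    print("frames/s over serial: %.1f" % (serial_Bps / frame_bytes))  # ~0.9
    print("frames/s over LAN:    %.0f" % (lan_Bps / frame_bytes))     # ~1000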