Posts by dinvlad

    ptrh yep, I'm in the same boat - I set up a Picam with a fisheye lens as explained a few posts above. This works "ok", but the color reproduction on this particular camera is not the best. I'm still going to play with manual white-balance adjustment on the camera, though, so if that works I'd recommend going that route, since I haven't yet found fisheye cams as cheap as this one. For fisheye, you need a ~170-180deg horizontal (!) lens. Some manufacturers claim a 165-170deg fisheye, but it's actually closer to 130deg horizontal, which wasn't sufficient for my 77". Even for a 65", I'd still recommend not less than 170deg horizontal. Mine is actually 194deg, which makes the camera a bit harder to calibrate: https://smile.amazon.com/gp/product/B013JWEGJQ. So if you can find a 170-180deg horizontal one, that'd be ideal I think. Maybe also try this one (almost as cheap, and comes in a nice mount): SVPRO 170degree Megapixel Resolution Computer


    For calibration, you can use my scripts here: https://github.com/dinvlad/HyperVision

    Overall, I'd say fisheye is much trickier to set up than what luchow3cu has. But the reward is there :-)

    Thanks @luchow3cu! There might still be some hiccups, as stewe93 discovered, so I might still need to polish a few things.


    Since Monday, I've also noticed some delays when switching from a dark scene to a bright one, for example. I think this has more to do with the camera, since it takes time to adjust its "perception" of the color/brightness (possibly due to auto-exposure, but maybe something else), but it's not too bad in any case.


    One thing I'm still planning to work on (in addition to fixing any more issues you discover) is to improve the calculation of the areas towards the corners of the screen. Right now, they include "out of screen" pixels, and I think that's why the colors there are off. I'll look into how to "cut off" those areas so that only pixels on the screen are counted. The ultimate solution, however, would be to produce non-rectangular areas, if that were supported by Hyperion. But for now, even the cutoff workaround seems like it would work.
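    For the record, here's roughly how I picture the cutoff - a minimal numpy sketch (not the actual HyperVision code; the function names and the convex-quad assumption are mine). It masks out the pixels of an LED's rectangle that fall outside the calibrated screen quadrilateral before averaging:

    ```python
    import numpy as np

    def screen_mask(area, corners):
        """Boolean mask over the rectangle `area` = (x0, y0, x1, y1) that keeps
        only pixels inside the convex screen quadrilateral `corners`
        (4 points, listed clockwise in image coordinates)."""
        x0, y0, x1, y1 = area
        ys, xs = np.mgrid[y0:y1, x0:x1]
        mask = np.ones(xs.shape, dtype=bool)
        for i in range(4):
            ax, ay = corners[i]
            bx, by = corners[(i + 1) % 4]
            # keep pixels on the inner side of each screen edge
            mask &= (bx - ax) * (ys - ay) - (by - ay) * (xs - ax) >= 0
        return mask

    def led_color(frame, area, corners):
        """Average color of `area`, ignoring out-of-screen pixels."""
        x0, y0, x1, y1 = area
        region = frame[y0:y1, x0:x1]
        mask = screen_mask(area, corners)
        if not mask.any():  # area lies fully off-screen; fall back to plain mean
            return region.mean(axis=(0, 1))
        return region[mask].mean(axis=0)
    ```

    A convex quad is enough here since the four corners come from the calibration step; a pixel is kept only if it lies on the inner side of all four edges.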

    OK, I finally have my fisheye test results! All I can say is wow - it totally worked. Most notable things:


    1) it produces this nice "Hue" effect (without any smoothing, but purely thanks to my Hue-like LED strip and it being ~6" away from the wall)


    2) very small, almost imperceptible delay, thanks to Pi camera in 57fps capture mode and LED output on the same Pi. This is exactly what I was going for, to reduce any hardware-related latency.


    3) color reproduction towards some edges is not the most accurate. This is likely due to positioning of the camera directly in front of the screen, so the geometric distortions become more noticeable. Likely I can tweak it further with some proper positioning, but even this might just be "good enough".


    4) unfortunately, the Pi camera suffers from a blue hue (basically anything white is seen with a heavy blue tint). I've played with gamma and v4l2 settings, but nothing worked "great" so far. Some options seem promising, but they reduce or over-exaggerate other colors. Perhaps I will have to find a USB fisheye that works better :-/
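    In case software correction ends up being the answer, one generic fallback (not something I've tried yet - just a standard "gray-world" white-balance sketch, with names of my own) is to rescale the channels so their means match, which can tame a constant blue cast:

    ```python
    import numpy as np

    def gray_world_balance(frame):
        """Rescale each color channel so the channel means become equal -
        a crude software white balance for a constant color cast."""
        frame = frame.astype(np.float64)
        means = frame.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / means          # boost dim channels, damp strong ones
        return np.clip(frame * gains, 0, 255).astype(np.uint8)
    ```

    The obvious trade-off is that it assumes the scene averages to gray, so it can over-correct strongly tinted content - which may be exactly the "reduce or over-exaggerate other colors" effect.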


    All in all though, this is looking very nice even without any tweaks, other than a couple of geometric ones for the screen calibration script. Here are a couple of first tests :)

    [Embedded video: www.youtube.com]

    [Embedded video: www.youtube.com]

    stewe93 hmm, it looks like the entire screen is not in the field of view? The script won't work if it isn't.


    Also, there's a delay when flashing the blank white screen - maybe it's not long enough, so the script captures an image of your desktop before the white screen is shown? In that case, try increasing the delay like this:

    Code
    python3 calibrate_screen.py -fs 5000
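    To illustrate why the delay matters - a hypothetical sketch (the names are mine, not from the script) of the capture-and-compare step: if the frame is grabbed before the white flash actually reaches the screen, the difference against the baseline frame stays near zero and detection fails:

    ```python
    import numpy as np

    def screen_changed(baseline, flashed, min_fraction=0.05, thresh=40):
        """Return True if enough pixels brightened between the baseline frame
        and the frame captured after the white screen was flashed."""
        diff = flashed.astype(np.int16) - baseline.astype(np.int16)
        changed = (diff > thresh).mean()      # fraction of samples that brightened
        return changed >= min_fraction
    ```

    With too short a delay, `flashed` is effectively a second copy of `baseline`, so `screen_changed()` returns False - increasing the delay gives the display time to actually show the white frame.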

    Wow, the script does a very good job calculating the coordinates.

    Thanks! Could you test it as well if you can?


    I'm planning to 3D-print an AmbiVision-like camera holder after everything else is figured out ^^ Right now I just need to mount the strip, but also to somehow compensate for the blue tint introduced by the Pi camera. I've briefly played with gamma but have not yet found the right settings (coming back to the original topic lol..). Going to play with v4l2-ctl settings also.

    @stewe93 interesting, thanks a lot for sharing the details! I'll take a look over the weekend. Sounds like maybe something is up with the camera resolution (and it looks like I have to swap the dimensions for Pinhole). Could you confirm your cam supports 1280x720 (they all should, but just in case)?
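    On the dimension swap - the classic gotcha I need to rule out (a generic illustration, not code from the repo): OpenCV's calibration APIs take the image size as (width, height), while numpy frames are shaped (height, width, channels), so a 1280x720 stream is easy to pass in backwards:

    ```python
    import numpy as np

    def image_size_for_cv(frame):
        """OpenCV calibration functions expect (width, height); numpy frames
        are (height, width, channels) - swap carefully."""
        h, w = frame.shape[:2]
        return (w, h)
    ```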


    Also, for the 2nd error, did you make sure to turn off all lights (as the readme says)? :) That would certainly explain it, as the script needs to find the 4 corners of the screen by flashing a blank white full-screen background; if there's external lighting, this process fails.
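    For context, the corner detection is conceptually something like this simplified numpy sketch (the real script uses OpenCV; the names here are mine): it thresholds the bright flashed region and takes the extreme pixels as corners - so any lamp or window bright enough to pass the threshold adds stray pixels and corrupts the result:

    ```python
    import numpy as np

    def find_screen_corners(frame, thresh=128):
        """Locate the corners of a flashed white screen in a dark room by
        thresholding brightness and taking the extreme bright pixels."""
        bright = frame.mean(axis=-1) > thresh
        if not bright.any():
            raise ValueError("no bright region found - is the flash visible?")
        ys, xs = np.nonzero(bright)
        s, d = xs + ys, xs - ys
        # extremes of x+y and x-y give the corners (robust to mild rotation)
        tl = (xs[s.argmin()], ys[s.argmin()])
        br = (xs[s.argmax()], ys[s.argmax()])
        tr = (xs[d.argmax()], ys[d.argmax()])
        bl = (xs[d.argmin()], ys[d.argmin()])
        return tl, tr, br, bl
    ```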

    stewe93 thanks for trying it out! This looks like an OpenCV library problem. Did you install it using `pip3 install opencv-python` or some other method? Tbh, mine was cross-compiled on another host, so even `pip3 install` might not be sufficient.


    Digging deeper into the error, it looks like it complains about the BLAS library - could you try the following?

    Code
    sudo apt-get install libblas3

    Btw, I just pushed a fix for the final calibration script - you might have to re-pull.


    For the second error, it looks like the problem is in the `cv2.getOptimalNewCameraMatrix()` function, and you're using the Pinhole camera model (is that the case?). Tbh, I haven't tested that model yet - I was half-expecting it to fail ;) I'll try to take a look with the C920 this weekend.


    Btw, could you post your `params.json` file generated by the first script (maybe via a gist)? That would help with troubleshooting.


    I'll also update README with your and other clarifications..

    (In other news, I've been too busy to mount my strip to the TV - will probably get to it this weekend.)

    Hi folks,


    So I've got a WS2815 strip with 300 LEDs, and I can't get it to work with GPIO 18 on an RPi 3B+. I've tried it both with and without a level-shifter. Not only does Hyperion fail to light up the strip, but so does other software, like the Adafruit NeoPixel library. The best I could get via NeoPixel is either all-white or random colors. I tried various settings (400 and 800 kHz frequency, DMA channels 5 and 10). It feels like the problem is on the software side, but I don't know what else to try, as others have reported this chip working just fine with the rpi_ws281x library.


    At the same time, the strip works perfectly fine with the stock SP107E controller. So at least I know it's alive. I've also measured the output levels on the Pi, and level-shifting seems to work correctly.


    Thanks a lot for any help!

    I've received the strip (WS2815), but believe it or not, I just can't get it working from the Pi for whatever strange reason (electronics, huh). It even works with a stock SP107E controller, but it won't light up when I connect it to GPIO 18 (with or without a level-shifter..). Not even with the Adafruit NeoPixel library. So I'll have to ponder that until I can finally test the whole setup :/


    EDIT: Solved it! As explained here RE: Anyone having success with WS2815?. Finally, I'll be testing the new setup tomorrow!

    Lord-Grey ah, I see, understood. I'll check for that then!


    David Hequet that's an interesting idea! If this proves useful to folks, we could perhaps think about developing it into a plugin for Hyperion - it only needs the OpenCV library, which is easily usable from C++ (I can draft one later if there's enough interest).


    Another slight improvement we could consider is adding support for non-rectangular LED areas. This would provide better color-matching accuracy compared to the current "rectangles aligned with the x-y axes only" model. Let me know what you think :)
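    As a rough sketch of what non-rectangular areas could look like (purely illustrative - the names are mine, and Hyperion doesn't support this today): each LED would get an arbitrary polygon, with its pixels selected by ray casting instead of a rectangular slice:

    ```python
    import numpy as np

    def polygon_mask(shape, poly):
        """Boolean mask of pixels inside an arbitrary polygon (ray casting),
        so an LED's sampling area need not be an axis-aligned rectangle."""
        h, w = shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        inside = np.zeros((h, w), dtype=bool)
        n = len(poly)
        for i in range(n):
            (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
            # toggle pixels whose horizontal ray crosses this edge
            # (tiny epsilon avoids division by zero on horizontal edges)
            crosses = ((y0 > ys) != (y1 > ys)) & \
                      (xs < (x1 - x0) * (ys - y0) / (y1 - y0 + 1e-12) + x0)
            inside ^= crosses
        return inside

    def polygon_color(frame, poly):
        """Average color over the polygon's pixels."""
        mask = polygon_mask(frame.shape, poly)
        return frame[mask].mean(axis=0)
    ```

    The mask per LED could be precomputed once at calibration time, so the per-frame cost would stay the same as with rectangles.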