Obaldius I tried to play a video of Journey:
Filming with a camera is not as good as what my eyes can see:
Family Guy during daylight:
And just for the eyes' pleasure:
OK, I was able to "straighten" the images all right, BUT the performance is really subpar:
1) The per-frame "remap" operation (supposedly the fastest transform method) adds significant lag, ~100-300 ms on a Pi 3B+ for a single 720p frame. I can process frames with all 4 cores loaded at 100% and only reach 5-10 fps tops. And this is with an OpenCV build optimized for the Pi CPU.
2) The "remap" operation necessarily has to interpolate pixels, which decreases the overall image quality, especially towards the edges, which are exactly what we care about most.
3) A fisheye camera with a >180° angle apparently results in "cropping" of the image. By playing around with the "balance" parameter, I'm able to "enlarge" the FoV, but this results in more interpolation towards the edges.
Overall, the results are maybe not too bad, but the performance leaves this solution impractical IMO. Here are a couple of representative images. I'll post next about another potential solution.
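For context, here's roughly the per-frame loop I was benchmarking, as a simplified sketch using OpenCV's fisheye-model functions (the camera matrix, distortion coefficients, and balance value below are placeholders, not my real calibration):

import cv2
import numpy as np

# Placeholder calibration results; real values come from the calibration step.
k = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
d = np.array([[-0.05], [0.01], [0.0], [0.0]])  # fisheye model: 4 coefficients
dims = (1280, 720)

# One-time precomputation of the undistortion maps (cheap to reuse per frame).
new_k = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    k, d, dims, np.eye(3), balance=0.5  # "balance" trades FoV vs. interpolation
)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    k, d, np.eye(3), new_k, dims, cv2.CV_16SC2
)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # This single call is the per-frame cost discussed above (~100-300 ms on a Pi 3B+).
    frame = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)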
Just another update: I'm working on a new calibration procedure that will hopefully avoid the need to remap the video at capture time. It will auto-generate the LED location list and even upload it to Hyperion. So this should give all camera users a good way to auto-set up their LED layout, and avoid the load on the system at capture time.
Another update - my auto-calibration script is nearing completion. Early results are encouraging, but we'll see once I receive the real strip. There are some small imperfections and I'm still polishing it, but this should hopefully provide a "good enough" way to do warped-trapezoid calibration for any camera, as a simple command-line tool with minimal to no input and no runtime overhead.
Alright, I was finally able to generate the LED coordinates using the full distortion+perspective transform for the camera! This approach is much more accurate than what you saw above, where I tried to measure the approximate locations from screen capture of individual LED areas drawn on the screen.
So the new calibration procedure will consist of:
1) a one-time run of the calibrate_camera.py script, which displays a checkerboard pattern on the screen. The user just needs to move the camera around at various angles, pointing at the pattern. The script then computes the camera matrix and distortion coefficients and stores the results in a JSON file for future reuse.
2) a run of the calibrate_screen.py script, which takes an image of a blank screen, then applies forward and reverse distortion and perspective transforms to determine the screen crop area and the distorted coordinates of the LED areas, and finally uploads them to Hyperion.
Script (1) takes several minutes, but only needs to run once for a given camera+lens combination.
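For the curious, script (1) builds on the standard OpenCV checkerboard calibration flow - roughly like this sketch (not the actual script; the pattern size and frame source here are illustrative):

import cv2
import glob
import numpy as np

PATTERN = (9, 6)  # inner-corner count of the checkerboard; illustrative
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("frames/*.png"):  # frames grabbed while moving the camera
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Yields the camera matrix "k" and distortion coefficients "d"
# that end up in the JSON file.
_, k, d, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)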
Script (2) needs to be re-run every time the camera moves, but it only takes a second, so not a big deal.
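And the core trick of script (2), paraphrased (function and argument names here are illustrative, not the script's exact code): map each normalized LED coordinate through the perspective transform into the undistorted camera view, then re-apply the lens distortion so the coordinates match the raw camera frame.

import cv2
import numpy as np

def distort_points(points, p_t, new_k_inv, k, d):
    # Normalized screen coords -> pixel coords in the undistorted camera view.
    px = cv2.perspectiveTransform(np.float32(points), p_t)
    # Pixel coords -> normalized camera rays on the z=1 plane.
    ones = np.ones((*px.shape[:2], 1), np.float32)
    rays = np.concatenate([px, ones], axis=-1) @ np.float32(new_k_inv).T
    # Re-apply the lens distortion: project the rays with zero rotation/translation.
    distorted, _ = cv2.projectPoints(
        rays.reshape(-1, 1, 3), np.zeros(3), np.zeros(3), k, d
    )
    return distorted.reshape(-1, 2)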
Both scripts would need to be run in graphical mode, so might have to enable it on the Pi.
But other than that, this doesn't require any additional processing power from the Pi, since the generated LED list is used by Hyperion directly without any extra video processing.
The early results look quite good!
Going to massage the scripts for general use, and test on a real strip when it arrives next week.
As promised, I put together the scripts so you could easily try it yourself! It's still a WIP, but I'm glad it's shaping up nicely. Really curious what sorts of hiccups others run into (or not!)
dinvlad From the next Hyperion release onwards, the HwLedCount of an LEDDevice will be leading. Therefore, please take care that you do not configure more LEDs in the layout than are configured for the given LED controller.
You can test against the current master branch…
Edit: Since you read the config anyway, the HwLedCount is part of it…
But other than that, this doesn't require any additional processing power from the Pi, since the generated LED list is used by Hyperion directly without any extra video processing.
Very smart!
You could avoid graphical mode by asking the user to switch to the right image (like the webui's color calibration does in Hyperion).
For a lot of people, showing an image via a Chromecast, Kodi, etc. would be easier.
Looking forward to the real results!
So should I update `hardwareLedCount` simultaneously with setting the LED layout?
Please do not!
There were multiple problems with the previous approach, where the number of LEDs from the layout was used even when it exceeded the physical number of LEDs. Therefore, the HwLedCount is now the leading one again. In the coming UI, the user is forced to have the HwLedCount set, and the UI ensures that the layout will not exceed that number (and yes, the API should check that too and return an error going forward).
Until then, please ask the user to have the HwLedCount matching the physical number of LEDs. The layout you create can then be <= HwLedCount.
I ask for your support here so that we do not get inconsistent configs into the system via a "backdoor".
Lord-Grey ah, I see, understood. I'll check for that then!
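For anyone following along, the check can be as simple as this sketch (assuming the config dict follows Hyperion NG's schema, with hardwareLedCount under the device section):

def check_led_count(config: dict, led_layout: list) -> None:
    """Refuse to upload a layout larger than the device's hardwareLedCount."""
    # Key path is an assumption based on Hyperion NG's config layout.
    hw_count = config["device"]["hardwareLedCount"]
    if len(led_layout) > hw_count:
        raise SystemExit(
            f"Layout has {len(led_layout)} LEDs, but hardwareLedCount is {hw_count}"
        )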
David Hequet that's an interesting idea! If this proves useful to folks, we can think about developing it into a plugin for Hyperion, perhaps - it only needs the OpenCV library, which is easily usable from C++ (I can draft something later if there's enough need).
Another slight improvement we could consider is adding support for non-rectangular LED areas. This would provide more accurate color matching than the current "rectangles aligned with the x-y axes only" model. Let me know what you think.
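As a rough illustration of the idea (just a sketch, not something Hyperion supports today): averaging the color inside an arbitrary quadrilateral via a mask.

import cv2
import numpy as np

def mean_color_in_quad(frame: np.ndarray, quad: np.ndarray) -> tuple:
    """Average BGR color inside an arbitrary quadrilateral LED area.
    `quad` is a (4, 2) int32 array of pixel corners, in order."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    cv2.fillPoly(mask, [quad], 255)
    return cv2.mean(frame, mask=mask)[:3]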
I've received the strip (WS2815), but believe it or not, I just can't get it working from the Pi for whatever strange reason (electronics, huh). It even works with a stock SP107E controller, but won't light up when I connect it to GPIO 18 (with or without a level shifter..). Not even with the Adafruit NeoPixel library. So I'll have to ponder that until I can finally test the whole setup.
EDIT: Solved it! As explained here: RE: Anyone having success with WS2815?. Finally, I'll be testing the new setup tomorrow!
Hello Folks!
dinvlad I tried your script; the exact process I followed is:
- install Raspbian with desktop (https://downloads.raspberrypi.…-raspios-buster-armhf.zip)
- install Hyperion NG
- turned off platform capture in Hyperion (maybe this should be in the readme) to be able to use the 4K TV for your script (as far as I know, Hyperion platform capture and 4K output don't work together)
- installed a dependency manually because of this error (I can't remember what it was):
~/HyperVision $ python3 calibrate_camera.py
Traceback (most recent call last):
File "calibrate_camera.py", line 8, in <module>
import cv2
File "/home/pi/.local/lib/python3.7/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ImportError: libcblas.so.3: cannot open shared object file: No such file or directory
- then I was able to run the camera calibration
- then I tried the screen calibration, and I get this error:
~/HyperVision $ python3 calibrate_screen.py
Traceback (most recent call last):
File "calibrate_screen.py", line 379, in <module>
main()
File "calibrate_screen.py", line 357, in main
crop, new_k_inv, p_t = get_screen_transforms(blank, cam, args.cam_alpha)
File "calibrate_screen.py", line 196, in get_screen_transforms
cam_alpha,
SystemError: new style getargs format but argument is not a tuple
I'm not a Python magician, but I tried a few things and failed, so could you help me out?
Python version: 3.7.3
OpenCV version: 4.5.1
The last answer here may be related to this issue, but I'm just guessing: https://stackoverflow.com/ques…tuple-when-using/43656642
stewe93 thanks for trying it out! This looks like an OpenCV library problem. Did you install it with `pip3 install opencv-python` or some other method? TBH mine was cross-compiled on another host, so even `pip3 install` might not be sufficient.
Digging deeper into the error, it looks like it complains about the BLAS library - could you try the following?
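My guess, and the usual fix for a missing `libcblas.so.3` on Raspbian, is installing the ATLAS package: `sudo apt install libatlas-base-dev`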
Btw, I just pushed a fix for the final calibration script - you might have to re-pull.
For the second error, it looks like the problem is in the `cv2.getOptimalNewCameraMatrix()` function, and you're using the pinhole camera model (is that the case?). TBH I haven't tested this model yet - I was half-expecting it to fail. I'll try to take a look with the C920 this weekend.
Btw, could you post your `params.json` file generated by the first script (maybe via a gist)? That would help with troubleshooting.
I'll also update the README with your and others' clarifications.
(In other news, I've been too busy to mount my strip on the TV - will probably get to it this weekend.)
Hi - for those running a Logitech C270 webcam, could you please confirm:
1. Whether the C270 is capable of 60 FPS
2. Whether the content in the video demos is 30 or 60 Hz?
When I tried a C270, I could not get rid of the rolling lines. Colours were good, but any tweaking would be catastrophic for the output.
I'm not sure about 60 fps; I think I reduced the grabbed FPS due to the Pi Zero's limited processing power.
But I'm running PS5 games at 60 fps with no rolling lines.
Video demo used:
https://www.youtube.com/watch?v=CAR4UReDuqs
dinvlad I restarted the whole process to properly document every step of it, in case anyone else is interested. I'm using a Raspberry Pi 4:
{
"model": "pinhole",
"dims": [
1280,
720
],
"k": [
[
766.7134975473682,
0.0,
630.6505932315317
],
[
0.0,
747.5824606212268,
344.49610924731536
],
[
0.0,
0.0,
1.0
]
],
"d": [
[
-0.275645578998617,
0.19176572537794795,
0.0021488936528126163,
-0.0021569609390377853,
-0.1643065680854339
]
]
}
diff --git a/calibrate_screen.py b/calibrate_screen.py
index 2cf64a1..28e9a74 100755
--- a/calibrate_screen.py
+++ b/calibrate_screen.py
@@ -189,10 +189,10 @@ def get_screen_transforms(blank: np.array, cam: CameraParams, cam_alpha: float):
     corners = np.float32(corners)
     if cam.model == CameraModel.PINHOLE:
-        new_k = cv2.getOptimalNewCameraMatrix(
+        new_k, _ = cv2.getOptimalNewCameraMatrix(
             cam.k,
             cam.d,
-            cam.dims,
+            (cam.dims[0], cam.dims[1]),
             cam_alpha,
         )
         undist_points = cv2.undistortPoints
Traceback (most recent call last):
File "calibrate_screen.py", line 381, in <module>
main()
File "calibrate_screen.py", line 373, in main
vert_led_depth,
File "calibrate_screen.py", line 257, in calculate_leds
dp = distort([[[i / leds_top, 0]] for i in range(0, leds_top + 1)])
File "calibrate_screen.py", line 255, in distort
return distort_points(cam, np.float32(points), p_t, new_k_inv)
File "calibrate_screen.py", line 234, in distort_points
perspective_points, np.zeros(3), np.zeros(3), cam.k, cam.d
cv2.error: OpenCV(4.5.1) /tmp/pip-wheel-qd18ncao/opencv-python/opencv/modules/calib3d/src/calibration.cpp:3563: error: (-215:Assertion failed) npoints >= 0 && (depth == CV_32F || depth == CV_64F) in function 'projectPoints'
Python version: 3.7.3
OpenCV version: 4.5.1
Could you please help me out with this? I also tried the fisheye calibration, but that failed for me as well.
@stewe93 interesting, thanks a lot for sharing the details! I'll take a look over the weekend. Sounds like maybe something is up with the camera resolution (and it looks like I have to swap the dimensions for the pinhole model). Could you confirm your cam supports 1280x720 (they all should, but just in case)?
Also, for the 2nd error, did you make sure to turn off all lights (as the readme says)? That would certainly explain it: the script needs to find the 4 corners of the screen by flashing a blank white full-screen background, and if there's external lighting, that process fails.
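(For reference, that corner detection is essentially the standard contour approach - a sketch of the idea, not the script's exact code:)

import cv2
import numpy as np

def find_screen_corners(blank: np.ndarray) -> np.ndarray:
    """Find the 4 corners of a bright full-screen blank in a dark room."""
    gray = cv2.cvtColor(blank, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    screen = max(contours, key=cv2.contourArea)  # biggest bright blob = the screen
    quad = cv2.approxPolyDP(screen, 0.02 * cv2.arcLength(screen, True), True)
    if len(quad) != 4:  # external light creates extra blobs/edges and breaks this
        raise RuntimeError("Could not isolate 4 screen corners; darken the room")
    return quad.reshape(4, 2)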
1. Whether the C270 is capable of 60 FPS
2. Whether the content in the video demos is 30 or 60 Hz?
1. I'm using the C270 at a resolution of 320x240 due to the limited power of the Raspberry Pi Zero. The pixel format in my case is 'YUYV'. Running `v4l2-ctl --list-formats-ext` in the terminal shows that this pixel format and resolution go up to 30 fps. Scrolling through the other lines, I don't see fps above 30, so I don't think the C270 is capable of 60 fps.
2. I tested using the Philips Lightwave video with YouTube's "stats for nerds" overlay, which shows the video running at 24 fps. Then I used another video at 60 fps, and in both cases I don't have problems with rolling lines.