Vision System Latency

Say your baseball-hitting robot is using cameras to track and hit a thrown ball. In this and other systems where computer vision acts as a sensor for a time-critical process such as a tight control loop, latency matters a great deal. My project, Micron, runs a high-frequency control loop to achieve micromanipulation in surgical environments. Micron depends on cameras to know where important anatomy is, track the tool tip, and register the cameras to other high-bandwidth position sensors. This week I analyzed and tried to optimize camera performance.

My system has dual Point Grey Flea2 cameras that were capturing 800×600 frames in YUV422 at 30 Hz, which were then converted to RGB on the PC. To increase bandwidth, I first swapped to a Format7 Raw8 mode. The custom Format7 mode has a number of advantages:

  • Bayer. You get the raw values from the CCD in Bayer format. This is a more compact (and native) representation than RGB or a YUV encoding, so more data can pass over Firewire to the PC. Fast de-Bayering algorithms also exist in popular software like OpenCV’s cvtColor, or on the GPU (see the sketch after this list).
  • Region of interest. You can specify a custom region of interest so you don’t have to transfer the entire frame. This lets you choose the ideal point on the “frame size vs. frame rate” curve: if you only need part of the frame, you can increase how fast images are transferred to the PC.
  • Event notifications. Technically available in other modes as well, event notifications let you process partial frames as they arrive. Each image consists of individual packets that are transferred sequentially over Firewire, and the driver will notify you as each packet arrives. In theory, if your image processing algorithm operates on the image sequentially, you can transfer and process in tandem. For instance, at the full 1032×776 resolution, it takes a Flea2 roughly 33 ms from the first image packet to the last. Traditionally you can’t start processing until the last packet has arrived and the full image is available. However, if you are doing, say, color blob tracking, where each pixel is tested against a predefined color such as red, you can start as soon as parts of the image arrive: after the first 10% of the image arrives, you can process that part while the second 10% is being transferred from the camera. This can, in theory, significantly reduce overall latency because image transfer and image processing are no longer mutually exclusive.
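
Here is a minimal sketch of the de-Bayering step, assuming an OpenCV-based pipeline; the Bayer pattern (BG below) is an assumption that has to match the actual sensor, and rawData is whatever buffer your camera driver hands you.

// Sketch: convert one Raw8 Bayer frame into a BGR image with OpenCV.
// The Bayer pattern (BG here) is an assumption -- match it to the CCD.
// rawData, width, and height come from the camera driver.
#include <opencv2/opencv.hpp>

cv::Mat debayerFrame(unsigned char* rawData, int width, int height)
{
    // Wrap the raw 8-bit Bayer buffer without copying it.
    cv::Mat bayer(height, width, CV_8UC1, rawData);

    // Demosaic to a 3-channel BGR image.
    cv::Mat bgr;
    cv::cvtColor(bayer, bgr, cv::COLOR_BayerBG2BGR);
    return bgr;
}

Because the Bayer frame is one byte per pixel instead of two (YUV422) or three (RGB), the same Firewire bandwidth carries more frames, and the demosaic cost moves to the PC where it is cheap.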

I tried using all these tricks to reduce latency, although I wasn’t able to get event notifications working properly. At first it seemed like a great idea: I was able to get up to 128 notifications per image, which is amazing! If my algorithm runs in real time, that means I could reduce my processing overhead to the time taken to process less than 1% of the image. However, event notifications seem to have a number of limitations and peculiarities, which may be due to my poor programming or setup.
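
To make the strip-wise idea concrete, here is a rough sketch of what the per-packet processing could look like; processStrip() and the row bookkeeping are hypothetical, since the real event notifications hand you raw packets and mapping them to complete rows is left to you.

// Sketch of processing-as-packets-arrive: threshold each strip of rows as
// soon as it is available instead of waiting for the full frame.
// processStrip() is a hypothetical helper, not part of any camera API.
#include <opencv2/opencv.hpp>
#include <vector>

// Accumulate detected "green" pixel coordinates strip by strip.
void processStrip(const cv::Mat& frame, int rowStart, int rowEnd,
                  std::vector<cv::Point>& greenPixels)
{
    for (int y = rowStart; y < rowEnd; ++y) {
        const cv::Vec3b* row = frame.ptr<cv::Vec3b>(y);
        for (int x = 0; x < frame.cols; ++x) {
            // Crude color test: strongly green, weakly red/blue (BGR order).
            if (row[x][1] > 200 && row[x][0] < 100 && row[x][2] < 100)
                greenPixels.push_back(cv::Point(x, y));
        }
    }
}

Each time a notification fires, you would call something like processStrip() on the rows that have fully arrived and keep accumulating candidate pixels, so by the time the last packet lands most of the work is already done.
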
To test latency, I took the color detection and blob tracker code I use to find colored paint marks on my tool tip and applied it to a blinking LED. At a random time, I turn a green LED on and wait until my color blob detector sees it – the time between these two events is, by definition, the latency between when something happens and when the cameras see it happening. The results are shown below. Total is the time between turning the LED on and detecting it. Process time is the image processing time. Wait time is the average time a frame spends waiting for the previous image to finish processing. And shutter time is the average time between the LED turning on and the shutter for the next image capturing the LED.
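
In code, the test boils down to something like the sketch below; toggleLed(), grabFrame(), and detectGreenBlob() are stand-ins for the real LED I/O, camera grab, and blob tracker, not actual API calls. What it returns corresponds to the Total number above.

// Sketch of the LED latency test: record when the LED is commanded on, then
// time how long until the blob detector sees it in a frame.
#include <chrono>
#include <opencv2/opencv.hpp>

// Hypothetical helpers standing in for the real hardware I/O and blob tracker.
void toggleLed(bool on);
cv::Mat grabFrame();                       // blocks until the next camera frame
bool detectGreenBlob(const cv::Mat& frame);

double measureLatencyMs()
{
    using Clock = std::chrono::steady_clock;

    toggleLed(true);                       // command the green LED on
    const Clock::time_point ledOn = Clock::now();

    for (;;) {
        cv::Mat frame = grabFrame();
        if (detectGreenBlob(frame)) {      // the color blob tracker finally sees the LED
            return std::chrono::duration<double, std::milli>(Clock::now() - ledOn).count();
        }
    }
}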

Yay! Official Swapover

So thanks to some urging by a fellow PhD student, I’ve uploaded a new version of my Facebook Downloader and swapped the site over to the new WordPress version. The old site is still there so old links work, which means eventually I’ll have to do individual redirects so that people following old links get the newest content. Oh well, something for another day.

Oh Silly Linux

Why are you so crazy, Linux? A friend asked me to look at why his installation of Matlab plus the code for the ImageNet 2010 competition on his VMware Linux box wasn’t working. I logged on and typed matlab, but got “matlab: command not found”, which was strange because he said he had installed it. A “locate matlab” told me Matlab was in /usr/bin/MATHWORKS_R2008B/bin, so I executed:

PATH=/usr/bin/MATHWORKS_R2008B/bin:$PATH
export PATH
matlab

That worked in that I got a nice Matlab splash screen, but then it just crashed, printing out a cryptic “Opening log file: /home/usr/java.log.11195” message. That log file had the error “Could not reserve enough space for object heap”, which sounded suspiciously like not enough memory. Sure enough, only 384 MB of RAM were allocated to the VMware virtual machine. Change that and reboot.

And of course, by the time the machine came back up, my local export PATH change was gone, so I added the following line to /etc/profile:

PATH=/usr/bin/MATHWORKS_R2008B/bin:/usr/bin/MATHWORKS_R2008B/bin/utils/mex;
^^^ DON'T DO THIS!!!

Reboot, and I can’t even log in. Ctrl + Alt + F1 to get to a command console and massive amounts of errors later, I found out that, bleck, I had overwritten all the other PATH entries too. Now I have to /usr/bin/sudo /usr/bin/nano /etc/profile and change the line to include the old path as well.

PATH=$PATH:/usr/bin/MATHWORKS_R2008B/bin:/usr/bin/MATHWORKS_R2008B/bin/utils/mex
export PATH

Whew, now matlab starts up and we can execute mex. Except now matlab keeps whining that it “cannot write to preference file "matlab.prf" in "/home/user/.matlab/R2008b/"”. A quick Google search says we need to execute:

sudo chown user /home/user/.matlab/R2008b/matlab.prf
sudo chmod a+w /home/user/.matlab -R

The first command might be redundant given the second, but there are several files Matlab needs to write to, and the chown alone only fixes the first complaint.

Running the makefile resulted in “/usr/bin/ld: cannot find -lstdc++”, which is interesting. Not entirely sure what was going on, I decided to do a sanity check, wrote a nice hello_world.cpp program, and tried compiling it with g++, only to discover g++ wasn’t installed. Install g++, do another sudo ldconfig for good measure, and bam, it all compiles nicely. However, running make from the feature directory whined about u_int32_t in item.hpp, so one quick “typedef unsigned int u_int32_t;” in item.hpp later, g++ is complaining about not being able to link to -lvl. It turns out their Makefile needed to be pointed to ./3rd-party/vlfeat/bin/glx, and then things are happy.

Whew! Finally, we can run their example script: extract_bow.sh. Alas, still no luck; all sorts of complaining going on. The first error is “./vldsift: error while loading shared libraries: libvl.so: cannot open shared object file: No such file or directory”. Bleck! Adding the directory that libvl.so is in to /etc/ld.so.conf seemed to work, and the demo program ran.

Sigh…Linux, why are you so crazy? Or maybe the better question is: you guys who programmed Linux, why are you so crazy?