Yet Another Swap

Every so often I decide, for no good reason, that I should redesign my website and swap content management systems. I’ve been through a lot of them, and now I’m trying WordPress, the blog-a-jiggerie thing. Right now I’m testing out the sticky feature to see if my introduction blog post with a purty picture of me will stay at the top of the page or be pushed down by this more recent post (i.e. testing the quality of WordPress programmers). Let’s see. [edit: It works, yay for WordPress so far.]

Actually, I am quite impressed with WordPress. Nearly everything I tried to do was super easy from the admin side. Upgrading, changing permalinks to clean URLs, and installing templates could all be done inside the admin GUI without having to go dig through code. This is a welcome change from other wikis/CMSes I’ve used before. I did have to modify some CSS and the template code a bit (to stop nested pages from showing up as buttons on the top left side of the header bar), but it was all quite painless. Importing text from Word and my old webpages was surprisingly easy. So really, within a day I have most of the relevant stuff from my old webpage and am almost ready to go live with the new site with WordPress as the backend.

Why I love my PPC-6700

Back in summer 2006, I went from no phone to the (then) state-of-the-art Sprint PPC-6700 running Windows Mobile 5.0. It was awesome back in the day with its Pocket Outlook, IE, Word, and any app you could load on it. With MobiPocket and Microsoft Reader, I could load up on several hundred books for reading anywhere. I cross-compiled a BASIC interpreter and could even run some scripts on it (alas, I was not able to get gcc to run on it). However, I discovered the best thing: my $15/mo data plan allowed tethering! I could hook it up to my laptop via USB and get internet – as a bonus it was unlimited and worked nearly everywhere. Of course, time does what time does best: slowly reduce every electronic gadget to a uselessly outdated mere curiosity. The iPhone, Droid, and EVO phones all sport slimmer, sexier, much more capable features than my sad PPC-6700. But as a poor college boy, I have been resisting upgrading because when I moved to Pittsburgh to attend CMU, I didn’t shell out for cable + internet. Instead, I used my tethered phone as my primary internet in my apartment. I can’t watch SD or HD movies on Hulu or Netflix, but YouTube works pretty well and it lets me read papers, check email, listen to Pandora, and generally surf the Internet.

But the allure of the sexy new phones, especially the EVO with its touted tethering capabilities and 4G (which Pittsburgh is getting this year), is so scintillating! So I did some analysis. Below is a graph of my PPC-6700 data and voice usage over the past year. Data usage is for both tethering and smartphone usage; voice is only daytime minutes.

Smartphone Usage

The amazing thing to note is that I’ve been averaging 4 GB a month of tethered internet with a peak of 6.5 GB. That pretty much rules out the iPhone for me. For the same $15 a month with AT&T, I can get a paltry 200 MB. Their highest data plan for $25 a month caps the data usage at 2 GB, which would only cover two months of my past year – which incidentally coincides with my international traveling where I wasn’t using my phone for weeks at a time. So how about the EVO? Sprint is still unlimited data, but I have to pay $70 (well, technically $69.99, but let’s round stuff off here) for 450 minutes + unlimited data. To tether, it’s another $30. So $100 a month total. That’s a lot of money, more than double what I’m currently paying ($30 for 200 minutes call time + $15 for unlimited tethering). And what do I get? 250 extra minutes I don’t use anyhow, faster internet speeds, and a much nicer phone with poorer battery life. The question I ask myself is: can I justify a $100 upgrade fee plus an extra $55 a month – over $750 in the first year – for a new phone like the EVO? So far… the answer has been no.

Aiding surgeons using real-time computer vision

Operating rooms are scary places at the best of times, and when going under the knife, you’d like to know that a surgeon’s itchy nose or her morning’s double espresso isn’t going to cause any accidents. This is where medical robotics can ride in to the rescue. As a PhD student developing intelligent surgical instruments, I design algorithms that aid surgeons in procedures. One of the fundamental problems of helping someone is knowing what they are trying to do. Handing a hammer to a construction worker when he is pouring concrete is not terribly helpful.

A popular solution is to use computer vision: attach some cameras to the microscope, see what the surgeon is doing, figure out what her goals are, and then provide assistance. For instance, microinjections of veins smaller than a human hair are very useful procedures that are currently too difficult to perform reliably in the operating room. Our tool reduces surgeon tremor and uses image analysis at high magnification to guide the needle into the vein, increasing the success rate of the procedure. Unfortunately, many of the useful algorithms are difficult to run in real-time. Furthermore, different algorithms are often required in parallel: several methods might track the tip of the instrument, another builds 3D representations of the tissue surface, and still others run analysis to detect and diagnose blood leakages or diseased areas.

Currently, I run a number of algorithms and am forced to make compromises even with powerful quad-core machines. With modest stereo 800×600 resolution cameras running at 30 Hz, 5 gigabytes of images need to be processed every minute. This increases to upwards of 40 GB/min with high-speed or high-definition cameras. Trying to analyze the sheer number of pixels coming in is much like trying to drink from a fire hose. Simply encoding and saving the video in real-time for post-op review becomes challenging. Consequently, I run tracking algorithms at lower resolutions, sacrificing precision for speed. Diagnoses are performed pre-operatively or manually on demand during pauses in the procedure. 3D representations of the tissue are built initially and updated only infrequently. All of this limits the level of assistance the system can provide to the surgeon.
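For the curious, the 5 GB/min figure falls straight out of the camera parameters (a back-of-the-envelope sketch, not code from the actual system):

const double bytes_per_second = 800.0 * 600 * 3 /* bytes per RGB pixel */ * 2 /* cameras */ * 30 /* Hz */;  // ~86 MB/s
const double gb_per_minute    = bytes_per_second * 60.0 / 1e9;                                              // ~5.2 GB/min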

In fact, significant amounts of my time go towards optimizing C++ or even assembly-level code to maximize performance. Reducing L1 cache misses, playing nicely with branch prediction, and rewriting code to take advantage of SIMD instructions let me run more algorithms and provide better aid to the surgeon. Even with such optimizations, I am still hitting the limits of what four cores can do. However, a most encouraging aspect of computer vision is its often embarrassingly parallelizable nature. With a 48-core machine, I could do a significantly better job. I would move to higher-definition video for much greater precision, parallelize my tracking algorithms for enhanced speed, run advanced stereo algorithms for high-quality 3D reconstructions, and thus more effectively provide the surgeon with aids that make the operating room a less scary place.
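To give a flavor of what those SIMD rewrites look like, here is a minimal SSE2 routine – a hypothetical example, not code from the actual tracker – that sums absolute differences between two 8-bit image rows, a building block of many template-matching trackers:

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>

// Sum of absolute differences between two 8-bit grayscale rows.
// Assumes width is a multiple of 16; a scalar tail loop would handle leftovers.
uint32_t row_sad_sse2(const uint8_t* a, const uint8_t* b, int width)
{
    uint32_t sum = 0;
    for (int x = 0; x < width; x += 16) {
        __m128i va  = _mm_loadu_si128((const __m128i*)(a + x));
        __m128i vb  = _mm_loadu_si128((const __m128i*)(b + x));
        __m128i sad = _mm_sad_epu8(va, vb);                        // two 64-bit partial sums
        sum += _mm_cvtsi128_si32(sad) + _mm_extract_epi16(sad, 4); // low half + high half
    }
    return sum;
}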

Google Fail – Don’t Re-invent the Wheel

So this is twice now that I have been upstaged by my lack of Google skills. I have some small background in compilers, languages close to the processor, and optimization. So when I see something obviously lacking in even rudimentary speed-ups, I think about taking a look and doing a better job myself. Usually I restrain myself; otherwise I’d spend all my time re-writing Matlab to support NOT ridiculously slow for loops (please please please MathWorks, can’t you hire a decent compilers person or two and at least get your for loops to run faster than QBasic?). But sometimes, when running batch processes takes days, I start getting involved.

My first attempt was to make some Chi-Squared distance metric Matlab scripts run faster – a friend needed to run this over gigabytes of data, which was going to take months using his simple Matlab script. Months is bad, so I sat down with him and we optimized the Matlab scripts, then moved to C with a mex interface, then delved into assembly language, and then into SSE2 optimizations. In the end we had something that was an order of magnitude faster, so it would only take several days to run. Of course, a few days later I found a guy at Google who had done the exact same thing. And oh look, now that I’m trying to find a link to his website, I’ve found somebody else with similar code and Matlab wrappers. Sigh… that was a wasted night and a half. Coulda gone to sleep at a decent hour instead of going home at 7 am after working all night.
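For reference, the metric itself is simple – the cost was entirely in the volume of data. Here is a scalar sketch of the Chi-Squared distance we were vectorizing (illustrative names, not the actual mex code):

// Chi-squared distance between two histograms: sum over bins of (x - y)^2 / (x + y).
// The SSE2 work amounted to vectorizing this inner loop over many histogram pairs.
double chi_squared(const float* x, const float* y, int bins)
{
    double d = 0.0;
    for (int i = 0; i < bins; ++i) {
        double diff = x[i] - y[i];
        double total = x[i] + y[i];
        if (total > 0.0)
            d += diff * diff / total;
    }
    return d;
}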

And the story doesn’t end there! When running some of my algorithms, I was seeing rather slow JPEG decoding speeds, so I went out searching for faster implementations. I found the libjpeg-SIMD version, but it was only 32-bit, and since it was coded in assembly language, porting it to 64-bit would be non-trivial (I only use 64-bit for my research work so I can access more than 3 GB of RAM). I figured somebody must have done it somewhere on the Internet, but nope, I couldn’t find anything. I even spent an extra day just researching this to prevent the same thing that happened with the Chi-Squared code from biting me once again. The closest I could find was some discussion on tightvnc-devel about attempting to port it to 64-bit, but it didn’t seem like anything came out of it (as Internet discussions have a tendency to do, the threads degenerated into open-source vs. Microsoft and cygwin vs. MinGW). So I spent a good 4-5 nights porting the key bits of libjpeg SIMD to 64-bit one frightful file at a time, as detailed in a previous blog post. And of course it turns out I again duplicated somebody else’s work. One of the developers of TightVNC posted a comment to my blog entry announcing the completion of the port, informing me that they had indeed successfully ported libjpeg SIMD to 64-bit a while back and it was in their SVN. Sigh… on one hand that’s really discouraging because that is a good 25 or so hours I could have spent doing better things (like sleeping!), but on the other hand I did get to brush up on assembly language and especially some of the new 64-bit extensions.

So the moral of the story is: search long and hard so you don’t have to re-invent the wheel.

libjpeg SIMD x64 port

So with all my JPEG decoder analysis, the library that performed the best was the libjpeg SIMD extension. It is several times faster than the normal libjpeg because it uses the modern SSE instructions for the core encoding/decoding routines. Not only does it use the SSE instruction set, but it does so in hand-written assembly language rather than compiler intrinsics. This means it is even faster, because compiler intrinsics are notorious for producing inefficient or even wrong code. Unfortunately, it is only 32-bit (x86) – and compiling a 64-bit version of the library would mean porting over all the assembly language code.

At first glance, porting the assembly code from 32-bit to 64-bit seems intimidating, but after a while you realize that there are many versions of the same thing. Each routine is coded in MMX, 3DNow!, regular SSE, and SSE2 variants. And all these options can be applied to the slow integer, fast integer, and floating-point algorithms, so you get dozens of assembly language files that you really just don’t need. The fastest combination is SSE2 with fast integer math. We can remove everything else because almost all processors from the past 5 years or so support SSE2 (Intel Pentium 4, Pentium M, and AMD Athlon 64 and up). The fast integer math algorithms might cause a small reduction in image quality, but it’s not very noticeable and you get a solid 3% improvement in speed. Disabling everything but the SSE2 fast integer algorithms leaves only 10 assembly language files to modify, which isn’t too bad.

Now don’t get me wrong, as you can see from my previous post, converting from 32-bit to 64-bit assembly is a giant pain. It took me several days, putting in hours each day to carefully convert and debug each file to make sure it was at least working with my data. I finally got it all seemingly working in 64-bit (it doesn’t crash when loading images off Facebook or from my camera), which is quite a good feeling because to my knowledge nobody else has done that.

I even mexed my port in a 64-bit Matlab mex function named readjpegfast. Unfortunately, Matlab stores its images all weird so a lot of time is wasted just re-arranging the pixel data into a column-first, RGB plane-separated format. For small images roughly 640×480, I get an impressive ~2X improvement on loading 2000 images over imread (Intel Core2 Duo 2.4 GHz T8300 laptop):

>> tic, for i = 0:2000, img = imread(sprintf('%0.4d.jpg', i)); end, toc
Elapsed time is 21.120399 seconds.
>> tic, for i = 0:2000, img = readjpegfast(sprintf('%0.4d.jpg', i)); end, toc
Elapsed time is 9.714863 seconds.

Larger images unfortunately don’t fare so well, just because I have to do this format conversion (it would be better if I modified the library to load images into memory using the Matlab format, but that’s way too much work). The 7-megapixel pictures from my camera only saw about a 1.25X improvement:

>> tic, for i = 0:35, img = imread(sprintf('big\\%0.4d.jpg', i)); end, toc
Elapsed time is 13.393228 seconds.
>> tic, for i = 0:35, img = readjpegfast(sprintf('big\\%0.4d.jpg', i)); end, toc
Elapsed time is 10.068488 seconds.

Oh well, since I plan to be mostly using this in C++ where the default data-format of libjpeg is perfect for my uses, this is still a huge win. Soon I hope to be releasing a package that includes my 64-bit port of libjpeg SIMD.
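For the curious, the reshuffling that eats the time looks roughly like the sketch below – just the index math, assuming 8-bit RGB; the real mex wrapper also deals with mxArray allocation and error checking:

// libjpeg hands back row-major, interleaved RGB; Matlab wants column-major data
// with the R, G, and B planes stored separately.
void interleaved_to_matlab(const unsigned char* src, unsigned char* dst, int width, int height)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            for (int c = 0; c < 3; ++c)
                dst[c * width * height + x * height + y] = src[(y * width + x) * 3 + c];
}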

Porting 32-bit NASM code to 64-bit

It’s terrible…yeah you heard me. Assembly language is hard even at the best of times and it is all so hardware dependent that porting is a super pain in the neck. Even on the same processor, the change from 32-bit code to 64-bit code is annoying.

First order of business: change BITS 32 to BITS 64. It seems obvious, but when you have a bunch of *.asm files and you are doing them one by one, forgetting one can cause “error: impossible combination of address sizes”, which will proceed to befuddle you for the next 10 minutes. Or not, as it seems I’m the only one on Google who has actually gotten this error.

I also found my first use for Wolfram Alpha: taking the modulus of 16-byte hex numbers to determine if they are 16-byte aligned. Yeah, I couldn’t even find a quick way to do that in Matlab, which is surprising. But then I realized I was being really braindead, because it is really simple to see if a pointer address is 16-byte aligned: the last hex digit should be zero! Oops… I’m being silly again.
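In code form (a trivial C++ check, included here just for illustration):

#include <cstdint>

// A pointer is 16-byte aligned exactly when its low 4 bits -- the last hex digit -- are zero.
bool is_16_byte_aligned(const void* p)
{
    return (reinterpret_cast<std::uintptr_t>(p) & 0xF) == 0;
}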

The error “error LNK2017: ‘ADDR32’ relocation to ‘.rdata’ invalid without /LARGEADDRESSAWARE:NO” means you forgot to add “default rel” to the top of your assembly language file.

Apparently you also have to preserve some of the new registers across function calls. None of the NASM/YASM manuals or anything else I read mentioned this! Code running after one of my functions was crashing, and by turning off bits of the code, I was able to narrow it down to the mov r12,ecx that stores the first parameter. Of course then the thought struck me: maybe I need to preserve r12, so I finally had to Google “preserve r12 assembly.” I found some 64-bit sample assembly code on Mark William’s blog which had comments about preserving r12-r15, and that seemed to fix the problem.

Analysis of JPEG Decoding Speeds

As a computer vision/controls grad student at CMU, I often work with very large image/video datasets. My latest project involves a terabyte or so of image data – all compressed in JPEG format to save space. I usually analyze the data in batches and am interested in maximum speed for minimum CPU usage. Recent 1 TB harddrives have linear read speeds of 50-100 MB/s, so with a quad-core or higher processor, the bottleneck is often not the disk or the image processing itself. This is especially true for simple image processing (image resizing, corrections, converting, face detection, etc.) that can sustain significant throughput. The bottleneck is often decoding the JPEG data.

JPEG Decoders

In my case, I want the fastest JPEG decoder that could take a JPEG image in memory and decompress it into a RGB pixel array. I didn’t find much in the way of JPEG decoder benchmarks so I decided to evaluate some of the more promising ones I found. Here is the roundup:

  • Mini JPEG Decoder: A simple C++ port of NanoJPEG that is one single file header and very easy to use.
  • STBI JPEG Decompression: Another simple, single C file JPEG decompression library.
  • Libjpeg: Probably the gold standard for JPEG encoding/decoding, but developed quite a while ago (1997) and not entirely trivial to use. Libjpeg has the additional annoyance that the original source code doesn’t support decoding from memory, so I used the TerraLib wrapper class and their libjpeg extension that adds support for reading/writing JPEGs directly to and from a buffer in memory.
  • Matlab (commercial): While not a JPEG decoder, it does contain one and is a widely-used package for researchers, so it is useful to compare against.
  • Intel IPP (commercial): Perhaps one of the most well known “optimized” libraries for the Intel platforms. It contains a UIC JPEG decoder that is highly optimized for most varieties of Intel CPUs (it has separate code that is specifically optimized for each type of Intel CPU).
  • Libjpeg SIMD (page in Japanese): A port of the popular libjpeg to use modern SSE processor instructions by optimizing the core routines in assembly language. The interface is supposed to be compatible with libjpeg. Currently only in 32-bit. I used the same TerraLib libraries for using memory buffers with libjpeg as before.

Comparison of JPEG Decoding Speeds

To compare these 6 JPEG decoders, I devised an experiment: load ~2,000 small images (each image was ~50 KB in size, for ~100 MB of compressed JPEG data) and decode them. The 102 MB of compressed data expanded out to 1,526 MB uncompressed, a compression ratio of 6.7%. That’s pretty good compression there. The whole 100 MB was loaded into memory to avoid harddrive slowdowns (4 GB RAM with no paging file) and each algorithm was run 3 or more times to get an average. The JPEG data was decoded directly from memory except in the case of Matlab, where the JPEG files were read off a virtual harddrive backed by memory using the ImDisk Virtual Disk Driver. So all results should be very comparable. Below is a graph detailing the decompression speeds in MB/s on my Intel 2.4 GHz Core2 Duo laptop. I compiled everything with Visual Studio 2008 Professional with all optimizations on, including SSE2 instructions, in hopes that the compiler could optimize the plain C or C++ code.

JPEG Decoder Comparison (Small Images)

The results actually surprised me in some ways. Since STBI and NanoJPEG focus on simplicity, their poor performance was to be expected. Also, Matlab does try to make its core routines fairly fast. However, I know Matlab uses the Intel MKL math libraries, so it surprised me that Matlab wasn’t using the Intel IPP libraries for faster JPEG decompression. Of course, I was using Matlab 2008, so perhaps newer versions have added this feature. What surprised me the most was that libjpeg SIMD outperformed the Intel IPP. Intel has put a lot of effort into providing libraries that squeeze the most out of their processors, so it was a definite shock to see them outperformed. The results were fairly encouraging overall, especially with libjpeg SIMD performing over 3X faster than Matlab and over 4X better than the gold standard libjpeg. It is unfortunate it is only 32-bit.

Speeding Up JPEG Decompression on Multi-Core Systems

Given that the top performer’s JPEG decoding speed is still ~2-5X slower than what current harddrives can pump out, it makes sense on multi-core/processor systems to use multiple threads to parallelize the decoding. To test how each algorithm scales with multiple threads, I ran the same test except splitting the ~2,000 images over 1, 2, or 4 threads for decoding. I tested on a 2-core system, so realistically I should not see any improvement with 4-thread decoding. Matlab and the Intel IPP both had built-in options to use multiple cores when performing operations, so I used these options instead of splitting the files off into different threads. The results of these tests are shown below.

JPEG Decoder Comparison on Small Images with Threads

The results in general make sense: 2 threads is roughly twice as good as using 1 thread, but using 4 threads doesn’t help and in some cases hurts performance. The two big trend-breakers were Matlab and the Intel IPP. It seems Matlab has not added any multi-threading capabilities to its JPEG decoding routines since the performance difference was negligible. However, the Intel IPP performance gets worse the more threads you add! This doesn’t make sense at all, unless you take a look under the hood. While I am splitting my 2,000 files over multiple threads (say 1,000 images for Thread 1 to decode and 1,000 for Thread 2) for the rest of the algorithms, the Intel IPP package is trying to use multiple threads to decode each image (so part of one image gets decoded by Thread 1 and the other part by Thread 2). This works well if you need to load big images, but for small images, the overhead of creating and destroying threads for each image is so much that it not only counteracts the gains but causes worse performance as additional threads are used. While I didn’t test running the Intel IPP in the same manner I did the rest of the algorithms, I suspect the results would improve: 2 threads would nearly double performance, and moving from 2 to 4 threads would have very little effect at all.
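For reference, the per-file splitting I used for everything except the IPP amounts to the sketch below. It is written with C++11 std::thread purely for brevity (the actual benchmark predates C++11), and decode_one is a stand-in for whichever decoder is under test:

#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Each JPEG is independent, so the file list is simply divided across threads.
void decode_all(const std::vector<std::vector<unsigned char> >& jpegs,
                int num_threads,
                const std::function<void(const unsigned char*, size_t)>& decode_one)
{
    std::vector<std::thread> workers;
    for (int t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t]() {
            // Thread t handles images t, t + num_threads, t + 2 * num_threads, ...
            for (size_t i = t; i < jpegs.size(); i += num_threads)
                decode_one(jpegs[i].data(), jpegs[i].size());
        });
    }
    for (size_t t = 0; t < workers.size(); ++t)
        workers[t].join();
}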

Effects of JPEG Compression Level and Image Resolution

Finally, I was curious about how the JPEG compression level (usually specified as 1-100%) and the image resolution affected the decompression speed. This is important if you are looking at the storage vs. decompression speed trade-off. Let’s say you have 800 MB of uncompressed images (BMP, PPM, RAW, etc.) and want to save them for maximum decompression speed later. What compression is best? What size should you store them at? To analyze this, I took typical large images from my 7-megapixel camera and smaller, web-sized images at 1/4 megapixel and compared the decompression speed as I varied the JPEG compression level. NOTE: I have changed how I measure decompression speed here from MB/s of compressed JPEG to MB/s of uncompressed output. In other words, I am measuring how fast I can decode the 800 MB, since the compressed JPEGs change size as I change the compression ratio. For these experiments, I only used the best performing JPEG decoder, libjpeg SIMD.

Libjpeg SIMD Extension Comparison across Image Size and Compression

At first, the results caught me off guard. I was expecting lower quality settings (higher compression) to take longer to decode, since more work was being done for the compression. However, I suppose when you think about it, the best possible compression would be to reduce each image to one pixel with the average RGB – and to decompress that you just fill all the pixels in the image with your average RGB, which would be really fast. So higher quality settings don’t really gain you any decoding speed, which is interesting. Also, it is interesting to see that the larger images decode much faster than smaller images, which is probably due to caching and the reduced per-image overhead of setting up the JPEG decoder (reading headers, allocating/deallocating buffers, etc.).

Conclusion

From these results, we can conclude that libraries optimized for modern processors are several times faster than their simple C/C++ counterparts. Also, JPEG decoding generally scales pretty well with the number of threads as long as each image is decoded independently of the others. Trying to use multiple threads to decode a single image is counterproductive for small JPEG images, and only somewhat useful for larger images (I tested the Intel IPP with the 7-megapixel images and found that 2 threads is only ~1.3X faster than 1 thread, compared to ~1.7X faster when using threads on separate images). When choosing a compression setting, lower is better for both conserving storage space and decompression speed. Also, decompressing smaller pictures is noticeably slower than decompressing larger pictures.

So, let’s talk about the take home message. What is the best JPEG decoder for you? It depends on your needs:

  • Simple, easy, leisurely decompression: NanoJPEG or STBI. I find NanoJPEG easier, just include one *.h file and you are done. It’s got a nice little class and everything.
  • Fastest possible decoding of large images: Intel IPP is your friend because you can use multiple threads per image.
  • Fast decoding of images, only need 32-bit: libjpeg SIMD is really amazing, but it currently is only 32-bit and takes some work to compile (the website does have precompiled binaries that I haven’t tried). For instructions on compilation, check out the post by sfranzyshen (the exact instructions are a bit down on the page, search for SIMD).
  • Fairly fast, stable, 64-bit decoding: libjpeg is the gold standard and is fairly good for what it does (a bare-bones usage sketch follows below).
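As promised above, here is what a bare-bones libjpeg decode looks like – a minimal sketch reading from a file with the default error handling (a real program should check fopen and install its own error manager):

#include <cstdio>
#include <vector>
#include <jpeglib.h>

// Decode a JPEG file into a packed, interleaved RGB buffer using stock libjpeg.
std::vector<unsigned char> decode_jpeg(const char* filename, int& width, int& height)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);

    FILE* f = fopen(filename, "rb");
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    width = cinfo.output_width;
    height = cinfo.output_height;
    const int stride = width * cinfo.output_components;
    std::vector<unsigned char> pixels(stride * height);

    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char* row = &pixels[cinfo.output_scanline * stride];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return pixels;
}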

So there you have it, a brief analysis of JPEG decoding. Go out and enjoy some JPEG decompression goodness!

Giving Thanks for Christmas Trees

This weekend was Thanksgiving, and I drove 8 hours from CMU in Pittsburgh, PA to my grandparents’ place in Newland, NC. Well, actually, although I’d rather not admit it, the 8 hours turned into 11 hours because I read the map wrong. I can understand multiple roads merging and sharing the same physical road. For instance, one road going south and another going east can share a physical road going southeast for a while. However, a road designated as traveling south and a road claiming to go north should NOT be sharing! My directions said follow I-81 S/US-52 N – in what crazy system is going down this road both north and south?!? Very confusing it is, and it caused me to travel an extra 150 miles.

Regardless of how many times I got lost (once! OK? No need to insult me!), I arrived the night before Thanksgiving. My grandparents have a Christmas tree farm, and late every fall they cut, haul, bale, and load Christmas trees – some wholesale, some for their retail customers in Florida where my uncle lives, and some for the Choose & Cut customers. Often the Choose & Cut customers are families who want to come up and have a good time looking around for a tree, have us cut and load it for them, and then go home with a fresh tree they selected themselves. It is really quite nice to see families having fun with this – some spend hours analyzing what seems like every one of the thousands of trees to find the perfect one. However, this means the whole family works over Thanksgiving and the weekend. Which means I spend all day outside in freezing weather trying to process 6-14 foot Christmas trees. The first day it snowed on us while we were trying to bale 50 or so trees. My shoes got soaked, my jeans got soaked, everything. It was quite miserable, and worst of all it started snowing just as the sun was going down and the wind was picking up. By the next day the temperature had dropped to 15 F with winds of 15-20 mph gusting up to 50 mph. It was quite bitter. But luckily my grandmother and mom were making cookies and hot apple cider for the customers who came, so we could go to the barn to warm up and get some warm snacks. Fortunately, Saturday and Sunday warmed up significantly to 45 F or so without much wind, so I was able to work without being miserable. All in all, I think I helped cut, haul, bale, and load a few hundred trees. Wow, I’m tired just thinking back on it. We would always dread the monster ones – or sequoias as we called them – that were 12+ feet tall and weighed… well, I’m not even sure, other than to say I couldn’t lift one myself. Which I’m embarrassed to say isn’t really saying all that much because I’m a skinny nerd. Suffice to say, I was very sore and tired and amazingly enough went to bed before midnight almost every night.

But it was a good time and I made some money out of it. It’s always good to help out the family biz – and I got some great meals out of it too.

Taxing Target: Students

So Pittsburgh this year has a $15 million deficit, mostly in the pension funds. But the Mayor of da Pitts has a solution: tax all the students 1% of their tuition! Brilliant I tell you – I couldn’t have done better myself. Oh wait, I forgot to use my sarcasm voice. Yeah, the one that sounds all high pitched like somebody drove a pitchfork through my stomach. Now while I’m generally not really all that political, an extra $400 a year just because I’m a student sort of rubs me the wrong way. I know students get a lot of tax breaks, but seriously, most of the people I graduated with who went into industry are making $60-80k. In contrast, I’m pulling down a small fraction of that. And they work 9-5 while I’m… well, yeah, I’m still up at 6 am posting to my blog after having disassembled and reassembled the lab microscope all night to figure out how to save my advisor some money on scope attachments. Sigh… anyhow, I did go ahead and write my representatives on the city council with the following nice little letter that they will promptly ignore except to squirrel away my email address to spam me.

Dear X,

Last week, “Luke Ravenstahl hosted the Graduate Pittsburgh Summit to increase public awareness of the dropout and college/life readiness crisis in Pittsburgh” (source: Mayor Ravenstahl’s website). How is taxing students going to alleviate this crisis and increase the graduation rate? According to “Pittsburgh’s Dropouts: A Look at the Numbers” (source: Mayor Ravenstahl’s website), the Mayor’s Office sponsored a survey asking high-schoolers “What would keep you interested in graduating from high school?” and the highest response at 79% was “Money for College.” So further taxes on students are going to encourage them to stay in school?

As a PhD student of the Robotics Institute at CMU, I urge you to reconsider your support for the tuition tax. I do not pay tuition as a funded graduate student. Instead, I get a small stipend from the school, and my advisor’s grant money covers what tuition the school charges for classes, resources, etc. Essentially, my relationship with CMU is that of a very low-paying job to advance medical robotics research. I could be earning significantly more in industry, yet I believe that the valuable research I am doing and will be enabled to do in the future is worth a much leaner lifestyle than that of friends I know who did not pursue graduate degrees. Charging me a tax on an amount I do not currently pay is a large burden, one I feel is unwarranted.

Furthermore, Pittsburgh has earned a great reputation for promoting academic progress, and I feel this step towards taxing students is counter-productive and will lessen Pittsburgh’s attractiveness to brilliant new students evaluating where they want to study. Let’s be honest here, Pittsburgh is not all that attractive a place to the outside world compared to other bastions of higher learning such as San Francisco, Boston, etc. Let’s not make it any less attractive by adding student taxes. While an extra $400 a year might not seem significant, it is. Many graduate students I know are funding their education through loans, some of them international loans with high interest rates.

In conclusion, I feel that students are being unfairly targeted to carry the cost of the budget deficit; charging $16 million to a population that is already sacrificing time and money to better not only themselves but society as a whole seems simply unjust.

Sincerely,
Brian C. Becker

First Journal Paper

So my first journal paper was written in 13 days. My advisor originally wanted it written in a single day, but I certainly do not possess such superpowers – not in the least. The reason for the massive rush was that my advisor had promised a journal paper in the grant proposal by a certain timeframe (which has long since passed). Long story short, I constructed a journal paper from the proposal and an earlier conference paper in record time. Oh, and I learned how to calculate p-values (ttest2 in Matlab). Anyhow, I did a final proof-read, corrected some errors, and sent the last draft off to my advisor at 6 AM. I woke up at 9:30 AM, had a bunch of corrections to make, and we had a nice back and forth until lunchtime. After my advisor submitted the paper, I went back to sleep around 2, thinking I would get a good solid couple hours of nap time. Alas, this was not to be, as my advisor called me 30 minutes later to schedule a meeting discussing what I would be working on next. Sigh…

One of the most annoying things was the format of this paper. They required Word, and not just that, but they insisted on Word 1997 format with figures in TIFF and tables on separate pages. Annoying to say the least. Unfortunately for me, I had written the paper in Word 2007 with the fancy new equation editor they introduced. Saving the document in the old format converted all my beautiful equations into terribly rendered picture representations of my equations. It made me want to cry. I had exactly 100 equations in my paper (and no, I didn’t aim for that number), so I didn’t want to retype them. So I resorted to some VBA trickery. First I increased the font size of the Word 2007 document by 5-10X. Then I saved to the old format, and my giant-font equations got saved as giant graphics. Then I wrote a VBA script to go through the document and resize all my equation graphics to get high-DPI equations. This approach met with limited success: it worked, but the side effects were terrible. First, the vertical alignment was way off, so I wound up cropping the graphics to pad the bottom of each equation so that it aligned with the rest of the sentence. I was all happy that this worked until I realized it totally messed up the print-to-PDF function: cropping the equations even in the slightest caused them all to come out with black backgrounds. Gar! Foiled! Finally, I decided I’d re-write them all by hand in the old version of MathType so they would be compatible with the old Word format. But to my surprise, I discovered the new MathType has a function to automatically convert Word 2007 equations to old Equation 3.0 style equations. Voila! It worked quite well, although it made a few mistakes and mangled some of my paragraph formatting. But it was way better to watch it scroll through my document converting equations for several minutes than to redo them all by hand!

Tower of Bab…oxes???

After only getting 3 hours of sleep yesterday, due to creating the next homework for computer vision and an early morning meeting with my advisor, I learned that all the testing has to be done this week for the paper abstract deadline next week. Oh joy joy. So instead of going home to sleep off the afternoon, I had to do, horror of horrors… actual work. However, it was SOOO cold down in the dungeon. Usually it is quite chilly and I’ll wear a light sweater, but it was just completely ridiculous today. Somebody must have left the AC on manual override, or maybe it was just sucking in 40 F air from outside. Anyhow, there is a giant vent right up and to the left of my workstation. And with the surgeon coming in the following morning, I was going to be working most of the night. So I decided to do something about this infernally powerful AC draft. After several ideas, I came up with a box wedge solution. The only problem? What do I wedge the box against to block the vent? Turns out the solution is to construct a giant stack of boxes. And it worked! Oh, the flush of warm success cascaded down on me like the harsh crushing clank of a dropped needle.

Tower of Babel

Interview Questions on Language Specs

So today I heard of a CMU graduate student who was interviewing with Microsoft and they asked him what the result of the following C code would be:

int i = 10;  printf("%d %d %d\n", i++, i++, ++i);

First off, this is a dumb question to ask a student who is about to graduate with a PhD in robotics. Second, this is utterly ridiculous code that should be avoided. Third, this question really only applies to a compiler developer, or perhaps to someone interested in seeing how well a candidate knows language specifications. Regardless of what this question was supposed to be testing, it is an interesting piece of code. For instance, a first guess might be 10, 11, 13. However, I know the C calling convention specifies arguments are pushed onto the stack in reverse order. So my guess for the output of this code would be: 11, 11, 13. The acid test, however, is to run it. Let’s compile a test program and see what we get:

gcc on Ubuntu: 12, 11, 13

What? This answer doesn’t match up to either guess and makes no sense. Let’s try it on Visual Studio 2005:

VC++ 2005: 12, 11, 13

Well at least the result is consistent if not terribly intuitive. However, just for kicks let’s try release mode:

VC++ 2005 Release: 11, 11, 11

Whoa! What’s going on here? This answer makes even less sense! Not only is this totally unexpected, but we have just lost consistency between the same compiler. This is very very bad. Code should NOT run this differently between debug/release modes.  For a complete roundup, let’s try some other compilers:

VC++ 6.0 Debug: 11, 11, 13

VC++ 6.0 Release: 11, 11, 11

gcc on cygwin: 12, 11, 11

Wow, so this is faith shattering. The same piece of code runs totally differently depending on compiler/configuration. What is going on here? Doing some sleuthing on Google, it turns out the root of the problem is that the C/C++ standard leaves the evaluation order of arguments passed to functions unspecified. Also, we are mixing reading and writing the same variable in a single expression (the ++ operator does both), so again the result is undefined. However, not ONE of the compilers warns about this! In fact, let’s enable all warnings on gcc (-Wall). I now get “warning: no newline at the end of file.” Great! gcc warns about meaningless issues while totally ignoring completely unspecified behavior. So helpful!

Now it might seem this whole thing is academic. After all, who would ever write code as silly as that printf statement? Actually it’s not quite as far-fetched as you might think. Let’s take a hypothetical example of saving and updating some records, where both update and save return an integer error code indicating success/failure:

printf("Updating records: %d\nSaving records: %d\n", records.update(data), records.save());

Oops, which one gets executed first? Did you save the record before or after updating it? Major oops. Moral of the story: knowing language specifications can help you avoid very subtle bugs that not even the compiler will warn you about. Also, you can make a fool out of your interviewer by proving yourself more knowledgeable than he is when he asks dumb questions (this probably won’t get you hired though).
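The fix, of course, is to sequence the side effects yourself in separate statements so the order no longer depends on the compiler:

int update_result = records.update(data);  /* guaranteed to run first */
int save_result = records.save();          /* guaranteed to run second */
printf("Updating records: %d\nSaving records: %d\n", update_result, save_result);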

Some good references on the subject:

Kalman Filters + QNX madness

Today was a packed day, full of excitement. As the TA for computer vision, I had to give the lecture today since my professor is in Kyoto at the ICCV conference. My lecture was on SIFT, arguably the most important concept in computer vision. And so many glazed looks from the class… sigh… at least I don’t think it went too poorly. I debated recording it so I could analyze it later to see how badly I did, but didn’t get around to it. Next time… procrastination strikes successfully again. Interestingly enough, the back row was apparently the place to be. I had one friend fall asleep during my lecture, and two others were apparently arguing about whether I was a controls or vision person. I personally maintain that I’m neither: while attempting to do both, I do neither well.

I am also working with the new visiting Spanish student in my lab on Kalman filters and developing a model of Micron. The results so far are looking promising: a very basic Kalman filter with an identity A matrix and no inputs is able to reduce stationary noise by several factors, down to an RMSE of 1-2 microns. Not bad, but then again we are using the Kalman filter under the most idealistic scenario. It will be interesting to see what happens when we add in a model of hand movements and the kinematics of the system. On an unrelated note, I spent some time working with Uma to get his new computer, which uses PCI instead of ISA, to work in the realtime operating system QNX. He is trying to interface with various electronics such as DACs mounted on PCI expansion cards. That still needs more work, as we keep getting weird errors where functions compile and link just fine but then spit out “Error not implemented” when you run them. Oddnesses abound.

Simple Kalman Filtering
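For reference, with an identity A matrix and no inputs, the filter collapses (per axis) to just a few lines. This is a generic textbook sketch, not the actual Micron code:

// One-dimensional Kalman filter with A = I, no control input, direct measurement (H = 1).
struct SimpleKalman {
    double x;   // state estimate (e.g., tip position along one axis)
    double p;   // estimate variance
    double q;   // process noise variance
    double r;   // measurement noise variance

    double update(double z)        // z: new noisy measurement
    {
        p += q;                    // predict: state unchanged, uncertainty grows
        double k = p / (p + r);    // Kalman gain
        x += k * (z - x);          // correct toward the measurement
        p *= (1.0 - k);            // shrink uncertainty
        return x;
    }
};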

No payments until next Jan

It’s like those very clever advertisements: “Don’t have cash at the moment? Buy it now anyhow, and we won’t charge interest until next Jan.”  Of course, by the time you’ve gotten through Christmas and New Years, you still won’t have any money and they’ll slap on months and months of interest. Gotcha! Life is like that. Today I went for lunch, did some grocery shopping, baked, and went to a birthday party – all with some friends. It was a great time, no doubt about it. But….it is reminiscent of the clever ad: have fun now, overpay later – meaning lots of work for tomorrow. Moral of the story is make sure you plan into the future to have enough cash/work done before you purchase/have fun. Otherwise you might be paying/working more tomorrow than you originally anticipated.

Blog up, Surrogates

One of the things I’ve been missing is a good blog that does all the fancy stuff I’ve been needing: categorizing, easy editing, RSS, etc. I figured I’d install the ever popular WordPress and be done with it, but it requires MySQL and I’m pretty allergic when it comes to databases. Not that I have anything against them, but I like the surety of seeing my content as files somewhere that I can back up. When searching for alternatives, I discovered that it is really quite hard to find a blog that doesn’t use MySQL. The few that don’t all look terrible and have lame features. I finally hunted down NinjaBlog, which appears to be a modified version of WordPress that uses flat files as the storage mechanism. Hurrah! Anyhow, we’ll see how it does. EDIT: It went really poorly; I had to go ahead and swap to WordPress, which is actually quite nice, even if it does hide all my data away in a database.

On a more personal note, I saw Surrogates today. I had been excited about it for a while, but recent reviews have not been kind, so I went in with low expectations. I found it quite enjoyable, if predictable at times. It makes one think about life and the lies we use to reassure ourselves. But the best part was it featured Takeo Kanade, the professor at CMU/Robotics Institute who is largely responsible for the field of robotics. Way to go Takeo! With Randy Pausch in Star Trek and now Takeo Kanade in Surrogates, perhaps being featured in a major Hollywood film is closer to me than I think 🙂

January 18th, 2009

Well! A new year already. This break I was working on organizing my media collection. I’ve been playing around with minishowcase, which is a very nice little AJAX web gallery. I was looking for something pretty much like Picasa, and this is the closest I’ve found. Light, easy to install, well coded, it is quite nice. I’ve always thought gallery2 and coppermine were utterly useless when it comes to gallery management. minishowcase doesn’t have all the admin features, but the UI is sweet! Just throw all your photos in a folder and bam, you have a nice interface to select galleries and move between photos (with the keyboard, just hit the right/left arrow keys) without reloading the page (I’m looking at YOU, Flickr! Grrr…). I also tried out ampache, which seems pretty nice as well. I think I’ll upload my photo and music collections to this website (after all, I do have like 150 GB)…


October 2nd, 2008

Well I had a good time in Amsterdam for the FG2008 conference and then in Barcelona and Rome a few days afterwards. The conference was useful and the time off was fun. But I have gotten behind in research and coursework unfortunately. For some reason I’ve found it hard to get back into the swing of things. I really need to crack down and get a lot of stuff done. On a related note, I’ve thrown up my Face Recognition Evaluator, which is a MATLAB package for comparing face recognition algorithms.


September 13th, 2008

Wet, cold, stressed, sick, and exhausted would best describe me today. 3 days until the ICRA paper submission and my advisor has totally changed how the underlying algorithm should work. I have no idea how I’m going to get it coded, tested, and written about in 3 days. On the upside, Enrique is creating the PowerPoint presentation for FG2008 and I uploaded my Facebook Downloader program for getting face recognition data. It allows you to collect data from Facebook for research and academic purposes in the field of face recognition.


August 31st, 2008

Well, I survived my first real graduate conference in Vancouver, Canada. It was pretty fun, although I did hit a new low – I actually fell asleep during a presentation! At least I still have my record of never falling asleep during class… we’ll see how long that lasts. My advisor and I took one of the more uninteresting days off, rented some bikes, and toured the town. I didn’t find out until a little bit into this that he routinely bikes 15 miles in hilly Pittsburgh. And I hadn’t been on a bike in 2 years… a recipe for disaster. If he didn’t think I was a wimp beforehand, I’ve definitely cemented the impression. It was touch and go for a while, as I wasn’t sure I’d actually make it. Thankfully he made frequent stops on my behalf, otherwise I wouldn’t have survived. But it was fun; we biked about 25 miles all told that day and got to see a lot of the area.

I went to go install filethingie on my new webserver after like a year (I figured it was time) and to my surprise there was a new version out! It is awesome! All nice and AJAX, with background saves for editing files. I’ve decided to use it for this site for now. I hacked it a bit so that it can build my site with txt2tags and the default operation is edit instead of view, but otherwise it’s quite nice. Well I have two weeks until the ICRA paper deadline so I better get cracking. Last week I had two nights where I only got an hour of sleep, looks like that might be a continuing trend. Sigh….


August 9th, 2008

Well, it’s been half a year since I last posted news and over a year since I last really changed up the website. Thus, it was time to redo the website. I ripped off a template design from Steve’s Template that I sorta liked and then modified it to suit my needs. The new theme looks a bit dated, but it is a bit cleaner and definitely more logically laid out. While I was at it, I re-wrote the PHP code that generates my website for me. The old code was terrible, literally. The new code is much cleaner and better organized. However, it is still a bit kludgy. I have a new tag-line for the website: “Arcane Robotic Incantations.” I’m not sure if I like it all that much, but it’s better than my old “Escapades in Arcane Programming.” I have a hunch it is too esoteric; maybe “We’s gots robots” would be better. In any case, there is still a lot to do on the website and once again it probably won’t get done. But one can hope, right?


April 12th, 2008

I’m always surprised at the long lengths of time I neglect this site (unfortunately). One day I’ll either dedicate more time or admit defeat and give up. Or just have it automatically redirect to a Rick-Roll video. Speaking of which, my KDC class (Kinematics, Dynamics, and Control) team rick-rolled our professor this semester. He didn’t get it, but the class laughed. Our prof thought we were going to get up and sing along to this weird guy singing “I’m never gonna let you go”. In other news, CMU is definitely putting me through my paces. The KDC course is really time consuming, as our assignments are to simulate a 2D walking 5-link biped. It is really intense, but at the end of it we generate some cool videos which you can find on my YouTube page. I’ve also written two papers that, due to some mystical bad luck (Murphy, cough cough), wound up due on basically the same day. I was able to get them both done, which is good. I also failed my 3D vision midterm (got a 51% on the exam, and yes, that was below the class average). So I need to do a really good project and study hard for the final. Speaking of which, the final is only a few weeks away. Back to work, sigh…


October 28th, 2007

I have pretty much settled into “da Pitts” and CMU. It’s getting cold though. This poor Florida boy is not liking the cold sweeps of air moving down out of Canada. Tomorrow it is supposed to potentially freeze and go into the high 20s. Brrr! In Central Florida, it only freezes a few times a year max, and going into the 20s is very rare. Oh well… I tried to buy some hypnosis balls and swing them in front of my eyes and use self-hypnosis to convince myself that I liked cold weather, but it’s not working so far. Actually, I didn’t buy hypnosis balls and try that, but somehow I doubt it would work. But if it gets cold enough, I might just give the crazy idea a try. So I mentioned earlier that I might try making my own CMS using the Google Web Toolkit. This is the first news item that uses the new half-baked CMS I’ve made with it. I really like how it is turning out. There are some frustrating things with the Google Web Toolkit (GWT), especially with sizing and silly tree items, but overall, it is a nice package. Sure beats doing it through straight Javascript. I really don’t have that much yet (a few hundred lines of code in Java and PHP), but I have it so it loads my existing page/file structure in a tree that I can browse. I can double click and open up pages that load in tabbed text area boxes. Since this site is done in txt2tags, a simple text area does fine. I could do it in HTML as well by using the RichText control. There is a lot that is still ghetto (not counting the fact I don’t have creating pages or editing templates or anything) and hardcoded, but I’m proud of it so far. Of course, it’ll probably crash when I try to save this page, but… Anyhow, when I slack and make time to work on this, I’ll update it some more. I’m hoping to make it a bit more generic and release it as a very simple CMS (yes, I know there is a CMS with basically that same name). Well, it’s late so time for bed. Gotta get up in 4 hours and I still haven’t showered tonight. Sigh… This is going to be a very bad week for me.


August 27th, 2007

So yes, an auspicious start to the whole “beginning classes at CMU tomorrow” deal. First I discover that apparently I left the lights on in my car – for an entire week. So yes, I don’t have a volt-meter handy (gasp!), but I’m guessing the battery is much, much less than 12 volts. So much for that RI picnic I was supposed to attend this afternoon. Unluckily, I don’t know anybody well enough to ask for a jump. Luckily, I now live in Pittsburgh, where stores are crammed on top of each other with absolutely no woods, lots, or any sort of green growing fields anywhere. Wait, why is that lucky again? Oh right, that’s good because I have an Advance Auto Parts only a half mile or so away. Thank you Google, and thank you Pittsburgh. And lo and behold, according to the Advance Auto website, they have a rechargeable jumpstarter for only $40 and it is in stock. Hurrah! Of course, when I arrive I discover that they lied and the jumpstarters were all sold out. OK, so I’ll make do with a plain old car battery charger. Sure, it takes 8 hours to charge, but it’ll get the job done. And if I grab another car battery and an inverter, I can make my own UPS in the future. So I buy my battery charger, great. I spend 10 minutes unscrewing things to get my car battery out of the car, but I get it out and get myself dirty in the process. Oh well, I get it charging and everything will be good to go tomorrow. However, that’s not the end. I decide to have some spaghetti for supper, get a pot of water to boil, and turn my gas stove to “LITE” – and nothing happens. Hmm… the range top is still pretty warm so the pilot light must still be on. How about another burner? Nope. Feel up the stove again and it’s cooler. Drat! I must have extinguished the pilot light. It is times like these that I’m glad I’m good at software, because hardware stuff sure doesn’t like me. Now I have to decide, should I try the toaster oven to make some supper or will it break too? Hmmm… difficult decisions ahead…


May 2nd, 2007

Tonight was the Honors Banquet, which was sorta fun/sorta boring. It was fun because my sister and her boyfriend came in place of my parents (they are in Scotland living it up). It was also fun because I got to sit with Enrique, Chris, Christina, and Jason. My advisor Dr. Gonzalez was supposed to be there, but he was in Jordan (see the trend!) so Dr. Georgiopoulos was my “substitute advisor” for the night. It was funny because when they called me up to get my medal (everybody got one), they didn’t say “Dr. Georgiopoulos filling in for Dr. Gonzalez.” Nope, Dr. Georgiopoulos was announced as Dr. Gonzalez. That was pretty funny. Some of the Honors in the Major undergraduate thesis titles were pretty funny too. One was about Superman and the legacy of heroes or something. It was boring because of all the speeches and the giving of the medals. I suggested they just do what the rock concerts do: throw the medals out to the audience and let them catch them. The speeches were extremely boring except one by a student. As Jaryd (my sister’s boyfriend) said, you can totally tell she thesaurus’ed the whole thing. Half the time she was bragging about her accomplishments (without making it seem obvious… and failing) and the other half of the time she was simply making stuff up that sounded so “dressed up” that it was incredibly cheesy and laughable. I think I laughed my whole way through it as she threw out words like “indubious” and phrases like “consciously committed this moment to memory.” Ahh me… at least the cake was good; in fact, I stole a piece from the table next to us on the way out.

On the technical side, I was thinking over my pretty ghetto website and CMS that I made. I was thinking it could be so much better with a (gasp!) AJAX interface. I know, I know, AJAX is the newest buzzword. But what I want is a more “desktop-like” experience when developing my website, but from the web. For instance, auto-save anybody? Ever entered a post/website content into an edit box only for the page not to load, and then when you hit back it’s all gone? Anybody else do the “copy before you submit so you don’t lose everything you typed for the past 15 minutes” thing? Ever wanted to edit multiple pages at the same time in tabs? So yeah, AJAX would be an awesome application for this (in my opinion). And not too many CMSes seem to have these features. So I’m investigating the Google Web Toolkit, which compiles Java code to Javascript. That means you can develop your code in Eclipse and then have it compiled down to HTML and JavaScript. I was initially suspicious (and still sorta am), but looking at it more, it seems not only a cool idea, but one with merit too. It would be cool to have an AJAX CMS with a PHP backend. I found this link helpful when trying to access PHP from my local machine using the Google Web Toolkit: http://www.drivenbycuriosity.com/mywp/?p=52#more-52. I never have enough time, so this is probably a passing fancy, but I thought I would share my initial (and favorable) impressions.
