Wednesday, May 27, 2009

rising edge

this evening, during a brief reality check for maker faire, jas asked an important question about this stage of the process. something to the effect of: "at this point, how hard would it be to just stitch the images and hack tbeta to read the stitched stream instead of a camera?"

we discussed it briefly, but it became a backburner thought pretty quickly.

after arriving at my new room in my new home, and doing my usual google reader thing, an idea struck me.

how hard would it be to just hack tbeta to do away with the camera handling altogether and link it to our blob objects right now?

there's definitely some evasion in the question, and it doesn't solve all of our problems immediately, but i think it's worth considering.

falling edge

sometimes coming home is the best idea, even if it's a new home.

it's been nearly a week since the last time i legitimately updated this thing, so how about an update?

remember my goals from last week? time got the better of me, and moving took up pretty much my whole weekend.

let's go line by line:
- migrate to a CMake building system -
decided that it was a procrastination idea. good in theory, but since our makefile works, let's not fuck with it. there are two more weeks in the project, and maker faire is this weekend. we need to get something up and working, stat.

- successfully thread blob detectors -
i tried two different approaches to this. the first involved creating a single pthread for each detector, a pthread for stitching, and having the main thread do bloblist sending. the threads were programmed to wait for each other to finish before looping around, so really only the blob detectors were running in parallel, while the stitcher and sender were separate but synchronized sequentially. the idea was to lay the framework for a pipeline without optimizing it. my first stab resulted in deadlock, and i spent about half an hour debugging it before calling it quits that night.

my second approach involved creating and destroying pthreads for each detector every time around the loop. this approach is obviously less robust, since it introduces the overhead of creating and destroying the threads on every iteration, but it meant that i could avoid mutexes and not worry about deadlock. it also meant that i was only threading the blob detection. i got it to work, but it appeared to perform worse in that it used more cpu. matt brought up that this might actually indicate it's performing better -- processing more frames per second -- which would account for the higher cpu usage. that's under the assumption that we're getting what we pay for.

ultimately, i had to put this on the backburner so we have something to show at maker faire.

- finish hacking up the cameras we have and see about putting as many of those as we can in the table

just finished this today. bought two cameras from best buy, one from gamestop, and left one on hold at the other gamestop, unreasonably located less than a mile from the first. if anyone else was trying to get a ps3 eye webcam in santa cruz today, it's safe to say they had quite a hard time, considering what i bought was all of the stock at both locations i visited. i also cut my finger pretty badly on our janky, rusty-looking, dull-ass xacto knife. eddie warned me. i didn't listen.

- bezel, if it arrives

still hasn't arrived. jas needs to call mcmaster carr tomorrow.

- ask dilad if/when they shipped the screen / tracking number

the dilad screen arrived friday, damaged during shipping. thankfully, michael from tcl in vancouver was readily available by phone and email, so we were able to send some pictures and figure out the best course of action for our project. the problem was that the tube had buckled in the middle, denting the rolled screen. the dent left a mark that repeated a few times down the length of the screen. michael advised us to proceed with applying the screen, since it wouldn't make sense for us to send it back and there was an off chance it would still work. it didn't, and the dents showed up as dark spots in the projection. conveniently, monday wasn't a holiday in canada, so we received a shipment of another roll today, this time with the tube double-boxed. the extra packaging was definitely appropriate -- neither of us wanted this to happen again -- but comical nonetheless. there was enough undamaged screen from the first shipment to apply to the prototype without any wrinkles, and the results are pretty spectacular. the screen is completely visible with all of the lights on, though it's slightly annoying that the glass surface reflects the lights overhead.

- implement basic single-value x and y offsets for multiple detectors

done and done. next up is line offsets -- offset x by a certain amount based on y, then offset y by a certain amount based on the original x (not the offset one). shouldn't be too hard. need to select a data structure.

damn, i've already populated a giant post, and i haven't even come close to touching on what i wanted to write about in the first place. i think i'll double post for organization.

Friday, May 22, 2009

Thursday, May 21, 2009

goals for tomorrow

some personal, some whole team.

- migrate to a CMake building system
- successfully thread blob detectors
- finish hacking up the cameras we have and see about putting as many of those as we can in the table
- bezel, if it arrives
- ask dilad if/when they shipped the screen / tracking number
- implement basic single-value x and y offsets for multiple detectors

reading for tomorrow evening: tbeta calibration source code.

i'm honestly not looking forward to it.

tonight i read about makefiles, recursive make, and cmake. i think cmake is the way to go, though it is a little more to type (e.g. cmake && runtest), but it seems more efficient.

one reasonable reference i found on makefiles was a bit redundant given what i already knew, but spelled out some things i only sort of knew in further detail.

that brought me to a quite dated but worthwhile paper on the problem with recursive makefiles, leading me to look into cmake.

so far we've been building all of our dependency lists by hand, which i don't think is the best idea. but right now it works, which makes pushing for cmake now a lower priority.

looking back at the list, it's probably in reverse order..

Tuesday, May 19, 2009

Calibration: My Greatest Fear

Matt's recent post on config files got me in a blogging mood, which is probably a bad idea. I have an interview tomorrow in Sunnyvale, so I need to wake up at 6, which is only 4 and a half hours from now. Might as well keep pushing.

At the end of last week, I wrote a basic parser for our config file that contains the calibration data we need to properly initialize BlobDetector and BlobStitcher objects. The parser is wrapped in a Calibrator object that populates an array of Camera objects with the calibration data. The Calibrator is then passed by reference to a function in each of the Blob objects (Blobjects?) to read data from the Camera array and set values accordingly.

I spent yesterday and today reading about clipping and how to determine if a point is within a polygon, only to wind up using a slight modification of the pnpoly algorithm linked at the end of my last post.

Needless to say, by the end of the evening, I had a basic setup with two cameras that only output blobs that were within a defined region of the capture space.

The milestone (if you can call it that) is promising, in that it means stitching blobs is doable, at least in a crude, by-hand-calibration, good-enough-to-work engineering sense. It doesn't mean we're ready for Maker Faire yet, nor does it mean we're in the clear for our presentations in June, but it does mean we're close.

My next biggest worry is dealing with calibration and the overlapping region of the cameras. Basic decisions are worrying me now, like: is it better to always cut cameras off from the left, or to give some cameras their full view and inhibit others more?

And we haven't even gotten into how we're calculating the calibration parameters based on user input ("touch here please", "swipe there please")...

Wednesday is the IEEE Dunk Tank. We're still trying to convince Petersen that IEEE is a legitimate pre-professional organization.

Tomorrow night I'll get offsets to work, and Wednesday I'll try to get touch point-to-image point mapping to work.

Oh, and we ordered our projection surface from Dilad in Canada.

I guess I'd better rest up for that interview tomorrow..

Sunday, May 17, 2009


dropping blobs should be easy, but i'm not really in a blogging mood, so i'm not going to bother explaining why it's not right now..

just here to post links for reference.. these are similar, so adaptations will be necessary. thought for food.

also a minor status update: the blob server is working nicely with a basic keyboard application that davide wrote.

eddie's working on tracking IDs, and i'm working on doing the stitch transformations, dropping, and then hopefully next week, calibration.

edit on monday -- one more

Thursday, May 7, 2009

Wednesday, May 6, 2009

eddie fixed the problem i described in "the life of a blob" by adding a solid black border to the images before running the blob detection algorithm. slightly genius maneuver.

a couple of random, possibly useful resources

which i discovered from

Monday, May 4, 2009

the life of a blob

admittedly, i've spent the past five minutes browsing facebook instead of writing in this little blog box.

i'm a little disappointed that the rest of the team isn't really keeping up on the blog front. we've made a lot of progress in the past month, and while it doesn't always seem like there's something to say, there probably is.

last night i took a scalpel to eddie's blob detection code (which works!) and turned it into an object instead of a single program. my motivation was to make the blob server use the blob detection object in conjunction with a blob stitching object (yet to be written) to create the data it needs to send out to clients.

the surgery went well, and i made sure that the blob detection still worked when i was done with it. however, it was still pretty ugly, so we spent an hour or so today going through and tidying things up, modularizing what needed to be modularized, until we got interrupted by my desire for food and eddie's desire to walk home before it started raining.

now, we have blob detection, and we have more than a few properly filtered cameras, so i've started to investigate blob stitching more thoroughly.

with just one camera, and a simple test, my entire conception of what a blob is was thrown out the window.

as it turns out, if what would otherwise be considered a blob happens to be touching the border of the image, the cvBlob library says "ohai, i can haz solid borders.. i aint no blabs" -- which, in english, means white pixel splotches in an image are only blobs if they don't touch the edge of the image.

the inherent problem is that if a single blob is touching the border of two cameras and is not fully contained by either, then it never gets detected.

so, about half of my worldview is completely fucked right now.. this means one of two things: either we go into the cvBlob library and make some changes (extremely undesirable), or we ensure that adjacent cameras overlap by at least one blob's width, so every blob is fully contained in some camera's view (mechanically difficult, poses philosophical issues).

i'm going to investigate the second option here briefly. when we first started the project, jose mentioned wanting a swiping-type function where the user would use the side of his hand (pretend like you're playing rock paper scissors: make a rock with your right hand and put it down on the table. now unclench your fist and straighten your hand, then swipe it to the left).

if this sort of blob behavior is to be expected or accepted, then any notion of blob size is now completely out the window, making the second option ineffective.

so now it's looking like the time to put on my white hat and get to hacking.. i wonder what licensing model they're using...