Friday, June 5, 2009

moment of truth

the past three days have been grueling.

i've slept 3 times since waking up wednesday morning (my birthday, no less), each time for two hours.

since wednesday, however, we've made immense progress. we went from six cameras to four for two reasons. first, we only have four cameras with lenses that focus reasonably; all of the others produce large, blurry blobs, which is a problem when two touch points are near each other, and bad for consistency.. we don't want some areas of the table to perform better than others. second, USB bandwidth.

wednesday night, i tried running stitching and detection on four cameras (while we were still planning on six), and the lag was unbelievable.. i think we hit 2 or 3 seconds per frame.

we've since added a PCI card, switched computers, switched drivers and switched kernels to get it to work at a reasonable speed..

last night at around 2am (technically friday morning) we had a huge morale booster -- matt and jas got projection distorting to work, meaning that we can definitely get everything up smoothly for the presentation and design competition. there's a $2000 prize for the winning team, so we're hoping to win and have a nice team celebration with that.

right now, 3/5 of the team is resting up, jas is working on the math to execute display correction, and i'm waiting on a key from BELS to swap out the crappy ATI video card for a nice nVidia card, which gives me a moment to blog at this crucial time.
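for the curious, one standard way to do this kind of display correction (not necessarily the math jas is working out) is to push each output point through a 3x3 projective transform:

```
// a minimal sketch of projective display correction -- NOT necessarily jas's
// math, just the textbook approach: warp each point by a 3x3 homography H.
struct Vec2 { float x, y; };

Vec2 warp(const float H[3][3], Vec2 p) {
    // homogeneous coordinates: (x, y, 1) -> (x', y', w), then divide by w
    float w = H[2][0] * p.x + H[2][1] * p.y + H[2][2];
    Vec2 q;
    q.x = (H[0][0] * p.x + H[0][1] * p.y + H[0][2]) / w;
    q.y = (H[1][0] * p.x + H[1][1] * p.y + H[1][2]) / w;
    return q;
}
```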

our check-off is in 3 hours. between now and then, i need to make stitching work across the whole table.

doable, but a tight squeeze. i should probably go look for someone at BELS right about now.. or bob vitale.

Wednesday, May 27, 2009

rising edge

this evening, in a brief reality check for maker faire, jas asked an important question for this stage of the process, something to the effect of "at this point, how hard would it be to just stitch the images and hack tbeta to read the stitched stream instead of a camera?"

we discussed it briefly, but it became a backburner thought pretty quickly.


after arriving at my new room in my new home, and doing my usual google reader thing, an idea struck me.

how hard would it be to just hack tbeta to do away with the camera handling altogether and link it to our blob objects right now?

there's definitely some evasion in the question, and it doesn't solve all of our problems immediately, but i think it's worth considering.

falling edge

sometimes coming home is the best idea, even if it's a new home.

it's been nearly a week since the last time i legitimately updated this thing, so how about an update?

remember my goals from last week? time got the better of me, and moving took up pretty much my whole weekend.

let's go line by line:
- migrate to a CMake build system -
decided that it was a procrastination idea. good in theory, but since our makefile works, let's not fuck with it. there are two more weeks in the project, and maker faire is this weekend. we need to get something up and working, stat.

- successfully thread blob detectors -
i tried two different approaches to this. the first involved creating a single pthread for each detector, a pthread for stitching, and having the main thread do bloblist sending. the threads were programmed to wait for each other to finish before looping around, so really only the blob detectors were running in parallel, while the stitcher and sender were separated but synchronized sequentially. the idea was to lay the framework for a pipeline without optimizing it. my first stab resulted in deadlock, and i spent about half an hour debugging it before calling it quits that night.
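for illustration, the shape of that first approach looks something like the sketch below -- class and function names are made up, not our actual code:

```
// sketch of the persistent-thread approach: detectors run in parallel, then
// everyone meets at a barrier so stitching/sending stays sequential.
// BlobDetector and detectFrame are stand-ins for our real objects.
#include <pthread.h>
#include <vector>

struct BlobDetector { void detectFrame() { /* find blobs in one camera */ } };

static pthread_barrier_t done_barrier;   // detectors finished this frame
static pthread_barrier_t next_barrier;   // stitcher finished, loop around

void* detectorLoop(void* arg) {
    BlobDetector* det = static_cast<BlobDetector*>(arg);
    for (;;) {
        det->detectFrame();                  // the only truly parallel part
        pthread_barrier_wait(&done_barrier); // tell the stitcher we're done
        pthread_barrier_wait(&next_barrier); // wait until it's safe to loop
    }
    return 0;
}

int main() {
    std::vector<BlobDetector> detectors(4);
    pthread_barrier_init(&done_barrier, 0, detectors.size() + 1);
    pthread_barrier_init(&next_barrier, 0, detectors.size() + 1);
    for (size_t i = 0; i < detectors.size(); ++i) {
        pthread_t tid;
        pthread_create(&tid, 0, detectorLoop, &detectors[i]);
    }
    for (;;) {
        pthread_barrier_wait(&done_barrier); // all detectors done
        // stitch the bloblists, send them, then release the detectors
        pthread_barrier_wait(&next_barrier);
    }
}
```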

my second approach involved creating and destroying pthreads for each detector every time around the loop. this approach is obviously less robust, since it introduces the overhead of creating and destroying the threads every iteration, but it meant that i could avoid mutexes and not worry about deadlock. it also meant that i was only threading the blob detection. i got it to work, but it appeared to perform worse in that it used the cpu more. matt brought up that the higher cpu usage might actually indicate that it's performing better -- processing more frames per second. that's under the assumption that we're getting what we pay for.
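the second approach, in roughly the same hypothetical shorthand:

```
// sketch of the spawn-per-frame variant: fresh threads each iteration, so no
// shared state to lock, at the cost of thread create/destroy overhead.
#include <pthread.h>
#include <vector>

struct BlobDetector { void detectFrame() { /* find blobs in one camera */ } };

void* runDetector(void* arg) {
    static_cast<BlobDetector*>(arg)->detectFrame();
    return 0;
}

int main() {
    std::vector<BlobDetector> detectors(4);
    std::vector<pthread_t> tids(detectors.size());
    for (;;) {
        for (size_t i = 0; i < detectors.size(); ++i)
            pthread_create(&tids[i], 0, runDetector, &detectors[i]);
        for (size_t i = 0; i < tids.size(); ++i)
            pthread_join(tids[i], 0);   // everyone finished this frame
        // stitch and send sequentially, then spawn fresh threads next frame
    }
}
```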

ultimately, i had to put this on the backburner so we have something to show at maker faire.

- finish hacking up the cameras we have and see about putting as many of those as we can in the table

just finished this today. bought 2 cameras from best buy, one from gamestop, and left one on hold at the other gamestop, unreasonably located less than a mile from the first. if anyone else was trying to get a ps3 eye webcam today in santa cruz, it's safe to say they had quite a hard time, considering what i bought was all of the stock at both locations i visited. i also cut my finger pretty mean on our janky rusty-looking dull ass xacto knife. eddie warned me. i didn't listen.

- bezel, if it arrives

still hasn't arrived. jas needs to call mcmaster carr tomorrow.

- ask dilad if/when they shipped the screen / tracking number

the dilad screen arrived friday, damaged during shipping. thankfully, michael from tcl in vancouver was readily available by phone and email, so we were able to send some pictures and figure out the best course of action for our project. the problem was that the shipping tube had buckled in the middle, denting the rolled screen, and the dent left a mark that repeated a few times down the length of the screen. michael advised us to proceed with applying the screen, since it wouldn't make sense for us to send it back, and there was the off chance that it would still work. it didn't, and the dents showed up as dark spots in the projection.

conveniently, monday wasn't a holiday in canada, so today we received a shipment of another roll, this time in the same kind of tube, but double boxed as well. it was definitely appropriate -- neither of us wanted this to happen again -- but comical nonetheless. there was enough undamaged screen from the first shipment to apply to the prototype without any wrinkles, and the results are pretty spectacular. the screen is completely visible with all of the lights on, though it's slightly annoying that the glass surface reflects the lights overhead.


- implement basic single-value x and y offsets for multiple detectors

done and done. next up is line offsets -- offset x a certain amount based on y, then offset y by a certain amount based on the original x (not offset). shouldn't be too hard. need to select a data structure.
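to make the idea concrete, here's a rough sketch -- the lookup tables are a placeholder for whatever data structure we end up picking:

```
// hedged sketch of line offsets: shift x as a function of y, then shift y as
// a function of the ORIGINAL x. the per-line tables are hypothetical.
#include <vector>

struct Point { float x, y; };

Point applyLineOffsets(Point p,
                       const std::vector<float>& xOffsetForRow,
                       const std::vector<float>& yOffsetForCol) {
    float origX = p.x;                                  // keep pre-offset x
    p.x += xOffsetForRow[static_cast<size_t>(p.y)];     // offset x based on y
    p.y += yOffsetForCol[static_cast<size_t>(origX)];   // based on original x
    return p;
}
```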


damn, i've already populated a giant post, and i haven't even come close to touching on what i wanted to write about in the first place. i think i'll double post for organization.

Thursday, May 21, 2009

goals for tomorrow

some personal, some whole team.

- migrate to a CMake build system
- successfully thread blob detectors
- finish hacking up the cameras we have and see about putting as many of those as we can in the table
- bezel, if it arrives
- ask dilad if/when they shipped the screen / tracking number
- implement basic single-value x and y offsets for multiple detectors

reading for tomorrow evening: tbeta calibration source code.

i'm honestly not looking forward to it.

tonight i read about makefiles, recursive make, and cmake. i think cmake is the way to go; it's a little more to type (i.e. cmake && runtest), but it seems more efficient.

one reasonable reference i found on makefiles was a bit redundant given what i already knew, but spelled out in further detail some things i only sort of understood.

that brought me to a quite dated but worthwhile paper on the problem with recursive makefiles, leading me to look into cmake.

so far we've been building all of our dependency lists by hand, which i don't think is the best idea. but it works right now, which makes pushing for cmake a lower priority.
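for reference, the kind of minimal CMakeLists.txt i have in mind would look something like this (target and file names are made up):

```
# hypothetical minimal CMakeLists.txt -- target and source names are made up
cmake_minimum_required(VERSION 2.6)
project(multitouch)

# cmake generates the dependency lists we currently maintain by hand
add_executable(runtest main.cpp BlobDetector.cpp BlobStitcher.cpp Calibrator.cpp)
target_link_libraries(runtest pthread)
```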

looking back at the list, it's probably in reverse order..

Tuesday, May 19, 2009

Calibration: My Greatest Fear

Matt's recent post on config files got me in a blogging mood, which is probably a bad idea. I have an interview tomorrow in Sunnyvale, so I need to wake up at 6, which is only 4 and a half hours from now. Might as well keep pushing.

At the end of last week, I wrote a basic parser for our config file that contains the calibration data we need to properly initialize BlobDetector and BlobStitcher objects. The parser is wrapped in a Calibrator object that populates an array of Camera objects with the calibration data. The Calibrator is then passed by reference to a function in each of the Blob objects (Blobjects?) to read data from the Camera array and set values accordingly.
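In rough outline (class and member names here are illustrative stand-ins, not our exact code), the plumbing looks like this:

```
// Rough shape of the calibration plumbing described above; names are
// illustrative stand-ins for the real classes.
#include <string>
#include <vector>

struct Camera { float xOffset, yOffset; /* clip polygon, etc. */ };

class Calibrator {
public:
    explicit Calibrator(const std::string& configPath) { parse(configPath); }
    const Camera& camera(size_t i) const { return cameras_[i]; }
private:
    void parse(const std::string& path) { /* fill cameras_ from config file */ }
    std::vector<Camera> cameras_;
};

struct BlobDetector {
    float xOffset, yOffset;
    void loadCalibration(const Calibrator& cal, size_t camIndex) {
        const Camera& cam = cal.camera(camIndex);
        xOffset = cam.xOffset;   // set values from the calibration data
        yOffset = cam.yOffset;
    }
};
```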

I spent yesterday and today reading about clipping and how to determine if a point is within a polygon, only to wind up using a slight modification of the pnpoly algorithm linked at the end of my last post.
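For reference, the core of pnpoly (essentially as it appears on Franklin's page linked in my last post -- ours differs only slightly) is just:

```
// The pnpoly crossing-number test, essentially as published on Franklin's
// page; our version is a slight modification of this.
int pnpoly(int nvert, float* vertx, float* verty, float testx, float testy) {
    int i, j, c = 0;
    for (i = 0, j = nvert - 1; i < nvert; j = i++) {
        if (((verty[i] > testy) != (verty[j] > testy)) &&
            (testx < (vertx[j] - vertx[i]) * (testy - verty[i]) /
                         (verty[j] - verty[i]) + vertx[i]))
            c = !c;   // toggle on each crossing of a rightward ray
    }
    return c;         // nonzero means the point is inside the polygon
}
```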

Needless to say, by the end of the evening, I had a basic setup with two cameras that only output blobs that were within a defined region of the capture space.

The milestone (if you can call it that) is promising, in that it means stitching blobs is doable, at least in a crude, by-hand-calibration, good-enough-to-work engineering sense. It doesn't mean we're ready for Maker Faire yet, nor does it mean we're in the clear for our presentations in June, but it does mean we're close.

My next biggest worry is dealing with calibration and the overlapping region of the cameras. Basic decisions are worrying me now, like: is it better if we always cut cameras off from the left, or do we give some cameras a full view and inhibit others more?

And we haven't even gotten into how we're calculating the calibration parameters based on user input ("touch here please", "swipe there please")...

Wednesday is the IEEE Dunk Tank. We're still trying to convince Petersen that IEEE is a legitimate pre-professional organization.

Tomorrow night I'll get offsets to work, and Wednesday I'll try to get touch point-to-image point mapping to work.

Oh, and we ordered our projection surface from Dilad in Canada.

I guess I'd better rest up for that interview tomorrow..

Sunday, May 17, 2009

clipping

dropping blobs should be easy, but i'm not really in a blogging mood, so i'm not going to bother explaining why it's actually not easy right now..

just here to post links for reference.. these are similar, so adaptations will be necessary. thought for food.

http://en.wikipedia.org/wiki/Weiler-Atherton

http://en.wikipedia.org/wiki/Clipping_(computer_graphics)

also a minor status update: the blob server is working nicely with a basic keyboard application that davide wrote.

eddie's working on tracking IDs, and i'm working on doing the stitch transformations, dropping, and then hopefully next week, calibration.

edit on monday -- one more http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html
