Friday, June 5, 2009

moment of truth

the past three days have been grueling.

i've slept 3 times since waking up wednesday morning (my birthday, no less), each time for two hours.

since wednesday, however, we've made immense progress. we went from six cameras to four for two reasons. first, we only have four cameras with lenses that focus reasonably; all of the others produce large, blurry blobs, which is a problem when two touch points are near each other, and a problem for consistency.. we don't want some areas of the table to perform better than others. the second reason is USB bandwidth.

wednesday night, I tried running stitching and detection on four cameras (while we were still planning on six), and the lag was unbelievable.. i think we hit 2 or 3 seconds per frame.

we've since added a PCI card, switched computers, switched drivers and switched kernels to get it to work at a reasonable speed..
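
for a ballpark sense of why the PCI card (i.e. a separate usb host controller) helps, here are some rough numbers with my own assumptions baked in: a ps3 eye streaming raw frames at 640x480, 60fps, one byte per pixel is 640 * 480 * 60 ≈ 18.4 MB/s. usb 2.0 manages maybe 35-40 MB/s of real throughput per host controller, so two cameras can saturate a single bus, and four on one bus is hopeless. spreading the cameras across controllers gives each one its own slice of bandwidth.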

last night at around 2am (technically friday morning) we had a huge morale booster -- matt and jas got projection distortion working, meaning that we can definitely get everything up smoothly for the presentation and design competition. there's a $2000 prize for the winning team, so we're hoping to win and have a nice team celebration with that.

right now, 3/5 of the team is resting up, jas is working on the math to execute display correction, and i'm waiting on a key from BELS to swap out the crappy ATI video card for a nice nVidia card, which gives me a moment to blog at this crucial point.

our check-off is in 3 hours. between now and then, i need to make stitching work across the whole table.

doable, but a tight squeeze. i should probably go look for someone at BELS right about now.. or bob vitale.

Wednesday, May 27, 2009

rising edge

this evening, in a brief reality check for maker faire, jas asked an important question for this stage of the process. something to the effect of "at this point, how hard would it be to just stitch the images and hack tbeta to read the stitched stream instead of a camera?"

we discussed it briefly, but it became a backburner thought pretty quickly.


after arriving at my new room in my new home, and doing my usual google reader thing, an idea struck me.

how hard would it be to just hack tbeta to do away with the camera handling altogether and link it to our blob objects right now?

there's definitely some evasion in the question, and it doesn't solve all of our problems immediately, but i think it's worth considering.

falling edge

sometimes coming home is the best idea, even if it's a new home.

it's been nearly a week since the last time i legitimately updated this thing, so how about an update?

remember my goals from last week? time got the better of me, and moving took up pretty much my whole weekend.

let's go line by line:
- migrate to a CMake building system -
decided that it was a procrastination idea. good in theory, but since our makefile works, let's not fuck with it. there are two more weeks in the project, and maker faire is this weekend. we need to get something up and working, stat.

- successfully thread blob detectors -
i tried two different approaches to this. the first involved creating a single pthread for each detector, a pthread for stitching, and having the main thread do bloblist sending. the threads were programmed to wait for each other to finish before looping around, so really only the blob detectors were running in parallel, while the stitcher and sender were separated but synchronized sequentially. the idea was to lay the framework for a pipeline without optimizing it. my first stab resulted in deadlock, and i spent about half an hour debugging it before calling it quits that night.
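
for posterity, here's roughly the shape of that first approach in hypothetical code (not our actual source, and i'm collapsing the stitcher and sender into the main thread for brevity). a pthread_barrier makes the "wait for each other" part hard to get wrong -- hand-rolled waiting is exactly where deadlock creeps in:

// sketch of approach one: persistent detector threads, lockstep per frame.
// two barrier waits per loop: one to signal "detection done", one to hold
// the detectors until stitching/sending has finished.
#include <pthread.h>

const int NUM_DETECTORS = 4;
pthread_barrier_t frame_barrier;   // parties: detectors + main thread

void* detector_loop(void* arg) {
    long id = (long)arg;
    for (;;) {
        // detect_blobs(id);                   // per-camera blob detection
        pthread_barrier_wait(&frame_barrier);  // phase 1: detection done
        pthread_barrier_wait(&frame_barrier);  // phase 2: wait out the stitch
    }
    return NULL;
}

int main() {
    pthread_barrier_init(&frame_barrier, NULL, NUM_DETECTORS + 1);
    pthread_t threads[NUM_DETECTORS];
    for (long i = 0; i < NUM_DETECTORS; ++i)
        pthread_create(&threads[i], NULL, detector_loop, (void*)i);
    for (;;) {
        pthread_barrier_wait(&frame_barrier);  // all detectors finished
        // stitch_blobs(); send_bloblist();    // sequential stage
        pthread_barrier_wait(&frame_barrier);  // release detectors for next frame
    }
    return 0;
}

the nice property is that splitting the stitcher out into its own thread later is just one more party at the barrier.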

my second approach involved creating and destroying pthreads for each detector every time around the loop. the approach is obviously less efficient, since it introduces the overhead of creating and destroying the threads every time around, but it meant that i could avoid mutexes and not worry about deadlock. it also meant that i was only threading the blob detection. i got it to work, but it appeared to perform worse in that it used the cpu more. matt brought up that this might actually indicate that it's performing better -- processing more frames per second would account for the higher cpu usage. that's under the assumption that we're getting what we pay for.
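
and the second approach, same disclaimers -- create, join, destroy every frame. no shared state while the threads run, so no mutexes and no deadlock, just the per-frame creation tax:

// sketch of approach two: spawn and join a thread per detector each frame.
#include <pthread.h>

const int NUM_DETECTORS = 4;

void* detect_one(void* arg) {
    // detect_blobs((long)arg);   // per-camera blob detection
    return NULL;
}

int main() {
    for (;;) {                     // once around per frame
        pthread_t threads[NUM_DETECTORS];
        for (long i = 0; i < NUM_DETECTORS; ++i)
            pthread_create(&threads[i], NULL, detect_one, (void*)i);
        for (int i = 0; i < NUM_DETECTORS; ++i)
            pthread_join(threads[i], NULL);   // wait for all detectors
        // stitch_blobs(); send_bloblist();   // sequential, as before
    }
    return 0;
}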

ultimately, i had to put this on the backburner so we have something to show at maker faire.

- finish hacking up the cameras we have and see about putting as many of those as we can in the table

just finished this today. bought 2 cameras from best buy, one from gamestop, and left one on hold at the other gamestop, unreasonably located less than a mile from the first. if anyone else was trying to get a ps3 eye webcam today in santa cruz, it's safe to say they had quite a hard time, considering what i bought was all of the stock at both locations i visited. i also cut my finger pretty mean on our janky rusty-looking dull ass xacto knife. eddie warned me. i didn't listen.

- bezel, if it arrives

still hasn't arrived. jas needs to call mcmaster carr tomorrow.

- ask dilad if/when they shipped the screen / tracking number

the dilad screen arrived friday, damaged during shipping. thankfully, michael from tcl in vancouver was readily available by phone and email, so we were able to send some pictures and figure out the best course of action for our project. the problem was that the tube had buckled in the middle, denting the rolled screen. the dent left a mark that repeated a few times down the length of the screen. michael advised us to proceed with applying the screen, since it wouldn't make sense for us to send it back and there was the off chance that it would still work. it didn't, and the dents showed up as dark spots in the projection. conveniently, monday wasn't a holiday in canada, so we received a shipment today of another roll, this time with the tube double boxed as well. it was definitely appropriate -- neither of us wanted this to happen again -- but comical nonetheless.

there was enough undamaged screen from the first shipment to apply to the prototype without any wrinkles, and the results are pretty spectacular. the screen is completely visible with all of the lights on, though it's slightly annoying that the glass surface reflects the lights overhead.


- implement basic single-value x and y offsets for multiple detectors

done and done. next up is line offsets -- offset x a certain amount based on y, then offset y by a certain amount based on the original x (not offset). shouldn't be too hard. need to select a data structure.
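
to pin down what i mean by line offsets, a hypothetical sketch (parameter names and the linear form are my guesses at how we'll store this, not settled code):

// x gets shifted as a function of y, then y gets shifted as a function of
// the ORIGINAL x -- so the original x has to be saved before it's clobbered.
struct LineOffset {
    float x0, x_per_y;   // x' = x + x0 + x_per_y * y
    float y0, y_per_x;   // y' = y + y0 + y_per_x * x_original
};

void apply_line_offset(const LineOffset& o, float& x, float& y) {
    float x_original = x;
    x += o.x0 + o.x_per_y * y;
    y += o.y0 + o.y_per_x * x_original;
}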


damn, i've already populated a giant post, and i haven't even come close to touching on what i wanted to write about in the first place. i think i'll double post for organization.


Thursday, May 21, 2009

goals for tomorrow

some personal, some whole team.

- migrate to a CMake building system
- successfully thread blob detectors
- finish hacking up the cameras we have and see about putting as many of those as we can in the table
- bezel, if it arrives
- ask dilad if/when they shipped the screen / tracking number
- implement basic single-value x and y offsets for multiple detectors

reading for tomorrow evening: tbeta calibration source code.

i'm honestly not looking forward to it.

tonight i read about makefiles, recursive make, and cmake. i think cmake is the way to go; it's a little more to type (i.e. cmake && runtest), but it seems more efficient.

one reasonable reference i found on makefiles was a bit redundant given what i already knew, but spelled out in further detail some things i only sort of knew.

that brought me to a quite dated but worthwhile paper on the problem with recursive makefiles, leading me to look into cmake.

so far we've been building all of our dependency lists by hand, which i don't think is the best idea. but right now it works, which makes pushing for cmake now a lower priority.

looking back at the list, it's probably in reverse order..

Tuesday, May 19, 2009

Calibration: My Greatest Fear

Matt's recent post on config files got me in a blogging mood, which is probably a bad idea. I have an interview tomorrow in Sunnyvale, so I need to wake up at 6, which is only 4 and a half hours from now. Might as well keep pushing.

At the end of last week, I wrote a basic parser for our config file that contains the calibration data we need to properly initialize BlobDetector and BlobStitcher objects. The parser is wrapped in a Calibrator object that populates an array of Camera objects with the calibration data. The Calibrator is then passed by reference to a function in each of the Blob objects (Blobjects?) to read data from the Camera array and set values accordingly.
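
In outline, the arrangement looks something like this (hypothetical signatures, a sketch of the idea rather than a copy from our repo):

const int MAX_CAMERAS = 6;

struct Camera {
    int id;
    float x_offset, y_offset;      // calibration data from the config file
    // clipping polygon, overlap info, etc. would live here too
};

class Calibrator {                 // wraps the config file parser
public:
    bool parse(const char* path);  // populate the Camera array
    const Camera& camera(int i) const { return cameras[i]; }
private:
    Camera cameras[MAX_CAMERAS];
};

class BlobDetector {
public:
    // each Blob object takes the Calibrator by reference and pulls
    // the values for its own camera
    void calibrate(const Calibrator& cal, int camera_id);
};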

I spent yesterday and today reading about clipping and how to determine if a point is within a polygon, only to wind up using a slight modification of the pnpoly algorithm linked at the end of my last post.
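
For reference, the pnpoly algorithm is tiny -- this is essentially W. Randolph Franklin's version as linked, which toggles inside/outside on each edge crossing of a horizontal ray from the test point (our modification is mostly in how the polygon and points are stored):

// point-in-polygon by ray crossing, after W. Randolph Franklin's pnpoly.
// returns nonzero if (testx, testy) is inside the nvert-vertex polygon.
int pnpoly(int nvert, float* vertx, float* verty, float testx, float testy)
{
    int i, j, c = 0;
    for (i = 0, j = nvert - 1; i < nvert; j = i++) {
        if (((verty[i] > testy) != (verty[j] > testy)) &&
            (testx < (vertx[j] - vertx[i]) * (testy - verty[i]) /
                         (verty[j] - verty[i]) + vertx[i]))
            c = !c;   // toggle on each edge crossing
    }
    return c;
}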

Needless to say, by the end of the evening, I had a basic setup with two cameras that only output blobs that were within a defined region of the capture space.

The milestone (if you can call it that) is promising, in that it means stitching blobs is doable, at least in a crude, by-hand-calibration, good-enough-to-work engineering sense. It doesn't mean we're ready for Maker Faire yet, nor does it mean we're in the clear for our presentations in June, but it does mean we're close.

My next biggest worry is dealing with calibration and the overlapping region of the cameras. Basic decisions are worrying me now, like is it better if we always cut cameras off from the left, or do we give some full view and inhibit others more?

And we haven't even gotten into how we're calculating the calibration parameters based on user input ("touch here please" , "swipe there please")...

Wednesday is the IEEE Dunk Tank. We're still trying to convince Petersen that IEEE is a legitimate pre-professional organization.

Tomorrow night I'll get offsets to work, and Wednesday I'll try to get touch point-to-image point mapping to work.

Oh, and we ordered our projection surface from Dilad in Canada.

I guess I'd better rest up for that interview tomorrow..

Sunday, May 17, 2009

clipping

dropping blobs should be easy, but i'm not really in a blogging mood, so i'm not going to bother explaining why it's not right now..

just here to post links for reference.. these are similar, so adaptations will be necessary. thought for food.

http://en.wikipedia.org/wiki/Weiler-Atherton

http://en.wikipedia.org/wiki/Clipping_(computer_graphics)

also a minor status update: the blob server is working nicely with a basic keyboard application that david wrote.

eddie's working on tracking IDs, and i'm working on doing the stitch transformations, dropping, and then hopefully next week, calibration.

edit on monday -- one more http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html


Wednesday, May 6, 2009

eddie fixed the problem i described in the life of a blob by adding a solid black border to the images before running the blob detection algorithm. slightly genius maneuver.
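
for reference, here's my sketch of the gist in opencv C API terms -- this is reconstructed from memory, not eddie's actual code, so take the details with a grain of salt:

// pad the camera frame with a constant black border so that splotches
// touching the frame edge become closed regions that cvBlob will report.
// blob coordinates come back in padded space: subtract `pad` from x and y
// to get back to the original frame.
#include <opencv/cv.h>

IplImage* with_black_border(IplImage* frame, int pad)
{
    IplImage* padded = cvCreateImage(
        cvSize(frame->width + 2 * pad, frame->height + 2 * pad),
        frame->depth, frame->nChannels);
    cvCopyMakeBorder(frame, padded, cvPoint(pad, pad),
                     IPL_BORDER_CONSTANT, cvScalarAll(0));
    return padded;
}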

a couple of random, possibly useful resources

http://en.wikipedia.org/wiki/Kalman_filter

which i discovered from http://bradhayes.info/thesis/

Monday, May 4, 2009

the life of a blob

admittedly, i've spent the past five minutes browsing facebook instead of writing in this little blog box.

i'm a little disappointed that the rest of the team isn't really keeping up on the blog front. we've made a lot of progress in the past month, and while it doesn't always seem like there's something to say, there probably is.

last night i took a scalpel to eddie's blob detection code (which works!) and turned it into an object instead of a single program. my motivation was to make the blob server use the blob detection object in conjunction with a blob stitching object (yet to be written) to create the data it needs to send out to clients.

the surgery went well, and i made sure that the blob detection still worked when i was done with it. however, it was still pretty ugly, so we spent an hour or so today going through and tidying things up, modularizing what needed to be modularized, until we got interrupted by my desire for food and eddie's desire to walk home before it started raining.

now, we have blob detection, and we have more than a few properly filtered cameras, so i've started to investigate blob stitching more thoroughly.

with just one camera, and a simple test, my entire conception of what a blob is was thrown out the window.

as it turns out, if what would otherwise be considered a blob happens to be touching the border of the camera, the cvBlob library says "ohai, i can haz solid borders.. i aint no blabs", which, in english, means: white pixel splotches in an image are only blobs if they don't touch the edge of the image.

the inherent problem is that if a single blob is touching the border of two cameras and is not fully contained by either, then it never gets detected.

so, about half of my worldview is completely fucked right now.. this means one of two things. either we go into the cvBlob library and make some changes (extremely undesirable), or we ensure that adjacent cameras overlap by at least a blob's width (mechanically difficult, poses philosophical issues).

i'm going to investigate the second option here briefly. when we first started the project, jose mentioned the desire to have a swiping type function where the user would use the side of his hand (pretend like you're playing rock paper scissors, make a rock with your right hand, and put it down on the table. now unclench your fist and straighten your hand.. then swipe your hand to the left.)

if this sort of blob behavior is to be expected or accepted, then any notion of blob size is now completely out the window, making the second option ineffective.

so now it's looking like the time to put on my white hat and get to hacking.. i wonder what licensing model they're using...

Sunday, April 26, 2009

TUIO output from Tbeta

this is more for my own reference than anything, but here is some TUIO output from tbeta.

interestingly enough, when i tried the new version from source (graciously provided by cerupcat over at NUI), fseq was incrementing regardless of set and alive, while here it's stuck at 0 no matter what... making me wonder how important that message is for the protocol, especially considering MMF has been working fine thus far. i thought about posting this last night, but hesitated.. i suppose now i'll update with the output from my laptop with fseq increasing when i get home...


binding to port 3333
/tuio/2Dcur alive
/tuio/2Dcur fseq 0
/tuio/2Dcur set 7 0.77944 0.158674 0. 0. 0. -0.085531 -0.021343
/tuio/2Dcur set 8 0.475432 0.70227 0. 0. 0. -0.03334 -0.037312
/tuio/2Dcur set 9 0.444521 0.990623 0. 0. 0. -0.029568 -0.024974
/tuio/2Dcur set 12 0.703296 0.162337 0. 0. 0. -0.009897 -0.008351
/tuio/2Dcur set 14 0.842647 0.155566 0. 0. 0. -0.009897 -0.008351
/tuio/2Dcur alive 7 8 9 12 14
/tuio/2Dcur fseq 0
/tuio/2Dcur set 7 0.782767 0.162893 0. 0. 0. -0.085738 -0.025441
/tuio/2Dcur set 8 0.475432 0.70227 0. 0. 0. -0.033139 -0.033217
/tuio/2Dcur set 9 0.444521 0.990623 0. 0. 0. -0.029568 -0.024974
/tuio/2Dcur set 10 0.388484 0.996394 0. 0. 0. -0.03232 -0.016834
/tuio/2Dcur set 12 0.697136 0.1665 0. 0. 0. -0.013265 -0.0125
/tuio/2Dcur set 13 0.260356 0.998814 0. 0. 0. -0.013472 -0.016598
/tuio/2Dcur alive 7 8 9 10 12 13
/tuio/2Dcur fseq 0


edit -- decided to chop this WAY down, since i really only need a few lines for reference..

in examining this more, i've noticed some interesting traits of tbeta (the posted binary anyways).

the 2Dcur set message in the TUIO protocol only takes six arguments, not eight, which is perplexing, though i vaguely remember hearing that the extra two numbers describe a blob's area.

also, the three 0. arguments, according to the protocol, should be change in x since last frame, change in y since last frame, and acceleration based on the previous two values. apparently tbeta isn't (wasn't?) calculating these values.
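
for reference, the set message as the spec defines it is:

/tuio/2Dcur set s x y X Y m

where s is the session id, x and y are the normalized (0..1) position, X and Y are the velocity components, and m is the motion acceleration -- six arguments total. that matches the three 0. slots above being X, Y, and m, and leaves the last two numbers as tbeta extras, consistent with the blob-area memory.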

Saturday, April 25, 2009

expo & crunchtime

this past wednesday thru friday, i attended expo '74, a convention put on by the folks at cycling '74, the people behind max/msp/jitter. i expected the endeavor to be entirely tangential to my work on this project, but it turned out to provide us with another possible resource for procuring a projection screen, since RP Visuals is over our budget.

the event was also entirely revitalizing for me. i met a lot of incredibly nice people from all over the world who were just as geeky about music and max/msp as i am. some were big names, like robert henke, co-creator of ableton live, and many were small names that i had never heard before, but were inspiring nevertheless. there were even some folks from UCSC there who i had not met before.

i'm now a lot more motivated, and it's time to crack down on these apps.

we still haven't received the filters. i'm getting worried, and i'm incredibly pissed at myself for not having the sense to have them rush shipped, especially after what happened last time.

wednesday evening we met with some nui group folks and drank free beer from the interactive displays conference. seeing the nui folks in person was a cool experience, and we got to put the pressure on them to get us the tbeta source code. finally, today, i've checked it out of their SVN. now comes the fun part where i attempt to merge it with our own svn so that i can butcher it.

Friday, April 17, 2009

overdue update

i haven't updated in quite a while, and to a certain extent, it's a reflection of my current mind state.

as i mentioned, we had the fancy internet/phone outage which pretty well destroyed any momentum i had going. thankfully, that wasn't the case for the whole team.

david did suggest that we take a break, and to that extent, it doesn't seem like the lack of productivity is a bad thing.

i think part of it is that my workload this quarter is so inconsistent with what i had going on fall and winter. i went from being enrolled in 20 credits, then 22 credits, to being enrolled in 13 credits, and though i'm still in lab just as much, i feel like not nearly enough is getting done.

but things are getting done.

we met on wednesday night to discuss our software framework and decided on c++ as our language, thanks in no small part to opencv (which is also c++).

we also discussed our data structures and object model. eddie is going to pass me an object that contains a camera reference and a set of blob objects. i'll use the camera reference to look up calibration data to convert the blobs from local blobs (per camera) to global blobs for the whole table.
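
in hypothetical code (made-up names, just to pin down the shape of the handoff):

#include <vector>

struct Blob   { float x, y; };                        // blob center, camera-local
struct Camera { int id; float x_offset, y_offset; };  // calibration lookup

struct CameraBlobs {                                  // what eddie hands me
    int camera_id;
    std::vector<Blob> blobs;
};

// local (per-camera) blob -> global (whole-table) blob via calibration data.
// single-value offsets for now; fancier mappings come with real calibration.
Blob to_global(const Blob& local, const Camera& cam) {
    Blob g = { local.x + cam.x_offset, local.y + cam.y_offset };
    return g;
}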

we also received shipment of the new, correctly sized aluminum channel that we're using as a bezel. last night we put it all together on the prototype with a full strip of ir leds on each long side. the effect this had on our touch detection is remarkable. the amount of pressure required to generate blobs is much less, and even matt's cold, bloodless hands generated blobs.

heh. i should clarify: in previous tests of our prototype, matt's fingers have consistently been very poor at generating blobs upon touch. they're the worst in our whole group, but that's a great asset to the project, since it forces us to consider the spectrum of users who will be using the table. we've also discovered that our fingers work better as input devices when they're hotter or when they're wet, hence the joke about matt's hands being cold and bloodless.

we're at another point where we're waiting on shipments, again of the ir bandpass filters for the webcams. it's putting a slight delay on eddie's, zach's, and my work, but we're diverting ourselves in the meantime to other tasks that need to get done.

we have all of the wood, and if all goes well, tonight we will complete the construction of the final table, sans glass, of course. i've been lending my hands to the woodworking so that i feel productive, but i still feel like i'm not doing much.

i still need to finalize this letter to porter college to get a nice DSLR camera for our lab (and for IEEE after this quarter) for documentation.. i'll be helping build the table tonight though.

next time i should remind myself to touch on our relations with projection screen suppliers and showcase shower door company.

time for woodworking.

Thursday, April 9, 2009

reflecting on first meeting w/ professors

we've gotten a bit done since monday, but it feels like more than it actually is.

yesterday was our first meeting with our professors. tuesday evening (around 11pm really), matt, jas and i went into the nuigroup IRC room where cerupcat gave me the idea of installing Tbeta and Max Multitouch Framework on my laptop. so we stayed a bit late and sure enough, i got it up and running.

this made our meeting much more exciting -- we had a legitimate application demo for our professors, as well as refreshments and two weeks of progress to report since we started during the break.

the following is a video that illustrates some of what we showed professor laws and david munday, the course's TA (professor petersen wasn't there). in addition to what we show below, we were more informal and thorough in showing the behavior of tbeta and the ir emission strip.



today we received shipments of the LED reel and the aluminum channel/bezel. the channel turned out to be the wrong size, but we're going to see if there's anything we can do to use it.

other than the shipments, today was very slow-- phone and internet were down for most of the morning. someone disconnected the tubes.

the battery is low on my laptop and i left my charger in lab. i'll post more about the specifics of our prototype hardware next time.

Monday, April 6, 2009

weekend update #1

first weekend of the quarter down..

climbed tree nine on friday (slightly irrelevant, occurred with 3/5 of the team, made me feel good as a human being in a way that i don't get working in a lab)

i spent saturday considering chromium and its implications for image stitching and blob detection.

i spent today showing off the lab (new video of touch images soon), reading Learning OpenCV, and assisting a recording session.

the reading was more like skimming, but gave me some food for thought regarding matrix operations, convolution, and filtering. participating in the recording session was refreshing. a bunch of people were sitting in, and we all collaborated to get a remarkably decent sound out of a couple of takes using a few mics. definitely an experience i miss.

i also got to check out some of the work they're doing in the electronic music studios with their reactable project. so far they have a camera and image recognition software that is doing some serious fiducial orientation work. in the next month or so, a friend will be performing a piece in which he plays dominoes against an opponent. the dominoes are fiducials for the reactable, making music. their image detection is pretty well handled, though the composition interface and the IR image detection are not yet implemented.

back to scimp, though...

although matt's research indicates that chromium itself is not likely useful, considering it as an option has opened my mind to some alternative ideas regarding webcam stitching. the idea is pretty well described in matt's post and my comment in response.

the idea is to stitch blobs rather than the camera images. run a thread for each camera that does blob detection (maybe even on the gpu?), then take the aggregate of all blobs and discard duplicates based on calibration parameters. boundary conditions require special cases (depending on the type of overlap in the image), but these cases are few enough for the application to stay reasonably simple.
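
a quick sketch of the duplicate-discard step (names and threshold invented for illustration): once every blob is in global coordinates, two blobs from different cameras that land within some small distance of each other are presumed to be the same touch, and one gets dropped:

#include <vector>
#include <cmath>

struct GBlob { int camera_id; float x, y; };  // blob already in global coords

// naive duplicate discard: if a blob from one camera is closer than min_dist
// to an already-kept blob from a different camera, drop it.
std::vector<GBlob> dedupe(const std::vector<GBlob>& in, float min_dist) {
    std::vector<GBlob> out;
    for (size_t i = 0; i < in.size(); ++i) {
        bool dup = false;
        for (size_t j = 0; j < out.size(); ++j) {
            float dx = in[i].x - out[j].x, dy = in[i].y - out[j].y;
            if (in[i].camera_id != out[j].camera_id &&
                std::sqrt(dx * dx + dy * dy) < min_dist) {
                dup = true;
                break;
            }
        }
        if (!dup) out.push_back(in[i]);
    }
    return out;
}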

the next few days will bring meetings and hopefully shipments. our gantt chart indicates that we should be finished with the prototype by tuesday, but we don't expect to hit that date due to shipping on our LEDs and our bezel. i do think, though, that we've managed to get started on the software research sufficiently early to meet other key deadlines in our schedule.



looking at it from another perspective, we're one out of ten weeks in.. have we really completed 10% of our project?

maybe..

Friday, April 3, 2009

filter woes and broken silicon

we finally received our ir bandpass filters today. we had a rather prolonged shipping issue with our supplier, omegabob2's ebay store, a resource for omega optics filters: my old mailing address propagated through ebay to the first filter purchase, but my new address went along with the second one. there were special instructions to ship both together using UPS 2-day, but the package went to the old address and arrived there on monday. when i called omega filters on monday to get the status, the person handling the sale was out of town for the week. i wound up getting some bogus tracking number from fedex before they told me they shipped it to my old address.

a bit of a pain, but this morning i went over there, knocked on the door, and got my package.

the filters were a little too thick for the eyetoy, and we managed to bust the ccd, so now the first order of business tomorrow morning is to buy a new webcam.

jas made more progress on the prototype and we had the realization that the only things between us and a finished prototype are the projection surface (which we can use paper for in the meantime) and the IR led source, which we have yet to order.

another exciting realization is that our gantt chart does not count weekends as work-days. we're actually not scheduled to finish the prototype until next tuesday.

week one comes to completion tomorrow. i feel like we're doing pretty well.

Sunday, March 29, 2009

Brief thoughts on glass

i was going to post a brief thought on glass, but in checking back at what i've posted here so far, i realize that it would be out of context. so, here's the low down.

on thursday, we received and tested the glass from showcase shower door company with no luck. it simply doesn't exhibit the same TIR properties as acrylic.

we also went to classic glass with my laptop, the hacked ps3eye, and the ir emission circuit to test a variety of glass thicknesses, all of which completely and utterly failed at reflecting IR back to the webcam. the eye was unfiltered -- we ordered an 850nm and a 940nm filter on thursday as well.

ultimately, we expect that this will mean we'll have two surfaces: glass for the structural support, and acrylic for the touch sensing. acrylic's susceptibility to scratching remains a problem.


yesterday was the IEEE Region 6 Central Area meeting. i spent friday picking up the micromouse maze from santa clara university with craig. the meeting was an exciting experience, and far less taxing than i had expected. i wound up winning second prize in the student paper competition for a paper on my convolution verilog module from 125.

anyways, for the brief thought that prompted me to pull up the blog posting engine, it occurred to me that perhaps we should investigate whether or not FTIR works for visible light in glass and acrylic, and how many different display angles we can find on LEDs from suppliers.

it wouldn't benefit our final product for masc because of the cost and time efficiency of the ribbons from environmentallights.com. but since we're here doing the research, i feel like we might as well run these tests so we can say with certainty whether or not glass really is as physically unfeasible as we currently think it is.

Projection Surface tests

we received two essential components for testing yesterday: the IR LED ribbon sample kit from EnvironmentalLights.com, and samples of the Digiline-White and the Digiline-Contrast projection screens from IFOHA in Germany.

matt and i tried a simple projection test, holding the samples up in front of the projector to see which one looked the best. we also used the pvc sample which professor renau obtained. previously, the pvc was our best option for display, and it probably still looks the best, but it's damn horrible at passing infrared from the FTIR through. the videos below show off the FTIR tests through the three media. in the first video we (think we) are using 850nm LEDs, while in the second video we are using 940nm LEDs.





you'll notice that the digiline contrast passes 850nm about as well as the digiline white (the white is just marginally better), but it doesn't pass the 940nm nearly as well. both digiline screens work much much better than the pvc material.

personally, i also thought that the digiline white looked more appropriate than the digiline contrast. it was brighter and looked less glittery.

everyone on the team should view all three before we make a decision, but white seems to make the most sense to me, both for its IR passing properties and its display quality.

the quarter officially begins tomorrow, so now it's time to read up on OpenCV. also, we should receive the IR bandpass filters tomorrow, so the prototype is not too far away from being finished.

until next time,
-k

Thursday, March 26, 2009

the beginning of the beginning of the end

progress is seen on our team blog in the form of FTIR through an unfiltered webcam.

our (and more specifically, my) progress today is as follows:

- built ir emission apparati from provided LEDs and leftover mechatronics parts. to be specific, 270 ohm resistors, a couple of 7805's, some capacitors, and seemingly excessive surface-mount sized solder. (quick sanity check on the resistor choice after this list.)

- tested said emission apparati with a sheet of acrylic i've had sitting in IEEE for the past year. i've been meaning to turn it into a stribe enclosure with the laser cutter, but the full sheet is helpful, so my procrastination has actually paid off.
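
quick sanity check on those resistor values, with my own assumed numbers: taking roughly a 1.5V forward drop for an 850nm LED and one LED per 270 ohm resistor off the 7805's 5V rail,

I = (5V - 1.5V) / 270 ohms ≈ 13 mA

which sits comfortably under the typical 20 mA continuous rating for a 5mm LED, so nothing should cook.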

tomorrow:

- receive glass "samples" from chris at showcase shower door, santa cruz, ca. these samples are 18"x24" tempered sheets with finished edges.

this essentially prompts us to acquire index-matching epoxy. jas will be working on the prototype's structure in the meantime.

the structure will be a basic trapezoid, with a flat surface when rested on the short parallel side. the angled edge will be at 60º, so that the entire apparatus can rotate and rest on the side with a 60º drafting table style interface. the whole thing will act similar to a podium, so tilting it will not be a huge ordeal. the distance between the glass and the projector will be minimal.

we will also post videos of tests with the variety of 850nm LEDs we currently have in lab: 5mm radial, 3mm radial, and these weird black-housed, lower power variety.

as a side note of clarification on that last bunch of LEDs, we tested these strange black-housed LEDs at the end of the day, and 6 of them turn out to be less bright than the 2 5mm LEDs we tried in the video at the beginning of this post. part of me wonders about superpositions and beat frequencies, but i'm pretty sure we'd still be seeing a lot of light even in the presence of these phenomena.

eagerly awaiting tomorrow,
-k

Tuesday, March 24, 2009

grades

got a b in feedback. nothing to worry about.

i didn't get in to mills, so i guess i'm looking for a job.

got an a+ in architecture, though. thinking about tutoring for 112, but want to seriously evaluate my schedule and my workload before making any decisions.

tomorrow we ramp up to scimp full time.

Saturday, March 21, 2009

the calm before the storm

finished finals yesterday. despite my thorough understanding of feedback control systems, i'm now worried that i won't pass the class because of my homework grade. i didn't turn in the last two assignments because i got sick and chose to sleep instead of getting them done. when i looked at my "current grade" at the end of the final, i was at 38.1% (out of a possible 50%), and i know i missed a couple of things on the final. if the grade scale is straight, then i need at least a 70% on the final to have a comfortable c grade. hopefully i didn't screw it up too hard.

this is the calm before the storm. i've spent all day in la, received an award for our ieee student branch, and had a nice little meetup with other monome users in the la area.

in browsing around for parts to order, i've just now realized that my task of webcamera stitching is going to be substantially more difficult than i had previously expected.

not only am i going to have to recompose the image (what i naively perceived to be the extent of the problem), but i'm going to have to devise a method of calibrating the cameras so that the image recomposition doesn't contain any redundancies from the cameras' overlapping fields of view.

my intuition is telling me that a set of blocks with protruding silicone fiducials is going to be the best approach for calibration...

if there's guaranteed overlap and only 4 webcams, then i think only 5 markers are needed to fully calibrate. if i'm wrong, then up to 9 markers might be needed for 4 cameras. for 6 webcams, up to 12 may be needed.

at the end of the day, the projectors can display an image instructing the person performing the calibration where to place these markers, but that's assuming that the projectors are independently calibrated..

i'll close this with a few of my curiosities:
- will the table be rigid enough so that the cameras can be calibrated once and left in place?
- does it make more sense to figure out a calibration method before installing the IR bandpass filters?
- does the filter take advantage of polarization? would it be possible to make a filter out of two polarized lenses? if so, that implies we could make a variable bandwidth filter -- could we calibrate in the visible spectrum and then implement the IR-pass by twisting the lens?

Tuesday, March 3, 2009

eye see hot

update -- matt got the basic webcam display functionality working, but the lag is still pretty intense. around 1 second.

makes for particularly cool fractal feedback videos that propagate inward..

ps3 eye experimentation

we've unboxed the ps3 eye and have started futzing around with it on matt's ubuntu box in an attempt to get basic drivers working.

so far, the camera has worked fine at 30fps on my macbook pro using macam. unfortunately, macam didn't provide a means for bumping the frame rate up to 60fps, so we'll just have to assume that the functionality works out of the box for now.

matt's already got the audio from the camera to play back in vlc, albeit with an incredible latency of around 5 seconds.

i have also started working on my final project for ce125, which is a convolution module in verilog. my courseload is starting to drive me a little bit insane, but to be completely honest, i'm surprised i made it this far into the quarter without losing my shit. sure, i'm behind on a couple of programs for ce110, but i've been getting pretty much straight a's on the homeworks and quizzes, plus i got 105% on the midterm, so i shouldn't be worrying too much.

the frustration seems to be coming from my existing projects that must be completed before i move in to scimp full force. the frustration also stems from excitement, though, as i can't wait to get to the meat of this project.

they say good things come to those who wait. so i want patience, damnit, and i want it now!

at least squarepusher is here to calm me down...

Monday, March 2, 2009

sound on sight

welcome.

this entry marks the beginning of an academic journal kept by kevin nelson for the santa cruz interactive multitouch platform (scimp), a project for the masc group (micro architecture santa cruz) for research in interactive vlsi design.

this document exists primarily to document musings, notes, and slight tangents that i encounter as the project develops.

stay tuned. full time research starts at the end of march.

kevin // soundcyst => scimp
