Chromium-inspired blob detection

I thought I would write this up as a separate post; it felt awkward to work into the one on Chromium.

On Saturday I was talking to Eddie about what implications Chromium would have for the image stitching. I didn't think it would have any direct effect on how to go about the stitching, but it did give me another idea.

It is actually not a given that stitching should come before blob detection.

You could either
1) Combine the N camera images into one big image and detect blobs in that. In the combined image, overlapping areas between cameras have been discarded.
2) Detect blobs in each camera feed, then combine all of the blob data. Redundant blobs can be discarded.
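A minimal sketch of option 1, with made-up frame sizes, offsets, and a 40 px overlap (none of these are measured values from our rig):

```python
# Hypothetical sketch of option 1: trim each camera frame's overlapping
# strip, then paste the frames into one large canvas for blob detection.
import numpy as np

CANVAS_W, CANVAS_H = 1240, 480  # two 640x480 frames with a 40 px overlap

def compose(frames):
    """frames: {name: (grayscale HxW array, (x_off, y_off), trim_right_px)}"""
    canvas = np.zeros((CANVAS_H, CANVAS_W), dtype=np.uint8)
    for img, (ox, oy), trim in frames.values():
        img = img[:, : img.shape[1] - trim]  # drop the overlapping strip
        canvas[oy : oy + img.shape[0], ox : ox + img.shape[1]] = img
    return canvas

frames = {
    "cam0": (np.full((480, 640), 10, np.uint8), (0, 0), 40),  # 40 px trimmed
    "cam1": (np.full((480, 640), 20, np.uint8), (600, 0), 0),
}
big = compose(frames)
print(big.shape)  # → (480, 1240)
```

Blob detection would then run once on `big`, and the overlap never enters the detector at all.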

Both ways sound relatively easy to me. I think which one to use is mostly a performance question. In both cases, I'm not imagining "stitching" that is any more complex than dialing in trim areas that we have experimentally found to be overlapping or uninteresting. A manual calibration, if you will.
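Option 2 can be sketched the same way, with per-camera offsets from the manual calibration. The camera names, offsets, and duplicate radius below are illustrative, not values from our setup:

```python
# Hypothetical sketch of option 2: detect blobs per camera, map each blob's
# center into a shared canvas coordinate frame, and drop near-duplicates
# that appear in the overlap region of two cameras.

CAMERAS = {
    # name: (x_offset, y_offset) of that camera's image in the global canvas
    "cam0": (0, 0),
    "cam1": (600, 0),  # cam1 starts 600 px to the right of cam0
}

MERGE_RADIUS = 20  # blobs closer than this (global px) are assumed redundant

def to_global(cam, blob):
    """Translate a blob's (x, y) center from camera to canvas coordinates."""
    ox, oy = CAMERAS[cam]
    x, y = blob
    return (x + ox, y + oy)

def merge_blobs(per_camera_blobs):
    """Combine per-camera blob lists, discarding near-duplicates in overlaps."""
    merged = []
    for cam, blobs in per_camera_blobs.items():
        for blob in blobs:
            gx, gy = to_global(cam, blob)
            if all((gx - mx) ** 2 + (gy - my) ** 2 > MERGE_RADIUS ** 2
                   for mx, my in merged):
                merged.append((gx, gy))
    return merged

# Two cameras see the same blob in their overlap region:
blobs = {"cam0": [(100, 50), (590, 200)], "cam1": [(-5, 198)]}
print(merge_blobs(blobs))  # → [(100, 50), (590, 200)]
```

Here only blob centers cross machine boundaries instead of full frames, which is where the performance difference between the two options would come from.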

1 comment:

  1. this actually hits the heart of why i was interested in chromium to begin with, but in the past day or so, i've realized that the concept is implementable either with or without chromium.

eddie and i had originally discussed stitching the web camera images prior to blob detection and then running the blob detection algorithm on the single, large image.

    in considering chromium, i realized that the blobs could be stitched, rather than the images.. so, each image detection unit (software on a single machine) would access a single webcam, and each instance of the cam (thread?) would have a certain set of parameters defined in calibration that indicate an offset, a rotation angle, and an overlap with the other cameras.. it gets complicated, but it should be easy to figure out with some more thought. e.g. a vector (gesture) on one camera may be moving away at a 270 deg. angle (straight down) while on another camera it's moving at a 325 deg. angle (down and to the right), simply due to the cameras' relative orientations.

    i'd expect that there will be less latency from stitching the blobs, but i also think that it makes the most sense to bring things into cuda as early as possible (possibly before blob detection), and do all blob and gesture processing on the gpu..

    i feel like we still have a lot of brainstorming to do on the subject.
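The angle correction the comment describes can be sketched as a per-camera rotation offset applied during blob stitching. The camera names and rotation values below are just the example numbers from the comment, not a real calibration:

```python
# Hypothetical sketch of normalizing a gesture vector's angle across cameras.
# Each camera's mounting rotation (degrees) would be a calibration value.

CAMERA_ROTATION = {
    "cam_a": 0.0,    # reference camera
    "cam_b": -55.0,  # mounted 55 deg rotated relative to cam_a (made up)
}

def global_angle(cam, local_angle_deg):
    """Convert an angle measured in a camera's frame to the shared frame."""
    return (local_angle_deg + CAMERA_ROTATION[cam]) % 360.0

print(global_angle("cam_a", 270.0))  # → 270.0 (straight down, shared frame)
print(global_angle("cam_b", 325.0))  # → 270.0 (same motion, rotated camera)
```

Both cameras report the same gesture once their local angles are brought into the shared frame, which is exactly the 270°/325° situation from the comment.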