I thought I would write this up as a separate post, since it felt awkward to work into the one on Chromium.
On Saturday I was talking to Eddie about whether Chromium would have any implications for the image stitching. I didn't think it would directly affect how to go about the stitching, but it did give me another idea.
It is actually not a given that stitching should come before blob detection.
You could either:
1) Combine N camera images into one big image and detect blobs in that. In the final image, overlapping areas in the cameras have been discarded.
2) Detect blobs in each camera feed, then combine all of the blob data. Redundant blobs can be discarded.
Both ways sound relatively easy to me; I think the choice between them is mainly a performance question. In either case, I'm not imagining "stitching" any more complex than dialing in trim regions that we have experimentally found to be overlapping or uninteresting. A manual calibration, if you will.
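To make option 2 concrete, here's a minimal sketch of merging per-camera blob lists. Everything here is hypothetical: the camera offsets, the distance threshold, and the idea of representing blobs as centroid coordinates are all assumptions for illustration, not details from our actual setup.

```python
def merge_blobs(per_camera_blobs, offsets, min_dist=10.0):
    """Map each camera's blob centroids into a shared coordinate
    frame, then drop any blob that lands within min_dist of one
    already kept (a redundant detection in an overlapping area)."""
    merged = []
    for cam_idx, blobs in enumerate(per_camera_blobs):
        ox, oy = offsets[cam_idx]  # manually calibrated camera offset
        for (x, y) in blobs:
            gx, gy = x + ox, y + oy
            if all((gx - mx) ** 2 + (gy - my) ** 2 >= min_dist ** 2
                   for (mx, my) in merged):
                merged.append((gx, gy))
    return merged

# Two cameras with horizontal overlap: camera 1 sits 300 px to the
# right of camera 0 in the shared frame (made-up calibration values).
cam0 = [(50, 60), (310, 80)]   # second blob lies in the overlap
cam1 = [(12, 81), (150, 40)]   # (12 + 300, 81) duplicates (310, 80)
print(merge_blobs([cam0, cam1], offsets=[(0, 0), (300, 0)]))
```

The nice property of this route is that the expensive per-pixel work (blob detection) stays per-camera and the "stitching" reduces to offsetting coordinates and deduplicating points.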