The technology is amazing, but I would like to see some proof that this works in real-life circumstances too. (I have not seen the C9 video yet, so sorry if Blaise explained it all there.)
- How does Photosynth deal with different lighting conditions? Not only day and night (which is fine, you can have two 'worlds', one day and one night) but 11am and 5pm, clouds and sunshine, rain and snow, wear and tear, paint jobs and graffiti?
- How much work is put on the user to group the right photos together?
- What cluster do you think would make the most sense for a global database of 'Photosynth space'? Country? City? District? You are not going to match all pictures in the world against all, are you?
- As a research company, when are you publishing the algorithms used to match pictures together?
- Is the demo WPF-based? (I see DirectX mentioned with Seadragon, so maybe not. Why are you not yet on the WPF bandwagon?)
New here - I'm a PM working for Blaise on this and thought I'd help out a bit (since he'll be on his way to SIGGRAPH shortly).
Lighting conditions have no bearing on the photo matching: the algorithms look for point features in the photos, so brightness and contrast don't matter at all. The photos you see in the demo are ones I took, and I did some retouching of them after the fact; that made no difference at all. Right now the only thing you can't do to photos is crop them, because we rely on focal lengths being true to locate the camera positions. (I've wanted to play around and make a collection of all tweaked-out photos and turn this into an art form: imagine San Marco in various black-and-white, high-contrast, pushed-color treatments, etc.)
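To illustrate why retouching doesn't hurt the matching, here is a minimal sketch (not Photosynth's actual descriptor): any patch descriptor that normalizes out mean and variance is invariant to a global brightness/contrast change of the form I → a·I + b, which is what most exposure and contrast tweaks approximate.

```python
# Sketch: why point-feature matching survives brightness/contrast edits.
# A mean/variance-normalized patch descriptor is unchanged under I -> a*I + b.
import math

def normalize(patch):
    """Zero-mean, unit-variance version of a flat list of pixel values."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((p - mean) ** 2 for p in patch) / n
    std = math.sqrt(var) or 1.0
    return [(p - mean) / std for p in patch]

original = [10, 40, 80, 120, 200, 90, 60, 30, 15]
# Same patch after a strong contrast/brightness retouch (a=1.7, b=25):
retouched = [1.7 * p + 25 for p in original]

d1 = normalize(original)
d2 = normalize(retouched)
# The two descriptors agree to floating-point precision, so any match
# score computed from them is unaffected by the retouching.
print(max(abs(x - y) for x, y in zip(d1, d2)) < 1e-9)  # True
```

Cropping is different: it changes the effective field of view while the EXIF focal length stays the same, which is why it breaks the camera-position solve.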
When we do the matching right now, we submit a set of photos that we believe should match because of the overlaps they contain; beyond that, no manual alignment is required of the user.
Clusters - yes, all of the above; that's the goal. Of course, there are ways to cheat beyond just matching points: many photos contain text that can be extracted, GPS coordinates baked into EXIF can be used, etc.
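As a sketch of the kind of cheap pre-clustering that EXIF GPS data enables (hypothetical code, not the production pipeline): snap each photo's coordinate to a coarse latitude/longitude grid cell, so the expensive point matching only runs within a cell instead of all-against-all. The file names and coordinates below are made up.

```python
from collections import defaultdict

def geo_cell(lat, lon, cell_deg=0.01):
    """Snap a GPS coordinate to a coarse grid cell (~1 km at this size)."""
    return (round(lat / cell_deg), round(lon / cell_deg))

# Hypothetical photos with (lat, lon) pulled from EXIF.
photos = {
    "sanmarco_1.jpg": (45.4341, 12.3388),
    "sanmarco_2.jpg": (45.4343, 12.3391),
    "rialto_1.jpg":   (45.4380, 12.3358),
}

clusters = defaultdict(list)
for name, (lat, lon) in photos.items():
    clusters[geo_cell(lat, lon)].append(name)

# Only photos sharing a cell become candidates for point matching:
# the two San Marco shots land together, Rialto lands elsewhere.
for cell, names in sorted(clusters.items()):
    print(cell, sorted(names))
```

A real system would also check neighboring cells so photos near a boundary aren't missed; the point is just that coarse metadata prunes the match graph before any pixels are compared.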
WPF - we use DirectX in the viewer, but we definitely use WPF, specifically Photon, for encoding images. If you see Blaise's earlier comment on JPEG 2000, we are using Photon in a similar way, since it offers the multi-resolution capabilities we take advantage of.
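The multi-resolution idea can be sketched as a simple image pyramid (illustrative only; Photon and JPEG 2000 use wavelet transforms, not the box filtering below): each level halves the resolution, so a viewer like Seadragon can fetch only the level that matches the current zoom instead of the full image.

```python
def downsample(img):
    """Halve a 2-D grid (list of lists) by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[2*y][2*x] + img[2*y][2*x+1] +
             img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

def pyramid(img):
    """Full multi-resolution stack, finest level first."""
    levels = [img]
    while len(levels[-1]) > 1 and len(levels[-1][0]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

# Toy 8x8 "image" of brightness values.
base = [[float((x + y) % 16) for x in range(8)] for y in range(8)]
levels = pyramid(base)
print([(len(l), len(l[0])) for l in levels])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```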