To be overly simple: a cluster of white dots doesn't really present any unique characteristics which we, or computers, can immediately point to and say "that's over there". Before you mention constellations, keep in mind that those only work when all of the required points (or most of them) are in view.
If you take an image from the Earth or a satellite, it has a cone of view. If you know the time and the position of the camera that took the photo, you can calculate exactly which star is which.
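That matching step is basically just spherical trigonometry: given the observer's position and the time, each catalogued star lands at a predictable altitude and azimuth in the sky. A rough sketch in Python, using a common low-precision sidereal-time formula (the function name and degree-based inputs are my own invention here, and the accuracy is only a fraction of a degree, which is plenty for matching dots in a photo):

```python
import math

def radec_to_altaz(ra_deg, dec_deg, lat_deg, lon_deg, days_since_j2000, ut_hours):
    """Approximate altitude/azimuth of a star for a given observer and time."""
    # Approximate local sidereal time in degrees (east longitude positive).
    lst = (100.46 + 0.985647 * days_since_j2000
           + lon_deg + 15.0 * ut_hours) % 360.0
    ha = math.radians(lst - ra_deg)          # hour angle of the star
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)

    # Altitude above the horizon.
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.degrees(math.asin(sin_alt))

    # Azimuth measured from north, increasing eastward (clamp for rounding).
    cos_az = ((math.sin(dec) - math.sin(math.radians(alt)) * math.sin(lat))
              / (math.cos(math.radians(alt)) * math.cos(lat)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if math.sin(ha) > 0:
        az = 360.0 - az
    return alt, az
```

A sanity check: a star sitting exactly at the celestial pole (declination 90°) should appear due north at an altitude equal to the observer's latitude, at any time.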
Could you see stars from the side, as if the cone of view were turned 90 degrees about one axis? If you know each star's distance along the cone's axis (from the cone's origin toward the center of its base), and that is measurable, then you can reconstruct an image from the side. For the back you could do the same thing, but you don't really know what's on the back of, say, the Moon without a photo from the far side. Still, you could fill in the blanks for stars and such, and rotating bodies can obviously be extrapolated, so the only limitation is distance and "hidden" bodies that never appear in the cone of view of any of our current photos, which is of course a lot of stuff in space.
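That side-view reconstruction is, in effect, just a rotation of the recovered 3D positions before re-projecting them. A toy sketch, assuming we already have (x, y, depth) for each star; the coordinate convention (x across the image, y up, z as depth along the original line of sight) is mine:

```python
import math

def rotate_about_vertical(point, angle_deg):
    """Rotate a 3D point (x, y, z) about the vertical (y) axis.

    A 90-degree rotation turns depth into horizontal position,
    i.e. the original front view becomes a side view.
    """
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

# Hypothetical catalogue: (x, y) from the photo plus a measured depth z.
stars = [(0.0, 0.0, 10.0), (1.0, 2.0, 5.0)]
side_view = [rotate_about_vertical(s, 90.0) for s in stars]
```

A star that was dead ahead at depth 10 ends up 10 units off to the side at depth ~0, exactly what you'd expect from walking around the scene.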
For microbiology and beyond: with MRI, electron microscopy, and more, you could eventually imagine massive distributed images of all these "visible" things being available, so that software can reason about what should "fill in the blanks" (similar to how the mind fills in the blanks, e.g. with optical illusions and such). You could then zoom in and out of the entire structure of an organism and its cells, down to the chemical and atomic level and beyond, extrapolated from images. Applying a bunch of physics and graphics would let someone step through the operations of a cell, watch it develop, and witness the energy transfer ... am I crazy?
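That zoomable, every-scale view is essentially an image pyramid: the same data stored at many resolutions, so you only ever pull the level of detail you're looking at. A minimal sketch, assuming a square power-of-two grid of brightness values (the function names are made up for illustration):

```python
def downsample(grid):
    """Halve resolution by averaging each 2x2 block (one pyramid level)."""
    h, w = len(grid), len(grid[0])
    return [[(grid[r][c] + grid[r][c + 1]
              + grid[r + 1][c] + grid[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def build_pyramid(grid):
    """Full multi-resolution stack, finest level first."""
    levels = [grid]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels
```

Zooming out is just stepping up a level; zooming in steps down, and "filling in the blanks" would mean synthesizing plausible detail below the finest level actually imaged.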
Basically it seems like a real start to wiring up visual information on the web, which is really deep. Twenty years from now, I think learning about the world and the things around us in detail will be enabled by this.
The search ideas Blaise mentions are awesome: imagine searching for whatever or whoever you want and being able to see and analyse it at any resolution, real-time or static. If you think beyond images there are many applications, because this has a similarity to the idea of the mind being multi-resolutional. There's been research into language pointing to language and understanding being multi-resolutional in nature (and much older research into the multi-resolutional nature of thought). For example, when you hear a word you start processing the sound immediately, branching off continuously as more sound "comes in" until there is a match. You don't wait for the whole word and then search.
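That branching-as-sound-comes-in behaviour can be sketched as a prefix filter over a lexicon: the candidate set narrows continuously as each new segment arrives. This is only a toy stand-in (letters in place of phonemes, and the names are mine):

```python
def incremental_match(words, sound_so_far):
    """Candidate words still consistent with the input heard so far.

    Each new segment shrinks the set, rather than triggering a fresh
    whole-word search -- a crude sketch of continuous lexical access.
    """
    return [w for w in words if w.startswith(sound_so_far)]

lexicon = ["cat", "captain", "capture", "dog"]
# "c"       -> cat, captain, capture
# "cap"     -> captain, capture
# "capture" -> capture (a unique match; recognition can commit here)
```
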
Imagine being able to call someone's name and the 'net locates them so you can just speak to them. It can also locate them visually, and there you are in your nano-skin suit, connected remotely, but it feels like you're there. On the other end, a billion nano-bots gather from the dust and form a shell of "you" (SecondSkin (TM)) that senses everything around it in real time. It can handle all that data because it filters information in a multi-resolutional way, using the net to compute whatever is determined worthy of note, like a touch, or a fast-moving object in the field of view, in anticipation of you reacting, the way humans watch for what's about to smack them in the head. Now you can move around and touch and hear and see. Smell and taste would be further out, because they involve chemicals or direct stimulation of neurons, but heck, if we ever do sci-fi stuff like this then we'll probably have cracked that nut a long time before.
Ok, so how about just little robots that tour around for us remotely?