Jan 21, 2007 at 4:47 AM
SEP2007 wrote: Declarative languages: learned and used by 18 - 28 year olds.
Functional languages: learned and used by 35 - 55 year olds.
Natural languages: learned and used for at least thousands or tens of thousands of years.
Jan 20, 2007 at 12:26 PM
Great interview. I would love to work with any of these guys. Here's a crazy thought ... probabilistic concurrency. Instead of locking, duplicate state and execute concurrently, keeping track of side effects (split it across cores, whatever you like). Execution paths that result in an error once you smack the results back together are probably out of sequence and need to be reordered. The more concurrent operations, the lower the probability that the compiler or runtime could determine the correct order of execution. For 2 operations, though, it would be very likely that the correct order is the path that does not result in an exception. The depth of the composed objects would also reduce the probability of correctness. However, if one approached the problem from a probabilistic point of view, tooling and advances in statistical analysis in the compiler and runtime could perhaps improve this over time.
Just a thought. I am sure this is a vast oversimplification, I see many problems already.
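The "order that errors out is probably wrong" idea above can be sketched as a toy in a few lines: duplicate the state instead of locking it, try the candidate orderings, and keep only the orderings that finish cleanly. Everything here (the account example, the function names) is illustrative, not anyone's actual implementation; a real system would detect conflicts far more cheaply than trying every permutation.

```python
import copy
from itertools import permutations

def find_valid_orders(state, operations):
    """Try every ordering of `operations` on a fresh copy of `state`;
    keep orderings that complete without raising. For 2 operations
    there are only 2 orderings, so the search is cheap."""
    valid = []
    for order in permutations(operations):
        trial = copy.deepcopy(state)   # duplicate instead of locking
        try:
            for op in order:
                op(trial)
        except Exception:
            continue                   # this ordering conflicts
        valid.append(order)
    return valid

# Two operations where only one order can succeed:
def deposit(acct):
    acct["balance"] += 50

def withdraw(acct):
    if acct["balance"] < 50:
        raise ValueError("overdraft")
    acct["balance"] -= 50

# Starting from a zero balance, only deposit-then-withdraw survives.
orders = find_valid_orders({"balance": 0}, [deposit, withdraw])
```

The point of the sketch: the exception itself is the ordering signal, which is exactly the probabilistic bet described above, and it gets worse combinatorially as the number of operations grows.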
Very nice demo - I once imagined a similar idea (but not nearly as nice), but of course I had "features" to work on which are "more important" than process improvement.
I'm glad to see that MS is working on this kind of thing.
jsampsonPC wrote:I had thought about something slightly different. What about all the old photos I find of my great grandparents standing by some barbershop door, or down at their local foodmart.
What would the results be if I put one of those in? It would be neat to see locations from 80-100 years ago. Imagine how many photographs of Disney there are, spanning decades.
This could be used as a historical learning device, too.
jsampsonPC wrote: To be overly simple, a cluster of white dots doesn't really represent any unique characteristics which we, or computers, can immediately point to and say "that's over there." Before you mention constellations, keep in mind that those only work when all of the required points (or most of them) are in view.
If you take an image from the Earth or a satellite, it has a cone of view. If you know the time and position of the camera on the Earth which takes the photo, you can calculate exactly which star is which.
Could you see stars from the side, as if the cone of view were rotated 90 degrees off one axis? If you know each star's distance along the cone's axis (from the cone's origin toward the center of its base), and that is measurable, you can reconstruct an image from the side. For the back you could do the same thing, but you don't really know what is behind a body (the moon, say) without a photo from its far side. You could fill in the blanks for stars and such, and rotating bodies obviously can be extrapolated, so the only limitation is distance and "hidden" bodies that are never in the cone of view of any of our current photos, which is of course a lot of stuff in space.
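The earlier claim that camera time and position pin down which star is which can be made concrete with the standard spherical-astronomy altitude formula: given a star's catalog coordinates (right ascension and declination), the observer's latitude, and the local sidereal time, you can compute where in the sky it must appear. This is a minimal sketch (refraction and proper motion ignored); the parameter names are mine, not from any particular library.

```python
import math

def star_altitude(ra_deg, dec_deg, lat_deg, lst_deg):
    """Altitude (degrees above the horizon) of a star for an observer
    at latitude `lat_deg` when the local sidereal time is `lst_deg`.
    Uses sin(alt) = sin(dec)sin(lat) + cos(dec)cos(lat)cos(H),
    where H = LST - RA is the hour angle."""
    H = math.radians(lst_deg - ra_deg)
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(H))
    return math.degrees(math.asin(sin_alt))

# A star on the celestial equator, seen from the equator, with a
# 60-degree hour angle, sits 30 degrees above the horizon.
alt = star_altitude(ra_deg=60.0, dec_deg=0.0, lat_deg=0.0, lst_deg=120.0)
```

Run in reverse, the same geometry is how you'd match an ambiguous dot in a photo back to a catalog entry: the time and camera position leave only one star that could occupy that spot in the cone of view.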
For microbiology and beyond: if you use MRI, electron microscopy and more, then eventually you could imagine massive distributed images of all these "visible" things being available, so that software is able to reason about what should "fill in the blanks" (similar to how the mind operates when it fills in the blanks, i.e. optical illusions and such). You could zoom in and out of the entire structure of an organism and cell, down to the chemical and atomic level and beyond, extrapolated from images. Applying a bunch of physics and graphics would let someone step through the operations of a cell, watch it develop and witness the energy transfer ... am I crazy?
Basically it seems like a real start to wiring up visual information on the web, which is really deep. 20 years from now I think learning about the world and things around us in detail will be enabled by this.
The searching ideas Blaise mentions are awesome, because imagine searching for whatever or whoever you want and being able to see and analyse it at any resolution, real-time or static. If you think beyond images there are many applications, because it has a similarity to the idea of the mind being multi-resolutional. There's been research into language that points to language and understanding being multi-resolutional in nature (and much older research into the multi-resolutional nature of thought). An example would be that when you hear a word you start to process the sound immediately, branching off continuously as more sound "comes in" until there is a match. It doesn't just take the whole word, then search.
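That incremental narrowing-as-the-sound-comes-in can be sketched with a trie: each new character prunes the set of words that are still possible, rather than waiting for the whole word and then searching. A minimal sketch (the vocabulary and function names are just for illustration):

```python
class TrieNode:
    """One node per character; a path from the root spells a prefix."""
    def __init__(self):
        self.children = {}
        self.is_word = False

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
    return root

def candidates_after(root, prefix):
    """Words still possible once `prefix` has been 'heard' -- the
    branching narrows with every new character, like the incremental
    word recognition described above."""
    node = root
    for ch in prefix:
        if ch not in node.children:
            return []          # no word starts this way
        node = node.children[ch]
    out = []
    def walk(n, acc):
        if n.is_word:
            out.append(prefix + acc)
        for ch, child in sorted(n.children.items()):
            walk(child, acc + ch)
    walk(node, "")
    return out

trie = build_trie(["cat", "car", "card", "dog"])
matches = candidates_after(trie, "ca")   # -> ["car", "card", "cat"]
```

After hearing "ca" the listener is down to three candidates; one more sound resolves it, without ever re-searching the full vocabulary.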
Imagine being able to call someone's name and the 'net locates them and you just speak to them. It can also locate them visually, and you are there in your nano-skin suit, connected remotely, but it feels like you're there. On the other end, a billion nano-bots gather from the dust and form a shell of "you" (SecondSkin (TM)) that senses everything around it in realtime. It can handle the data because it filters out information in a multi-resolutional way, using the net to compute information determined worthy of note, like a touch, or a fast-moving object in the field of view, in anticipation of you reacting, as humans do to see what's about to smack you in the head. Now you can move around and touch and hear and see. Smell and taste would be further out because they involve chemicals or stimulation of neurons, but heck, if we ever do sci-fi stuff like this then we'll probably have cracked that nut a long time ago.
Ok, so how about just little robots that tour around for us remotely?
Charles wrote:Microscopy is certainly an area where this could be highly useful. Consider also astronomy... Navigating galaxies and other celestial bodies will never be the same!
Astronomy. Hmm. That raises an interesting question which I don't believe was addressed in the interview. Of course this type of image manipulation will work great with highly diversified imagery. A building front, or St. Peter's Basilica: you have so many unique aspects of those buildings, dare I say it is easy to line up the images.
Compare that to an evening sky, where every star is very much visually ambiguous. Of course inspecting a star closely via satellite will give us a higher degree of diversity, but from a distance, wouldn't it be too much of a feat for the application to properly distinguish one star from another? I suppose this problem may even exist in microscopy, too. Obviously a colony of bacteria will look very much the same in many different locations.
10 years ago I used an application that would build panoramas from images taken along a horizontal plane. Each image had to be highly unique in order for the software to line up the images, and sometimes we got fuzzy couplings, especially in places where trees were dominant. Perhaps I'm allowing my old knowledge to govern my understanding of this new technology too much.
Was anybody able to convince Blaise that he needed a c9 account?
How could it be more difficult to line up stars, which are clear points of light, especially with high-res telescope photographs? If they can do it with window sill corners (which seem a lot more visually ambiguous to me), why not stars? Besides, you just need to put up a frame of reference, like a grid, don't you?
Ultimately, you could have a unified realtime 3D model of the entire world, and it would be completely robust and accurate, verified by millions or billions of video cameras. And you could still stream it to anyone, because of the bandwidth optimizations. All you need is cameras everywhere. There are privacy issues there, but surely in a few years we could have a non-realtime model of the entire world.