It’s the perfect moment. The family are together, everyone’s having a great time, this is going to make a killer photo. SNAP! Quick check on the camera’s LCD – looks brilliant. That’s the one that’s going in the family album. The following day, you review your photos, seeking the one. You find it, open it and... ah, the camera’s auto-focus picked out the tree, which admittedly looks great, but the family are out of focus. Another moment lost.
Welcome to my world of photography. I love taking photos; it’s just that I’m not that great at it. I’m convinced that auto-focus and anti-shake features were invented just for me. They’ve gone a long way towards making my shots respectable, but the small LCD screens built into cameras simply can’t tell you how well focused an image is going to look at full resolution on screen.
If, like me, you’ve lusted after the fabled “Image Enhance” technology that’s seemingly prevalent in every Hollywood blockbuster you can name, it might just be on the horizon. Laptop Magazine reports on a new technology demonstrated by Adobe at the NVIDIA GPU Technology Conference this week. Photography using advanced plenoptic lenses allows images to be rendered using computational algorithms, so any part of a photo can be brought into focus after it has been taken.
Plenoptic lenses work by cramming hundreds of tiny lenses together into an array. The resulting raw image looks like a blurry mosaic, but because the lens array captures far more information about the light entering the camera, software can manipulate it afterwards to bring specific areas into focus.
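To give a flavour of the computational side, here’s a minimal sketch of the classic “shift-and-add” refocusing idea behind plenoptic imaging: each tiny lens sees the scene from a slightly different position, and shifting those views by different amounts before averaging them simulates focusing at different depths. The array layout, function name, and `alpha` parameter here are my own illustrative assumptions, not Adobe’s actual algorithm.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus via shift-and-add (illustrative sketch).

    light_field: array of shape (U, V, H, W) holding one sub-view
        image per (u, v) lens position -- a hypothetical layout.
    alpha: refocus parameter; each view is shifted by an amount
        proportional to its offset from the centre of the lens array,
        which moves the plane of sharp focus.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            # np.roll stands in for the sub-pixel shifts used in practice
            out += np.roll(np.roll(light_field[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)  # average of all shifted views
```

Objects whose views line up after the shift come out sharp; everything else is averaged into blur, which is exactly the “focus after the fact” trick.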
Back to my lost moment from earlier. Rather than a good five minutes with a stress ball, with plenoptics it would be: Step 1. Unfocus tree. Step 2. Focus family. Step 3. Receive plaudits for excellence in photography.
The technology is yet to be commercialised, but check out this great demo of the use of plenoptic lenses from Adobe’s Senior Research Scientist, Teodor Georgiev. It’s a brilliant example of science fiction slowly becoming science fact.