Refocusing Photographs After Taking the Image

December 7, 2006 | Mark Goldstein | Digital | 21 Comments

Refocus Imaging, Inc. Press Release

Every photographer is familiar with the frustration of losing a shot because the camera focused too slowly, or focused on the wrong thing. Recent camera technology innovations at Stanford University provide a new solution to this old problem. The idea is to capture extra information at the sensor, which is missing in conventional cameras. Special processing enables physical functions of the lens to be implemented in software. This approach provides unprecedented photographic features, such as the ability to refocus photographs after the image is taken. The underlying technology also enables dramatic improvements in lighting and sensitivity. For the novice, this means a more reliable camera that makes it easier to take great-looking pictures. For aficionados and professionals, this technology means unprecedented control over the quality of each image pixel.
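The refocusing the release describes can be illustrated with a toy sketch. Assuming the extra sensor data can be organized as a set of sub-aperture views (the scene as seen through different parts of the lens aperture), synthetic refocusing amounts to shifting each view in proportion to its aperture position and averaging. The function and geometry below are illustrative assumptions, not Refocus Imaging's actual algorithm:

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add synthetic refocusing (toy version).

    views   : (N, H, W) stack of sub-aperture views of the same scene
    offsets : N pairs of (dy, dx) aperture positions, one per view
    alpha   : refocus parameter; 0 leaves the captured focal plane
    """
    n, h, w = views.shape
    out = np.zeros((h, w))
    for view, (dy, dx) in zip(views, offsets):
        # Integer shifts keep the toy simple; a real implementation
        # would interpolate sub-pixel shifts.
        sy, sx = int(round(alpha * dy)), int(round(alpha * dx))
        out += np.roll(np.roll(view, sy, axis=0), sx, axis=1)
    return out / n
```

A point off the captured focal plane appears displaced by a different amount in each view; choosing alpha so the shifts cancel that displacement brings it back to a single sharp point, which is why one capture can support many focus settings after the fact.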

At the December 13th COBA meeting, Ren Ng will discuss his software and prototype camera that allow refocusing after the fact. In his talk, Ren will present photographs taken with the prototype camera, discuss how it works, and explain how he believes it will affect photographic science and art.

His research has been featured in the press, including Wired, Popular Science, Digital Photography Review, KNTV-NBC11 TechNow, KTVU-TV Fox 5 News, Photonics Spectra, MIT Tech Review, Stanford Review, slashdot, engadget, and more.

Speaker Bio
Ren Ng recently graduated with his PhD from the Computer Science department at Stanford University, and founded Refocus Imaging to commercialize his research. His PhD dissertation won the Arthur Samuel Thesis Award for the best dissertation in Computer Science at Stanford, and was nominated for the Association of Computing Machinery’s (ACM) Dissertation Award. Ren’s interests are in digital imaging systems, computer graphics, optics and applied mathematics. He holds an MS in Computer Science and BS in Mathematical and Computational Science from Stanford University.


Your Comments


#1 Andrew Gwozdziewycz

This will be great. Being able to adjust depth of field after the fact would be awesome. I can't wait to see this in action.

2:14 pm - Thursday, December 7, 2006

#2 Digital Art Guy

Unless it somehow captures multiple frames, I can't see how this would be anything other than a heavy-handed sharpening filter. But here's hoping...

3:09 am - Friday, December 8, 2006

#3 nick in japan

Appears to be a bracketing sequence utilizing aperture settings vs exposure settings, actually a no-brainer... why it has taken so long to develop software to do this is the question I have!
Expanding the program to cover all the other selections/variations the camera offers will be the next step. Which means when we get that real fuzzy feeling about an image we are capturing, we select the "Do it all Jack" setting, and actually get 327 different variations of the same image!
Sounds GREAT to me!!!

3:24 am - Friday, December 8, 2006

#4 nick in japan

A couple more things: this appears to be a two-element tool, software and firmware. Also, I suspect that when selecting the "Do it all Jack" mode, a tripod is strongly recommended!

3:31 am - Friday, December 8, 2006

#5 Herb

So instead of having many extra pixels crammed into a small area and then removing the noise and downsizing it to a desirable level, the algorithm has to extrapolate across the gaps in the "blur" of an out-of-focus image, right?

@ 2 - multiple images is actually an interesting way to describe it, as I see this focus-fixing as nothing but extra-deep-focus capture buffered in multiple memory zones, in order to anticipate something being out of focus.

Basically, at what point would you call something not "sharp", relative to what the human eye perceives as pleasingly clear?

I suppose the application of this new machine wouldn't really be for consumers - it's more for extreme long-range imaging or extreme microscopic imaging. Because at that point they would be able to extract everything that's in focus between here and there and create multiple-depth, realistic 3D images where we can navigate ourselves virtually.

If they were able to shove this tech into a consumer-level machine, then I suppose we have to assume that everything will be in focus from here to there, and that we would manipulate the image later in the computer to create aesthetically pleasing "fake" out-of-focus areas (which we already do to a certain extent, in PS).

8:08 am - Friday, December 8, 2006

#6 nick in japan

Herb, I saw nothing in the initial read that led me to believe this was anything more than bracketing of the aperture capabilities of the lens, combined with minor focus-point bracketing, all under control of software/firmware. Where am I going wrong?
It really sounds like the idea would work in all areas of photography, providing us with a bunch of bracketed images to choose from.
Can't see a need for Photoshop to be involved except for the normal individual tweaking desired.

8:45 am - Friday, December 8, 2006

#7 Nicholas

The fundamental problem with out-of-focus images is that the 'circles of confusion' are not close to being points of light, but rather are circles.
No software or hardware could restore the recorded circles of confusion of an out-of-focus image to the 'points' they would have been if correctly focused when passing through the lens. All IMHO (but I am willing to bet I am right).
The only exception is slightly out-of-focus images, which can more or less be recovered with today's software, without requiring any hardware.
Nice PR for Stanford. That's all it is.

12:37 pm - Friday, December 8, 2006

#8 Herb

It's a question of threshold: how far out of focus before it is irretrievable?
There has to be a limit.

Focus Blur
70 ----------------------------------0

What would it be? Around 70%? I don't know, I'm taking a wild guess.
How much do the current focus-fixers fix?

1:13 am - Saturday, December 9, 2006

#9 Nicholas

It sounds like simply multiple exposures or bracketing in a single capture. And it appears to be targeted at commercial camera sensors, not extreme scientific imaging.

3:51 am - Saturday, December 9, 2006

#10 Herb

Oops, let me re-write that:

Shall we say that it's irretrievable, not fixable, if the focus is more than 30% blurred.

4:33 am - Saturday, December 9, 2006

#11 Nicholas

I am not sure you can quantify an image in terms of a percentage regarding its actual depiction of reality.
The more individual recorded circles of confusion that correspond to focused points of light in an image, the better, when attempting to bring an out-of-focus image into focus.
The subject matter would also affect your ability to 'retrieve' an image.
A photo of stars in the night sky, versus a close-up of a baby's face.

10:53 am - Saturday, December 9, 2006

#12 Fred

The rendition of the circle of confusion in front of the point of focus may be differentiatable from the rendition of the circle of confusion behind the point of focus.

This may make it possible (if the exact focal length of the lens and distance of focus are recorded) to alter the result so it would represent as if the point of focus was shifted within certain finite limits.

In other words I am not suggesting that aiming the camera and setting the zoom would be the only requirements to take a sharp picture.

"Special processing enables physical functions of the lens to be implemented in software"

This hardly sounds like a sequence of images, which can currently be set up for shifted-focus bursts on some cameras. (Nothing new.)

"the ability to refocus photographs after the image is taken"

Nothing in either of these quotes or anywhere I see in the release even mentions the jump some writers have made to increased depth of field.

11:41 am - Saturday, December 9, 2006

#13 Dick

"Differentiatable"? That's not a word.

"In other words I am not suggesting that aiming the camera and setting the zoom would be the only requirements to take a sharp picture."

Umm yeah, you have to actually FOCUS, as well. And who said anything about a zoom? How about a PRIME lens, instead?

8:50 pm - Saturday, December 9, 2006

#14 Gary

Dick, apparently 'differentiatable' is a word. If you Google it, you will find over one thousand uses of it. In the context that Fred was using it, the meaning would be "circles of confusion being differentiated by their position relative to the focus point" when FOCUSING ON TABLES. :)

2:47 pm - Sunday, December 10, 2006

#15 Dick

Er, NO, Gary, it is NOT a word.

DIFFERENTIATED, yes, as you used it, I agree. But not "differentiatable."

It's actually "differentiable."

Just a slight difference, but incorrect, nonetheless.

I know everyone can do this, but look here:

9:32 am - Monday, December 11, 2006

#16 Dick

And neither the Oxford nor the Cambridge English dictionary has it in there.

I understood, within the context of what he was trying to say originally, but that's beside the point.

9:41 am - Monday, December 11, 2006

#17 Jay

Refocusing occurs from a single exposure.

As far as I understand the technology, it uses an array of microlenses positioned in front of the sensor, which effectively splits the image into a number of equivalent images, all with slightly different focal points.

The image which represents the desired point of focus can then be selected post-capture; or images could be combined for a greater depth of field.

11:15 am - Monday, December 11, 2006

#18 Nicholas

Then this process is not refocusing as a typical photographer (most of us) would define refocusing.
This process is capturing multiple images at different settings. And post-capture selection of the desired image is not refocusing an out-of-focus image.
With all those microlenses between the subject and the sensor, I wonder what the effect on quality would be.
Perhaps its use would be for security cameras with no shutter lag due to focusing, which would have some value.

1:10 pm - Monday, December 11, 2006

#19 Gary

Dick, you were looking in the wrong dictionary.

2:26 pm - Monday, December 11, 2006

#20 Gary

Jay, you are correct about the microlens array, but not about multiple images. The light from each of the microlenses is measured, not only for its intensity in the usual fashion, but also for its directionality. With all the additional information, new focus points can be mathematically calculated from the single image that is captured.

3:00 pm - Monday, December 11, 2006
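The per-microlens directional measurement described in the comment above can be pictured in code: the raw sensor image is a grid of small lenslet tiles, and regrouping pixels by their position under each microlens yields one view per aperture direction, the sub-aperture views Jay mentioned. The tile geometry and function below are simplifying assumptions (square tiles, no vignetting or hexagonal packing), not the prototype's actual decoding:

```python
import numpy as np

def subaperture_views(raw, lenslet=4):
    """Regroup a plenoptic raw capture into sub-aperture views.

    raw     : (H*lenslet, W*lenslet) sensor image in which each
              lenslet x lenslet tile holds the directional samples
              recorded under one microlens
    lenslet : directional samples per microlens along each axis

    Returns a (lenslet, lenslet, H, W) array; views[u, v] gathers
    pixel (u, v) from under every microlens, i.e. the scene as seen
    through one small region of the main-lens aperture.
    """
    gh, gw = raw.shape[0] // lenslet, raw.shape[1] // lenslet
    lf = raw.reshape(gh, lenslet, gw, lenslet)   # axes: (s, u, t, v)
    return lf.transpose(1, 3, 0, 2)              # axes: (u, v, s, t)
```

Each resulting view shows the scene with slightly different parallax; that directional information, all recorded in one exposure, is what lets new focus settings be computed afterward.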

#21 Jay

Gary, quite right. I think you understood and explained better than I did.

Nicholas, sure, we're not going to be seeing consumer cameras with this very soon, but judge for yourself on quality - I was impressed anyway.

3:16 pm - Monday, December 11, 2006