Lytro finally did something interesting

Lytro has taken their light field technology and finally applied it to something useful - a professional cinema camera that is post-production oriented.

If they actually pull this off, it will change *everything*.

Lytro Cinema
 
Which was first said about their light field technology when they came out with their first cameras - and nothing changed.
As I said before - it's a solution that doesn't solve any problems.
So I predict that applying their solution that doesn't solve any problems to cinema won't change diddly now either.
 
Well, I was never really too excited by Lytro, but to say that the features they're promising with the Cinema camera here wouldn't change anything really only shows how little one knows about post production and visual effects.

I don't necessarily feel like Lytro promises the moon. Their previous cameras pretty much did what they said they would. Hopefully the cinema camera is the same. Having surface normal data attached to every pixel is not just a little bit useful, it's hugely useful.
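Just to show why that matters in comp: with a per-pixel normal pass you can do cheap relighting right on the plate. Rough toy sketch in numpy - my own example, nothing to do with Lytro's actual tools:

```python
# Rough sketch (not Lytro's pipeline): relighting a plate with a per-pixel
# normal pass, the kind of AOV you'd otherwise have to render from proxy geometry.
import numpy as np

def lambert_relight(albedo, normals, light_dir):
    """albedo: HxWx3 image, normals: HxWx3 unit vectors, light_dir: 3-vector."""
    light_dir = np.asarray(light_dir, dtype=np.float64)
    light_dir /= np.linalg.norm(light_dir)
    # N . L per pixel, clamped to zero for surfaces facing away from the light
    n_dot_l = np.clip(np.einsum('ijk,k->ij', normals, light_dir), 0.0, None)
    return albedo * n_dot_l[..., None]

# toy example: a flat 2x2 patch facing the camera, lit from the upper left
albedo = np.ones((2, 2, 3)) * 0.8
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0   # all normals point at camera
print(lambert_relight(albedo, normals, light_dir=(-1.0, 1.0, 1.0)))
```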

Before, you had to build proxy geometry to act as a shadow catcher for models. If the point cloud here is as robust as they're suggesting, it can be meshed directly out of camera. The same would apply for dynamics: yes, motion capture software does a pretty good job, but it's only robust enough for reference. You still have to build the geometry yourself.
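If the point cloud really is that dense, turning it into a shadow catcher is basically just a surface reconstruction step. Something like this Open3D sketch is what I have in mind - the file names are made up and this obviously isn't whatever Lytro ships:

```python
# Rough sketch with Open3D (not a Lytro tool); "plate_frame0142.ply" is a
# hypothetical per-frame point cloud exported from the camera data.
import open3d as o3d

pcd = o3d.io.read_point_cloud("plate_frame0142.ply")
pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

# Poisson surface reconstruction; depth controls how much detail the mesh keeps
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Export for use as a shadow catcher / hold-out in the 3D scene
o3d.io.write_triangle_mesh("shadow_catcher_frame0142.obj", mesh)
```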

Refocus is pretty useful too, because you can sync the Lytro focus with the renderer so that they match up perfectly.
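Matching the render defocus to whatever focus you pick in post is mostly just agreeing on the same thin-lens numbers. Rough sketch of the circle-of-confusion math, assuming a plain thin-lens model:

```python
# Rough sketch: thin-lens circle of confusion, so a renderer's DOF can be
# dialed to match whatever focus distance you pick in the light-field refocus.
def coc_diameter_mm(focal_mm, f_number, focus_dist_mm, subject_dist_mm):
    """Diameter of the blur circle on the sensor for a point at subject_dist_mm
    when the lens is focused at focus_dist_mm (thin-lens approximation)."""
    aperture_mm = focal_mm / f_number
    return (aperture_mm
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm
            * focal_mm / (focus_dist_mm - focal_mm))

# 50mm f/2 lens focused at 3m; how soft is something at 5m?
print(round(coc_diameter_mm(50.0, 2.0, 3000.0, 5000.0), 3), "mm on sensor")
```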

The problem with the still camera is that the photographer knows exactly where the focus has to be. The photographer is typically the only one who cares. Visual effects artists have to spend a lot of time planning around choices that have to be made in pre-production and finalized on set. Once it's set, there is no going back. This limits the VFX artist's options considerably. After everything is done, a visual effects artist might say "it'd be cool if we could push in on Godzilla while he walks behind those buildings" - but if roto and camera maps weren't budgeted for the shot, the VFX director would nod in agreement and just tell the artist to carry on as planned.

What they're saying here is that all of that is captured in real time. If it can do that, then what would have been a $10K scene could easily cost a fraction of that and be finished in a matter of days rather than weeks.

Not only could you do that sort of thing, you could calculate the velocity and direction of anything in the frame, and even the movement of the camera itself on set, with a much higher degree of precision than with current methods.
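With per-pixel world positions every frame, velocity really is just a finite difference. Rough numpy sketch, assuming you already have a world-position pass for two consecutive frames:

```python
# Rough sketch: per-pixel velocity from two consecutive world-position passes
# (HxWx3 arrays of XYZ per pixel, which is what a dense light-field depth gives you).
import numpy as np

def pixel_velocity(pos_prev, pos_curr, fps=24.0):
    """Returns HxWx3 velocity in scene units per second, plus HxW speed."""
    vel = (pos_curr - pos_prev) * fps          # finite difference over one frame
    speed = np.linalg.norm(vel, axis=-1)       # magnitude per pixel
    return vel, speed

# toy example: everything shifted 0.1 units in X between frames at 24 fps
prev = np.zeros((4, 4, 3))
curr = prev.copy(); curr[..., 0] += 0.1
vel, speed = pixel_velocity(prev, curr)
print(speed.max())   # 2.4 units/second
```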
 
It seems like a very powerful technology that doesn't need a green screen at all. It can capture depth right on a live stage, and it records up to 755 raw megapixels at up to 300 frames per second. That is very impressive.
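The no-green-screen part is basically depth keying: you pull the matte by thresholding on distance instead of pulling green. Rough numpy sketch, assuming a per-pixel depth pass in meters (the feather numbers are just illustrative):

```python
# Rough sketch: pulling a matte from a depth pass instead of a green screen.
# "depth" is a hypothetical HxW array of distances in meters from the camera.
import numpy as np

def depth_matte(depth, near_m, far_m, feather_m=0.25):
    """Alpha = 1 inside [near_m, far_m], with a soft falloff of feather_m at each edge."""
    front = np.clip((depth - (near_m - feather_m)) / feather_m, 0.0, 1.0)
    back  = np.clip(((far_m + feather_m) - depth) / feather_m, 0.0, 1.0)
    return front * back

# keep everything between 2m and 6m from camera, soft 25cm edges
depth = np.random.uniform(0.5, 12.0, size=(1080, 1920))
alpha = depth_matte(depth, near_m=2.0, far_m=6.0)
print(alpha.min(), alpha.max())
```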
 
Well, the 755 megapixels is a bit misleading. Light field capture needs a lot more raw data than ends up in the finished image, so the delivered resolution won't be anywhere near that.
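Back-of-envelope on why the raw number overstates the output - the actual angular/spatial split isn't public, so the 10x10 sampling below is purely an assumption to show the shape of the tradeoff:

```python
# Back-of-envelope only; the 10x10 angular sampling is an assumption for
# illustration, not a published Lytro Cinema spec.
raw_megapixels = 755
angular_samples = 10 * 10          # assumed NxN directions captured per output pixel
spatial_megapixels = raw_megapixels / angular_samples
print(f"~{spatial_megapixels:.1f} MP of spatial resolution per refocused frame")
# -> roughly 7.5 MP, i.e. 4K-ish, which is why the raw figure overstates the output
```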
 
I'm just thinking of all the possibilities now....
 
