Back when I reviewed the innovative-but-deeply-flawed Lytro camera in 2012 – a camera that could capture 'live' photos whose focus you could adjust when editing – I noted that if the technology were scaled up and applied to video, it would find a natural home in the world of visual effects.

Four years later, this has finally happened. Following the launch of the Immerge VR camera in November 2015 – which is still a prototype – the company has now unveiled the Lytro Cinema system. This lets filmmakers capture footage with the 'light field' information for each pixel. De-jargoned, this means that you know how far each pixel is from the camera – called Z-depth – as well as its colour and brightness.

The simplest use for this would be to remove the need to shoot actors, props and scenery against a greenscreen for keying and compositing with CG and other footage later. With footage from the Lytro Cinema camera, you just select the pixels within a certain range of distances from the camera and, voilà, an instant, perfect key.
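
In code terms, that key is little more than a threshold on the depth map. Here's a minimal sketch in Python, assuming the light field has already been resolved to a single depth value per pixel – the function and parameter names are illustrative, not Lytro's API.

```python
import numpy as np

def depth_key(rgb, z_depth, near_m, far_m):
    """Build an alpha matte by thresholding a per-pixel depth map.

    rgb     -- (H, W, 3) float array, the colour frame
    z_depth -- (H, W) float array, distance from camera in metres
               (assumes the light-field data has been resolved to
               one depth value per pixel)
    near_m, far_m -- keep only pixels within this distance band
    """
    matte = ((z_depth >= near_m) & (z_depth <= far_m)).astype(np.float32)
    # Premultiply so the keyed foreground composites cleanly over CG
    return rgb * matte[..., None], matte

# e.g. isolate an actor standing 2-4m from the camera:
# fg, alpha = depth_key(frame, depth, near_m=2.0, far_m=4.0)
```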

For professional VFX work, though, there's much more that you can do with that Z-depth information. Knowing where every pixel of a piece of footage sits in 3D space makes adding CG characters or other elements a lot easier – especially when your footage contains things that would make conventional 3D tracking in post difficult.
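
To see why the depth helps, here's a hedged sketch of classic depth-based compositing: given per-pixel depth for both the plate and a rendered CG element, occlusion falls out of a simple comparison. Again, the names and the one-depth-per-pixel assumption are mine for illustration.

```python
import numpy as np

def depth_composite(plate_rgb, plate_z, cg_rgb, cg_z, cg_alpha):
    """Merge a rendered CG element into live-action footage using depth.

    plate_rgb, cg_rgb -- (H, W, 3) float colour arrays
    plate_z, cg_z     -- (H, W) per-pixel distances from the camera
    cg_alpha          -- (H, W) coverage of the CG element

    The CG only shows where it is nearer the camera than the plate,
    so foreground objects in the footage occlude it automatically.
    """
    show_cg = (cg_z < plate_z).astype(np.float32) * cg_alpha
    return cg_rgb * show_cg[..., None] + plate_rgb * (1.0 - show_cg)[..., None]
```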

In a briefing, Lytro demoed a scene from a film it commissioned to show off the camera's capabilities: confetti blowing in front of a wedding march (you can kinda see it being shot below), with a screen behind some simple scenery. The combination of a moving camera and the confetti would perplex most 3D trackers, but the information recorded by the Lytro Cinema camera made it easy, the company claimed.

Combined with focal length data, the Z-data also lets you adjust the focus of a shot in post. This could be done for creative effect – for example, to change the depth of field through a shot, something that currently either needs to be done when shooting or requires extensive post work – or to make integrating CG and live-action footage easier.
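
As a rough illustration of refocusing from depth, the sketch below blurs each pixel in proportion to its distance from a chosen focal plane. A true light-field refocus re-integrates the captured rays, so treat this layered approximation as a toy model only; the parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(rgb, z_depth, focus_m, aperture=1.0, n_slices=16):
    """Crude synthetic refocus: blur each pixel in proportion to how
    far its depth lies from the chosen focal plane.
    """
    rgbf = rgb.astype(np.float32)
    out = np.zeros_like(rgbf)
    edges = np.linspace(z_depth.min(), z_depth.max() + 1e-6, n_slices + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (z_depth >= lo) & (z_depth < hi)
        if not mask.any():
            continue
        mid = 0.5 * (lo + hi)
        # Circle of confusion grows with distance from the focal plane
        sigma = aperture * abs(mid - focus_m) / focus_m
        blurred = gaussian_filter(rgbf, sigma=(sigma, sigma, 0))
        out[mask] = blurred[mask]
    return out

# e.g. re-set focus to 4m from the camera after the shoot:
# sharp_at_4m = refocus(frame, depth, focus_m=4.0)
```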

You can also 'move' the camera in post by up to 10cm in any direction, which could be used to generate stereoscopic 3D footage from a single camera – though you'd have to allow for the possibility of some parts of nearer objects being hidden from the new position.
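
A sketch of how that virtual shift works: reproject each pixel sideways by a disparity inversely proportional to its depth. The focal_px value is a hypothetical stand-in for the lens focal length in pixels, and where a single image can't see behind nearer objects this sketch just leaves holes – the captured light field itself retains some of that hidden detail.

```python
import numpy as np

def shift_view(rgb, z_depth, baseline_m, focal_px):
    """Synthesise a horizontally offset viewpoint by reprojecting
    pixels with disparity = focal_px * baseline_m / depth.

    Holes open up where nearer objects hid the background from the
    original position - the occlusion problem mentioned above.
    """
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    disparity = np.round(focal_px * baseline_m / z_depth).astype(int)
    # Write far pixels first so nearer ones overwrite them
    order = np.argsort(-z_depth, axis=None)
    ys, xs = np.unravel_index(order, z_depth.shape)
    new_xs = xs + disparity[ys, xs]
    ok = (new_xs >= 0) & (new_xs < w)
    out[ys[ok], new_xs[ok]] = rgb[ys[ok], xs[ok]]
    return out

# e.g. a right-eye view offset by 6.5cm:
# right_eye = shift_view(frame, depth, baseline_m=0.065, focal_px=1500)
```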

As a camera, the Lytro Cinema has impressive specs. It has a 755-megapixel sensor with a wide colour gamut that can capture up to 16 stops of dynamic range – comfortably enough for HDR video – and it can shoot at up to 300fps.

The footage is captured as what Lytro calls a Light Field Master. This can be rendered into standard filmmaking formats including RealD and IMAX. The Z-depth and other information can be accessed inside an OpenEXR file through a plugin for The Foundry's Nuke compositing software, and Lytro says plugins for other applications will be coming soon – though it wouldn't say which ones beyond a choice of 'DI and editorial' tools.
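
For anyone wanting to poke at that depth data outside Nuke, here's a speculative snippet using the standard OpenEXR Python bindings. Lytro hasn't published its channel layout, so the conventional 'Z' depth channel name is an assumption.

```python
import numpy as np
import OpenEXR, Imath

def read_depth(path, channel='Z'):
    """Read a per-pixel depth channel from an OpenEXR file.

    OpenEXR convention puts depth in a float channel named 'Z';
    the channel name Lytro actually uses is an assumption here.
    """
    exr = OpenEXR.InputFile(path)
    dw = exr.header()['dataWindow']
    width = dw.max.x - dw.min.x + 1
    height = dw.max.y - dw.min.y + 1
    raw = exr.channel(channel, Imath.PixelType(Imath.PixelType.FLOAT))
    return np.frombuffer(raw, dtype=np.float32).reshape(height, width)

# z = read_depth('shot_0001.exr')  # per-pixel depth in scene units
```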

The camera can currently only be used on set, as it needs to be connected to a server during capture – it's recording some hefty files. However, Lytro told me that it's planning to produce a more portable version that could even be hung from a helicopter.

The size and complexity of the files means that working with the footage is best done in the cloud, where higher performance is possible than with even a high-end workstation – with editors and VFX artists working through virtualised software. Lytro is offering its own cloud-based platform for this, but the company says it's open to working with larger VFX and post-production houses to integrate with their infrastructure.

Pricing for production packages, including both the camera and server, starts at around $125,000 (around £87,000).