Making all this happen automatically on your iPhone was no small feat.
On Hollywood shoots, pulling focus requires a talented team of experts: a cinematographer, who makes the overall call about what’s in focus and when that changes, and a focus puller, who makes sure the transition is smooth, the timing is spot on, and the subjects are perfectly crisp.
First we had to generate high-quality depth data so Cinematic mode knows the precise distance to the people, places, and pets in a scene. And because this is video, we needed that depth data continuously — at 30 frames per second.
It’s so computationally intense, we needed a chip that could handle the workload. Enter A15 Bionic.
We also trained the Neural Engine to work like the experts. It makes on-the-fly decisions about what should be in focus, and it applies smooth focus transitions when that changes. If you want creative control, you can always hop in the director’s chair and rack focus manually, either when you shoot or in the edit.
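Apple hasn’t published how Cinematic mode decides or executes a focus rack, but the idea of a smooth, automatic transition can be sketched in a few lines. In this illustrative toy (not Apple’s algorithm), a "focus puller" function eases the focus distance toward each frame’s subject depth with exponential smoothing, so a change of subject produces a gradual rack rather than an instant jump. The 30 fps framing, the `alpha` smoothing factor, and the example depths are all assumptions for illustration.

```python
# Illustrative sketch only: Apple has not published Cinematic mode's algorithm.
# A toy "focus puller" that eases the lens focus distance toward each frame's
# subject depth, one step per frame, so focus racks are gradual.

def pull_focus(subject_depths_m, alpha=0.2):
    """Return a smoothed focus distance (metres) for each frame.

    subject_depths_m: per-frame depth of the chosen subject (e.g. at 30 fps).
    alpha: smoothing factor in (0, 1]; smaller means a slower, softer rack.
    """
    focus = subject_depths_m[0]
    track = []
    for depth in subject_depths_m:
        focus += alpha * (depth - focus)  # ease toward the new subject
        track.append(round(focus, 3))
    return track

# A subject at 2 m, then attention shifts to someone at 5 m:
depths = [2.0] * 5 + [5.0] * 10
print(pull_focus(depths))
```

The exponential easing is just one simple choice; a real system would also weigh gaze, framing, and scene understanding when picking the subject, which is the part the Neural Engine handles on-device.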
The sheer computational power needed to run the machine learning algorithms, render autofocus changes, support manual focus changes, and grade each frame in Dolby Vision — all in real time — is astounding.
It’s like having Hollywood in your pocket.