In this article, we focus purely on the lens, a component many consider to be the most influential factor in a film's look. But how does it influence the way we matchmove?
What is lens distortion, and how does it affect camera tracking?
When taking a picture or filming, the job of the lens is to direct beams of light onto the film or image sensor. In reality, lenses are not perfect at performing this job and photons from a straight-line object often end up in a curved line, which results in a distorted image. This is called lens distortion. The most straightforward types of lens distortion are barrel distortion, where straight lines curve outwards, and pincushion distortion, where straight lines curve inwards. Lens distortion is usually more pronounced towards the edges of the frame.
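To make this concrete, here is a minimal sketch of a purely radial polynomial distortion model (the Brown–Conrady family is a common choice; real lenses and matchmoving tools use more elaborate models). The coefficients k1 and k2 and the sample points are illustrative values, not measurements of any particular lens.

```python
def distort(x: float, y: float, k1: float, k2: float = 0.0) -> tuple[float, float]:
    """Map an ideal (undistorted) point to its distorted image position.

    x and y are normalized image coordinates with (0, 0) at the image
    centre. With this sign convention, k1 < 0 bows straight lines
    outwards (barrel) and k1 > 0 bows them inwards (pincushion). The
    effect grows with the distance from the centre, which is why
    distortion is most visible near the edges of the frame.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Three points on a perfectly straight vertical line near the edge of
# frame end up with different x coordinates: the line has bowed outwards.
for y in (-0.5, 0.0, 0.5):
    print(distort(0.8, y, k1=-0.1))
```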
Lens distortion means that features in the captured image no longer reflect their true positions in the real world. Matchmoving applications, however, often assume an ideal, distortion-free camera as the underlying model when reconstructing the camera and its movement for a shot.
Wherever image features deviate from where a perfect camera would place them, the corresponding reconstructed 3D positions will not match their real-world locations. In the worst case, this can cause your camera track to fail.
But that’s not where lens distortion’s influence on visual effects ends. The mathematically perfect cameras in 3D animation packages do not exhibit any lens distortion either. Undistorted CG images, however, would not fit the distorted live-action plate. Even where 3D packages can artificially distort their renders, that distortion must match the physical lens’s distortion for the composite to work.
In practice, the effects of lens distortion on the plate (the live-action image) are removed during camera tracking, which makes the matchmoving artist responsible for dealing with lens distortion. The result is a mathematically perfect virtual camera and undistorted plates. The virtual camera is then used to render the CG elements, which are composited into the undistorted plates. At this point, we have perfectly matched CG integrated into the undistorted live-action plate. However, with the other (non-VFX) parts of the footage still exhibiting lens distortion, your undistorted VFX shots may stand out, even if the CG is perfectly matched. That’s why, at the end of the process, the original lens distortion is re-applied to the composited frames. Consequently, a matchmoving application must not only be able to remove lens distortion and export undistorted plates but also provide a means to re-apply the same lens distortion to the composited result.
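As a rough illustration of that round trip, the sketch below removes and then re-applies the simple radial model from the earlier snippet for a single point. This is only a toy example: production tools do this per pixel, with properly calibrated distortion parameters, and the k1 value here is made up.

```python
def distort(x, y, k1):
    """Forward radial model (same convention as the earlier sketch)."""
    scale = 1.0 + k1 * (x * x + y * y)
    return x * scale, y * scale

def undistort(xd, yd, k1, iterations=20):
    """Invert the radial model by fixed-point iteration.

    There is no closed-form inverse, so we repeatedly divide by the
    scale computed from the current estimate; this converges quickly
    for moderate distortion.
    """
    xu, yu = xd, yd
    for _ in range(iterations):
        scale = 1.0 + k1 * (xu * xu + yu * yu)
        xu, yu = xd / scale, yd / scale
    return xu, yu

k1 = -0.1
plate_point = (0.8, 0.5)                 # feature as seen in the distorted plate
ideal = undistort(*plate_point, k1)      # 1. remove distortion for tracking and CG
# ... track, solve, render and composite in the undistorted space ...
redistorted = distort(*ideal, k1)        # 2. re-apply distortion to the composite
print(plate_point, ideal, redistorted)   # redistorted ~= plate_point
```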
Types of lenses
There are (at least) two independent ways of classifying lenses: prime (fixed focal length) versus zoom, and spherical versus anamorphic. Any lens falls somewhere on both axes.
Prime lenses cannot change their focal length (more on focal length below), whereas zoom lenses can do so within their zoom range. Not being able to change the focal length comes with some advantages for prime lenses: the simpler design and fewer optical elements commonly result in a higher-quality image, for example one exhibiting less distortion than comparable zoom lenses.
A rule of thumb for matchmoving is that the more information about the real live-action camera you have, the easier it is to get a good solution. When it comes to collecting this camera information to assist camera tracking, prime lenses have the advantage that if you know which lens was used for a shot, you automatically know its focal length. This is much harder with zoom lenses: even if you know which lens was used, you still don’t know the focal length it was set to, and it is harder still to keep track of any focal length changes during the shot, ideally with frame accuracy. The good news is that knowing the type of zoom lens can still help matchmoving. If nothing else, the lens’s zoom range provides boundaries when calculating the actual focal length for each frame during matchmoving.
Anamorphic lenses’ breakthrough in filmmaking came with the adoption of widescreen formats: the scene was squeezed horizontally to use as much of the film’s surface area as possible, and unsqueezed again for projection.
With digital sensors, the need for anamorphic lenses is reduced to aesthetic considerations. Common anamorphic lenses squeeze the image horizontally by a factor of 2, which means that in the digitised image a single pixel covers an area twice as wide as it is high, compared to the square pixels of spherical lenses.
When matchmoving anamorphic footage, make sure to account for the correct pixel aspect ratio. In the above example, this ratio would be the common 2:1, but there are also lenses with other squeeze factors.
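As a quick sketch of what that pixel aspect ratio means in practice (the capture resolution below is hypothetical; the 2x squeeze is just the common case mentioned above), desqueezing anamorphic footage for display only changes the width:

```python
def desqueeze(width_px: int, height_px: int, squeeze: float) -> tuple[int, int]:
    """Return the display resolution of anamorphic footage.

    The pixel aspect ratio equals the squeeze factor: each captured
    pixel covers an area `squeeze` times wider than it is high, so
    only the width changes on desqueeze.
    """
    return round(width_px * squeeze), height_px

# A hypothetical 2x anamorphic capture on a 4:3 sensor region:
print(desqueeze(2880, 2160, 2.0))  # -> (5760, 2160), a 2.66:1 image
```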
Anamorphic lenses are available as both prime and zoom lenses.
Focal length in matchmoving
Focal length is a lens’s most prominent property. It is often the first thing mentioned in any listing of lenses to distinguish them, and it is what separates prime lenses from zoom lenses. The focal length, usually denoted in millimetres (mm), defines, for a given camera, the extent of the scene that is captured through the lens. This is also called the (angular) field of view (FOV).
It is no surprise that focal length also plays a part in matchmoving. On the other hand, it may surprise you that focal length is only half the story regarding camera tracking. Matchmoving applications are interested in the field of view rather than any focal length value in mm. To calculate this field of view, they need to know the focal length and the size of the camera’s sensor or film back.
You may have encountered this relationship with the term 35mm equivalent focal length. For example, the iPhone 5S’ primary camera’s sensor size is 4.89×3.67mm, and its lens has a focal length of 4.22mm. Its 35mm equivalent focal length, however, is 29mm, which means that to get the same FOV with a full-frame 36×24mm sensor, you would need a 29mm lens rather than the 4.22mm lens for the iPhone’s smaller sensor. This relationship is sometimes called crop factor, as RED Digital Cinema explains in Understanding Sensor Crop Factors.
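In code, both relationships are only a couple of lines. This sketch uses the standard pinhole-camera formulas; the iPhone numbers come from the article and are rounded, so the computed equivalent focal length lands near, rather than exactly on, the quoted 29mm.

```python
import math

FULL_FRAME = (36.0, 24.0)  # full-frame sensor size in mm

def horizontal_fov_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angular field of view of an ideal pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

def equivalent_focal_35mm(focal_mm: float, sensor_mm: tuple[float, float]) -> float:
    """Scale the focal length by the diagonal crop factor."""
    crop = math.hypot(*FULL_FRAME) / math.hypot(*sensor_mm)
    return focal_mm * crop

iphone_sensor = (4.89, 3.67)
print(horizontal_fov_deg(4.22, iphone_sensor[0]))  # ~60.2 degrees
print(equivalent_focal_35mm(4.22, iphone_sensor))  # ~29.9 mm, close to the quoted 29mm
```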
Luckily, the sensor sizes of most digital cameras can easily be found online, for example in the VFX Camera Database, so always note the camera model as well as the lens when collecting information on set.
The matter gets a bit more complicated through today’s plethora of different sensor sizes and the fact that, depending on the recording format, not all of the sensor is used to capture the image. For the field of view, it doesn’t matter whether one camera’s sensor is physically smaller than another’s or whether only a smaller region of the same sensor is used due to the chosen format. For example, your camera’s resolution may be 4500×3000, a 3:2 aspect ratio. If you shoot HD video with an aspect ratio of 16:9, parts of the sensor will not be recorded in the video. For a full-frame sensor, this reduces the effective sensor size for HD video from 36×24mm to 36×20.25mm.
Depending on the sensor size and recording format, the cropping may happen at the top and bottom, as in this example, or at the sides of the sensor; the sketch below covers both cases.
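Here is a small sketch of that cropping logic. It returns the effective sensor area actually recorded for a given delivery aspect ratio and reproduces the 36×20.25mm figure above; the function name is ours, not from any particular tool.

```python
def effective_sensor_size(sensor_w_mm: float, sensor_h_mm: float,
                          target_aspect: float) -> tuple[float, float]:
    """Effective sensor area recorded for a given delivery aspect ratio.

    If the target format is wider than the sensor, rows at the top and
    bottom go unused; if it is narrower, columns at the sides do.
    """
    if target_aspect > sensor_w_mm / sensor_h_mm:
        return sensor_w_mm, sensor_w_mm / target_aspect  # crop top & bottom
    return sensor_h_mm * target_aspect, sensor_h_mm      # crop the sides

print(effective_sensor_size(36.0, 24.0, 16 / 9))  # -> (36.0, 20.25)
```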
Conclusion: Lenses & Camera Tracking
The camera’s lens significantly impacts the VFX pipeline, and it is the matchmove artist’s job to mitigate most of this impact. The Pixel Farm’s matchmoving application, PFTrack, offers a wide range of tools both to make use of information about the lens and camera and to handle situations where no such information is available. It also provides the tools required to manage all aspects of lens distortion.
Start now and download PFTrack today to explore its powerful features for free in discovery mode!