

Updated: Dec 16, 2024



Knowledge of camera movement, shot size, and angle is essential for all skilled matchmove artists. When directors, editors, and camera operators refer to particular types of camera shots, the terminology can sound like a foreign language if you’re unfamiliar with it. Our essential guide to camera movement will help demystify some of the common terminology used in film production.



 


Core Camera Movement Types

This article will examine some of the more common terms we use to describe how a camera moves through a scene.


Camera motion is a fundamental part of how we narrate a story visually and has created some of the defining moments in popular films, such as the contra-zoom in Jaws (1975) or the Steadicam shots in The Shining (1980).


For matchmoving, camera motion is essential to help determine the correct scale, position and orientation of the camera within a 3D scene. This article will help you identify the types of camera motion that make up your shots. While not a complete summary of every camera movement type and term, it covers the core moves widely used in every production.


Static / Lock off


A picture of a camera in a static locked off position

The static shot, sometimes called a lock-off, has no intentional camera movement. While this might seem an easy shot to matchmove, as there is no camera movement to match, it can be tricky to match perspective exactly when integrating CGI. However, you can eliminate the guesswork and position a static camera accurately with an application like PFTrack, which can solve a scene using multiple cameras, even when one of them isn’t moving.


Pan


A picture demonstrating a camera in a panning motion

A panning shot is the horizontal rotation of the camera to the left or right of a given starting position. Depending on the choice of focal length, the relative movement of objects near to and far from the lens will be exaggerated: wide-angle lenses make distant objects move slowly and seem far away, while longer focal lengths make distant objects seem closer and move more quickly. With good parallax, matchmoving a panning shot can be relatively easy.


Nodal pan


A picture demonstrating a camera in a nodal pan motion

Nodal pans involve the same horizontal rotation to the left or right as the standard pan. The difference is that a nodal pan rotates the camera around the entrance pupil of the lens, which is intended to eliminate the parallax in the shot.


This type of movement is useful for stitching plates together for visual effects shots or for generating a large digital matte where parallax would be an issue. In the past, it was sometimes used to disguise foreground miniatures in forced-perspective shots. Generating a virtual camera from these shots can be tricky, as there are few or no cues to the depth of the scene.


Tilt


A picture demonstrating a camera in a tilt motion

A tilt is the vertical rotation of the camera up or down, usually from a fixed position, while keeping the horizontal axis consistent. Tilts are often used in establishing shots or reveals. Depending on the lens used and the position of the camera on the tripod, these shots can be trickier to matchmove than a pan.


Pan and tilt


A picture demonstrating a camera in a panning and tilting motion

This is a combination of horizontal and vertical motion from a fixed point. An example shot may follow a character as they walk from one end of a room to another, panning and tilting the camera as they go to keep the framing consistent.



Track/dolly


A picture demonstrating a camera in a tracking / dolly motion

A tracking shot, also known as a dolly shot, is the forward and backward motion of the camera, commonly used to follow a character as they traverse a scene. While these shots can seem daunting to matchmove, with suitable masking it can actually be quite easy to find a solution.


Lateral track/crab/truck


A picture demonstrating a camera in a lateral track motion

Similar to a standard tracking shot, lateral tracking – or crab – is the sideways movement of the camera. Depending on the scene, this type of shot can provide a large amount of parallax, which is useful when calculating depth and solving a camera. Some good examples of lateral tracking shots can be found in the films of Wes Anderson and Steven Spielberg.


Crane / pedestal / jib


A picture demonstrating a camera in a craning motion

This is the vertical raising or lowering of the camera, which normally stays in roughly the same horizontal position as it moves up or down. On some rigs, the camera can also be boomed out for a more complex motion. These shots are often used to establish the geography of a scene, starting high and lowering to eye level. Thanks to the elevated perspective, crane shots can make it easier than other shots to establish a good ground plane when matchmoving.


Handheld


A picture demonstrating a camera with handheld motion

Handheld is as it sounds – the camera operator is hand-holding the camera, usually shoulder-mounted or slung underarm. Movement of the camera is completely free because there are no mechanical axial restrictions. Some good examples of handheld camera work can be found in Paul Greengrass's films. Motion blur can become a factor when attempting to matchmove handheld shots. The motion can also be hard to predict due to its non-linear nature.


Stabilised


A picture demonstrating a camera with stabilised motion

Usually mounted on a Steadicam, gimbal, or a combination of the two, a stabilised camera moves through the scene, performing many, if not all, of the camera moves as handheld but with the ability to remove the high-frequency movement. Smooth, stable shots with linear motions are generally much easier to matchmove.


Aerial / drone

Aerial shots taken from either a helicopter or a drone allow the camera to reach a greater elevation than a crane or jib while being stabilised via a gimbal to remove high-frequency movement. They are usually combined with other camera moves, tracking forward or backward through the scene to establish an environment or follow the action from above. Due to the elevated perspective, these shots often provide plenty of trackable detail and parallax when matchmoving.


 

Conclusion

Of course, shots can combine many of the techniques above, and there are also many more complex camera movements, but it’s good to be able to identify the basic components that make a shot. In part two, we will examine the common terms used to describe the framing of a scene in both size and angle.


 





Start now and download PFTrack today to explore its powerful features for free in discovery mode!











Updated: Jul 17, 2024



Understanding the language used for shot size/camera angles and combining them with the terms used for camera motion can help you identify and communicate the type of shot you have been assigned. Additionally, it can help you break a shot into components to get a better estimate of how long something will take. This article examines and demystifies the industry terms used to describe camera shot sizes and angles. 


 

Shot Size: what is it?

It may not initially seem like the shot size will affect your matchmoving in the same way camera movement does, but depending on how we arrive at a specific framing choice, it can provide us with some useful clues that can help during the matchmoving process. Let’s take a look at the terms we use to describe shot size and angle.


The shot size determines how large or small a character or subject is in the frame relative to their surroundings. How we arrive at the various framing sizes can also impact matchmoving.


Traditionally, the director/director of photography would choose a favoured single focal length for a scene, and the camera would be physically moved backwards and forwards until the correct shot size was achieved. Alfred Hitchcock famously used 50mm throughout most of his films, building the sets to accommodate the focal length. The benefit of the single focal length is that the distortion will be consistent across all shots.


Sometimes, due to a scene's physical location, a given focal length is not possible. For example, in order to achieve the angle of view required for a particular framing size in a small room, the DOP may have to swap to a shorter focal length. This can potentially distort trackable features over the frame.


Occasionally, the shot size needs to be adjusted after the shooting has finished. This could be for a technical reason, such as cropping an undesired element out of a shot or for a thematic reason, perhaps because there wasn’t coverage from a specific frame size for that section. This is where one of the most tricky situations for matchmoving can arise. Panning and scanning a clip in post-production will mean that the optical centre no longer coincides with the centre of the image, which can adversely affect how the camera motion, focal length, and distortion are calculated.


We will start by looking at the widest perspective and move towards the narrowest.


Extreme long shot (ELS) / Very long shot (VLS)



Starting with the widest framing, the extreme/very long shot is used to establish a scene, usually the geography of where a character or subject may be. Provided there is sufficient parallax and low distortion, the ELS can offer many trackable features due to its large angle of view.


Long shot (LS)



This shot size, sometimes referred to as a wide shot or full shot, is frequently used for action shots showing the character or subject in full and in context with their surroundings. Often used for master shots, this shot size, along with the ELS, is one of the more common shots you will encounter when creating set extensions for a scene.


Medium long shot (MLS)



The medium-long shot, also known as the three-quarters shot, refers to framing a character from the knees up. Wider than a medium shot and closer than a long shot, this particular shot type allows multiple characters and elements to be in frame at the same time while being close enough for dialogue.


Medium shot (MS)



The medium shot frames the character from the waist up, which is why it is sometimes called a waist shot. It is a general-purpose shot intended to direct the viewer’s attention to the character and motions rather than the surroundings.


Medium close-up (MCU)



Closer than the medium shot, the medium close-up is usually framed from the chest or shoulders up and is used to showcase a character’s face. It is used mostly for dialogue shots, and the surroundings generally don’t feature heavily in this framing set-up.


Close up (CU)



Framed from the neck up, the character’s face will almost fill the entire frame. The close-up is used to focus the viewer more intensely on the character’s facial detail and expressions. You might see a shot like this where geometry tracking is required to apply digital makeup effects or to replace the head entirely. It can be tricky to establish camera placement with a close-up as the trackable features may be obscured or blurred.


Extreme close up (ECU)



Sometimes referred to as a big close-up, the extreme close-up will frame only a portion of a character’s face. An example of this would be where a character’s eyes fill most of the frame. A famous example of such a shot can be seen in the opening to Blade Runner (1982) where the character’s eye fills the frame. As with the close-up, you might come across this type of shot where geometry tracking is required to change or replace a key part of the character’s face.


Insert (INS)



The definition of insert will vary greatly depending on whom you talk to, but the traditional definition is a detail shot of an inanimate object or part of the body other than the head. The ultimate purpose is to look closely at something in the scene. An example of an insert shot is a hand operating a dial on a radio. Inserts can be taken from multiple angles but are generally a tight shot size similar to a close-up. You might come across this type of shot size when matchmoving the camera so that a digital object can be placed in the scene.


 

Angles

Knowledge of the camera’s angle can be very useful when matchmoving. It helps us determine the orientation of the camera relative to the subject. Below are some of the more common angles you will likely encounter.


Eye level



Sometimes referred to as neutral, this particular angle is filmed from the viewer’s perspective or the character’s eyeline. With this type of shot, you can usually make an educated guess that the camera will be around 5–6 feet from the ground.


High angle



Taken from above eye level with the camera pointed downward, this angle is often used to convey the vulnerability of a character or subject in the scene. The elevated perspective of this particular angle type can sometimes make it easier than others to establish a ground plane.


Top shot



Also called a bird’s-eye view, this shot is taken from a straight-down perspective, usually from quite a high elevation, to show the character in the context of their surroundings. Due to the flattened perspective, it can be tricky to matchmove, especially if shot from a high elevation on a longer focal length with little variation in the elevation of the geography.


Low angle



Shot from a low position and angled up, this is perhaps one of the trickier angles to matchmove, as there is no easy way of determining where the ground plane is. Additional cameras filming the same scene can potentially be used in PFTrack to accurately determine the low angle’s orientation.


Canted angle



Also known as a Dutch tilt, this framing rolls the camera on the side axis so that the horizon is not parallel with the bottom of the frame. Canted angles are often used to convey unease within a scene.


While specific shot sizes and angles can initially seem problematic, some useful tools in PFTrack can help you find a solution. For example, you can matchmove multiple cameras into the same 3D scene by looking for similar features in each shot. You can even use set photos and witness cameras to help.


 

Conclusion

Now when you hear someone say they are working on a low-angle, long shot tracking into a medium close-up, you can already picture what this might look like and the components that make up the shot.


 







Updated: Aug 22, 2024


A picture of a cinema camera with a 18mm prime lens attached

In this article, we focus purely on the lens, a component many consider to be the most influential factor in a film's look. But how does it influence the way we matchmove?


What is lens distortion, and how does it affect camera tracking?

When taking a picture or filming, the job of the lens is to direct beams of light onto the film or image sensor. In reality, lenses are not perfect at this job, and light from a straight edge in the scene often ends up projected as a curve, resulting in a distorted image. This is called lens distortion. The most straightforward types are barrel distortion, where straight lines curve outwards, and pincushion distortion, where straight lines curve inwards. Lens distortion is usually more pronounced towards the edges of the frame.


Image demonstrating the difference between barrel distortion and pincushion distortion

Lens distortion causes features in the captured image to deviate from where an ideal, distortion-free camera would place them. Matchmoving applications, however, often assume just such an ideal camera as their underlying model when reengineering the camera and its movement for a shot.


Where image features deviate from the assumed position in a perfect camera, their corresponding reengineered 3D positions will not match their real-world locations. In the worst cases, this could cause your camera track to fail.
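To illustrate how this deviation behaves, radial distortion is commonly modelled as a polynomial in the squared distance from the optical centre. The sketch below assumes a minimal one-parameter model with normalised image coordinates; production trackers, including PFTrack, use considerably more elaborate models, and sign conventions vary between packages:

```python
def apply_radial_distortion(x, y, k1):
    """Displace a normalised image point (x, y), measured from the
    optical centre, using a one-parameter radial model.
    Negative k1 pulls points inwards (barrel distortion); positive
    k1 pushes them outwards (pincushion distortion)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return x * factor, y * factor

# With barrel distortion, a point near the frame edge is displaced far
# more than one near the centre, because the error grows with radius.
near_centre = apply_radial_distortion(0.1, 0.0, k1=-0.2)  # ~(0.0998, 0.0)
near_edge = apply_radial_distortion(1.0, 0.0, k1=-0.2)    # (0.8, 0.0)
```

A feature near the edge is pulled 20% of the way towards the centre here, while a feature near the centre barely moves, which is why a solver assuming an ideal camera mislocates edge features the most.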


the effect of lens distortion on solved 3D points

But that’s not where lens distortion’s influence in visual effects ends. For example, the mathematically perfect cameras in 3D animation packages do not exhibit any lens distortion either, so undistorted CG images would not fit the distorted live-action plate. Even where 3D packages can artificially distort the renders, the distortion must match that of the lens that shot the plate for the composite to work.


In practice, the effects of lens distortion on the plate (the live-action image) are removed during camera tracking, which makes the matchmoving artist responsible for dealing with lens distortion. The result is a mathematically perfect virtual camera and undistorted plates. The virtual camera is then used to render the CG elements, which are composited into the undistorted plates. At this point, we have perfectly matched CG integrated into the undistorted live-action plate. However, with other (non-VFX) parts of the footage still exhibiting lens distortion, your undistorted VFX shots may stand out, even if the CG is perfectly matched. That’s why, at the end of this process, the original lens distortion is re-applied to the composited frames. Consequently, a matchmoving application not only needs the ability to remove lens distortion and export undistorted plates, but must also provide a means to re-apply the same lens distortion to the composited result.


Types of lenses

There are (at least) two ways of classifying lenses: prime (fixed focal length) versus zoom, and spherical versus anamorphic.


Prime lenses cannot change their focal length (more on focal length below), whereas zoom lenses can do so within their zoom range. Not being able to change the focal length comes with some advantages for prime lenses. The more straightforward design and fewer optical elements in the lens commonly result in a higher-quality image, for example exhibiting less distortion than comparable zoom lenses.


A rule of thumb for matchmoving is that the more information about the real camera you have, the easier it is to get a good solution. When collecting camera information to assist camera tracking, prime lenses have the advantage that if you know which lens was used for a shot, you automatically know its focal length. This is much harder with zoom lenses: even if you know which lens was used, you still don’t know which focal length it was set to, and it is harder still to keep track of any focal length changes during the shot, ideally with frame accuracy. The good news is that knowing the type of zoom lens can still help matchmoving. If nothing else, knowing the range of a zoom lens provides boundaries when calculating the actual focal length for a frame during matchmoving.


Anamorphic lenses’ breakthrough in filmmaking came with the adoption of widescreen formats: the image was squeezed horizontally to utilise as much of the film’s surface area as possible.

With digital sensors, the need for anamorphic lenses is reduced to aesthetic considerations. Common anamorphic lenses squeeze the image horizontally by a factor of 2, which means that in the digitised image a single pixel represents an area twice as wide as it is high, compared to the square pixels of spherical lenses.


a series of images showing the fov difference between spherical and anamorphic lenses

When matchmoving anamorphic footage, make sure to account for the correct pixel aspect ratio. In the above example, this ratio would be the common 2:1, but there are also lenses with different ratios.
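To make the squeeze concrete, here is a small sketch of de-squeezing a recorded resolution by the lens’s squeeze factor. The resolutions used are purely illustrative:

```python
def desqueeze_resolution(width_px, height_px, squeeze_factor=2.0):
    """Return the display resolution of an anamorphic plate after
    de-squeezing it horizontally. The pixel aspect ratio of the
    recorded plate equals the squeeze factor (e.g. 2:1 for a 2x
    anamorphic lens)."""
    return int(round(width_px * squeeze_factor)), height_px

# A 2x anamorphic plate recorded at 2048x1716 pixels displays at
# 4096x1716 once de-squeezed.
print(desqueeze_resolution(2048, 1716))  # (4096, 1716)
```

Setting the wrong pixel aspect ratio in your tracking software effectively stretches or compresses every feature track horizontally, which corrupts the solve in the same way a wrong sensor size would.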


Anamorphic lenses are available as both prime and zoom lenses.



Focal length in matchmoving

Focal length is a lens’s most prominent property; it is usually the first thing mentioned in any listing of lenses and is what separates prime lenses from zoom lenses. The focal length, usually denoted in millimetres (mm), defines, for a given camera, the extent of the scene captured through the lens. This is also called the (angular) field of view (FOV).


It is no surprise, then, that focal length also plays a part in matchmoving. It may surprise you, however, that focal length is only half the story in camera tracking: matchmoving applications are interested in the field of view rather than any focal length value in mm. To calculate the field of view, they need to know both the focal length and the size of the camera’s sensor or film back.


camera frustum showing sensor size and fov differences
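The relationship between focal length, sensor size and field of view can be written down directly. This sketch assumes a simple pinhole camera model and an approximate Super 35 sensor width of 24.89mm for comparison:

```python
import math

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm):
    """Horizontal angular field of view of a pinhole camera:
    fov = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(
        2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# The same 35mm lens covers a much wider view on a full-frame sensor
# (36mm wide) than on a Super 35 sensor (~24.89mm wide).
full_frame = horizontal_fov_degrees(35.0, 36.0)   # ~54.4 degrees
super_35 = horizontal_fov_degrees(35.0, 24.89)    # ~39.2 degrees
```

This is why quoting a focal length alone is ambiguous for matchmoving: the same lens produces very different fields of view on different sensors.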

You may have encountered this relationship with the term 35mm equivalent focal length. For example, the iPhone 5S’ primary camera’s sensor size is 4.89×3.67mm, and its lens has a focal length of 4.22mm. Its 35mm equivalent focal length, however, is 29mm, which means that to get the same FOV with a full-frame 36x24mm sensor, you would need a 29mm lens rather than the 4.22mm lens for the iPhone’s smaller sensor. This relationship is sometimes called crop factor, as RED Digital Cinema explains in Understanding Sensor Crop Factors.
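The crop-factor calculation itself is just the ratio of sensor diagonals. A sketch using the figures quoted above; note that with these rounded sensor dimensions the result lands near 30mm rather than exactly 29mm, as published equivalents are based on more precise sensor measurements:

```python
import math

def equivalent_focal_35mm(focal_length_mm, sensor_w_mm, sensor_h_mm):
    """35mm-equivalent focal length via the diagonal crop factor:
    the real focal length multiplied by the ratio of the full-frame
    diagonal (36x24mm) to the sensor's diagonal."""
    crop_factor = math.hypot(36.0, 24.0) / math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_length_mm * crop_factor

# iPhone 5S primary camera: 4.22mm lens on a 4.89x3.67mm sensor.
equiv = equivalent_focal_35mm(4.22, 4.89, 3.67)  # ~29.9mm
```

For matchmoving, the practical takeaway is the reverse direction: if metadata only gives you a 35mm-equivalent focal length, you need the real sensor size to recover the actual focal length and field of view.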


Luckily, the sensor sizes for most digital cameras can be found easily online, for example, in the VFX Camera Database, so always note the camera model and the lens when collecting information on the set.


The matter gets a bit more complicated with today’s plethora of sensor sizes and the fact that, depending on the recording format, not all of the sensor is used to capture the image. In the above illustration, it doesn’t matter whether the sensor in the bottom camera is smaller than the one in the top camera or whether only a smaller part of the same sensor is used due to the chosen format. For example, your camera’s resolution may be 4500 x 3000, a 3:2 aspect ratio. If you shoot HD video with a 16:9 aspect ratio, part of the sensor will not be recorded. For a full-frame sensor, this reduces the effective sensor size for HD video from 36 x 24 mm to 36 x 20.25 mm, as illustrated below.


example of an aspect ratio crop on a full frame sensor


Depending on the sensor size and format, cropping may occur at the top & bottom, as in the example above, or from the sides of the sensor.
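The effective sensor size for a given recording format can be sketched as follows, assuming the camera keeps the full sensor width for wider formats and the full height for taller ones:

```python
def effective_sensor_size(width_mm, height_mm, target_aspect):
    """Portion of the sensor actually used when recording in an
    aspect ratio different from the sensor's native one."""
    native_aspect = width_mm / height_mm
    if target_aspect > native_aspect:
        # Wider target: keep the full width, crop top and bottom.
        return width_mm, width_mm / target_aspect
    # Taller (or equal) target: keep the full height, crop the sides.
    return height_mm * target_aspect, height_mm

# Full-frame sensor (36x24mm, 3:2 native) recording 16:9 HD video.
w, h = effective_sensor_size(36.0, 24.0, 16.0 / 9.0)  # 36.0 x 20.25mm
```

It is this effective size, not the full sensor size from the spec sheet, that should be entered as the film back when matchmoving cropped footage.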


Conclusion: Lenses & Camera Tracking

The camera’s lens significantly impacts the VFX pipeline, and the matchmove artist’s job is to mitigate most of this impact. The Pixel Farm’s matchmoving application, PFTrack, has a wide range of tools to use information about the lens and camera and handle situations where no such information is available. It also provides the tools required to manage all aspects of lens distortion.



 






