Camera movements can also be motion captured, so that a virtual camera in the scene pans, tilts, or dollies around the stage under the control of a camera operator while the actor is performing; the motion capture system records the camera and props as well as the actor's performance. This allows the computer-generated characters, images, and sets to share the perspective of the camera's video images. A computer processes the data and displays the actor's movements together with the camera's position relative to objects in the set.
Motion capture offers several advantages over traditional computer animation of a 3D model.
Video games often use motion capture to animate athletes, martial artists, and other in-game characters. Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for completely computer-generated creatures, such as Gollum, The Mummy, and King Kong.
Sinbad: Beyond the Veil of Mists was the first movie made primarily with motion capture, although many character animators also worked on the film.
In producing entire feature films with computer animation, the industry is currently split between studios that use motion capture, and studios that do not. Out of the three nominees for the 2006 Academy Award for Best Animated Feature, two of the nominees (Monster House and the winner Happy Feet) used motion capture, and only Pixar's Cars was animated without motion capture. In the ending credits of Pixar's film Ratatouille, a stamp appears labelling the film as "100% Pure Animation -- No Motion Capture!"
Motion capture has begun to be used extensively to produce films which attempt to simulate or approximate the look of live-action cinema, with nearly photorealistic digital character models. The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (for which he also provided the voices). The 2007 adaptation of Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. The Walt Disney Company has announced that it will distribute Robert Zemeckis's A Christmas Carol using this technique.
Virtual Reality and Augmented Reality allow users to interact with digital content in real time. This can be useful for training simulations, visual perception tests, or performing virtual walk-throughs in a 3D environment. Motion capture technology is frequently used in digital puppetry systems to drive computer-generated characters in real time.
Gait analysis is the major application of motion capture in clinical medicine. Markerless motion capture allows clinicians to evaluate human motion without burdening patients with body suits or tracking devices. Patients can move freely within a defined area while cameras map the person's silhouette from 3 to 24 perspectives, fit it to a model of the person, and track range of motion, gait, and several other biometric factors, streaming this information live into analytical software. Because markers are eliminated, physicians and analysts can collect quantifiable data in real time with less patient inconvenience, although markerless systems tend to have centimeter resolution versus the sub-millimeter resolution of most marker-based systems.
Motion tracking or motion capture started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports and, more recently, computer animation for cinema and video games as the technology matured. A performer wears markers near each joint to identify the motion by the positions of, or angles between, the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at least two times the rate of the desired motion, to sub-millimeter positions. The motion capture software records the positions, angles, velocities, accelerations, and impulses, providing an accurate digital representation of the motion.
In biomechanics, sports, and training, real-time data can provide the necessary information to diagnose problems or suggest ways to improve performance; this can require motion capture technology that handles motions as fast as 140 miles per hour, as in a golf swing.
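As a rough illustration of why such speeds demand high capture rates, one can compute how far a fast-moving point travels between two consecutive frames. The frame rates below are chosen for illustration only, not taken from the text:

```python
MPH_TO_MPS = 0.44704  # miles per hour to meters per second

def travel_per_frame_mm(speed_mph, frame_rate_hz):
    """Distance a point moving at speed_mph travels between two
    consecutive captured frames, in millimeters."""
    return speed_mph * MPH_TO_MPS / frame_rate_hz * 1000.0

# A club head at 140 mph is about 62.6 m/s:
d_120 = travel_per_frame_mm(140, 120)    # ~522 mm between frames at 120 Hz
d_2000 = travel_per_frame_mm(140, 2000)  # ~31 mm between frames at 2000 Hz
```

Even at 2,000 frames per second the marker moves roughly 3 cm between samples, which is why very high frame rates are needed to resolve the fastest phases of a swing.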
Passive optical systems use markers coated with a retroreflective material to reflect light that is generated near the camera's lens. The camera's threshold can be adjusted so that only the bright reflective markers are sampled, ignoring skin and fabric.
The centroid of the marker is estimated as a position within the two-dimensional captured image. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the marker's approximately Gaussian intensity profile.
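One common way to realize this sub-pixel estimate is an intensity-weighted centroid (center of mass) over the blob's grayscale values. A minimal sketch, assuming the marker blob has already been segmented from the image:

```python
def subpixel_centroid(image):
    """Estimate a marker's center to sub-pixel accuracy as the
    intensity-weighted centroid of its pixel values.
    `image` is a 2D list of grayscale values around a detected blob."""
    total = cx = cy = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            total += value
            cx += x * value
            cy += y * value
    if total == 0:
        raise ValueError("blob contains no intensity")
    return cx / total, cy / total

# A bright blob whose true peak lies between pixel centers:
blob = [
    [0, 10, 20, 0],
    [0, 40, 80, 0],
    [0, 10, 20, 0],
]
x, y = subpixel_centroid(blob)  # x lands between columns 1 and 2
```

Fitting a 2D Gaussian to the blob gives still better estimates, but the weighted centroid captures the basic idea: the fractional position comes from how the brightness is distributed across pixels.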
An object with markers attached at known positions is used to calibrate the cameras, obtaining each camera's position and lens distortion. Provided that two calibrated cameras can see a marker, a three-dimensional fix can be obtained. A typical system consists of around 6 to 24 cameras; systems with over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and for multiple subjects.
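Once the cameras are calibrated, each 2D marker observation defines a ray in space, and a 3D fix follows from intersecting two such rays. A minimal sketch using the midpoint of closest approach between the rays (one of several standard triangulation methods; production systems typically use a least-squares formulation over all observing cameras):

```python
def triangulate(p1, d1, p2, d2):
    """Return the 3D fix for a marker seen by two calibrated cameras.
    Each camera contributes a ray: origin p (camera position) and
    direction d (toward the observed marker). The fix is the midpoint
    of the segment of closest approach between the two rays."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]

    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b        # approaches 0 for near-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t1))  # closest point on ray 1
    q2 = add(p2, scale(d2, t2))  # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Two cameras whose rays intersect exactly at the marker (1, 1, 1):
fix = triangulate([0, 0, 0], [1, 1, 1], [2, 0, 0], [-1, 1, 1])
```

With real (noisy) observations the rays rarely intersect exactly, which is why the midpoint, or a least-squares solution over many cameras, is used rather than a direct intersection.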
Vendors provide constraint software to reduce problems from marker swapping, since all passive markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment; instead, hundreds of rubber balls with reflective tape are used, which must be replaced periodically. The markers are usually attached directly to the skin (as in biomechanics), or they are velcroed to a performer wearing a full-body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates as high as 2000 fps. The frame rate for a given system often trades resolution against speed: a 4 megapixel system might normally run at 370 hertz, but can reduce the resolution to 0.3 megapixels and then run at 2000 hertz. Typical systems cost $100,000 for 4 megapixel 360 hertz systems, and $50,000 for 0.3 megapixel 120 hertz systems.
Active marker systems can be further refined by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide marker IDs. 12 megapixel spatial resolution modulated systems show more subtle movements than 4 megapixel optical systems by having both higher spatial and higher temporal resolution. Directors can see the actor's performance in real time, and watch the results on the mocap-driven CG character. The unique marker IDs reduce turnaround by eliminating marker swapping and providing much cleaner data than other technologies. LEDs with onboard processing and radio synchronization allow motion capture outdoors in direct sunlight, while capturing at 480 frames per second thanks to a high-speed electronic shutter. Computer processing of modulated IDs allows less hand cleanup and less filtering of results, for lower operational costs. This higher accuracy and resolution requires more processing than passive technologies, but the additional processing is done at the camera to improve resolution via sub-pixel or centroid processing, providing both high resolution and high speed. These motion capture systems are typically under $50,000 for an eight-camera, 12 megapixel spatial resolution, 480 hertz system with one actor.
One can reverse the traditional approach based on high-speed cameras. Systems such as Prakash use inexpensive multi-LED high-speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of retroreflective or active light-emitting diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, each tag can compute not only its own location, but also its own orientation, incident illumination, and reflectance.
These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates the high-speed camera and the corresponding high-speed image stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data, which can be used to match scene lighting when inserting synthetic elements. The technique is therefore well suited to on-set motion capture and real-time broadcasting of virtual sets.
One commercially available markerless system, designed by Organic Motion, was featured during Intel CEO Paul Otellini's keynote address at the 2008 Consumer Electronics Show in Las Vegas. During the demonstration, singer Steven Harwell of the band Smash Mouth performed live while tracking data generated in real time by the markerless system were instantaneously fed into the Unreal Engine 3. By using the motion capture system as an input device, the game engine used the tracking data to animate a virtual Steve located within a garage scene. The demonstration showcased the adaptability of markerless technology in service industries such as patient care, where a variety of subjects could benefit from motion analysis without the need for extensive user calibration. These systems work well with large motions, but tend to have difficulties with fingers, faces, wrist rotations, and small motions. Some systems require no special suits, while others prefer special colors to identify limbs.
Facial motion capture is used to record the complex movements of a human face, especially while speaking with emotion. This is generally performed with an optical setup using multiple cameras arranged in a hemisphere at close range, with small markers glued or taped to the actor's face. However, there are a number of systems, such as Image Metrics, Mova's Contour, and artemdigital's NVisage, that offer the ability to capture realistic facial expressions and dialogue without the use of such markers.
RF (radio frequency) positioning systems are becoming more viable as higher-frequency RF devices allow greater precision than older RF technologies. The speed of light is 30 centimeters per nanosecond (billionth of a second), so a 10 gigahertz (billion cycles per second) RF signal enables an accuracy of about 3 centimeters. By measuring amplitude to a quarter wavelength, it is possible to improve the resolution to about 8 mm. To achieve the resolution of optical systems, frequencies of 50 gigahertz or higher are needed, which are nearly as line-of-sight limited and as easy to block as optical systems. Multipath and reradiation of the signal are likely to cause additional problems, but these technologies will be well suited to tracking larger volumes with reasonable accuracy, since the required resolution at 100 meter distances isn't likely to be as high.
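The wavelength arithmetic behind these figures is straightforward:

```python
C = 3.0e8  # speed of light in m/s, i.e. about 30 cm per nanosecond

def wavelength_m(freq_hz):
    """Wavelength of an RF carrier at the given frequency."""
    return C / freq_hz

# A 10 GHz carrier has a 3 cm wavelength, setting the basic
# accuracy scale for time-of-flight ranging:
wl_10ghz = wavelength_m(10e9)   # 0.03 m = 3 cm
# Measuring amplitude to a quarter wavelength refines this:
quarter = wl_10ghz / 4          # 0.0075 m, about 8 mm
# At 50 GHz the wavelength shrinks to 6 mm, so the same
# quarter-wave trick approaches optical-class resolution:
wl_50ghz = wavelength_m(50e9)   # 0.006 m
```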
An alternative approach was developed in which the actor is given an unlimited walking area through the use of a rotating sphere, similar to a hamster ball, which contains internal sensors recording the angular movements, removing the need for external cameras and other equipment. Even though this technology could potentially lead to much lower costs for mocap, the basic sphere is only capable of recording a single continuous direction. Additional sensors worn on the person would be needed to record anything more.
A studio in the Netherlands is using a 6DOF (degrees of freedom) motion platform with an integrated omni-directional treadmill and high-resolution optical motion capture to achieve the same effect. The captured person can walk in an unlimited area, negotiating different uneven terrains. Applications include medical rehabilitation for balance training, biomechanical research, and virtual reality.
Several research groups are addressing the task of markerless motion capture.