MoCap technology has developed rapidly, especially in the movie and game industries, where it has come to play a crucial role in creating animation that looks as smooth as real life. This article begins by explaining MoCap technology: what it is, how it is useful, and who uses it, to give readers a better understanding. After that, the article focuses on Weta Digital, one of the world's leading film studios, which has continued to develop MoCap technology to a highly successful level. Nowadays, MoCap is used in many famous films, for example the characters in Beowulf, the aliens in District 9, Gollum in The Lord of the Rings trilogy, and the giant gorilla Kong in King Kong.
To give a more detailed picture of MoCap, Avatar, an epic movie with some of the most stunning computer graphics (CG) ever produced, was chosen as an example of the latest development of the technology. The film was also hugely popular and has clearly influenced the direction of the film industry.
WHAT IS MOCAP?
MoCap is an abbreviation of Motion Capture; this technology is a process by which movement is digitally recorded. The technique was originally used for military tracking purposes and in sports as a tool for biomechanics research, which focuses on the mechanical workings of the body, such as how the heart and muscles work and move. Over the last twenty-five years, motion capture has become an important tool in the entertainment business, giving computer animators the ability to make non-human characters more life-like. It is a technology found in animated films and television as well as video games.
Historically, MoCap in animated films was created using an extension of the rotoscoping technique. In this technique, an actor is filmed making certain moves or gestures while wearing markers on specific points of his or her body. Each marker in each frame of film is then manually encoded into the computer. As animation software improved, it became possible to apply an algorithm to the markers that attaches them to a 3D object, creating what is now called motion capture.
MoCap can be classified by its four most important input methods, which are the prosthetic, acoustic, magnetic, and optical methods.
1. Prosthetic (or mechanical) method: This is one of the first methods for capturing motion from different parts of the human body. These devices range from simple "on/off" motion detection systems to complex motion tracking systems. The latter kind of prosthetic motion capture could be an ideal approach were it not for the sophisticated mechanical requirements and the performance-inhibiting features generally associated with such designs. However, the data provided can be clean rotational data gathered in real time with no occlusion problems. The technique is based on a set of armatures which must be attached all over the performer's body. The armatures are then linked to each other by a group of rotational and linear encoders. These encoders are connected to an interface that can read all the encoders simultaneously in order to prevent data skewing. Finally, through a set of trigonometric functions, the performer's motion can be analyzed. These design constraints appear to be quite difficult to overcome and will probably limit the use of these kinds of devices for character animation.
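As a rough illustration of that last step, the sketch below shows how accumulated encoder rotations and segment lengths recover joint positions with basic trigonometry. The two-segment arm, its encoder angles, and its segment lengths are invented for illustration and not taken from any real system.

```python
import math

def joint_positions(encoder_angles_deg, segment_lengths):
    """Recover planar joint positions from rotational encoder readings.

    Each armature segment adds its encoder angle to the running
    orientation, then offsets by the segment length - the basic
    trigonometry a mechanical MoCap interface performs.
    """
    x, y, heading = 0.0, 0.0, 0.0
    positions = [(x, y)]                     # root joint at the origin
    for angle_deg, length in zip(encoder_angles_deg, segment_lengths):
        heading += math.radians(angle_deg)   # encoders report relative rotation
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Hypothetical arm: shoulder raised 90 degrees, elbow bent back 45 degrees.
print(joint_positions([90.0, -45.0], [0.3, 0.25]))
```

Chaining this over every armature on the suit yields the full skeletal pose for each sample.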
2. Acoustic method: Acoustic capture is another method currently used for performance capture. This method involves the use of a triad of audio receivers. An array of audio transmitters is strapped to various parts of the performer's body. The transmitters are sequentially triggered to output a "click", and each receiver measures the time it takes for the sound to travel from each transmitter. The calculated distance to each of the three receivers is triangulated to give a point in 3D space. An inherent issue with this approach is the sequential nature of the position data it generates. In general, we would like to see a "snapshot" of the performer's skeletal position rather than a time-skewed data stream. This position data is typically fed into an inverse kinematics system(1) which drives an animated skeleton.
One of the big advantages of this method is the absence of the occlusion problems normally associated with optical systems. However, several negative factors may impede its use. First, the wires can be a hindrance to various types of performance. Second, existing systems do not support enough transmitters to accurately capture the character of the performance. Third is the size of the capture area, which is limited by the speed of sound in air and the number of transmitters. Furthermore, the accuracy of this approach can be affected by spurious sound reflections.
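The triangulation step can be sketched as follows. The receiver layout and measured distances below are invented for illustration; a real system would derive the distances from measured sound travel times multiplied by the speed of sound.

```python
import math

def trilaterate(r1, r2, r3, d, i, j):
    """Locate a transmitter from its distances (r1, r2, r3) to three
    receivers placed at P1=(0,0,0), P2=(d,0,0), P3=(i,j,0).

    Standard trilateration: intersect three spheres. The receiver
    plane leaves a +/-z ambiguity; we return the +z solution.
    """
    x = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
    y = (r1 * r1 - r3 * r3 + i * i + j * j - 2.0 * i * x) / (2.0 * j)
    z2 = r1 * r1 - x * x - y * y
    z = math.sqrt(max(z2, 0.0))      # clamp tiny negatives caused by noise
    return x, y, z

# Receivers 4 m apart; distances correspond to a transmitter at (1, 2, 3).
print(trilaterate(14 ** 0.5, 22 ** 0.5, 14 ** 0.5, 4.0, 0.0, 4.0))
```

With noisy real-world distances the three spheres do not intersect exactly, which is why production systems over-determine the solution with more receivers and a least-squares fit.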
3. Magnetic method: This is a popular method used for performance capture. Magnetic capture involves the use of a centrally located transmitter and a set of receivers that are strapped to different parts of the performer's body. These receivers are capable of calculating their spatial relationship to the transmitter. Each receiver is connected to an interface that can be synchronized in order to prevent data skew. The resulting data stream consists of 3D positions and orientations for every receiver. This data is typically fed into an inverse kinematics system to drive an animated skeleton. The magnetic method shares the same freedom from occlusion problems as the acoustic method. But it also shares the same negative factors, such as the hindrance of cables, the lack of sufficient receivers, and the limited capture area. In addition, being magnetic, the system is affected by any sizable areas of metal in the vicinity of the capture area, such as girders, posts, etc.
4. Optical method: Optical systems have become quite popular over the last few years. These systems offer the performer the most freedom of movement, since they do not require any cabling. Optical systems use directionally reflective balls, referred to as markers, which attach to the performer. Optical systems require at least three video cameras, each of which is equipped with a light source aligned to illuminate that camera's field of view. Each camera is in turn connected to a synchronized frame buffer(2). The computer is presented with each camera view in order to compute a 3D position for each marker; the resulting data stream therefore contains 3D position data for every marker. This data is normally applied to an inverse kinematics system to animate a skeleton.
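The per-marker 3D computation can be illustrated with a minimal two-camera sketch. The camera positions and ray directions are invented for illustration; production systems use calibrated camera matrices and many more views, but the core idea is the same: intersect the viewing rays.

```python
def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation of a marker seen by two cameras.

    c1, c2: camera centers; d1, d2: ray directions from each camera
    through the marker's image. Returns the midpoint of the shortest
    segment between the two rays (the rays rarely intersect exactly).
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                # approaches 0 for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2.0 for u, v in zip(p1, p2)]

# Two cameras, 2 m apart, both looking at a marker at (1, 2, 5).
print(triangulate([0, 0, 0], [1, 2, 5], [2, 0, 0], [-1, 2, 5]))
```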
One typical problem with optical systems is that it is rather easy for the performer to occlude, or hide, a number of markers, creating "holes" in the data stream. Adding more cameras and/or more markers can lessen this occlusion problem. However, adding more cameras makes tracking each marker more complex, leading to increased CPU time. Increasing the number of markers can exponentially increase the "confusion factor", i.e. tracking which marker is which. Optical systems are also limited by the resolution of the cameras and the sophistication of their tracking software.
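One common way to patch such holes in post-processing is to interpolate across the missing frames. A minimal sketch (the marker track below is invented for illustration; real pipelines use splines or model-based gap filling rather than straight lines):

```python
def fill_gaps(track):
    """Fill None entries (occluded frames) in a 1D marker track by
    linear interpolation between the surrounding good samples.
    Gaps at the very start or end are left unfilled."""
    filled = list(track)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            start = i - 1                       # last good frame before the gap
            j = i
            while j < n and filled[j] is None:
                j += 1                           # j = next good frame after the gap
            if start >= 0 and j < n:
                a, b = filled[start], filled[j]
                span = j - start
                for k in range(i, j):
                    t = (k - start) / span       # fractional position in the gap
                    filled[k] = a + t * (b - a)
            i = j
        else:
            i += 1
    return filled

print(fill_gaps([1.0, None, None, 4.0, 5.0]))   # [1.0, 2.0, 3.0, 4.0, 5.0]
```

In practice this would be applied per coordinate (x, y, z) of each marker.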
>> http://www.wisegeek.com/what-is-motion-capture-technology.htm
>> http://vizproto.prism.asu.edu/classes/sp03/motioncapture.htm
WHO USES MOCAP?
MoCap is being used more and more in movies nowadays. MoCap-based animation is essential for creating characters that move realistically in situations that would be impractical or too dangerous for real actors (such as characters falling off the ship in Titanic). MoCap was also used extensively in Titanic for 'filler' characters (fit in between real actors) or in shots with virtual camera fly-bys over the virtual ship. Several shots would have been difficult or impossible to do with real cameras and a real ship, or real models, so virtual models, actors, and cameras were used. Some film characters require the use of MoCap; without it their animation seems unrealistic. More and more independent companies are beginning to put together desktop studios - the idea of two or three people creating a whole movie isn't that remote if MoCap is employed properly. The Gypsy(3) is ideal for small and large shops. MoCap animation can be carried out rapidly and inexpensively, without arranging expensive motion capture sessions in a studio.
Game development is the largest market for MoCap. With games drawing just as much revenue as movies, it is not hard to see why game development often demands enormous quantities of MoCap. The intense competition to produce the 'coolest game possible' (and thus, maybe, a top-seller) means that greater production capabilities translate into higher quality. More time is left for aesthetic finishing details and fine-tuning of gameplay.
Generally, there are two main types of 3D character animation used in video games: real-time playback vs. cinematics. Real-time playback allows the game player to choose from pre-created moves, thus controlling the character's actions in real time. Cinematics are the fully rendered 'movies' used for intros and 'cut-scenes'. Usually the last part of game creation, or a task that is sub-contracted to another studio, cinematics are generally not necessary to gameplay, but they add a lot of charm to the game and help immensely with story development and mood generation.
Video and TV Performance Animation:
Real-time motion capture is becoming popular for live television broadcasts. MoCap may be used to place a virtual character within a real scene, to place live actors within a virtual scene with virtual actors, or to place digital characters in a virtual set.
MoCap for real-time broadcast requires mock-ups of any non-standard anatomy (big stomachs, tails, etc.) to keep the performer's motions from causing the character's limbs to interpenetrate its body. Joint movement restrictions apply at the shoulders and knees. A real-time adaptation feature such as real-time motion mapping (from the performer's skeleton to a differently proportioned character's skeleton) is essential when the character's body is completely different from the actor's body.
When merging live elements with digital elements, the real and virtual cameras must share the same properties (perspective, focal length, depth of field, etc.); otherwise the illusion appears strange.
Post-Production for Ongoing Series:
MoCap for ongoing series is becoming very popular. Creating a weekly show without MoCap invariably causes episodes to be late or production studios to go bankrupt. Having an efficient MoCap pipeline is vital to the success of an ongoing character-animation-based series.
MoCap is suitable for the web, whether used to create virtual hosts or greeting cards. As the web becomes more advanced and bandwidth(4) increases, MoCap will help bring a 'human factor' to the web, in the form of characters that audiences can relate to and interact with.
MoCap-produced performance animation can be regarded as improvisation meets computer graphics (CG). At trade shows, conferences, or press conferences, a good improviser acting via a CG character in real time can create a very intriguing, memorable experience for the audience. Integration with live actors further helps create a remarkable experience.
MoCap is useful for perceptual research. By presenting test subjects with abstract movements distilled from motion capture data, repeatable experiments can be developed that provide insights into human perception.
Biomechanical analysis for rehabilitation purposes relies extensively on MoCap because of its ability to produce repeatable results. MoCap may be used to measure the magnitude of a client's disability and the client's progress with treatment. MoCap can also assist in the effective design of prosthetic devices.
MoCap is essential for producing product designs that are ergonomically sound, as well as designs for physical products that are comfortable and attractive, even though the systems have limitations: optical systems are often occluded and require a large distance between the subject and the cameras, and magnetic systems have major issues with metal in the capture space.
MoCap training can make a huge difference in an animator's training. While experience with MoCap is not a substitute for developing good art skills and good traditional character animation expertise, it can go a long way towards making someone more employable.
Virtual Reality (VR):
MoCap is essential for VR training applications. It makes for far better immersion than using a joystick or a positional controller.
>> http://vizproto.prism.asu.edu/classes/sp03/motioncapture.htm
TECHNOLOGIES IN FILM BEFORE MOCAP
1971: Metadata
An experimental 2D animated short by Peter Foldes, drawn on a data tablet, using the world's first keyframe animation software, invented by Nestor Burtnyk and Marceli Wein.
1973: Westworld
First use of 2D computer animation in a significant entertainment feature film. The point of view of Yul Brynner's gunslinger was achieved with raster graphics.
1976: Futureworld
First use of 3D computer graphics for an animated hand and face. Used 2D digital compositing to materialize characters over a background.
1977: Star Wars
First use of an animated 3D wire-frame graphic, for the trench run briefing sequence.
1981: Looker
First CGI human character, Cindy. First use of shaded 3D CGI as we think of it today.
First use of in-camera effects for thermal vision sequences (see Predator).
1982: Tron
Extensive use (15 min. fully computer generated) of 3D CGI, including the famous Light Cycle sequence. Also includes very early facial computer animation (for the Master Control Program).
1983: Rock & Rule
First animated film to use computer graphics.
1985: Tony de Peltrie
First CGI-animated human character to express emotion through his face and body language.
1985: Young Sherlock Holmes
Lucasfilm creates the first photorealistic CGI character, the 'stained glass knight', with 10 seconds of screen time.
First realistic CGI creature.
1987: Captain Power and the Soldiers of the Future
First TV series to include characters modeled entirely with computers.
1990: Total Recall
Use of motion capture for CGI characters.
1990: RoboCop 2
First use of real-time computer graphics, or "digital puppetry", to create a character in a motion picture.
1991: Terminator 2: Judgment Day
First realistic human movements on a CGI character. First use of a personal computer to create major movie 3D effects.
First broadcast series of animated CGI shorts.
1993: Jurassic Park
First photorealistic CG creatures.
First computer-animated television series. First use of character animation in a computer-animated television series.
First full-length computer-animated television series.
1994: Radioland Murders
First use of digital CGI models with live actors.
1995: Casper
First CGI lead character in a feature-length film (preceding Toy Story by six months). First CGI characters to interact realistically with live actors.
1995: Toy Story
First feature-length computer-animated film.
1997: Marvin the Martian in 3D
First computer-animated movie viewed with 3D glasses.
1999: Fight Club
First realistic close-up of comprehensive facial deformation on a synthetic human.
2001: Final Fantasy: The Spirits Within
First feature-length digital film produced based on photorealism and live-action principles.
2001: The Lord of the Rings: The Fellowship of the Ring
First use of AI for digital actors (using the Massive software developed by Weta Digital).
2003: The Matrix Reloaded
The Burly Brawl - the first use of "Universal Capture", the combination of dense (rather than point-based) motion capture and per-frame surface capture.
2003: Gollum from the Lord of the Rings trilogy
First photorealistic motion-captured character for a film, Gollum was also the first digital actor to win an award (BFCA), in a category created for Best Digital Acting Performance.
2004: The Polar Express
First CGI movie that used motion capture for all actors.
First full-length movie made using performance capture to create photorealistic 3D characters, and the first to feature a fully CG 3D photorealistic world.
>> http://en.wikipedia.org/wiki/Timeline_of_CGI_in_film_and_television
WETA DIGITAL: THE WORLD LEADER AMONG CG STUDIOS
Weta Digital is a world-leading visual effects company located in Wellington, New Zealand. It provides a full suite of digital production services for feature films and high-end commercials, from concept design to cutting-edge 3D animation.
Weta was formed in 1993 by a group of young New Zealand filmmakers including Peter Jackson, Richard Taylor, and Jamie Selkirk. It later split into two distinct halves - Weta Digital (digital effects) and Weta Workshop (physical effects).
One of Weta's first projects was to provide visual effects for Peter Jackson's film Heavenly Creatures. They went on to work digital magic on Peter's blockbuster movies The Lord of the Rings trilogy and King Kong. They also work with other Hollywood directors, providing digital effects for box office hits like I, Robot, X-Men: The Last Stand, Eragon, Bridge to Terabithia, Fantastic Four: Rise of the Silver Surfer, The Water Horse, Jumper, The Day the Earth Stood Still, District 9, and The Lovely Bones. Moreover, their teams of digital artists are world leaders in all areas of visual effects development, including animation, motion capture (MoCap), crowd generation, modeling, compositing, and film scanning and tracking.
In August 2006, Weta signed on to help James Cameron, a world-famous director, produce Avatar. Production design for the film took many years. The film had two different production designers and two separate art departments, one of which focused on the nature of Pandora, while another created human machines and human factors.
In September 2006, Weta was developing and combining its technologies with Cameron's systems to reach a new level of creative and technical excellence, presenting the film in 3D. This combination gave both Weta and Cameron great CGI(5) power throughout the process of making an epic movie like Avatar.
Most recently, James Cameron's Avatar proved that Weta was the CGI professional, leading Weta to win an Academy Award for Best Visual Effects. Their work on the film involved using a new camera system and shooting on a virtual stage. Finally, Weta's reputation has spread across the world for its power of imagination and delivery, which keeps the studio in high demand with some of the world's leading film studios; up until now there are very few visual effects companies that can match Weta's CGI capabilities.
>> http://www.wetafx.co.nz/about/
>> http://en.wikipedia.org/wiki/Avatar_(2009_film)
MOCAP & AVATAR
Each of Cameron's films introduces new technologies that change the way people make movies. Avatar is the outcome of the successful combination of a greatly proficient director and a world-class visual effects creator like Weta. Avatar's production pushed the limits of recent filmmaking, supported by the innovation of two new types of cameras, the 3D Fusion camera and the Simul-Cam virtual camera, both of which are presented as part of the MoCap pipeline.
3D Fusion camera: This camera can be used for match-move, where you have motion-captured CG characters and have to match their moves to composite them into an established shot. There are micro-motors on the Fusion rig to adjust the separation and convergence angle of the stereo Sony F950 video cameras. The typical three lens functions of zoom, focus, and iris, plus interocular distance(6) and convergence, are all under software servo control. Beam splitters provide the capability of having an interocular of half an inch even though the cameras are four inches wide. The cameras are mounted in the Fusion rig at a 90-degree angle.
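The relationship between interocular distance and convergence angle can be sketched with simple symmetric toe-in geometry. This is illustrative stereography math under an assumed symmetric rig, not the Fusion system's actual control software.

```python
import math

def convergence_angle_deg(interocular_m, subject_distance_m):
    """Total convergence angle for two cameras toed in so their
    optical axes meet at the subject: each camera turns inward by
    atan((interocular / 2) / distance)."""
    half = math.atan((interocular_m / 2.0) / subject_distance_m)
    return math.degrees(2.0 * half)

# Half-inch interocular (~0.0127 m), subject 3 m away: a fraction of a degree.
print(convergence_angle_deg(0.0127, 3.0))
```

The tiny angle involved shows why the rig needs micro-motors rather than manual adjustment.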
Simul-Cam virtual camera: A virtual camera with a series of super-high-intensity LED lights that fire in sync with the motion capture cameras working at 20-microsecond exposures, using active LED markers on the Pace rigs. Therefore, this virtual camera sees the markers but does not see any of the bright live-action stage lighting, or even the sun, meaning the Simul-Cam virtual camera can eliminate a great weakness of a regular motion capture camera.
>> http://hdusermagazine.com/wordpress/?p=12
The great benefit that can be clearly seen from using MoCap in Avatar is its capacity to capture realistic movement of various types of subjects, such as humans, animals, etc. Because all of the creatures and some humanoid characters like the Na'vi tribe in Avatar come from James Cameron's imagination, capturing those creatures by filming their real-life counterparts, as many other films do, is impossible, and it would be nearly impossible for animators to craft every movement of the Na'vi characters and creatures in this movie using only their high-speed computers and a mouse. In addition, the realistic movement recorded by MoCap can persuade viewers to believe that if such creatures existed in real life, they would move just as they do in the film. Besides, there are many flexible ways of using MoCap technology to match a variety of goals, which is why MoCap was the best answer for making Avatar.
WHERE IS MOCAP DEVELOPMENT HEADED?
The uses of each type of MoCap usually depend on the motion data needed in each industry, so the development of each kind of MoCap also differs. For instance, in the film industry a lot of research goes into optical MoCap because of its flexibility and acceptable data quality, while other industries invest in the development of other MoCap types that seem to be a better choice because of their high-quality data, despite other limitations. However, all types of MoCap share some similar future developments, as follows:
- Everyone in each area using MoCap expects the technology to provide results with great accuracy (or quality), including improved physical fidelity, so that characters can touch each other and feet meet solidly on the floor. This expectation directly affects the development of MoCap and all related technologies.
- When groups of performers are captured simultaneously, the amount of detail available to be digitized for each performer is lowered, so image quality is also reduced. Many MoCap manufacturers try to solve this issue and extend MoCap's ability to fully capture data from multiple characters.
- When capturing finer detail, MoCap's preview speed tends to drop quickly. Improving the speed of MoCap technology would provide a better experience for consumers.
- The capture space, roughly called the volume of MoCap, is too narrow for a large project such as recording a big group of performers, so increasing that volume will increase the value of MoCap technology.
- MoCap processing cost is still too high, which makes MoCap's selling price very expensive. If manufacturers can lower the price, consumers and independent artists will be able to access, experiment with, and even improve the technology much faster.
>> http://web.mit.edu/comm-forum/papers/furniss.html
(1) Inverse kinematics: the process of determining the parameters of a jointed flexible object, such as the bones of creature models (including humanoids), in order to achieve a desired pose. Inverse kinematics is a kind of motion planning. It is also relevant to game programming and 3D computer animation, where a common use is ensuring that 3D characters connect physically to the world, such as feet landing firmly on terrain.
>> http://en.wikipedia.org/wiki/Inverse_kinematics
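For a concrete sense of what an inverse kinematics solver does, here is a minimal analytic solver for a planar two-link limb (the link lengths and target point are invented for illustration; production IK handles full 3D skeletons with many joints and constraints):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link limb:
    return (shoulder, elbow) angles placing the end effector at
    (x, y), via the law of cosines. Raises ValueError if the
    target is out of reach."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)               # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

s, e = two_link_ik(1.2, 0.5, 1.0, 1.0)
print(forward(s, e, 1.0, 1.0))   # recovers approximately (1.2, 0.5)
```

This is the kind of computation that, applied joint by joint, turns captured marker positions into a posed skeleton.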
(2) Frame buffer: a video output device that drives a video display from a memory buffer containing a full frame of data.
>> http://en.wikipedia.org/wiki/Framebuffer
(3) The Gypsy: the world's first Inertial Gyroscopic Motion Capture System (Gypsy). The Gypsy captures movement using inertial sensors, or gyros, mounted on a lycra suit, recording motion and performance simultaneously. It uses 19 custom-made InertiaCube(TM) gyroscopic sensors for detecting nuances of movement and optimizing data output, while a global translation system improves the precision of actor placement and can be enhanced by the addition of optional ultrasonic tracking technology. The system also allows actors to touch or hug without occlusion.
>> http://news.thomasnet.com/fullstory/528380
(4) Bandwidth: often used as a synonym for data transfer rate - the amount of data that can be carried from one point to another in a given time frame (usually a second).
>> http://searchenterprisewan.techtarget.com/sDefinition/0,,sid200_gci211634,00.html
(5) CGI: short for Computer-Generated Imagery, the application of the field of computer graphics, or more specifically 3D computer graphics, to special effects in films, television programs, commercials, simulators and simulation generally, and printed media. Video games usually use real-time computer graphics (rarely referred to as CGI), but may also include pre-rendered "cut scenes" and intro movies (or full-motion videos) that would be typical CGI applications.
>> http://en.wikipedia.org/wiki/Computer-generated_imagery
(6) Interocular distance: the distance between the centers of rotation of the eyeballs of an individual, or between the oculars of optical devices.
>> http://www.thefreedictionary.com/interocular+distance