Before you read this, you may want to review the articles: Digital VCR formats compared and HD (high definition) VCR formats, for details on the recorder end of the equation.
The fact is, the differences between 4:2:2 and 4:1:1 DV formats do affect image quality in the NLE. As users, then, we need to understand what happens to graphics composition, layering, and special effects when working with different formats. Does the editor subject the data to successive compressions and decompressions, resulting in concatenating artifacts? Are some NLEs better than others?
Just to get up to speed, here's a recap of some of the main points covered in my previous articles about digital VCR formats:
1. Wider, faster moving tape provides the environment for a more robust signal. Data is more reliable and less error correction is necessary.
2. The more data you record, the better the picture and sound.
3. The more component samples you keep, the better. DV's 4:1:1 sampling discards six color samples out of every 12 picture samples received from the camera. Professional formats such as DVCPRO50, Digital-S (now known as D9), and Digital Betacam, with their 4:2:2 sampling ratios, discard only four color samples per 12 picture samples. The fewer color samples you discard, the sharper your colors will be and the better your transitions, titles, graphics, and chromakeys will look.
4. The less you compress the data to make it fit on the tape, the better your picture and sound. No compression, as you would find in D1, D2, D3, and D5, yields superior results, but may be too expensive. Mild compression, such as JVC's D9 at 3.3:1 or Ampex's DCT and Sony's Digital Betacam at 2:1, is low enough to cause very minimal damage to the signal. The effect of the 5:1 compression found in consumer and industrial DV machines is small but detectable.
5. 4:1:1 sampled signals do not gracefully convert to the 4:2:0 sampling found in DTV, DVD, digital satellite broadcast, and devices outputting to Y/C. Signals sampled at 4:2:2, however, convert nicely.
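To make the arithmetic in points 3 and 5 concrete, here is a minimal sketch (in Python; the `samples_kept` helper is hypothetical, used only for illustration) of how many samples each ratio keeps per four-pixel group:

```python
# For every 4 pixels, the camera produces 12 samples: 4 Y, 4 Cb, 4 Cr.
# A 4:a:a sampling ratio keeps 'a' Cb and 'a' Cr samples per group.
def samples_kept(a):
    luma = 4
    chroma = 2 * a           # a Cb + a Cr samples kept
    discarded = 8 - chroma   # out of the 8 chroma samples produced
    return luma, chroma, discarded

for name, a in [("4:2:2", 2), ("4:1:1", 1)]:
    y, c, d = samples_kept(a)
    print(f"{name}: keep {y} Y + {c} chroma samples, discard {d} of 12")
```

Running this reproduces the counts above: 4:2:2 discards four of every 12 samples, 4:1:1 discards six.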
How much of this applies to nonlinear editors?
Almost all of the above applies to NLEs, but in its own way. The NLE itself is not so much the big question here; it's the codec that converts the video signal into the data that represents the scenes to be edited. More on that shortly.
Back in the camcorder, the analog video signal was converted into data. Some of the data was thrown away (as in 4:2:2 vs. 4:1:1 sampling), and the resulting data was compressed (as in 5:1 for the lower DV formats, and 2:1 or 3.3:1 for the higher formats). The sampling rate and compression were set by the camcorder's format (a few of them are switchable).
The same thing is true for the digitizing part in your nonlinear editor. This device, like the camcorder, can take in analog video (composite, Y/C, or component) and digitize it. It throws away some of the samples (yielding 4:2:2 or 4:1:1) and compresses the resulting data. Some cards (and associated software) record the signal with no compression, other cards compress the data mightily, others are variable, and others compress a certain amount to work primarily in a certain realm (i.e., a DV codec would use 4:1:1 with 5:1 compression to match the data from DV VCRs). As you might guess, the digitizing card that does the least compression and data tossing is the one that gives the best imagery to the editor to edit.
The digitizing card and its associated circuitry, called the codec (coder/decoder), must work hand in hand with the rest of the computer. If you are working with a fast, dual-processor workstation with Ultra Wide SCSI drives or RAIDs that can sustain 270 Mbps, the digitized signal doesn't need to be compressed. If you are working with more common computers or slower EIDE drives, the data has to be compressed in order to fit through the narrow pipelines in the computer's bus structure and fit on the slower drives without hiccuping.
Editing software often allows you to select various video capture rates, but these selections must be made within the limitations of the codec, computer, and drives. In short, the process is a symphony where all parts have to work together. The system will only be as strong as its weakest link and, thus, the compression (and ultimately your sound and picture quality) will depend on how much data your symphony will handle per second.
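The data-rate side of this symphony reduces to simple arithmetic. This sketch (Python; the `video_mbps` helper is hypothetical, and the 720 x 480 at 30 frames/sec figures are the standard NTSC component raster) reproduces the familiar 25 and 50 Mbps figures from the formats discussed earlier:

```python
def video_mbps(width, height, fps, bits_per_pixel, compression):
    """Active-video data rate in Mbps after compression."""
    return width * height * fps * bits_per_pixel / compression / 1e6

# 4:1:1, 8-bit: 8 bits of Y plus 4 bits of averaged chroma per pixel
dv = video_mbps(720, 480, 30, 12, 5)      # DV's 5:1 compression
# 4:2:2, 8-bit: 8 bits of Y plus 8 bits of averaged chroma per pixel
pro = video_mbps(720, 480, 30, 16, 3.3)   # D9/DVCPRO50's 3.3:1

print(round(dv), round(pro))   # roughly the familiar 25 and 50 Mbps
```

If the result exceeds what your drives and bus can sustain, the codec has to compress harder, and quality suffers accordingly.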
There is not much point to recording analog video on a digital VCR only to play back an analog signal to be redigitized by the nonlinear editor. Exceptions: If your NLE works in 4:1:1 and your DV deck has 4:2:2 and your editor has no built-in transcoder to change one to the other, you will need to fall back to analog and let the editor redigitize the signal in a format IT wants. The same may be true if your NLE works in 4:2:2 and you're feeding it 4:1:1. Then the NLE may have to fall back to analog and then convert the source material into its own format. In general, however, if the source material is already digitized, you might as well keep it as digits without any conversion losses; that's the beauty of digital video. So if your digital VCR has a digital output, your editor needs to have a matching digital input. SDI, IEEE-1394 (FireWire), ATM, DS3, Fibre Channel, USB 2.0, and SDTI are just a few of the ways this data can be transferred.
FireWire is typically output from a prosumer DV camcorder (4:1:1 at 25 Mbps, compressed 5:1). When you get to the higher formats, the pros don't use their camcorders as players, so here you find digital outputs on standalone decks, not the camcorders. Also at the higher end (4:2:2, 50 Mbps, 3.3:1 or 2:1 compression) the output is likely to be SDI (non-compressed). You'll find SDI in/outputs on D1, D5, Digital Betacam, D9, and DVCPRO50 decks, and SDI outputs on Betacam SX and Digital Betacam camcorders. D9 and DVCPRO50 camcorders have no digital outputs.
You may think, isn't it a shame for the pro machines to compress data only to decompress it for the editor's SDI input? Actually, the damage at this data rate is inconsequential. Tests by the SMPTE/EBU Task Force have shown that, with mild compression, you could easily go seven rounds of decompression/recompression without visible artifacts creeping into your pictures. Also, SDI is the favored way to feed digital effects boxes, character generators, and routers.
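The generation-loss claim can be illustrated with a toy model. To be clear, this is not how a real DCT codec behaves; it is only a sketch in which each cycle applies a small "effect" (a gain change) and then requantizes, with a fine quantizer standing in for mild compression and a coarse one for heavy compression:

```python
import random

def roundtrip(samples, step):
    # one decompress / process / recompress cycle: a tiny gain change
    # (the "effect") followed by quantization (the "compression")
    return [round(s * 1.003 / step) * step for s in samples]

random.seed(1)
src = [random.uniform(16, 235) for _ in range(5000)]

errors = {}
for label, step in [("mild", 2), ("heavy", 16)]:
    frame = src
    for _ in range(7):                     # seven generations
        frame = roundtrip(frame, step)
    drift = 1.003 ** 7                     # factor out the deliberate gain
    errors[label] = sum(abs(a / drift - b)
                        for a, b in zip(frame, src)) / len(src)

print(errors)   # the coarse quantizer accumulates far more error
```

The mild quantizer stays close to the source after seven rounds; the coarse one drifts badly, which is the intuition behind the Task Force's finding.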
Assuming that the computer and drives can keep up with the incoming data, nothing is harmed in the process of taking in the data. A nondamaging change may take place with the data as the computer software formats it into something it can handle. DV, for instance, is not directly editable. The computer must "wrap" the DV data in its own packaging, creating AVI, Quicktime, or some other files. This doesn't damage the image; it only places invisible headers and other invisible data around the files so that the computer can find and manipulate them.
The same is true in reverse. When you select "print to tape" after editing your masterpiece, the AVI or Quicktime files are converted back to a DV datastream and output in a format that your recorder can use. Depending on your output selection, a codec may even be involved so that analog video (composite, Y/C, or component) is output to an analog VCR.
If all you are doing is trimming or selecting snippets of your digitized footage and arranging these pieces in a desired order using only cuts, nothing really happens to your data. A "play list" simply selects which data on your drive gets played and in what order. Whatever quality your data had when it came into your computer will be maintained throughout the process of cuts-only editing. When it is played back, the data is unchanged, just rearranged.
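The "play list" idea can be sketched in a few lines (Python; the clip names and frame numbers are hypothetical): the edit is just a list of in/out points over source data that never changes.

```python
# Two source clips as stored on the drive (frame numbers stand in for data).
source = {"clipA": list(range(100)), "clipB": list(range(200, 300))}

# A cuts-only edit: (clip, in-point, out-point) entries played in order.
playlist = [("clipB", 10, 20), ("clipA", 0, 5), ("clipB", 50, 55)]

def play(source, playlist):
    # yields frames in playlist order; the stored data is never modified
    for clip, start, end in playlist:
        yield from source[clip][start:end]

program = list(play(source, playlist))
# the output frames are the originals, untouched and merely reordered
```

Re-editing just rewrites the playlist; the data on the drive stays bit-for-bit as captured.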
If you are using effects, however, the story's a bit different. When a nonlinear editor needs to perform graphic effects, transitions, and titles, something has to happen to the data. It needs to pass through the codec once to convert the compressed data to something the computer can manipulate. Once the manipulation is over, the codec reconverts the data into standard files and stores it. Depending upon the speed of the computer, this rendering of a transition or title may take several minutes, several seconds, or may be performed in real time, but in any case, some manipulation of data has to occur. Depending on the hardware and software, this manipulation may add some artifacts to the image.
When several manipulations occur (e.g., a dissolve with an added title, or moving faces on a spinning cube), they are often performed in layers, but the data undergoes just one decompression/compression cycle. Some systems permit fewer layers to be handled at a time than others, making it necessary to execute a transition and render it, then go back and add something else to it. In this case, the image goes through two rounds of decompression/compression and gets two generations of degradation.
This same situation occurs when the person doing the editing is using old analog editing techniques, where transitions and titles are layered one at a time. It also occurs if, after the tape is finished, someone decides to add to a transition that's rendered. In the latter case, it behooves one to redo the transition from scratch inside the computer, keeping the decompression/recompression cycles down to one. Note, however, that only the transitions (not the cuts) suffer any damage from these compression/decompression processes.
And before I get skewered by the cognoscenti, there are exceptions to that rule too. According to Fast, the 601, depending on the chipset, is able to handle a few simple transitions in compressed mode. Similarly, the Media 100 uses a shortcut that bypasses recompression when executing some simple effects. For instance, when a dissolve, wipe, title, or brightness/contrast/color adjustment is called for, the Media 100 system doesn't decompress-render-recompress and then store the transition to disk as it does for more complex effects. Instead, it waits until you're ready to "print to tape", and when the simple effect arrives, the machine decompresses the original compressed data, performs a real-time render, and spits it out. The simple transition is never recompressed nor stored on the drive.
As you might expect, a 4:2:2 signal with very little compression will hold up better to transitions, graphics, chromakeying, and other manipulations than will highly compressed 4:1:1 video data. When there is more picture to work with, the job can be done more elegantly, assuming the computer and disk drive can handle the higher data rate.
Another factor that makes a difference in quality is the rendering engine in the software. Some packages are programmed to more accurately perform their miracles than others.
Since the codec plays such an important role in determining image quality, let's give it some more attention.
There was a day when all source video was analog. Then the NLE codec's job was to digitize, sample, and compress the signal to fit onto the (then slower) computer hard drives. Motion JPEG (MJPEG) was the compression method, which was variable from 2:1 to 100:1. The Avids, Media 100s, Pinnacle Real Time, Matrox DigiSuite, and Fast Video Machine still use this method. Inside the NLE, cuts and trims simply rearrange the playback list, while transitions and effects require decompression, rendering, and recompression of the data. When output, the MJPEG was converted by the codec back to analog. Today, the above NLEs have add-on codecs that output to DV, SDI and other formats.
Then came DV and FireWire (what Sony calls i.LINK). If the NLE had no DV codec, the FireWire link with its already compressed digital data could not be used. The analog method mentioned above was applied, along with one round of digitizing and conversion. Some early NLEs had FireWire inputs but lacked DV codecs, so they still had to convert the DV datastream to MJPEG, an unfortunate decompression/recompression exercise.
Then came NLEs with true DV codecs. They took in FireWire data, wrapped headers on the data and, without loss, converted it to AVI or Quicktime files for storage. The only damage that occurred to the audio or video was when it changed (i.e., transitions, graphics, titles, change in audio level, sound mixing, etc.). The DV codec did its work efficiently in reverse, changing the computer files back into a DV datastream to be recorded on DV tape, again without loss. The signal could also be converted to analog audio and video with the expected losses for that medium.
Then came SDI (Serial Digital Interface, also called SMPTE 259). Here, compressed digital data, be it DV at 4:1:1 (25 Mbps, 5:1 compression) or pro DV at 4:2:2 (50 Mbps, 2:1 compression), is decompressed, transmitted at 270 Mbps, and, using whatever codec is in the NLE, may be recompressed. The joy of SDI is that it is standardized, popular, and a lowest common denominator: you can go anywhere from there.
Take, for instance, the Fast 601 (six-o-one) NLE, which costs 13 to 16 kilobucks including the PC and has an MPEG-2 compression chipset in its codec. It converts the SDI stream to editable MPEG-2 in the 4:2:2 realm. A DV source would have to be converted to analog (Y/C or component) and then redigitized by the 601. DV's 4:1:1 image would have "fake" data added to make 4:2:2, the color space used by the 601's chipset. Because of efficiencies in the encoding algorithm, MPEG-2 provides about 40% more compression than MJPEG for the same quality, thus permitting more source material on a disk drive. The compression is scalable at 5, 15, 25, 33, and 50 Mbps, where 5 is offline quality and 50 is broadcast quality. As before, cuts are just reordered data; only the transitions are decompressed for rendering, and depending on the chipset, even some of the transitions can occur without decompression.
The output of the 601 could be SDI or DVD compliant. The SDI stream might go to a Digital Betacam, which applies its own gentle compression, and the results are what most would consider broadcast quality (able to go seven generations of such processing before artifacts become visible). By selecting "Print DVD", the MPEG-2 data, which was all I-frames ("real", editable frames), is converted to "distribution MPEG-2" with I, P, and B frames (P and B being predictive frames, allowing further compression). This data rate may average 4-6 Mbps and go as high as 9.8 Mbps, as called for by the DVD format. Thus, the format used for editing the program is largely maintained when converted to DVD, a savings in decompression/re-encoding. The above MPEG-2 stream is also useful for commercial insertion by TV stations.
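The savings from P and B frames is easy to see with toy numbers. In this sketch the relative frame sizes (100, 40, 20) are illustrative, not measured values, and the GOP pattern is a typical DVD-style one:

```python
I, P, B = 100, 40, 20            # relative frame sizes (illustrative)

def avg_size(gop):
    """Average frame size for a GOP pattern such as 'IBBPBBPBBPBB'."""
    return sum({"I": I, "P": P, "B": B}[f] for f in gop) / len(gop)

editing = avg_size("I" * 12)             # all-I, frame-accurate editing
distribution = avg_size("IBBPBBPBBPBB")  # typical distribution GOP
print(editing, distribution)             # the I/P/B stream averages ~1/3
```

This is why an all-I-frame stream is editable at any frame, while the same material converted to an I/P/B stream fits comfortably in DVD's 4-6 Mbps average.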
The SDTI (Serial Digital Transport Interface) standards will do for MPEG-2 and other compression schemes what FireWire did for DV. Unlike SDI, which transports noncompressed data at 270 Mbps from machine to machine, and FireWire, which passes DV-compressed data (4:1:1 at 25 Mbps, 5:1 compressed), SDTI transports compressed data of other varieties. Using the same sockets and cables as SDI, NLEs with SDTI will have one more method to accept or send data without decompressing it midstream. And SDTI offers a natural step up to the data-hungry HD formats that absolutely require compression.
The consumer editing boards costing under $350 generally capture in low resolution and/or may use MPEG-1 compression. Low data rate is paramount and artifacts show up quickly.
In the $500-$700 range come the Radius Moto DV Studio 2.0, MiroVideo DC30 and 50 Pro, Canopus Raptor, Truevision Bravado DV 2000, and Pinnacle DV 300, working in the MJPEG realm. About $1000 brings FireWire into the picture. Because DV's 4:1:1 data is a bit dicey with keys, titles, and transitions, many NLEs, like the Media 100 Finish system, translate everything to MJPEG at 4:2:2 ("faking" the missing data). MJPEG's adaptive compression technique enables the rendering engine to work one pixel at a time for more accuracy than if the render worked on DV's native 8 x 8 pixel blocks. Put in English, the MJPEG 4:2:2 realm does neater transitions than the DV 4:1:1 realm.
Spending about $3000 for a higher end card like Fast's DV Master Pro with Speed Razor 4.7 bundled software gets you a FireWire input plus a codec that constantly outputs analog video to a TV monitor for monitoring. Multilayer transitions can be rendered in one swoop, forcing only one decompression/compression round for that part of the program. The FireWire input keeps the signal in the DV realm (just adding file headers, etc.) when laying it to disk.
Looking ahead, we can expect MJPEG to eventually disappear because of its compression inefficiencies, replaced by DV (which gets converted to AVI or Quicktime, etc.) and MPEG-2. SDTI and MPEG-2 will usher in the HD era.
In conclusion, it's fair to say that your resulting edit will look as bad as your weakest link. If you started with DV at 4:1:1, and your computer and drives can handle the 25 Mbps data stream (plus a little overhead), the image will not be degraded as it is taken into the machine digitally, nor will it be damaged by cuts-only edits. Digitally output back to the DV recorder, it will maintain its original quality (i.e., 4:1:1 sampling with 5:1 compression), but of course will not look any better than 4:1:1, etc. Add titles and transitions and you will get some artifacting at those points. Using a better nonlinear editor won't make the cuts-only pictures better, but it may make the transitions look better, and at least will make things happen faster.
If you start with 4:2:2 with mild compression, transfer via SDI, and the NLE is up to it, then a benign decompression/compression will occur. Cuts won't hurt anything, as before, but transitions will add artifacts, albeit small ones. If the NLE cannot handle the data rate, then the data must be compressed more than mildly, damaging it. Thus the nonlinear editors cannot make silk purses out of sow's ears, but if they are matched to the digital VCR formats, they won't make sow's ears out of your silk purses either.
AVI, Quicktime, and the Computer
Video and audio data enter the computer as a stream of ones and zeroes. Like human communication, these streams have different "languages," and within these languages there are various "dialects." Video data may come at various resolutions, with different amounts of color information (i.e., 4:1:1 or 4:2:2), and be squeezed with various amounts and types of compression. This stream of data is usable only if the computer knows how to interpret the resolutions, color information, and compression algorithms. The computer must also be told whether the data represents audio alone, whether the audio is mono or stereo, or whether the audio is combined with video. The computer must be told how the audio and video have been interleaved, that is, how the audio has been sandwiched between packages of video data.
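Interleaving can be pictured as alternating tagged chunks in a single linear stream. This sketch (Python; the tags and chunk scheme are hypothetical and far simpler than real AVI or QuickTime structures) shows the idea:

```python
video = [f"V{i}" for i in range(4)]   # video frames
audio = [f"A{i}" for i in range(4)]   # matching audio chunks

# Multiplex: one audio chunk sandwiched after each video frame.
stream = []
for v, a in zip(video, audio):
    stream += [("vid", v), ("aud", a)]

# Demultiplex: the player splits chunks back out by tag while
# reading the stream strictly in order.
vids = [chunk for tag, chunk in stream if tag == "vid"]
auds = [chunk for tag, chunk in stream if tag == "aud"]
```

Because audio and video arrive side by side, the player can keep both buffers fed from a single sequential read of the file.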
Once the computer has this setup information, it can go about descrambling the stream of digits and reconstructing the picture and sound. Note, however, that just because your computer has been told how something was made doesn't guarantee your computer can decipher it. If the data, for instance, was compressed a certain way and your computer has the matching decompressor algorithm, you are in business. If the data stream was compressed in an unusual way, it will remain incomprehensible to your computer.
AVI and Quicktime represent a collection of file, headers, and control information that are used to package audio and video data to define the contents of the files and provide the aforementioned setup information.
QuickTime: The QuickTime movie (.MOV) file format was originally developed by Apple for the Macintosh but was later extended to the PC, so it works on both. You can post a QuickTime movie on the Web or ship it on a CD-ROM knowing that both Macintoshes and PCs will be able to read the file. QuickTime 3 can read over 30 audio and video file formats, including 3D animation and virtual reality. The newest version, QuickTime 4, supports the above plus MP3-compressed audio and live streaming video over the Web. The versions that read files are free, while the QuickTime Pro version that enables multimedia authoring costs about $30. QuickTime 3 and 4 are able to play Windows AVI files.
AVI: Audio Video Interleave (.AVI) is Microsoft's answer to QuickTime. Microsoft's ActiveMovie and DirectShow and its older Video for Windows programs all use the AVI file format. Because this programming is built into Windows, all you need to do is double-click on an AVI file and it will play automatically. Unlike the Macintosh, which can play both QuickTime and AVI files, your PC can play only AVI files automatically. If you wish to play QuickTime files on your PC, you have to install the QuickTime program. This extra step only needs to be done once.
Note, however, that just because you can play a certain type of file doesn't mean your computer has the necessary codec to decipher all types of compression. If you create files using AVI's or QuickTime's built-in compression formats, you can be sure that other users will be able to view the file. If you create the files using newer, more efficient compression algorithms, the files will only play if the other users also have the corresponding codec on their machines to decipher your oddball compression scheme. Both AVI and QuickTime are equipped to decipher MPEG-1 (Moving Picture Experts Group) files and, depending on version, MPEG-2 and MPEG-4 files.
File Conversion: If you need to change an AVI file to a QuickTime file or vice versa, there are utilities to do it, such as SmartVid for Windows (free from Intel at http://support.intel.com/support/technologies/multimedia/indeo/smrtvid1.htm). SmartVid doesn't decompress or interpret the files; it simply repackages them with new headers and control information while maintaining the same compression. A similar program, TRMOOV, from the San Francisco Canyon Company, is also available for download from various websites.
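What a rewrapping utility like SmartVid does can be sketched as a header swap around an untouched payload. The dictionary "container" here is hypothetical, of course, not either real file format:

```python
def wrap(container, payload):
    # a container is just metadata (the header) around the raw payload
    return {"header": {"format": container, "length": len(payload)},
            "payload": payload}

mov = wrap("QuickTime", b"compressed-video-data")
# "conversion": build a new header, copy the payload byte for byte
avi = wrap("AVI", mov["payload"])
```

Because the compressed payload is copied verbatim, no codec runs and no generation loss occurs; only the packaging changes.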
Nonlinear Editors: Most nonlinear editing programs make all of this file format business transparent. They can read AVI, QuickTime, DV, MPEG, and MJPEG files as well as write them. Not all editing software does all things; the popular Adobe Premiere 5.0, for instance, in both the Windows and Macintosh versions, will read AVI and QuickTime files and write QuickTime on both platforms, but will only write AVI files on the Windows platform.
It's not hard to play the various file formats or change one to another, especially with the help of editing software. When writing one of these formats, however, you need to think about who will be using them. If you want your files to be readable on the greatest number of players, stick to the most commonly available compression schemes, such as those built into QuickTime, ActiveMovie, or DirectShow.