In the old days, things were easy; we connected video
outputs to video inputs and were done. Now we've got RF,
composite video, Y/C (or S), RGB, and for the wealthy, R-Y/B-Y/Y,
Y/I/Q, and digital (DV) 4:1:1, 4:2:2, and 4:4:4. What does all
this alphanumeric soup mean?

There are three answers: the simple answer, the
complicated answer, and the very complicated answer. Mercifully
I will start with the simple answer, and never get to the very
complicated one.

Y/C versus composite video -

Super VHS and HI8 VCRs manufacture two kinds of video,
composite and Y/C (sometimes called S for super). The Y stands
for luminance, the monochrome parts of your TV picture. The C
stands for the chrominance, the color parts of your picture. Put
them both together and you have a full color picture (actually a
black-and-white picture with color painted over the top of it).
In composite video, the color and luminance signals have
been combined into one signal traveling over one wire. With Y/C,
the color signals travel over a separate wire from the luminance
signals, using two wires instead of one.

Actually composite video travels over two wires; one is a
shield or a ground wire which nobody talks about. In the case of
Y/C, there are two ground wires that nobody talks about.
So what's the big advantage of Y/C over composite? There
is a basic law of electronics that says, "The less you mess with
a signal, the less you screw it up." When you combine the color
and luminance signals, you damage them a little. When you
separate them (as a TV and most video devices must do in order to
use the signals), you damage them even further. In fact, VCRs
costing under $2000 generally drub the detail out of the signal
when separating it. Industrial VCRs costing $6000 or so have
comb filters and Faroudja circuits that delicately separate the
color from the luminance with minimal injury to either. The
damage is seen as reduced picture sharpness (resolution) and
color artifacts (moire and color dot crawl).

In a nutshell, it is better to keep the color and luminance
signals separate as you go from camera, computer, character
generator, or other source, through your switchers, proc amps,
TBCs, and into your video editors or recorders. If possible,
keep the Y/C signal separate all the way to your TV monitors.
Before we get too excited about Y/C, we should confront the
sad truth that Charlie Couch Potato is unlikely to notice the
difference one way or the other. Those of us who take video
seriously will discern the color dot crawl, moire, and soft
picture, and will find it annoying. And those of us who edit our
videos will see the mayhem multiply before our eyes.

Don't believe the dealers -

Camcorder and TV salespeople in big electronic stores seem
to have their jive all jumbled. You may have already heard them
tell you that there is no point to having an SVHS or HI8
camcorder if you don't have a special TV set to go along with it.
That's bullcrackers with only a crumb of truth buried deep
inside. Here's what's really happening: Nearly all super VHS
and HI8 camcorders and VCRs can play a picture with 400 lines of
horizontal resolution and with almost no color artifacts when
their Y/C cables are used. If their composite cables are used,
their images are degraded slightly, but not much. A regular VHS
or 8mm VCR will reproduce 240 lines of resolution and minimal
color artifacts with those signals traveling down a single coax
cable.

If the signal travels to an older or a simple TV with only
an antenna input, the RF modulator (the little thing in the
camcorder or VCR that generates channel 3 or 4 out of the video
signal) tramples the signal pretty badly. The tuner in the TV
set stomps on it again, making a signal with smeary color and
barely 200 lines of resolution. In this case, the super ability
of the camcorder is 50% wasted (the number would be higher if it
were not for the fact that super camcorders have circuits in them
that improve the picture in other ways, making it look better
even on fuzzy old TVs).

If you have a modern TV with a composite video input, you
will see a difference between a regular VHS and super VHS (or
HI8) feed, even though Y/C wasn't used. The 400 line resolution
picture will be reduced to maybe 330 lines, but that's still
better than the 240 you got from your regular VHS VCR.
If you do have a TV with a Y/C input, then you get to enjoy
the full 400 lines of resolution without added color artifacts.
So, yes, it is better to team a modern Y/C-capable TV with your
super camcorder, but you will see an improvement even if you
don't have one.

Remember that Charlie Couchpotato won't notice the
difference whether he's watching RF, composite video, or S video;
the differences are too subtle. If I had to give them a number,
I would say that composite video looks 10% better than RF, and S
video looks 10% better than composite. I know that super's 400
lines of resolution is almost twice the regular 240 lines, but to
the eye, the TV picture may only look 10% better.
Remember, however, that when you edit video tape, you need all
the sharpness you can get, because you are losing some every step
of the way. If you duplicate the tapes you have edited, bringing
you down to the third generation, those percents add up, making
the difference between a smooshy picture and a crisp one.
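
Those compounding losses can be sketched in a few lines of
Python (the 85% retention-per-generation figure is an
illustrative assumption, not a measured spec):

```python
# Rough model of generation loss: each tape generation keeps only
# a fraction of the previous generation's resolution.  The 0.85
# retention figure is an assumption for illustration.
def resolution_after(generations, start_lines, retention=0.85):
    lines = start_lines
    for _ in range(generations):
        lines *= retention
    return round(lines)

# Third generation = original -> edit master -> duplicate:
svhs_lines = resolution_after(2, 400)  # from an S-VHS/Hi8 original
vhs_lines = resolution_after(2, 240)   # from a regular VHS original
```

Even with these made-up numbers, the gap between a sharp source
and a fuzzy one widens with every copy.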

A more complicated explanation -

What is Y/C, RGB, Y/R-Y/B-Y, 4:2:2, 4:4:4, and all this
other alphababble? It all has to do with collecting color
pictures, transporting them efficiently, and making them look
good to the eye. Cost and quality are always a balancing act.
High quality pictures cost a lot to reproduce, and low quality
pictures look terrible. It is often possible, however, to
maintain high quality at certain crucial stages of picture
gathering and editing, while allowing the quality to drop where
it wouldn't be noticed. This is the theory behind all those
alphanumeric formats.

Color pictures generally start out as RGB, three signals
representing the three primary colors: red, green, and blue.
Every color picture can be dissected into these three components,
and when the components are recombined in the proper fashion, you
recreate a color picture. Cameras, character generators,
computers, and other video sources nearly always start out with
these three components.

If the RGB (also called component video) signals were sent
from the camera to an RGB switcher to RGB processors and special
effects devices, and then to RGB video recorders, and then edited
with RGB equipment, and played out to RGB televisions (all of
these devices exist today), you would see a dazzling picture.
Most of these devices (except for camera and TV) cost as much as
a small house, however. Although it is important to retain
picture quality during the editing stages, the super sharp color
would be somewhat wasted on Victor Videographer and especially on
Charlie Couchpotato. Sharp colors are nice on computer screens
where you view the screen from two feet away, but are
indiscernible by the average television viewer, six to ten feet
from his TV screen. Our eyes can see black-and-white details in
a picture, but not much color detail. Taking this into
consideration, the TV engineers designed some cost saving
shortcuts.

The red, green, and blue parts of the picture don't have to
be sharp, but the sum total of all three which comprise the
black-and-white parts of the picture do have to be sharp. So the
engineers added R, G, and B together to make Y, a super sharp
black-and-white picture. But that made four wires, R/G/B/Y, a
truly expensive way to transport video signals. Since Y is just
R, G, and B added together (in fixed proportions), we can
algebraically convert the four signals back into three, removing
the redundancy. The signals are now called
R-Y/B-Y/Y which represent the red with the luminance subtracted,
blue with the luminance subtracted, and the luminance alone. It
is a fairly easy task to recombine these signals to make R, G,
and B again. Incidentally, there are systems that do similar
tasks and call their colors Y/I/Q. All are called component
video.
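
That algebra can be sketched in a few lines of Python. The
weights below are the standard NTSC luminance proportions;
adding R, G, and B in these fixed proportions yields Y, and the
conversion back is simple algebra:

```python
# Convert RGB to component (Y, R-Y, B-Y) and back.  The weights
# are the standard NTSC luminance proportions.
def rgb_to_component(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # sharp black-and-white sum
    return y, r - y, b - y                 # luminance + two color diffs

def component_to_rgb(y, r_y, b_y):
    r = y + r_y
    b = y + b_y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # recover G algebraically
    return r, g, b

# Round trip: converting to three signals loses nothing by itself.
y, r_y, b_y = rgb_to_component(0.5, 0.25, 0.75)
r, g, b = component_to_rgb(y, r_y, b_y)
```
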

Since colors aren't as important as luminance, we can
degrade their sharpness without making a visible difference.
Professional component VCRs like Betacam, DVCAM, DVCPRO, and
others record these three signals using a technique that
maintains all of Y's quality but only half of the quality of the
color difference signals, R-Y and B-Y. For instance, one swipe
across the tape is dedicated to a full quality Y signal. The
next swipe across the tape contains the two color difference
signals, squeezed both in quality and in space to fit onto one
swipe of the head. Betacam uses a system called compressed time
division multiplex which in English means they reduce the
frequency (sharpness) of the color signals to about half of what
they were so that the two could be put together to make a high
frequency again, just like the Y channel. Thus the recorder
generates luminance, color, luminance, color with each swipe of
its heads.

Upon playback, the component VCR plays back the luminance
from one swipe of the head and sends it out its Y output. With
the next swipe of the head, it collects the color signal,
separates it into two signals, and sends them out the R-Y and B-Y
outputs. Voila, sharp luminance, and lower cost, half-sharp
color.
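
As a rough sketch (in Python, with made-up sample lists; real
Betacam does this in analog circuitry), the multiplex and
demultiplex steps look like this:

```python
# Toy model of compressed time division multiplex: each color-
# difference line is time-compressed 2:1 (keep every other
# sample) so both fit into one head swipe; playback expands
# them back to full line length.
def multiplex(r_y_line, b_y_line):
    return r_y_line[::2] + b_y_line[::2]

def demultiplex(packed):
    half = len(packed) // 2
    expand = lambda xs: [v for v in xs for _ in (0, 1)]  # stretch 1:2
    return expand(packed[:half]), expand(packed[half:])

line = list(range(8))            # one line of color samples
packed = multiplex(line, line)   # both colors in one line's worth
r_y, b_y = demultiplex(packed)   # full length again, half the detail
```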

Digital video -

Analog devices are like your steering wheel while digital
devices are like your headlights. You can turn your steering
wheel a lot, a little, or whatever. Your headlights have no
halfway position, they are either on or off. The trouble with
analog systems is that they are prone to errors and noise and
drift; you can't always turn the steering wheel exactly the same
amount each time you pull into the garage. Headlights, on the
other hand, are simpler; there's less room for error. They are
on, they are off, and the process is 100% reproducible every time
you drive into the garage. That's one of the things that makes
digital preferable to analog.

Digital cameras, VCRs, and associated video equipment use a
similar scheme. A video signal is chopped into fine pieces, the
pieces are measured and turned into numbers. Usually each
vibration of the tiniest video wave is diced into four smaller
pieces, like sawing a smooth hill into four rectangular chunks of
rock. The digits can be reassembled to simulate the analog hill
again (albeit with steps). The tiniest wave represents the
highest frequency in the video signal and the dicing occurs at
four times the highest frequency or four times television's 3.58
MHz color subcarrier frequency. A lot of slicing for the video
Ginsu knife.
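
Here's a Python sketch of that dicing: one cycle of the 3.58
MHz wave, sampled four times and quantized (the 8-bit depth is
an assumption for illustration):

```python
import math

# One cycle of the highest-frequency wave, sampled four times
# and quantized to 8 bits -- the "four rectangular chunks of
# rock" sawed out of the smooth hill.
F_SC = 3.58e6        # NTSC color subcarrier, Hz
F_SAMPLE = 4 * F_SC  # four samples per subcarrier cycle, 14.3 MHz

def sample_one_cycle():
    samples = []
    for n in range(4):
        t = n / F_SAMPLE
        v = math.sin(2 * math.pi * F_SC * t)      # analog level, -1..1
        samples.append(round((v + 1) / 2 * 255))  # 8-bit number, 0..255
    return samples
```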

If four of these data samples represented the red signal,
and four represented the green signal, and four represented the
blue signal, we would call this 4:4:4. The 4:4:4 could as easily
represent Y/R-Y/B-Y signals. Remember how color sharpness wasn't
as important as luminance sharpness? One could save data,
bandwidth, and money by throwing away half the color data and
using samples that were 4:2:2.
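
The ratios can be sketched in Python: for every four luminance
samples, keep four, two, or one of each color-difference
sample (the lists are toy data, not real video):

```python
# For every 4 luminance samples, keep 4, 2, or 1 of each color-
# difference sample by taking every Nth chroma sample.
def subsample(chroma, keep_per_4):
    return chroma[::4 // keep_per_4]

y = list(range(8))       # luminance: always kept in full
c444 = subsample(y, 4)   # 4:4:4 -> all 8 chroma samples kept
c422 = subsample(y, 2)   # 4:2:2 -> 4 kept (half the color data)
c411 = subsample(y, 1)   # 4:1:1 -> 2 kept (a quarter of it)
```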

Expensive paint systems, digital disk recorders, effects
generators, and switchers work in the 4:4:4 domain while professional
component video recorders handle 4:2:2. Even graphics and
animation workstations process signals as 4:4:4 or 4:2:2 plus
various other reproduction recipes. Some switchers, digital
VCRs, and effects devices even sport 4:4:4:4 where the last digit
represents an alpha channel that "cuts out" a piece of one
picture and replaces it with a piece of another picture (called a
linear key) where that cutout could be totally opaque, almost
transparent, or some other shade in between. A 4:4:4:1 system
would process all the color components fully, with an opaque
cutout (no half-transparency allowed). The next time you read
the specifications of digital video processors and computers,
watch these numbers to see what kind of quality is passing
through the system. A 4:4:4:4 is top of the line, where lesser
numbers represent lower cost and lower quality.

In the prosumer domain are the DV (digital video) camcorders
costing one to three kilobucks each. They sport 4:1:1
digitization where all the luminance is kept (yielding 500 lines
of resolution), but 3 out of 4 color samples are thrown away to
reduce the flood of data to be recorded. The data is also
digitally compressed, throwing away some more data that our eyes
are unlikely to miss.

From component to Y/C -

Component Betacams and digital VCRs are pricey. Engineers
look for other ways to cut corners and reduce cost using Y/C.
Instead of creating a high quality Y, a medium quality R-Y,
and a medium quality B-Y signal, the industrial video
manufacturers lowered the standards a little. They created a
medium quality Y, and then combined the two color components
together into a single color signal. Professionals,
incidentally, don't call Y/C "component" video; this term is
reserved for true RGB and Y/R-Y/B-Y, and Y/I/Q signals. Still,
Y/C can loosely be called component because the color rides on a
separate wire from the luminance.

Industrial and consumer VCRs can't handle high frequencies
very well, so more shortcuts are taken in the recording process.
The medium quality Y signal is recorded without too much damage.
The C signal is reduced in frequency (a process called
heterodyning) from a moderately sharp 3.58 MHz down to a fuzzy
629 kHz (SVHS) or 748 kHz (HI8). Mushy as they are, the color
signals
still look pretty good to your eye. Their degradation becomes
most noticeable in multi-generational editing.

When the tape is played back, the low color frequencies are
heterodyned back up to high frequencies so the signals are
compatible with other Y/C gear. Boosting them up doesn't make
the colors sharper, it just makes the fuzzy signals
the right frequency for TVs to understand, and therefore compatible.
The sharpness they lost when heterodyned down is lost forever.

In summary, super VCRs maintain reasonable luminance
sharpness (which is visible), sacrifice color sharpness (which is
much less visible), and keep the two separate so they don't
contaminate each other.

From Y/C to composite -

If you think of luminance as a singer in one room and
chrominance as a flute in another room, you could easily choose
whether to hear the singer or the flute just by moving to the
right room. Super VCRs and other Y/C video equipment work
similarly: they can receive the signal they need from the correct
room, or from the correct wire. Composite video, on the other
hand, mixes the flute with the singer in the same room; you hear
both at the same time. You can try to listen to the singer, but
it takes concentration to block out the flute. Similarly,
electronic gear requires concentration to block out the color
signal when it wants to process only the luminance signal. When
luminance signals leak into the color circuits, gray herringbone
jackets and pinstripe shirts start to vibrate into colors and
rainbows. When color signals leak into the luminance circuits,
moire and color dots roll along the edges of brightly colored
objects (most noticeable in colored lettering and graphics).
Inexpensive VCRs and TVs do a poor job of separating color from
luminance, often throwing away the high luminance frequencies
(sharp parts of the picture) so they don't interfere with the
color frequencies. Expensive equipment with good comb filters
delicately separates the luminance and chrominance frequencies,
reducing these artifacts.

Composite signals may require only one wire, and may be a
cheap way to move video from one place to another, but you now
see the disadvantage of composite video. It takes a lot of work
to separate the singer from the flute and send the luminance
signals and chrominance signals to the right places in your VCR's
or TV's circuits. Y/C, on the other hand, never combines the two
and saves this delicate and expensive step, allowing simple
equipment to act more like its expensive brothers.

Now you C Y Y/C is better.
