By Dustin Sklavos
I’ve lamented the generally dire state of consumer-grade video editing software, and there’s a good reason why. In the process of trying to produce something more intuitive than pro-grade software, developers created something exponentially less so. Worse still, none of these consumer-grade editors is a good stepping stone to professional-caliber software. What may be most damning is this: pro-grade suites like Premiere Pro and Final Cut Pro are frankly just plain easier to use.
Since software developers are so intent on solving a problem in the worst way imaginable — by completely abstracting away what has developed into an intuitive, easy-to-understand film editing workflow — I offer you another option: in this three-part series, I will give you the basic understanding you need to edit video. This first part is the introduction and foundation, the second will be about navigating the software proper, and the third will cover hardware.
WHEN VIDEO ISN’T JUST VIDEO
The first and most important thing you need to understand — and the one concept that consumer-level software consistently screws up — is this: video, as you know it, is actually composed of two completely different elements. We use the term “video” as a catch-all to describe the movies we watch and the files on our hard drives, but it’s actually describing two separate things.
Video proper is just the moving image. Motion picture film is honestly just a series of photographs run at high speed, with each individual photograph termed a frame; all digital video does is introduce levels of compression that can produce interdependency between frames. Video in any format, and moving pictures of any kind, are just a series of still images run fast enough to produce the illusion of motion. Even your video games are actually a series of stills being rendered out by your computer or console. All of these run at what’s called a framerate, measured in frames per second (fps). Standard-definition television and video run at a framerate of 29.97fps (basically just referred to as 30fps). Film runs at a slower 24fps, which produces a different feel. High-definition television tends to run at 30fps or 60fps.
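Framerate math is nothing more than multiplication: the number of frames in a piece of footage is its duration times its framerate. Here’s a minimal sketch in Python (the function name is mine, and NTSC’s 29.97fps is rounded to a nominal 30fps for simplicity):

```python
def frame_count(duration_seconds, fps):
    """Number of individual still frames in footage of the given length."""
    return duration_seconds * fps

# Ten seconds of film (24fps) versus standard-definition video (nominal 30fps):
print(frame_count(10, 24))  # 240 frames
print(frame_count(10, 30))  # 300 frames
```

So the same ten seconds of footage contains more (or fewer) individual stills depending on the format it was shot in — which is part of why film and video “feel” different.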
But the other key element, the part that the consumer-level video editors screw up so thoroughly, is audio. It is vital you understand that while video and audio are going to be synchronized on whatever you happen to record with, they are two utterly and completely separate tracks. I cannot stress this enough: just because these two are paired together in files or on DVDs does not mean they’re joined at the hip; they can be edited completely independently of one another, and can even be stored in entirely separate files.
This idea is counterintuitive because video and audio get conjoined under the catch-all “video,” but if you keep in mind that they can be de-synchronized and edited separately (or together if you prefer, for simplicity’s sake), you’ll instantly be head and shoulders above where the misguided software developers would’ve pegged you.
A LITTLE MORE DRY TERMINOLOGY
If this is unspeakably, painfully dry, I apologize, but there are a couple of basic terms that will keep you sane as you move into what seems to be incredibly complex work but is actually pretty simple stuff. Think of it this way: if someone just shoved a paintbrush, some pigment, and an easel in front of Average Dave, he might be confused. If someone showed him how to get started, the only limits to what he can produce are how much he’s willing to experiment, practice, and so on.
So, first things first: timecode. On a tape (assuming you’re one of those old bastards like me who still shoot on tape), and later in editing, timecode is a means of navigating audio/video. It’s presented in this form: hours:minutes:seconds;frames. Let’s say you have sixty minutes of footage, and you want to see the part where Uncle Bob got ticked off at Cousin Wilberforce and knocked his ever-loving block off. Well, according to the timecode, Bob’s fist makes first contact with Willy’s face at 00:23:12;15. That means that 23 minutes, 12 seconds, and 15 frames in (remember that video has a framerate, measured in frames per second), Willy’s jaw begins what is going to be a long and painful relocation. Now let’s say you want to isolate the period from the time Bob makes his impact to Willy’s drop to the dance hall floor. Willy hits the ground cold at 00:24:14;21. You’re now able to use timecode as a sort of address: at this juncture, life starts to become difficult for Willy. So, from 00:23:12;15 to 00:24:14;21, Cousin Wilberforce is in freefall. If you wanted to edit just this piece of video, it would be called a clip.
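Because timecode is just hours, minutes, seconds, and frames, it converts cleanly into an absolute frame count, which makes the “address” arithmetic above concrete. This is a sketch of my own, assuming a nominal 30fps and ignoring drop-frame compensation (real 29.97fps drop-frame timecode skips certain frame numbers, which complicates the math):

```python
FPS = 30  # nominal framerate; real NTSC drop-frame counting is more involved

def timecode_to_frames(tc, fps=FPS):
    """Convert an 'HH:MM:SS;FF' timecode into an absolute frame count."""
    hms, frames = tc.split(";")
    hours, minutes, seconds = (int(part) for part in hms.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + int(frames)

def frames_to_timecode(total, fps=FPS):
    """Convert an absolute frame count back into 'HH:MM:SS;FF' form."""
    seconds, frames = divmod(total, fps)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d};{frames:02d}"

# The clip where Uncle Bob flattens Cousin Wilberforce:
start = timecode_to_frames("00:23:12;15")
end = timecode_to_frames("00:24:14;21")
print(frames_to_timecode(end - start))  # the clip runs 00:01:02;06
```

Subtracting the in-point from the out-point gives you the clip’s length in frames — exactly what an editor does when it marks a clip between two timecodes.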
Clips are the essential foundation of editing because they’re what you sequence together to produce your final product.
Now, remember what I said about how audio could be de-synchronized from the video? That’s because it can. That means that if your clip has popping in the background, the audio can be edited separately to remove that popping. Or if you’re so inclined, you could theoretically record new audio and run it under the same video: this is called dubbing, and it ruins foreign films. You can also just use audio from another clip. And this is to say nothing of just running music under your project.
You can also just layer everything together, but we’ll talk about tracks in the next part.
I hope this was pretty simple to follow; it’s easy for me because I’ve been messing with this stuff for eight years now and have been able to make a dime working with it, but I can see how it might be confusing for the neophyte. Basically, here’s what you need to know:
- Audio and video are recorded together and synchronized, but they’re completely separate and can be edited independently of one another.
- Timecode is used to keep track of where you are in your video.
- Individual segments between two specific timecodes can be referred to as a clip.
- AUDIO AND VIDEO ARE COMPLETELY SEPARATE!
Can’t stress that last one enough. Next stop: navigating video editing software!