How It Works: Processors

by Dustin Sklavos

When you go to Best Buy, Costco, or even to your favorite builder’s website, you find yourself inundated with a mountain of options for the hardware in your notebook. These options may seem daunting at first, but by the time you’re done with this series you’ll realize just how limited they actually are. Manufacturers are really only differentiating themselves by build and design at this point, and by how much battery life they can successfully wring out of nearly identical hardware. Some manufacturers (HP) are more successful than others (ASUS).

The part of the notebook that consumes the most power is far and away the screen itself; advances in most of the other components have yielded minimal power savings at best. But one of the other chief consumers of power in a laptop is the processor.

The processor, as I mentioned in my introductory article, is basically the engine of your computer. When you’re buying a car, you’re often performing a balancing act between power and efficiency. If you buy a sports car, for example, you can generally expect to spend a lot of time at the pump, but the flipside is that if you really need that power, it’s available to you. The same is typically true of processors, or at least processors of a given family.

And as far as the vendors? Well, think of AMD processors as buying American, and Intel processors as buying foreign. The American car probably won’t be quite as efficient as its foreign counterpart, but it may also be less expensive. Likewise, the foreign car may be more efficient and better with mileage, but you’ll pay for the benefit. This, of course, is only true at the time of this writing; AMD and Intel have been playing tug-of-war for a long time and the pendulum is bound to swing in the opposite direction at some point.

So let’s get into it. I’m going to try to be largely agnostic about brands here, because the beauty of AMD and Intel processors is this: your computer doesn’t really care which brand you use. You’re not going to find yourself unable to run any important programs on account of your brand decision. It’s also important to add that at the time of this writing, Via is working to position itself as another viable vendor of x86 (more on that in a moment) processors. Again, your computer doesn’t care which brand of processor it’s running.

The processor, or CPU (Central Processing Unit), is a chip designed around what’s termed the x86 instruction set. Not all chips are created equal: graphics chips, for example, are designed around a completely different set of instructions. The processor in your mobile phone is designed around yet another. x86 processors are designed as a sort of jack-of-all-trades. The CPU is a generalized piece of hardware, not specialized toward any given task.

Let me explain: in theory, any type of processor can execute just about any type of code. Your CPU can execute the code necessary to produce the graphics of your favorite computer game. The problem? The CPU isn’t designed and optimized for that task, so while your Nvidia GeForce 8400M can make Unreal Tournament 3 run pretty smoothly and hit about thirty frames per second, your CPU will choke trying to hit even five frames per second, and it really doesn’t matter just how fast your CPU is (unless somehow you’ve violated the laws of physics and gotten it running at 30GHz instead of 2GHz).

Modern processors have several things in common: they generally have some number of cores, an on-die cache, and support for either 32-bit or 64-bit code. They require a chipset (remember the motherboard article?) to properly communicate with the rest of the system. And they’re one of the most power-hungry components of a laptop.

I’ll explain all of these things.

32-Bit and 64-Bit

Okay, so the last time you heard the terms 32-bit or 64-bit and had them mean anything was around the era of the original Sony PlayStation and Nintendo 64. I’m not going to get into the esoterica of exactly what these terms mean, but here’s the gist: almost everything in consumer-grade computing up to this point has been 32-bit. So establish in your head that 32-bit is a known quantity: if you were running Windows 98 or XP, you were running a 32-bit operating system.

A 32-bit operating system can only address 4GB of memory at most. This is reduced by the fact that every piece of hardware in your laptop requires what’s called an "address" – an address that would otherwise be occupied by memory. Your computer knows everything in it by its address, and everything shares the same pool of memory addresses. So if you only have 4GB worth of addresses, all the other parts are going to start eating into that, leaving part of the 4GB of physical memory you have inaccessible. I know it’s a little confusing, but it’s basically why, when you put 4GB of memory into a 32-bit Windows XP or Windows Vista machine, Windows doesn’t give you the full 4GB – it may give you as little as 2.5GB or as much as 3.5GB, depending on the hardware in your machine.
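To make that arithmetic concrete, here’s a quick sketch of where the 4GB goes. The reserved sizes below are invented but plausible for the era; the point is only that hardware address ranges come out of the same 4GB pool as your RAM:

```python
# The 32-bit address space is 2^32 bytes, i.e. 4 GiB.
total = 2**32
print(total)                       # 4294967296 bytes
print(total // 2**30)              # 4 (GiB)

# Hardware claims address ranges out of that same 4 GiB.
# These figures are illustrative, not measured from a real machine:
reserved = {
    "video card":        512 * 2**20,  # a 512MB graphics card maps its memory here
    "PCI devices / BIOS": 256 * 2**20,
}
usable = total - sum(reserved.values())
print(round(usable / 2**30, 2))    # 3.25 -- only ~3.25 GiB of your RAM is left addressable
```

The exact number varies with the hardware installed, which is why different machines report anywhere from 2.5GB to 3.5GB.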

What a 64-bit processor and 64-bit operating system do is dramatically raise the amount of memory your computer can address. By raising this limit, they allow the computer to see the full 4GB and still have room for everything else. They also allow the computer to run 64-bit code and use 64-bit operating systems (Windows Vista 64-bit has become pretty popular). 64-bit programs can potentially be faster than 32-bit ones, though many modern implementations have seen only minimal improvement. The flipside to running a 64-bit operating system is that memory addresses are now twice as long, which typically results in programs requiring somewhat more memory to run.

Still, the move to 64-bit – provided you’re running at least 4GB of memory – can generally be beneficial. My desktop, for example, has 8GB of memory since I do high-definition video editing on it. Having full access to that 8GB of memory and being able to hand that access over to Adobe’s software can substantially improve performance.

So what does this have to do with your processor? Simple, really: your processor either can or can’t run 64-bit code. The overwhelming majority of notebooks on the shelf today can: if your processor is an Intel Core 2 Duo or some flavor of AMD Turion, it is 64-bit capable and will run that software happily.
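If you’re curious what mode the machine in front of you is actually running in, Python’s standard library can tell you: the size of a pointer in the running interpreter gives it away. (Strictly speaking this checks the interpreter and operating system rather than the processor itself – a 64-bit CPU can happily run a 32-bit OS.)

```python
import platform
import struct

# Pointer size of the running interpreter: 4 bytes on 32-bit, 8 on 64-bit.
bits = struct.calcsize("P") * 8
print(f"This Python is {bits}-bit")

# The machine string, e.g. 'x86_64' or 'AMD64' on a 64-bit-capable CPU.
print(f"Machine reports: {platform.machine()}")
```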


Multi-Core

The new hotness over the past couple of years – after Intel and AMD both realized how hard it was to get a processor to run past 3GHz – has been going "multi-core." You’ve probably heard the terms single core, dual core, quad core, and the odd tri core; these last two are currently only available in desktops, although Intel has a notebook quad core on the way.

What the heck does this mean? Well, basically, it’s this: a single core processor is what we’ve been using up until dual cores came on the market. It’s basically just a single processor. A dual core is more or less two processors put together in one chip. You can probably guess what a tri core and quad core are.

Now, an important distinction: this does NOT mean that a 2GHz dual core is equal to a 4GHz single core in performance. All this means is that you have two 2GHz cores working for you instead of one. Why is this important?

Simple: if you have one cook in the kitchen, and he has to make both spaghetti and salad, and for some odd reason these two dishes take exactly the same amount of time to prepare, he’s going to have to do them one after the other. If you put a second cook in the kitchen, he can make the salad while the first one works on the spaghetti, and the work gets done in half the time. However, if you have both cooks in the kitchen and the only dish that needs to be prepared is the spaghetti, the second cook can’t really do anything, so his being there doesn’t reduce the overall time it takes to make the spaghetti.

So adding cores basically increases the number of cooks in the kitchen. However, here’s where it gets a little complicated: What if the recipe is written in such a way that it has instructions for more than one cook?

This is what’s called "multi-threading." A program that’s multi-threaded – written for more than one core – can take advantage of a multi-core processor. So essentially, your spaghetti recipe, instead of spelling things out one step at a time, now tells one cook to boil the noodles while the other one works on the sauce. One of these tasks is going to take more time than the other, sure, but overall, the spaghetti gets made in substantially less time. Not half the time, but less time.

This is pretty much exactly how multi-threaded programs work: they divvy up the work as best they can to send it through the processor. What’s important to note here, too, is that you don’t NEED a quad core processor to handle a program that runs in four threads: the cooks in your kitchen aren’t stupid, they’ll just separate the four tasks between however many of them there are.
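The cooks-in-a-kitchen idea maps almost directly onto what programmers call a thread pool. Here’s a minimal sketch in Python – the dishes and timings are invented, and `time.sleep` stands in for real work. (One honest caveat: in CPython, threads shine for waiting-heavy work; genuinely CPU-bound work is usually split across processes instead. The scheduling idea is the same either way.)

```python
import time
from concurrent.futures import ThreadPoolExecutor

def cook(dish, minutes):
    # Simulate a dish that takes `minutes` to prepare
    # (scaled way down so the demo runs instantly).
    time.sleep(minutes * 0.01)
    return f"{dish} done"

start = time.perf_counter()
# Two cooks (worker threads) take dishes off the list as they free up;
# four dishes do NOT require four cooks.
with ThreadPoolExecutor(max_workers=2) as kitchen:
    results = list(kitchen.map(cook,
                               ["spaghetti", "salad", "sauce", "bread"],
                               [3, 1, 2, 1]))
print(results)
print(f"elapsed: {time.perf_counter() - start:.2f}s")
```

Run it with `max_workers=1` and the elapsed time roughly doubles – that’s the single-core kitchen.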

Multi-core’s tangible benefit isn’t necessarily one of speed, but one of smoothness. Even doing basic tasks on your computer, a multi-core processor can divvy up the different programs you’re running between the cores, resulting in a smoother-running computer. So if you’re running an antivirus scan and talking on the internet, instead of these tasks taking turns on your single core and reducing responsiveness, the antivirus scan – the most time-consuming task – can sit on one core and do its thing while you talk on the internet using the other. Your operating system keeps all of this transparent to you, too, so all you feel is the smoothness.

This is becoming a long-winded section, but there IS more: more cores isn’t always better, and there’s definitely such a thing as diminishing returns. For the vast majority of users, a dual core is going to be exactly as much as you need. Because, as I mentioned before, I edit video – a task that requires as much processing power as you can conceivably throw at it – I use a quad core in my desktop. Yet because most programs really can’t take advantage of more than two cores (many can’t even use more than one), it’s important to keep in mind that the extra cores may go to waste. More than that, more cores mean more power draw, and with that more heat. In a notebook, these become serious considerations, which is why both vendors are taking so long to get notebook quad cores on the market, and why Intel’s upcoming notebook quad is an "Extreme" processor that will only surface in large desktop replacement machines.


Cache

Cache is kind of a funny thing; basically, it’s memory, or RAM, that’s been built into the processor. Because it sits right next to the cores, inside the processor itself, it enjoys substantially greater bandwidth than main memory does. This is why it’s called "on-die cache."

It’s assembled into a sort of a hierarchy: L1 (level 1) cache is the smallest and fastest, L2 cache is pretty much the standard, and the odd processor employs L3 cache. Because of L1 cache’s integration into the core itself, L2 cache tends to be the one that sees the most variability. I’m sure someone in the forums is going to correct me on this, but bottom line here: L2 is the one you want to worry about.

Now I know someone is going to ask: if cache is so much faster than memory (and it is), why bother with memory at all? Why not just integrate all that memory into the processor itself? Simple: memory is physically huge. Sure, it’s not that big when you look at a processor or a stick of memory, but consider this: on a typical processor, the L2 cache takes up roughly half the die. So if as little as 4MB of cache takes up that much space, try to fathom how much space 1GB would need. And that’s why.
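The back-of-the-envelope math behind that paragraph, using its own numbers (4MB of cache filling roughly half the die):

```python
cache_mb = 4                        # L2 cache that already fills ~half the die
ram_mb = 1024                       # 1GB of main memory
die_area_per_mb = 0.5 / cache_mb    # fraction of a die consumed per MB at that density

dies_needed = ram_mb * die_area_per_mb
print(dies_needed)                  # 128.0 -- the silicon area of 128 whole processors
```

The ratio is rough – cache cells and commodity RAM cells aren’t built the same way – but it shows why nobody puts main memory on the die.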

Cache size is honestly a number you shouldn’t worry about too much. Sure, it’ll be advertised all over the place, but it shouldn’t be a consideration. On an Intel processor, as long as the L2 cache is at least 2MB on a dual core, it’s plenty. On an AMD, it doesn’t matter as much, at least as of the time this is written.

Front-Side Bus

This is presently an Intel-only statistic, and even it will be phased out within two or three years. This basically refers to how fast the processor communicates with the memory. It’s measured in MHz.

This number is about as important to overall performance as the cache size is, and it bears mentioning that while desktop hardware has front-side bus speeds all the way up to 1600MHz, notebooks are just now seeing 1066MHz speeds. So why isn’t it ramping up in notebooks as fast?

Simple: a faster FSB draws more power. Remember that laptop processor design is a balancing act that tries to maximize the amount of performance a processor can provide while minimizing the amount of power it draws and heat it generates. So the faster this is, the more power it can draw and the more heat it can generate, which is why technology in notebooks tends to advance a bit more slowly.

This isn’t to say that you should be hunting down low-FSB processors or that you’ll get substantially or even noticeably better battery life for it – it’s all part of the total package.
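To put rough numbers on the front-side bus itself: Intel’s FSB is 64 bits (8 bytes) wide, and the advertised "MHz" figure is effectively millions of transfers per second (the bus is quad-pumped, so a "1066MHz" FSB really clocks at 266MHz). Peak bandwidth is just the two multiplied together – a sketch, not a measurement of real-world throughput:

```python
def fsb_bandwidth_gb_s(rated_mhz, bus_bytes=8):
    # rated_mhz: the advertised FSB figure, in millions of transfers/second
    # bus_bytes: the FSB data path is 64 bits = 8 bytes wide
    return rated_mhz * 1_000_000 * bus_bytes / 1e9

print(fsb_bandwidth_gb_s(1066))  # ~8.5 GB/s for the newest notebook FSB
print(fsb_bandwidth_gb_s(1600))  # ~12.8 GB/s for the fastest desktop FSB
```

Real transfers never quite hit the peak, but the gap between notebook and desktop figures is why the two markets feel a generation apart.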

Recommendations and Conclusion

The bottom line with modern processors is honestly this: a dual core is enough, and the clock speed doesn’t actually matter that much for everyday use. The rise in popularity of modern "netbooks" like the ASUS Eee PC, the MSI Wind, and all of the competitors waiting in the wings suggests how overpowered a modern processor can be. These notebooks use substantially lower-powered processors that nonetheless provide a perfectly reasonable experience for daily uses such as word processing, playing music, or surfing the internet.

While you don’t want a shamefully slow, crippled single core processor for regular use, that extra $250-$500 for a top of the line processor really isn’t going to do you any favors unless you’re rendering video or doing heavy gaming – and I mean HEAVY gaming – on your laptop. Even games are going to be largely limited by the processing power on hand in the notebook’s graphics card or GPU (more on this in a future article); the CPU itself honestly just isn’t going to factor in that much.

Breaking it down as I so love to do, here’s the bottom line:

  • CPUs are general purpose hardware, a jack of all trades and master of none.
  • The number of cores doesn’t directly affect performance, but rather smoothness. It can only improve performance in applications designed to handle more than one core, and these applications are termed "multi-threaded."
  • Cache and front-side bus are statistics that sound pretty, but largely don’t mean a whole lot unless they’re painfully low (1MB of cache or a front-side bus of 400MHz).

Coming Up:

In my next "How it Works" article, I’ll explain to you just what the heck memory (or RAM) actually does and why you may need so much of it.

Stay tuned!
