Last time, we took a quick look at PC66/100/133 SDRAM and how it worked with the chipset in your computer to communicate with the CPU and the rest of the system. In this article, we’re going to examine Double Data Rate (DDR) SDRAM in a similar way.
DDR SDRAM is an outgrowth of what we now call Single Data Rate or SDR SDRAM. Functionally, the difference between the two is just as the name implies: DDR SDRAM is capable of providing data at twice the rate of SDR SDRAM. Of course, as you might imagine, there are differences between practical and theoretical memory designs, so you probably shouldn’t expect a 100% improvement in memory performance for a given bus speed. Nonetheless, DDR is definitely faster.
I recently bought a motherboard that uses a VIA Apollo KT333 chipset. It supports DDR SDRAM with an effective memory speed of up to 333MHz. I don’t happen to have any DDR333 modules, but I do have a couple of DDR266 modules, which run at the same 133MHz clock as the PC133 SDR modules in the motherboard the new one replaces, but move twice the data per clock.
Functionally, there isn’t much difference between the North and South Bridges of an SDR chipset and a DDR chipset. Other than being in color and having a bit more detail, the block diagram below isn’t much different from that of the SDR chipset in the last article. Basically, the North Bridge serves as the interface between memory, AGP, the CPU and the South Bridge, while the South Bridge controls just about every other I/O function on the motherboard.
Obviously something is different, though. After all, DDR modules add an extra 16 pins compared to their SDR counterparts. And they do benchmark faster.
Part of the secret is a clever trick with the memory clock. Right now, the fastest DDR memory on the market runs at 333MHz. More common, though, is 266MHz memory. The trick is that even though the memory may run at 266MHz, the clock is still poking along at 133MHz. The secret? Each clock is really two clocks in one, each running at 133MHz but 180 degrees out of phase with the other. Thus, when one signal is high, the other is low. One important part of this is that there are two points in each clock cycle when the two signals cross: once when the first clock is rising from a zero to a one while the second is falling from a one to a zero, and then vice versa. So, for each “pulse” of a 133MHz clock, you get two nicely defined points that can be used to control when data moves in and out of the module. That’s also why most memory companies say that data is clocked in and out of memory on both the rising and falling edges of the clock. What they really mean is that data is clocked in and out of memory when the two clock signals cross.
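If you like, you can see the two-clocks-in-one idea in a few lines of code. This is purely an illustrative toy, not anything a chipset actually runs: it samples two idealized 133MHz square waves that are 180 degrees out of phase and counts how many crossings occur per clock cycle.

```python
def level(t_ns, period_ns=7.5, phase=0.0):
    """Return 1 if an ideal square wave is high at time t_ns, else 0."""
    return 1 if ((t_ns / period_ns) + phase) % 1.0 < 0.5 else 0

PERIOD = 7.5      # one 133MHz clock cycle lasts roughly 7.5ns
CYCLES = 10
crossings = 0
prev = -1         # just before t=0 the first clock is low, the second high
for i in range(CYCLES * 100):                # 100 samples per cycle
    t = i * (PERIOD / 100)
    state = level(t) - level(t, phase=0.5)   # +1 or -1; flips at each crossing
    if state != prev:
        crossings += 1
    prev = state

print(crossings / CYCLES)  # -> 2.0 (two well-defined transfer points per cycle)
```

Two crossings per 133MHz cycle is exactly where the “266MHz” figure comes from.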
The other important part of this clock scheme is that the clock signals are “differential”. Besides being 180 degrees out of phase with each other, they are placed on the motherboard (and module) in such a way that any noise that shows up on one clock also shows up on the other, so that if you subtract one signal from the other, you’ll always get zero. Since the result of a subtraction is a “difference”, this type of clock is called a “differential clock”.
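The cancellation is easy to demonstrate with a back-of-the-envelope simulation (again a toy sketch, not a real signal model): the same random noise value is added to both clock lines, and subtracting one line from the other recovers a clean ±1 every time.

```python
import random

random.seed(1)
ok = True
for i in range(200):
    high = (i % 10) < 5                  # ideal 50% duty-cycle clock
    ck, ckn = (1.0, 0.0) if high else (0.0, 1.0)   # 180 degrees out of phase
    noise = random.uniform(-0.2, 0.2)    # the same glitch couples onto both lines
    diff = (ck + noise) - (ckn + noise)  # the "difference" the receiver sees
    ok = ok and abs(abs(diff) - 1.0) < 1e-9

print(ok)  # -> True: the noise subtracts away, leaving a clean signal
```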
But wait, there’s more! Data coming from or going to DDR suffers from the same limitations as SDR memory: it takes a certain amount of time to get information from one spot on the motherboard to another. The net result is that for a given address, it takes two clock cycles to get the data out. That means that as soon as one 64-bit chunk of data comes out of the module and goes to the chipset, the memory would have to sit idle for an extra clock cycle before it could send data from a new address. That’s not particularly performance enhancing, so the committee that established the DDR specification came up with a solution.
Each DDR module communicates with the outside world 64 bits at a time, just like SDR does; the chipset receives those 64 bits and passes them along to the CPU. That’s how PC133 gets its maximum 133MHz data rate. DDR ups the ante a bit. Even though the external data bus is 64 bits, the internal bus is 128 bits wide. That means that for every address that is accessed, four 32-bit words come out, 64 bits at a time. So, during those formerly idle clock cycles, data is still coming out…it’s just coming out sequentially instead of randomly. Statistically, this bodes well for most data reads and writes, because data does tend to come in chunks like that. And if all the computer needs is one of those 32-bit words? That’s OK; there are mechanisms to “mask” off the unwanted data.
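Here’s a toy model of that arrangement (the 128-bit value, the address and the mask are all made up for illustration): one wide internal fetch is sliced into a sequential burst of 32-bit words, and a mask picks out the word that was actually wanted.

```python
WORD_BITS = 32

def read_burst(memory, address):
    """One internal 128-bit access, delivered as a burst of four 32-bit words."""
    row = memory[address]
    return [(row >> (WORD_BITS * i)) & 0xFFFFFFFF for i in range(4)]

# Fake module: each address holds one 128-bit value.
module = {0x10: 0xDDDDDDDD_CCCCCCCC_BBBBBBBB_AAAAAAAA}

burst = read_burst(module, 0x10)
print([hex(w) for w in burst])
# -> ['0xaaaaaaaa', '0xbbbbbbbb', '0xcccccccc', '0xdddddddd']

# If only the second word is wanted, the rest are simply masked off.
mask = [False, True, False, False]
wanted = [w for w, keep in zip(burst, mask) if keep]
print([hex(w) for w in wanted])  # -> ['0xbbbbbbbb']
```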
One drawback of this tinkering with the clock specifications is that the idea of the memory running at some frequency starts to become kind of subjective. For example, is it more correct to say that a DDR266 module is running at 266MHz or at 133MHz? A good case can be made for either claim. Because of that, another naming convention has emerged, somewhat clouding the issue. You may see references to PC1600, PC2100 and PC2700. Those designations correspond to the maximum data rates of 1600MB/s, 2100MB/s and 2700MB/s from DDR200, DDR266 and DDR333 modules (respectively).
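The arithmetic behind those names is simple enough to check yourself: a 64-bit (8-byte) module moving data twice per clock cycle has a peak bandwidth of clock × 2 × 8 bytes per second.

```python
def peak_mb_per_s(clock_mhz):
    """Peak bandwidth of a 64-bit DDR module: 2 transfers/cycle * 8 bytes."""
    return clock_mhz * 2 * 8

for name, clock in [("DDR200", 100), ("DDR266", 133), ("DDR333", 166)]:
    print(name, peak_mb_per_s(clock), "MB/s")
# -> DDR200 1600 MB/s, DDR266 2128 MB/s, DDR333 2656 MB/s
```

With the true fractional clocks (133⅓MHz and 166⅔MHz) the figures work out to 2133MB/s and 2667MB/s, which the module names round to the tidier PC2100 and PC2700.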
It’s important to remember that the maximum data speeds are really just theoretical. They are calculated considering perfect conditions, particularly that all 128 bits that come out of the module are used. Still, even if things aren’t perfect, you’ll find that DDR virtually always outperforms SDR. And to make matters even better, there is virtually no price premium anymore.
What’s coming in the future? DDR-II promises even higher memory throughput by using different signaling schemes. You can find out more at the web site of the Joint Electron Device Engineering Council (JEDEC).