SM: Chip design does not have to be a labor-intensive process.
FW: No, it doesn’t, but it can be. You can very easily apply too many people. You need a lot of people for complex chips, so there is always a temptation to use too many. The problem is that you then run into the mythical man-month problem and its communication overhead. A good friend of mine, Jim Keller, has a rule that it takes 100 people to design a processor: if you have fewer it takes longer, and if you have more it takes longer. There is an optimal number of people for work in our space.
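Editor's note: a minimal sketch, not from the interview, of the arithmetic behind the mythical man-month point. In Brooks's argument, the number of pairwise communication channels in a team of n people grows as n(n-1)/2, so coordination overhead grows much faster than headcount:

```python
# Illustrative sketch (not from the interview): Brooks's
# communication-overhead argument. Pairwise channels in a
# team of n people grow quadratically: n * (n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for team in (30, 100, 200):
    print(f"{team:>3} people -> {channels(team):>6} communication channels")
# Output:
#  30 people ->    435 communication channels
# 100 people ->   4950 communication channels
# 200 people ->  19900 communication channels
```

By this rough measure, a 200-person team carries about 46 times the coordination overhead of a 30-person team, which is the intuition behind keeping the team small.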
Part of the reason I liked what MetaRam was going to do is that we saw something we could do with a relatively simple chip, meaning a team of 30 people could get it to market. That means you can hire a more capable team. If I set out to hire 200 people in a year, I could not hire 200 great people. Google can, because they are so big they can attract 200 great people, but if you start with just two, that type of hiring will never happen. If we focus on 15 or 20, then we can keep our standards high, use close connections, and find high-quality people. Having a smaller, more capable team is always a big advantage in this business.
SM: Could you relate a bit of history as to where memory has been and where it is going?
FW: For quite some time main memory has been based on DRAM. DRAM is on a pretty good pace of increasing density every year, but not quite as fast as processors are increasing the demand for it. A gap opens up, and every 5 or 10 years you have to figure out some change to close that gap. Two years ago we really reached that point, partly because the overall transition from 32-bit to 64-bit computing was delayed by x86’s late move to 64 bits.
On a 32-bit machine you can only put 4 GB of memory in the system before you run out of address space. People took this artificial address-space barrier and used it to let machines keep running with smaller amounts of memory longer than they should have. This built up big demand for more memory, and at the same time new applications such as in-memory databases and web search drove up the amount of memory needed. We had reached one of those points where some change to the overall systems architecture was needed to put more memory into one machine.
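Editor's note: a quick arithmetic check, not from the interview, of that 4 GB figure. A 32-bit address can name 2^32 distinct bytes, which caps directly addressable memory at 4 GiB:

```python
# Illustrative sketch (not from the interview): why 32-bit
# addressing caps directly addressable memory at 4 GB.
ADDRESS_BITS = 32
addressable_bytes = 2 ** ADDRESS_BITS       # distinct byte addresses
print(addressable_bytes)                    # 4294967296
print(addressable_bytes // 2 ** 30, "GiB")  # 4 GiB
```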
Interestingly, memory has a history of really neat innovations that were fundamentally good, but where something was not quite right in their deployment. Rambus introduced new memory technology around 1994. They completely changed the interface to the memory, offering much higher bandwidth and the ability to put more memory into a system, thus increasing capacity. Those were really good breakthroughs, but they were fundamentally too radical a change to the systems architecture, which is why they did not succeed in the marketplace. I think they violated a couple of golden rules of memory. The first rule is: don’t change the DRAM. DRAM is all about getting the maximum number of bits on a device, and any time you add complexity to DRAM you are breaking that rule. The second issue is that they changed the basic interface, which meant they added cost to low-end systems without providing a lot of benefit. It became a case where the poor were taxed but only the rich benefited.
The next generation was something called the Fully Buffered DIMM. It tried to fix the first of those problems: it left the DRAM untouched and instead used a buffer chip that sits between the CPU and the memory to do the same things Rambus let you do. Unfortunately, it did not fix golden rule number two, which ultimately hurt it as well.
This segment is part 3 in the series: Pioneering Change in the Memory Market: MetaRam Visionary Fred Weber