Experts estimate that Internet data center power requirements are rising by as much as 20 percent a year, and that data centers collectively consume as much electricity as entire countries such as Iran, Mexico, Sweden or Turkey. The industry hopes to reverse the trend by revisiting the design of cooling systems, power supplies and server architectures.
In servers, the notoriously voracious microprocessor is passing the power-hog mantle to the DRAM, which offers fast data access but requires a heat-generating refresh every few milliseconds. Thus the greening of the data center includes a focus on lower-voltage DRAMs, non-volatile alternatives and the emerging category of storage-class memories.
Whether the green-memory movement thrives or dies on the vine, the DRAM status quo could be uprooted.
The DRAM’s power appetite is not its only problem. As the recession wears on, OEMs are keeping a nervous eye on struggling memory suppliers. “It’s not pleasant to see our partners suffer so badly,” said Tom Lattin, director of strategic commodities for industry-standard servers at Hewlett-Packard Co.
DRAM scaling, meanwhile, could hit a wall as it becomes increasingly difficult to shrink the capacitor within the device. That could fuel the need for such alternatives as ferroelectric, magnetoresistive, phase-change and resistive RAM.
Don’t look for the DRAM to disappear, said Bob Merritt, an analyst with research firm Convergent Semiconductors who believes DRAMs will scale to 20nm. “There will be DRAM applications for the next 10 years,” Merritt said, but “you will also see applications” that will turn to non-volatile alternatives (which don’t require refresh to maintain the data) for server main memory.
Bill Tschudi, program manager at Lawrence Berkeley National Laboratory, said the drive to make data centers more power efficient will include better IT practices, new power distribution schemes, higher processor utilization rates and “advancements on the memory side.”
“Memory power is a significant portion of platform power,” noted Dileep Bhandarkar, distinguished engineer with Microsoft Corp.’s Global Foundation Services unit. “As processor performance increases and virtualization takes off, the memory footprint will increase. There is a need for lower-voltage DRAMs.”
DRAM makers have responded with lower-voltage DDR3 synchronous DRAMs, which have found a home in servers from such vendors as HP, IBM, SGI and Sun.
Meanwhile, solid-state drives (SSDs) and I/O accelerators could shake up the memory and storage hierarchy. And server startups Schooner Information Technology Inc. and Virident Systems Inc. have released data center servers that promise to cut hardware costs as well as power consumption. The potential of the technology has prompted IBM to form an alliance with Schooner.
In theory, green servers could replace traditional X86- or RISC-based systems, possibly displacing DRAM in the process. Schooner and Virident use lower-power, non-volatile, “storage class” memory to take on the search index and other tasks usually handled by DRAM.
Market watcher Frost & Sullivan estimates that a typical server farm of 5,000 systems with 32 Gbytes of DRAM each could be reduced to 1,250 systems with 128 Gbytes of non-volatile memory each, resulting in a 75 percent reduction in energy over four years, a 75 percent reduction in the cost of physical space and a 45 percent reduction in capital expenditures.
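The consolidation arithmetic behind that estimate is easy to check. The following minimal sketch in Python reproduces the capacity and server-count math from the figures above; the cited energy and space reductions track the server-count reduction, though Frost & Sullivan’s full model presumably includes inputs not shown here:

    # Back-of-the-envelope check of the Frost & Sullivan scenario above.
    dram_servers, dram_gb = 5_000, 32   # conventional farm: 32 Gbytes of DRAM each
    scm_servers, scm_gb = 1_250, 128    # consolidated farm: 128 Gbytes of NVM each

    # Total memory capacity is unchanged: 160,000 Gbytes either way.
    assert dram_servers * dram_gb == scm_servers * scm_gb

    # One quarter of the systems remain, matching the cited 75 percent
    # reductions in energy and physical space.
    reduction = 1 - scm_servers / dram_servers
    print(f"Server-count reduction: {reduction:.0%}")   # 75%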
Increasing consumption
Such reductions would be welcome news for U.S. data centers, which spend $3 billion per year on electricity alone, according to the Environmental Protection Agency. The EPA sees U.S. data center power consumption rising from 61 billion kWh today to 100 billion kWh in 2011. Meanwhile, Frost & Sullivan projects that the total installed base of data center servers will rise from 2.2 million units in 2007 to 6.8 million units next year.
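Taken together, the EPA figures imply an average electricity price of roughly 5 cents per kWh. A rough sketch, assuming that price holds constant (an assumption of this sketch, not an EPA claim), projects the annual bill at the 2011 consumption level:

    # Rough projection from the EPA figures cited above. A constant
    # electricity price is an assumption of this sketch, not an EPA claim.
    annual_cost = 3e9       # dollars per year today
    consumption = 61e9      # kWh today
    projected = 100e9       # kWh, EPA projection for 2011

    price = annual_cost / consumption
    print(f"Implied price: ${price:.3f} per kWh")                          # ~$0.049
    print(f"Projected 2011 bill: ${projected * price / 1e9:.1f} billion")  # ~$4.9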
The typical server consumed about 50W before 2000 but draws some 250W today, according to “Energy Efficiency for Information Technology,” a new book published by Intel Corp. And SGI, formerly Rackable Systems Inc., estimates that for every 100W to power a server, a further 60W to 70W are needed to cool it.
Processor power consumption ranges from 45W to 200W, according to Intel, and in a server with eight 1-Gbyte dual in-line memory modules, the DIMMs can contribute 80W to the power budget. In large servers with up to 64 DIMMs, the company notes, the result could be “more power consumption by memory than processors.”
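Scaling Intel’s per-module figure shows how quickly memory overtakes the processor, and SGI’s cooling estimate compounds the total. A back-of-the-envelope sketch, treating 10W per DIMM as the implied average rather than a published spec:

    # Intel's figures: eight 1-Gbyte DIMMs draw about 80W, implying
    # roughly 10W per module (an average, not a published per-DIMM spec).
    watts_per_dimm = 80 / 8

    for dimms in (8, 64):
        print(f"{dimms} DIMMs: {dimms * watts_per_dimm:.0f}W")  # 80W and 640W

    # At 64 DIMMs, memory (640W) tops even a 200W processor. Per SGI,
    # every 100W of server power needs another 60W to 70W of cooling.
    server_watts = 640 + 200
    print(f"With cooling: ~{server_watts * 1.65:.0f}W")  # midpoint of 1.6x-1.7x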
Intel incorporates “automatic memory throttling” on its processors to reduce heat. DRAM vendors are also reducing heat generation in their latest 50nm-class parts, exemplified by those from Hynix, Micron Technology and Samsung.
Meanwhile, server vendors have been migrating from DDR2 SDRAMs to 1.5V and, more recently, 1.35V DDR3 SDRAMs. DDR3 doubles performance over DDR2 and, in its 1.35V version, cuts power consumption by 60 percent, said Jim Elliott, VP of memory marketing for Samsung Semiconductor Inc.
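Part of that gain follows from first principles: dynamic power in CMOS scales roughly with the square of the supply voltage. A minimal sketch, ignoring frequency, leakage and design changes (and thus accounting for only part of Samsung’s cited figure):

    # Dynamic CMOS power scales roughly as P ~ C * V^2 * f. Voltage
    # alone accounts for only part of Samsung's cited 60 percent;
    # leakage, frequency and design changes are ignored here.
    v_ddr2, v_ddr3 = 1.8, 1.35   # standard DDR2 vs. low-voltage DDR3 supply

    savings = 1 - (v_ddr3 / v_ddr2) ** 2
    print(f"From voltage scaling alone: {savings:.0%} lower dynamic power")  # 44%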
By next year, DDR3 modules could migrate from conventional to load-reduced DIMMs, which could boost memory capacity fourfold. And by 2011, vendors could unveil DDR4 SDRAMs, reportedly a 1.2V technology.
But those developments won’t take all the pressure off DRAMs. Data center servers’ use of virtualization, which enables multiple operating systems to run on the same computer, reduces hardware costs but fragments the system workload, since not all processors run the same tasks at the same time. Server utilization ranges from 10 to 30 percent in a data center, according to the Uptime Institute.
The use of virtualization, along with complex multicore processors, heightens the need for more-efficient memory, said Michael Sporer, director of marketing for enterprise memory at Micron Technology Inc.
“Today, the bottleneck is in the disk and the disk subsystem,” Sporer said. “The next bottleneck may be in memory performance, rather than capacity.”
Server startups Schooner and Virident are pushing similar concepts to address the looming performance squeeze.
Greening memories
In April, Virident rolled out its GreenCloud line of X86-based data center servers, said to deliver up to 70x the performance of traditional systems. The line uses storage-class memory, which bridges the performance gap between DRAM and mass storage. Virident said the architecture boosts processor utilization and eliminates I/O overhead by providing random word-level access to large data sets.
Virident’s systems still use DRAM for some functions, but storage-class memory is more efficient for search-index and related applications, said president and CEO Raj Parekh. Virident’s initial systems use Spansion Inc.’s EcoRAM NOR devices, but the startup also expects to use NAND and phase-change memory from Numonyx Inc.
Over time, Virident’s systems will variously support a single memory technology or a mixture of device types, depending on the application. NAND reads small data chunks at high rates, for example, while NOR is ideal for random read searches and phase-change memory offers high write speeds, Parekh said.
HP, meanwhile, is putting a new twist on a conventional approach with its new ProLiant G6 servers. Based on Intel’s Xeon 5500 processors, the G6 deploys thermal sensors and a technology that caps the power drawn by the server.
The servers also use DDR3 memory, which Jimmy Daley, marketing manager for industry-standard servers at HP, called a “major step forward” over DDR2.
HP stopped short of endorsing storage-class memory, but it offers an optional I/O accelerator from Fusion-io Inc. The subsystem, based on a redundant NAND architecture, does not replace the hard drive but sits between the memory and storage system to alleviate system I/O bottlenecks, said David Flynn, chief technology officer for Fusion-io. The accelerator is said to provide more than 1 million I/O operations per second in the HP servers.
SGI is keeping an eye on Spansion’s EcoRAM and the Fusion-io accelerator, said Geoff Noer, vice president of product management at the server maker. EcoRAM could address “some opportunities,” Noer said, but “I don’t see it as a mainstream solution.”
SGI’s new CloudRack C2 is a cabinet design that can pack a number of dense, rack-mount servers in the same unit. C2 supports up to 1,280 processor cores per cabinet. To handle heat, the X86-based offering uses redundant fan arrays and dc power supplies.
The C2 supports DDR3 SDRAMs, and Noer said he is also bullish on solid-state storage. Between 2008 and 2013, according to iSuppli Corp., the use of SSDs could allow data centers to reduce power consumption by a combined 166,643 MWh—slightly more than the total megawatt-hours of electricity generated in the nation of Gambia in 2006.
That’s good news. But even as the server supply chain finds ways to rein in power, more data centers will be built, turning up the heat.
That has industry jokesters quipping that perhaps Google should look to erect its next data center on a rig off the coast of Iceland. Or why not the moon?