Bring on Moore: Intel’s Multicore Move

Intel revealed its latest creation at the International Solid-State Circuits Conference (ISSCC) in San Francisco on February 12: an 80-core chip. Apart from having 20 times the cores of any processor introduced so far, the chip is remarkable for a few other reasons. It is capable of a teraflop, or a trillion floating-point operations per second, and its 62 watts of power draw is less than that of many of today’s dual-core chips. This is not Intel’s first move into tera-scale computing; it introduced the technology a decade ago in a system at Lockheed Martin Corp.’s Sandia National Laboratories. To put things in perspective, however, that machine occupied far more space, about 2,000 square feet, and consumed far more power, 500 kilowatts. In short, the new chip offers a network’s worth of computing power on a piece of silicon the size of a fingernail and uses about as much power as a light bulb. Intel says it does not plan to release the chip to the general public, and it will more likely find its way into specialized functions such as large-scale financial computing; even so, the chip’s size and power consumption mean that it, or one like it, could ultimately reach personal computers within the next five years. Such an introduction would radically alter the landscape of computing.

Intel’s press releases indicate what the company thinks it has achieved: “Tera-scale performance, and the ability to move terabytes of data, will play a pivotal role in future computers with ubiquitous access to the Internet by powering new applications for education and collaboration, as well as enabling the rise of high-definition entertainment on PCs, servers and handheld devices. For example, artificial intelligence, instant video communications, photo-realistic games, multimedia data mining and real-time speech recognition — once deemed as science fiction in ‘Star Trek’ shows — could become everyday realities.” Bloomberg, citing Intel Chief Technology Officer Justin Rattner, says “the chip may help Intel play a part in developing products such as cars that know when there’s a pedestrian in the road and stop automatically, or computers that can recognize physical gestures by their users.” Pushing a new product via press releases and interviews is common, but Intel is not the only source touting the innovation. According to TechNewsWorld, “If successful, Intel’s research into ‘tera-scale computing’ — in which a chip modeled on Teraflop chips can perform trillions of calculations per second and move terabytes of data — has the potential to transform computers, software and the way people use their computers.” Though this specific chip was designed by Intel for research purposes, it signals direction; according to Jim McGregor, principal analyst at researcher In-Stat, “This is putting the proof-point out there that their road maps are on the right track.” However, there are hurdles, not just for Intel, but for all vendors working in multi-core processing.

Chief among the challenges to multi-core computing is the fact that software programmers are not yet a match for the relatively new chips. InformationWeek points out that, “as the PC industry moves towards eight-core processors and beyond, the key is software, not hardware.” In its article on the issue, InformationWeek quotes Bob Brodersen, the John Whinnery Chair Professor and co-scientific director of the Berkeley Wireless Research Center at the University of California at Berkeley: “Can the software industry meet this challenge?” Referring specifically to Intel’s new chip, Brodersen asked, “How do you program that thing?” Rob Enderle, principal analyst at Enderle Group, agrees: “The software community isn’t able yet to really grasp two cores effectively, let alone four or eight. Larger numbers are well beyond the current skill set.” BusinessWeek sums it up this way: “The main stumbling block to widespread acceptance of such chips, however, is the difficulty in writing software to take advantage of multiple cores. Even as Intel and AMD race to deliver quad-core chips in the next few months, software developers continue to struggle to write threaded applications to take advantage of just two cores.” Intel has considered such implications.

Software Development Times says, “In a global effort to drive adoption of the new hardware capabilities, Intel has set up curriculum in 45 universities around the world in threading of code and parallelism, to help developers understand and leverage the new chip architecture.” Referring to Intel Chief Technology Officer Justin Rattner, BusinessWeek says, “Intel’s Rattner suggests the chipmaker made the announcement of the new chip early to get software developers thinking about massively multicore chips. ‘If we just go two, four, eight cores, we’ll never get there [with software].'” Rattner is also quoted by ZDNet: “I think we’re sort of all moving forward here together . . . As the core count grows and people get the skills to use them effectively, these applications will come.” Intel is also ramping up its training efforts, working with software developers on creating tools and libraries. The ZDNet article expresses the challenges ahead: “The PC software community is just starting to get to grips with multicore programming . . . Microsoft, Apple and the Linux community have a long way to go before they’ll be able to effectively utilise 80 individual processing units with their PC operating systems.” David Patterson, a University of California, Berkeley, computer scientist and co-author of one of the standard textbooks on microprocessor design, believes the scenario is one the programming community must address. “If we can figure out how to program thousands of cores on a chip, the future looks rosy,” he said in the New York Times. “If we can’t figure it out, then things look dark.”
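Rattner’s point that more cores are worthless without parallel software has a classic formal expression in Amdahl’s law, which caps a program’s speedup by the fraction of its work that can actually run in parallel. The numbers below are a hypothetical illustration, not figures from Intel or the cited articles.

```python
# Amdahl's law: ideal speedup on n cores when fraction p of the work parallelizes.
# speedup = 1 / ((1 - p) + p / n)
def amdahl_speedup(parallel_fraction, cores):
    """Return the ideal speedup for a program with the given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even on an 80-core chip, a program that is only 90% parallel tops out near 9x,
# while a fully parallel program would see the full 80x.
print(round(amdahl_speedup(0.90, 80), 1))   # about 9.0
print(round(amdahl_speedup(1.00, 80), 1))   # 80.0
```

That gap between 9x and 80x is the arithmetic behind the warnings from Brodersen, Enderle, and Patterson: the hardware delivers the cores, but the serial portion of today’s software throws most of them away.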

All this is not to say that Intel now has a lock on the future of processing; there are competitors, both in the multi-core space and elsewhere. BusinessWeek notes that rivals “also are hard at work pursuing ways to make chips work harder while sipping power.” Both AMD and IBM have been working on parallel computing, “which breaks up huge tasks into pieces, enabling them to be managed by different parts of a chip.” IBM and others have also introduced new materials in chip production that cut down on data leakage and allow for a considerable reduction in size; getting chips past the 45nm level, once seen as a brick wall, is now a reality. Furthermore, at the same conference where Intel made its announcement, IBM held an unveiling of its own. IBM’s offering, called eDRAM, will embed two types of chip on a single piece of silicon. Embedding the DRAM directly on the processor will allow for the elimination of SRAM, or static random access memory, “which is typically faster than DRAM, and acts as a go-between between the DRAM and the processor.” This will help substantially improve processor performance and reduce size. The product is scheduled for release beginning in 2008.

Two things can be said with certainty about the processing chip arena: its pace of change is extraordinary, and the market is ripe. Though the future is anything but certain, it is going to be smaller and faster.

A more complete version of this posting, with journal articles, and research reports can be found at the website of Analyst Views Weekly.

More information on this topic can be found in the Processors & Semiconductors section of Northern Light’s Software, Computers, & Services Market Intelligence Center.

And in the following articles:

Intel Teases 80-Core Chip before ISSCC
EETimes, February 12, 2007
With dual-core processors, Intel Corp.’s motto is “Do More.” In the near future, it may be “Do Anything” if the company’s research into an energy-sipping, 80-core chip ever trickles down to everyday desktop computing.

Intel Develops 80-Core Power Efficient Chip
TechWorld, February 12, 2007
Intel has developed an 80-core processor that performs more than a teraflop while using less electricity than a modern desktop chip. First described by Intel executives in September, the chip fits 80 cores onto a 275-square millimetre, fingernail-size chip and draws only 62 watts of power – fewer than many modern desktop chips.

Intel Shows Off Trillion-Calculation-a-Second Test Processor
Bloomberg, February 11, 2007
Intel Corp., the world’s largest semiconductor maker, created a test processor capable of performing a trillion calculations a second, billing the thumbnail-sized chip as the first of its kind.

Intel Touts Teraflops Potential of 80-Core Processor Prototype
InformationWeek, February 11, 2007
In an unusual bout of Sunday newsmaking, Intel today issued a press release announcing that its researchers have developed what the chip giant is billing as the world’s first programmable processor that delivers supercomputer-like performance from an 80-core chip.

Intel 80 Core Chip Revealed in Full Detail
The Inquirer, February 11, 2007
The Roadmap to high end chips is now more than ever dominated by interconnects and the ability to get data in, out and around the chip. Couple that with a trend toward more task specific CPUs and you have a new “paradigm” in the works. Those paradigms are shown off in Intel’s Polaris chip.

Intel Tests Chip Design with 80-Core Processor
ComputerWorld, February 11, 2007
Following their march from standard processors to dual-core and quad-core designs in 2006, Intel Corp. researchers have built an 80-core chip that performs more than a trillion floating-point operations per second (TFLOPS) while using less electricity than a modern desktop PC chip.
