Rack, blade, brick!


Like many other areas in the overall ICT sector, the server market is going through difficult times. According to figures released by market researcher IDC in early June, worldwide sales of servers were down 3.6 per cent in revenue terms for the year’s first quarter compared to the same period in 2002. Nevertheless, unit shipments for the same period were up by 11.5 per cent, with very strong sales in the entry level category, defined by IDC as being less than $25,000.

These figures would tend to support IDC’s contention that with IT budgets still tight, companies are adding server capacity but are doing so by using low-cost, rack-optimised hardware.

IDC predicts that the worldwide market for 2003 will equal that of 2002 in monetary terms, with total revenue of about $49bn. However, the market will return to growth in 2004 and 2005, reaching $58bn in 2005.

HP continued to be the market leader, said IDC, with $2.94bn in sales, or 28 per cent of the worldwide market. However, that represents a drop of 11.7 per cent from Q1 2002. IBM, on the other hand, grew its first-quarter revenue by 6.9 per cent year-on-year. But that figure was dwarfed by Dell's 15 per cent revenue growth for the same period.

This does not, of course, tell the whole story. Not only are companies fighting it out for market share, but so too are different configurations and processors. The market is at present roughly evenly divided between tower and rack-mounted, or 'pizza box', servers. Towers resemble desktop PCs in appearance but are optimised for use as servers, typically with specialised hardware and software. Rack-mounted servers conform to a standard size usually referred to in U values, where 1U is 1.75 inches. Their architecture resembles that of tower servers, with each unit containing one or more CPUs, storage (hard disk and/or optical) and communications.
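As a quick illustration of the U arithmetic above, the sketch below converts rack units into physical height. The 42U cabinet is an assumption for the example (a common full-height size), not a figure from the article:

```python
# 1U = 1.75 inches, the standard rack-unit height.
U_INCHES = 1.75

def rack_height_inches(units: int) -> float:
    """Physical height of a given number of rack units, in inches."""
    return units * U_INCHES

# A 42U cabinet (a common full-height size, assumed here for illustration):
print(rack_height_inches(42))   # 73.5 inches, roughly 1.87 metres

# Packing density: forty-two 1U 'pizza box' servers, or twenty-one 2U units.
print(42 // 1, 42 // 2)
```

This density is the cabling and management advantage cited later in the article: dozens of machines share one footprint and one set of rails.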

In recent years, however, new server configurations have emerged: blade servers, which contain one or more CPUs and memory only, and brick servers, also known as super-blade servers, which come in various processor, RAM, input/output and storage configurations.

Rack is where it’s at

‘If we look at the numbers from IDC, they reflect what is going on in Ireland,’ says Fergus Murphy, marketing manager for Dell in Ireland. ‘Rack-mounted is where it’s at. Tower systems are still growing in numbers but not as much. Blade servers are coming on but are only expected to account for 2.6 per cent of the market in revenue terms this year. It will grow significantly over the next few years, but this year and next year we are still forecasting blades as accounting for a small chunk of the market.’ Rack-mounted systems, he says, will still dominate for the next two to three years, with blades only achieving a significant market share by 2006.

Simon Sparrow, industry standard services product marketing at HP, agrees. ‘There will still be a need for tower systems even though demand has dropped. They still have a use in branch office type activities where you don’t want to put in a rack.’ In other operations, however, there is definitely a trend towards rack-mounted computing. One of the driving factors, he says, is ease of management from a cabling point of view, because of the higher density of computing power. It may not be as high a priority as, say, processing power, but factors such as infrastructure management do play a role.

‘We also offer a complete range of single-, dual- and quadruple-processor blade servers,’ he says. ‘A lot of people are interested in them for thin client applications such as a Citrix environment.’

Sun Microsystems is also a major player in the server market. ‘We break our computing architecture into two types: vertically scaling and horizontally scaling,’ says Aidan Furlong, country manager, Sun Microsystems Ireland. ‘Typically a vertically scaling system is a single chassis into which you add components, interconnected by a low-latency, high-speed backplane.’ Horizontally scaling systems, on the other hand, he explains, are based around single- or dual-CPU systems more loosely interconnected. As a result, the interconnections have higher latency. With vertically scaling systems, he says, there would typically be only one or very few operating-system instances, whereas horizontally scaling systems would have as many operating-system instances as processors in a single cabinet.

‘We have taken a leadership position in symmetric multiprocessor (SMP) computing in the Unix space, delivering up to 128 processors in a single chassis with over 95 per cent linearity of performance. Then, about five or six years ago, we started driving down into more horizontal technologies when we saw Internet-based services shifting away from client-server architecture to multi-tier server architecture. That drove demand for larger volumes of more densely packed form factors.’

Three tiers from Sun

Sun initially identified three tiers in its multi-tiered architecture (MTA). Tier 1 was identified as highly dense platforms focused on Web services. Tier 2 was the application layer and Tier 3 was the database layer. Traditionally Tier 3 would use vertically scaling systems, Tier 1 would use horizontally scaling systems and Tier 2 would use a mixture. ‘In recent years, we have seen customers taking Tier 1 and having further granularity requirement. We call this Tier 0 which we see being regularly populated by blade systems.’

Blade technology, says Furlong, is growing fairly significantly on the edge of data centres. ‘They are typically driving the processing of http requests, and handling enquiries coming in on Web services. They also provide firewall protection of internal systems and virus scanning at this level. Organisations are finding it easy to implement these servers right at the edge of their data centre before they get into the real meat of service offering.’

While the idea behind blades is to put key server functions on a single card to achieve high density within a rack, the brick concept dictates that one build a standard machine and embed a high-performance switch into it, according to Tiriki Wandurugala, senior consultant on servers at IBM EMEA. ‘This allows you to scale the machine up, only paying for the units you purchase,’ he explains. ‘With blades you buy a high-cost frame and fill it.’

According to Wandurugala, brick servers lend themselves towards databases, whereas blades are more suited to tasks such as file, print and thin-client operations. Bricks are also suitable for virtualisation; in other words, a single machine can behave as if it were two or more servers. ‘Imagine a shop where you need 50 IP addresses for testing today, but tomorrow you don’t. Or imagine the Internet world of Web serving where it is difficult to predict demand.’ With virtualisation, he says, it is possible to bring a server online by typing a few commands and to take it offline the same way.
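Wandurugala's scenario (50 IP addresses today, none tomorrow) can be sketched as a toy model. The `VirtualHost` class and its addressing scheme below are invented purely for illustration; they do not represent IBM's actual virtualisation tooling:

```python
# Toy model of virtualised capacity: one physical machine presenting a
# variable number of virtual servers, each with its own IP address.
# All names and the 10.0.0.x addressing are hypothetical examples.
class VirtualHost:
    def __init__(self) -> None:
        self.instances: dict[str, str] = {}   # server name -> IP address

    def bring_online(self, count: int) -> None:
        """Provision `count` virtual servers, e.g. 50 IPs for a day of testing."""
        start = len(self.instances)
        for i in range(start, start + count):
            self.instances[f"vs{i}"] = f"10.0.0.{i + 1}"

    def take_offline(self, count: int) -> None:
        """Retire the most recently provisioned virtual servers."""
        for name in list(self.instances)[-count:]:
            del self.instances[name]

host = VirtualHost()
host.bring_online(50)        # today: 50 addresses for testing
print(len(host.instances))   # 50
host.take_offline(50)        # tomorrow: gone again
print(len(host.instances))   # 0
```

The point of the model is only that capacity becomes a software operation: no machines are racked or un-racked as demand changes.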

Processors

According to IDC, the server market is dominated by CPUs based on Intel’s x86 architecture. This processor family powered 87 per cent of the units shipped during 2002, and over 88 per cent of units shipped this year are expected to be similarly equipped. While the x86 architecture will decline in share over the next four to five years, it will remain high at over 80 per cent. The winner will be IA-64, the most visible exponent of which is the Itanium processor from Intel. This processor uses the EPIC (Explicitly Parallel Instruction Computing) instruction set, developed jointly by Intel and HP. EPIC-based processors were deployed in 0.1 per cent of servers shipped last year, but IDC expects them to be used in 7.2 per cent of units by 2007, provided HP sticks to its roadmap.

HP currently sells systems based on two different RISC processors: the PA-RISC chip which has long powered its Unix servers, and the Alpha which it inherited from Digital via its purchase of Compaq. Both of these processors will be phased out over time.

‘HP is committed to the roadmap we published last year,’ confirms Paul Marnane, enterprise marketing manager (high-end systems) at HP. ‘We brought out a new Alpha chip in January of this year and we will bring out the EV79 next January, and that will be the last Alpha. Likewise with PA-RISC, we will be bringing out the PA-8900 next year. At the very high end we have the MIPS R14000 and a speed-up of that processor is due.’

After that, says Marnane, HP will have one consolidated approach. ‘From 2005 onwards we will have a single platform based on Itanium. But even though 2004 will see the last of the other chipsets, we will continue to sell Alpha, PA-RISC and MIPS systems until 2006 and we will support them until 2012.’

So does this mean it’s going to be an Intel world or is there room for any other processor?

‘I wouldn’t be sitting here talking if I didn’t think there was room for other processors,’ says Furlong. ‘Sun is a non-conforming organisation. We built our strategy on non-conformity. From our point of view the most important thing we can do is protect the investment our customers, partners and software developers have made over the last 20 years. Every engineering change, every new design, every innovation that we make has that underlying requirement. Yes, we do get asked why Sparc is running at only 1.2GHz while Intel x86 is running at 2GHz. We reply by asking: is the application running any faster? That is more important than clock speed.’

Moore’s Law flawed?

A problem with Moore’s Law, says Furlong, is that while processor speed is doubling every two years or so, it does not take into consideration the fact that memory speed only doubles every six years. ‘We now have a situation where a fast processor executes an instruction, goes to memory for data and then waits for memory to catch up. Processors spend up to 80 per cent of their time waiting for data, which is very inefficient.’ Sun, he says, is emphasising throughput computing rather than processor speed. ‘In a traditional single-thread CPU you have a compute cycle, then latency, then another compute cycle, and so on. What we are doing is adding multi-threading at a chip level.’ When the processor finishes a compute cycle, he says, instead of waiting it begins a new thread.

‘What we are saying is if you have a rack full of blades with a processor in each blade, you can run, say, 100 threads through that rack. But if we start upping the capability of the processors you can get the same throughput from 10 servers as from 100 servers. We believe that within two years we will see at least a 15 times increase in processor throughput and beyond that we could go to 30 or 40 times the performance of today’s single thread processors. We will be able to drive SMP-type functionality on a chip.’
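Furlong's figures suggest a simple utilisation model: if a single thread keeps the processor busy only about 20 per cent of the time (waiting on memory the other 80 per cent), interleaved threads can fill the idle slots. The sketch below is a deliberate simplification under those assumed numbers, not Sun's actual model:

```python
# Simplified throughput model: a single thread keeps the pipeline busy only
# `compute_fraction` of the time (Furlong's figure: roughly 20 per cent).
# Interleaving hardware threads fills the memory-wait slots.
def utilisation(threads: int, compute_fraction: float = 0.2) -> float:
    """Fraction of cycles doing useful work, capped at 100 per cent."""
    return min(1.0, threads * compute_fraction)

print(utilisation(1))   # 0.2 -> a lone thread wastes 80 per cent of cycles
print(utilisation(4))   # 0.8
print(utilisation(5))   # 1.0 -> five interleaved threads saturate the chip

# Under this model a saturated chip does five times the single-thread work;
# Furlong's 10-servers-for-100 claim implies further per-chip gains beyond it.
```

The design point is that throughput, not clock speed, is the quantity being optimised: the chip never runs any one thread faster, it simply stops idling.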

Wandurugala is also confident that while Intel will be used in the majority of servers, non-Intel processors will still have a role to play. ‘I do not believe the server industry will migrate to a single processor type,’ he says. ‘The introduction of Itanium will be interesting as it is the first time Intel is bringing out a processor that is not backward compatible. All of the assumptions people have made in the past, such as “I have a 286 processor, so I can move to a 386”, cannot be made with Itanium. This offers people a chance to seriously consider what 64-bit computing will get them.’

IBM, he says, is following a processor-neutral strategy. It will continue to ship servers based on both its own Power RISC and on Intel architectures. ‘The main thing about the eServer brand is choice,’ he says. ‘We are not saying that Intel runs the world. Instead we are saying that we have a large number of customers who run their businesses with a mix of systems and they won’t want to throw those out. They want us to take the raw technology and package it into forms they can use, consistent with what they have already.’

For instance, he asks, how do you explain to a customer doing only file and print serving that they have to migrate to Itanium? It is for this reason, he explains, that the EXA (Enterprise X Architecture) chipset IBM uses is specifically designed to support both Xeon and Itanium. ‘But then the other question arises: when machines are virtualised, do you care which processor you have? Virtualisation changes the game again,’ he said.

Wandurugala finished on a cautionary note. ‘When I and my friends left college, we thought we would write the best operating system in the world or design the best chip in the world. But our professor cautioned us by asking: “do you see one of anything? If you do, it usually means it’s dying out”.’

05/09/2003


TechCentral.ie