A new and different arms race

(Image: Fujitsu)

21 July 2016

A new arms race is gathering momentum, but this one is not about capital ships, tank fleets or even warheads. This is a race between superpowers for the most powerful supercomputers.

The battle lines have already been drawn: the means of building these machines, such as processors, memory, storage and interconnect technologies, are being controlled by governments to prevent rivals from constructing their own supercomputers.

Back in 2000, the PlayStation 2 was restricted for export because of its 6 gigaflops of performance, potentially providing what was then regarded as supercomputer power to the bad guys. Even earlier, the Sinclair ZX81 was a cause for concern for the same reason, as it was suspected that the Soviets might try to buy them by the shed-load and use them for missile control systems.

“Supercomputers themselves are not weapons. Indeed, most of the time they do fairly intense but also innocuous things… But, they can also be used to model complex, dynamic situations such as nuclear reactions and explosions”

However, just last year, the US government banned Intel, NVIDIA and AMD from shipping certain components to China for fear of their use in constructing supercomputers.

Supercomputers themselves are not weapons. Indeed, most of the time they do fairly intense but innocuous things, such as weather modelling, stress modelling, computational fluid dynamics, census crunching and many more tasks that are, to anyone outside the data analysis community, fairly boring. But they can also be used to model complex, dynamic situations such as nuclear reactions and explosions.

Say, for example, that you, as a nuclear-equipped nation, had signed up to an international test ban but wanted to explore ways of updating your nuclear arsenal without actually destroying a south Pacific atoll or carrying out a rather extreme case of deep fracking. In that case, you could build a supercomputer, input the test environment and the parameters of your new device, and happily detonate it time and again, safe in the knowledge that no one need ever know you are amassing a wealth of information for next-generation weapons development.

But there are other things too that could be developed and modelled in just such environments. Writing for irreverent online tech news outlet The Register, John Leyden quotes Kenneth Geers, ambassador for the NATO Cyber Centre and senior fellow of the Atlantic Council, as saying that the Stuxnet attack on the Iranian nuclear programme by the US and Israel was merely the opening skirmish in what will be a never-ending cyberweapons race.

If one had a supercomputer on which to model the potential effectiveness of any future cyberweapon, the results could be used to refine designs and future directions, and one could be assured of staying one step ahead of everyone else.

In much the same way that some of the last world war’s greatest bombing raids targeted dams, ball bearing factories and power stations, restrictions on the means of building machines that could be used to develop such dangerous capabilities are worth considering.

Not that it seems to have had much of an effect.

Just recently, China announced that it had built the Sunway “TaihuLight”, an all-Chinese supercomputer with a theoretical peak performance of 124.5 petaflops. This was reported in the latest biannual release of the Top500 list of the world’s supercomputers. It is the first system to exceed 100 petaflops, making it the most powerful supercomputer currently in operation.
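For context, a “theoretical peak” figure of this kind is not measured from a real workload; it is derived by multiplying node count, cores per node, clock speed and per-core throughput. The short sketch below illustrates that arithmetic. The numbers in it are rough, publicly reported approximations chosen purely for illustration, not official specifications of the TaihuLight or any other machine.

```python
# Illustrative only: how a "theoretical peak" figure is typically derived.
# The values below are assumed, round-number approximations for the sake of
# the arithmetic, not official specifications of any particular system.

nodes = 40_960            # assumed node count
cores_per_node = 260      # assumed cores per node
clock_hz = 1.45e9         # assumed clock speed in Hz
flops_per_cycle = 8       # assumed double-precision operations per core per cycle

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e15:.1f} petaflops")  # roughly 123.5 petaflops
```

Sustained performance on the Linpack benchmark, which is what the Top500 list actually ranks, comes in below such headline peak figures.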

Not to be outdone, the US Department of Energy’s Oak Ridge National Laboratory is developing an IBM system, named “Summit”, that will be capable of 200 petaflops by early 2018.

It is interesting to note that the two foremost nations currently in the supercomputer race are also the two largest economies, and arguably the most aggressive.

Russia does have supercomputer plans. Back in 2014, news agency Prensa Latina reported that Andrei Zverev, CEO of state-owned electronics holding company Ruselectronics, was coordinating with the Ministry of Industry and Trade to create a 1.2 petaflop computer for the Russian defence industry, adding that all of its processors and components would be designed in Russia. Perhaps ironically, there was mention even then of licensing Chinese processor technology.

Russia’s presence in the Top500 listings has declined since about 2010, from more than 10 supercomputers to about seven, as it fails to keep pace with developments elsewhere.

Despite this, Russia has successfully developed the kinds of weapon systems that rely heavily on the capabilities of supercomputers, such as stealth fighters. The Sukhoi PAK FA programme has successfully transitioned to the T-50 aircraft, which first flew in 2010 and is due for delivery to the armed forces late this year or early next. A stark contrast to the US F-35 fiasco, though perhaps more analogous to the F-22 programme.

The implication of all this appears to be that for the more traditional high performance computing (HPC) applications, the cutting edge is nice but not critical. For emerging applications, arguably cyberwarfare among others, it is worth keeping up to date and pushing the boundaries.

However, there is a significant caveat in all of this. Although the TaihuLight boasts 10.65 million compute cores, the real measure of a supercomputer is its versatility: its ability to run multiple workloads.

Often, supercomputers are built as one-trick ponies to do a specific job. They are now getting so expensive, however, that this is no longer viable; they must be robust and flexible enough to handle multiple usage modes and bring their power to bear on a wider range of tasks.

This drive for versatility may be what keeps the US in the lead in terms of usability, if not outright horsepower, in the HPC race, given its access to industry-standard processors from Intel and IBM as well as GPUs from NVIDIA and AMD.

But another interesting development in this space came from Fujitsu.

One of the few other licensees of the SPARC architecture, for so long the heart of Sun Microsystems and later Oracle servers, Fujitsu said it would drop SPARC in favour of an ARM-based design for its upcoming system, dubbed the “Post-K” supercomputer.

The K computer delivered 10.5 petaflops of performance using the Fujitsu-designed SPARC64 VIIIfx processor. In 2014, Fujitsu said it would use its SPARC64 XIfx processor in the next K computer. Now, the Post-K machine looks set to go a different route, perhaps one where outright power is balanced against power consumption and modularity for greater versatility, but still in the tens-of-petaflops performance range.

With all of these developments on the supercomputer scene, one might think it a good thing: genomes, new cancer drugs and global vaccination programmes can be modelled and tweaked, cutting research and development times for the benefit of all. To a certain extent, that would be correct. But for a bit of context on the race for the fastest supercomputers, I’ll leave the last word to Steve Conway, high performance computing analyst at IDC:

HPC “is now so strategic that you really don’t want to rely on foreign sources for it”.

Quite.
