There is really no doubt about it: supercomputing is sexy, even if the contest is measured in FLOPS. What is the fastest computer in the world today? Cray and IBM are probably the most consistent front runners, but every so often a contender or pretender gets a nose in front for a period. Fujitsu's K computer holds the crown currently, having taken it from China's national Tianhe-1A. Other badges right up there include Hitachi, NEC and HP in partnership, SGI (Silicon Graphics) and France's Bull.
For those who like detail in their tech diet, the Fujitsu K achieved 10.51 petaflops from a grand total of 705,024 SPARC64 processing cores. It is the first supercomputer to reach a performance level of 10 petaflops, or 10 quadrillion calculations per second.
Core issues
Generally speaking, Moore's Law still holds: transistor counts continue to double on schedule. But raw clock speeds have largely plateaued, and in recent years two engineering advances have instead driven overall computing performance, both in high performance computing and at more mundane levels. Multicore computing, with two, four, eight or more processor cores on a single chip, was the first step. This is essentially parallel computing on a chip and has made enormous gains in overall performance possible, with the important proviso that the software has to manage and take advantage of the multiple computing resources in parallel.
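That proviso can be sketched in a few lines. The toy example below (illustrative only, not any production HPC code) uses Python's standard multiprocessing module to split a summation across four worker processes; the function and chunk names are invented for the sketch.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # One core's share of the work: sum of squares over [lo, hi)
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, cores = 1_000_000, 4
    step = n // cores
    chunks = [(c * step, (c + 1) * step) for c in range(cores)]
    with Pool(cores) as pool:          # four worker processes, one per core
        total = sum(pool.map(partial_sum, chunks))
    assert total == sum(i * i for i in range(n))  # same answer, split four ways
```

The speed-up only appears because the programmer divided the work explicitly; the same loop written naively would run on a single core no matter how many are available.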
The other engineering development delivering dramatic speed gains efficiently and economically is the growing use of graphics processors for general purpose computing. Graphics chips (GPUs, or graphics processing units) have for some years used many cores to speed up graphics rendering. Married to traditional CPUs, they can accelerate applications by offloading the compute-intensive, time-consuming portions of the code while the rest of the application continues to run on the CPU. From a user's perspective, the application or task simply runs faster because it is drawing on the massively parallel processing power of the GPU.
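The kind of work that offloads well has a particular shape: the same operation applied independently to every element of a large array. A minimal sketch (illustrative only, using NumPy on the CPU) of such a "kernel"; the point is that NumPy-compatible GPU libraries such as CuPy accept the identical call, so the hot spot moves to the GPU while the rest of the program is untouched.

```python
import numpy as np

def brightness_curve(pixels):
    # A per-element "kernel": the identical operation applied independently
    # to every element -- the data-parallel shape GPU cores accelerate.
    return np.sqrt(pixels) * 255.0

x = np.linspace(0.0, 1.0, 1_000_000)
y = brightness_curve(x)  # one array call stands in for a million-iteration loop
# On a GPU the call is unchanged with a NumPy-compatible library such as CuPy:
#   import cupy as cp
#   y_gpu = brightness_curve(cp.asarray(x))  # the kernel now runs on the GPU
```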
Dr J.C. Desplat, associate director of the Irish Centre for High-End Computing (ICHEC), gives a striking yet practical example of the kind of impact these developments have had. "The investment in our Stokes national computer cluster in 2005 delivered a 3 teraflops level of performance at an investment cost of about €1.2 million. More recently we took performance up to 40 teraflops at a cost in the region of €2 million. Yet last year here in ICHEC Dublin our engineers built a 1 teraflop computer for under €1,000 from standard components, bought in fact from Komplett.ie."
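Taking Dr Desplat's figures at face value, the shift in price/performance is easy to put a number on (back-of-envelope arithmetic only):

```python
# Figures as quoted by Dr Desplat: (sustained flops, approximate cost in euro)
systems = {
    "Stokes, 2005":    (3e12, 1_200_000),
    "Stokes upgrade":  (40e12, 2_000_000),
    "ICHEC GPU build": (1e12, 1_000),
}
for name, (flops, euro) in systems.items():
    print(f"{name}: {flops / euro:,.0f} flops per euro")
# The sub-EUR 1,000 GPU machine delivers roughly 400x the flops-per-euro
# of the 2005 cluster (1e9 versus 2.5e6).
```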
Graphic interface
"What that illustrates is the GPU technology is genuinely disruptive in high performance computing for scientific and general purposes," Dr Desplat says. Already, three of the top 10 fastest computers in the world use GPUs. In fact the clunky term ‘GPGPU Computing’ (General Purpose Graphics Processing Units) is probably destined to become very familiar as it becomes mainstream in all forms of advanced computing. ICHEC has been a research partner of global leader NVIDIA since 2010, working on its CUDA (Compute Unified Device Architecture) parallel computing architecture. It is one of just a handful of supercomputing centres involved.
"We are at the beginning of major technological change, with processor manufacturers such as Intel and NVIDIA increasing the core density from tens to hundreds and even thousands of cores per chip over the next three to five years. This ‘many core’ computing is a genuine game changer and will provide huge benefits and efficiencies to those able to embrace it and major challenges for those who fail to address it. ICHEC is one of the leading centres in the world in the use of this technology," Desplat says, "and has already brought significant benefits to its academic and industrial partners, in one particular case shortening time to solution by two orders of magnitude."
Irish centre
ICHEC is the official national body for high performance/high end computing in Ireland, based in Dublin and NUIG and working with third-level and research institutions, HEAnet and Met Eireann. Soon to become an autonomous not-for-profit company rather than a public sector body, ICHEC is also the Irish partner in the 24-country Partnership for Advanced Computing in Europe (PRACE), which offers a high performance computing infrastructure, or ecosystem, for both academic and industry science and research. Since 2008, ICHEC has operated as a condominium cluster and shared service using the Stokes and Stoney High Performance Computing (HPC) clusters. Significant savings to the taxpayer are being achieved by moving away from the traditional cluster-ownership approach towards dedicated partitions of the national supercomputer on demand, and users can also book additional resources or yield spare cycles when required. The current members are Met Eireann, the Dublin Institute for Advanced Studies and the universities UCD, DCU, NUIG, UL and NUIM. ICHEC itself is based in TCD, and manages the condominium resources for all in a co-located data centre.
ICHEC has two other important strands to its activities, HPC training and a range of technical consultancy services to business organisations in utilising innovative ICT. The training involves seminars, formal courses and direct mentoring for IT people and researchers encountering HPC for the first time and needing sufficient knowledge to take advantage of it for their own projects. An increasing number of private sector enterprises are working with ICHEC in areas such as analytics and data mining, cloud computing and optimising their applications to run on HPC and GPGPU. Well-known companies that have endorsed the value of ICHEC assistance include Tullow Oil, Paddy Power, Ezetop and CarTrawler.
Weather look
Since meteorology today is a notoriously computing-intensive applied science, it is hardly surprising that Met Eireann is one of the most active partner users of ICHEC's resources. It is possibly the only daily user, running four forecasts a day based on a 10 km grid of its forecast areas. "We use the HIRLAM (High Resolution Limited Area Model; see www.hirlam.org) model, which is a collaborative international research programme between European meteorological institutes for short-range weather forecasting," says Eoin Whelan, meteorologist in the Met Eireann research and applications division. Met Eireann uses a local implementation of the general model shared by Denmark, Finland, Iceland, the Netherlands, Norway, Spain and Sweden.
Met Eireann has invested in computing since the late 1970s and had advanced to an 18-core IBM Linux cluster by the mid-2000s. That in-house machine was decommissioned in 2007, two years after the service had begun to use ICHEC and its Walton HPC system, since replaced by the national Stokes supercomputer.
"Since July last we have moved to a next generation model ‘Harmonie’, bringing resolution down to a 2.5kms grid. We were the first HIRLAM member to do so operationally," explains Whelan. "About 20 years ago we were working with a 50kms grid, which was state of the art in forecasting. Today we are giving local forecasts out to two days that are as accurate as a 24-hour forecast a decade ago." From an IT perspective, the salient point is that numerical weather forecasting is based on extremely complex statistical analysis and modelling. As Whelan points out, all forecasting is judged on two simple criteria: quality and timeliness. Easy to say but progressively more difficult in mathematical and computing terms as the grid areas get smaller. Like all weather forecasters internationally, the ambition is to reach a stage where a small local area or event can enjoy a highly accurate prediction of short range weather-put out the ice cream, close the stadium roof or tie down the chairs.
Making waves
Another area of science equally notorious for the mathematical and computing challenges it poses is fluid dynamics, and in particular ocean waves. In that context, one of ICHEC's best 'customers' is Professor Frederic Dias, a recognised global expert currently leading a five-year research programme in UCD's Earth Institute. He is on leave from his chair in mathematics at the École Normale Supérieure in Cachan. With a principal investigator award from SFI and in partnership with Aquamarine Power, he is working on high-end computational modelling for wave energy systems. The work is both theoretical and practical, Prof Dias points out, since the modelling challenges are common to both, while he is also investigating actual and potential working conditions for the Oyster device being developed at prototype stage by Aquamarine. This is one of four candidate devices for an EU-funded trial off the Irish coast, with the Clare and Mayo coasts as the contending locations. In addition, Dias has recently received a European Research Council award for research into extreme wave events. His UCD research topics list covers water waves and wave energy plus 'sloshing, tsunamis and rogue waves'.
"The research is in three main areas, all of which pose computational challenges," Prof Dias explains. "The current Oyster device is a single prototype 26x13x5 metres in about a 10 metre depth of water. That is affected by free surface hydrodynamics, but there is also a large plate moving up to 60 degrees which can have a significant effect. Short and long wave lengths then come in, with other potential factors such as water depth."
After that, he says, there is the question of what happens when a number of devices are deployed in an array. Beyond that again is the question of what is called 'wave climate', which is still at the fundamental research stage. Deep-water and near-shore waves behave differently, and open water, shallow water or other geographical environments will affect any modelling or prediction.
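The deep-water/near-shore distinction falls straight out of the linear dispersion relation for water waves, ω² = gk tanh(kh). A small illustrative calculation (textbook linear wave theory, not the project's model code; the swell length and depths are invented for the example):

```python
import math

def phase_speed(wavelength_m, depth_m, g=9.81):
    # Linear water-wave dispersion: w^2 = g * k * tanh(k * h),
    # so phase speed c = w / k = sqrt((g / k) * tanh(k * h)).
    k = 2 * math.pi / wavelength_m
    return math.sqrt((g / k) * math.tanh(k * depth_m))

swell = 100  # metres, an illustrative Atlantic swell
deep = phase_speed(swell, 4000)  # deep ocean: tanh(kh) -> 1, speed set by wavelength
near = phase_speed(swell, 10)    # ~Oyster's depth: tanh(kh) ~ kh, speed -> sqrt(g*h)
# The same wave slows as it moves into shallow water (deep > near), one reason
# near-shore devices see conditions that open-water models do not capture.
```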
"These are the kinds of complex computational work we are engaged in," Prof Dias says. "In doing it we work closely with ICHEC. Recently, for example, they are helping us with the ‘parallelisation’ of a lot of our code so that we can use the computing power more efficiently."
Intensive users
The Tyndall National Institute in Cork is engaged in a wide range of research, notably in fields such as micro/nanoelectronics, photonics, microsystems, and theory, modelling and design. "We have over 50 staff in our modelling research team alone out of a total staff of 450," explains Prof Jim Greer, head of electronics theory and director of graduate studies. "We are intensive users of HPC and in fact we reckon we consume between 20% and 30% of the national compute resource at this level. Our own HPC resource deploys over 2,000 cores, which are typically used in-house for tasks and projects utilising from, say, 64 cores to perhaps 256, and we are running at effectively 90% capacity."
Tyndall works with ICHEC, and through it the PRACE network, when it has HPC needs a tier or more above its own capabilities, and has also partnered with IBM in using its famous Watson research centre. "One project there in transistor simulation at an atomic scale took 2,000 cores for six days to simulate the behaviour of a million atoms. That is the level we can sometimes require," says Greer.
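As back-of-envelope arithmetic, that single run represents a substantial compute budget:

```python
# Scale of the Watson transistor-simulation run Prof Greer describes:
cores, days = 2_000, 6
core_hours = cores * days * 24
print(core_hours)  # 288000 core-hours for one million-atom simulation
```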
Exascale computing
IBM has based significant elements of several global research projects in Ireland in recent years, including its Smarter Cities project and its first IBM R&D Laboratory in the EU, whose remit covers risk management analytics ('Risk Science'), exascale computing and next-generation systems in HPC, with a major focus on workload optimisation and parallel computing algorithms. Dr Lisa Amini, director of the IBM Lab, takes a pragmatic approach to where our huge advances in computing power are heading. "At the highest end, exascale computing and beyond, we are looking at multi-core, multi-processor systems with top-end DRAM and advances in materials, optoelectronics and 3D chips. Just below that level is HPC as we are applying it more and more today at a systems level, where the challenge is not so much in the engineering as in the systems software. In order to get the best overall performance we need to understand our workloads better so that we can optimise them to take full advantage of the processing power offered by massively parallel computing."
In essence, she says, we can go after faster component speeds in exascale computing, but our current ways of handling software and workloads will not scale in the same way. All in all, to seriously exploit the power of supercomputing in real-world applications, from scientific research to data modelling and deep analytics, progress also has to be made in workload optimisation and in designing applications to work better with the new architectures.
HPC value
"The Smarter Cities programme is a very good example of the many ways in which HPC can be of value across a wide range of potential applications and from the highest levels down through the technology stack to specific tasks," says Dr Amini. "Weather forecasting, for example, depends on HPC but the results can then be applied in much more mundane and practical ways. For example a wind forecast may tell local authorities whether to expect and deal with fallen trees or structural damage. A rain forecast may trigger drainage management options or warnings for sub-surface infrastructure like tunnels. Similarly, results of analysis of all kinds can have significant value for transportation management, water, energy, emergency services or whatever. The key point is the value of applying analysis to business, in this case the better management of our cities."
More than half of the world's population lives in cities, and urban populations are projected to double by mid-century. As cities grow and expand services, their management and governance become increasingly complex. Through better information generation, analysis and sharing, city managements can anticipate problems and line up priorities and resources more effectively. As Dr Amini describes it, the IBM Smarter Cities initiative aims to focus smart technology on all aspects of city living and management, tying together benefits from high-level HPC analysis all the way down to sensor networks for monitoring, and automation and robotics to respond appropriately to regular needs or emergencies.




