The cutting edge, blunted


Despite computer development continuing apace on many fronts, the immediate replacement for x86 seems as elusive as ever

9 February 2018

Since about the turn of the millennium, people have been seriously asking what is next after x86 computing.

The venerable architecture has been around since 1978, and has persisted far beyond anything that would have been forecast then, going from its 16-bit beginnings through 32-bit and 64-bit iterations before turning multi-core to proliferate in ways never thought possible, even in the early 21st century.

The advent of virtualisation, containers and other elements of what is now cloud computing has arguably extended that life even further, but the recent Spectre and Meltdown bugs have shown that the architecture cannot be stretched indefinitely, that major limitations may soon arise, and that Moore’s Law looks more threatened than ever.

Alternative architectures have been around for quite a while too, from ARM to other RISC designs such as SPARC, but these have tended to be targeted solutions that do a particular type of job really well, or are applied for a particular effect, such as low power consumption.

Some way off
A new architecture with broad application is still some way off, though again, much work is going on behind the scenes to address this.

One project of note, on which we have been reporting for some years, is HP’s, now HPE’s, ‘The Machine’. At its core is a new type of persistent memory, known as memristor technology, with processors arrayed around a combined pool of storage and memory. Photonic interconnects use light instead of electrons to transfer information, and the result is a hugely more efficient and scalable design that does not appear to suffer the same drawbacks as current processor-centric options.

However, progress is slow, and in 2014, we ran a story saying that the Machine could be up and running by 2016. In 2016, we reported calls for open source developers to pitch in with software stacks to take advantage of the new architecture and its capabilities. In May of 2017, HPE displayed a prototype at its Colorado labs, but it is limited compared to what was envisaged back in 2014.

“Taken together, this all adds up to a less than rosy outlook in the five to 10-year time frame”

However, the prototype does have 160TB of memory capacity, vastly beyond what any currently available server can handle, showing the potential of the distributed processor approach.

The prototype ‘Machine’ runs 1,280 Cavium ARM CPU cores: 40 32-core ARM chips, divided into four Apollo 6000 enclosures, linked to the shared memory pool via a fabric interconnect offering far greater capacity than current technologies. According to reports, the connections are designed as a mesh network so that memory and processor nodes can communicate easily. Field programmable gate arrays (FPGAs) provide the controller logic for the interconnect fabric.

This is all very impressive, but again, it is somewhat short of the original ambitions, and far from a commercially viable option. HPE is to be highly commended for spending the time and effort on such a project, but in terms of practical alternatives for the future, it is still a matter for whitecoats.

Quantum leap
Amid the clamour of the Meltdown and Spectre furore, Intel announced its 49-qubit quantum chip, with optimistic views of where quantum computing is going. IBM has long worked on the quantum computer, as has quantum specialist D-Wave. Microsoft, too, is working on quantum computing, and CEO Satya Nadella talked extensively about it at the 2017 Ignite conference, where he lauded its massively parallel benefits. He expanded on this at the World Economic Forum in Davos, where he said that quantum computing had the potential to solve in hours problems that current computing systems could take lifetimes to tackle.

Nadella used the analogy of a complex maze: instead of a single-strand, trial-and-error approach to solving it, a quantum computer with sufficient capacity could evaluate all possible paths at once, providing near-instantaneous answers. He said the likes of climate change, pharmaceuticals and other areas of medicine and clinical research could be revolutionised by this kind of capability.
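The maze analogy maps closely onto Grover’s algorithm, the standard quantum search technique. As a purely illustrative sketch, and not any vendor’s actual hardware or software, the following classical simulation of Grover’s amplitude amplification shows how repeated “mark the answer, then amplify” steps concentrate probability on the right path in roughly √N steps, where classical trial and error needs on the order of N attempts:

```python
import math

def grover_search(n_states, target, iterations):
    """Toy state-vector simulation of Grover's quantum search.
    All amplitudes start in equal superposition; each iteration
    flips the sign of the target amplitude (the oracle) and then
    inverts every amplitude about the mean (the diffusion step)."""
    amp = [1.0 / math.sqrt(n_states)] * n_states
    for _ in range(iterations):
        amp[target] = -amp[target]            # oracle: mark the answer
        mean = sum(amp) / n_states
        amp = [2 * mean - a for a in amp]     # diffusion: invert about the mean
    return [a * a for a in amp]               # measurement probabilities

N = 16                                        # a 4-qubit "maze" with 16 paths
target = 11                                   # the one correct path
best_iters = int(math.pi / 4 * math.sqrt(N))  # ~sqrt(N) steps vs ~N/2 classically
probs = grover_search(N, target, best_iters)
print(probs.index(max(probs)))                # prints 11: the marked path dominates
```

A real quantum machine runs these steps directly on qubits rather than simulating the amplitudes, which is exactly where the error problems discussed below come in: any noise in the hardware smears the carefully amplified probabilities back out.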

He highlighted the example of developing a catalyst to capture carbon from the atmosphere as another possible application for quantum computing.

However, a recent report from Science News on the IBM efforts has said that quantum computers are unlikely to see any significant practical deployment for 10 years.

There are significant technical problems to overcome, in both the hardware and the software. Currently, most quantum machines require supercooling, with temperatures not far above absolute zero needed to produce the necessary superconducting qualities. Not only that, even the prototypes that are working have massive problems with errors and error checking, as the massively parallel computations throw up all sorts of results of varying accuracy. Added to all of this is the fact that quantum computing theory is evolving all the time, so any design is grounded in a potentially obsolete interpretation almost as soon as it is conceived.

Taken together, this all adds up to a less than rosy outlook in the five to 10-year time frame.

Less predictable
While x86 has gone on far longer than expected, its limitations, as recent events have shown, are less predictable than before. Advances in manufacturing and materials have extended its life, but limitations of design, architecture and software are now making themselves felt.

The Machine is an admirable effort that has shown great potential but has been dogged by delays and difficulties, as have quantum computing and other non-conventional strands of development.

The issue, then, is that we face an uncertain future in which what is on the drawing board seems stubbornly to want to stay there, while the limitations are more apparent and pressing than ever. Overall, computer development may reach a plateau where advances come only in scale and orchestration, as actual computational limits are reached that require entirely new kinds of computer to breach.

It is an odd situation, reminiscent of the retirement of Concorde, where no comparable new technology was available, leading to a kind of evolutionary dead end. By contrast, the computational barrier has no lack of alternatives on the horizon, but none that can yet be practically developed or deployed. So x86 will come to the end of its venerable march, with possibly an awkward no man’s land to cross before a viable alternative makes it out of the lab.
