Autonomous apps and infrastructure: who is in control?

Robot (Image: Stockfresh)

While not yet approaching sentience, autonomic systems have made significant advances, finds ALEX MEEHAN

12 November 2018

The idea of building IT systems that are self-managing, self-monitoring and self-healing is not particularly new. The concept has been around for years, decades even, but it is only in recent years that such systems have started to achieve broad adoption.

But when applications and infrastructure are self-managing, a question arises — who is really in control? Without full transparency of how autonomous entities act, can they be guaranteed to conform with your enterprise policies?

For Oracle, one of the innovators of this style of computing, the main reason for promoting and pushing autonomous systems is the degree to which they can alleviate headaches for their owners. And not all such headaches are technical in nature.

“Automation isn’t about removing control, it’s about increasing efficiency and removing some of the human error introduced into operating systems and software maintenance. It’s about shifting the burden of managing systems from people to technologies,” Alan Cooke, IBM Ireland

Resource struggle
“For many of our customers who are grappling with innovative technologies such as artificial intelligence and machine learning, or just trying to get value from technical resources, their number one challenge is actually that they struggle to find talented resources or knowledge experts,” said John Abel, vice president for cloud and innovation for the UK, Ireland and Israel with Oracle.

“So what we have done is to embed that technical resource or innovative resource inside our products.”

These products, so the argument goes, are the first generation that are actually self-aware from the point of view of being self-patching, self-updating, self-tuning and self-monitoring.

“We started promoting them on that basis last year and now have autonomous data warehouses and have moved into ATP — Autonomous Transaction Processing. The importance of this is manifold. For example, I’m working with a customer at the moment that has Oracle Analytic Cloud and wants to expose machine learning and AI to the end user,” said Abel.

“They’re using vast arrays of data but don’t have a technical expert that understands the databases. Using this system they can start an autonomous data warehouse, put the data into it and then just start using it and consuming it.”

The power of autonomous, according to Abel, comes from the ability to access, consume and use data at extreme speeds, where before customers would have had to find a technical resource to create that database, do performance tuning and indexing, or find a knowledge expert able to resolve the complex problems that can occur as data volumes increase.

“With autonomous, we don’t need to do that anymore — literally you just input your data and then go,” he said.
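
In code terms, that “load and go” workflow can be as short as the following sketch, which uses the python-oracledb driver against an autonomous data warehouse. The connection details, wallet location, table and file names are illustrative assumptions rather than anything Oracle specified.

```python
# A minimal sketch of the "load your data and go" workflow, assuming a
# hypothetical schema, wallet directory and CSV file. Note that no manual
# tuning, indexing or patching steps appear anywhere in the script.
import csv

import oracledb

conn = oracledb.connect(
    user="analyst",
    password="********",        # placeholder credentials
    dsn="mydw_high",            # TNS alias from the downloaded wallet
    config_dir="/opt/wallet",   # directory containing tnsnames.ora
)

with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE festival_visits (
            visit_day DATE,
            visitors  NUMBER,
            region    VARCHAR2(64)
        )""")

    # Load rows straight from a CSV file...
    with open("visits.csv") as f:
        rows = [(day, int(count), region) for day, count, region in csv.reader(f)]
    cur.executemany(
        "INSERT INTO festival_visits VALUES (TO_DATE(:1, 'YYYY-MM-DD'), :2, :3)",
        rows,
    )
    conn.commit()

    # ...and start querying it immediately.
    cur.execute("SELECT region, SUM(visitors) FROM festival_visits GROUP BY region")
    for region, total in cur:
        print(region, total)
```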

Interpretations
According to Abel, different arms of the business interpret the use of autonomous and self-managing systems quite differently.

“It’s really interesting, if you talk to a technologist about what autonomous does, they get it but then they ask the question, ‘so what?’ If you go to a business person and say ‘do you want to load up data and then start using a visualisation tool on top of it within a few minutes and start exploring the data?’ they’ll bite your hand off,” he said.

The power of autonomous comes from the ability to access, consume and use data at extreme speeds where before they would have to find a technical resource to create that database, do performance tuning and indexing or find a knowledge expert able to resolve the complex problems that can occur as data volumes increase. With autonomous, we don’t need to do that anymore — literally you just input your data and then go, John Abel, Oracle

To further illustrate this point, Abel describes a challenge set for two Oracle interns last summer: do data analysis on the Glastonbury music festival and reveal something that cannot simply be Googled.

“I set that task for them on Monday, on Tuesday they loaded the road network data for the UK, they loaded the weather data, they loaded some Wikipedia data that they found, they loaded a PDF document from Survey and they took in Twitter feeds,” he said.

“They then used an autonomous network built into our product and started to probe that. What they found was that the average age of the people attending Glastonbury is 39, that 55% of people who go are female and that most people who go are in a relationship. So it’s not young people that are the main audience,” he said.

“They next created a self-taught neural network. The volume of data was very high, but they found a connection between vegans and an animal quality web site. From this they deduced that there’s a big vegan movement at Glastonbury and that this was an untapped business opportunity that someone could have used.”

In the space of four or five days, they understood the typology of the people going, the products they were buying and from that created a market opportunity, all by using autonomous systems.

“If you talk to most customers that are doing big data projects or technically advanced projects, by the time our interns had finished that piece of assessment, those customers would have still been going through the procurement process.”

Autonomic computing
IBM started its move towards autonomic computing in 2001, when its engineers saw the need to develop smart systems that could monitor, repair and manage themselves to a high degree. In a 2004 IBM Press book, ‘Autonomic Computing’, the company described systems that ‘install, heal, protect themselves, and adapt to your needs – automatically.’

“Applications being built today generate a lot more data and there are automated processes in how all that data is analysed. But I don’t think it’s possible to get away from having the underlying technical skills and understanding to be able to define what it is exactly that should be looked at,” Rob Curley, Singlepoint

“Autonomic computing helps to address complexity by using technology to manage technology, shifting the burden of managing systems from people to technologies. The term is derived from human biology — the autonomic nervous system monitors your heartbeat, checks your blood sugar level and keeps your body temperature close to 36.5 degrees C, without any conscious effort on your part,” said Alan Cooke, enterprise business technical leader, IBM Ireland.

“In much the same way, self-managing autonomic capabilities anticipate IT system requirements and resolve problems with minimal human intervention. As a result, IT professionals can focus on tasks with higher value to the business. Removing human intervention reduces costs, improves service levels and simplifies management.”

The argument is simple. Business owners want automation because it lowers costs, but the flip side of automation is an assumed loss of control.

“Automation isn’t about removing control, it’s about increasing efficiency and removing some of the human error introduced into operating systems and software maintenance. It’s about shifting the burden of managing systems from people to technologies,” said Cooke.

“Businesses—small, medium and large—want and need to reduce their IT costs, simplify the management of complex IT resources, realise a faster return on their IT investments and ensure the highest possible levels of system availability, performance, security and asset utilisation.”

Autonomous computing addresses these issues, not just through new technology but also through a fundamental, evolutionary shift in the way that IT systems are managed. Moreover, this approach should free IT staff from detailed mundane tasks, allowing them to focus on managing business processes.

Huge leaps
According to Rob Curley, managing director of Singlepoint, huge leaps are occurring in the management of many applications: in how long they are actively managed for and in the predictive analysis that is run on them.

“We do two things, we build applications and we use automation throughout that application build process. They’re no longer dumb applications that are sitting there waiting to fail,” he said.

“You’re basically running proactive analysis on these things to check where they are at from a housekeeping perspective. You want to know is the application in a good and happy state and will it continue to run for the next period of time or are there some issues that need to be noted, that need to be reconciled or need to be looked at?”
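
The kind of proactive housekeeping Curley describes can be sketched as a simple scheduled health probe. The endpoint, latency budget and alerting behaviour below are illustrative assumptions, not Singlepoint’s actual tooling.

```python
# A minimal sketch of a proactive application health check, assuming a
# hypothetical /health endpoint. Real autonomic tooling would feed results
# into predictive models rather than a single latency threshold.
import time

import requests

ENDPOINTS = ["https://app.example.com/health"]   # hypothetical services to watch
LATENCY_BUDGET_S = 1.0                           # illustrative threshold

def check(url: str) -> dict:
    started = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        return {
            "url": url,
            "ok": resp.status_code == 200,
            "latency_s": time.monotonic() - started,
        }
    except requests.RequestException as exc:
        return {"url": url, "ok": False, "error": str(exc)}

def needs_attention(result: dict) -> bool:
    # Flag anything that is down or drifting outside its latency budget.
    return not result["ok"] or result.get("latency_s", 0.0) > LATENCY_BUDGET_S

if __name__ == "__main__":
    for result in map(check, ENDPOINTS):
        if needs_attention(result):
            print("needs review:", result)   # in practice: raise a ticket or alert
        else:
            print("healthy:", result)
```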

“There’s an opaqueness around most AI because the algorithms that we create are still very complex. They’re multi-dimensional mathematical models which change based upon what they receive, but that’s what machine learning basically is — software that adapts based upon its inputs,” Matt Walmsley, Vectra

As this sort of technology becomes self-managing, is there an issue with managers simply presuming that whoever put it in knows how it works, and that it can therefore just be left to run itself?

“Applications being built today generate a lot more data and there are automated processes in how all that data is analysed. But I don’t think it’s possible to get away from having the underlying technical skills and understanding to be able to define what it is exactly that should be looked at,” said Curley.

“You should still be able to do the technical exercise of going through automated applications and tuning them to tell them exactly how to behave. That’s true not only at the application and infrastructure management level, that’s true for all aspects of machine learning and AI.”

Training needs
From a machine learning perspective, all automated and autonomous systems need to be trained. They have to be supervised, monitored and checked periodically to ensure that what the application is training itself on is producing the correct result.
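
In practice, that periodic check can be as simple as re-scoring a model on freshly labelled data and flagging any drift for human review. The model file, data source and thresholds in this sketch are illustrative assumptions.

```python
# A minimal sketch of periodic supervision of a trained model: re-score it on
# freshly labelled data and flag it for human review if accuracy drifts.
# The persisted model, data source and thresholds are hypothetical.
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # accuracy measured when the model was signed off
ALLOWED_DROP = 0.05        # how much drift is tolerated before retraining

model = joblib.load("model.joblib")               # hypothetical persisted model
fresh = pd.read_csv("fresh_labelled_batch.csv")   # hypothetical recent labels

predictions = model.predict(fresh.drop(columns=["label"]))
accuracy = accuracy_score(fresh["label"], predictions)

if accuracy < BASELINE_ACCURACY - ALLOWED_DROP:
    print(f"Accuracy {accuracy:.2f}: schedule retraining and human review")
else:
    print(f"Accuracy {accuracy:.2f}: model still within tolerance")
```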

On the subject of oversight of autonomous infrastructure, and the question of who is really in control if such systems are self-managing, Matt Walmsley, EMEA director for Vectra AI, said that this is generally an overblown issue.

“Organisations are moving towards more automation for very rational reasons. Do we need to at least understand some of the fundamentals of how these new AI based tools are working? Of course, yes. This is a new set of technologies that are coming into their world and we need to understand the fundamentals so we can understand how they work,” he said.

“We’re not quite in a Skynet-type situation yet where AI is going to run off and do all sorts of crazy stuff on its own.”

Defined goals
Today, Walmsley suggests, we are in the world of task-specific or cloud AI, and writing software that can adapt based on its environment needs to be done with a well-defined goal.

“For us as security specialists that goal is how do we spot the subtle behaviours of cyber attackers based on what they do, not what they are. But even then, there’s an opaqueness around most AI because the algorithms that we create are still very complex,” said Walmsley.

“They’re multi-dimensional mathematical models which change based upon what they receive, but that’s what machine learning basically is — software that adapts based upon its inputs.”
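
Stripped to its essentials, “software that adapts based upon its inputs” looks something like the incremental learner below: a minimal sketch using scikit-learn, in which the feature vectors and labels are synthetic stand-ins for whatever a real system observes.

```python
# A minimal sketch of software that adapts based on its inputs: an
# incrementally trained classifier whose internal weights shift with every
# new batch of observations. All data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])   # e.g. benign vs. suspicious behaviour

def observe(batch_x: np.ndarray, batch_y: np.ndarray) -> None:
    # Each batch nudges the model, so its future decisions depend on
    # everything it has received so far.
    model.partial_fit(batch_x, batch_y, classes=classes)

rng = np.random.default_rng(0)
for _ in range(10):
    x = rng.normal(size=(32, 4))              # 32 observations, 4 features
    y = (x[:, 0] + x[:, 1] > 0).astype(int)   # synthetic ground truth
    observe(x, y)

print(model.predict(rng.normal(size=(3, 4))))  # predictions reflect what it has seen
```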

The result of this is that it can be very difficult for the owners of an autonomous system to say definitively how an AI system came to any one particular conclusion.

“But we can know how it’s designed and what features or attributes it’s looking for. We can know that it’s inspecting communications between inside and external bodies and inspecting the timing, cadence, volume and frequency as well as other technical things that are within protocols,” said Walmsley.

“That’s one of the things that people just need to get their head around. If they can’t then maybe what they’re looking for is transparent AI but that comes at a cost. If you want AI to do really complex things at speed and scale, you can’t slow it down to human speeds and expect the same result.”
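
The kind of feature inspection Walmsley describes can be sketched as an unsupervised anomaly detector over connection metadata. The features, data and contamination setting below are illustrative assumptions, not how Vectra’s models actually work.

```python
# A minimal sketch of scoring connection metadata (timing, volume, frequency)
# for anomalies with an unsupervised IsolationForest. Feature choices and
# values are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [seconds between connections, bytes sent, connections per hour]
normal_traffic = np.array([
    [300, 1_200, 12],
    [280, 1_500, 10],
    [310, 1_100, 11],
    [295, 1_300, 13],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_traffic)

# A beaconing host: very regular, low-volume, high-frequency connections.
suspect = np.array([[60, 200, 60]])
print(detector.predict(suspect))         # -1 marks a point as anomalous, 1 as normal
print(detector.score_samples(suspect))   # lower score = more anomalous
```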

Hawthorne effect
This is an example of the Hawthorne effect, the alteration of behaviour by the subjects of a study due to their awareness of being observed: stopping to study how a system is doing what it is doing will materially inhibit or degrade its capability.

“But if we apply it in a task specific area, in a task specific way, then at least we know it’s quarantined. We’ve got a piece of AI that’s looking for nefarious web sites that are being used to orchestrate or command and control malware attacks, for example. We know that’s all it’s looking for — it’s not going to run off and suddenly start attacking the coffee machine,” said Walmsley.

“These things aren’t sentient, they’re not the kind of science fiction, generalist AI consciousness that goes off and says ‘thanks for building me but I’d much rather go off and do this.’ That’s not really how it works in reality.”
