Focus on research: Prof Mathieu d’Aquin, Insight

Mathieu d'Aquin, Insight

11 September 2018

Prof Mathieu d’Aquin is a professor of informatics at Insight, the SFI centre for data analytics based at NUI Galway. In this interview he talks about applying old wisdom to new ideas in artificial intelligence and the role of ethics and fiction in guiding research.

What are your areas of research?
My research is on various aspects of artificial intelligence, taking traditional approaches like knowledge representation and reasoning, and combining them with recent developments, such as knowledge graphs and large-scale data management/analytics.

Much of this has taken place in the context of the Semantic Web, with the ultimate goal of gaining knowledge of a particular domain, practice, behaviour or context from data, expertise and practice.

This research mostly starts from the requirements of a specific application domain: I worked on medicine (oncology) during my PhD, on education and the humanities in later years, on personal information management and, more recently, on smart cities and the Internet of Things.

One very interesting thing about all of those domains, and the approaches taken within my group to address them, is the need to combine the purely technical with methodologies that take into account non-technological aspects, including collaboration and ethics.

You’ve said one of the challenges facing the Semantic Web is to “bring the things that really address the semantic layer into usable forms”. Can you unpack this?
The Semantic Web is an interesting initiative, which is still not very well understood. In its simplest form, it is about getting information on the Web into a form that is directly processable by automated processes. In other words, it is about making the Web one large universal knowledge base.

This is still a relatively new research area but it has had a lot of impact. The Google Knowledge Graph, for example, can be directly traced back to Semantic Web research, and so can IBM Watson or Apple Siri, even if those systems are rarely described that way.

Despite those results, we are still very far from this idea that information on the Web could be automatically interpretable and therefore processable. In other words, there is still a lot of research to be done on the inference mechanisms, ontological representation approaches, and the knowledge modelling methods that can help lift the immense amounts of data we are processing into true knowledge graphs.
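To make this slightly more concrete, here is a minimal sketch in plain Python of the kind of thing a knowledge graph with inference does. The vocabulary and facts are invented for illustration, and real Semantic Web systems would use RDF, OWL ontologies and dedicated reasoners rather than a hand-written rule:

```python
# A toy knowledge graph: facts stored as (subject, predicate, object) triples.
# All names below are invented purely for illustration.
triples = {
    ("galway", "is_a", "city"),
    ("city", "subclass_of", "settlement"),
    ("settlement", "subclass_of", "place"),
    ("galway", "located_in", "ireland"),
}

def infer_types(kg):
    """Apply one simple rule until nothing new appears:
    if X is_a C and C subclass_of D, then X is_a D."""
    kg = set(kg)
    changed = True
    while changed:
        changed = False
        new_facts = {
            (x, "is_a", d)
            for (x, p1, c) in kg if p1 == "is_a"
            for (c2, p2, d) in kg if p2 == "subclass_of" and c2 == c
        }
        if not new_facts <= kg:
            kg |= new_facts
            changed = True
    return kg

enriched = infer_types(triples)
print(("galway", "is_a", "place") in enriched)  # True: derived, never stated
```

The point is that once the semantics are explicit, software can derive facts that were never written down directly, which is what turns a pile of data into a knowledge graph.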

We need such methods, most of them originating from decades-old AI approaches, to interpret the great number of models and patterns now being produced through increasingly popular data analytics techniques. Without such methods, those models might remain unexploitable, or be exploited in very misguided ways.

You’ve looked at the area of personal analytics as a way to improve productivity. Can you go into detail on how this works?
Personal analytics is the idea that individuals can use data analytics approaches to analyse their own behaviour. It has been very much democratised through the emergence of self-tracking and the quantified self.

One thing we do, especially through the AFEL project (Analytics for Everyday Learning), is to apply this idea to what we do and, more specifically, what we learn on the Web. We are literally trying to build a Fitbit for online learning.

There are many, many technical and non-technical challenges in doing this, but what’s most interesting is the way it helps improve behaviour.

Self-directed learning on the Web is complex: it is not structured, not assessed, and has no clear end. What personal analytics can do is help users understand how they learn, what they are learning about and what they have done towards that, so they can set their own objectives.

The goal of personal analytics is not just to have a nice visualisation of one’s behaviour, but to be able to say ‘ah, yes, that’s what I have been doing’ and ‘I want to do more of it’ or ‘I would rather be doing something else instead’. This applies especially to learning nowadays, as more of it happens in a self-directed way through online resources. It applies to many other aspects of our lives too, where making the data we create available and interpretable can make a huge difference.
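As a rough, hypothetical illustration of what a ‘Fitbit for online learning’ might compute (the event format and topics here are invented, and this is not how the AFEL tools are implemented), the sketch below turns a log of self-tracked learning sessions into a per-topic summary that a learner could review against their own objectives:

```python
from collections import defaultdict
from datetime import date

# Invented example log: each entry is (day, topic, minutes spent learning).
sessions = [
    (date(2018, 9, 3), "python", 40),
    (date(2018, 9, 3), "statistics", 25),
    (date(2018, 9, 5), "python", 55),
    (date(2018, 9, 7), "semantic web", 30),
]

def summarise(log):
    """Aggregate minutes per topic so the learner can see where
    their time actually went, and compare it with their goals."""
    totals = defaultdict(int)
    for _day, topic, minutes in log:
        totals[topic] += minutes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for topic, minutes in summarise(sessions):
    print(f"{topic:<14} {minutes:>4} min")
```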

There’s some debate as to where data science actually came from: statistics or computer science. Where do you think its roots lie?
My biased view is that it came from computer science, but also that it does not really matter where it came from.

I am a pure computer scientist myself, so I can more easily see the case for that view: a lot of data science is about ‘hacking the data’, exploring it, formatting it, reorganising it and processing it through complex, distributed mechanisms. That means a large part of the data science process is enabled by many different areas of computer science.

However, exactly the same thing can be said of statistics. Exploring data relies heavily on computing descriptive statistics, and even machine learning and data mining approaches that are not explicitly described as producing statistical models are fundamentally based on statistics.
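As a small illustration of both points (the records are invented, and real pipelines would involve distributed tooling rather than a few lines of Python), the snippet below does a little of the ‘hacking the data’ side, cleaning and reshaping messy records, and then the statistical side, computing descriptive statistics over the result:

```python
import statistics

# Invented, deliberately messy records: mixed types and a missing value.
raw = [
    {"city": "Galway", "temp": "14.5"},
    {"city": "galway ", "temp": 13},
    {"city": "Dublin", "temp": None},
    {"city": "Dublin", "temp": "15.2"},
]

# The computer science side: formatting, cleaning, reorganising the data.
cleaned = [
    {"city": r["city"].strip().title(), "temp": float(r["temp"])}
    for r in raw
    if r["temp"] is not None
]

# The statistics side: exploring the data through descriptive statistics.
temps = [r["temp"] for r in cleaned]
print("mean temperature:", round(statistics.mean(temps), 2))
print("standard deviation:", round(statistics.stdev(temps), 2))
```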

In the end, where data science originated is not what counts. What’s the point of having a new discipline if it is just a refinement of an existing one? Data science is both and more, and that’s the way it needs to be taught.

You’ve spoken about the prophetic power of science fiction. Is our relationship with tech more Black Mirror than Things to Come?
The reason to look at the relationship between science fiction and technology research from the darker perspective is precisely that our society tends to go the other way: I don’t think it would be controversial to say that research leading to technology development, especially in data science and data analytics, is amongst the most commercially oriented of all.

We hear how smart cities are going to make our lives better, how intelligent assistants will help us in our daily lives, or how smart shopping apps will help us be more efficient. Obviously, I think that’s great. However, we have to be careful not to get stuck in this over-positive, over-seductive message and forget that those things actually affect people’s lives.

What I’m advocating for is an ethics-aware method for data science research: understand what the implications of your research could be, what kind of products it could lead to, and what social phenomena might emerge from their widespread adoption. It is not easy to do. It needs strong involvement from the social sciences, and a capacity for anticipation that is not generally found in the practices of data science or even, paradoxically, of artificial intelligence.

The idea of integrating science fiction narratives there is inspired by many different things: Black Mirror, obviously, and design fiction, as well as concrete examples from the past. Leo Szilard, for instance, went from being a nuclear physicist to a politician and science fiction writer after World War II, using science fiction as a tool to drive his political message, for example about the establishment of science foundations in the US (see ‘The Mark Gable Foundation’ in the book The Voice of the Dolphins and Other Stories).

The goal of using those sometimes dystopian narratives from a ‘not too distant future’, as suggested in design fiction, is not to prevent those developments from happening. It is so that they are designed in a way that maximises both innovation and ethics-awareness, capturing as many of the benefits as possible while having plans to avoid as many of the risks as possible.


TechCentral.ie