Focus on research: Joeran Beel, Adapt

16 May 2017

Joeran Beel is Assistant Professor in Intelligent Systems at Trinity College Dublin and a member of the Science Foundation Ireland-backed centre for digital content research, Adapt. His work focuses on personalisation and the development of recommendation services. In this interview he talks about the role and challenges of personalisation in business and academia.

Most people’s experience of personalisation would be within services like Amazon, Spotify and Netflix. What does personalisation mean to you?
Nowadays, personalisation is probably most prominent in the fields of shopping, advertising and entertainment. However, I believe that personalisation has much higher potential in other fields such as medicine, nutrition, and science. Personalised music playlists are nice to have, but I doubt they will change the world for the better (or worse).

In contrast, if more personalised medical treatments or nutrition could help to extend the human life-span, this would have a real impact on society. Similarly, personalisation in academia that helps scientists do better work could have a huge impact on quality of life.

It seems SMEs have little interest in personalisation. Why might that be?
Many small companies do not have the resources to develop a recommender system. Developing an effective recommender system is a big effort and takes months of development time, plus ongoing maintenance.

Companies like Netflix even employ more than 100 software engineers to develop and maintain their recommender system. Smaller companies simply cannot afford this.
However, there are ‘recommender systems as-a-service’ (RaaS) companies that allow smaller businesses to easily integrate a recommender system into their products.
These systems require a few days, or less, to set up. RaaS is one of my major research fields, and I am also offering a RaaS under the name Mr. DLib.

Currently, we are offering the recommendation service for academic organisations such as digital libraries and universities. In the future, we will offer solutions for other industries.

One of the things we’ve learned from search engines is that users don’t click through pages of results, most of the time they stay above the fold on the first page. What kind of patterns are you seeing when it comes to clickthroughs?

There is little research on such questions relating to recommender systems. What I observed in my systems is that click-through rates decrease strongly the more recommendations are displayed. We are currently trying to find out why that is.

Your other projects look at using machine learning to compress academic texts and developing an academic recommendation engine. What unique challenges have you experienced with these projects?
I see two main challenges when dealing with recommendations in academic scenarios. The first challenge relates to reproducibility. I have developed several recommender systems for different academic platforms, and on each platform, different algorithms performed best. This makes the development of novel recommendation approaches very difficult, because you can never be sure how well a novel approach performs in a specific scenario until you have tested the approach thoroughly.

The second challenge relates to identifying the quality of the research articles and books which should be recommended. Every year millions of new articles are published and, as every researcher knows, many of them are of poor quality. Separating the good from the bad ones is challenging.

Some of your work has involved the analysis of mind maps. Has the way people report thinking about a problem found its way into your current body of research?
Analysing mind maps is more difficult than analysing more ‘common’ documents such as research articles. Research articles usually follow a given structure, and have certain headings, which makes it rather easy to identify, for example, the abstract.

In contrast, mind-maps have much more variation, and users create mind maps quite differently. I remember that some of the mind-maps we analysed contained tens of thousands of nodes, each with dozens of words. Other mind maps contained only a few nodes, each with only one or two words. Some mind maps used lots of visual support (images, arrows, colours), while others were very simple.

Privacy looms large on any project dealing with user data. With the General Data Protection Regulation almost upon us, how do you think this will impact your research?
The General Data Protection Regulation will increase my administration and development effort. It also makes it more difficult to find partners for my research projects, because many (potential) partners are currently uncertain about what data they are allowed to share.

To avoid making mistakes, some partners have decided to simply stop sharing any data that might in some way relate to users.
