FRIBOURG NETWORK FREIBURG
PROFESSOR OF INFORMATION TECHNOLOGY,
UNIVERSITY OF FRIBOURG
In 2016, you received a grant of two million euros from the European Research Council to fund your work on big data. Can you explain what your research is about?
The use and analysis of big data, the volume of which has exploded in recent years, is revolutionizing our lives. My team of 12 researchers and I focus our efforts on tackling a fundamental problem faced by all large firms: the integration of textual content into big data infrastructures. At the moment, conventional software programs concentrate on what we refer to as "structured" data, which can be presented in tabular form. They are incapable of understanding natural language, whether it be scientific documents, web pages or even tweets. The aim of my research project is to develop algorithms that can extract this textual content so that it can be saved in a big data-compatible format.
What kind of concrete applications will this research pave the way for?
There are countless applications. For example, we are currently working on a platform that will collect scientific publications from around the world and then extract their core content. This will make it easier for researchers to navigate the oceans of documents that are published every year.
Should we be worried about the omnipresence of big data?
The effect of big data on our lives is direct, constant and growing stronger every day. Sooner or later, we will need to start thinking about and discussing the social and ethical boundaries that we want to incorporate into these algorithms.