How does artificial intelligence (AI) view the world?
Freiburg, Jan 26, 2024
We know from previous research on internal world models that people are very good at navigating the unknown. When a person plans a trip to a foreign city, they don't start from scratch. From previous city trips, they know the difference between trams and underground trains and that sights are often located in the city centre. How would an AI plan a city trip? AI does not gain knowledge, even about cities, from its own experiences. Does it nevertheless have an idea of what is meant by the term "city"? Does AI develop abstract models of reality? Prof. Dr Joschka Boedecker (Computer Science), Prof. Dr Ilka Diester (Biology) and Junior Prof. Dr Monika Schoenauer (Neuropsychology) approach these questions with the help of the concept of internal world models.
World models in people: From experiences to predictions
People make predictions about what to expect in certain contexts based on their experiences. “Over time, we develop an internal world model that represents a possibly distorted mirror of the physical world around us,” says Ilka Diester. This world model relates not only to our spatial imagination, such as finding our way in a foreign city, adds Monika Schoenauer, but also to our behaviour in certain social contexts. Among other things, such world models help us to cope with new situations. “A new situation is not threatening for us because we have an expectation of what will happen,” says Schoenauer.
Cognitive psychologists and neuroscientists can interview participants to uncover their world models. Using neuroimaging approaches, they can also recognise patterns in brain activity that point towards these internal models.
Video interview with Monika Schoenauer.
World models of artificial intelligence: Less flexible than humans
The design of AI systems is modelled on the structure of the human brain: several layers of computing nodes are linked together, much as neurons form a network via their synapses. “Technical systems learn to behave optimally in certain environments and on certain tasks. What they have learnt is then reflected in how the various links between nodes are weighted,” explains Joschka Boedecker. The picture that AI has of the world is therefore fed solely by the context-specific data available to it. Accordingly, AI is less flexible than humans, who often interact with the world spontaneously and playfully, without any specific optimisation objective. How much a comprehensive data basis matters becomes evident when training robots, for example. “Robots that learn to control their joints and motors by trying them out in the ‘real world’ do this much better than those that we only train using simulations,” says Boedecker.
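For readers who want a concrete picture, the following minimal sketch (plain Python/NumPy, entirely hypothetical and not code from the labs mentioned here) shows a tiny network of this kind: everything it learns about a toy task ends up in the two weight matrices that connect its layers of nodes.

```python
# Minimal illustrative sketch: a tiny two-layer network whose "knowledge"
# lives entirely in its connection weights, loosely analogous to synaptic strengths.
# Task and numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict 1 if the sum of the two inputs is positive, else 0.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

# Two layers of "computing nodes": input -> hidden -> output
W1 = rng.normal(scale=0.5, size=(2, 8))   # weights between input and hidden nodes
W2 = rng.normal(scale=0.5, size=(8, 1))   # weights between hidden and output node

def forward(X):
    h = np.tanh(X @ W1)                   # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2)))     # output probability
    return h, out

lr = 0.5
for _ in range(500):                      # simple gradient-descent training loop
    h, out = forward(X)
    err = out - y[:, None]                # prediction error on the toy task
    # Backpropagate the error and nudge the weights; learning changes nothing
    # in the system except these two weight matrices.
    dW2 = h.T @ err / len(X)
    dh = err @ W2.T * (1 - h**2)
    dW1 = X.T @ dh / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

_, out = forward(X)
print("training accuracy:", ((out[:, 0] > 0.5) == y).mean())
```

Changing the task changes nothing but these weights, which is the sense in which such a network's “world model” lives in its connections rather than in explicit rules.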
Video interview with Joschka Boedecker.
The fact that AI's knowledge of the world is fed only by pre-filtered data creates a problem: AI adopts distortions in data sets one-to-one into its world model. “I can rationally explain to a human why, for example, most high earners are male, and that this is not because women are less capable. AI, on the other hand, learns these statistics and then judges it to be fundamentally unlikely that a woman could hold a very well-paid management position,” says Monika Schoenauer. How to solve this problem is an active research topic.
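A toy calculation (hypothetical numbers, for illustration only) makes the mechanism visible: a model that simply absorbs the conditional statistics of a skewed data set reproduces that skew in its predictions.

```python
# Purely illustrative sketch of how a statistical model inherits a skewed data set.
# The 'high earner' scenario and all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical biased historical records: ability is distributed identically
# for everyone, but women were far less likely to end up as high earners.
is_woman = rng.random(n) < 0.5
ability = rng.normal(size=n)
high_earner = (ability > 1.0) & (~is_woman | (rng.random(n) < 0.2))

# A model that just learns the conditional statistics of this data set
p_given_woman = high_earner[is_woman].mean()
p_given_man = high_earner[~is_woman].mean()
print(f"P(high earner | woman) learned from data: {p_given_woman:.3f}")
print(f"P(high earner | man)   learned from data: {p_given_man:.3f}")
# The learned statistics now rate well-paid positions for women as unlikely,
# even though ability was generated identically for both groups.
```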
Learning from one another: Holistic research on internal world models
Despite this shortcoming in flexibility, AI is an important piece of the puzzle when it comes to researching world models. It allows scientists to adjust individual parameters in a targeted way and to examine how the system changes as a result. “In order to understand all facets of internal world models, we work together across disciplinary boundaries,” says Ilka Diester. “While humans can verbalise their thoughts, in animals we instead measure neuronal activity very precisely and recognise patterns in it that depend on the decision made. This allows us to determine physiologically how world models are implemented. AI, in turn, helps us to analyse data and also serves as a model system that we can design from scratch and adapt flexibly to test predictions.”
Video interview with Ilka Diester.
Joschka Boedecker, Ilka Diester and Monika Schoenauer are available for media interviews.
Together with Thomas Brox and Andreas Vlachos, Ilka Diester is a spokesperson of the Cluster of Excellence initiative BrainWorlds; Joschka Boedecker and Monika Schoenauer are principal investigators. For more information on BrainWorlds and the Freiburg Excellence Strategy as a whole, please visit: https://uni-freiburg.de/university/topics-in-focus/excellence-strategy/
Joschka Boedecker
Head of Neurorobotics Lab
University of Freiburg
Tel.: +49 761 203-8040
e-mail: jboedeck@informatik.uni-freiburg.de
Photo: Jürgen Gocke
Ilka Diester
Head of Optophysiology Lab
University of Freiburg
Tel.: +49 761 203-8440
e-mail: ilka.diester@biologie.uni-freiburg.de
Photo: Jürgen Gocke
Monika Schoenauer
Junior Professor of Neuropsychology
University of Freiburg
Tel.: +49 761 203-2475
e-mail: monika.schoenauer@psychologie.uni-freiburg.de
Photo: Jürgen Gocke
Contact:
University and Science Communications
University of Freiburg
Tel.: +49 761 203-4302
e-mail: kommunikation@zv.uni-freiburg.de