NWO grants for AI research on privacy-preserving cancer studies and virtual harassment
Two projects by Utrecht University researchers are each receiving a grant from the National Growth Fund program AiNed. This funding will propel promising, innovative, and bold initiatives in the field of artificial intelligence, addressing pressing needs in healthcare and virtual reality.
Julian Frommel, Assistant Professor in the Interaction/Multimedia group of the Faculty of Science, will conduct research on harassment in virtual reality environments. His colleague Wilson dos Santos Silva, newly appointed at the same faculty, will focus on improving AI models that can learn from patient data from different hospitals without breaching privacy.
Robust AI models for cancer research
Dos Santos Silva joined Utrecht University just recently, as Assistant Professor of Explainable AI for Life in the group of Professor Sanne Abeln. “It’s wonderful to obtain my first Dutch grant within six months of starting my Assistant Professor position,” says Dos Santos Silva, who also works at the Netherlands Cancer Institute.
With the grant of 80,000 euros, he will focus on making the AI models used in cancer research in the Netherlands more robust. Currently, this type of research relies on collecting medical data, such as MRI scans, from different hospitals, which is then used to train AI algorithms at a central location.
“The problem is that, although the data is pseudonymized, you can still derive information about the patient from the biometric characteristics in the images,” says Dos Santos Silva. This is why the focus is shifting to decentralized learning methods, in which AI models are trained locally at each hospital. Hospitals then share only information about the models, such as their learned parameters, not the data itself.
One challenge that arises is that the data varies greatly from hospital to hospital, and therefore so do the local models contributing to the aggregated final model. This leads to uncertainty in the model’s predictions.
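To make this concrete, here is a minimal sketch of one round of such decentralized (federated) training, assuming a simple linear model represented by a NumPy weight vector; only the model weights leave each hospital, never the patient data. The function names and the synthetic data are illustrative and not drawn from the project itself.

```python
# Minimal sketch of one federated learning round (illustrative, not the project's actual code).
# Each hospital trains on its own data and shares only model weights with the coordinator.
import numpy as np

def local_update(global_weights, local_X, local_y, lr=0.01, epochs=5):
    """Train a simple linear model locally; patient data never leaves this function."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = local_X @ w
        grad = local_X.T @ (preds - local_y) / len(local_y)
        w -= lr * grad
    return w  # only the updated weights are shared

def aggregate(weight_list):
    """The central coordinator averages the hospitals' weights (plain FedAvg-style mean)."""
    return np.mean(weight_list, axis=0)

# Hypothetical hospitals with locally held, synthetic data
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = aggregate(local_weights)
```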
Uncertainty in predictions
Dos Santos Silva: “For example, if an AI model is trained exclusively on data from the Netherlands Cancer Institute, it will be primarily exposed to severe cancer cases and will not have data on healthy and mild cases. However, during model aggregation, this model, which is uncertain about less severe cases, is given the same weight as a model trained on a more comprehensive dataset with a full spectrum of patients, which has a more accurate representation of the overall disease distribution.”
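A rough illustration of the aggregation issue Dos Santos Silva describes: with equal weights, a model trained on a narrow patient population pulls the average as strongly as one trained on a broad one. Weighting by local sample size, as in standard federated averaging, is one simple heuristic shown below; it is not the uncertainty-aware approach the project will develop, and all numbers here are invented purely for illustration.

```python
# Illustrative only: equal vs. sample-size-weighted aggregation of local model weights.
import numpy as np

local_models = {
    "specialised_centre": np.array([0.9, 0.2]),   # trained mostly on severe cases
    "general_hospital_a": np.array([0.4, 0.5]),   # broader patient mix
    "general_hospital_b": np.array([0.5, 0.4]),
}
samples = {"specialised_centre": 200, "general_hospital_a": 2000, "general_hospital_b": 1800}

# Equal weighting: the narrow model counts as much as the broad ones.
equal = np.mean(list(local_models.values()), axis=0)

# Sample-size weighting: hospitals with more (and more varied) data dominate the average.
weights = np.array([samples[k] for k in local_models]) / sum(samples.values())
weighted = sum(w * m for w, m in zip(weights, local_models.values()))

print("equal-weight aggregate:   ", equal)
print("sample-weighted aggregate:", weighted)
```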
Dos Santos Silva’s research aims to make AI algorithms more robust so that they can take into account the different types of data they have seen, ultimately enhancing cancer care by supporting more accurate clinical decisions.
Harassment in the virtual world
Julian Frommel also receives 80,000 euros for his AI research. Frommel and his colleagues are delving into the world of social extended reality (XR). These immersive virtual environments are becoming increasingly popular, for example in gaming, but also for meetings and social interactions. However, users also increasingly face harassment. Frommel: “People can be confronted with insults, discrimination, or threats, but also with the invasion of someone’s personal space or virtual groping. This is a new phenomenon and very specific to these virtual environments.”
Currently, there are human ‘moderators’ to whom such behavior can be reported. These moderators assess whether the complaint is justified and can impose penalties, such as removal from the platform. “We want our AI models to learn to understand social interactions between people. If we know how that works, the AI models can help these moderators by detecting potentially inappropriate behavior and prioritizing what to review when there are many complaints.”
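As a rough sketch of the kind of moderator support described here, consider a model that scores reported interactions and sorts the review queue so the most likely violations are seen first. The classes, example reports, and hard-coded scores below are placeholders and are not drawn from Frommel’s actual work.

```python
# Illustrative sketch: using a model's severity scores to prioritize moderation reports.
# The severity_score field stands in for the output of a trained model of social XR interactions.
from dataclasses import dataclass

@dataclass
class Report:
    report_id: int
    description: str
    severity_score: float  # would come from a trained model; hard-coded here for illustration

def prioritize(reports):
    """Return reports sorted so the most likely violations are reviewed first."""
    return sorted(reports, key=lambda r: r.severity_score, reverse=True)

queue = [
    Report(1, "avatar repeatedly entering personal space", 0.82),
    Report(2, "mild verbal disagreement", 0.15),
    Report(3, "threatening language in voice chat", 0.94),
]

for r in prioritize(queue):
    print(f"review #{r.report_id} (score {r.severity_score:.2f}): {r.description}")
```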
The ultimate goal of the project is to limit this kind of dangerous behavior. “This is enormously important because such behavior is also harmful in the virtual world. Users can experience anxiety, low self-esteem and distress. Virtual environments must feel safe and inclusive for everyone.”
Frommel’s research will be conducted in the AI & Media Lab, part of the Utrecht AI Labs. In these labs, researchers from different disciplines, along with experts from public and private organizations, governments and other knowledge institutions, are developing new knowledge and applications in the field of artificial intelligence.
About the NWO grant
The NGF AiNed XS Europa grant from NWO is part of the national AI research agenda AIREA-NL. A strong AI knowledge and innovation base is of great importance to the Netherlands, and a key aspect of this is how well Dutch researchers are connected internationally, especially within Europe. All ten projects awarded under this grant are therefore collaborations between at least two European partner organizations.
Julian Frommel’s project is a collaboration with the Technical University of Darmstadt (Germany). Wilson dos Santos Silva will work together with INESC TEC, a private non-profit research association in Portugal.
Source: Utrecht University