
Researchers from the University of Twente, Medisch Spectrum Twente (MST) and Politecnico di Milano conducted a pilot study to explore whether a GPT-controlled social robot can support patients with medical information in a hospital setting. The first results are cautiously positive: patients and caregivers accept the technology. The research focuses on technical, organisational and ethical feasibility.
Healthcare systems are under increasing pressure. Staff shortages and a growing demand for care are straining the accessibility of care. Clear and effective patient communication remains essential, especially in chronic conditions. Digital technology can help with this, but it also raises questions about reliability and trust.
In that context, scientists from the University of Twente, together with healthcare professionals from Medisch Spectrum Twente, investigated whether a GPT-controlled social robot can inform patients about their condition and treatment. The system consists of a physical social robot with a face, facial expressions and speech capabilities. It can answer questions through natural conversation with the patient.
The study indicates that this physical presence was accepted by both patients and caregivers. Patients experienced the conversation as accessible and pleasant. "This should not be interpreted as evidence that care quality improves," emphasises lead researcher Jan-Willem van ’t Klooster. "We investigated whether such a system can function in practice, not whether it already improves care."
The research began with a lab study, but was then tested in the hospital’s daily practice. A total of 21 patients with osteoarthritis and 7 healthcare professionals spoke with the robot. Both patients and healthcare providers rated the system positively in terms of ease of use and acceptance. According to Van ’t Klooster, this is important: "Acceptance is a first step. Then you can investigate whether such a technology really contributes to better information provision, therapy adherence or time savings for healthcare providers."
A crucial part of the research was the way in which AI was used. The GPT technology was not given free access to the internet, but was only allowed to use information from pre-approved, doctor-validated medical websites. In this way, the researchers wanted to limit the risk of incorrect or fabricated answers (hallucinations).
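The article does not describe the implementation, but the principle of restricting a language model to pre-approved sources can be sketched as follows. This is a minimal, illustrative example, assuming a hypothetical whitelist and a simple retrieval-then-prompt setup; the domain names, function names and refusal wording are placeholders, not the researchers' actual system.

```python
from urllib.parse import urlparse

# Hypothetical whitelist of doctor-validated sources (placeholder domains).
APPROVED_DOMAINS = {"approved-clinic.example", "validated-health.example"}

def is_approved(url: str) -> bool:
    """Return True only if the URL's host is an approved domain or a subdomain of one."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

def build_grounded_prompt(question: str, sources: list[tuple[str, str]]) -> str:
    """Assemble a prompt that may cite only passages from approved sources.

    `sources` is a list of (url, passage) pairs; anything outside the
    whitelist is discarded before it can reach the model.
    """
    approved = [(u, p) for u, p in sources if is_approved(u)]
    if not approved:
        # Refuse rather than let the model answer from its own weights,
        # which is where hallucinated answers would come from.
        return ("Answer: I don't have validated information on that. "
                "Please ask your care team.")
    context = "\n".join(f"[{u}] {p}" for u, p in approved)
    return ("Answer ONLY from the passages below. If they do not contain "
            f"the answer, say so.\n\n{context}\n\nQuestion: {question}")
```

The design point this illustrates is the one Van 't Klooster makes: the boundary is enforced before generation, so control over the information the model can draw on stays with healthcare professionals.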
"The debate is often about whether you should use AI in healthcare," says Van ’t Klooster. "We show that it’s mainly about how you set it up. By setting clear boundaries, control remains in the hands of healthcare professionals."
The project was very much a team effort, bringing together expertise from behavioural sciences and clinical practice. In addition to researchers from the University of Twente, healthcare providers, designers and international partners were also involved. "It is precisely this collaboration that makes this kind of research possible," says Van 't Klooster. Follow-up research remains necessary, including on knowledge transfer and long-term use; a study into the appropriate language level for the robot is currently under way.
Jan-Willem van 't Klooster is the scientific director of the BMS Lab and an associate professor. The research was carried out by the University of Twente in collaboration with Medisch Spectrum Twente and Italian research partner Politecnico di Milano. The researchers published their results in an article entitled 'A GPT-reinforced social robot for patient communication: a pilot study' in the scientific journal Frontiers in Digital Health.
DOI: 10.3389/fdgth.2025.1653168
