Bo Vanhoof edited this page 2026-02-02 11:25:46 +01:00

Welcome to the "Ondersteuning diagnostiek neuropsychiatrische aandoeningen" (diagnostic support for neuropsychiatric disorders) wiki.

This project tackles the gap between today's fragmented, hospital-specific diagnostic workflow for neurodegenerative disorders and a target state where clinicians and patients benefit from a unified, data-driven process. In the current "AS IS", overlapping cognitive, affective, and behavioral symptoms (e.g., in Alzheimer's disease vs. depression) make purely clinical differentiation difficult. Supporting tests—structured neuropsychological assessments, CT/MRI/FDG-PET—are often performed across multiple institutions, fragmenting information over a patient's lifetime and complicating communication, data integration, and the avoidance of duplicate exams.

Our "TO BE" introduces a cross-hospital application that retrieves a patient's data across institutions to present a complete, gap-aware view and embeds an AI-driven decision support (DS) module trained on ~15 years of real-world data. The tool informs differential diagnosis and the added value of additional (potentially invasive) tests, aiming to increase transparency, reduce redundancy, and broaden the evidential base. Data sharing is designed to align with international standards (FHIR, SNOMED CT, DICOM) to ensure secure, scalable interoperability.
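To make the standards alignment concrete, here is a minimal sketch of what a standards-based exchange payload could look like: a FHIR R4 `DiagnosticReport` whose conclusion is coded in SNOMED CT. All identifiers and codes here are illustrative, not taken from the project's actual data model.

```python
import json


def make_diagnostic_report(patient_id: str, snomed_code: str, display: str) -> dict:
    """Build a minimal FHIR R4 DiagnosticReport that references a patient
    and carries a SNOMED CT-coded diagnostic conclusion.

    patient_id, snomed_code, and display are illustrative placeholders.
    """
    return {
        "resourceType": "DiagnosticReport",
        "status": "final",
        # Reference to the patient the report is about.
        "subject": {"reference": f"Patient/{patient_id}"},
        # Coded conclusion, using the SNOMED CT code system URI.
        "conclusionCode": [
            {
                "coding": [
                    {
                        "system": "http://snomed.info/sct",
                        "code": snomed_code,
                        "display": display,
                    }
                ]
            }
        ],
    }


# Illustrative example; the code and display text are placeholders.
report = make_diagnostic_report("example-123", "26929004", "Alzheimer's disease")
print(json.dumps(report, indent=2))
```

A payload in this shape can be validated and exchanged by any FHIR-conformant server, which is what makes cross-hospital retrieval feasible without bespoke per-institution integrations.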

Within this project, we:

1. build a large, multimodal dataset of diagnostic trajectories (primarily UZ Leuven and UPC KU Leuven, enriched with RZ Tienen and Alexianen Zorggroep Tienen);
2. address interoperability via standards and clinical consensus on relevant data;
3. develop state-of-the-art ML models spanning demographics, psychiatric history, neuropsychology, and imaging;
4. evaluate performance with accepted metrics and clinical review;
5. apply explainable AI (XAI) to make model reasoning transparent;
6. test external generalizability on data from other hospitals;
7. assess usability of the DS application; and
8. pave the way for clinical deployment so clinicians can safely leverage cross-hospital information, minimize redundant testing, and reach more accurate, timely diagnoses.
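As a small illustration of the evaluation step, the sketch below computes sensitivity, specificity, and accuracy from binary predictions — metrics commonly reported for diagnostic classifiers. This is a generic example with made-up labels, not the project's actual evaluation protocol.

```python
def binary_metrics(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """Compute sensitivity, specificity, and accuracy for binary labels
    (1 = condition present, 0 = absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "accuracy": (tp + tn) / len(y_true),
    }


# Toy example with fabricated labels, purely for illustration.
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(m)  # sensitivity 2/3, specificity 2/3, accuracy 2/3
```

In a clinical setting, sensitivity and specificity are typically reported alongside overall accuracy because the costs of missed diagnoses and false alarms differ; the clinical-review step in the list above weighs exactly that trade-off.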