Interview with Mathias Goyen, Prof. Dr.med., Chief Medical Officer at GE HealthCare

June 23, 2025 | 6 min read

As artificial intelligence continues to redefine the future of radiology, clinical and industry leaders are focused not just on innovation, but on trust, integration, and outcomes.

In this exclusive interview, we spoke with Mathias Goyen, Prof. Dr.med., Chief Medical Officer at GE HealthCare, about the immediate impact AI is having on radiology workflows, the challenges around adoption, and the essential role of data partners like medDARE in building safe, clinically meaningful AI tools. From workflow optimization and diagnostic accuracy to regulatory compliance and ethical AI development, this conversation offers valuable insights for anyone working at the intersection of healthcare and technology.

Where do you foresee the most immediate impact of AI in radiology over the next 2–3 years? 

The most immediate impact of AI in radiology will be in workflow optimization and triage. AI is increasingly being used to prioritize cases, for example by flagging critical findings such as pneumothorax or intracranial hemorrhage, so that the most urgent studies are reviewed first, improving outcomes. Another near-term impact is AI-assisted image acquisition, particularly in ultrasound, where AI guidance can help standardize acquisition and improve reproducibility, especially among less experienced users.
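To make the triage idea concrete, here is a minimal, hypothetical sketch of how a reading worklist might be reordered by AI flags. The `Study` fields and the 0.9 confidence threshold are illustrative assumptions, not details from any GE HealthCare product.

```python
from dataclasses import dataclass

# Illustrative only: a toy worklist where studies the AI flags for
# critical findings (e.g., pneumothorax) jump ahead of routine reads.

@dataclass
class Study:
    accession: str
    finding: str | None = None   # AI-suggested critical finding, if any
    ai_confidence: float = 0.0   # model confidence in that finding

def prioritize(worklist: list[Study]) -> list[Study]:
    """Read high-confidence critical flags first, then the rest in
    their original (first-in, first-out) order."""
    critical = [s for s in worklist if s.finding and s.ai_confidence >= 0.9]
    routine = [s for s in worklist if s not in critical]
    critical.sort(key=lambda s: s.ai_confidence, reverse=True)
    return critical + routine

worklist = [
    Study("A001"),
    Study("A002", finding="pneumothorax", ai_confidence=0.97),
    Study("A003", finding="intracranial hemorrhage", ai_confidence=0.93),
]
for s in prioritize(worklist):
    print(s.accession, s.finding or "routine")
```

In a real deployment this logic would live inside the PACS/RIS worklist rather than a standalone script, which is exactly the integration point discussed next.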

What do you see as the biggest barriers to widespread AI adoption in radiology today – technological challenges, regulatory constraints, or clinical trust – and how might we address them?

The barriers are multifactorial. Technologically, many solutions are not yet fully integrated into PACS/RIS systems, creating workflow friction. Regulatory frameworks are evolving, but the lag between innovation and approval can slow deployment. However, the biggest hurdle remains clinical trust. Radiologists must understand how an algorithm works, its limitations, and how it performs in diverse patient populations. Transparent validation, robust post-market surveillance, and clear communication of AI performance metrics are essential to building trust.

How can AI be effectively integrated into radiology workflows without disrupting the physician-patient relationship or contributing to burnout among healthcare professionals?

AI should be designed as a supportive tool, not a disruptive one. Seamless integration into existing workflows, rather than adding extra steps or requiring separate platforms, is crucial. By automating repetitive tasks such as measurements, protocoling, or report pre-filling, AI can reduce cognitive load and free radiologists to focus on complex cases and patient communication. Human-centered design principles and regular feedback loops with clinicians are key to ensuring AI tools reduce burnout rather than exacerbate it.

In your view, how might AI improve diagnostic accuracy, particularly in the context of rare diseases or subtle pathologies that are often overlooked?

AI excels at pattern recognition across vast datasets, which can be particularly beneficial in identifying subtle findings or rare conditions that may escape human detection. For instance, in interstitial lung diseases or early-stage cancers, AI can detect faint patterns or changes over time that are easily missed. When trained on high-quality, diverse datasets, AI can serve as a second reader, improving sensitivity without significantly compromising specificity, which is especially valuable in complex or high-volume settings.
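A small worked example helps show why a second reader raises sensitivity: under an OR-rule, a positive case is missed only if both the radiologist and the AI miss it. All counts below are invented purely for illustration.

```python
# Toy numbers on 1,000 exams, 100 of which truly contain the finding.
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Radiologist alone: misses 15 of 100 positives, 10 false alarms.
print(sens_spec(tp=85, fn=15, tn=890, fp=10))  # (0.85, ~0.989)

# AI alone: misses 12 positives, 30 false alarms.
print(sens_spec(tp=88, fn=12, tn=870, fp=30))  # (0.88, ~0.967)

# OR-combination: if only 5 positives are missed by both readers,
# sensitivity rises to 0.95; false positives grow to at most 40.
print(sens_spec(tp=95, fn=5, tn=860, fp=40))   # (0.95, ~0.956)
```

The specificity cost (0.989 to roughly 0.956 here) is the "without significantly compromising" trade-off the answer refers to.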

How critical is data quality and diversity in training AI models for radiology, and what role should healthcare institutions play in supporting these efforts?

Data quality and diversity are foundational. Poorly annotated or homogeneous data leads to biased models with limited generalizability. Healthcare institutions have a duty to enable responsible data sharing by participating in federated learning frameworks, contributing to annotated datasets, and ensuring that data represents a wide range of patient demographics and clinical scenarios. Institutions should also play an active role in governance and oversight to safeguard privacy and ethical standards.
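As a sketch of what "participating in federated learning frameworks" can mean in practice: each institution trains on data that never leaves its walls and shares only model weights, which a central server averages. The toy linear model and synthetic data below are assumptions for illustration, not a description of any specific framework.

```python
import numpy as np

# Minimal federated-averaging sketch (toy linear regression).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_site_data(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + rng.normal(scale=0.1, size=n)

sites = [make_site_data(200) for _ in range(3)]  # three "institutions"
w = np.zeros(2)                                  # shared global model

for _ in range(20):                              # communication rounds
    local_ws = []
    for X, y in sites:                           # local training per site
        w_local = w.copy()
        for _ in range(10):                      # a few gradient steps
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_ws.append(w_local)
    w = np.mean(local_ws, axis=0)                # server averages weights

print("recovered weights:", np.round(w, 2))      # close to [2.0, -1.0]
```

Only the weight vectors cross institutional boundaries; the patient-level arrays `X` and `y` stay local, which is the privacy property the answer points to.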

Trust appears to be a major factor – both from radiologists and clinical leadership. What approaches have you seen succeed in managing change and building confidence in AI tools as part of everyday diagnostic practice?

Change management in AI adoption must be intentional. Successful approaches include involving end users from the start, offering transparent performance data, and implementing AI tools in a stepwise, non-disruptive fashion. Peer-led training, internal champions, and published clinical validation studies go a long way in demonstrating credibility. Equally important is the ability for users to provide feedback and for vendors to act on it. Trust is built through performance, transparency, and collaboration.

From your experience, how do large healthcare organizations such as GE HealthCare approach external partnerships for AI data services like data collection and dataset annotation? What do you look for in a reliable vendor?

At GE HealthCare, we approach partnerships with a focus on clinical relevance, data integrity, and compliance. A reliable vendor must demonstrate not only technical capability but also an understanding of clinical workflows and regulatory requirements. We prioritize partners who offer traceability in data annotation, adhere to privacy standards like HIPAA and GDPR, and show a commitment to ethical AI development. A collaborative mindset and transparency are equally important to ensure mutual success.

How important is collaboration with data partners who understand clinical nuances when it comes to building effective and safe AI tools for radiology?

Clinical nuance is everything in radiology. Subtle distinctions can make the difference between a benign variant and a serious pathology. Data partners who understand these nuances can ensure that annotation is clinically meaningful and aligned with diagnostic standards. Such collaboration leads to the development of AI models that are not just accurate but clinically useful. Misinterpretation at the data curation stage can propagate errors into the model, so clinical expertise is non-negotiable.

Given the sensitivity of medical data, how do you evaluate the trustworthiness and compliance standards of third-party providers contributing to AI model development?

We evaluate third-party providers through a rigorous due diligence process that includes assessments of data security, compliance certifications (like ISO 27001), and adherence to relevant regulatory frameworks. We also review their data governance protocols, consent management practices, and capabilities around anonymization and de-identification. Beyond technical safeguards, we look for a demonstrated culture of responsibility and transparency in how data is handled and protected.
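For readers curious what anonymization and de-identification look like at the file level, here is a minimal sketch using the open-source pydicom library. It blanks only a handful of obvious identifier tags; a compliant pipeline would implement the full DICOM PS3.15 confidentiality profile, and the tag list and file names here are illustrative assumptions.

```python
import pydicom

# Tags blanked here are a small illustrative subset, not a complete
# HIPAA/GDPR de-identification profile.
PHI_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
]

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for tag in PHI_TAGS:
        if tag in ds:
            setattr(ds, tag, "")   # blank the identifying value
    ds.remove_private_tags()       # drop vendor-specific private tags
    ds.save_as(out_path)

# deidentify("study_raw.dcm", "study_deid.dcm")  # hypothetical file names
```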

As someone who likely oversees or influences innovation partnerships, how do you see the value of working with specialized firms like medDARE that focus exclusively on healthcare data collection and annotation?

Specialized firms like medDARE bring domain focus and operational agility that are incredibly valuable in a rapidly evolving space like AI in healthcare. Their ability to scale, deliver high-quality annotations, and incorporate clinical context into datasets accelerates development cycles. When such firms operate under robust compliance frameworks and engage in co-creation with healthcare providers and OEMs, they become key enablers of clinically relevant AI innovation.

What advice would you give to companies like medDARE that work behind the scenes to power AI development in healthcare, especially when collaborating with top-tier institutions like yours?

Understand the clinical problem deeply. Engage early with end users – radiologists, technologists, and data scientists – and treat them as co-developers. Prioritize quality and transparency over speed. Build processes that are auditable and compliant, and be proactive in addressing ethical and bias concerns. Most importantly, maintain a mindset of partnership rather than service provision. Long-term success in this space will be driven by those who integrate technical excellence with clinical insight and trustworthiness.
