
Anastasia Budkina:
Sailesh, thank you for taking the time to have this interview with us. The first question is quite general – where do you see the strongest influence of AI right now? I’d love to hear your perspective on both radiology and endoscopy, since you have experience in both.
Sailesh Conjeti:
That’s an excellent question, and one that keeps evolving. If we take a snapshot of where we are today: a few years ago the biggest opportunities seemed to lie in the detection and characterization of diseases in radiology. That continues to be a strong area of innovation, especially as we bring similar AI capabilities into endoscopy.
At the same time, with the rise of generative AI and ambient voice technologies, we also see a lot of growing opportunities in workflow optimization. AI is helping optimize clinical, operational, and financial processes — with operational AI seeing the highest adoption today. You see strong momentum in areas like AI scribe technology and revenue cycle management, especially around coding and billing. If AI can help organizations code more efficiently, it essentially pays for itself. Over time, a combination of AI for clinical decision support and workflow AI will drive the biggest impact for clinical and operational efficiency in tandem.
Anastasia Budkina:
On your LinkedIn, you mention your goal of seamlessly integrating AI into healthcare. What do you think are the biggest barriers to widespread adoption?
Sailesh Conjeti:
One big misconception is expecting AI to perform equally well across all contexts. That’s simply not realistic. Instead, it’s about knowing when AI works — and when it doesn’t.
Transparency and explainability are key. We need systems that allow clinicians to trust but also verify AI results. There will always be false positives, false negatives, and bias in training data. That’s why it’s important to invest in model monitoring, especially for out-of-domain cases.
Another barrier is generalization. Many AI tools are built generically but need extensive local testing. Your hospital might use different acquisition protocols or compromise on image resolution for throughput. These variables affect AI performance significantly.
There’s also a huge opportunity to co-create solutions with vendors — especially if you have a strong use case and the data. That’s where companies like medDARE come in — helping build datasets that allow models to generalize.
On the operational side, integration remains a challenge. We talk a lot about workflow fit — how do we reduce clicks, make AI less visible, and more naturally embedded in clinical routines? That’s the direction we need to explore further.
Anastasia Budkina:
How much data is “enough” for training AI?
Sailesh Conjeti:
It really depends: on the intended use, on the difficulty of the problem, and on your acceptance criteria. You might reach clinical acceptance with a relatively modest dataset – especially if you augment it with standardized public datasets.
Also, regulators are introducing frameworks like the Predetermined Change Control Protocol (PCCP), which allows vendors to continuously update models with new data. So the idea of a static AI model is fading. Thanks to cloud deployments, models should evolve constantly.
Let’s keep the training data coming — because it’s never really “enough.”
Anastasia Budkina:
Where do you see the role of synthetic data? When does real-world data become necessary?
Sailesh Conjeti:
Synthetic data has evolved rapidly, especially with generative models inspired by vision transformers and Stable Diffusion. If these models can simulate the physics of image generation – which is critical in medical imaging – then they can be powerful training tools.
That said, synthetic data is best suited for training and stress testing. Validation, on the other hand, should still rely on real-world data to reflect clinical conditions. But even for validation, synthetic data can help simulate corner cases or rare variations.
Anastasia Budkina:
What AI tools do you personally use — whether for work or daily life?
Sailesh Conjeti:
At work, I rely heavily on the Deep Research tool from ChatGPT. It’s great for market research, literature reviews, or analyzing product feedback. I also use tools like Perplexity Labs for similar tasks.
NotebookLM from Google is another favorite — it turns long academic papers into podcasts, which I can listen to while walking my dog. If something seems worth a deep dive, I’ll go old school: print it and review it with a highlighter.
In my personal time, I enjoy experimenting with n8n for automations. I recently tried to automate booking a trip to Switzerland – didn’t quite succeed, but it was a fun project. I’m also playing around with Replit for vibe-coding projects.
Anastasia Budkina:
I relate to that. But I do still appreciate when messages are written by people — not AI. It feels more personal.
Sailesh Conjeti:
Absolutely. You can’t blame someone for using the right tools — but I agree, AI-generated content can feel impersonal. I used to write all my LinkedIn posts myself, and now it’s a mix. Platforms like LinkedIn are even starting to limit AI-generated content in their algorithm.
It’s made me think more about reducing over-reliance on AI. I recently picked up a book called Irreplaceable by Pascal Bornet — about standing out in an age of AI. I haven’t started it yet, but it’s next on my list.
Anastasia Budkina:
Thank you again, Sailesh — it was a pleasure speaking with you, and I really appreciate you taking the time.