Neel Mehta: Barriers preventing healthcare innovation from scaling beyond pilots in 2026
Neel Mehta is a MedTech investor and strategy leader with extensive experience across digital health platforms, healthcare systems, and large-scale innovation programmes. His work is rooted in implementation: the stage where most healthcare technologies either gain real traction or quietly disappear.
As healthcare enters 2026 with unprecedented levels of AI adoption, investment, and regulatory scrutiny, Mehta argues that the industry continues to repeat a fundamental mistake: treating technology as innovation. In this interview, he explains why many solutions fail beyond pilot stages, where AI truly adds value, and what must change for healthcare innovation to reach everyday care.

1. From your perspective, what do most people misunderstand about “healthcare innovation” today?
Over the past decade, healthcare has seen rapid growth in digital health investment, platform launches, and technology adoption. Yet many core challenges in care delivery, from workflow strain and clinician burnout to patient disengagement, remain largely unchanged. This contrast highlights a common misunderstanding: the belief that innovation is the same as technology. Too often, healthcare innovation is reduced to new software, devices, or algorithms. But healthcare does not improve simply because a new tool exists. It improves when people change how they work, communicate, and make decisions.
Inside hospitals and health systems, innovation succeeds or fails at the human level. Clinicians, administrators, and patients operate under constant pressure. Industry experience shows that many well-funded solutions fail to move beyond pilot stages, often within the first one to two years, because they do not make daily work clearer or easier. When friction persists, technical sophistication quickly loses relevance.
Another misconception is that better technology automatically produces better outcomes. In practice, outcomes improve when patients feel informed and involved, and when clinicians feel supported rather than overwhelmed. Technology can enable that environment, but it cannot impose it. Healthcare innovation ultimately depends on trust, understanding, and shared decision-making.
2. Why do you think technology has become the dominant lens through which the future of healthcare innovation is discussed?
In healthcare, the things we talk about most often are the things we can see, measure, and present, and technology fits that mould perfectly. From my experience building and implementing healthcare solutions, technology dominates the conversation because software, devices, and dashboards are tangible. They can be demonstrated, funded, marketed, and tracked through adoption metrics. Culture, behaviour, and trust are far harder to quantify, so they tend to receive less attention, even though they shape outcomes more deeply.
There is also a natural tendency to simplify complex healthcare innovation challenges into technical problems. If the issue is framed as a data gap, more data feels like the answer. If efficiency is the concern, automation appears to be the solution. But healthcare rarely works in straight lines. Many of the most persistent barriers are human: resistance to change, misaligned incentives, and communication breakdowns across teams and institutions.
Technology also creates a sense of progress. It feels like movement. But movement is not the same as impact. I’ve seen organisations invest heavily in tools without addressing how those tools fit into daily workflows or how people are expected to use them. Technology should be part of the discussion, but not the centre of it. Real progress happens when technology supports human needs rather than trying to redefine them. That’s where meaningful change begins.
3. At what point does innovation stop helping care and start complicating it?
When you spend time with clinicians, you quickly see how thinly they are already stretched. Healthcare innovation stops helping care when it adds friction instead of clarity. If a new solution requires more documentation, more clicks, or more systems to manage, it becomes a burden rather than a benefit.
Healthcare is already complex. Patients come with uncertainty, emotion, and urgency. Clinicians balance safety, speed, and judgment every day. Innovation should reduce that load, not add to it. When technology pulls attention away from patients and toward screens, something fundamental is lost.
Another red flag is when innovation focuses more on metrics than meaning. If success is measured by usage statistics rather than understanding or outcomes, care becomes transactional. Patients notice when conversations feel rushed or scripted.
Healthcare innovation should simplify decisions, improve communication, and create space for human connection. When it disrupts workflows without clear value, or when it replaces listening with logging, it complicates care. At the end of the day, innovation only helps when it strengthens the relationship between patients and providers.
4. Why do so many promising digital health or MedTech solutions fail to gain real-world adoption?
Despite the rapid growth in digital health investment over the last decade, many solutions fail to move beyond pilot deployment. What I’ve seen repeatedly in healthcare is that failure rarely comes from bad technology. It comes from poor fit. Many digital health and MedTech solutions are designed around theoretical workflows rather than how care is actually delivered, creating a gap between design intent and daily use.
Clinicians work under intense time pressure, with little tolerance for added steps or complexity. Patients often struggle with understanding, adherence, and follow-through once they leave clinical settings. When a solution doesn’t respect these realities, adoption becomes difficult. Even clinically validated tools can fail if they disrupt workflows, increase cognitive load, or expect behaviour change without adequate support.
Institutional inertia adds another layer of challenge. Healthcare systems are highly regulated and operationally complex. Decisions involve leadership, compliance, IT, finance, and frontline teams, all with different priorities and risk thresholds. Without alignment across these groups, promising solutions stall despite early enthusiasm.
Over time, I’ve learned that adoption happens when solutions meet people where they are. Tools gain traction when they save time, reduce confusion, and integrate naturally into existing systems. Technology earns trust by being useful and reliable in real-world care.
5. In your experience, what matters more for impact: invention or implementation?
If you look at healthcare over the past ten to fifteen years, the biggest gains have rarely come from breakthrough inventions alone. They have come from solutions that were implemented patiently, repeatedly refined, and supported over time. In my experience, implementation matters far more than invention. Healthcare innovation is not short on ideas. It is short on the ability to execute those ideas consistently in real clinical environments.
Implementation forces reality into the conversation. Questions around workflow fit, ownership, maintenance, and long-term support cannot be deferred. Industry patterns show that many solutions demonstrate promise in early pilots but struggle to sustain usage beyond the first one to two years because these practical considerations were never fully addressed. Without clear accountability and operational ownership, even strong concepts fade once initial momentum slows.
Proving that something works in isolation is only the starting point. Real impact appears when a solution becomes part of everyday care rather than an additional layer clinicians must manage. That transition requires training, leadership commitment, and time for trust to develop.
Invention creates possibility, but implementation creates change. In healthcare, where decisions directly affect lives, lasting impact matters far more than novelty or technical ambition.
6. Where do you see AI genuinely adding value in healthcare innovation and where is its role currently overstated?
Over the last few years, as AI tools have moved from research pilots into clinical environments, one pattern has become clear to me: AI creates value when it reduces complexity rather than adding to it. Healthcare teams are dealing with growing volumes of data, and AI genuinely helps when it supports pattern recognition, information synthesis, and decision support without demanding constant attention.
When used well, AI saves time and sharpens focus. It brings relevant insights forward, flags risks early, and reduces the effort required to navigate fragmented records. That shift matters, especially as clinician workload and documentation demands have steadily increased over the past decade. In these situations, AI supports better care by staying out of the way.
Where AI’s role is often overstated is when it’s framed as a replacement for judgment, empathy, or accountability. Healthcare is not just a data problem. It’s a human experience shaped by uncertainty, trust, and context: elements that algorithms do not understand.
AI works best in the background, reinforcing good decisions rather than trying to make them. When positioned as “the solution,” expectations become unrealistic. Its real value emerges when it improves efficiency, strengthens communication, and supports trust. Technology can assist care, but people must remain at the centre.
7. What are the key risks of treating AI as a standalone solution rather than part of a broader system?
One thing I’ve learned working in healthcare is that shortcuts rarely work, especially when they involve technology. Treating AI as a standalone solution creates a dangerous sense of false confidence. When people begin to rely on outputs without understanding context or limitations, risk increases. Healthcare decisions aren’t abstract; they affect real people at vulnerable moments. Removing human judgment from that equation introduces uncertainty that technology alone can’t resolve.
Another major risk is fragmentation. AI tools that are designed in isolation often fail to integrate smoothly into existing workflows. Instead of simplifying care, they add more systems to manage, more alerts to interpret, and more cognitive burden for clinicians. Over time, this leads to fatigue and increases the likelihood of errors, the opposite of what innovation is meant to achieve.
There’s also a cultural risk that’s easy to overlook. When organisations believe AI will fix structural problems on its own, they delay addressing deeper issues like workflow design, communication gaps, and misaligned incentives. Technology becomes a distraction rather than a catalyst for meaningful change.
AI works best when it exists within a broader, human-led system. Accountability, ethics, and clinical judgment must remain central. Innovation succeeds only when it strengthens the system as a whole, not when it tries to replace it.
8. How should responsibility, trust, and ethics be addressed in AI-driven healthcare?
From my experience, responsibility in healthcare must always remain human. AI can support decisions, surface insights, and improve efficiency, but it cannot own outcomes. Patients deserve to know that a real person, a clinician or care team, is accountable for the decisions affecting their health. That clarity is essential not just legally, but ethically.
Trust is built through transparency and communication. Patients and providers need to understand what AI is doing, what it isn’t doing, and how its recommendations are used in care decisions. When technology feels like a black box, trust erodes quickly. People don’t expect healthcare systems to be perfect, but they do expect honesty. Being upfront about limitations is just as important as highlighting capabilities.
Ethics requires intention from the start. AI must be designed to respect patient dignity, privacy, and autonomy. It should help reduce disparities in care, not reinforce existing biases. That means thoughtful data selection, continuous monitoring, and strong human oversight.
Ultimately, AI should operate within a governed, human-led framework. When responsibility is clear and ethical considerations are embedded early, AI can enhance care. Without that foundation, even well-intentioned tools can do real harm.
9. Why is healthcare innovation uniquely resistant to scaling compared to other industries?
Healthcare’s resistance to scaling innovation is rooted in how the system has evolved over time. Unlike industries built around speed and optimisation, healthcare has been shaped over decades around safety, accountability, and risk avoidance. Every process, regulation, and approval pathway exists because mistakes carry irreversible human consequences. That historical foundation makes caution not a flaw, but a necessity.
Another limiting factor is fragmentation that has developed alongside this growth. Healthcare is not a single operating system. It is a collection of hospitals, clinicians, payers, regulators, and vendors that have grown independently, often with conflicting incentives. A solution that improves outcomes in one part of the system can create operational or financial strain elsewhere. Aligning these moving parts is slow by design.
Cultural inertia also plays a role. Clinical practice is built on training models, hierarchies, and norms that have been reinforced across generations. These structures protect patients, but they also make large-scale change uncomfortable and deliberate. Approaches that work in software or retail simply don’t translate when trust and clinical judgment are central.
Innovation can scale in healthcare, but only when it respects this history. Sustainable progress prioritises patient outcomes, integrates carefully into workflows, and earns trust over time. In healthcare, impact is measured in years, not quarters.
10. What structural or cultural barriers prevent healthcare systems from adopting new solutions effectively?
One of the biggest structural barriers to effective adoption is rigid workflow design. Many healthcare systems are organised around compliance, billing, and documentation rather than the realities of care delivery. When new solutions add steps instead of reducing friction, they are quickly seen as obstacles, regardless of how advanced or well-intentioned the technology may be. If innovation does not fit naturally into existing clinical workflows, it struggles to gain traction.
Documentation burden compounds this challenge. Clinicians already spend a significant portion of their time interacting with systems rather than patients. When new tools increase clicks, data entry, or cognitive load, they are viewed as liabilities instead of support. Adoption slows not because clinicians resist change, but because they are protecting time and focus for patient care.
Culturally, healthcare is built on risk minimisation. That mindset is essential for safety, but it can also make change feel threatening. Without visible leadership backing and clear guidance, innovation is perceived as risky rather than helpful. People hesitate to alter routines that feel safe, even when those routines are inefficient.
Misaligned incentives further limit progress. If systems do not reward better outcomes, improved experiences, or long-term value, motivation to adopt new solutions remains low. Successful adoption requires more than tools. It depends on trust, alignment, and sustained support through change.
11. What does a healthcare innovation ecosystem look like in practice?
From my experience working across startups, health systems, and patient communities, a healthcare innovation ecosystem is fundamentally collaborative. It’s not built around a single organisation, product, or stakeholder. Instead, it brings patients, clinicians, technologists, researchers, and institutions into alignment around shared goals. When those groups operate in silos, innovation slows. When communication is open and insights are shared, progress becomes possible.
In a healthy ecosystem, innovation isn’t judged by how advanced a solution looks, but by whether it’s actually used and trusted. A tool isn’t successful because it’s impressive on paper; it’s successful because it fits into care delivery and improves outcomes. That’s why implementation is valued just as much as invention.
Patients are not treated as an afterthought. Their lived experiences shape design decisions, feedback loops, and ongoing improvement. Clinicians are supported with tools that respect their time and clinical judgment, rather than adding friction. Trust and learning are central. Feedback is encouraged. Failure isn’t hidden; it’s used to improve. When collaboration replaces competition and outcomes matter more than optics, innovation becomes sustainable. That’s what a healthy ecosystem looks like in practice.
12. How can healthcare innovation be designed around humans rather than systems or processes?
To me, designing healthcare innovation around humans requires a fundamental shift in perspective. Instead of asking how people can adapt to systems, we need to ask how systems can adapt to people. Many healthcare environments are still shaped by legacy processes and operational convenience, which often overlook how care is actually experienced by patients and delivered by clinicians.
Human-centred healthcare begins with respect for time, understanding, and judgment. Patients need clear explanations, realistic expectations, and a meaningful role in decisions about their care. Clinicians need systems that support their expertise and reduce unnecessary administrative effort, allowing them to focus on what matters most: patient care.
Equally important is alignment around outcomes that matter to patients. Care works best when people feel heard, respected, and involved in decisions affecting their health. Human behaviour does not change simply because new technology is introduced. It changes when individuals feel understood, trusted, and empowered within the system.
Over the next decade, progress in healthcare innovation should strengthen human connection rather than dilute it. When tools help clinicians communicate more clearly and help patients better understand their options, care becomes more effective. Simplicity is essential. Even powerful technology loses value if it is difficult to use, interrupts workflows, or creates friction during critical moments of care.
Designing for humans also means recognising that care extends beyond clinical settings. Health outcomes depend on what happens after the visit. Ongoing communication, education, and follow-up are essential. When systems serve people rather than processes, healthcare innovation becomes more humane, trusted, and effective.
13. Is “patient-centric innovation” realistically achievable today, or mostly a theoretical ideal?
From my perspective, patient-centric innovation is absolutely achievable today, but only when organisations treat it as a responsibility rather than a philosophy. It becomes real when patients are involved early, not as an afterthought or mere end users, but as active partners in shaping design, feedback, and improvement. When patient input is considered essential rather than optional, innovation stays grounded in real-world needs.
Patient-centricity also depends on how seriously we take continuity of care. Healthcare does not end at discharge or at the close of a consultation. Patients need ongoing communication, education, and reassurance as they manage their health beyond clinical settings. Technology can help enable that continuity, but intention and consistency are what make it effective.
Too often, patient-centric healthcare innovation is discussed as an ideal instead of practised as a discipline. Many systems speak the language but fail to operationalise it. When organisations invest in patient advisory groups, accessible tools, and clear, respectful communication, patient-centric care becomes tangible rather than theoretical. It isn’t an abstract concept. It’s practical, achievable, and already happening in pockets. But it requires systems to act, not just talk. When patients feel heard, informed, and respected, innovation begins to deliver real and lasting value.
14. What do you believe healthcare truly needs more of over the next decade, beyond innovation?
When you look beyond the headlines and the constant focus on healthcare innovation, it becomes clear that healthcare needs cultural progress more than additional tools. Technology can support care, but it cannot replace trust, communication, or shared purpose. Without a cultural shift, even the most advanced systems struggle to deliver real value in everyday clinical settings.
Healthcare needs environments that encourage collaboration rather than competition, learning rather than blame, and engagement rather than compliance. Clinicians must feel supported when adapting to change, not exposed to risk or added burden. When teams are given time, psychological safety, and clarity of purpose, improvement follows more naturally and sustainably.
15. If you had to define success in healthcare innovation, what would it look like?
In healthcare, defining success in innovation requires looking past launches, adoption numbers, and technical sophistication. Those measures may signal activity, but they do not reflect impact. True success becomes evident only when innovation improves the lived experience of care for both patients and clinicians.
When innovation succeeds, patients feel informed, respected, and confident in the decisions being made about their health. At the same time, clinicians experience systems as supportive rather than burdensome, allowing them to focus on care instead of navigating complexity. These outcomes matter far more than how advanced a solution appears on paper.
Successful innovation integrates seamlessly into care delivery. It does not depend on constant workarounds, extra steps, or exceptional effort to sustain. Instead, it becomes part of routine practice, quietly supporting decision-making and communication. Over time, this kind of integration earns trust, rather than relying on mandates or incentives to drive adoption.
Most importantly, meaningful innovation strengthens relationships. When communication improves, continuity of care becomes reliable, and understanding deepens between patients and providers, better outcomes follow. Technology recedes into the background and serves its purpose. In healthcare, success is defined not by novelty or speed, but by lasting, human impact in real-world care.
Healthcare innovation in 2026 continues to generate increasingly sophisticated technologies. Their long-term value, however, is determined not in laboratories or pitch decks, but inside institutions, workflows, and human behaviour.
This conversation suggests that healthcare innovation does not fail because of insufficient intelligence or investment, but because systems struggle to absorb change at the pace it is created. Healthcare innovation reaches patients only when organisations are structurally prepared to carry it.