Digital Health or Digital Snakeoil: How to evaluate solutions in 9 questions
Until recently, there were no clear criteria for distinguishing a good digital health vendor from a bad one (or an unscrupulous one). This article provides 9 questions to ask that will differentiate between snakeoil and best-in-class.
Co-authored by Grace Lethlean, Co-Founder of ANDHealth, and Hugo Rourke, Co-Founder of Perx Health.
Unfortunately, it’s too late for Walgreens.
In 2016, Walgreens rolled out Theranos blood testing to 40 stores, only to discover that the tests were inaccurate and unproven. Similarly, it’s too late for the 8,000 consumers who bought themselves the Scanadu Scout. The hype of a sleek “Tricorder” that would measure heart rate, blood pressure and body temperature was no match for the regulatory hurdles of the FDA.
These scandals were unravelling around the same time ANDHealth and Perx Health were founded. At that time, our fellow innovators, doctorpreneurs and enterprise customers were all figuring out, on the fly, what makes a digital health company good enough or, better, truly great. This is no easy task, as the commercialisation of digital health products is inherently complex: there are requirements across clinical research, real-world ROI, data privacy & security, and regulation, plus considerations for implementation in complex workflows with multiple stakeholders.
The historical absence of clear evaluation criteria has slowed the adoption and health impact of Digital Health in two ways:
- Without clear criteria there is no available yardstick for digital health buyers to measure solution vendors against. Without a common understanding of what’s bad, good, or great, it is far easier for favouritism, glamorous marketing or incumbency to influence decision-making. This applies equally to the various digital health buyers, including insurance executives, healthcare administrators, and pharmaceutical leaders.
- Without clear criteria there is no roadmap for potential digital health innovators to follow and measure their progress. Without this roadmap, it’s too easy for innovators to develop their solutions in the wrong sequence, overdeveloping some areas like commercial pilots while under-developing others like security or clinical evidence.
Now, the published national standards have caught up.
There are some great resources available that create clear criteria for our industry with standards and frameworks published by Public Health England, the American Medical Association, Australia’s RACGP as well as ANDHealth (Australia’s digital health commercialisation initiative), and the UK’s National Institute for Health and Care Excellence. Some of these resources are general across digital health solutions and some address specific sub-segments.
In synthesizing these standards, we are going to focus on Digital Therapeutics and other patient-facing digital health programs, such as those targeting condition management. Hugo has written previously about these terms alongside other digital health sub-segments and why it's important to be specific.
9 simple questions to identify a good digital health vendor
- What technology does the product use?
- How does the product work?
- Does the clinical evidence base demonstrate effectiveness?
- Has effectiveness been validated in the real world?
- What evidence is there for successful adoption?
- What support will the vendor provide for implementation?
- How does the vendor manage privacy & security?
- Does the vendor meet and understand relevant regulations?
- How invested is the vendor in long term partnership?
_________________________________________________________________
PRODUCT
What technology does the product use?
Bad: any product that has been built on niche technology yet to reach substantial adoption. If you have never used the underlying technology yourself, it’s unlikely your mass-market of clinicians or patients will adopt it any time soon (sorry Google Glass and anything blockchain).
Good: a product that is built on technology that is both usable and already adopted. The clear trend over the last decade has been away from single-purpose proprietary technology to multi-purpose, industry-standard platforms, such as patient smartphones or all-purpose medical iPads. This means that adopters are likely to have the right technology at hand and already be familiar with its use.
Great: technology that is already used by the target user in the intended context. By building on technology that is already in use, and on users’ existing familiarity with the platform, you can minimise disruption to workflows, accelerate adoption and leverage context-specific triggers to act.
The AMA suggests testing technology with a patient advocate or staff member unfamiliar with the project as a good litmus test. (AMA)
It’s easy to be Good but then go slightly wrong getting to Great. We have seen all of the following: expecting a doctor to use an iPad in theatre; requiring a patient to download an app for a one-off survey; or reminding someone to complete a glucose test via a notification on their glucometer (it’s the ones without their glucometer at hand who most need reminding!).
How does the product work?
Bad: the solution invokes broad mechanisms like “education”, “gamification” or “coaching” without any specificity or empirical proof of effectiveness for patients or doctors.
Good: the vendor is able to show that the solution’s mechanisms of action are appropriate and consistent with recognised behavioural theory and models. However, just because a theory is peer-reviewed and published does not mean it is actually effective. Popular theories today may be proven wrong in the future or fall out of favour (just ask Freud’s psychoanalysts). When it comes to behaviour change, most models are still just theory.
The National Institute for Health & Care Excellence reviewed the 4 most popular health behaviour change models. The review found that these models have only weak explanatory power for retrospectively observed behaviour change. Worse, when used prospectively for designing interventions, the evidence of effectiveness is “mixed” and “very limited”. They conclude that interventions based on these models are “no more likely to be effective in achieving outcomes than alternative interventions”. (NICE)
Great: vendor’s solution is grounded in empirically-supported techniques that are both widely published and repeatedly proven effective. Empirical science is far more reliable than scientific theory.
For designing behaviour change interventions (whether for patients or care providers), the National Institute for Health Research’s Behaviour Change Techniques Taxonomy is a great place to start, with 93 empirically-supported techniques.
_________________________________________________________________
VALIDATION
Does the clinical evidence base demonstrate effectiveness?
Bad: perhaps this is best defined as “not good” as there are many, many ways to have a poor-quality evidence base:
- no peer-reviewed published research with only “case studies” or “real-world data” from the marketing team. While these may be valuable (see next section) they are not a substitute for clinical evidence.
- claims of being “evidence-based” by referencing the research of outwardly similar solutions or interventions.
Just because I make a pill that is little and blue doesn’t mean my pill is going to achieve the same effect as Viagra.
- only qualitative and descriptive peer-reviewed studies. These can be helpful as background reading but do not prove effectiveness.
- any published research authored by an employee or commercial partner of the company, where their objectivity can reasonably be questioned.
Good: observational or quasi-experimental peer-reviewed studies that demonstrate outcomes that are relevant to the solution’s claims.
Great: high-quality intervention study of quasi-experimental or experimental design that incorporates a comparison group. It should show improvements in relevant outcomes over standard care. A randomised, controlled trial (RCT) is the gold standard, but other designs like a before-and-after study or crossover trial can also build evidence of effectiveness.
RCTs can prove whether a product has caused an outcome, for example, whether your app helped users to stop smoking. The NICE Evidence Standards Framework for digital health technologies recommends RCTs for evaluations of [Digital Health Products] that seek to prevent, manage, treat or diagnose conditions. (Public Health England)
When it comes to proving effectiveness, quantity is no substitute for quality.
Has effectiveness been validated in the real world?
Bad: the vendor has no or limited commercial case studies with outcomes of financial relevance to the buyer. This critical gap is often obfuscated with data from non-commercial populations such as testing with friends & family or repurposing trials data where user adoption is enforced by protocol.
Good: the vendor can provide commercial case studies with relevant commercial outcomes from reputable customers.
Successful digital health companies have one thing in common: they can demonstrate their commercial need. Health economics, real-world healthcare utilisation data and strong return on investment rationales are key (ANDHealth)
Great: commercial case-studies with relevant commercial outcomes backed up by customers who are willing to act as a reference and/or the commercial outcomes are independently evaluated by third-party researchers.
Do your due diligence. Don’t rely on the sales pitch to provide all the information you need. Ask for case studies and referrals to support the pitch, and ask to speak with the product engineers and existing customers to gain a realistic picture of the process to integrate this solution into your organisation. (AMA)
_________________________________________________________________
IMPLEMENTATION
What evidence is there for successful adoption?
It’s easy for a vendor to sound good at reaching patients and driving adoption: “we use an omnichannel communication strategy leveraging psychographic targeting and viral dynamics to reach all patients” — what does that actually mean?!
But ultimately adoption is simply success at converting the eligible population of patients or healthcare professionals to users and then sustaining their engagement.
Bad: the vendor avoids talking about adoption outcomes quantitatively and instead only wants to talk about the methods of enrollment. BIG RED FLAG! If the product is not appealing to the target end-users, the most psychographically-optimised email campaign in the world will not save it.
Good: 20–30% enrollment is the benchmark performance of established players using email, direct mail-outs and other activation methods. Ideally, the vendor has third-party research and customer references to back up their claims.
Great: any enrollment over 30%. Livongo is one of the longest-running digital health (coaching) platforms and, after a decade of optimising and hundreds of millions of dollars of VC funding behind their enrollment teams, they are able to get over 30% enrollment in ideal circumstances.
At the end of twelve months, our average enrollment rate for Livongo for Diabetes clients who launched enrollment in 2018 is 34% of the total recruitable individuals at a client (Livongo S-1)
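The benchmarks above reduce to simple arithmetic: enrollment rate is the share of the eligible (recruitable) population converted to users. As a minimal sketch, the helper below computes that rate and maps it onto the bad/good/great bands discussed in this section; the thresholds come from the article, while the function names and sample numbers are purely illustrative.

```python
def enrollment_rate(enrolled: int, eligible: int) -> float:
    """Share of the eligible population converted to users."""
    if eligible <= 0:
        raise ValueError("eligible population must be positive")
    return enrolled / eligible


def classify_enrollment(rate: float) -> str:
    """Map an enrollment rate onto the article's bands:
    over 30% = great, 20-30% = good, below 20% = bad."""
    if rate > 0.30:
        return "great"
    if rate >= 0.20:
        return "good"
    return "bad"


# Illustrative numbers only: 340 enrolled out of 1,000 recruitable
# individuals mirrors Livongo's reported 34% enrollment rate.
rate = enrollment_rate(enrolled=340, eligible=1000)
print(classify_enrollment(rate))  # prints "great"
```

When comparing vendors, make sure the denominator really is the full eligible population; quoting a rate against a hand-picked subgroup is exactly the kind of obfuscation this question is meant to surface.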
What support will the vendor provide for implementation?
Bad: the vendor has given little thought to the buyer’s product implementation whether for the initial launch, support, life-cycle management or continuous improvement. The vendor’s account servicing stops at providing the product. Upfront and/or purely fixed pricing is another BIG RED FLAG!
Good: The vendor can provide reference materials for implementation to the buyer: whether that’s launch plan templates, staff training materials, workflow designs, or patient education brochures.
Great: The vendor is willing to embed staff, move mountains and do whatever it takes to deliver success. The vendor can back up their promises with a track record of successful implementation across a variety of organisations and clinical environments.
AMA suggests that buyers talk with vendors about value-added services they may be able to provide, such as project management, staff and patient training, patient engagement management, etc. (AMA)
_________________________________________________________________
RISK
How does the vendor manage privacy and security?
Bad: a hallmark of a vendor with significant security risk is appearing to be at either end of the security spectrum:
- little thought for security (both technical and otherwise) or unwilling to share security practices in detail and document the privacy flows.
- promising 100% security, which far more often reflects naivety than any world-beating security expertise.
When it comes to privacy, make sure that the health benefits generated by the product are the main game. Any monetisation of data, targeted ads or sharing of data with unrelated third parties suggests that your patients are the product being sold. In that case, you are evaluating a marketing tool, not a digital health solution.
Good: the vendor ticks all the boxes and is compliant with all relevant regulations, which will likely vary from country to country.
Great: the vendor is privacy-first and sees this as mission-critical and a differentiator. The vendor is fully transparent with their privacy practices and dataflows. A great vendor will be similarly transparent on security and willing to share processes for downside scenarios, with mature realism about risk and clear mitigation strategies. The vendor will also ideally have an ongoing agenda of work to continuously monitor and improve security.
As a starting point, the AMA’s Digital Health Implementation Playbook has 11 key privacy and security questions for vendors. Importantly, these questions are both easy-to-understand for you and insightful on the vendor (AMA)
Does the vendor meet and understand relevant regulations?
Bad: the vendor claims to be “unregulated” but cannot clearly articulate how or why it sits outside the regulations.
Good: the vendor understands the regulatory landscape including Medical Devices and Software-as-a-Medical-Device and maintains a watching brief over the current evolution of regulations in this space across relevant jurisdictions.
The question isn’t whether digital health is regulated; the question is whether the product is a medical device (TGA)
Great: the vendor has met all relevant regulatory requirements and/or is in dialogue with relevant regulators or industry bodies about regulation. The vendor supports regulation as a mechanism to protect consumers while classifying digital health products for medical application and reimbursement.
Digital health innovators should be encouraged to view regulation as a competitive advantage as it can smooth the adoption and customer acquisition process by indicating a product is safe and efficacious as verified by an independent body, the regulator (ANDHealth)
How invested is the vendor in a long-term partnership?
Bad: clear counter-party risk signalled by issues such as suspect financial sustainability, unclear commitment to the solution, or references from unknown and/or unidentifiable customers.
Good: stable vendor with demonstrable financial sustainability and customer references, but unwilling to commit to performance transparency (or, even better, value-based pricing).
To ensure value for money to the health and social care system, the DHT (Digital Health Technology) owner must commit to providing data demonstrating that people using the DHT are showing the expected benefits from its use. This could include improvements in symptoms or general health measures (NICE)
Great: Both sides of the partnership are willing to continuously validate and optimise impact in the specific setting, and tie fees to product performance and health outcomes achieved. A bonus is a growing vendor with product and business velocity so that each customer stands to gain from ongoing investment in product and technology, and broadening experience in implementation and clinical workflows.
Key sources
- NICE’s Evidence Standards Framework for Digital Health Technologies
- American Medical Association’s Digital Health Implementation Playbook
- Public Health England’s Guidance on Evaluating Digital Health Products
- Royal Australian College of General Practitioners’ “Factsheet: Health Apps”
- ANDHealth’s report Digital Health: The Sleeping Giant of Australia’s Health Technology Industry
_________________________________________________________________
About the authors
Grace Lethlean is Co-Founder of ANDHealth, Australia’s only dedicated digital health commercialisation support organisation, co-inventor of a patented digital intervention which she took from clinical trials to IPO, and a Churchill Fellow (Digital Health). Over the past 3 years, Grace has worked with over 200 digital health companies.
Hugo Rourke is Co-Founder and COO of Perx Health, a leading Digital Therapeutic and Digital Care company applying advanced behavioural science and unprecedented health engagement to improve condition management. Perx Health’s breakthrough product has been proven in commercial trials and in clinical research. Hugo is passionate about using the best of consumer engagement to improve health outcomes in a person-centred and evidence-based way.