
When the Evidence Base Is All Sizzle and No Steak: How to Cut Through the Sales Spin

After 7 years in tech pitching his own digital health business, Scott Taylor provides his insider's cheat sheet for catching out digital health vendors. In this second of 3 articles, Scott covers establishing a "Quality Evidence Base."


"Our evidence is backed by science, research, and a really convincing PowerPoint presentation."

When you're investing in a new digital health solution, it's important to make sure that you're getting what you pay for. Unfortunately, like any industry, the digital health space has its share of snake oil salespeople who are more interested in making a quick buck than in helping their customers improve patients' lives.

That's why it's crucial to learn how to spot a quality evidence base and weed out the dodgy actors. It’s all too easy for a digital health vendor to tweak sample sizes, flash up impressive graphs, gloss over study design details and skip citations to present an overly rosy picture of health outcomes.

But how do you pick the winners from the flops?

In this article we cover 4 key questions to ask that will get to the truth of a vendor's evidence claims. In fact, we'll give you the tools not just to take off those rose-colored glasses, but to beat them with a sledgehammer until they are just sad little pink shards all over the boardroom table.

You'll finish this article with a new depth of understanding of how clinical evidence can be misrepresented, and the knowledge needed to cut through the spin. The 4 questions we'll cover are:

  1. Does the clinical evidence demonstrate effectiveness?

  2. Was effectiveness proven in real-world settings?

  3. How impartial is the research body?

  4. What is the evidence for sustained long-term health impact?

...

Q1. DOES THE CLINICAL EVIDENCE DEMONSTRATE EFFECTIVENESS?

Well duh, of course you'll ask for evidence of effectiveness. It seems like such an obvious question, but the devil is in the detail.

What you're trying to establish here is the veracity of the framework that supports the vendor's claims of effectiveness. Having watched or read roughly a hundred digital health pitches, I've seen two main ways people obfuscate around a weak evidence base:

1. Making claims about third-party research, rather than research of the program itself

This is a way of papering over holes by making logic bridges between ideas. It sounds like this: "Research shows that taking your blood pressure medication every day reduces the chances of a cardiac event in those over 65 by 27%. Our digital health solution supports your members to take their medication every day; therefore, it will reduce your heart failure hospitalization events." There's a massive assumption in the middle there: that the solution will actually have any impact on medication behavior, and so drive the subsequent outcome. Similarly, we see coaching solutions that point to third-party studies of coaching programs to claim their program is effective, but you have no idea whether the design and implementation of their coaching is similarly effective until they do a study of their own solution.

Instead, what you want to hear is that they ran a controlled two-arm study with heart failure patients, with only half using their product, and quantified a causal reduction in hospitalizations.
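To make that concrete, here's a minimal sketch (in Python, with every number invented purely for illustration) of the gap between a logic-bridge claim and what a two-arm study actually measures:

```python
# Hypothetical numbers throughout - purely for illustration.

# --- The logic-bridge claim ---
# Third-party research: daily adherence cuts cardiac event risk by 27%.
third_party_risk_reduction = 0.27
# The hidden assumption: what fraction of members actually change their
# medication behavior because of the app? The pitch quietly assumes 100%.
adherence_change_rate = 0.15  # an invented, more sobering figure

implied_effect = third_party_risk_reduction * adherence_change_rate
print(f"Logic-bridge effect once the hidden assumption is priced in: "
      f"{implied_effect:.1%}")  # roughly 4%, a far cry from 27%

# --- What a two-arm study measures directly ---
# Invented trial data: hospitalizations per arm over the study period.
control_n, control_hospitalized = 500, 60   # usual care only
treated_n, treated_hospitalized = 500, 48   # usual care + the product

control_risk = control_hospitalized / control_n  # 12.0%
treated_risk = treated_hospitalized / treated_n  # 9.6%
relative_risk_reduction = 1 - treated_risk / control_risk

print(f"Measured relative risk reduction: {relative_risk_reduction:.1%}")
```

The second calculation needs no bridging assumption: because the comparison group experienced everything except the product, the difference in hospitalizations is attributable to the product itself.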

2. Reliance on internal datasets

In this situation, claims about outcomes are made with only "case studies" or "real-world data," produced either by the data analytics team or, just as likely, the marketing team! Whilst real-world data is incredibly important (see the next question), if the data that underpins the vendor's claims has not been published it cannot be independently verified. As such, you have to take the word of Jeremy from Marketing that his analysis and Instagram polling prove effectiveness. We all know you can torture statistics to the point where they will tell you what you want to hear, which is why prospective, peer-reviewed studies are so important.

Jeremy from Marketing working through his clinical evidence analysis

What you do want to see is:

  • A randomized controlled trial (RCT) is the gold standard, but other designs like a before-and-after study, a crossover trial, or even an observational study can build evidence of effectiveness

  • Peer-reviewed published research. It doesn't have to be in a Tier 1 journal, but it does need to be scrutinized by peers in the industry for scientific rigor

  • A high-quality intervention study with an experimental design that incorporates a comparison group

...

Q2. WAS EFFECTIVENESS PROVEN IN REAL-WORLD SETTINGS?

The goal here is to determine how closely the research setting matches the lives of your members. Think of it as a scale: at one end is a highly controlled clinical trial, at the other a fully representative, real-world study (OK, so it's the world's most boring scale, but a really important one all the same).

Now, clinical trials are great. Everybody should hug a clinical trial they know (although something tells me that an RCT might not actually be a "hugging" type of person).

It's just that you are evaluating a digital health program that is going to be used in the real world, so you want to know how the solution performs in that setting too.

Some of the limitations of clinical trials are:

  • They can be conducted over a short period of time, whereas you'll want to see evidence of long-term outcomes (see Q4). In digital health we've seen trials conducted over just a handful of weeks, when outcomes take months and years.

  • Participants may regularly check in with clinical staff or know they are being observed, both of which can skew results toward more positive outcomes compared to real-world use

  • Participants are typically self-selected volunteers, which leads to more health-conscious and health-literate users than you'll find in a real-world population. This is especially worrying when comparisons are then drawn to the general population; people who sign up for clinical trials are simply more engaged in their healthcare

  • There may have been selection criteria (e.g. excluding patients with multiple chronic conditions) that introduce age, gender or socioeconomic biases

By comparison, evidence generated in real-world settings takes into account all the annoying human messiness of life. Elderly users who lack the tech literacy to download an app without support from trial nurses. Medicaid users who worry about using up their phone data. People who can't get transport to their medical appointments. Those who are too depressed to get out of bed, let alone manage their health. Users with multiple chronic conditions who aren't solely focused on the particular disease you care about. Houses with trip hazards. Communities without access to fresh food.

You get the idea. You want a realistic picture of how the digital health solution will work with your population, so ask lots of targeted questions and make your salesperson squirm.

Real life often looks nothing like a clinical trial setting
...

Q3. HOW IMPARTIAL IS THE RESEARCH BODY?

This one is pretty self-explanatory, but it's a line of enquiry that often gets missed in presentations and pitches. You want absolute confidence that there was no incentive to arrive at rosy conclusions, or to cherry-pick the most favorable outcomes.

Impartial research is so important because it ensures that the conclusions drawn from the research are truly accurate. That much is obvious. But it also builds trust and credibility. You could be starting a new relationship with this vendor, so you're looking to establish how transparent and credible they are in a short period of time.

The main red flag here is a conflict of interest in authorship or funding. This is also the easiest to establish. Ask about the authors' relationship to the business and their funding model, keeping your eye out for those who are employees or commercial partners of the company, like a pharmaceutical or medical device partner. By my estimate, almost half of digital health research is co-authored by employees of the company (biased much?) and more than 1-in-10 is co-authored by a founder whose personal identity and professional success are completely tied to a positive result - big red flag!

Great impartiality usually means a third-party research organization collating evidence at arm's length. This could be a university, a not-for-profit or a health system. If your salesperson discloses this information up-front, give them a gold star. If they proactively share any conflicts of interest or financial ties without you having to ask, they are a diamond in the rough, so lock the doors and don't let them get away.

...

Q4. WHAT IS THE EVIDENCE FOR SUSTAINED LONG-TERM HEALTH IMPACT?

As we discussed in the previous article on genuine engagement metrics, everyone in digital health should have this equation tattooed on their forehead:

Outcomes → Behavior Change → Engagement

The health outcomes you want require sustained behavior change. It's helpful to think of behavior change in two stages: the first is achieving short-term results, and the second is when the behavior becomes automatic and integrated into a person's lifestyle.

It’s not a perfect analogy but think about what it was like to learn to drive a car. There’s a period at the beginning where you have to consciously think through the steps of driving - “clutch, stick, release clutch, don’t cross my hands, blinker on, look in the blindspot, ignore the people honking, I’m sure that rattle is perfectly normal, breathe breathe” and so on. Now, when you drive, it’s instinctive and you don’t even notice what you're doing. Driving has become an automatic habit.

Getting to this second, automatic stage takes time. According to a 2021 study, it can take an average of 59 to 70 days for a new habit to become automatic. Don't take this as a hard truth, however; the length of time varies with the person, the frequency of repetitions, and the complexity of the behavior.

Sustaining a behavior change also requires a change in mindset. For example, you may want to shift from Dad Bod to Dwayne "The Rock" Johnson and begin exercising at the gym for a few weeks. But to sustain the habit beyond those weeks, your self-image has to shift too. This is where digital health programs can excel – the good ones support you as you develop a consistent routine that eventually leads to a new way of thinking.

All this is to say, you have to see results over many months (and preferably up to a year) to be confident that the outcomes you are promised will be delivered. If the clinical trials you are being shown ran for only 3 months or less, you should be skeptical that the program is going to work. Ask what outcomes look like after 6 months or a year, and don't let that salesperson fudge the answer.

...

Efficacy, Efficiency and Effusive Promises

A high-quality evidence base is the key to understanding whether a digital health solution is going to deliver the outcomes you want. Whilst a salesperson might promise the world, you just need to know if their solution will work in the real world (boom boom).

To get to the cold hard truth, ask them these 4 questions:

  1. Does the clinical evidence demonstrate effectiveness?

  2. Was effectiveness proven in real-world settings?

  3. How impartial is the research body?

  4. What is the evidence for sustained long-term health impact?

These will help you discern whether the evidence base is biased, and how effective and sustainable the program may be.

When You've Seen One Population, You've Seen… One Population

Now that you have your Diploma in Quality Evidence Interrogation (QUEEIN), and have accepted your graduation crown, it’s time to turn our attention to how relevant that evidence is to your population.

Watch out for the final article in this three-part series, where we delve into how to determine whether the vendor’s solution will succeed with your members. We’ll touch on how to translate their clinical evidence claims to your unique population and how to understand demographic differences.

We'll also dig into how the digital health vendor can demonstrate their experience in this space. This includes proof of adapting the solution to your population's needs, the vendor's implementation experience, and their support for a pilot program and evaluation planning.

We'll give you 3 key questions to ask so you can confidently decide whether to move forward with a pilot. After all, the only way to be certain a program will succeed is to implement it and find out! But we'll make sure you have all the tools needed to give it the best chance of success.

...

About the author

Scott Taylor is Co-Founder and CEO of Perx Health, a digital health company changing the way health plans engage with their high-risk members.

Perx enrols a higher proportion of members, interacts with them more frequently and keeps them engaged for longer than any other digital health program. They achieve this by tailoring behavioral motivation strategies to the individual, ensuring the completion of 90% of critical daily care tasks like medication adherence, physical therapy and attending appointments. 

Perx Health has already helped over 30,000 patients achieve better health outcomes and partnered with over a dozen healthcare organizations. Email us at hello@perxhealth.com to learn more. We're always happy to chat.

Further questions?

We love discussing our research - reach out
