Learn how to build an effective program evaluation strategy for digital health initiatives. Discover Springbuk's 4-step approach using engagement data, population cohorts, and balanced metrics to optimize your health programs and drive measurable outcomes.
Not all data are equal: you need the right data to understand whether your program offerings are providing value to your population.
At Springbuk, we understand the importance of using data to evaluate your digital health programs.
Knowing whether you have the right data starts with defining your business goals for the program. Specifically, you should identify the business impact or population health outcomes that you want to achieve through your partnership with a digital health solution. Once you have defined these goals, you can determine the data you need to analyze.
Springbuk works closely with a variety of digital health solutions, and through these partnerships, we have learned that engagement data is an essential component of program evaluation.
Engagement data tracks program participation at the member level. In general, this data captures activities completed, phone calls, provider contacts, device data (e.g., glucose monitoring), health risk indicators such as stress and depression scores, and health outcomes, including biometrics and lab values over time. It can also identify why members opt out of or disenroll from programs.
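As a minimal sketch, you can think of the engagement feed as one record per member per interaction. The field names below are illustrative assumptions, not a fixed Springbuk schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative engagement record; all field names are hypothetical.
@dataclass
class EngagementRecord:
    member_id: str                              # de-identified member key
    activity_date: date                         # when the interaction occurred
    activity_type: str                          # e.g., "coaching_call", "app_session", "device_reading"
    device_value: Optional[float] = None        # e.g., a glucose reading, if applicable
    stress_score: Optional[int] = None          # health risk indicators, if collected
    depression_score: Optional[int] = None
    a1c_value: Optional[float] = None           # biometric/lab value tracked over time
    disenrollment_reason: Optional[str] = None  # why the member opted out, if they did
```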
Program engagement data is integrated with other employer data, such as demographic data, including social determinants of health (SDoH) indicators, medical claims, pharmacy claims, lab results, and disability data. It’s also integrated with Springbuk data enrichments, including evidence-based guideline adherence, predictive risk scores, and episodes of care, depending on the employer’s business or population health goals for the program. Springbuk then often works with the employer and digital health solution to formulate a balanced approach to program evaluation.
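A minimal pandas sketch of that kind of integration, assuming each source can be joined on a de-identified member ID; the file and column names are hypothetical, not Springbuk's data model.

```python
import pandas as pd

# Hypothetical extracts, each keyed on a de-identified member_id.
engagement = pd.read_csv("engagement.csv", parse_dates=["activity_date"])
demographics = pd.read_csv("demographics.csv")      # includes SDoH indicators
medical = pd.read_csv("medical_claims.csv", parse_dates=["service_date"])
pharmacy = pd.read_csv("pharmacy_claims.csv", parse_dates=["fill_date"])

# Left-join employer data onto the engagement feed so every engaged member
# keeps their demographic, SDoH, and claims context for the evaluation.
merged = (
    engagement
    .merge(demographics, on="member_id", how="left")
    .merge(medical.groupby("member_id", as_index=False)["allowed_amount"].sum(),
           on="member_id", how="left")
    .merge(pharmacy.groupby("member_id", as_index=False)["fill_date"].count()
                   .rename(columns={"fill_date": "rx_fill_count"}),
           on="member_id", how="left")
)
```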
Engagement data is fundamental to defining your engaged cohort. This data may also provide insight into the members who qualified for the program but did not engage.
We have found that most digital health solutions do not capture data on non-participants. This is why we help our clients identify a matched cohort of non-participants from a study population. Matching criteria may include member characteristics such as age, gender, geography, and clinical conditions.
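As a hedged sketch of what such matching could look like, the snippet below draws a simple exact-matched comparison group on age band, gender, geography, and a chronic-condition flag. The column names and the matching approach are illustrative assumptions, not Springbuk's methodology.

```python
import pandas as pd

def build_matched_cohort(study_pop: pd.DataFrame, engaged_ids: set) -> pd.DataFrame:
    """Select non-participants whose profiles mirror the engaged members.

    Assumes study_pop has columns: member_id, age, gender, region, has_condition.
    This is simple exact matching on broad strata; a production analysis might
    use propensity scores or finer clinical criteria instead.
    """
    pop = study_pop.copy()
    pop["age_band"] = (pop["age"] // 10) * 10      # match on 10-year age bands
    pop["engaged"] = pop["member_id"].isin(engaged_ids)

    strata = ["age_band", "gender", "region", "has_condition"]
    matched = []
    for _, group in pop.groupby(strata):
        participants = group[group["engaged"]]
        candidates = group[~group["engaged"]]
        n = min(len(participants), len(candidates))
        if n > 0:
            # Draw the same number of non-participants as participants in this stratum.
            matched.append(candidates.sample(n=n, random_state=42))
    return pd.concat(matched, ignore_index=True) if matched else pop.iloc[0:0]
```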
It is important to keep people in the study who will best represent those in the program and remove outliers that can skew the analysis. For example, high-cost outliers can significantly impact outcomes and should often be removed from the analysis.
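A minimal sketch of one common way to handle this is to trim members whose total cost falls above a chosen percentile; the threshold and column name here are assumptions, not a Springbuk default.

```python
import pandas as pd

def trim_high_cost_outliers(members: pd.DataFrame,
                            cost_col: str = "total_allowed",
                            percentile: float = 0.99) -> pd.DataFrame:
    """Drop members above the chosen cost percentile so a handful of
    catastrophic claimants do not dominate the comparison."""
    threshold = members[cost_col].quantile(percentile)
    return members[members[cost_col] <= threshold]
```

Whatever rule you choose, apply it consistently to both the engaged and matched cohorts so the comparison stays apples to apples.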
Also, defining the right time frame for the analysis is crucial. Many programs work well when members first engage. This may be related to members’ excitement about having access to resources or a new way to address their health issue(s). However, program engagement tends to decrease over time or change when the status of a member’s condition changes.
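One way to make the time frame concrete, sketched here under the assumption that each member has a known engagement start date, is to anchor measurement windows to that date, for example comparing the 12 months before with the 12 months after.

```python
import pandas as pd

def tag_pre_post(claims: pd.DataFrame, enrollment: pd.DataFrame,
                 window_days: int = 365) -> pd.DataFrame:
    """Label each claim as 'pre' or 'post' relative to the member's engagement start.

    Assumes claims has member_id and service_date; enrollment has member_id
    and engagement_start (both dates parsed as datetimes). Claims outside
    both windows are dropped.
    """
    df = claims.merge(enrollment[["member_id", "engagement_start"]], on="member_id")
    offset = (df["service_date"] - df["engagement_start"]).dt.days
    df["period"] = pd.NA
    df.loc[offset.between(-window_days, -1), "period"] = "pre"
    df.loc[offset.between(0, window_days - 1), "period"] = "post"
    return df.dropna(subset=["period"])
```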
Although program tracking can start immediately, we recommend waiting at least 12 months after the program starts before performing any evaluation; this helps you check your assumptions and expectations. When the time comes to evaluate program engagement, easy-to-use tracking tools, like Springbuk Timeline, can help you visually demonstrate the impact of implemented programs and strategies by comparing metrics against your recent actions.
Often, employers elect to partner with digital health solutions or implement particular programs because they have identified an issue in their population that they want to address. Creating goals or identifying particular health outcomes that address these issues guides your evaluation of program performance.
For example, an employer may have noted that a significant number of members in their population were struggling to manage their chronic conditions. These members need help with medication adherence and affordability, access to appropriate care, better treatment pathways, and health coaching. As such, the business goals and health outcomes for the program may include the following:
Once the goals and outcomes are defined, the next step is to align them with what we refer to as “metrics that matter,” which can be measured in the data. These may include:
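Whatever the specific list looks like, comparing candidate metrics between the engaged cohort and the matched comparison group might look like the sketch below; the metric names and columns are illustrative assumptions, not a prescribed set.

```python
import pandas as pd

def compare_metrics(members: pd.DataFrame) -> pd.DataFrame:
    """Summarize candidate "metrics that matter" by cohort.

    Assumes one row per member with columns: cohort ('engaged' or 'matched'),
    pdc (proportion of days covered for key medications), er_visits,
    total_allowed, and a1c_change.
    """
    return members.groupby("cohort").agg(
        members=("cohort", "size"),
        avg_medication_adherence=("pdc", "mean"),
        er_visits_per_member=("er_visits", "mean"),
        avg_allowed_cost=("total_allowed", "mean"),
        avg_a1c_change=("a1c_change", "mean"),
    )
```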
Much like the scientific method, program evaluation is a process where we question, research, hypothesize, analyze, and then draw conclusions. This process creates a feedback loop to help digital health programs improve. For example, the learnings found during a program evaluation can inform member intervention strategies or refine program member identification algorithms.
You may also uncover key characteristics about the members who are most likely to engage or those who are most likely to benefit from the program intervention. Notably, the evaluation may also surface surprising results, possibly highlighting goals or outcomes that the program is not addressing. All of these findings are then used to enhance the program, and the evaluation process starts again in 12 to 18 months.
At Springbuk, we are committed to working side-by-side with employers, brokers, consultants, and digital health partners to create balanced and transparent program evaluation analyses. When all stakeholders work together to define and measure program success, members benefit.