IBTN 2020 Session 7: Innovation in Digital Behavioural Interventions

CONTENTS:

View entire session (1 hour 37 minutes)

View the Q&A session

Introduction

Transforming behavioural interventions to digital: developing e-health solutions for behavior change (5:10)

Innovations in e-health solutions in behavioural medicine (26:42)

Professional and organisational factors to consider for successful digital behavioural interventions (49:58)

Discussion (1:13:00)

Speakers have kindly provided written responses to questions submitted by conference participants that could not be addressed during the Discussion session.

  1. Question from Simon Bacon – Given that tailoring is probably the key to achieving mass behaviour change, how best can we leverage machine learning within digital interventions to get personalisation of the interventions?

My apologies in advance for my answer, as this one is a bit of a “trigger” question for me. After you read this, I hope you can understand why. And, also, note that I don’t fault you or anyone for asking this question. We only notice a difference when that difference matters to us. I hope, after you read my response, you can see that, in my view, if we want to do tailoring well and understand it mechanistically, we, as a field, are going to need to be able to understand the differences among tools. The first step is to learn about the many tools and resources we have available to us for tailoring. Sorry if this is a bit crass, but often when I talk to people about this, there is a sort of implicit structure around terms like machine learning that goes something like this: “I’m going to use machine learning to do X. I’m going to do what I’ve always done, do magic through machine learning, and then the answer I’ve always sought will be available to me.” The problem I have with this is that “machine learning” is not a single thing but a very broad class of tools. Not only that, it is not all-encompassing, by any stretch, of the types of tools one might bring to issues like tailoring. Finally, there are times when machine learning is most definitely the wrong tool for a given tailoring problem. All of this is to say, it isn’t magic; what is needed is robust thinking about the complexity of the phenomenon, matched with the appropriate mathematical, computational, and hardware solutions to enact it. Doing this requires an understanding of the issue being studied, how the mathematical and computational apparatus works, and the hardware it is built on. We can’t rely on magic. We need to learn how the magic works.

Eric Hekler – presenter

I think we are still at the infancy of ML-driven personalization methods. There are real challenges involved in doing this type of personalization well, many of them having to do with data sparsity. Methods are currently being developed to adaptively pool data across people to deal with these issues, but they are still extremely immature. So, we are still a number of years away from having robust ML personalization methods. This also means that we currently don’t have any data on whether these types of personalization methods are actually useful. We are going under the assumption that, if done well, they will be useful, but I suspect that they will not be uniformly helpful. We will need a lot of evidence about the types of interventions and contexts where personalization helps. Again, we are nowhere near having that evidence right now.

Predrag Klasnja – presenter
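
The “adaptive pooling across people” idea mentioned above can be sketched, in a deliberately minimal form, as a shrinkage estimate: when a person has little data, lean on the group; when they have a lot, trust their own estimate. The weighting rule and all numbers below are hypothetical illustrations, not a method from the talks.

```python
# Illustrative only: shrink each person's noisy per-person estimate toward the
# group mean, borrowing strength across people when individual data are sparse.

def pooled_estimate(person_mean, person_n, group_mean, shrinkage=5.0):
    """Shrinkage-style pooling: with few observations (small person_n), the
    estimate leans on the group mean; with many, it trusts the person."""
    weight = person_n / (person_n + shrinkage)
    return weight * person_mean + (1 - weight) * group_mean

# A person with only 2 observations is pulled strongly toward the group...
sparse = pooled_estimate(person_mean=0.9, person_n=2, group_mean=0.3)
# ...while a person with 50 observations mostly keeps their own estimate.
rich = pooled_estimate(person_mean=0.9, person_n=50, group_mean=0.3)
print(round(sparse, 3), round(rich, 3))
```

Real methods in this space (e.g., hierarchical Bayesian models or pooled bandit algorithms) are far more sophisticated, which is exactly the point about their immaturity: how much to pool, and when, is still an open research question.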
  2. Question from Reyhaneh Yousafi – What are the points to take into account for having digital interventions with more sustained positive effects over a long period of time?

In my view, the key steps are to clearly define your success criteria, with those criteria achievable by a single unit/person, not success criteria based on comparisons between groups (e.g., significant differences, mean differences, and other nomothetic statistical notions). For example, people need to hit a benchmark that they can specifically achieve (e.g., a 10% reduction in weight, 150 minutes/week of MVPA, etc.). Next, one needs to think through the active mechanisms that directly determine whether that success criterion is achieved (i.e., the concrete mechanisms that, if changed, change the outcome). With that, the next step is to think through how to operationalize interventions that meaningfully impact those mechanisms. After that, think about how context and other factors might overwhelm those mechanisms or undermine their use. From that, you’ve built a causal loop diagram, or some other variant of a conceptual model of your intervention, that you can then use to guide your research. As you work, if you aren’t hitting your success criteria for specific people, ask why. What happened? Every time something doesn’t work, even at an N-of-1 level, is a time for reflection. There’s something you don’t understand. It might be poor measurement, a poor intervention, a poor causal model, etc. Take the time to understand your phenomenon and you can make progress on it.

Eric Hekler – presenter
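
The per-person success criteria described above (a benchmark a single person can hit, checked at the N-of-1 level) can be sketched in code. The function names and thresholds here are hypothetical examples based on the benchmarks mentioned in the answer, not an implementation from the session.

```python
# Idiographic success criteria: each is a benchmark a single person can hit,
# not a between-group comparison. Numbers below are illustrative examples.

def met_mvpa_goal(weekly_mvpa_minutes, target=150):
    """True if every recorded week reached the MVPA benchmark (min/week)."""
    return all(week >= target for week in weekly_mvpa_minutes)

def met_weight_goal(start_kg, current_kg, target_fraction=0.10):
    """True if the person lost at least the target fraction of body weight."""
    return (start_kg - current_kg) / start_kg >= target_fraction

# An N-of-1 check: when a criterion is missed, that is the cue to ask *why*
# (poor measurement? poor intervention? poor causal model?).
person = {"weeks_mvpa": [160, 180, 140], "start_kg": 90.0, "current_kg": 83.0}
if not met_mvpa_goal(person["weeks_mvpa"]):
    print("MVPA criterion missed -- time to revisit the causal model")
```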

Many of our digital interventions don’t really take time seriously. Something like Fitbit, for instance, does the exact same thing whether a person has been using it for 2 weeks or 2 years. There has been very little effort in this domain to figure out how intervention function needs to change over time. Should we be using the same intervention components the whole time? Should we be providing them on the same kind of schedule? The current implicit answer has been yes, that there is little need for meaningful adaptation over time, but the high level of intervention abandonment tells us that this is unlikely to be true. Similarly, current interventions don’t take into account day-to-day changes in people’s needs. We still give people step goals even when they have the flu. To me, a key question for digital interventions is how to effectively adapt them over a long time course. What data should we be using? What adaptations are meaningful over the course of a year or two? We need more work in this area.

Predrag Klasnja – presenter
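
The point about adapting to time-on-intervention and day-to-day context (e.g., not pushing step goals during illness) can be illustrated with a toy rule. All rules and numbers below are hypothetical, invented for illustration rather than drawn from any deployed system.

```python
# Toy sketch of time- and context-sensitive adaptation: the goal should not be
# the same in week 2 and year 2, or on a day the person is sick.

def daily_step_goal(recent_median_steps, weeks_of_use, is_sick=False):
    """Set today's goal from the person's own recent behavior and context."""
    if is_sick:
        return 0  # suspend the goal rather than push activity during illness
    # Early on, ask for a modest stretch; later, a gentler maintenance bump.
    stretch = 1.10 if weeks_of_use < 8 else 1.03
    return int(recent_median_steps * stretch)

print(daily_step_goal(6000, weeks_of_use=2))                  # new user
print(daily_step_goal(6000, weeks_of_use=104))                # long-term user
print(daily_step_goal(6000, weeks_of_use=104, is_sick=True))  # flu day
```

Even this crude rule makes the open questions concrete: what data should drive the adaptation, how often should it run, and how should the rule itself change over a year or two of use?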
  3. Question from Simon Bacon – Are digital health behaviour change interventions the pinnacle of personalised medicine, and what should we be teaching mainstream medicine about this?

That’s quite the leading question… =) I think we don’t have the evidence to make such a statement; we have some conceptual reasons to think that might be the case but, as the answer to the question above suggests, there’s a lot of work we need to do as a field if we want to realize the potential of these tools. In terms of teaching mainstream medicine, I think the short summary of what we need to teach is that behavior is complex. Because of that, it requires methods and approaches that match the inherent complexity of the phenomenon. With that said, we are increasingly making progress, as a field, on handling that complexity. Beyond this, we can’t fall into “biologicalism” and think that all health issues can be solved by impacting biological mechanisms. Many of the issues we are trying to make progress on are now driven more by behavior, social determinants, and context, with biology just being the conduit of these more root causal issues. Therefore, we need to engage with the complexity if we have any wish of actually helping people. Not doing so is, in my view, implicitly falling into the lessons we learned about racism. As Dorothy Roberts, a sociologist at UPenn, has written convincingly, racism is a fatal invention grounded in biologicalism. I would argue that much of the BIOmedical research enterprise continues to fall into the same trap: something is only “real” if it can be linked to “objective” biology. Mainstream medicine will continue to struggle with issues like chronic disease, health promotion, and the like until it recognizes that what it is working on is more complex and can’t be solved with a pill targeting only the biological parts of the issue. See here for more context to the argument I just made: https://academic.oup.com/abm/advance-article/doi/10.1093/abm/kaaa018/5825668

Eric Hekler – presenter

Given that the evidence for the effectiveness of the current crop of digital behavioral change interventions is pretty underwhelming, I would feel very uncomfortable arguing that they are a pinnacle of precision medicine. That said, I am not sure that any other part of precision medicine has done much better so far either, so we may all be in the same boat right now. I think there is potential in this class of interventions, but that potential is far from realized right now.

Predrag Klasnja – presenter
  4. Question from Simon Bacon – Can the panel talk about some of the analytical complexity of some of these designs, and what is the best way for the standard behavioural interventionist to access the expertise needed to do this kind of work?

The key, in my view, is to get educated on these tools and methods. Take courses, collaborate with experts in the methods, and talk to behavioral scientists who have already built such collaborations to learn how to make them work and create win/win partnerships.

Eric Hekler – presenter

A key piece of this is also to be thoughtful about what methods are best suited for addressing the questions that a researcher is trying to answer. Methods are just tools, and we need to be using the right tool for the job at hand. That, of course, means that a person has to have a sense of the range of available options and what they are good for, which goes back to the point that Eric was making about reading up on methods, taking courses, etc. ICPSR is a great resource for this, as is the Methodology Center at Penn State, among many other resources.

Predrag Klasnja – presenter
  5. Question from Simon Bacon – When testing digital interventions, how much do we need to be testing at both the individual and the system level? How might we be able to marry these two aspects within specific feasibility (at this stage) studies?

When you say “at the systems level,” I’m not exactly sure what you mean. I’m assuming you are alluding to Dr. Gagnon’s talk and, thus, describing this in terms of how a digital tool will need to be used and implemented in real-world contexts and “systems,” such as healthcare systems. If that assumption is right, then, in my view, the two are intimately linked. See here for an approach to science that Pedja and I have been articulating, which explicitly builds out success for an individual unit as a strategy for studying fit into context and, thus, seeing how an intervention works for a person “in a system”: https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-019-1366-x?utm_campaign=BMCF_TrendMD_2019_BMCMedicine&utm_source=TrendMD&utm_medium=cpc

Eric Hekler – presenter

I agree with Eric here. An important part of how we are approaching this work is to be explicit about what the success criteria are. In different projects, some of these success criteria are likely to be at the individual level (% of body weight lost, level of activity, etc.), some may be at the level of a family or a community, and some may be at the organizational level. But as long as the success criteria are clear, one can find ways to assess if we are meeting them/getting closer to them.

Predrag Klasnja – presenter
  6. Question from Simon Bacon – In a number of AI-based digital settings there are major concerns about inherent bias (e.g., facial recognition and non-white individuals). How can we ensure that these interventions, at both the individual and system level, are not biased by the perspectives that we overlay onto the interventions we put into place?

This is a profoundly important question and one we should all spend a lot more time thinking about. We so easily build our beliefs, narratives, and viewpoints into the work we do. As time passes, the subjectivity of the researcher gets lost, and what is left is “objective fact.” (See Michael Polanyi’s book Personal Knowledge for an interesting philosophical grounding that undergirds the claim I just made.) While I don’t claim to have a firm grasp on this, in my view, one of the first things we need to recognize is that any algorithm (what you are labeling “AI-based digital”) is shaped by the success criteria set for it and the data sources that go into it. A key strategy we need to work on, then, is engaging with the people we serve, with a particular focus on ensuring they deeply and meaningfully understand the success criteria and the pathways we are using to reach those criteria. I see this as working through citizen-led science and other efforts. If you are curious, here’s a talk I gave that is directly relevant to this: https://youtu.be/hBSPuqBnhXs

Eric Hekler – presenter

In addition to what Eric said, there is a whole area of computer science now that is studying these issues. People are working on making algorithms themselves more transparent, developing methods for querying the algorithms to make it possible to understand why the algorithm did what it did (this is called interpretable AI), as well as modeling sources of bias and creating structures for training of classifiers that will minimize such biases. As our field (digital interventions) develops, we need to be bringing in best practices from these other areas, in addition to engaging in our own inquiries into the ethics and power implications of the work that we are doing.

Predrag Klasnja – presenter
  7. Question from Simon Bacon – I wonder if the panel can talk about the contrast between patient/stakeholder integration in the research process vs. continual fine-tuning as ways to identify the optimal approach to developing individualised behavioural interventions?

I personally think there’s a great deal more we can do with “tuning”; I offered a control-systems way of thinking about it. I also think there are ways to invite people to “tune” themselves (see the previous discussions for bits of this, particularly the video above; also see this paper: https://dl.acm.org/doi/fullHtml/10.1145/3381897).

Eric Hekler – presenter

Patient and stakeholder integration in the research process is important to make sure that we ask the right questions and that the problem is framed in a way that corresponds to people’s lives. Then, the intervention can be fine-tuned to match individual preferences and characteristics. Both are important.

Marie-Pierre Gagnon – presenter

The two questions (stakeholder engagement and tuning) seem to me to be related but separable. The former is about how we develop interventions and to what extent patient/stakeholder perspectives and needs guide project and intervention design. This goes all the way down to co-construction of success criteria, and even the framing of the problem itself. I see tuning as an ongoing process of micro-adaptations over time that continuously adjusts the system’s functioning to changes in individuals’ needs.

Predrag Klasnja – presenter
  8. Does it make sense to think about idiosyncratic populations, such that continuous tuning could be done for population-level interventions? Fascinating talks, thank you!

Yes, though I would suggest the concepts of both “levels” and “populations” will lead us astray. They are the wrong metaphorical structure for us to be working in. In my view, the concept of holons is more valuable: something that is both a whole and a part of something else. I think working on differently sized holons (e.g., the holon of a person, to a family, to a neighborhood, to a community, to a city, etc.) is a valuable way of going. I see the “tuning” approach as directly linking the work of helping individual people to a “learning healthcare system,” with the learning healthcare system as a larger holon. The key, though, is to start thinking of things as systems and not as “levels” and “populations.” Those are statistical artifacts that, in my view, are valuable from a data-structuring perspective but problematic for the goal of understanding the real-world phenomenon under study.

Eric Hekler – presenter
  9. When designing digital interventions and deciding on underlying theoretical models, choosing operant conditioning (rewards) would seem to render the user forever dependent on the intervention (app) to maintain behavior over the long term… How can we build digital interventions that develop more autonomously regulated behavior (using self-determination theory, for example), so that people no longer need the constant rewards provided by the intervention to engage in a behavior?

This is a great question, and I answered it in the talk. In my view, this is exactly the right goal. We want to do everything we can to make sure people don’t get addicted to our tech. See my earlier paper on “tuning” as a starting conceptual frame for self-study: https://dl.acm.org/doi/fullHtml/10.1145/3381897

Eric Hekler – presenter

I have a slightly different perspective on this. In my view, while we want to be focusing on interventions that can help individuals internalize motivation and capacity, I don’t see any fundamental issues with the interventions continuing to function in the background so they can provide support when a need arises again. Most of the behaviors digital behavior change interventions aim to support are complex, context-dependent behaviors that are highly dynamic. Even individuals who do Ironman events get sick, get injured and have babies; even skilled patients with type 1 diabetes experience changes in their physiology over time that require them to re-adjust how they eat, exercise, manage stress, etc. If an intervention can provide support when such needs arise, even if it’s just sitting quietly in the background for stretches of time when things are going well, I don’t see this as being inherently problematic.

Predrag Klasnja – presenter

PLEASE NOTE: Though numerous questions were submitted by conference participants, only the questions for which we obtained responses are shared here.