View entire session (1 hour 40 minutes)
- Chaired by IBTN Co-Lead Kim Lavoie, PhD
BrainPark: Using Neuroscience to Create Healthy Habits, Brains and Lifestyles (4:47)
Development of a weight loss intervention considering resource constraints (24:24)
Developing an evidence-based and patient-informed psychological intervention for infertility-related distress (40:33)
Data harmonization and individual patient data meta-analysis: exploring new avenues for evidence summaries and intervention development (55:26)
Speakers have kindly provided written responses to questions that were submitted by conference participants but could not be addressed during the Discussion session.
- To determine intervention components, when should you use a Delphi process vs. MOST?
MOST is a framework with multiple phases, and within each phase there are many different methods one could select. So, to my mind, the Delphi process could be used in the preparation phase, perhaps while deciding which intervention components should be designed and tested. I could also see it being used during optimization, when you reach the decision-making phase about assembling and either further optimizing or evaluating a treatment package.
I completely agree with Dr. Pfammatter – I don’t see the two approaches as being mutually exclusive. We can start with the Delphi to get a broad perspective of promising intervention components and then use MOST to identify which should be retained.
- Though there have been great advances in patient-oriented research / PPI (patient and public involvement), a number of researchers are still sceptical about using these methods or unsure how to get started. What advice and practical suggestions can you give the audience?
I was also hesitant, but it's absolutely been the best thing for my research. For one thing, it's made me much more aware of what research would actually make a difference for this population. It's also encouraged me to devote more time and attention to knowledge translation to the broader public. Sadly, that isn't something I had given much thought to before, but it has been very rewarding to hear "thank you so much for promoting education about this issue". I will say that I've enjoyed having a panel of patient-advisors more than having one or two patient-investigators: you get a broader range of perspectives, rather than a single perspective that you assume is representative of everyone from that population.
- Thinking about the multiple related issues around infertility-related distress, how do you think this information can best be used to help women deal with them?
Women will, of course, differ in terms of which issues are most prominent. One possibility I've thought about is tailoring the set of intervention components to a woman's specific set of issues. Before starting the intervention, a brief questionnaire would assess the extent to which each issue is relevant in that specific case, and the intervention would then deliver the most relevant components.
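The tailoring logic described above can be sketched in a few lines of code. This is a purely illustrative sketch: the issue names, component names, 0–10 scoring scale and cut-off are all assumptions for the example, not part of the actual intervention under development.

```python
# Hypothetical sketch of questionnaire-based tailoring: each issue is rated
# 0-10 before the intervention, and only components for sufficiently relevant
# issues are delivered, most relevant first. All names and the threshold are
# illustrative assumptions.

# Maps each assessed issue to its corresponding intervention component.
ISSUE_TO_COMPONENT = {
    "grief": "processing grief and loss",
    "avoidance": "re-engaging with avoided activities",
    "relationship_strain": "couples communication",
    "uncertainty": "tolerating uncertainty",
}

def select_components(scores, threshold=5):
    """Return components for issues rated at or above the threshold,
    ordered from most to least relevant."""
    relevant = [(issue, s) for issue, s in scores.items() if s >= threshold]
    relevant.sort(key=lambda pair: pair[1], reverse=True)
    return [ISSUE_TO_COMPONENT[issue] for issue, _ in relevant]

# Example: one hypothetical set of questionnaire responses.
scores = {"grief": 8, "avoidance": 3, "relationship_strain": 6, "uncertainty": 9}
print(select_components(scores))
# ['tolerating uncertainty', 'processing grief and loss', 'couples communication']
```

In this example, "avoidance" falls below the threshold, so its component is omitted; the remaining components are ordered by rated relevance.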
- How was the semi-structured interview constructed to identify the components of infertility-related stress?
The questions were open-ended – e.g. "On a day-to-day basis, what do you find to be the biggest emotional challenge about infertility?" for the women, and "What would you say are the unique psychological challenges that these women face?" for the mental health professionals. Most components were endorsed by both the women and the mental health professionals, while others came primarily from the mental health professionals: for example, avoidance and narrowing of activities were not necessarily recognised as potentially problematic by the women, but were very commonly cited by the mental health professionals. So I think that having both groups was important – they each brought a unique perspective.
- The processes of data harmonisation and IPD meta-analysis seem quite complex. Can you give the audience some idea of the main benefits, as well as the costs, of doing this?
The benefits of this approach include the opportunity to include unpublished and/or poorly reported data, including studies with lots of missing data. It increases statistical power, data quality and the generalizability of findings. From an analytical perspective, it is well suited to complex data types (time-to-event data, for instance), and it allows refined subgroup analyses and the exploration of mediators. In terms of cost, it is quite challenging to provide an overall estimate, as it will depend on the type of studies and the complexity of the captured data. Some resources suggest a cost of £1000 per trial, or £5-£10 per participant included in the IPD meta-analysis (please refer to Cochrane's IPD Meta-analysis Methods Group for further reading: https://methods.cochrane.org/ipdma/about-ipd-meta-analyses).
- In comparison with a regular meta-analysis, how different are the findings of a harmonization study in terms of reliability in clinical practice?
For clinical trials, comparison of the overall findings from the two approaches indicates the same direction of results, provided that the absolute information size is large. However, larger discrepancies are expected if the focus of the analysis is on exploring interactions and mediating effects of different variables. In terms of reliability, there is no doubt that IPD meta-analysis represents the gold standard of evidence summaries. Implementation of the IPD meta-analytic approach will ultimately depend on the specific research question, and it is important to balance the benefits and the challenges associated with it.
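The point about interactions can be made concrete with a toy example. The sketch below, using only simulated numbers and the Python standard library, shows why patient-level data support analyses that trial-level summaries cannot: with IPD we can pool the treatment effect within subgroups across trials, whereas aggregate data yield only one overall effect per trial.

```python
# Toy illustration (simulated data, stdlib only): aggregate-data meta-analysis
# sees one overall effect per trial; IPD lets us pool within subgroups across
# trials and so detect that the benefit is concentrated in one subgroup.
from statistics import mean

# Hypothetical patient-level records: (trial, subgroup, arm, outcome change).
records = [
    ("A", "young", "tx", 5.0), ("A", "young", "ctl", 2.0),
    ("A", "old",   "tx", 1.0), ("A", "old",   "ctl", 1.0),
    ("B", "young", "tx", 6.0), ("B", "young", "ctl", 2.5),
    ("B", "old",   "tx", 1.5), ("B", "old",   "ctl", 1.0),
]

def effect(rows):
    """Mean difference (treatment minus control) over a set of records."""
    tx = [y for _, _, arm, y in rows if arm == "tx"]
    ctl = [y for _, _, arm, y in rows if arm == "ctl"]
    return mean(tx) - mean(ctl)

# Aggregate-style view: one effect per trial, then a simple (unweighted) pool.
per_trial = [effect([r for r in records if r[0] == t]) for t in ("A", "B")]
pooled_overall = mean(per_trial)

# IPD view: pool within subgroups across trials - invisible to aggregate data.
young = effect([r for r in records if r[1] == "young"])
old = effect([r for r in records if r[1] == "old"])

print(pooled_overall)  # 1.75 - the overall effect masks the subgroup contrast
print(young, old)      # 3.25 vs 0.25 - benefit concentrated in the young group
```

A real analysis would of course use inverse-variance weighting and regression models with treatment-by-subgroup interaction terms; the point here is only that the subgroup contrast cannot be recovered from the trial-level summaries alone.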
- What would you say is the greatest challenge in applying IPD and data harmonisation in behavioural research?
In my opinion, the main challenge in performing an IPD meta-analysis is actually accessing the data from behavioural trials. In the pharmaceutical area, data-sharing platforms have been developed to give researchers access to data, and numerous data transparency policies reinforce the "open data" movement (supported by institutions such as the Institute of Medicine, the European Medicines Agency and the International Committee of Medical Journal Editors). Overall, poor reporting and registration of behavioural trials makes it harder to identify all of the studies of interest, inevitably leading to selection bias. In terms of data harmonisation, the lack of uniform definitions of behavioural exposures, and the diversity of behavioural interventions and measurements applied across studies, require multidisciplinary teams that can tackle these complexities in a systematic manner, enabling the creation of a consistent data structure.
PLEASE NOTE: Though numerous questions were submitted by conference participants, only the questions for which we obtained responses are shared here.