Duration: 19 minutes
Presented by Kate Guastaferro, PhD
Speakers have kindly provided written responses to questions submitted by conference participants that could not be addressed during the Discussion session.
- Question from Justin Presseau – I love the emphasis on experimental design for optimisation and was delighted to hear you used a 2×2 at evaluation in one of your studies. I wonder if you can clarify something that has confused me for a while now: why does MOST use the term ‘experiment’ at the optimisation phase (with all the neat complex factorial options) and the term ‘RCT’ at evaluation, as if the latter phase is somehow not also an opportunity to use factorial designs? I’m aware of many factorial 2×2 effectiveness trials (and implementation trials), for example. My point – and question – is: why not clarify that, ultimately, the key emphasis is on preferring experimental designs and selecting whichever experimental design answers the questions at the optimisation or evaluation phase, rather than distinguishing the RCT as seemingly something different?
This has puzzled me – especially as psychologists who are experts in experiments sometimes don’t carry the same rigour into RCTs. I think the distinction is that RCTs have dependent variables that are valued in a practical, applied sense (but can also inform science), whereas experiments have dependent variables that are valued primarily for informing science.
- Question from Simon Bacon – How do we incorporate the issues of outcomes measurement into the ORBIT and MOST frameworks? What implications does this have for early phase vs. late phases as well as defining clinical significance?
ORBIT emphasizes the importance of building efficacious interventions before conducting large trials. Phase I and Phase II research might include “outcomes” related to intervention development that are assessed like the outcomes in later-stage (Phase III) research, but Phase I and Phase II “outcomes” are not assessed for confirmatory hypothesis testing as they are in Phase III, and should be considered with different goals, registration, and reporting requirements in mind. Notably, early-stage research should also include pilot and feasibility “objectives” related to future study design. Such objectives might include acceptance of randomization, completion of visits and questionnaires, missing data, and variation in individual results (which might be useful for power analysis). I recommend Sandra Eldridge’s work in this area, for example: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0150205.
PLEASE NOTE: Although numerous questions were submitted by conference participants, only the questions for which responses were obtained are shared here.