The first time I heard “Dissemination and implementation science frameworks are like toothbrushes, everyone has one and nobody wants to use someone else’s,” I almost spit out my room-temperature conference coffee. A little humor goes a long way when we’re discussing a field that has emerged from many disciplines (from social work to organizational psychology, engineering to nursing research, behavioral medicine, and quality improvement). It surprises no one that the taxonomy and available frameworks are, well, taxing.
With over 167 theories, models, or frameworks reported, it is hard to discern which framework best suits your goals (see dissemination-implementation.org).
Over the last 12 years I have used RE-AIM for planning and evaluation because it has always been relatively straightforward and easy to describe to my stakeholders. I felt confident using RE-AIM while planning interventions thanks to guidance from papers like Klesges et al., 2005 and Glasgow et al., 2007. My team often uses a process (or logic) model to complement the exploration of each dimension (Estabrooks et al.; Balis et al., 2018), and readily acknowledges that what we plan at the beginning may not be what transpires, so we need to use an assess, plan, do, evaluate, report cycle (Harden et al., 2018).
When I discuss priority issues with our stakeholders, we decide what is most important, right now. This may be influenced by end-user needs, funding opportunities, or the best building block for “the next” grant or opportunity. Indeed, priority means prior to all things, so there can be only one. Okay, maybe there can be more than one (unlike Highlanders). The team needs to drive which dimensions require more attention at that point in time in a particular project.
This priority setting informs resource allocation as well. While RE-AIM consists of five dimensions (and each is important for overall impact), applying all dimensions in one project is likely to be challenging. By way of example, perhaps we’ve prioritized implementation fidelity, but we still want to know why people did or did not register for the program (reach). We can assess this through survey and qualitative inquiry; the article by Glasgow and Estabrooks, 2018 discusses these and related pragmatic issues in applying RE-AIM, especially when resources or time are limited.
As a mixed-methods researcher, I have always appreciated that we can use both deductive and inductive approaches to analyze these RE-AIM qualitative data. At times we have set up the interview guide by RE-AIM dimension a priori; at other times we have found that, through a grounded theory approach, the meaning units still align with the dimensions of RE-AIM. Please click here to view Dr. Holtrop’s 2018 paper or webinar discussing qualitative inquiry related to RE-AIM.
Finally, I have always been intrigued by maintenance. Does the project end after the GRA submits their dissertation? Do participants continue the behavior change when nobody’s there to support them? Do the settings we partner with continue, adapt, or discontinue the program we have been studying after the research funding ends? The original 1999 Glasgow, Vogt, and Boles paper suggested that maintenance could be assessed 6 months after the intervention. However, it is important to emphasize that RE-AIM has changed significantly (Glasgow et al., 2019) from that original article (which should no longer be cited as the definitive statement on RE-AIM), and the maintenance time horizon should be determined by the research question and the team of individuals carrying out the research. For example, sometimes the maintenance stage is as long as 5 years (Estabrooks et al., 2008; Estabrooks et al., 2011; new article in press by Shelton et al.)!
My experiences with RE-AIM have been extensive, but some of the most interesting (and frustrating) experiences are when people contact me to share that they’ve received a poor score on a grant with reviewer comments like “RE-AIM cannot be used in planning.” We hope that this blog, as well as some resources on our website, helps debunk that and related myths (see FAQs), and that we can continue to see RE-AIM meet the needs of researchers, practitioners, and students throughout the planning, delivery, and evaluation stages of research and practice.
We appreciate you visiting the website and want to highlight a few places that may be helpful: slide decks you are encouraged to use in presentations, the FAQ and Resources sections, and our webinar series.
As always, please let us know if you have questions, ideas for webinars or suggestions for the website.
Samantha Harden and the National Working Group on RE-AIM Planning and Evaluation Framework