mHealth Research: Current Methods, Alternative Approaches, and Issues for Discussion

Topics: Behavior Change Behavioral Health mCessation mHealth Monitoring and Evaluation Public Health Research Smoke Free Smoking Cessation


On December 3, 2015, the National Cancer Institute convened a group of 18 behavioral scientists for a day-long meeting to discuss scientific methods for evaluating technology-mediated behavior change interventions, with an emphasis on mobile smoking cessation interventions. This post contains excerpts from a document provided to participants in advance of the December 3 meeting. The document was used to set context for discussion by providing a high-level overview of the methods most commonly used in published mHealth research; limitations worth considering related to current methods; alternative methods to consider; and associated theoretical and empirical questions.

Methods in mHealth research: Current state

Similar to the broader behavioral medicine literature, most mHealth studies have used observational (e.g., [1]) or randomized trial methods (e.g., [2]). Single-case experimental studies have been used as well (e.g., reversal and multiple-baseline designs [3]; for a summary of “potential research designs to evaluate the efficacy and effectiveness of mHealth interventions,” see [4](p. 232)).

Broadly, there are two types of outcomes in mHealth research: process outcomes and clinical outcomes. Process outcomes, in this case, refer to data indicating an individual’s use of the mobile application being studied. As such, process outcomes are usually treated as the evidence of engagement in mHealth research; that is, whether and how people use the mHealth intervention. Process outcomes are data that document, for example, how many times an individual opened the app; their usage patterns over time; and their adherence to data collection protocols (e.g., time- or event-based ecological momentary assessment). Clinical outcomes refer to data indicating an individual’s status in relation to an intermediate or distal outcome associated with the behavior of interest. In the case of smoking cessation mHealth research, clinical outcomes include achieving initial abstinence; slips; lapses; relapses; and sustained cessation.
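As a concrete illustration of a process outcome, adherence to a time-based ecological momentary assessment protocol can be summarized as the fraction of scheduled prompts a participant answered. The records and field names below are hypothetical, not drawn from any particular study:

```python
from datetime import datetime

# Hypothetical process-outcome records: scheduled EMA prompts and whether
# the participant responded (field names are illustrative only).
ema_prompts = [
    {"scheduled": datetime(2015, 12, 1, 9, 0), "responded": True},
    {"scheduled": datetime(2015, 12, 1, 13, 0), "responded": True},
    {"scheduled": datetime(2015, 12, 1, 17, 0), "responded": False},
    {"scheduled": datetime(2015, 12, 1, 21, 0), "responded": True},
]

def adherence_rate(prompts):
    """Fraction of scheduled EMA prompts the participant answered."""
    answered = sum(1 for p in prompts if p["responded"])
    return answered / len(prompts)

print(adherence_rate(ema_prompts))  # 0.75
```

Summaries of this kind (app opens, usage patterns, adherence rates) are the raw material of the process-outcome side of the mediation question discussed below.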

Implicit in mHealth research, though not always explicitly stated, is the assumption that process outcomes (i.e., engagement) mediate the relationship between exposure to the mHealth intervention and clinical outcomes.

Given the dynamic nature of behavior and behavior change, there are multiple contexts in which the relationship between process and clinical outcomes is meaningful: whether the individual is initiating change or working to maintain it, both before and after the achievement of an interim or distal goal.

For as long as the individual has access to the mHealth intervention, engagement with the intervention (as measured by process outcomes) is one mediator of movement from one context to another.

Limitations of the current state

Limitations of the most common approaches in mHealth research have been well documented [4, 5]. These limitations include the disparity between the speed at which research is usually conducted and the speed at which technology and mHealth solutions evolve or iterate (in 2013, Dr. David Mohr and colleagues published a methodological framework, “Continuous Evaluation of Evolving Behavioral Intervention Technologies” to specifically address these limitations [6]).

Another limitation of existing research is a relative lack of granularity in the examination of outcomes in relation to a user’s exposure to a mobile application. That is, studies often examine relationships between use of a mobile application designed to affect a specific behavior and changes in that behavior over time (e.g., do users of a smoking cessation mobile application achieve cessation and maintain it?) rather than relationships between use of specific functional capabilities within a mobile application and changes in behavior in near-real time (e.g., do users of a smoking cessation mobile application report fewer cravings in the four hours after viewing a tip about quitting provided by the app?). Another limitation worth noting, though addressing it goes beyond the scope of this document, is that very few mobile applications targeting health-related behaviors are meaningfully integrated into clinical care.

These two limitations – that commonly used methods in mHealth research do not accommodate iterative software development and that more nuanced, frequent endpoints are often not considered – can both be addressed through the use of innovative, more agile research methods. Both are also relevant to the question of whether a dual focus on process and clinical outcomes would benefit the field. When process and clinical outcomes are not adequately distinguished, an important opportunity for optimization is lost. The nature of exposure in mHealth research is fundamentally different from exposure to a medication, for example. A process outcome like “adherence” in mHealth research cannot be divorced from the iterative design of the mHealth application – there is always an opportunity to leverage the design or functional capabilities of an app to affect process outcomes, such as use of the system, which ultimately have an impact on the clinical outcomes of interest.

Alternative methods

Agile designs. Alternative methods are being used more frequently as the field of mHealth research grows. Most notably, sequential, multiple-assignment randomized trials (SMART; [7]) have received increased attention. SMART designs are well suited to mHealth research in that outcomes are assessed more frequently, so change is measured at both proximal and distal intervals. Additionally, there are multiple points of randomization, which allows the same study population to be exposed to more than one version of an intervention. SMART designs may be considered an example of “agile” methods [8] – methods conceptually informed by engineering in their appreciation for the dynamic nature of behavioral systems [9].
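The logic of sequential randomization can be sketched in a few lines. The two-stage scheme below is purely illustrative, not drawn from any specific trial: everyone is first randomized to one of two hypothetical app versions, and non-responders are later re-randomized to an augmented or switched option, so the same population experiences more than one intervention variant.

```python
import random

def smart_assign(participant_id, responded_to_stage1):
    """Two-stage assignment in the spirit of a SMART design (illustrative only).

    Stage 1: every participant is randomized to one of two initial app versions.
    Stage 2: responders continue as assigned; non-responders are re-randomized
    to one of two second-stage options.
    """
    rng = random.Random(participant_id)  # seeded per participant for reproducibility
    stage1 = rng.choice(["app_v1", "app_v2"])
    if responded_to_stage1:
        stage2 = "continue_" + stage1
    else:
        stage2 = rng.choice(["add_coaching", "switch_app_version"])
    return stage1, stage2
```

Because stage-2 assignment is conditional on an interim outcome, a single trial can answer questions about both the initial intervention and the best follow-up strategy for those who do not respond.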

Just-in-time adaptive interventions. More recently, just-in-time adaptive interventions (JITAIs; [10]) have emerged. To some extent, JITAIs may be thought of as real-time cases of SMART designs. JITAIs have five components: decision points; intervention options; tailoring variables; decision rules; and proximal outcomes [10].

Of particular interest is the proximal outcome, or a localized indicator of intervention success. A proximal outcome is a near-real-time outcome associated with exposure to a real-time intervention. So, rather than ask “does use of a smoking cessation app affect the probability of quitting?”, a proximal outcome would be the focus of a question like “does exposure to tips about quitting in real time affect change in urge level over the course of 15 minutes?” In this way, considering proximal outcomes addresses the limitation of existing mHealth research noted above: a relative lack of granularity in the examination of outcomes in relation to a user’s exposure to a mobile application.
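A proximal outcome of this kind can be computed directly from time-stamped data. The sketch below, using invented urge ratings on a 0–10 scale, measures the change in urge from the last rating before a tip was shown to the first rating within a 15-minute follow-up window:

```python
from datetime import datetime, timedelta

# Hypothetical EMA-style urge ratings (0-10) with timestamps; values are invented.
urge_ratings = [
    (datetime(2015, 12, 1, 10, 0), 8),
    (datetime(2015, 12, 1, 10, 12), 5),
    (datetime(2015, 12, 1, 11, 30), 7),
]
tip_shown_at = datetime(2015, 12, 1, 10, 2)

def proximal_urge_change(ratings, exposure_time, window_minutes=15):
    """Change in urge from the last rating before exposure to the first rating
    within the follow-up window; None if either rating is missing."""
    before = [r for t, r in ratings if t <= exposure_time]
    window_end = exposure_time + timedelta(minutes=window_minutes)
    after = [r for t, r in ratings if exposure_time < t <= window_end]
    if not before or not after:
        return None
    return after[0] - before[-1]

print(proximal_urge_change(urge_ratings, tip_shown_at))  # -3
```

A negative value here would indicate a drop in urge following the tip – exactly the kind of granular, near-real-time endpoint that distal abstinence measures cannot capture.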

Sensor-enhanced research. The more frequent assessments that are necessarily a part of JITAIs that include proximal outcomes require a higher level of data density than may otherwise be needed. From a process perspective, there is a need to acquire more data from a participant without (ideally) over-burdening them with data collection requests. Even though many studies with intensive ecological momentary assessment protocols show that participants are often highly adherent to time-based assessments (e.g., [11]), these studies usually include conditions that would not be feasible to scale (e.g., participants are paid for their adherence to a data collection protocol). An alternative to self-reported data that is becoming more widely used to meet the needs of data-dense research is sensor-enhanced methods, in which data are collected passively by sensing information that is valuable to the study. Sometimes the sensors embedded in most mobile devices are used (e.g., accelerometers, location data); other times separate sensor devices are paired with an mHealth application.

Micro-randomization. Another alternative method to consider is micro-randomization. This method considers every deployment of an intervention option (i.e., every time a mobile application exposes a user to stimuli or content that is intended to affect the target behavior) as an opportunity to randomize the exposure. In the context of a JITAI, this may also be thought of as a stochastic decision rule, because the decision rule that dictates deployment of an intervention option operates on a random schedule [10]. In operation, when available data indicate a user is eligible to receive a real-time intervention, micro-randomization leads to the user having an intervention experience that is selected randomly from a set of options – including no intervention at all. An example would be a mobile smoking cessation intervention that could deliver real-time motivational interventions when a user experienced a high urge to smoke. If randomization occurred at each incidence of a high urge reported by the user, then some percentage of the time the user might receive a motivational text; some percentage of the time the user might receive a motivational image; and some percentage of the time the user might not receive any intervention.
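In code, a micro-randomized decision point reduces to a weighted random draw over intervention options, including the option of delivering nothing. The option names and probabilities below are invented for illustration:

```python
import random

# Illustrative intervention options, including "no intervention" (None);
# the randomization probabilities are invented for this sketch.
OPTIONS = ["motivational_text", "motivational_image", None]
WEIGHTS = [0.4, 0.4, 0.2]

def micro_randomize(user_eligible, rng=random):
    """At each eligible decision point (e.g., a reported high urge to smoke),
    randomly select an intervention option -- possibly none at all."""
    if not user_eligible:
        return None  # decision rule: only intervene when the user is eligible
    return rng.choices(OPTIONS, weights=WEIGHTS, k=1)[0]
```

Logging each draw, together with the proximal outcome that follows it, produces the deployment-level records that make this design analytically efficient.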

There are several potential advantages to this approach. One is the ability to answer empirical questions about which real-time interventions are most effective, for whom, and under what conditions, because variability is introduced into participants’ experiences of the intervention. A second advantage is that fewer participants may be needed to achieve an adequate level of statistical power, because the deployment of a real-time intervention – not the individual – is the unit of analysis. That is, to understand the impact of one of the real-time interventions a mobile app is able to deploy, the analysis can treat deployments of particular interventions across all users as the sample, rather than the number of users who experienced the interventions (though statistical adjustments for within-user factors or other correlations in such data may be required).

Moving forward

The group convened by the National Cancer Institute used their day-long meeting as an opportunity to both improve and extend the ideas presented here. In future posts, we will share their comments and suggestions for refining the ideas in this post; the additional challenges they identified that face the field today; and their recommendations for using the alternative methods discussed here for advancing the field and continuing to push mHealth research to reach its full potential.


1.              Riley, W., J. Obermayer, and J. Jean-Mary, Internet and mobile phone text messaging intervention for college smokers. J Am Coll Health, 2008. 57(2): p. 245-8.

2.              Brendryen, H., F. Drozd, and P. Kraft, A Digital Smoking Cessation Program Delivered Through Internet and Cell Phone Without Nicotine Replacement (Happy Ending): A Randomized Controlled Trial. J Med Internet Res, 2008. 10(5).

3.              Dallery, J., R.N. Cassidy, and B.R. Raiff, Single-Case Experimental Designs to Evaluate Novel Technology-Based Health Interventions. J Med Internet Res, 2013. 15(2): p. e22.

4.              Kumar, S., et al., Mobile health technology evaluation: the mHealth evidence workshop. Am J Prev Med, 2013. 45(2): p. 228-36.

5.              Riley, W.T., et al., Health behavior models in the age of mobile interventions: are our theories up to the task? Transl Behav Med, 2011. 1(1): p. 53-71.

6.              Mohr, D.C., K. Cheung, S.M. Schueller, C. Hendricks Brown, and N. Duan, Continuous Evaluation of Evolving Behavioral Intervention Technologies. Am J Prev Med, 2013. 45(4): p. 517-523.

7.              Collins, L.M., S.A. Murphy, and V. Strecher, The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med., 2007. 32(5 Suppl): p. S112-8.

8.              Hekler, E., Agile Science: www.agilescience.org [cited 2015].

9.              Rivera, D.E., M.D. Pew, and L.M. Collins, Using engineering control principles to inform the design of adaptive interventions: a conceptual introduction. Drug Alcohol Depend, 2007. 88 Suppl 2: p. S31-40.

10.           Nahum-Shani, I., S.N. Smith, A. Tewari, K. Witkiewitz, L.M. Collins, B. Spring, and S. Murphy, Just-in-time adaptive interventions (JITAIs): An organizing framework for ongoing health behavior support. 2014, Methodology Center technical report no. 14-126.

11.           Shiffman, S., Ecological momentary assessment (EMA) in studies of substance use. Psychol Assess, 2009. 21(4): p. 486-97.
