In the past few months, APS entered into some exciting partnerships. You probably already know that we partnered with the March for Science. I marched in Chicago with APS member Judy Moscowitz (pictured), our friends and family, and 60,000 other supporters of science (not all pictured).
We joined the Behavioral Medicine Research Council, led by Ken Freedland.1 The Council’s mission is to “identify and prioritize strategic research goals and to catalyze concerted, multidisciplinary efforts to achieve them.” All of you likely appreciate that behavioral interventions could improve public health to a much greater degree than they currently do. Stronger evidence for behavioral interventions should lead to greater use of such interventions, more third-party endorsement of and compensation for interventions, and wider integration of interventions into practice. APS, along with the Academy of Behavioral Medicine Research, the Society for Health Psychology, and the Society of Behavioral Medicine, will send two representatives to the Council, initially serving 3-year and 2-year terms. The Nominations Committee has selected Elissa Epel (3 years) and yours truly (2 years) to serve as our first representatives. The Council will identify important preclinical and clinical research questions, produce scientific statements, form strategic research networks, and ultimately put the “evidence” in “evidence-based behavioral medicine.” You can appreciate that this is not a quick or simple process, but the cause is just, and the goal is critical. Many thanks to Ken for investing so much of his time and energy in forming the BMRC. I’m excited to be involved in this initiative, and I’m looking forward to working with Elissa, Ken, and the other representatives on the Council.
APS also partnered with the NIH Science of Behavior Change (SOBC) initiative, supported by the NIH Common Fund. For those of you who did not get a chance to hear APS member Lis Nielsen’s1 presentation in Seville, the SOBC seeks to bridge insights from basic and clinical behavioral science and to connect the various silos of problem behaviors, such as smoking, drinking, diet, physical activity, and non-adherence. Its vision: “Infuse the study of mechanisms of behavior change across Institutes, behaviors, diseases, and translational stages on a large scale.” Taking an experimental medicine approach, the SOBC seeks to identify measurable, malleable, and causal processes that lead to behavior change in interventions. That is, what basic processes mediate behavior change in interventions? You can read more about SOBC at https://scienceofbehaviorchange.org/.
The current SOBC target processes are self-regulation; stress resilience and reactivity; and interpersonal and social processes. Of course, APS members recognize the importance of these processes, and a glance at the SOBC research network finds many APS members. For all of us in and outside the network, an important resource on the website is the Measures page. This page provides access to dozens of SOBC-validated measures relevant to the three target processes. The measures can be filtered by process, time for administration, and type of measure. Want a self-report measure of interpersonal processes that can be completed in less than 5 minutes? How about an observational measure of self-regulation? (Note: the page will be in constant development, so keep checking back.) Another interesting resource is the SOBC Grand Rounds (look on the News page). This month, Santosh Kumar, PhD, will present his work on digital mHealth biomarkers and sensor-triggered interventions. You can subscribe to the mailing list (at the bottom of the SOBC webpage) to get SOBC news and information sent to your email.
In contemplating these two networks, I am struck by the importance of concerted, cooperative efforts to move psychosomatic and behavioral medicine forward. As concerns about replicability in science mount, we should strive to do work that at minimum employs reliable and appropriately generalizable measures (including biomarkers!), large sample sizes, and valid statistical inference. We need to be running multiple replications of our own work and that of others. In psychology, only 3% of journals state in their aims or instructions that they accept replications, but that translates to 33 journals.2 Replications can find outlets! We need to start preregistering our analyses and our replications. If you’ve written your IRB application, you’ve pretty much already done your preregistration work. Why not get credit for it?3
I, too, remember the days when N = 100 was considered a large sample – and it might still be considered large for a rare disease sample or an early-stage trial. But the research methods bar has been raised, and we can work individually and together to reach it. Is there still a need for smaller-scale research efforts? Certainly. Large-scale, high-impact projects are built on smaller, exploratory studies. But there’s always a next step that makes an initial result more credible or more useful. Consider the lowly correlation. For a correlation of .30, N = 82 yields a minimum of 80% power.4 But you won’t get an accurate (within ±.10) estimate of the magnitude of that correlation until N > 200.5 These days, N = 100 is just the beginning. If this state of affairs pushes psychosomatic and behavioral medicine scientists to cooperate and collaborate more, that is surely not a bad thing.
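For readers who want to see where numbers like these come from, here is a rough sketch of the power calculation using the Fisher z approximation. The function name and defaults are my own; G*Power’s exact computation (the footnoted N = 82) differs slightly from this approximation, which lands a few participants higher.

```python
import math
from statistics import NormalDist

def required_n(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect a population correlation r
    in a two-tailed test of rho = 0, via the Fisher z approximation.
    (Illustrative helper, not G*Power's exact method.)"""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    fisher_z = math.atanh(r)                       # Fisher z of the correlation
    # Standard error of Fisher z is 1/sqrt(N - 3), so solve for N:
    return math.ceil(((z_alpha + z_power) / fisher_z) ** 2 + 3)

print(required_n(0.30))  # approximation gives 85, close to G*Power's exact 82
print(required_n(0.50))  # larger effects need far fewer participants
```

Note how quickly the required N grows as the expected correlation shrinks – the same formula gives several hundred participants for r = .15 – which is why N = 100 really is “just the beginning” for typical effect sizes in our field.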
1 Thanks to Ken and Lis for sharing their materials with me.
2 Martin, G.N., & Clarke, R.M. (2017). Are psychology journals anti-replication? A snapshot of editorial practices. Frontiers in Psychology, 8, 523.
3 You can see sample preregistrations online at https://osf.io/e6auq/wiki/Example%20Preregistrations/
4 Calculated with G*Power, two-tailed alpha = .05.
5 Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609-612.
Suzanne C. Segerstrom, PhD
President, APS 2017/18