Staff Perspective: Don’t Take My Word for It - How to Choose a Training

I get a lot of flyers for various continuing education opportunities. Some of the workshops sound interesting, but I have to admit, some of them sound…well, a little far-fetched. Let’s just say I skeptically wonder about the credentials of the trainer and whether research supports the content. Potential attendees must often take the trainer’s word about the validity of the training. As a trainer myself, that leads me to contemplate what specifically I bring to the table when delivering trainings, and, more broadly, what the other CDP faculty members and I have to offer. In other words, if you are considering attending a CDP training, why should you take our word for it?

Probably the first question I ask myself when deciding whether to sign up for a training opportunity is whether the treatment taught is evidence-based. Start by questioning efficacy – does the treatment work under ideal conditions? – then question effectiveness – does the treatment work in real-world settings? I appreciate randomized controlled trials (RCTs), where the treatments are standardized and controlled for more straightforward comparison. If there are RCTs that support the efficacy of a treatment, just how many studies are there, in what populations were they conducted, and how do the outcomes compare to treatment as usual? That said, we want to see research on effectiveness, too. What happens with this treatment when patients have comorbidities? What about in a real-world setting, where dropout is not infrequent and patients are at various stages of readiness for change? While you may not have the resources to answer these questions on your own prior to attending the training, I believe your trainer should be able to substantively address all of these points, and references for the training should be made available.

The next question you can ask in determining the potential benefit of a training is how the treatment taught compares to other existing empirically supported treatments (ESTs). In my opinion, if you know one therapy that works for 20% of your patients, or five therapies that each work but only for the same 20% of patients, you have not really made any gains – certainly not from the other 80% of patients’ perspective! So if you already have training in an EST for the same issue, consider whether this training confers any additional benefit beyond your existing skillset. An approach that works for a different segment of the population, or that covers a treatment more acceptable to some patients, offers a benefit as well.

One personal consideration I would add in determining a workshop or training’s value is, “Will providing this treatment enhance my patients’ sense of self-efficacy?” Let’s say a new supplement were found to be just as effective as Cognitive-Behavioral Therapy for Insomnia (spoiler alert: currently, CBTI is the recommended gold-standard treatment for chronic insomnia, and supplements have not been found to be equivalently effective). Both treatments hypothetically would improve sleep for the same number of patients. I would still prefer CBTI, as it teaches patients skills to manage their own sleep in perpetuity rather than promoting reliance on something external. For instance, what would happen if the stores all ran out of that supplement? By the way, that something external could also be the therapist. If a patient depends on the therapist alone for improvement, it seems they will have less confidence and more difficulty navigating future life events. In which case, I ask again, am I really offering my patients a benefit? I prefer treatments where I can teach patients the skills to successfully treat themselves.

If you read the last paragraph wondering what my basis is for making these “gold standard” claims about CBTI and where I got my information on supplements, kudos! Your critical-analysis approach will definitely help you evaluate training opportunities. By the way, those references are Qaseem et al. (2016) and the AASM guideline (Sateia et al., 2017). You may also have wondered whether I should have asked not only how many patients a treatment would help, but in which ways and by how much. Outstanding!

We’re still not done, so hang in there. Once we determine that a training covers an EST that offers benefit in comparison to existing options, we must evaluate the presenter. Presenter personality attributes frequently come to mind: Is the presenter engaging? Energetic? Humorous? Flexible? Responsive? These are good qualities, although you may have no way to assess them before signing up for a training. Still, when I attend a workshop I often think of many questions, and I would at least like to know the presenter can answer them. We want a presenter, then, who is a subject matter expert and an accurate, reliable source of information. I suggest, perhaps somewhat provocatively, that a person can write dozens of opinion-based or anecdotal books on a topic, or have dozens of years of patient experience (facts I have seen advertised in numerous continuing education flyers), and still not deliver a factually based presentation. Give more weight instead to the presenter’s relevant training, including graduate school and post-graduate work, their degree and credentials, participation in professional organizations, peer-reviewed presentations or publications, engagement in research or program development, and any other evidence of a scientific mindset. Keep in mind that in a training, we are to a large extent putting our trust in the presenter’s evaluation of the literature and interpretation of the treatment.

Let me guess: you are probably wondering if I practice what I preach. How do CDP’s trainings compare? To give you a little behind-the-scenes peek at how we develop our workshops and programs, we may spend months or even years on a program development effort. This includes an exhaustive literature search, review of epidemiological datasets, collaboration with our network of colleagues, and critical analysis of findings. This analysis informs the learning objectives for the training – the essential elements to learn and skills to master – which in turn guide the material presented. We have a hard-working review team, including our curriculum developer and research editor/librarian. Once we develop a training, our program evaluation team designs assessments to evaluate outcomes and acceptability at every training event. Every year, our training events are reviewed in light of program evaluation findings as well as new research and developments. Moreover, we also offer post-workshop consultation and resources.

As for what I bring to the table, I can only promise that I will continually ask myself what I have to offer trainees. My CDP colleagues and I stay engaged in the field, consume and participate in research, and seek out trainings from other subject matter experts. We also highly value input from our trainees via post-training feedback surveys. From here on, every continuing education advertisement I see will remind me to provide each training to the best of my ability.

In short, we should all carefully consider both the research behind the treatment presented as well as the attributes of the presenter when signing up for a training. When I apply a critical lens to our trainings and faculty at CDP, I am confident there is a lot on the table for attendees. Just don’t take my word for it; take a look for yourself!

The opinions in CDP Staff Perspective blogs are solely those of the author and do not necessarily reflect the opinion of the Uniformed Services University of the Health Sciences or the Department of Defense.

Diana Dolan, Ph.D., CBSM, is a clinical psychologist serving as a Military Behavioral Health Psychologist at the Center for Deployment Psychology (CDP) at the Uniformed Services University of the Health Sciences in Bethesda, Maryland.

References:

Qaseem, A., Kansagara, D., Forciea, M. A., Cooke, M., & Denberg, T. D. (2016). Management of chronic insomnia disorder in adults: A clinical practice guideline from the American College of Physicians. Annals of Internal Medicine, 165(2), 125–133.

Sateia, M. J., Buysse, D. J., Krystal, A. D., Neubauer, D. N., & Heald, J. L. (2017). Clinical practice guideline for the pharmacologic treatment of chronic insomnia in adults: An American Academy of Sleep Medicine clinical practice guideline. Journal of Clinical Sleep Medicine, 13(2), 307–349.
