Screening Tools
Used appropriately as part of a broader assessment, screening tools give clinicians a common language and objective metric. They provide a consistent approach to testing for the presence or absence of a disorder and help patients receive effective treatment.
We've collected some of the more commonly used public-domain screening tools. These self-report measures are sensitive and research-supported. The generic measures cover a broad range of psychiatric or substance use disorders, while the specific measures target a particular disorder.
Click on the links below to download the screening tools as PDFs. Be sure to download the Guide for Using the Screening Measures.
Generic measures | Specific measures
---|---
Modified Mini Screen | Center for Epidemiologic Studies Depression Scale
Mental Health Screening Form III | PTSD Checklist
CAGE Adapted to Include Drugs | Social Interaction Anxiety Scale
Simple Screening Instrument |
Abstract
Evidence-based assessment has received little attention despite its critical importance to the evidence-based practice movement. Given the limited resources in the public sector, it is necessary for evidence-based assessment to utilize tools with established reliability and validity that are free, easily accessible, and brief. We review tools that meet these criteria for the most prevalent youth and adult mental health disorders in order to provide a clinical guide and reference for the selection of assessment tools for public sector settings. We also discuss recommendations for how to move the evidence-based assessment agenda forward.
The thorniest challenge facing the mental health field is the dissemination and implementation (DI) of evidence-based practices (EBPs) in community settings (). EBPs refer to “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (, p. 1). Despite the proliferation of EBPs for both children and adults suffering from psychosocial difficulties (), these treatments are not widely available in community settings, where the majority of individuals receive services (President's New Freedom Commission on Mental Health, 2003). Implementation science focuses on determining how to most effectively transmit knowledge about EBPs (i.e., dissemination) and how to use strategies that allow for increased adoption of such treatments (i.e., implementation; ). The desired result of implementation science is to ensure that community clinicians provide EBPs to youths and adults, with the ultimate goal of improved quality of care. One aspect of this pursuit that has to date received little attention is evidence-based assessment (EBA), a critical foundational component of EBPs (; ).
The scope of EBA is twofold, encompassing both the process through which assessment is conducted and the instruments utilized for evaluation (). This review focuses on the latter (i.e., the instruments used for evaluation). We first briefly highlight the importance of EBA in the context of EBP. Assessment is inherently a decision-making task fraught with the biases that plague clinical decision-making (; Garb, 1998). For example, clinicians are subject to cognitive heuristics and biases such as confirmatory bias (i.e., preferentially seeking evidence consistent with an initial conceptualization at the cost of considering emerging contradictory information; ). These biases may be more avoidable with a systematic, empirically based approach to assessment. An accurate diagnosis is an implicit prerequisite to the selection of EBPs, which are largely organized by specific disorders. Moreover, diagnostic categories are the common language through which we think about, question, and communicate about research findings and clinical problems. Without accurate assessments yielding accurate diagnoses, we may widen the research-practice communication gap (). There is also some evidence that accurate diagnosis is associated with better treatment outcomes (; ). Finally, emerging evidence suggests that simply tracking progress during treatment and providing feedback to clinicians results in better treatment outcomes (; ). Therefore, EBA is critical to any evidence-based treatment approach. Given the importance of EBA, two special issues of peer-reviewed journals have to date focused on EBA in adult and youth populations: see the special issues of Psychological Assessment () and the Journal of Clinical Child and Adolescent Psychology (). These special issues have resulted in recommendations on EBA for a variety of disorders, including youth and adult anxiety (; ), adult depression (), youth disruptive behavior disorders (), and youth bipolar disorder ().
Although these reviews have led to important recognition of EBA and to preliminary guidelines, they have not always been applicable to low-resource mental health settings such as those in the public sector because they have featured resource-intensive ways to engage in EBA. In the pages that follow, we identify and address issues related to the use of standardized tools in low-resource mental health settings. The challenge of identifying which standardized instruments to use in the public sector is complicated by the sheer volume of assessment methods and processes and by the many purposes assessment can serve relative to treatment (). Treatment providers in agencies in public settings must often contend with high workload, poor financial compensation, limited time, and intense demand for resources (). Assessments must not add unnecessarily to the paperwork burden for providers and agencies, lest the cost, time, and resource requirements of EBA become barriers that outweigh the potential benefits (). Given the known barriers to implementation of EBPs in community settings and our desire to increase EBA in the public sector, assessments must be brief, free or low cost, validated for use in multiple populations, particularly ethnic minority and low socioeconomic status individuals, and straightforward to administer, score, and interpret (). These recommendations are echoed by public health researchers, who recommend that for standardized assessment instruments to be usable, they must be important to stakeholders in addition to researchers, low burden to administer, broadly applicable, sensitive to change, and represent constructs that are actionable (i.e., the clinician or patient can do something about them; ).
Accordingly, the goal of this paper is to conduct a review of EBA instruments for the most prevalent mental health disorders in youths and adults that meet the criteria delineated above. We focus on instruments that can be used for screening (i.e., identifying those at risk for a disorder), diagnosis (i.e., identifying those who meet DSM criteria), and/or treatment monitoring and evaluation (i.e., evaluating the success of treatment or interim response to treatment; Hunsley & Mash, 2008). We hope this manuscript can serve as a clinical guide and reference for the selection of assessment instruments for low-resource mental health settings.
Methods
Search Methods
We searched PsycINFO, PubMed, and Google Scholar using this search term as our template: (“disorder name or type” or “mental health”) AND (instrument OR survey OR questionnaire OR measure OR assessment). For “disorder name or type”, we used the following terms: “trauma,” “trauma exposure,” “depression,” “anxiety,” “obsessive-compulsive disorder,” “panic,” “worry,” “generalized anxiety disorder,” “eating disorder,” “anorexia nervosa,” “bulimia nervosa,” “suicide,” “suicidality,” “self-injurious,” “schizophrenia,” “psychosis,” “personality disorders,” “borderline personality disorder,” “conduct disorder,” “oppositional defiant disorder,” “attention-deficit disorder,” “bipolar,” “mania,” “quality of life,” “functioning,” and “general functioning.” For disorders that could apply to both youth and adults (e.g., anxiety), we inserted “child,” “youth,” or “adolescent” in front of the disorder name or type when searching for youth-specific measures. We also searched for adolescent versions of all child and adult measures identified in our search. We employed a snowball sampling technique in which we searched the reference sections of located articles for potentially eligible measures. Also, due to its specific relevance, a textbook referencing EBA instruments was searched by hand for relevant measures (Hunsley & Mash, 2008). Finally, we reached out to experts to ensure that we did not miss any instruments. Specifically, the first author queried members of the Association for Behavioral and Cognitive Therapies (ABCT) via the ABCT members’ listserv and engaged in conversations with experts about measures they had used previously in studies. We also included measures we have used in previous studies in low-resource settings.
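As an illustration of how the template above expands into concrete queries, the following Python sketch builds the boolean search strings. The term lists are abridged and the helper function is ours, not part of the original search procedure.

```python
# Illustrative sketch: expanding the search template described above into query strings.
DISORDER_TERMS = [
    "trauma", "depression", "anxiety", "obsessive-compulsive disorder",
    "eating disorder", "suicide", "schizophrenia", "bipolar", "quality of life",
]  # abridged subset of the terms listed in the text
YOUTH_PREFIXES = ["child", "youth", "adolescent"]
INSTRUMENT_CLAUSE = "(instrument OR survey OR questionnaire OR measure OR assessment)"

def build_queries(term: str, youth: bool = False) -> list[str]:
    """Return one query per prefix; adult searches use the bare disorder term."""
    prefixes = YOUTH_PREFIXES if youth else [""]
    return [
        f'("{(p + " " if p else "")}{term}" OR "mental health") AND {INSTRUMENT_CLAUSE}'
        for p in prefixes
    ]

if __name__ == "__main__":
    for query in build_queries("anxiety", youth=True):
        print(query)  # e.g., ("child anxiety" OR "mental health") AND (instrument OR ...)
```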
Inclusion and Exclusion Criteria
We utilized the following criteria when deciding whether or not to include measures: we required that the measures be free, easily accessible via the Internet or from the author of the measure, brief (fewer than 50 items), have established reliability and validity, and be relevant to the most prevalent mental health disorders (e.g., anxiety, depression, trauma-associated disorders, oppositional behavior disorders; ; ). We crafted these criteria based on a recent paper encouraging the use of pragmatic measures (). Specifically, that work recommends that instruments be important to stakeholders, low burden to administer, broadly applicable, sensitive to change, and measure actionable constructs. Our inclusion criteria map onto these recommendations explicitly. The instruments we included: (a) are of importance to stakeholders in that they meet the need for outcome assessment, a growing reality and requirement in many public systems; (b) are low burden to administer because they have fewer than 50 items; (c) are broadly applicable because they are appropriate for high prevalence conditions; (d) are sensitive to change when intended to be used as progress monitoring instruments; and (e) measure actionable constructs, such as symptoms of a mental health disorder that are amenable to change through treatment.
See Figure 1 for the number of instruments that were identified, reviewed, included, and excluded. Two hundred sixty-four instruments (134 adult, 130 youth) were initially located: 25 adult and 54 youth instruments were excluded because they had a financial cost associated with them; 15 adult and nine youth instruments were excluded because they could not be accessed (e.g., they were only available through journal articles requiring a library subscription); 18 adult and 12 youth instruments were excluded due to length (i.e., more than 50 items); 11 adult and 10 youth instruments were excluded due to inadequate psychometrics (e.g., no information provided on reliability and/or validity); and 36 adult and 25 youth instruments were excluded for being too specific or pertaining to a low base rate disorder (e.g., an instrument to diagnose personality disorders in incarcerated adolescent males; an instrument to rate trichotillomania symptoms). In all, 49 instruments (29 adult, 20 youth) were included.
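For readers who want to check the flow of instruments, this short illustrative snippet reconciles the counts reported above; the exclusion-category labels are our own shorthand, not labels from Figure 1.

```python
# Illustrative count reconciliation for the instrument flow described above.
located = {"adult": 134, "youth": 130}
excluded = {
    "adult": {"cost": 25, "inaccessible": 15, "too_long": 18, "psychometrics": 11, "too_specific": 36},
    "youth": {"cost": 54, "inaccessible": 9,  "too_long": 12, "psychometrics": 10, "too_specific": 25},
}

total_included = 0
for group, n_located in located.items():
    n_included = n_located - sum(excluded[group].values())
    total_included += n_included
    print(group, n_included)   # adult -> 29, youth -> 20

print("total", total_included)  # 49, matching the text
```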
Instrument Classification
Given that instruments can serve multiple purposes (i.e., diagnosis, screening, and treatment monitoring/evaluation), we elected to classify the manner in which each instrument could be used. Instruments may be used for: (a) diagnosis: to determine “the nature and/or cause of the presenting problem”; (b) screening: to identify “those who are at risk...or who might be helped by further assessment or intervention”; and/or (c) treatment monitoring and evaluation: to “track changes in symptoms and functioning” or determine the “effectiveness...of the intervention” (Hunsley & Mash, 2008, p. 6). An instrument could be designated as meeting all three criteria. We operationally defined instruments as appropriate for diagnosis if they were created to map onto DSM criteria. In some cases, the authors of an instrument stated that it was explicitly not meant to be used for diagnosis (e.g., eating disorder instruments); however, to be consistent, we included any measure that mapped onto DSM criteria as meeting the “diagnosis” definition. Instruments met criteria for screening if the questions queried for symptoms of a mental health disorder or behavioral and/or emotional difficulties. Finally, instruments were classified as treatment monitoring and evaluation instruments if they could be used for screening or diagnosis and data were available on the instrument's sensitivity to change following psychotherapy or psychotropic medication.
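The classification rules above can be summarized in a small data structure. The sketch below is a hypothetical encoding of those rules for illustration, not software used in the review.

```python
# Hypothetical encoding of the three-way classification rules described above.
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    maps_onto_dsm_criteria: bool   # operational rule for "diagnosis"
    queries_symptoms: bool         # operational rule for "screening"
    sensitive_to_change: bool      # documented change following treatment

    def classifications(self) -> set[str]:
        uses = set()
        if self.maps_onto_dsm_criteria:
            uses.add("diagnosis")
        if self.queries_symptoms:
            uses.add("screening")
        # Treatment monitoring requires screening or diagnosis plus sensitivity to change.
        if uses and self.sensitive_to_change:
            uses.add("treatment monitoring/evaluation")
        return uses

print(Instrument("GAD-7", True, True, True).classifications())
```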
Reliability and Validity
Because methods and metrics to assess validity were not consistent across measures (e.g., concurrent validity, convergent validity, divergent validity), it was not possible to apply a validity coding scheme across instruments. Therefore, Appendices A and B summarize any evidence of validity as presented in the original psychometric papers. For reliability data, specifically internal consistency, inter-rater reliability, and test-retest reliability, we classified reliability as adequate, good, or excellent based on the criteria set forth by Hunsley and Mash (2008). These criteria are reviewed in Table 1.
Table 1
Criteria | Preponderance of evidence indicates: |
---|---|
Internal Consistency | |
Adequate | α values of .70-.79 |
Good | α values of .80-.89 |
Excellent | α values ≥ .90 |
Inter-rater reliability | |
Adequate | k values of .60-.74, or Pearson correlation or intraclass correlation values of .70-.79 |
Good | k values of .75-.84, or Pearson correlation or intraclass correlation values of .80-.89 |
Excellent | k values ≥ .85, or Pearson correlation or intraclass correlation values ≥ .90 |
Test-retest reliability | |
Adequate | test-retest correlations of at least .70 over a period of several days to several weeks |
Good | test-retest correlations of at least .70 over a period of several months |
Excellent | test-retest correlations of at least .70 over a period of a year or longer |
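To make the Table 1 thresholds concrete, the following sketch encodes them as simple lookup functions; the function names and the "below adequate" label are ours, added for illustration only.

```python
# Illustrative encoding of the Table 1 reliability criteria (Hunsley & Mash, 2008).
def rate_internal_consistency(alpha: float) -> str:
    if alpha >= 0.90:
        return "excellent"
    if alpha >= 0.80:
        return "good"
    if alpha >= 0.70:
        return "adequate"
    return "below adequate"

def rate_interrater(kappa: float | None = None, r: float | None = None) -> str:
    """Rate inter-rater reliability from kappa or from a Pearson/intraclass correlation."""
    if kappa is not None:
        return ("excellent" if kappa >= 0.85 else
                "good" if kappa >= 0.75 else
                "adequate" if kappa >= 0.60 else "below adequate")
    if r is not None:
        return ("excellent" if r >= 0.90 else
                "good" if r >= 0.80 else
                "adequate" if r >= 0.70 else "below adequate")
    raise ValueError("Provide kappa or r.")

def rate_test_retest(r: float, interval: str) -> str:
    """interval: 'days-weeks', 'months', or 'year+'; correlations below .70 are below adequate."""
    if r < 0.70:
        return "below adequate"
    return {"days-weeks": "adequate", "months": "good", "year+": "excellent"}[interval]

print(rate_internal_consistency(0.84), rate_interrater(kappa=0.88), rate_test_retest(0.75, "months"))
```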
Results
Twenty-nine adult and 20 youth instruments were identified. All instruments are free, can be accessed through a website or by emailing the author, and contain fewer than 50 items. Additionally, all have reliability and validity information available. See Tables 2 and 3 for a list of all instruments and selected information (i.e., number of items, age range, sensitivity to change, reporter, and classification). More in-depth descriptions, including reliability and validity data, as available, are presented in the appendices (see Appendices A and B).
Table 2
Adult Instruments
Measure | Where to obtain | Number of Items | Ages | Reporter | Sensitive to change | Screening | Diagnosis | Tx Monitoring & Evaluation
---|---|---|---|---|---|---|---|---
Anxiety | ||||||||
The Clinically Useful Anxiety Outcome Scale (CUXOS) | http://www.outcometracker.org | 20 | 18+ | S | X | X | X | |
Generalized Anxiety Disorder Screener (GAD-7) | http://www.phqscreeners.com | 7 | 18+ | S | X | X | X | X |
Hamilton Rating Scale for Anxiety (HAM-A) | http://www.outcometracker.org | 15 | 18+ | C | X | X | X | |
Liebowitz Social Anxiety Scale Clinician-Report (LSAS-CR) | http://healthnet.umassmed.edu/mhealth/LiebowitzSocialAnxietyScale.pdf | 24 | 18+ | S and C | X | X | X | |
Liebowitz Social Anxiety Scale Self-Report (LSAS-SR) | http://asp.cumc.columbia.edu/SAD/ | | | | | | | |
Panic Disorder Severity Scale (PDSS) | http://www.outcometracker.org | 7 | 18+ | C | X | X | X | X |
Fear Questionnaire (FQ) | http://www.outcometracker.org | 24 | 18+ | S | X | X | ||
Penn State Worry Questionnaire (PSWQ) | http://www.outcometracker.org | 16 | 18+ | S | X | X | X | |
Social Phobia Inventory (SPIN) | http://www.psychtoolkit.com | 17 | 18+ | S | X | X | X | |
Worry and Anxiety Questionnaire (WAQ) | http://www.psychology.concordia.ca/fac/dugas/downloads/en/WAQ.pdf | 11 | 18+ | S | X | X | X | X
Depression | ||||||||
The Clinically Useful Depression Outcome Scale (CUDOS) | http://www.outcometracker.org | 18 | 18+ | S | X | X | X | X |
Hamilton Rating Scale for Depression (HAM-D) | http://www.outcometracker.org | 17 | 18+ | C | X | |||
The Inventory of Depressive Symptoms and the Quick Inventory of Depressive Symptoms (IDS and QIDS) | http://www.ids-qids.org | 30/16 | 18+ | S and C | X | X | X | X
Patient Health Questionnaire-9 (PHQ-9) | http://www.phqscreeners.com/ | 9 | 18+ | S | X | X | X | X |
Eating Disorders | ||||||||
Eating Disorder Diagnostic Scale (EDDS) | http://homepage.psy.utexas.edu/homepage/group/sticelab/scales/#edds | 22 | 18+ | S | X | X | X | X |
Sick, Control, One, Fat, Food Screening Tool (SCOFF) | http://www.marquette.edu/counseling/documents/AQuickAssessmentforEatingConcerns.pdf | 5 | 18+ | S | X | | | |
Mania | ||||||||
Altman Self-Rating Mania Scale (ASRM) | http://www.cqaimh.org/pdf/tool_asrm.pdf | 5 | 18+ | S | X | X | X | |
Bech-Rafaelsen Mania Scale (MAS) | http://opapc.com/images/pdfs/MRS.pdf | 11 | 18+ | C | X | X | X | |
Young Mania Rating Scale (YMRS) | http://dcf.psychiatry.ufl.edu/files/2011/05/Young-Mania-Rating-Scale-Measure-with-background.pdf | 11 | 18+ | C | X | X | X | |
Overall Mental Health | ||||||||
National Institutes of Health Patient Reported Outcomes Measurement Information System (PROMIS) | https://www.assessmentcenter.net/promisforms.aspx | 4-30 | 18+ | S | X | |||
Patient Health Questionnaires (PHQ) | http://www.phqscreeners.com/ | 11 | 18+ | S | X | X | X | X |
Recovery Assessment Scale (RAS) | http://www.power2u.org/downloads/pn-55.pdf | 41 | 18+ | S and C | X | X | ||
Personality Disorders | ||||||||
Borderline Evaluation of Severity over Time (BEST) | http://psychiatry.ucsd.edu/borderlineServices.html | 15 | 18+ | S | X | X | X | X
Suicidality | ||||||||
Columbia-Suicide Severity Rating Scale (C-SSRS) | http://www.cssrs.columbia.edu | 20 | 18+ | C | X | X | NA* | X
The Suicide Behaviors Questionnaire - Revised (SBQ-R) | http://www.integration.samhsa.gov/images/res/SBQ.pdf | 4 | 18+ | S | X | NA* | |
Trauma | ||||||||
Impact of Event Scale-Revised (IES-R) | daniel.weiss@ucsf.edu | 22 | 18+ | S | X | | | |
Los Angeles Symptom Checklist (LASC) | dfoy@pepperdine.edu | 43 | 18+ | S | X | X | |
The Post-Traumatic Stress Disorder Checklist - Civilian Version (PCL-C) | http://www.ptsd.va.gov | 17 | 18+ | S | X | X | ||
The Trauma History Screen (THS) | http://www.istss.org/AssessmentResources/5347.htm | 14 | 18+ | S | X | |||
The Trauma History Questionnaire (THQ) | http://ctc.georgetown.edu/toolkit/ | 24 | 18+ | S and C | X |
Note.
S = self, C = clinician, * = not a diagnosable disorder; cannot be a diagnostic tool.
Table 3
Youth Instruments
Measure | Where to obtain | Number of Items | Ages | Reporter | Sensitive to Change | Screening | Diagnosis | Tx Monitoring & Evaluation
---|---|---|---|---|---|---|---|---
Anxiety | ||||||||
Children Yale-Brown Obsessive Compulsive Scale (CY-BOCS) | http://icahn.mssm.edu/research/centers/center-of-excellence-for-ocd/rating-scales | 10 | 6-17 | P | X | X | X | |
Penn State Worry Questionnaire for Children (PSWQ-C) | http://www.childfirst.ucla.edu/Resources.html | 16 | 7-17 | S | X | |||
Revised Children's Anxiety and Depression Scale Youth and Parent Versions (RCADS/RCADS-P) | http://www.childfirst.ucla.edu/Resources.html | 47 | 6-18 | S and P | X | X | X | X |
Screen for Child Anxiety Related Emotion Disorders (SCARED) | http://psychiatry.pitt.edu/sites/default/files/Documents/assessments/SCARED%20Child.pdf | 41 | 6-18 | S and P | X | X | X |
Spence Children's Anxiety Scale (SCAS) | http://www.scaswebsite.com | 44 | 7-19 | S and P | X | X | X | X |
Depression | ||||||||
Center for Epidemiologic Studies Depression Scale for Children (CES-DC) | http://www.brightfutures.org/mentalhealth/pdf/professionals/bridges/cesdc.pdf | 20 | 6-23 | S | X | X | X |
Depression Self Rating Scale for Children (DSRSC) | http://www.scalesandmeasures.net/files/files/Birleson%20Self-Rating%20Scale%20for%20Child%20Depressive%20Disorder.pdf | 18 | 8-14 | S | X | | | |
Disruptive Behavior | ||||||||
Child and Adolescent Disruptive Behavior Inventory-Parent & Teacher Version (CADBI) | http://measures.earlyadolescence.org/measures/view/40/ | 25 | Not specified | P and T | X | X | |
Eating Disorder | ||||||||
Child Eating Attitudes Test (ChEAT) | http://www.1000livesplus.wales.nhs.uk/sitesplus/documents/1011/ChEAT.pdf | 26 | 8-13 | S | X | |||
Eating Attitudes Test-26 (EAT-26) | http://eat-26.com | 26 | 16-18 | S and C | X | |||
Mania | ||||||||
Parent Version-Young Mania Rating Scale (P-YMRS) | http://dcf.psychiatry.ufl.edu/files/2011/05/Young-Mania-Rating-Scale-Measure-with-background.pdf | 11 | 5-17 | C | X | | | |
Child Mania Rating Scale - Parent (CMRS-P) | http://www.dbsalliance.org/pdfs/ChildManiaSurvey.pdf | 21 | 5-17 | P | X | X | X | X
Overall Mental Health | ||||||||
Brief Problem Checklist (BPC) | http://www.childfirst.ucla.edu/Resources.html | 12 | 7-13 | S and P | X | X | X | |
The Ohio Scale-Youth, Parent, and Clinician versions | benogles@byu.edu | 48 | 5-18 | S, P, and C | X | X | |
Peabody Treatment Progress Battery (PTPB) | http://peabody.vanderbilt.edu/research/center-evaluation-program-improvement-cepi/rea/ptpb_2nd_ed_downloads.php | 11 | 11-18 | S, P, and C | X | X | X |
Pediatric Symptom Checklist and Pediatric Symptom Checklist-Youth Report (PSC & Y-PSC) | http://www.massgeneral.org/psychiatry/services/pschome.aspx | 35 | 3-18 | S and P | X | X | X |
Strength and Difficulties Questionnaire (SDQ) | http://www.sdqinfo.org/a0.html | 25 | 3-16 | S, P, and C | X | X | X |
Youth Top Problems (TP) | http://www.wjh.harvard.edu/~jweisz/pdfs/2011c.pdf | 3 | 7-13 | S, P, and C | X | X | |
Trauma | ||||||||
Child PTSD Symptom Scale (CPSS) | foa@mail.med.upenn.edu | 24 | 8-18 | S or C | X | X | X | X
Pediatric Emotional Distress Scale (PEDS) | conway.saylor@citadel.edu | 21 | 2-10 | P | X | X | X |
Note.
S = self, C = clinician, P = parent, T=teacher
Anxiety
Fourteen instruments were identified (9 adult, 5 youth) that assessed symptoms of anxiety.
Adult
Adult instruments ranged in length from 7-24 items. The majority of the adult instruments (7) were disorder specific (e.g., assessing for Generalized Anxiety Disorder; Generalized Anxiety Disorder Screener (GAD-7; )), although two instruments assessed general anxiety (The Clinically Useful Anxiety Outcome Scale (CUXOS; ); Hamilton Rating Scale for Anxiety (HAM-A; )). All of the adult instruments could be used as screening and treatment monitoring/evaluation tools. Only three instruments could be used as diagnostic tools (GAD-7; Panic Disorder Severity Scale (PDSS; ); Social Phobia Inventory (SPIN; )).
Youth
Youth instruments ranged in length from 10-47 items and were intended for administration with youths 6-19. The majority of the youth instruments (3) assessed general anxiety (Revised Children's Anxiety and Depression Scale Youth and Parent Versions (RCADS; ); Screen for Child Anxiety Related Emotion Disorders (SCARED; ), Spence Children's Anxiety Scale (SCAS; )), although two instruments were disorder specific (Children Yale-Brown Obsessive Compulsive Scale (CY-BOCS; ); Penn State Worry Questionnaire for Children (PSWQ-C; )). All of the youth instruments could be used as screening tools. Only two instruments could be used as diagnostic tools (RCADS; SCAS). Four instruments could be used for treatment monitoring/evaluation (CY-BOCS, RCADS, SCARED, SCAS).
Depression
Six instruments were identified (4 adult, 2 youth) that assessed symptoms of depression.
Adult
Adult instruments ranged in length from 9-30 items. All of the adult instruments could be used as screening tools. Three instruments could be used as diagnostic tools and treatment monitoring/evaluation tools (The Clinically Useful Depression Outcome Scale (CUDOS, ); The Inventory of Depressive Symptoms/Quick Inventory of Depressive Symptoms (IDS/QIDS; ; ; ); Patient Health Questionnaire-9 (PHQ-9; )).
Youth
Youth instruments ranged in length from 18-20 items and were intended for administration in youths 6-23. All of the youth instruments could be used as screening tools. None were appropriate for diagnostic purposes. Only one tool could be used for treatment monitoring and evaluation (Center for Epidemiologic Studies Depression Scale for Children (CES-DC; )).
Disruptive Behavior Disorders
One instrument was identified that assessed symptoms of disruptive behavior disorders.
Youth
One 25-item instrument, the Child and Adolescent Disruptive Behavior Inventory-Parent & Teacher Version (CADBI; Burns, Taylor, & Rusby, 2001a; 2001b), was identified. This tool can be used as a screening and diagnostic tool, but not for treatment monitoring and evaluation.
Eating disorders
Four instruments were identified (2 adult, 2 youth) that assessed symptoms of eating disorders.
Adult
Adult instruments ranged in length from 5-22 items. Both adult instruments could be used as screening tools; only the Eating Disorder Diagnostic Scale (EDDS; ) could be used as a diagnostic and treatment monitoring/evaluation tool.
Youth
Both youth instruments were 26 items and were intended for administration in youths 8-18. Both instruments could be used as screening tools. Neither was appropriate for diagnostic or treatment monitoring and evaluation.
Mania
Five instruments were identified (3 adult, 2 youth) that assessed symptoms of mania.
Adult
Adult instruments ranged in length from 5-11 items. All adult instruments could be used as screening and treatment monitoring/evaluation tools. None of the tools could be used for diagnostic purposes.
Youth
Youth instruments ranged from 11-21 items and were intended for administration in youths 5-17. Both youth instruments could be used as screening tools; only the Child Mania Rating Scale-Parent (CMRS-P; ) could be used for diagnostic and treatment monitoring/evaluation purposes.
Overall Mental Health
Nine instruments were identified (3 adult, 6 youth) that fell under the category of “overall mental health.”
Adult
Adult instruments ranged in length from 4-41 items. Two adult instruments could be used as screening tools (National Institutes of Health Patient Reported Outcomes Measurement Information System (PROMIS; NIH PROMIS, 2013); Patient Health Questionnaire (PHQ; )). The PHQ could also be used as a diagnostic tool. Two instruments could be used for treatment monitoring and evaluation (PHQ, Recovery Assessment Scale (RAS; Giffort, Schmook, Woody, Vollendorf, & Gervain, 1995)).
Youth
Youth instruments ranged from 11-48 items and were intended for administration in youths 3-18. Four of the instruments could be used as screening tools (Brief Problem Checklist (BPC; ), Peabody Treatment Progress Battery (PTPB; Bickman et al., 2010), Pediatric Symptom Checklist/Youth Report (PSC & Y-PSC; ), and the Strength and Difficulties Questionnaire (SDQ; )). None of the instruments could be used as diagnostic tools. All instruments could be used for treatment monitoring and evaluation.
Personality Disorders
One measure was identified that assessed personality disorders in adults; no measures were identified for youths, which is appropriate given that personality disorders are not diagnosed in those under 18 years of age.
Adult
The Borderline Evaluation of Severity over Time (BEST; ) is a 15-item instrument that is a screening, diagnostic, and treatment monitoring/evaluation tool for borderline personality disorder. Tools for other personality disorders were not identified.
Suicidality
Two adult instruments were identified that assessed suicidality; no child instruments were identified.
Adult
Adult instruments ranged in length from 4-20 items. All adult instruments could be used as screening tools. One instrument could be used for treatment monitoring and evaluation (The Suicide Behaviors Questionnaire-Revised (SBQ-R; )).
Youth
We were not able to identify any measures that met our criteria.
Trauma
Seven instruments were identified (5 adult, 2 youth) that assessed symptoms of trauma.
Adult
Adult instruments ranged in length from 14-43 items. All adult instruments could be used as screening tools. None of the tools could be used for treatment monitoring and evaluation. Two instruments could be used for diagnostic purposes (Los Angeles Symptom Checklist (LASC; King, King, Leskin, & Foy, 1995); The Post-Traumatic Stress Disorder Checklist-Civilian Version (PCL-C; Weathers, Litz, Herman, Huska, & Keane, 1993)).
Youth
Youth instruments ranged from 21-24 items and were intended for administration in youths 2-18. Both youth instruments could be used as screening tools; only the Child PTSD Symptom Scale (CPSS; ) could be used for diagnostic purposes. Both instruments could be used for treatment monitoring/evaluation purposes.
Discussion
As evidenced by this review, there are multiple assessment tools that fit the needs of clinicians in low-resource mental health settings; these measures are free, easily accessible via the Internet or email, brief, have established psychometric properties, and are relevant for the most prevalent mental health disorders. It is our hope that community clinicians will use this compendium to select the most appropriate measure for their general population and specific clients. We have identified 29 adult and 20 youth measures that can be used as part of an EBA toolkit for a heterogeneous group of clients. We also believe that this manuscript can provide a valuable resource for implementation scientists interested in promoting the use of feasible EBA in community settings.
This review also provides important insights about where assessment tools are most sorely needed. Whereas instruments to measure anxiety symptoms in adults and youths were well represented, instruments to assist in diagnosis and treatment monitoring for youths with depressive symptoms were sparse. Only one instrument for disruptive behavior disorders was identified, and it can be used only for screening and/or diagnosis, not treatment monitoring/evaluation, suggesting a need for instrument development and validation. Diagnostic and treatment monitoring and evaluation instruments for youths with eating disorders were also unavailable. Of great concern, tools assessing suicidality in youths were unavailable. Diagnostic tools of overall mental health were also missing for youths. With regard to adult instruments, treatment monitoring and evaluation instruments for trauma were identified as a needed area, as were instruments that assess for personality disorders other than borderline personality disorder.
Some have suggested that providing a laundry list of psychometrically validated measures is not likely to be effective in encouraging use of EBA on a wider scale (). However, the provision of this list helps lay a foundation for moving the agenda forward for increasing the use of EBA () through the necessary first step of providing access. A few studies have queried mental health clinicians about their use of assessment tools, finding that the primary assessment method used in clinical practice is the unstructured clinical interview (). Clinicians report that barriers to the use of standardized tools include access to measures, time demands, and ease of administration and scoring (; ; ). These practical concerns are particularly pressing for fee-for-service clinicians in the public sector. It is our hope that the publication of this collection of measures increases the opportunity for clinicians to quickly access a list of available, free, standardized instruments from which to select a battery for screening, diagnosis, and treatment monitoring and evaluation. Moreover, in concert with sophisticated guidelines on the process of EBA (see Youngstrom, Choukas-Bradley, Calhoun, & Jensen-Doss, this issue), this list has the potential to make an impact on clinicians, clients, and policy-makers in the public sector wishing to integrate assessment and monitoring tools into their toolkit, as well as to highlight areas of need for future research.
There are a number of important clinical issues that must be considered within the context of EBA, including: How does a clinician decide which standardized tool to use? Should the tools be general to mental well-being or specific to the presenting problem or disorder? How should these tools inform the diagnostic process and treatment monitoring? Which informants should be included? When is the best time to administer such tools? We have not made recommendations about which measure a clinician should select for a particular presenting problem, in large part because such guidelines would necessarily be complex and are beyond the scope of this manuscript. Several manuscripts in the referenced special issues are dedicated to exploring these issues for particular disorders. Others present key themes and considerations in the development of EBA guidelines, yet suggest that there is still much work to do in delineating them (). Making an exciting stride forward, Youngstrom and colleagues (this issue) recommend a twelve-step approach, based on evidence-based medicine principles, that can be applied to streamline the assessment process.
There is a general consensus that, prior to treatment, clinicians should select broader assessment tools to cast a wide net regarding the presenting problem of a client, and then use more specific tools as the presenting problem becomes clearer (). To monitor progress over time, specific tools can be used to track client improvement or deterioration. This also speaks to the issue of assessment over time; as has been stated elsewhere, “ongoing, continuous assessment is needed during the course of treatment” (p. 554). For example, in the case of a youth presenting for treatment, a general screener such as the BPC () can be administered. If particular elevations suggesting anxiety become apparent, an anxiety-specific standardized tool such as the SCARED () can then be administered. Subsequently, this tool can be used on a regular basis (e.g., every 2 weeks) to monitor treatment progress. At the end of treatment, the BPC and SCARED can be administered again to ensure that initial elevations are no longer present.
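A schematic sketch of this broad-to-specific workflow is shown below. The cutoff values, function names, and score ranges are hypothetical placeholders for illustration, not published clinical thresholds.

```python
# Hypothetical sketch of the screen-then-monitor workflow described above.
ANXIETY_ELEVATION_CUTOFF = 5      # placeholder broad-screener (e.g., BPC) elevation
SCARED_CUTOFF = 25                # placeholder SCARED total-score cutoff

def intake(bpc_internalizing: int) -> bool:
    """Return True if the broad screener suggests following up with an anxiety-specific tool."""
    return bpc_internalizing >= ANXIETY_ELEVATION_CUTOFF

def monitor(scared_scores: list[int]) -> str:
    """Summarize repeated (e.g., biweekly) SCARED administrations over treatment."""
    if len(scared_scores) < 2:
        return "insufficient data"
    change = scared_scores[-1] - scared_scores[0]
    status = "improving" if change < 0 else "worsening or flat"
    resolved = scared_scores[-1] < SCARED_CUTOFF
    return f"{status}; elevation {'resolved' if resolved else 'still present'} at last administration"

if intake(bpc_internalizing=7):
    print(monitor([34, 30, 27, 22]))   # scores collected every 2 weeks
```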
Beyond the question of which instrument to use and when to administer it, clinicians are confronted with the question of whom to ask to complete it (). This topic has been explored in great depth in the youth assessment literature, but has received less recognition in the adult assessment literature, despite evidence that, similar to data in youths, there is low cross-informant (e.g., caregivers, spouses) agreement for adults (). Unfortunately, there is little guidance available to help clinicians decide how to weigh informant data in adults. In the youth assessment literature, a plethora of evidence suggests discrepancies among children, parents, and teachers when reporting on youths' psychosocial difficulties (). There are different methods to assess such divergence (see ). In the absence of EBA guidelines, clinicians are encouraged to use the “or rule”: if a youth or parent reports symptoms on a standardized tool, the clinician targets treatment towards those symptoms. The “or rule” increases sensitivity compared with the “and rule,” which requires that both the youth and parent report symptoms ().
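The informant-combination rules can be stated in a few lines of code. The minimal sketch below simply contrasts the two rules for a discrepant youth/parent report; it is an illustration, not a clinical decision aid.

```python
# Minimal illustration of the "or rule" versus the "and rule" for combining informant reports.
def or_rule(youth_endorsed: bool, parent_endorsed: bool) -> bool:
    """Flag the symptom if either informant endorses it (more sensitive)."""
    return youth_endorsed or parent_endorsed

def and_rule(youth_endorsed: bool, parent_endorsed: bool) -> bool:
    """Flag the symptom only if both informants endorse it (more conservative)."""
    return youth_endorsed and parent_endorsed

# Discrepant reports: only the parent endorses the symptom.
print(or_rule(False, True))   # True  -> treated as a treatment target
print(and_rule(False, True))  # False -> symptom would be missed
```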
There are also a number of ethical issues to consider in the context of EBA. Two important questions concern the appropriateness of standardized assessment tools for ethnic/racial minorities and the use of standardized rating scales to make diagnostic determinations. Many standardized assessment tools cited in this manuscript have not been tested in multiple ethnic/racial groups (), and may not be equally valid in assessing psychopathology or diagnostic criteria. In the rare cases in which standardized tools have been compared across different cultures, similarities have been found in the prevalence and presentation of mental health difficulties (; ). More research of this kind is needed given that many of the youths and adults seen in the public sector are ethnic and/or racial minorities. The second issue concerns the use of rating scales as diagnostic tools. The gold-standard diagnostic process is the structured clinical interview (e.g., Structured Clinical Interview for DSM Disorders; SCID; First, Spitzer, Gibbon, & Williams, 1996). The standardized tools presented in this manuscript are not intended as diagnostic tools even if they map onto diagnostic criteria; they are all intended as screeners for potential disorders (sometimes necessitating further assessment) or symptom rating tools. However, the SCID and similar interviews are time-intensive, generally unbillable, and require intensive training for administration. Given these practical concerns, they are not feasible in the public sector. Clinicians in these settings need brief standardized tools that can be used as diagnostic aids (see Youngstrom et al., this issue). It is not clear from the literature how much EBA presently occurs in community mental health; given the practical concerns, the answer is possibly very little. Although future research is necessary to examine this empirical question, it may be better overall for clinicians to be using some EBA tools rather than none at all, and we hope this review will serve as a helpful resource.
There is a critical need to include EBA as part of the process of implementing EBP in community settings. Initial evaluation and ongoing progress monitoring are foundational components of the EBP process; both are expected and routine in other areas of healthcare (Goodman, McKay, & DePhilippis, in press). Use of standardized tools prior to treatment initiation for screening and diagnostic purposes allows clinicians to target treatment and identify appropriate EBPs. The use of standardized tools to monitor and evaluate treatment and provide feedback over the course of treatment can result in improved outcomes in both youths () and adults (; ). Having inexpensive, brief, and easily accessible screening and progress-monitoring tools is an important first step, but by no means the only necessary one, in increasing the use of EBA in community mental health settings.
Several exciting national initiatives will make it easier for clinicians to use standardized tools as part of ongoing practice. The National Institutes of Health (NIH) has developed PROMIS, a set of freely available validated measures of patient-reported health status for physical, mental, and social well-being (http://www.nihpromise.org). Although promising, more work must be conducted on the use of these measures in clinical populations (e.g., youth with anxiety and/or depression) given that they have been primarily used in pediatric populations (e.g., oncology). The National Cancer Institute has sponsored a separate, free repository of available standardized tools to assess various mental and physical conditions (https://www.gem-beta.org/). Further, the NIH now requires that articles published from NIH-funded research be freely available to the public, which increases the likelihood that any new measures created through public funding will become available to practitioners and consumers.
In the following paragraphs, we suggest some important next steps to increase the extent to which EBA is used.
Develop guidelines
While assessment guidelines are available for some disorders, these guidelines often do not take into account the practical constraints facing clinicians working in low-resource mental health settings. Guidelines are needed for general practice and for specific disorders, with consideration of the limited time and other resources available to community clinicians. Specifically, guidelines are needed to help guide clinicians through the decision-making process of which instrument to use, when to use it (e.g., for screening, diagnosis, or treatment monitoring and evaluation), how often to administer it (i.e., frequency), and how to integrate information across instruments in a clinically meaningful manner. For example, prior work () provides a significant step forward in developing a standardized assessment protocol that is of low burden to clinicians and can inform treatment need for youths in public sector settings. More work of this kind is needed.
Develop training protocols to increase expertise in EBA
Another largely ignored issue is the need for clinician training in the use of standardized tools. Without an understanding of how standardized tools can be useful clinically, clinicians may experience them as another administrative burden with little clinical payoff (). One of the largest challenges in the EBP movement has been training the existing workforce in treatments with which they have little familiarity (). To date, such efforts to train clinicians in EBPs have been largely disappointing (). As efforts are made to improve trainings and to understand how the public sector context impacts clinician behavior (e.g., ), an additional consideration will be the provision of training on how to administer standardized tools and use the resulting data in meaningful ways (). An exploration of implementation strategies () that increase the use of standardized tools is an important area of future research.
Develop a frequently updated databank of EBA
The status of standardized assessment tools is constantly in flux, with new tools created and old tools updated on a regular basis. Future efforts to document such tools in a web-based repository such as the American Psychological Association (APA) PracticeOUTCOMES website are ideal; however, this service requires an APA membership. Other websites exist, but none of them offers a comprehensive overview of screening, diagnostic, and treatment monitoring and evaluation instruments for youths and adults (e.g., http://www.psychiatry.org/practice/dsm/dsm5/online-assessment-measures; http://outcometracker.org). A free website similar to this effort, providing an ongoing resource of updated standardized assessment tools by problem area, would greatly move the EBA field forward.
Take advantage of new digital technologies
Most measures are administered using paper and pencil and require time to score and interpret. Current technology makes it easy to develop software that scores measures and provides interpretations for clinicians, reducing clinician burden and increasing the standardization of interpretation. As these technologies become less expensive, clinics could use tablet technology or kiosks to administer measures while clients wait for their appointments. This information could then be transmitted to the clinician in a seamless manner that greatly enhances the accessibility and uniformity of EBA. This may require negotiation with instrument developers regarding the incorporation of instruments into digital formats.
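As a rough illustration of the kind of scoring-and-interpretation software described here, the sketch below sums item responses and maps the total onto severity bands, as might run on a waiting-room tablet or kiosk. The item range and severity bands are hypothetical placeholders, not a published scoring scheme for any specific measure.

```python
# Hypothetical automated-scoring sketch for a brief self-report measure.
from typing import Sequence

SEVERITY_BANDS = [(5, "minimal"), (10, "mild"), (15, "moderate"), (1_000, "severe")]  # placeholder bands

def score_questionnaire(item_responses: Sequence[int]) -> tuple[int, str]:
    """Sum 0-3 item responses and map the total onto a placeholder severity band."""
    if any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("Each item must be scored 0-3.")
    total = sum(item_responses)
    label = next(band for cutoff, band in SEVERITY_BANDS if total < cutoff)
    return total, label

total, label = score_questionnaire([1, 2, 0, 3, 1, 0, 2, 1, 1])
print(f"Total = {total} ({label}); transmit to clinician record")   # Total = 11 (moderate)
```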