In the 1960s and 1970s, Dr. Achenbach collaborated with Dr. Melvin Lewis of the Yale Child Study Center, a child psychiatrist and former editor of the Journal of the American Academy of Child and Adolescent Psychiatry. Drs. Achenbach and Lewis (1971) applied the empirically based approach in new research and laid the groundwork for the Child Behavior Checklist (CBCL).
The conceptual framework for the ASEBA was outlined in relation to the developmental study of psychopathology in the first and second editions of the book Developmental Psychopathology (Achenbach, 1974, 1982).
Based on the framework presented in Developmental Psychopathology, the first CBCL Manual was published in 1983 in collaboration with Dr. Craig Edelbrock, who was then Associate Professor of Psychiatry at the University of Massachusetts Medical School. The CBCL Manual was followed by Manuals for the Teacher’s Report Form (TRF; Achenbach & Edelbrock, 1986) and the Youth Self-Report (YSR; Achenbach & Edelbrock, 1987). The first Manual for a pre-school version of the CBCL was published by Achenbach in 1992.
When parallel ASEBA instruments were used to obtain data from different informants, agreement among informants was usually modest, even though the ratings by each type of informant were reliable and valid. Meta-analyses of many studies using many different instruments revealed mean correlations of .60 between pairs of informants who saw the children they rated in similar contexts (e.g., mothers and fathers; pairs of teachers), .28 between pairs of informants who saw the children in different contexts (e.g., parents versus teachers), and .22 between children’s self-ratings and ratings by others, such as parents and teachers (Achenbach, McConaughy, & Howell, 1987). These modest cross-informant correlations, found in many studies using many different instruments, indicated that no one source of data can serve as a gold standard. Instead, multiple sources are needed to capture variations in children’s functioning from one context to another, as well as variations in perspectives from one rater to another. The need to obtain and coordinate data from multiple informants poses challenges for all assessment procedures.
Meta-analyses have yielded a mean correlation of .45 between self-ratings and informant-ratings of adult behavioral/emotional problems (Achenbach, Krukowski, Dumenci, & Ivanova, 2005). Coupled with findings of very low agreement between diagnoses made from different sources of data (Meyer, 2002), this indicates that multiple sources of data are needed for comprehensive assessment of adults, as well as children.
To meet the cross-informant challenges, major revisions of the CBCL/4-18, TRF, and YSR syndrome scales were made in 1991 (Achenbach, 1991a, 1991b, 1991c, 1991d). Eight cross-informant syndromes were derived from analyses of all three instruments. The cross-informant syndromes reflect patterns of problems that are common to ratings by the different kinds of informants. Instrument-specific versions of the syndromes comprise the specific sets of problem items that operationally define each cross-informant syndrome on each instrument.
Data from nationally representative samples of children were used to construct norms that were age-specific, gender-specific, and instrument-specific. The 1991 editions of the scoring profiles display a child’s score on each scale in relation to norms for the child’s age and gender, as scored from parent, teacher, or self-ratings.
In addition to reflecting patterns of problems derived from ratings by different kinds of informants, the cross-informant syndromes facilitate comparisons between ratings of each child by different kinds of informants. The modest cross-informant correlations found in many studies (documented in the meta-analyses by Achenbach et al., 1987; Achenbach et al., 2005) show that no one source of assessment data can substitute for all others. Comprehensive assessment therefore requires comparisons of data from multiple sources. ASEBA software introduced in 1991 produces side-by-side comparisons of item scores and scale scores obtained from parent, teacher, and self-ratings of each child. These comparisons enable users to quickly see similarities and differences between different raters’ item and scale scores for the child.
To help users evaluate the level of cross-informant agreement, the software prints correlations between ratings by each pair of informants, plus comparisons with correlations between ratings by similar pairs of informants in large reference samples. For example, the correlation between ratings by a child’s mother and teacher is compared with the correlation between ratings found in a large reference sample of mothers and teachers. Cross-informant comparisons were extended to the Child Behavior Checklist for Ages 2-3 (CBCL/2-3) in 1992 (Achenbach, 1992) and to the Caregiver-Teacher Report Form (C-TRF) in 1997 (Achenbach, 1997). The 21st-century ASEBA instruments provide cross-informant comparisons of parallel forms completed by different informants for ages 1½ to 90+.
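The logic of such a comparison can be sketched as follows. This is a hypothetical illustration, not the ASEBA software's actual algorithm: the reference quartiles (`ref_q1`, `ref_q3`) and the item ratings are invented values standing in for a large reference sample of mother-teacher rating pairs.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of item scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def classify_agreement(r, ref_q1, ref_q3):
    """Place an observed cross-informant correlation relative to the
    interquartile range of correlations in a reference sample."""
    if r < ref_q1:
        return "below average"
    if r > ref_q3:
        return "above average"
    return "average"

# Hypothetical item ratings for one child and invented reference quartiles
mother  = [2, 0, 1, 2, 1, 0, 2, 1]
teacher = [1, 0, 1, 2, 0, 0, 1, 1]
r = pearson_r(mother, teacher)
print(classify_agreement(r, ref_q1=0.15, ref_q3=0.45))
```

In this sketch, agreement between the two informants is reported as below, within, or above the range typical of comparable informant pairs in the reference sample.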
Multicultural Family Assessment (2015)
The Multicultural Family Assessment Module (MFAM) enables users to enter and compare data from the CBCL/6-18, TRF, YSR, ASR, and ABCL. This is especially valuable for family-oriented approaches to working with children and parents. Bar graphs display side-by-side comparisons of scores on 7 syndromes and 4 DSM-oriented scales that have counterparts for ages 6-18 and 18-59. The bars are normed by age, gender, type of informant, and society. If users deem it appropriate, they can show parents the bar graphs to help them understand variations among reports by different informants regarding their child’s functioning and their own functioning. The bar graphs also enable parents to see similarities and differences between what is reported for their child and for themselves. This can strengthen therapeutic alliances with parents to change their own behavior as well as their child’s.
Empirically Based Assessment via Observations and Interviews
In addition to data from informants, it is important to obtain data from people who are trained to observe samples of children’s functioning.
The Direct Observation Form (DOF)
To obtain samples of children’s functioning in group contexts such as classrooms, the DOF enables observers to rate problems and on-task behavior based on 10-minute observations (Achenbach, 1986; McConaughy, Achenbach, & Gent, 1988; McConaughy & Achenbach, 2009). To take account of variations in children’s behavior from one occasion to another, the software for scoring the DOF can average item and scale scores over as many as six occasions for each child. To compare each child’s functioning with the functioning of other children in the same context, the DOF software also compares the child’s scores with scores averaged across those of other children observed in the same context.
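The two scoring steps described above (averaging over occasions, then comparing with other observed children) can be sketched in a simplified form. This is an illustrative reconstruction under stated assumptions, not the DOF software itself; the observation values and the 0-3 scoring range are invented for the example.

```python
import statistics

def average_over_occasions(occasions):
    """Average item scores across up to six observation occasions
    (each occasion is a list of item scores for one child)."""
    assert 1 <= len(occasions) <= 6
    return [statistics.mean(items) for items in zip(*occasions)]

def compare_with_peers(child_avg, peer_avgs):
    """Item-by-item difference between the target child's averaged
    scores and the mean of other observed children's averaged scores."""
    peer_mean = [statistics.mean(items) for items in zip(*peer_avgs)]
    return [round(c - p, 2) for c, p in zip(child_avg, peer_mean)]

# Hypothetical item scores from three 10-minute observations of one child,
# plus two other children observed in the same classroom
target = average_over_occasions([[2, 1, 0], [3, 1, 1], [1, 1, 2]])
peers  = [average_over_occasions([[1, 0, 0], [1, 1, 0]]),
          average_over_occasions([[0, 0, 1], [2, 0, 1]])]
print(compare_with_peers(target, peers))
```

Positive differences indicate items on which the target child scored higher than the average of the other children observed in the same context.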
The Semistructured Clinical Interview for Children and Adolescents (SCICA)
To apply empirically based assessment to interviews, Dr. Stephanie McConaughy, who is Emerita Research Professor of Psychiatry at the University of Vermont, and Dr. Achenbach developed the SCICA (McConaughy & Achenbach, 1990, 1994, 2001). The SCICA includes an interview protocol of semistructured questions. The SCICA also includes rating forms on which the interviewer rates (a) the problems reported by the child and (b) the interviewer’s observations of the child during the interview. The interviewer’s ratings are scored on a profile that displays syndromes derived empirically from statistical analyses of the SCICA items. A second profile displays DSM-oriented scales scored from SCICA items.
A video for training interviewers to use the SCICA shows samples of Dr. McConaughy’s interviews with children who manifest various kinds of problems. To learn how to rate children in interviews, trainees can watch the interview excerpts, rate the children on the SCICA rating forms, and enter their ratings in the SCICA software. The software displays profiles, item scores, and correlations that compare the trainees’ ratings with ratings by experienced clinicians. Trainees can then identify areas of disagreement and can view the taped interviews again to hone their skills. Dr. McConaughy’s book, Clinical Interviews for Children and Adolescents: Assessment to Intervention (2nd ed., Guilford Press, 2013), provides extensive guidance and illustrations for clinical interviewing.