Measuring changes in attitudes and behaviors is important to determine whether a program is having the intended effect on youth outcomes (Burlingame et al., 2001). Children’s attitudes and behaviors at program start are compared to those during the program and at program end using the self-report Youth Outcome Questionnaire (Y-OQ) survey. The Y-OQ is a 30-item Likert-type survey designed to measure the impact the program or service has on the child’s functioning (Burlingame et al., 2001). Items are designed to measure symptomatology in the following areas: somatic, social isolation, aggression, conduct problems, hyperactivity/distractibility, and depression/anxiety. In addition, the Attitudes Favorable to Antisocial Behavior scale from the Youth Criminogenic Inventory (Tanana, Davis, & Vanderloo, 2012) is administered when appropriate for the target population.
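Scoring instruments of this kind typically sums Likert responses into subscale and total scores. The sketch below illustrates that general pattern only; the item numbers, subscale mapping, and response range are invented for illustration and are not the Y-OQ’s actual scoring key.

```python
# Hypothetical sketch of Likert-survey scoring: the item-to-subscale mapping
# and response values below are illustrative, NOT the actual Y-OQ key.

# Map each hypothetical item number to its symptom subscale.
SUBSCALES = {
    "somatic": [1, 7, 13],
    "social_isolation": [2, 8, 14],
    "aggression": [3, 9, 15],
}

def score_survey(responses):
    """Sum Likert responses (e.g., 0-4 per item) into subscale and total scores."""
    subscale_scores = {
        name: sum(responses[item] for item in items)
        for name, items in SUBSCALES.items()
    }
    total = sum(subscale_scores.values())
    return subscale_scores, total

# Example responses keyed by item number.
responses = {1: 2, 7: 3, 13: 1, 2: 0, 8: 1, 14: 2, 3: 4, 9: 0, 15: 1}
subscales, total = score_survey(responses)
# subscales["somatic"] == 6, total == 14
```

Repeating this scoring at program start, mid-program, and program end yields the change scores discussed above.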
A growing body of research suggests that “the development of the working alliance…is widely regarded as a critical step in the treatment of adolescents” (Florsheim, Shotorbani, Guest-Warnick, Barratt, & Hwang, 2010, p. 95). Working alliance has been found to predict therapeutic gains (Horvath, 2001). Florsheim et al. (2010) also found that working alliances developed later in treatment (at 3 months or later) were more predictive of treatment progress than alliances developed early in treatment. To this end, Florsheim et al. (2010) modified the Working Alliance Inventory (Horvath & Greenberg, 1989) to fit a residential program’s context, replacing “therapist” with “program staff person” and “therapy” with “program.” Elements of this modified Working Alliance Inventory are used to determine the quality of therapeutic relationships in the youths’ treatment.
The surveys include the Y-OQ (Burlingame et al., 2001) and the modified Working Alliance Inventory (Florsheim et al., 2010; Horvath & Greenberg, 1989). Both are administered online to every child at program entry, mid-program, and program end.
Survey results are provided under the Attitudes/Behavior Change header on the evaluation website. This section of the website includes a chart that highlights how the self-reported attitudes and behaviors change over the course of treatment. Figure 4 illustrates how this information is conveyed to providers on the evaluation website.
Figure 4: Pre-Post Tests
In addition to self-reported changes, the evaluation has created a structure for assessing long-term outcomes such as the average amount of time it takes a child to return to DCFS involvement and future DCFS involvement. Using information already collected by DCFS in Utah’s SACWIS system, changes in the children on these outcomes are measured. Because outcomes at this step are processed automatically, we can report on them as frequently as we receive data from DCFS. Results are made available on the same web-based dashboard that presents the other information collected by this evaluation.
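The automated step amounts to computing, for each exited case, the elapsed time until renewed DCFS involvement (if any). The sketch below shows one way to do that; the field names and record layout are assumptions for illustration, not the actual SACWIS schema.

```python
# Illustrative sketch of the automated outcome computation: field names and
# the record layout are assumed, not taken from the actual SACWIS system.
from datetime import date

cases = [
    {"child_id": "A", "exit_date": date(2019, 3, 1), "reentry_date": date(2020, 1, 15)},
    {"child_id": "B", "exit_date": date(2019, 6, 1), "reentry_date": None},  # no return observed
]

def days_to_reentry(case):
    """Days from program exit to renewed DCFS involvement, or None if none observed."""
    if case["reentry_date"] is None:
        return None
    return (case["reentry_date"] - case["exit_date"]).days

returned = [days_to_reentry(c) for c in cases if c["reentry_date"] is not None]
avg_days = sum(returned) / len(returned)       # average time to return, among returners
reentry_rate = len(returned) / len(cases)      # share of cases with renewed involvement
# avg_days == 320.0, reentry_rate == 0.5
```

Because these quantities are derived directly from administrative records, they can be recomputed each time a new DCFS data extract arrives and pushed to the dashboard without manual effort.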
Permanency Benchmarking Model
Over the past several years, SRI has developed a benchmarking model for predicting the amount of time it should take a given youth to achieve a permanent placement (reunification with parents, kinship placement, etc.). This prediction is difficult because many factors play a role in this type of outcome. To address this problem, we created a neural network survival model (Biganzoli, Boracchi, Mariani, & Marubini, 1998). This model is much more flexible than a typical survival model in that it allows different predictors to influence the shape of the survival curve. A visualization of the different shape factors estimated from the model can be seen below in Figure 5.
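In the discrete-time formulation of Biganzoli et al. (1998), the network receives a youth’s risk factors together with a time-interval indicator and outputs the conditional hazard for that interval; the survival curve is the running product of (1 − hazard). The sketch below collapses the network to a single logistic unit with made-up weights (a real model would have hidden layers and trained weights), just to show how letting time enter the model allows different risk profiles to bend the curve differently.

```python
# Minimal sketch of a discrete-time (partial logistic) survival model in the
# spirit of Biganzoli et al. (1998). Weights are made up for illustration and
# the network is collapsed to one logistic output unit; this is NOT the
# actual benchmarking model.
import math

def hazard(risk_factors, interval, w_risk=(0.8, -0.5), w_time=0.1, bias=-2.0):
    """Sigmoid of a linear score over risk factors + time -> conditional hazard."""
    score = bias + w_time * interval + sum(w * x for w, x in zip(w_risk, risk_factors))
    return 1.0 / (1.0 + math.exp(-score))

def survival_curve(risk_factors, n_intervals=12):
    """S(t) = product over intervals of (1 - hazard), the discrete-time identity."""
    surv, curve = 1.0, []
    for t in range(1, n_intervals + 1):
        surv *= 1.0 - hazard(risk_factors, t)
        curve.append(surv)
    return curve

# Because time is an input, different risk profiles can produce differently
# shaped curves -- the flexibility noted above.
low_risk = survival_curve((0.0, 1.0))
high_risk = survival_curve((1.0, 0.0))
```

Here the higher-risk profile accumulates hazard faster, so its curve sits below the lower-risk one at every interval.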
Figure 5: Visualization of the Neural Network Survival Model
Using thousands of historical cases, SRI has built a statistical model of the likelihood of permanency for children, based on a series of risk factors. These numbers are computed on an ongoing basis for every residential program in the DCFS system and can be compared to the program’s actual permanency rates (see Figure 6). Putting this outcome front and center on each program’s dashboard helps providers keep permanency in focus as a treatment goal. The benchmarking model produces an expected permanency rate over time, which is compared to the actual rate of permanency. In Figure 6, the dark line represents the program’s actual permanency rate over time and the light line is the expected permanency rate from the benchmarking model. In this example, the expected permanency rate is close to 45% and the actual permanency rate is above 70%.
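The comparison itself reduces to averaging the model’s predicted probabilities of permanency (the expected rate) against the observed fraction of children achieving permanency (the actual rate) at a given horizon. The numbers below are invented to mirror the example above and are not real program data.

```python
# Hedged sketch of the benchmark comparison. Each entry pairs a
# model-predicted probability of permanency by some horizon with the
# observed outcome; the data are invented for illustration.
children = [
    # (predicted P(permanency by horizon), achieved permanency?)
    (0.50, True),
    (0.40, True),
    (0.45, False),
    (0.45, True),
]

# Expected rate: mean of the model's per-child probabilities.
expected_rate = sum(p for p, _ in children) / len(children)
# Actual rate: observed fraction of children achieving permanency.
actual_rate = sum(1 for _, achieved in children if achieved) / len(children)
# expected_rate == 0.45, actual_rate == 0.75 -> the program beats its benchmark
```

A program whose actual rate exceeds its expected rate, as in this toy example and in Figure 6, is outperforming what its case mix would predict.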
Figure 6: Example of Benchmarking Model
Finally, the researchers summarize the information from these models in a chart on the program’s dashboard that shows the permanency rate compared to the benchmarking model. This placement emphasizes the importance of the permanency outcome relative to the program’s intermediate goals (see Figure 7).
Figure 7: Summary of Permanency Rate