The goal of this study is to enhance rater accuracy of the Afterschool Program Practices Tool (APT).
Funded by the William T. Grant Foundation
The primary aim of the APT Validation Study Phase I was to confirm the APT's technical properties. Overall, the APT shows many strong psychometric properties, and evidence suggests that it is a reliable instrument for measuring key quality constructs of afterschool time. Preliminary evidence related to validity suggests that the APT is associated with youth's perceptions of quality, particularly with their attitudes and beliefs. This suggests that the tool is especially well suited for self-assessment and other "lower stakes" assessment purposes.
In the follow-up study, which began in 2013, we aim to enhance rater accuracy through rigorous training using the APT Rating Anchors document and master-coded video clips of afterschool programs from a range of K-8 sites varying in type, quality, and racial/ethnic demographics. If sufficient accuracy can be demonstrated through this training, the APT may be a useful tool for higher-stakes purposes, such as identifying programs in need of funding support or assessing and comparing programs across a district or state.
We focus on the following overarching goals:
Aim 1: To determine the extent to which training that includes exemplars of APT item anchors taken from an expert-scored video library enhances rater accuracy among experienced APT raters.
Aim 2: To determine the degree to which access to the revised APT Rating Anchors document, self-guided online practice, in-person training, and individualized feedback contributes to accuracy.
Co-Principal Investigators Allison Tracy and Linda Charmaraman completed the first phase of this follow-up project, funded by the William T. Grant Foundation, which was designed to increase the accuracy of raters using the APT-O instrument through advanced training emphasizing the correct use of item anchors and practice with master-scored videos of afterschool programs. The first year of the grant involved recruiting eight K-8 afterschool programs representing different types of afterschool care and age groups from the Greater Boston area and surrounding communities. Each program was filmed on four days. A total of 351 video clips containing instances of program practices at different times of day (Arrival, Transition, Activity, Informal, Homework, Pickup) were extracted from this footage. Five master coders spent several months viewing these clips and assigning APT ratings, along with a rationale for each rating. At the end of the independent rating phase, master coders and research team members convened for a consensus-building meeting in which video clips with high levels of disagreement were discussed and a final "gold standard" rating was assigned. Video clips that met the criteria of (a) good agreement among raters, (b) high-quality audio and visual components, (c) racial/ethnic and gender diversity of students and staff, (d) program and site diversity, and (e) desirable clip length were included in a pool of clips to be used for the in-person advanced reliability training.
In the second year of the project, beginning in September 2014, we recruited 40 experienced APT users from 4 different states to participate in an advanced training in either Boston or Atlanta. The vetted video clips obtained in year one were used to test raters' initial rating accuracy as well as their improvement in accuracy after becoming familiar with the APT item anchors, after in-person training, after individualized feedback, and after continued practice rating video clips. Results on how much (and for whom) this video-enhanced training method can improve the accuracy of APT ratings will be available in 2015.