Rating Responses to the FIS Stimulus Clips
Our typical method is to score recorded responses to the FIS stimulus clips using observational ratings. This tab provides the rating manual and some example responses used to train raters.
The FIS Rating Manual
Download the current FIS rating manual here (PDF):
FIS Rating Manual 2018
Example Responses
The example responses we provide are intended for individual research teams and demonstrate high and low levels of the skills. They were recorded by research assistants in our lab, are for training purposes only, and are not for distribution.
Recommended coder training includes the following steps.
1. Raters become acquainted with the stimulus clips by giving their own responses and recording them.
2. Raters receive an overview of the FIS construct and the meaning of common therapist factors, in both theory and research, and review the FIS items.
3. The group practices and discusses clearly high and low responses to various clips. Coders are encouraged to consider how their own recorded responses would be rated on the FIS items.
4. Raters complete assigned practice coding of clips from our research team. These are genuine responses recorded by our research team for the purpose of coder training.
5. The coding group codes a series of responses each week. Discussion should be more intensive at the beginning, with all or nearly all responses discussed as a group. Encourage disagreement and discussion in order to identify the "true" code.
6. As coders approach reliable rating, weekly coding meetings can focus on the codes with the greatest discrepancies. It is important at this phase to avoid groupthink and to consider the merits of divergent codes (even when they are ultimately incorrect).
7. Begin checking the intraclass correlation coefficients (ICCs) of the coding group and use them to calibrate the coders' practice codes (a minimal computation is sketched after this list). When ICCs are consistently above .70, begin coding the responses for your research project. A good rule of thumb is that 3 to 4 trained coders can reach acceptable reliability in 4 to 6 weeks.
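For teams that want to check reliability outside of a statistics package, the sketch below computes two-way random-effects ICCs (single-rater and average-rater) directly from a clips-by-coders matrix using the standard ANOVA mean squares. This is only an illustration: the ratings array is fabricated example data, not real FIS codes, and your project may prefer a different ICC form.

```python
# Minimal sketch (not an official FIS analysis script): two-way random-effects
# ICCs from an n_clips x k_coders matrix. The ratings below are fabricated.
import numpy as np

def icc_two_way_random(ratings: np.ndarray) -> tuple[float, float]:
    """Return (ICC(2,1), ICC(2,k)) for an n_clips x k_coders matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # one mean per clip
    col_means = ratings.mean(axis=0)   # one mean per coder
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # clips (targets)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # coders (raters)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual error
    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average

# Fabricated example: 6 clips rated 1-5 by 3 coders
ratings = np.array([
    [4, 4, 5],
    [2, 3, 2],
    [5, 4, 5],
    [3, 3, 4],
    [1, 2, 2],
    [4, 5, 4],
], dtype=float)

single, average = icc_two_way_random(ratings)
print(f"ICC(2,1) = {single:.2f}, ICC(2,k) = {average:.2f}")
print("Ready to code study data" if average >= .70 else "Keep calibrating")
```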
We've used a few tools to coordinate ratings for reliability and data collection. Our most used method has been an FIS spreadsheet with multiple tabs for coordinating coding groups. The first tab includes a place for individual coders to enter their ratings, and each of the FIS items has its own tab (8 in all) with embedded videos of high and low examples. The spreadsheet can be easily adapted for any FIS coding project. We've found this format easy for individual raters to use; the group leader then copies the ratings onto a single spreadsheet for reliability analyses across raters.
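If the group leader prefers to consolidate the individual workbooks in code rather than by copy-and-paste, a sketch like the one below can work. The file names, sheet layout, and column names ("clip_id", "rating") are assumptions for illustration, not the distributed FIS form's actual structure.

```python
# Rough sketch of the consolidation step, assuming each coder saves a copy of
# the rating workbook with "clip_id" and "rating" columns on the first tab.
# All file and column names here are placeholders.
import pandas as pd

coder_files = {"coder_a": "fis_ratings_a.xlsx",
               "coder_b": "fis_ratings_b.xlsx",
               "coder_c": "fis_ratings_c.xlsx"}

frames = []
for coder, path in coder_files.items():
    df = pd.read_excel(path, sheet_name=0)              # first tab holds the ratings
    frames.append(df[["clip_id", "rating"]].assign(coder=coder))

long_form = pd.concat(frames, ignore_index=True)        # one row per clip x coder

# Pivot to a clips-by-coders matrix, ready for the ICC function sketched above
# or any other reliability routine your team uses.
wide = long_form.pivot(index="clip_id", columns="coder", values="rating")
print(wide)
```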
Download the rating worksheet file with embedded manual here (Excel format):
FIS Rating Form