RAINN
Phase 4

Intervention Evaluation

Purpose of RAINN Evaluation Data

It provides evidence that the online hotline is meeting its goals.

  • 72% of users report high overall satisfaction and 16% report low satisfaction; the mean rating for each evaluation question is above 4.0.
  • 91% of users find the online hotline easy to use.
  • 2011 consumer satisfaction was significantly higher than 2010 consumer satisfaction on each evaluation question.

It provides information for program improvement. The evaluator meets with RAINN administration to discuss the evaluation results and their implications.

  • Why are 20% of sessions shorter than 10 minutes? Do some topics require longer sessions than others?
  • What can be done to identify and train volunteers who receive ratings lower than 3.0 in “volunteer knowledge and skills”?
  • Only one visitor session resulted in mandatory reporting. Why might that be, since 12% of calls involve incidents related to a minor?

It provides information for future training.

  • Staff/volunteers needed help from a supervisor with a suicidal user in 18% of sessions. Is more or different training needed in this area? What other questions would you want to ask to inform that decision?
  • Seven staff/volunteers commented that they were uncomfortable dealing with issues of gang rape, which suggests that training in that area should be added. How else should the evaluations inquire about areas where additional training is needed?

It provides insights about the nature of online hotline users.

  • 86% of visitors are not in an immediate crisis.
  • Examination of the timeframe of the assault shows that only 14% of users discussed an incident that occurred during the past few days, and only 1% required a referral to 911.
  • How might these data shape RAINN’s communications about the nature of its work and its client population?

It provides accountability to constituents.

  • The evaluation report is made available to board members, staff, and funders.
  • Results of the evaluation have been published in an academic journal to inform the larger human service community about RAINN.
  • As more organizations experiment with virtual service delivery, evaluation of RAINN’s model can be translated to those other contexts, with recognition of the limitations of such knowledge transfer.

Answer the following questions:

  1. What are two examples from the data above that provide process evaluation?
  2. What evidence from the user satisfaction data suggests that the online hotline is meeting its goals?
  3. Some volunteers are rated low in “knowledge and skills.” Given that information, would you fire those volunteers? Why or why not? If not, what would you do instead?
  4. How might you construct an evaluation component that seeks to understand whether and how the COVID-19 pandemic affected the operation of the online hotline? The incidence of sexual violence?
  5. How could you incorporate evaluation into the design and rollout of the mobile app, as part of RAINN’s services?