Monday 16 January 2012

Assignment 2

      The program that will be evaluated is the prenatal exercise class that was put in place to help reduce the incidence of type 2 diabetes as well as gestational diabetes. This Saskatoon-based program focused on pregnant Aboriginal women, who have traditionally experienced high rates of diabetes. A complete evaluation of this program would require a longitudinal summative study spanning a decade or longer so that the long-term results could confirm or refute its success.
      However, to properly evaluate this program, I would begin with a modified CIPP model. Because the program is so new, this formative assessment would focus on the Inputs and the Process. Context can be ignored for now, as it is a lower priority than Inputs and Process. At a future date, evaluation of the people who volunteered (or chose not to volunteer) would provide valuable information on how to better serve the target group, including the 93% of eligible candidates who did not participate. For now, though, it is more important to evaluate whether the inputs and process are helping the people who are currently involved.
      The Product is not a priority right now either. First, the program is too new to accurately measure whether participants are experiencing fewer cases of diabetes and, if so, whether this program caused the decrease. As our readings on the CIPP model state, “one should not redundantly gather new information if acceptable and sufficient information from another source is readily available.” One can assume that sufficient evidence already exists that exercise lowers the risk of diabetes, so attempting to re-prove this in such a small sample would not be an efficient use of resources.
      At this point, the evaluation should focus on discovering whether the Inputs and Process are being used as the initial plan intended. Sample questions that need to be asked are:
  • What made the participants choose to participate?
  • Do participants find the program too easy/too strenuous?
  • Do participants regularly attend?
  • Are all the sub-programs being utilized (child care, busing, pool membership, books, etc.)?
  • Why did/didn't participants invite friends?
  • Was once a week enough?
  • Did participants use the pool on non-class days?
      Questions such as these will show evaluators how they can better serve those who are attending, and will suggest modifications that could attract more candidates. Mid-range evaluations will include Context, as researchers can begin to examine the demographics of those who participate. While initial data show that half were housewives, half were single, two-thirds were on social assistance, and so on, the data do not indicate why these numbers are the way they are. Do they represent a cross-section of Aboriginal women? Do these women join for health reasons? Social reasons? Are people who join a program like this the type who would have exercised anyway, so that the program, while providing a means for these women, would not lower the overall incidence of diabetes? For those who were eligible but chose not to participate, why did they stay away? What would it have taken for them to take advantage of the program? These are all valid questions that will need to be addressed as the program begins to mature, perhaps in three to five years.
      As for the Product, the occurrence of gestational diabetes could be determined fairly quickly, but type 2 diabetes may take years to surface. The outcomes of the program as a whole could not be sufficiently evaluated until enough time had passed to make conclusive statements. It would also take time to correlate the intensity, frequency, and duration of each woman's participation with the results she achieved.
      One evaluation method that would not be beneficial at this point is the discrepancy model. It is too early to dissect each aspect of the program to determine whether the results of that particular component are contributing to the overall success of the program. Again, in five or ten years it would be helpful to do this, but for now it would be nearly impossible to separate the effectiveness of each component of the program.
      One model that would be helpful alongside the CIPP model is the countenance model. Since data at this point may be skewed or difficult to come by, qualitative assessment of the participants, staff, and other stakeholders would provide valuable insight into the Process of the program.
Conclusion:
      When analyzing a program, especially one as new as this, the temptation to over-analyze must be resisted. In time, a full CIPP evaluation (or a Scriven-style examination of goals and roles) will be appropriate and valuable, but for now the more urgent and important components of the program to evaluate are just the Inputs and Process. Obviously it would be crucial to emphasize to the overseers of the program the importance of ongoing evaluation as the program matures, and to use the formative evaluations to gradually shape the context of the program. As stated earlier, assuming that exercise has already been linked to the reduction of diabetes, this program can clearly be invaluable to many high-risk women if it is managed properly.

Joel 

Monday 9 January 2012

Assignment 1


     The following is my assessment of the Review of Programs/Services for Gifted Students prepared by the Ontario Quality Assurance Department for the Ottawa-Carleton District School Board (OCDSB) in 2001. The review of the gifted programs was part of a larger Ontario Ministry of Education project with the goal of “developing standards for each exceptionality in order to improve the understanding of what is the most effective way to provide special education programs across the province.” According to Scriven, formative evaluation "is research-oriented vs. action-oriented," and since there are few calls to action in the recommendations, this assessment is primarily formative.
      Researchers used three methods to gather their data: qualitative (interviews with various stakeholders); quantitative (surveys completed by stakeholders); and a compilation of findings from previous research done by similar groups. I found it interesting that they invited individuals and groups to submit up to five areas of concern that they would like to see evaluated and, from these, received 17 responses with a total of 119 suggestions. At five each, 17 respondents could have submitted at most 85, so the limit was obviously exceeded.
      Although they did not use the exact terminology, the broad information headings suggest the researchers were using a CIPP (Context, Input, Process, Product) model. The following headings were selected by the researchers to categorize the 119 responses (I have added the CIPP category into which each would most likely fall; there would obviously be some overlap):
  1. Students (Context)
  2. Budget (Context, Input, Process)
  3. Administrative Responsibility (Context, Input, Process)
  4. Facilities (Context, Input, Process)
  5. Needs of Students (Input)
  6. Staffing (Input)
  7. Qualifications (Input)
  8. In-service (Input)
  9. Material Resources (Input, Process)
  10. Delivery Models (Process)
  11. Activities (Process)
  12. Goals of the Program (Product)

What I liked about the evaluation:
  1. It was requested and done. I think that too many programs are run without ongoing assessment and evaluation.
  2. The researchers acknowledged that the recommendations for improvement would be “within the confines of responsible fiscal management.” Government programs often make great promises that are not financially feasible; this review recognized the limitations that school boards face.
  3. Some data was provided (more can be requested in hard copy but was not included in the online version of the report). This gives support for the researchers' judgements. Conversely, it also enables readers to assess and challenge the researchers' conclusions and recommendations.
  4. Specific recommendations were made in each CIPP category.

What I did not like about the evaluation:
  1. The goals of the program are vague or non-existent. Even though the researchers included a category of “Goals” when organizing responses, not one of the 119 suggestions fell into that category. I believe that assessing any program must start with alignment to goals: is the program doing what it is supposed to do?
  2. Similar to #1, the goal of assessing all exceptionalities was to ensure that “all students have access to curriculum, teaching, and learning environments, that will enable them to reach [provincial] standards.” As stated, this would virtually eliminate all gifted programs, since gifted students already meet or exceed provincial standards, and a goal framed only around reaching those standards leaves no mandate for gifted programming.
  3. In their recommendations, the researchers did not comment on the alarming fact that 100% of the respondents wanted to see an issue in Context, Input, or Process addressed, while not one of the 119 suggestions concerned the Product, or outcome, of the program. This could mean that everyone is happy with the goals, but I find this doubtful since nowhere in the evaluation are the specific goals written or even referenced.

      Considering the goals of the Ministry when it requested the report, the Stufflebeam CIPP model seemed like a logical place to start. The Ministry seemed more concerned with documenting what was being done than with improving the program. Had it been more interested in improvement, the Provus Discrepancy model would have been more practical. The other model touched on was Scriven's: the researchers did analyze roles, but they certainly did not address goals in any significant way. Overall, the report seemed to be a political document testifying to the government's concern for gifted children without actually having to do anything more to support them.

The complete report can be found online at http://www.abcontario.ca/pdf/ocdsb_gifted_review/Gifted.pdf

Thanks,
Joel