
Researchers at Texas A&M University entered into a contract with the Texas Department of Health to evaluate abstinence programs and recently released their findings. Unfortunately, only a small number of programs were included in the study. When interviewed by the media, the Texas A&M researchers suggested that the programs were not accomplishing anything of significance. The Medical Institute has done an in-depth evaluation of the entire study, as reported below. We believe there are serious problems with the Texas A&M research itself, as well as some surprisingly positive implications that were completely overlooked. Our technical report follows; we hope you will find it useful.

The Medical Institute

Abstinence Education Evaluation — Technical Report

Patricia Goodson, BE Pruitt et al. September 2004. Texas A&M University.

 A comparison of the recent Texas A&M Abstinence Education Evaluation Phase 5 Report to its extensive press coverage underscores the pervasive media bias against abstinence education.  What is being presented as a scientific study showing failure of abstinence education is actually not scientific at all.  And a careful examination shows that it is the study, rather than abstinence education, that is a dismal failure.  This “study” is riddled with methodologic flaws and data misinterpretations, and — to paraphrase a famous Texan — is shakier than school-lunch jello.  

This study involved fewer than 750 of the state's students, drawn from just 20 of the more than 3,200 schools in Texas. Though this might have been acceptable had the sample been well selected, the researchers ignored the generally accepted scientific requirement that study participants be randomly selected if the results are to be generalized. Following the lead of Alfred Kinsey, they chose the ad hoc approach of allowing participants to self-select. Not surprisingly, the resulting sample is unrepresentative, as shown in Figure 1: the ethnic composition of the participants bears scant similarity to that of Texas schools.

 

[Figure 1. Ethnicity of middle school students]
However, the unrepresentative nature of the group and the self-selection of participants are only two of the factors that invalidate this evaluation.
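As a rough illustration of why self-selection matters, the short Python sketch below uses entirely hypothetical numbers, not data from the A&M report. It simulates a student population in which students already favorable toward abstinence are more likely to volunteer for a program survey; the resulting convenience sample overstates the population's average attitude, while a randomly drawn sample of the same size does not.

import random

random.seed(1)

# Hypothetical population of 10,000 students with an "attitude toward abstinence"
# score between 1 (unfavorable) and 5 (favorable). Purely illustrative numbers.
population = [min(5.0, max(1.0, random.gauss(3.0, 1.0))) for _ in range(10_000)]

def mean(scores):
    return sum(scores) / len(scores)

# Self-selection: the chance of volunteering rises with the attitude score,
# so volunteers are not representative of the population as a whole.
volunteers = [score for score in population if random.random() < score / 10]
self_selected_sample = random.sample(volunteers, 750)

# Random selection: every student has the same chance of being chosen.
random_sample = random.sample(population, 750)

print(f"Population mean attitude:      {mean(population):.2f}")
print(f"Self-selected sample mean:     {mean(self_selected_sample):.2f}")  # biased upward
print(f"Randomly selected sample mean: {mean(random_sample):.2f}")  # close to the population

The same logic applies to any outcome measured in the Phase 5 evaluation: without random selection, differences between the sample and the statewide population cannot be separated from the way the sample was recruited.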

The above-mentioned errors are further compounded by the astonishing absence of any control group. Because behaviors are expected to change over time, it is standard practice to compare a group that has been exposed to an intervention with a group that has not. Generally, the groups are compared before the intervention begins (pre-test) and at least once after the intervention (post-test). The A&M researchers did not do this. Instead, they simply compared pre- and post-tests for the "intervention" group. Though pre- and post-tests with no control group are commonly used in small-scale self-evaluations of programs because they are relatively cheap and simple, such a design has no place in a scientific external evaluation.
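To see why the missing control group matters, consider a second hypothetical sketch (again, made-up numbers rather than data from the report). If attitudes drift over the school year for all students, a pre/post comparison of the program group alone records that drift as a "program effect"; comparing the program group's change against a control group's change removes it.

import random

random.seed(0)

def simulate_group(n, baseline_mean, drift):
    """Simulate pre- and post-test scores (1-5 scale) for one group of students."""
    pre = [min(5.0, max(1.0, random.gauss(baseline_mean, 1.0))) for _ in range(n)]
    post = [min(5.0, max(1.0, score + drift + random.gauss(0.0, 0.5))) for score in pre]
    return pre, post

def mean(scores):
    return sum(scores) / len(scores)

# Both groups drift upward by the same amount; the "program" adds nothing here.
program_pre, program_post = simulate_group(200, baseline_mean=3.0, drift=0.4)
control_pre, control_post = simulate_group(200, baseline_mean=3.0, drift=0.4)

# Pre/post only (the design criticized above): the drift looks like a program effect.
program_change = mean(program_post) - mean(program_pre)
print(f"Pre/post change, program group only: {program_change:+.2f}")

# With a control group: subtracting the control group's change reveals no real effect.
control_change = mean(control_post) - mean(control_pre)
print(f"Difference-in-differences estimate:  {program_change - control_change:+.2f}")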

It is instructive to compare the authors' written statements in their Technical Report to their statements quoted in the press. For instance, in the summary section of their report,[1] the authors correctly state:

1) The most significant limitation relates to its design [italics added]…no controls were identified for the high school group.

2) …"cases" and controls were already substantially different at pre-test, and therefore, not comparable.

3) …both the middle school and high school samples are non-probability samples, i.e., they were not randomly selected.

They are correct. These were huge design flaws, on the order of a civil engineer saying that he got the arch upside down on a bridge design. Nevertheless, despite their stated misgivings, the authors were far less circumspect about the limitations of their study in their statements to the press. They were, in fact, prone to draw conclusions neither discussed in nor supported by their report. Some of these statements were:

1) Science is losing to politics.

2) We need to get over our fear of research.

3) We didn't find strong evidence of program effect.

It would appear that it is the authors, not abstinence educators, who are actually afraid of research.

 
Notwithstanding the numerous shortcomings in the study design, some findings would seem to merit a second look. For instance, following exposure to an abstinence program, participating students' post-test responses were two to three times more likely to have changed in a direction favorable, rather than unfavorable, toward abstinence.

 Abstinence programs are not the default in most school districts in the US.  They are generally adopted only in communities where parents and community leaders feel strongly about these values and want them reinforced in school.  It is interesting to note (Figures 2 & 3) that, at both the beginning and end of the study, the young participants appear to be far less sexually active than their peers statewide. 

 

 

[Figure 2. Percentage of students who have "ever had sex" at the beginning of the study]

[Figure 3. Percentage of students who have "ever had sex" at the end of the study]

It is not surprising that even this highly flawed and limited sample yields findings consistent with published peer-reviewed studies[2],[3] demonstrating the value of community-based abstinence programs. What communities need to emphasize are consistent behavior-change messages promoting abstinence before marriage and fidelity within marriage. Communities that consistently encourage their young people to make the healthiest behavioral choice, remaining abstinent until marriage, can expect to reap the rewards of reduced rates of both nonmarital pregnancy and STDs.

------------------------------------------------------------------------

[1] Goodson P, Pruitt BE, Buhi E, Wilson KL, et al. Abstinence Education Evaluation Phase 5 Technical Report. College Station, TX: Department of Health and Kinesiology, Texas A&M University; September 2004.

[2] Vincent ML, Clearie AF, Schluchter MD. Reducing adolescent pregnancy through school and community-based education. JAMA. 1987;257(24):3382-3386.

[3] Doniger AS, Adams E, Utter CA, Riley JS. Impact evaluation of the "Not Me, Not Now" abstinence-oriented, adolescent pregnancy prevention communications program, Monroe County, New York. J Health Commun. 2001 Jan-Mar;6(1):45-60.

[The Medical Institute, 22Feb05]