I love data. It can show us what is working and what is not working, pointing the way to effective solutions to some of our most intractable problems. Indeed, I love it so much I have spent most of my career engaged in applied research and teaching undergraduates the skills to do the same.
However, not all data are created equal. Every student of mine has heard the Jaeger saying, “Bad data in, bad data out.” In other words, poor measurement of the things we are trying to study leads to statistical results that are meaningless. If we have not measured things correctly (i.e., using what are referred to as valid measures or assessments), we cannot draw meaningful conclusions from our results, and when we do, those conclusions can be not only wrong but also dangerous.
Take last week’s reports in The Source and the VI Consortium on the results of the new Virgin Islands Students and Teachers Accountability System (VISTAS) as an example. The assessment of how well schools are doing is largely dependent upon student test scores. We assume these scores are telling the truth. But what if the test scores themselves are not accurate reflections of the knowledge and skills VI children demonstrate in the real world? Admittedly, this is a challenge with all tests – they are given in a contrived situation but are meant to reflect what a student can do outside of the testing situation. But it goes beyond this. Assessments can only truly begin to provide accurate results if they are given to the same kinds of students on whom they were developed.
To my knowledge, no tests of student learning have been created on or for students in the USVI. This alone should give everyone pause when interpreting the meaning of assessment results in the VI. Sometimes known assessments are adapted to better fit a particular culture or community which was not included in the test’s development. Unfortunately, even such adaptations fail to solve a much deeper problem with assessments developed on other populations: those who create the tests choose the standards against which a community’s students are judged. At best, it means children’s culturally relevant and valued strengths go undocumented. At worst, it means that the tests are working exactly as some are intended, labeling those who are not like the test creators (i.e. the “other”) as deficient.
The consequences of a mismatch between the test makers and test takers should never be minimized, especially given the VI’s political status as a territory. One of the processes by which colonizers maintain the status quo is by convincing the colonized that they are deficient compared to their colonizers. In the VI, it seems many do not question the characterization of our children as deficient. For example, as a commenter on the VI Consortium article mentioned above stated, “So barely over 1/2 of the students at the BEST school in the territory were proficient in English and math? No wonder nothing else works properly here.”
Such ideas even shape how we view VI children before they start school. Every few years the Kids Count Data Report, funded by the Annie E. Casey Foundation, comes out and repeats the narrative that many of our young children are not prepared for school, especially in the domain of language development. This is particularly interesting given that the tool used by VIDE to assess kindergarten readiness was developed on monolingual children. However, most of our children come from homes where a Caribbean dialect, a creole, another language, or multiple languages and/or dialects are spoken. On such tests, young children from these early home environments often appear to be lagging behind children who come from homes where only academic English is spoken. However, a little later in development, they not only catch up on such tests but demonstrate cognitive advantages over such peers if schools treat their experiences as assets, not liabilities.
Repeating narratives derived from suspect data is extremely dangerous. The notion that our children are deficient becomes embedded in the minds of parents, teachers, community members and, worst of all, in the minds of our children, sowing the seeds of self-fulfilling prophecies. For example, ample research confirms that teachers who have low expectations for a student’s success provide them with poorer learning opportunities than they do for a student for whom they have high expectations. Students for whom expectations are lower in turn demonstrate less motivation. Not surprisingly, parent expectations for academic achievement operate in a similar way.
None of what I write here should be taken as a negation of any efforts to ensure our children have first-rate educational opportunities in safe, clean learning environments. Often data are not needed to spot the obvious. But if we want to rate schools based on students’ test scores, use their scores as an overall barometer of what students are capable of, or use data to inform actions to assure they thrive, we had better be 100% certain we are measuring what matters and that we are doing so accurately. Having no data would be far better than relying on bad data that simply does the colonizer’s bidding.
Moving forward, community knowledge and expertise, including that from family members, teachers, elders, children, and other professionals from the VI who work with children, should be used to define the standards against which our children are judged and to develop valid assessments that truly measure their accomplishments. Other minoritized communities are reclaiming their autonomy in this way. The VI must start doing the same. Our children’s lives and the future of our community depend on it.
St. Croix Source
Op-ed
