

Using Data for Continuous Improvement

**Hough, H., Byun, E., & Mulfinger, L. (2018). Using data for improvement: Learning from the CORE Data Collaborative (Technical report). Available at the Getting Down to Facts website: https://gettingdowntofacts.com/sites/default/files/2018-09/GDTFII_Report_Hough.pdf

This paper discusses the data that are most useful for informing continuous improvement at the classroom, school, and system levels. The authors first distinguish between data used for improvement and data used for accountability, and then provide both policy context and an overview of the current state of data use in California. The report then presents a case study of how the CORE Data Collaborative uses a multiple-measures approach to support decision making. This exploration includes examples of how the CORE interactive data interface allows for a range of comparisons among and within schools, as well as how county offices of education use the interface to understand district performance across different student groups. Additionally, the case study describes how the CORE Data Collaborative supports networked capacity building to help district and school administrators determine root causes and enact changes that lead to improvement.

Society for Public Health Education & Association for Supervision and Curriculum Development. (2013). Reducing youth health disparities requires data for continuous improvement. Available at https://www.sophe.org/wp-content/uploads/2017/01/DatauseYHD_1nov2013_v2.pdf

This fact sheet summarizes key ideas from a panel of experts convened by the Association for Supervision and Curriculum Development and the Society for Public Health Education to discuss the use of health data to promote continuous improvement. The panel noted that school outcome measures often exclude key data points such as physical and mental health, social and behavioral problems, and health-risk behaviors, especially among poor children. Federal legislation such as No Child Left Behind mandated state collection and reporting of student achievement data overall and for subgroups based on ethnicity, income, and English proficiency; however, data associated with chronic absenteeism, school climate, and family involvement were not explicitly prioritized. To establish integrated data systems for continuous improvement, the panelists recommended that education and health systems regularly collect and monitor these health indicators and be held accountable for improving outcomes. The fact sheet concludes with recommendations at the school, state, and national levels, highlighting the need for (1) community coordinating committees to measure and annually report data related to students’ health; (2) state health departments and education agencies to report local health and educational data; and (3) health care reforms to include student health indicators of achievement.

Datnow, A. (2017). Opening or closing doors for students? Equity and data-driven decision-making. Available at https://research.acer.edu.au/cgi/viewcontent.cgi?article=1317&context=research_conference

In this piece, Amanda Datnow summarizes key findings about connections and tensions between data-driven decision making and equity, drawing on three school case studies. The first tension is between using data for accountability and using data for continuous improvement; Datnow emphasizes the importance of actionable data that lead to productive conversations or changes, whereas accountability data are often used only for reporting or compliance. She also advocates for the term “data-informed decision making,” because humans, not data, drive the changes that happen in schools. With regard to the second tension, confirming versus challenging assumptions, the author discusses the importance of using data to counter common narratives about underserved student populations and to resist deficit thinking. The third tension centers on the common misuse of assessment data to lock students into ability groups, rather than using formative assessments and other “fresh” data to continually and flexibly identify students’ strengths and needs. Datnow concludes that equity must be made explicit in data use and analysis in order to produce improvements for all students.

**O’Day, J. A., & Smith, M. S. (in press). Quality and equality through continuous improvement. In J. A. O’Day & M. S. Smith, Opportunity for all: A framework for quality and equality in education. Cambridge, MA: Harvard Education Press. Available at https://cacollaborative.org/sites/default/files/4.1.4%20ODay_Smith_in_press_Quality_and_equality_through_continuous_improvement.pdf

This excerpt outlines four key differences between data use in outcomes-based accountability models and data use for continuous improvement. These include the degree to which the two approaches make use of data on antecedent processes as well as outcomes, their perspectives on and responses to failure, their relative attention to context, and their reliance on internal versus external sources of accountability.

Thompson, K. D., & Kieffer, M. J. (2018, July 17). Multilingual learners doing better in US schools than previously thought [Web log post]. Available at https://theconversation.com/multilingual-learners-doing-better-in-us-schools-than-previously-thought-98919

In findings that contrast with prior analyses of English Learner (EL) learning outcomes, new research by Michael Kieffer and Karen Thompson shows that reading and math scores for multilingual students improved at higher rates than those of English-only students between 2003 and 2015. This approach differs from prior research because it expands the traditional definition of ELs (a group of students who have not yet achieved a certain level of proficiency and who exit the EL category once they do) to include data for both current and former ELs, thus capturing the improvement of the latter while they remain in the public school system. Their research also reveals graduation rates for ELs to be similar to those of students who have never been classified as ELs. The researchers hypothesize that schools, districts, and policies are serving these students more effectively than previously believed and suggest combining outcomes for both current and former ELs to show student progression more accurately.

Local Examples

Aguilar, J., Nayfack, M., & Bush-Mecenas, S. (2017). Exploring improvement science in education: Promoting college access in Fresno Unified School District. Available at the Policy Analysis for California Education (PACE) website: https://www.edpolicyinca.org/sites/default/files/FUSD-continuous-improvement.pdf

This case study of Fresno Unified School District (FUSD) shares how the district developed and used its data dashboard and the principles of improvement science to increase college access for its students. Since 2009, FUSD has invested in a robust data dashboard to support its school improvement work. The Fresno School Quality Improvement and Targeted Action index is composed of 75 indicators, including standardized test performance, EL re-designation, measures of student growth mindset, measures of school climate, and college enrollment. Using this data dashboard, FUSD found evidence that many students were eligible to apply to a variety of colleges and universities in California but were applying to just one. FUSD used the principles of improvement science to understand the problem and identified one root cause to focus on: students’ lack of awareness of the college options matched to their academic profiles. To increase awareness of college eligibility, FUSD created individualized “I Am Ready” packets for every senior qualified to apply to CSU and UC campuses. Sending these packets helped increase the number of students applying to CSU/UC campuses outside of Fresno from 382 to 578, an increase of over 50 percent.

Stringer, K. (2018, June 12). These California school districts joined forces to bolster social-emotional development, but a study of 400,000 kids reveals learning gaps and a confidence crisis among middle school girls [Web log post]. Available at https://www.the74million.org/article/these-california-school-districts-joined-forces-to-bolster-social-emotional-development-but-a-study-of-400000-kids-reveals-learning-gaps-and-a-confidence-crisis-among-middle-school-girls/

This article references a PACE study of the CORE Districts’ student population to discuss what the early body of research on social-emotional learning reveals and to highlight a few suggestions for its direction. The study’s survey data reveal certain patterns of growth and decline across gender and age groups. For example, growth mindset scores for both males and females increased from Grade 7 to Grade 10. However, despite performing better academically than their male peers in middle school, females reported lower self-efficacy in their ability to achieve beginning in Grade 5 and continuing through Grade 11. The author also includes researchers’ considerations about how to interpret and use this early body of social-emotional learning research, noting that although surveys are valuable and scalable measurement tools, the potential for bias in self-reporting makes other tools important for tracking and evaluating students’ social-emotional development. Lastly, the author points to evidence of social-emotional learning’s positive impact on students and closes with a district leader’s call not only to value this skill set but also to teach it explicitly in schools.

Sanger Unified School District. (n.d.). MTSS cycle of improvement and self-correcting feedback loop [Description of data analysis process for MTSS team]. Sanger, CA: Author. Available at https://cacollaborative.org/sites/default/files/Sanger_MTSS_Feedback_Loop.pdf 

This graphic, which is a slightly revised version of one Collaborative members may remember from last June, illustrates some of the structures that Sanger Unified School District has developed to support the cycles of data analysis, feedback, and improvement integral to the district’s multi-tiered systems of support (MTSS). The document identifies the groups and individuals at each level of the system who are charged with reviewing and analyzing MTSS data and the rough timeline during which these processes take place. Data discussions and analysis begin in the grade level professional learning communities (PLCs) and proceed through School Leadership Teams, Sanger Academic Achievement Teams, the Administrative PLC (cross-site), the MTSS District Data Team, and finally the Cabinet MTSS Data Review Team.

**These documents are considered priority readings.