In Search of Leading Indicators in Education

Data have long been considered a key factor in organizational decision-making (Simon, 1955; Lindblom & Cohen, 1979). Data offer perspective, guidance, and insights that inform policy and practice (Newell & Simon, 1972; Kennedy, 1984). Recently, education policymakers have invested in the use of data for organizational improvement in states and districts with such initiatives as Race to the Top (United States Department of Education, 2010) and the development of statewide longitudinal data systems (Institute of Education Sciences, 2010). These and other initiatives focus attention on how data can be used to foster learning and improvement. In other fields, including economics and business, much work has been done to identify leading indicators that predict organizational outcomes. In this paper, we conceptualize how leading indicators might be used in education, using examples from a small sample of school districts with reputations as strong users of data. We define leading indicators as systematically collected data on an activity or condition that is related to a subsequent and valued outcome, as well as the processes surrounding the investigation of those data and the associated responses. Identifying leading indicators often prompts improvements in a district’s system of supports. To develop this concept, we describe four examples of how districts identified and used key indicators to anticipate learning problems and improve student outcomes. We also describe the infrastructure and other supports that districts need to sustain this ambitious form of data use. We conclude by discussing how leading indicators can bring about more intelligent use of data in education.


Introduction
Educators see much promise in using data to improve the quality of education. Mason (2002) argues that data can help school systems pinpoint successes and challenges, identify areas that need improvement, and evaluate the effectiveness of programs and practices. Dembosky, Pane, Barney, and Christina (2005) contend that data can reveal strengths and weaknesses and guide improvement strategically and systemically. Earl and Katz (2006) assert that when educators learn more about data use, they can more effectively review their capacities, identify weaknesses, and plan for improvement.
The emphasis on using data to make better decisions is driven by a convergence of longstanding trends in policy research and more recent developments in education. There has long been an emphasis on deliberate and rational policy decisions and on using evidence to inform decision-making. Going back to the 1950s, researchers such as Arrow (1951) and Simon (1955) studied the logic of decision-making in professional organizations to find the qualities that make decisions effective. In the 1960s and 1970s, researchers examined how policymakers used evidence to make better decisions (Newell & Simon, 1972; Lindblom & Cohen, 1979; Kennedy, 1984). More recently, the production and use of research-based knowledge has grown into a large and sophisticated enterprise (Corcoran, 2003; Rowan, Camburn, & Barnes, 2004; Weiss, Murphy-Graham, & Birkeland, 2005).
In education, the accountability movement in general and the No Child Left Behind (NCLB) Act of 2001 in particular have meant more testing in both states and districts (Elmore, Abelmann, & Fuhrman, 1996; Hamilton, Stecher, & Klein, 2002; Supovitz, 2009). These tests are a major source of data for schools and districts. A 2004 study of NCLB, for example, found that districts were increasingly using student achievement data to inform instruction (Center on Education Policy, 2004).
Other factors behind the drive to expand data use in schools and districts include the rapid proliferation of technology for collecting, aggregating, and organizing quantitative information (Mieles & Foley, 2005; Stringfield, Wayman, & Yakimowski-Srebnick, 2005), as well as arguments that data use can increase educational equity (Johnson, 2002), develop professional learning communities (Holcomb, 1999), and foster school-wide improvement (Bernhardt, 1998).
Research on school districts has often attributed improvement to the district's focus on data. For example, Snipes, Doolittle and Herlihy (2002) conducted a series of three-year case studies of districts that were more successful than others in their states at raising overall student performance and reducing racial gaps in performance. Among nine central district strategies they identified, the authors named data-driven decision-making as a key factor. According to the authors' findings, the successful districts "committed themselves to data-driven decision-making and instruction. They gave ongoing assessment data to teachers and principals as well as trained and supported them as the data were used to diagnose teacher and student weaknesses and make improvements" (p. xviii). Togneri and Anderson (2003) studied five high-poverty districts where student mathematics and/or reading achievement improved over 3-5 years. Among seven key findings, the authors noted that the improving districts made substantial use of data to guide decision-making. The districts "systematically gathered data on multiple issues, such as student and school performance, customer satisfaction, and demographics"; "developed multi-measure accountability systems to gauge student and school progress"; and "encouraged teachers to use data to guide decision-making" (p. 6). Thus a range of data strategies was seen as central to improvement in these districts. However, underneath the conclusions that data use is a characteristic of effective district policy, we know relatively little about how districts structure their use of data, which indicators they focus on, how they draw meaning from this activity, and how the results contribute to their learning and help them adjust their policies.
In this article, we examine how four school districts used leading indicators in their improvement process. We describe leading indicators as systematically collected data on an activity or condition that is related to a subsequent and valued outcome, as well as both the processes surrounding the investigation of those data and the associated responses. Thus, in our conception, leading indicators encompass both the indicators themselves and the processes surrounding them.
Our investigation of leading indicators focuses on four research questions: First, how did the districts construct their investigations using data? Second, what data did the districts use, and how were those data used as leading indicators? Third, how did district leaders respond to what they learned from their investigations of leading indicators? Fourth, what infrastructure and resources did districts require to support their use of leading indicators for decision-making?
In the sections that follow, we present an overview of the literature that provided a framework for our investigation, describe how we arrived at our sample of four districts, explain our data collection and analysis methods, and present the results of our analysis. The paper concludes with a discussion of the results.

Literature Review
There is a long line of theory and research on using data to make policy decisions. In the 1950s, Simon (1955) developed a model of rational decision-making that emphasized collecting and synthesizing data to inform policy choices. Bass (1983) identified three general phases of the decision-making process: problem identification and diagnosis, search and design, and evaluation and choice. Daft and Weick (1984) introduced a model of how organizations interpret and act on external information and continuously improve by scanning the environment and collecting data, interpreting the data to create meaning, and taking action. W. Edwards Deming developed a similar approach in his Plan-Do-Study-Act (PDSA) cycle, which he saw as a method of continuous organizational improvement (Thompson & Koronacki, 2001). Similarly, in education, Preskill and Torres (1999) developed a model for using inquiry to continuously improve teaching and learning; it included identifying appropriate questions about practice, identifying and analyzing data to inform the questions, taking action as a result, and assessing the results and revisiting the questions.
The focus on data-based improvement processes has also brought increased attention to the data themselves. Several authors have developed ways to explain how data are transformed into action. These frameworks generally consider data to be raw numbers and facts; information to be processed data; and knowledge to be authenticated information (Ackoff, 1989; Alavi & Leidner, 2001). In educational research, several authors have used this progression in their frameworks for district data systems. For example, Mandinach, Honey, and Light (2006) used it to develop tools for collecting and organizing data to be analyzed and summarized into information, which is then synthesized into knowledge to help make decisions. Petrides and Guiney (2002) used the progression from data to information to knowledge to envision a comprehensive knowledge management system in which school leaders can evaluate information and convert it to the knowledge they need to make decisions.
School systems are increasingly analyzing student outcome and other data for patterns they can use to guide improvement. Massell and Goertz (2002) examined the strategies of eight diverse districts. The strategies included using data to align curriculum and instruction with tested outcomes; to identify and network with schools or districts that had similar demographics but better student performance; to identify professional development opportunities; to develop their own data to supplement that of the state; and to create incentives to encourage schools and teachers to use data for decisions about practice. Datnow, Park, and Wohlstetter (2007) examined four school districts that were nominated as successful data users. The authors identified several key attributes of the districts' use of data. First, the districts set goals that served as anchors around which data could be collected, progress measured, and variability in progress explored. Second, the districts established a culture of data use and continuous improvement. Third, the districts invested in an infrastructure for data-rich systems. Fourth, the districts built capacity for data-driven decision-making by investing in professional development, support, tools, and time for teachers to investigate and collaborate around data.
There has also been significant recent work on indicators to predict and prevent high school dropout and to describe reform implementation and outcomes. In a seminal article, Allensworth and Easton (2005) described the "on-track indicator," which combines course credits and grades to identify students who are on or off track for graduation in their freshman year of high school. They have also examined these indicators to predict high school dropout and college-going rates (Allensworth & Easton, 2007). Similarly, Balfanz, Herzog, and MacIver (2007) used longitudinal data analyses to identify attendance, behavior, and course grades as key indicators of student engagement at the middle school level. Supovitz (2010) examined indicators used by two large, urban school districts. One district constructed a custom set of indicators that measured the district's implementation of its reform efforts, including standards, reading instruction in classrooms, and school safety measures. The district did not, however, link those measures to student outcomes. By contrast, the second district focused on a set of existing indicators, primarily test performance, but did not link these to any measures of implementation. Thus both districts used data to describe their systems, but not to explore relationships.

The Use of Leading Indicators in Other Disciplines
The concept of leading indicators is well established in other fields. Economists have labeled three categories of indicators of performance: coincident, lagging, and leading (Mankiw, 2007). Coincident indicators normally move in line with overall economic activity, while lagging indicators trail behind. Leading indicators, on the other hand, fairly reliably turn up or down before the general economy does, and therefore predict the future health of the economy. Examples of leading economic indicators include common stock prices, business inventories, and changes in consumer installment debt.
Much work has been done in economics and business to identify leading indicators that predict beneficial outcomes. Mitchell and Burns (1938), working for the National Bureau of Economic Research, coined the term "leading indicators" to identify sectors that moved in and out of recession before the rest of the economy (cited by Hamilton & Perez-Quiros, 1996). In 1968, a composite index of 12 economic indicators (called the Composite Leading Index, or CLI) was developed as a tool for predicting business cycle turning points. Since then, multiple studies have been conducted to identify leading industrial indicators. For example, Estrella and Mishkin (1998) examined the relationships among a series of financial variables-interest rates and spreads, stock prices, monetary indicators, consumer surveys, and manufacturing orders and performance-as predictors of recessions in the United States. Ittner and Larcker (1997) examined the relationship between customer satisfaction and corporate financial performance, exploring whether investments in intangible assets like customer satisfaction predict a better financial future. They found that the relationship between customer satisfaction measures and future accounting performance was both positive and statistically significant. These and other analyses used complex statistical techniques to relate events to future outcomes.
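The core move in these analyses is simple even when the full models are not: observe an indicator in one period and relate it to an outcome in a later period. The sketch below is a hypothetical illustration rather than a reproduction of any study cited above; it generates an invented indicator series and fits a logistic regression predicting a downturn in the following period, with all names and data made up.

    # Hypothetical sketch: does an indicator observed in period t predict a downturn in period t+1?
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented quarterly data: an indicator series (e.g., an index of new orders) for 200 quarters.
    indicator = rng.normal(100, 5, size=200)

    # Make a downturn in the *next* quarter more likely when this quarter's indicator is low.
    p_next_downturn = 1 / (1 + np.exp(0.5 * (indicator - 100)))
    downturn_next = rng.binomial(1, p_next_downturn)

    # Relate the current indicator to the subsequent outcome.
    model = LogisticRegression().fit(indicator.reshape(-1, 1), downturn_next)
    print(model.coef_)  # a negative coefficient suggests low indicator values "lead" downturns

In practice, analysts such as those cited above evaluate many candidate indicators, longer lead times, and out-of-sample predictive accuracy; the point here is only the structure of the question.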

Defining Leading Indicators
Work on leading indicators suggests that both the indicators themselves and the process surrounding their identification are important aspects of their utility. Therefore, we define leading indicators as systematically collected data on an activity or condition that is related to a subsequent and valued outcome, as well as both the processes surrounding the investigation of those data and the associated responses. This definition captures several important attributes of leading indicators. First, leading indicators are antecedents to important events that predict or foreshadow those events. Second, leading indicators are not fixed characteristics of individuals or systems; rather, they are conditions or activities that can be changed by action. Third, the search for leading indicators catalyzes a productive inquiry that results in the rethinking of organizational resources or supports. Fourth, the search for leading indicators may help identify or develop more relevant and precise indicators.
Leading indicators share some meaning with terms such as correlates, predictors, and risk factors, but are distinctive. The term correlates describes the connection between variables, but does not convey the antecedent nature of a leading indicator. While leading indicators can be predictors and convey risk, they are distinct from these concepts in that they always represent an actionable condition, whereas predictors and risk factors may convey immutable qualities of individuals or groups.
Our conception of leading indicators stands in distinct contrast to what we see as the prevailing use of data in education today. Although educators commonly focus on data, they attend primarily to the lagging indicator of student test scores, to the exclusion of other indicators of performance or the relationships among indicators. Thus educators primarily use data descriptively rather than investigating the relationships that we describe in this study.

Study of Leading Indicators
Our study was a qualitative investigation of how the concept of leading indicators was emerging in a small sample of school districts with reputations as strong users of data. After identifying the districts, we conducted fieldwork and reviewed documents and artifacts to understand how district leaders used leading indicators. This research grew out of the Annenberg Institute's work in the area of leading indicators. The Institute's Task Force on the Future of Urban Districts used the term in its description of a "smart district" (School Communities that Work, 2002), and the Institute has since published a report focused on the idea of leading indicators (Foley et al., 2008), and a series of spotlight reports on specific indicators (see Musen 2010a; Musen 2010b; Flug 2010).

District Sample
To select districts for our case study, we used a two-step process. First, we reviewed studies on district data use, and we used the networks of the Annenberg Institute for School Reform and the Consortium for Policy Research in Education to gather nominations of districts that were innovative data users. We came up with a list of about 50 districts, then narrowed the list down to 12 that were cited multiple times in the research literature on data use and of whom our colleagues spoke particularly highly. Second, we interviewed each district's director of accountability (or, if there was no such position, the superintendent or chief academic officer) about how they used data. The interviews and our ultimate selection criteria focused on (1) the systems that the district used to regularly collect data; (2) ways for people to access the data; (3) training on data use for district employees; (4) use of data to refine the organizational support systems for schools and teachers; and (5) use of data to modify district central office practices. In making our final selection, we also considered the accessibility of informants in the districts and the convenience of getting to the district for fieldwork. Ultimately, based on these criteria, we selected four districts for our in-depth fieldwork: Hamilton County, TN; Philadelphia, PA; Montgomery County, MD; and Naperville, IL.

Data Collection
Our data collection in the four districts occurred in two phases in 2007 and 2008. In phase one, we conducted a short preliminary interview with the central person in charge of data-the chief accountability officer in Montgomery County, the chief academic officer in Philadelphia, the director of assessment in Naperville, and the director of testing and accountability in Hamilton County. The interview focused on how the district was using data to make decisions; the training that people at different levels of the system received; what data aside from student outcomes the district systematically collected; and how the district shared data with stakeholders. The purpose of this interview was to familiarize the research team with the district to focus our fieldwork.
In the second phase, two researchers went to each site for two days in the spring of 2008, conducting 8-10 interviews apiece and collecting documents from the district. The people they interviewed included educators with cabinet-level positions (superintendents, accountability officers, chief academic officers), district middle managers (directors of professional development, technology directors, evaluation program staff, other data managers and/or data trainers), and major partners (public education fund, vendors, or other data partners). We also visited two schools nominated by the district, where we interviewed principals and conducted focus groups with teachers.
Interviews lasted 45-60 minutes and were conducted with semi-structured protocols, which followed a sequence of pre-designed questions while still giving the interviewee opportunities to respond to the context. Interviewees were asked to describe the key components of their data-informed decision-making system; why the district had made a commitment to data-informed decision-making; how data were used at the central office and school levels; what indicators had emerged as particularly useful and how they used those indicators; what data they would like access to but did not have; how data were changing district practices, with examples; and how they used data in their own decision-making. In all, we interviewed 73 people across the four districts. Appendix A provides examples of the protocols we used.

Analysis Methods
Overall, the study employed a multi-site cross-case synthesis (Yin, 1994) to explore data use in the four districts, focusing on the use of leading indicators. The analysis proceeded through several steps. First, following data collection, we developed overarching impressionistic write-ups for each of the four districts. Then, all interviews were digitally recorded and transcribed. Next, data from the interviews were uploaded into the NVIVO qualitative data software program. Initial qualitative coding of interview transcripts and observation notes used deductive, pre-structured coding categories developed from both the literature and our framework (Miles & Huberman, 1994). As we proceeded with coding, further codes were developed inductively as they emerged from the data (Patton, 2002).

Examples of the Use of Leading Indicators in School Districts
Careful attention to leading indicators suggests that districts use data not only to track progress, identify individual students for assistance, or evaluate the effectiveness of programs, but also to model the paths that lead to successful outcomes. All four districts had ways to monitor student academic outcomes-grades, test scores, graduation and promotion rates, etc. But what set them apart from other districts was that they also carefully identified and tracked indicators that they viewed as predictors of outcomes they valued. Further, their investigation of these leading indicators led to policy changes that strengthened their supports for students. Here we examine the indicators the districts used, how they identified them, and how they used them to inquire about patterns in their systems and develop ways to modify their systems to improve student achievement.
Our findings are organized into three major sections. First, we present three key examples of how the districts developed and used leading indicators: students' age and course credits as leading indicators of dropping out of high school; course-taking patterns as leading indicators of college readiness; and PSAT test taking as a leading indicator of college eligibility. Second, we describe a case in which leading indicators were not readily available and our sample districts struggled with how to measure a concept-student engagement-that they identified as useful but could not easily capture. Third, we describe the infrastructure and other key central office supports needed to identify and use leading indicators.

Leading Indicator Example 1: Students' age and course credits as leading indicators of dropping out of high school
Reducing student dropout rates is one of the most vexing problems that educators face today. According to the Editorial Projects in Education Research Center (2010), the average dropout rate is 40 percent in the nation's 50 largest districts and reaches almost 60 percent in some districts. One of the urban districts in our sample was particularly focused on reducing dropouts. This district used exploratory data analysis to identify several important leading indicators of dropping out of high school, spurring several productive policy shifts.
From the district leadership, we learned that two experiences converged to focus attention on high school dropout rates. First, the leaders were alarmed by rising dropout rates. According to one of the central office administrators, "We saw our high school dropout rates increasing and wanted to do something about it. So we started looking at what was causing students to drop out." Second, the district started an adult high school for students who had previously dropped out and now wanted to get either a GED or a high school diploma. This caused them to realize the magnitude of the dropout problem. Another central office administrator said: "Three years ago, we started our first adult high school. When we started that, we had over a thousand kids! We wondered what caused these kids to drop out-what's the common denominator?" Thus district leaders established the dropout rate as a major problem in the district.
Examining their data, district leaders started to investigate leading indicators that were associated with students' dropping out. For example, they discovered that 64 percent of high school dropouts were over-age for their grade. As they dug deeper, they found several other relevant indicators that seemed to be related to a student's dropping out. One district administrator described the process: We analyzed transcripts and then pulled in kids we could locate to find out their stories. Then we went back and pulled up a list of every over-age, under-credited student in the high school. We went one, two, three standard deviations off the norm. When we did that, we had a wealth of data. And what we discovered . . . is that we had lots of youngsters who were older than their peers and had fewer credits than they should appropriately have. Thus the district focused on students' age and number of course credits as leading indicators of their risk of dropping out.
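As a rough sketch of the kind of flagging the district describes, the snippet below computes how far each student is over-age and under-credited relative to grade-level expectations and flags those who fall outside the norm. The column names, expectations, and thresholds are invented for illustration and are not the district's actual rules.

    import pandas as pd

    # Invented student records; a district would pull these from its student information system.
    students = pd.DataFrame({
        "student_id": [101, 102, 103, 104],
        "grade":      [9,    9,   10,   10],
        "age":        [14,  16,   15,   17],
        "credits":    [6.0, 2.5, 12.0,  5.0],
    })

    # Illustrative expectations: typical age and expected cumulative credits by grade.
    expected_age     = {9: 14, 10: 15}
    expected_credits = {9: 6.0, 10: 12.0}

    students["years_over_age"] = students["age"] - students["grade"].map(expected_age)
    students["credits_behind"] = students["grade"].map(expected_credits) - students["credits"]

    # Flag students who are both over-age and under-credited -- the leading indicators the district used.
    students["flagged"] = (students["years_over_age"] >= 1) & (students["credits_behind"] > 0)
    print(students[students["flagged"]])

A fuller version would express how far each student sits from the norm (for example, in standard deviations, as the administrator describes) rather than producing a single yes/no flag.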
As district leaders further scrutinized the data about over-age students, another important pattern began to emerge. They discovered key transition points where students were particularly vulnerable to falling behind, especially from fifth to sixth grade, from eighth to ninth grade, and from ninth to tenth grade. According to the district's director of curriculum and instruction, "Lots of students were not successful in sixth grade. There are a huge number of retentions in sixth grade, as compared to fifth grade. The number of disciplinary and special education student referrals was also much higher in sixth grade than in fifth. It's a transition problem."

The discovery of leading indicators for dropping out brought a number of changes to the district. First, the central office began paying particular attention to the transitions between elementary and middle school and middle school and high school. A member of one of the district's external partner organizations, which focused on school safety nets, explained that the "overarching goal of the entire initiative is to prepare every single middle school student . . . for a rigorous high school curriculum." Thus the district's investigation of the target population led to district-wide policy.
Second, the district began to use its data system to flag students at every grade who were over-age, so that individuals at the school level could learn why each student was over-age and whether these students needed additional supports. This helped the district target its efforts on students who were at risk.
Third, the district found that each of its high schools had a different way of defining and reporting student promotion from ninth to tenth grade, so administrators went to the school board to develop a consistent policy defining matriculation in high school. This took two years, but, as the district's associate superintendent for curriculum and instruction said, "You've got to get the data clean and clear if you want to get accurate and precise numbers."

This story conveys several important qualities of leading indicators. First, no single factor stood out; rather, a series of leading indicators of students' risk for dropping out was identified over time, including students' age and course credits at key transitional junctures. Second, the district's investigation of predictors of high school dropouts took time to unravel. Third, what the district learned resulted in a number of policy responses, including more attention to at-risk students and at-risk junctures and a change in the reporting of high school promotion. This experience illustrates the investigatory nature of the search for leading indicators. The process of inquiry into the causes of student failure in high school took leaders back down the trails that led students to drop out and helped them take preventive action.

Leading Indicator Example 2: Course-taking patterns as leading indicators of college readiness
The story of how one district discovered that its offered course sequence was not preparing students for college, and of how it responded, is another example of the potential of both leading indicators and the process surrounding them.
A culture of examining data was the catalyst for the district's search for leading indicators of college readiness. According to the district's associate superintendent of curriculum and instruction, the story began with the incoming superintendent's emphasis on using data to support decision-making. As the associate superintendent explained, when the superintendent first came to the district: He said that in order to get the money to support our work, we had to indicate that we are making success, so our outcomes are critical. And there's no better way to do that than by examining data, making decisions based on that data, monitoring those decisions and the interventions you put in place, altering them if they are not working and continue to focus, focus, focus. So that's pretty much been his message from day one.

As the district leaders examined their high school course-taking data, they were troubled by the fact that students could take an accepted curricular path in high school and still not be prepared for college.
To project students' readiness for college, the district began to identify the courses students would need in order to be prepared for college and to look at predictors of success in those courses. One administrator described the resulting process as the district traced these patterns earlier and earlier in students' schooling: The algebra has got reading and math in it, both because there are story problems and you have to problem-solve to do the real algebra. We used that at the eighth grade. So you're starting to build leading indicators that predict. Then what predicted success on algebra was fifth grade math performance. What we used to teach in seventh and eighth grade in math, we took down to the fifth grade and we called it Math A, and it became the accomplishment of that. And then here's where we checked your reading and language arts as well, and because, as you get down lower, it's harder to check for numeracy and easier to check for literacy. And then it became what projected success on this were kindergarten reading skills.

This led the district to focus more attention on early reading proficiency as the foundation for student success in both mathematics and English, and as the building blocks of college readiness. The district invested in early-childhood education interventions such as tutoring and double doses of reading instruction for underperforming students, and it established benchmarks for reading at each grade level.
As the importance of the pathways to college readiness became clear, the district developed and publicized a progression chart of mathematics courses from kindergarten to grade 12. The chart showed the possible combinations and sequences of courses that students could take, emphasizing the courses students needed to be prepared for college. The chart is intended to help students, parents, teachers, and counselors see what sequences will get students to state standards, college readiness, or accelerated preparation. The superintendent described how it was created: What we did was take AP Calculus, and we took AP English-that's your math and your language arts-and said, Where did you have to be in middle school, elementary school, all the way back to grade two to get [to AP Calculus and AP English]? . . . Then we looked at the trajectory. . . . If they're on track here, is there a high probability that you'll get to the next point? And that's how we used data to build our curriculum progressions.

Thus the district traced backward from college readiness to the elementary grades and used what it learned to build a stronger scaffolding of courses for students-a strong example of the power of leading indicators. First, the district identified a mismatch between course-taking patterns and college readiness. Second, it used existing data on course taking to identify trajectories of courses that prepared students for college. In doing so, it identified course patterns that were leading indicators of college preparation. Third, it used this knowledge to take action by promoting more rigorous course patterns as the path to college, thereby creating clear expectations for students and parents.
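The backward mapping the superintendent describes can be read as a chain of conditional probabilities: among students who reached one milestone, what share reached the next? The sketch below computes those transition rates from invented longitudinal records; the milestone names and data are hypothetical, not the district's.

    import pandas as pd

    # Invented longitudinal records: one row per student, True/False for each milestone reached.
    records = pd.DataFrame({
        "student_id":     range(1, 9),
        "grade5_math_a":  [True, True, True, False, True, False, True, False],
        "grade8_algebra": [True, True, False, False, True, False, True, False],
        "ap_calculus":    [True, False, False, False, True, False, True, False],
    })

    # Milestones ordered from earliest to latest; the district traced them backward from AP Calculus.
    milestones = ["grade5_math_a", "grade8_algebra", "ap_calculus"]

    # For each step, estimate P(next milestone reached | earlier milestone reached).
    for earlier, later in zip(milestones, milestones[1:]):
        on_track = records[records[earlier]]
        print(f"P({later} | {earlier}) = {on_track[later].mean():.2f}")

Strung together across grades, estimates like these are what allow a district to say whether students who are "on track here" have "a high probability" of reaching the next point.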

Leading Indicator Example 3: PSAT test taking as a leading indicator of college eligibility
The PSAT is designed as a practice test to help high school students prepare for and perform better on college entrance exams like the SAT and ACT. Our third example of leading indicators focuses on two districts' use of the PSAT as both a way of improving college entrance exam performance and a way of guiding students into appropriate courses. Using information both on who was not taking the PSAT and on the performance of those who were, the districts identified students who should be taking the PSAT as well as students who scored well on the PSAT but were not enrolled in appropriately challenging courses. In this section, we focus on how the districts used the PSAT in novel ways both to help students prepare for college and to match students' abilities with their course placements.
The research office in one of these districts put together a brief on pre-college testing, investigating the claim that, as a central administrator put it, "Everyone's taking the PSAT." The research brief showed that only about 60 percent of eligible students were taking the PSAT. It also showed that students who had taken the PSAT scored higher on the SAT. Together, these facts led to a huge push to get more students to participate in pre-college testing, beginning as early as the ninth grade. Now, more than 90 percent of the students in the district take the PSAT.
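A minimal sketch of the two checks that might sit behind such a research brief (all data and column names are invented): the PSAT participation rate among eligible students, and a comparison of SAT scores for students who did and did not take the PSAT.

    import pandas as pd

    # Invented records for eligible students: PSAT participation and a later SAT score.
    students = pd.DataFrame({
        "took_psat": [True, True, False, True, False, True, False, True],
        "sat_score": [1150, 1210,   980, 1080,  1010, 1250,   950, 1120],
    })

    # Check 1: what share of eligible students took the PSAT?
    print(f"PSAT participation: {students['took_psat'].mean():.0%}")

    # Check 2: average SAT score for PSAT takers versus non-takers.
    print(students.groupby("took_psat")["sat_score"].mean())

A raw comparison like this does not show that taking the PSAT causes higher SAT scores; it only surfaces the relationship that prompted the district's push for broader participation.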
In addition to the effort to increase PSAT participation, the district used scores from the PSAT to place students in more appropriate courses and to intervene with students who did not perform well. As a central office administrator explained: We made it so that if you score a certain level [on the PSAT], kids have to be in the [more rigorous] courses. . . . Schools have to put the kids [who are] scoring high in these courses, but they [the students] need to have the supports if they haven't been in higher level courses in the past. This led the district to bolster its supports for students who were required to take more ambitious courses.
This example of identifying a leading indicator, in this case of college admission test-taking, illustrates other qualities of the process. First, the search for leading indicators often involves testing hypotheses or questioning assumptions. In this case, there was a widespread belief that all students were taking the PSAT, which proved to be unfounded. Second, this case shows how the search for leading indicators of college admission testing resulted in a rethinking of student course placement. Third, this vignette demonstrates how the identification of leading indicators can lead to the targeting of supports and resources to assist students.

The Challenge of Capturing Student Engagement as a Leading Indicator of Success
The leading indicators we have discussed so far involve data that many school districts already collect and that are relatively easy to measure. But there are other potentially informative leading indicators, such as student engagement, that are harder to measure.
Leaders in several of the study districts viewed student engagement as a leading indicator of student performance. As one district assistant superintendent explained: "We think that getting students engaged in their learning is a key part of their being academically successful. When you have a student that is engaged in their school work, you've won half the battle." Once leaders viewed student engagement as a leading indicator of student outcomes, the question became how to measure it. The study districts identified some indicators related to student engagement that they readily collected, such as attendance and suspension data. In one district, attendance data reports had been delivered to schools once a month and at the end of each semester. Once the district began to view attendance as a leading indicator of achievement, it began to share these data more frequently with schools. Now, attendance data are shared on a 10-day cycle, allowing principals to more quickly identify and respond to attendance issues.
In the same district, a school leadership team asked the district's data team to speak to the faculty about the relationship between attendance and student achievement. Other data they looked at were suspension and major incident rates. They looked not only at the numbers of suspensions and major incidents in each building, but also at whether a small group of students accounted for most suspensions. They worked to understand how many instructional hours those students were missing and the academic costs of those absences. One district leader put it this way: The serious incidents and suspension indicators were connected to the theory that we all believe in-that if you have a highly volatile school, you can't have really good instruction take place. [So we] help teachers and principals monitor and bring down the level of violence and disruption so learning can take place.

However, attendance and suspension data only told part of the story. As one administrator pointed out, many students attend school and do not have behavior problems, yet are not motivated to do their best.
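Returning to the suspension analysis described above, here is a hypothetical sketch of the concentration question the district asked: what share of suspensions comes from a small group of students, and roughly how much instructional time do those students miss? The student IDs, figures, and six-hour instructional day are all invented assumptions, not the district's data.

    import pandas as pd

    # Invented suspension log: one row per incident, with days out of school.
    suspensions = pd.DataFrame({
        "student_id": [1, 1, 1, 2, 2, 3, 4, 1, 2, 5],
        "days_out":   [3, 5, 2, 1, 3, 1, 2, 4, 2, 1],
    })

    per_student = (suspensions.groupby("student_id")
                   .agg(incidents=("days_out", "size"), days_out=("days_out", "sum"))
                   .sort_values("incidents", ascending=False))

    # How concentrated are suspensions? Share of incidents from the most-suspended fifth of students.
    top_n = max(1, len(per_student) // 5)
    share = per_student["incidents"].head(top_n).sum() / per_student["incidents"].sum()
    print(f"Top {top_n} student(s) account for {share:.0%} of suspensions")

    # Rough instructional hours missed, assuming a six-hour instructional day (our assumption).
    per_student["hours_missed"] = per_student["days_out"] * 6
    print(per_student)

Even so, as the administrators quoted above note, counts like these tell only part of the story about engagement.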
Thus districts began to look for more refined representations of engagement rather than relying on existing proxies. Districts used several approaches, including student surveys and classroom walk-throughs, to measure student engagement. Surveys were the most common tool, but one district used systematic classroom observations. In this district, administrators built a tool to assess the way teachers were engaging with students and then did quick unannounced visits to classrooms to collect a systematic picture of student engagement across the school/district. A central office administrator explained: We have teams of staff members who go into a building and, basically, they peek their head into a classroom for a few minutes and they look at the activities that are going on in the classroom, and they rank how students are engaged in the class. Everything from passively sitting there and being lectured to, to taking control of their own learning and doing activities that are helping create their own meaning from what they're doing. And we gather [and look at] that data. . . . So now we're looking at the data in terms of best practices and how we want students engaged in learning.
Despite efforts like these, leaders in our study districts were frustrated with how hard it was to measure student engagement. One central office administrator said: I believe our students [being] on task is certainly a correlational behavior with their success, but we have struggled with some of the things that we think are good key performance indicators, getting them to a point in which they roll up and can be quantified and used in a format that can help us understand what engagement looks like and what it is related to.

In essence, the districts were struggling to find a balance between efficiency and value. They wanted more incisive measures of student engagement, yet they didn't want to employ elaborate, resource-consuming efforts to collect the data. As the superintendent of another district commented, "We're looking desperately at what are those indicators that are predictors. And what are those ways of measuring them that are not so intrusive, yet tell a story that doesn't consume a great deal of time and gets you feedback quick?"

Student engagement was seen as important in our study districts, but there are only so many ways to easily measure and aggregate student engagement data so they become useful for administrators and teachers. Several of our districts expressed an interest in finding better measures of their students' engagement, but they had found few ways to get at these data. In one district in particular, participants almost universally commented on this problem. A principal in the district exemplified the sentiment when he said, "If a kid feels engaged in the system, he is going to learn better. Right now, it is hard for us to get information like that." Another district administrator described it as a need to measure "socio-emotional" data. "How to assess social-emotional data is an area where we tend to go by gut rather than data," he said. "We need training on what tools are out there, what really is going to inform how we help kids in that area. Lots of research shows that social-emotional concerns can affect achievement."

The example of indicators of student engagement shows that the search for meaningful indicators may push districts to go beyond the data they readily collect. Student engagement is a good example of an area that district leaders identified as important, but for which they lacked a meaningful representation. This led the districts to search for ways to capture student engagement. This example also shows how an indicator such as student engagement might be used as both a lagging and a leading indicator, for although other variables may predict engagement, engagement could also be used as a leading indicator for other outcomes.

Infrastructure for Identifying and Supporting Leading Indicators
The central offices of our four study districts played a big role in building the infrastructure to support investigations and develop theories about leading indicators. Each of our study sites developed the technical capacity to collect information, ensure its accuracy and completeness, make it accessible, and present it in a user-friendly format. They did this in a number of ways, but common features included the use of data warehousing, a system of standardized summative and other assessments, and an easy data input and interface. Beyond infrastructure, districts created opportunities for key stakeholders to examine the data provided. These included data-informed discussions, training, regular data meetings, benchmarking and sharing best practices, and establishing a data culture.
Use of Data Warehousing Technology. To support their work, all four sites have developed data warehouses, which link information stored in different locations and formats. Data warehouses allow data from different sources to be connected and accessed by multiple users, usually through a web interface. For school districts, this typically means connecting disparate "legacy" systems-e.g., data on student demographics, special programs and test scores, and finance and human resources-that are collected by different departments and schools for different purposes. By combining storage, access and reporting tools, data warehouses remove key obstacles to managing knowledge and using data. With data warehouses, authorized users do not have to go through a technology professional or data analyst to get access to data, and data can be updated in a timely way-even daily, as in some of our study districts. Data warehouses can promote data use by making data more accessible, more powerful and more efficient.
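To make concrete what "linking disparate legacy systems" involves, the sketch below joins invented demographic, program, and assessment extracts on a shared student identifier; this is essentially the linkage a warehouse performs once, centrally, so that authorized users can query the combined view. The table and column names are hypothetical, not those of any study district.

    import pandas as pd

    # Invented extracts from three separate "legacy" systems, keyed by a common student ID.
    demographics = pd.DataFrame({"student_id": [1, 2, 3], "grade": [9, 9, 10], "school": ["A", "A", "B"]})
    programs     = pd.DataFrame({"student_id": [1, 3], "program": ["ELL", "Special Ed"]})
    assessments  = pd.DataFrame({"student_id": [1, 2, 3], "state_test_score": [410, 385, 402]})

    # The warehouse does this linking once so users do not need an analyst to combine files by hand.
    warehouse_view = (demographics
                      .merge(programs, on="student_id", how="left")      # keep students with no program flag
                      .merge(assessments, on="student_id", how="left"))
    print(warehouse_view)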
A System of Standardized Summative and Other Assessments. Each of our study districts relies heavily on state and local standardized tests for information about school and student outcomes. State tests are the most frequently used and manipulated, but many of our study districts also added local standardized-test data, including end-of-course exams and district-wide interim assessments. Indeed, these additional assessments, which measure student skills as they develop instead of just at the end of the year, were critical. One principal told us, "Until you get to the point where you can inform yourself about where your students are . . . it's not just summative assessments with pass or fail, but what did you learn along the way." Another principal summed it up: "The formative assessment piece is really key."

Easy Data Input and Interface. The districts in our study worked to make collecting and organizing information easy, and they all offered some form of classroom- or school-based input or scanning of some assessment data, such as DIBELS reading assessments or end-of-course exams. They also provided an easily accessible way for school personnel to examine data about students or groups of students, usually through a web interface.
Time and Supports to Foster Data-Informed Discussions. The districts set aside time and developed processes and structures to foster conversations about key data. This involves training for central office staff, principals, and teachers to examine and use data and data systems, as well as regular data meetings and other opportunities to benchmark against other classrooms, schools, and districts, and to share best practices. However, in the United States, the average teacher has only five to seven hours per week for lesson planning and collaboration with other teachers (for example, to discuss and use data). The short supply of these "slack resources" (Leana, 2010) that would allow for regular data meetings and benchmarking opportunities may limit the use of leading indicators in education.
Training. The districts had multiple ways of providing data-use training. In districts where school staff had easy access to data, principals (and sometimes teachers) were either trained in the use of the database during the summer or offered online training. However, in one district this training was offered only to central office staff and principals, not teachers. More often, districts offered training and professional development around the use of data to school-based teams; the training was provided on-site by central office staff. The training and professional development around data-informed decision-making that our subjects found most useful included all staff and was embedded within existing groups or programs, such as a school-based leadership team or a principal leadership program for assistant principals. Principals in one district talked about data retreats for multiple school-based teams. As more teacher teams are trained in data use, the use of data has become part of the culture in schools. There was also a required course on data-informed decision-making for assistant principals. Beyond the training in looking at and using data, the assistant principal development program fostered relationships and trust. Another principal said, "You have a network of people to call and talk to and pose questions to."

Regular Data Meetings. Our districts relied on "data chats," "data retreats," or a similarly named process consisting of regular meetings (annual, semiannual, or monthly) with school leadership teams to discuss school performance data. One teacher described the process: "At our elementary school . . . after every [formative] assessment round . . . we meet right after those rounds. We look at [the data] as the teams. Our principal has us doing data chats with her and the administration once a quarter." A principal from a different district described a similar process: "[At the data retreat], the leadership team is there together; we're looking at data and getting that a-ha together. . . . [We have] two days of rich discussion. Are we seeing results? Identifying kids early for interventions? Are they making a difference?"

Establishing a Data Culture. All these efforts have helped build a culture of data use in our four study districts. Not everyone in every district is a "power user" of data, but our respondents in each district told us that a critical mass had been reached, and data-informed decision-making had become a regular part of their practice. The comment of one teacher exemplifies the constructive spirit in which data were examined in her district.
[Everyone] understands that it is about helping the kids, making connections . . . It's a very healthy process; we look at trends over time. One blip does not . . . If you have a down year, you ask why. If you have two down years and didn't do anything, then, probably, shame on you. It's a question, not something to freak out about. It's all about "How do I get better?"

Respondents at several sites spoke similarly about data as empowering and said that it contributed to a sense of efficacy. Our research made it clear to us that using data was not just a monitoring or compliance-oriented function; rather, examining data was a key aspect of developing a professional learning community.

Discussion
Growing access to data, technology, and analytic tools offers education leaders abundant opportunities to use a range of indicators to improve educational decision-making. The promise of data, however, does not dictate their use. There are two shifts in thinking that would help leaders make better use of the concept of leading indicators. First, more emphasis must be placed on the indicators that lead to valued outcomes, as opposed to the outcomes themselves. Currently, education policymakers tend to focus on the single lagging indicator of high-stakes test scores, perhaps because of their prominent role in district and state accountability systems. Unfortunately, test results are lagging indicators, because they are the culmination of education efforts and are not directly actionable. An emphasis on leading indicators, by contrast, would focus policymakers' attention on the activities, conditions, and supports that lead to test performance, rather than on the test data themselves. Second, as the title of this paper implies, the use of leading indicators involves a process of search that generates important knowledge that can be used to improve the systems within which kids learn. The pursuit of leading indicators, therefore, can be a productive component of a strategy to make better use of data for organizational improvement.
Leading indicators can facilitate more intelligent use of data in at least three constructive ways. First, the identification of leading indicators is proactive, because the indicators precede and contribute to a valued future state. Leading indicators offer the expectation that influencing them can lead to changes in outcomes.
Second, the search for leading indicators tends to spur investigations. As the four examples in this article demonstrate, the search for predictors of important outcomes often led district leaders to explore and improve key elements of their systems. Thus the search for leading indicators is a backward-tracking process that is both proactive and preventive. From an analytic perspective, the investigative process moves districts from an emphasis on describing patterns in outcomes to an emphasis on looking for relationships among variables. This allows district leaders to model the relationships among indicators and gives more attention to variables that can be manipulated by policy.
Third, the search process itself seems to encourage the adjustment of resources to support students. Once district leaders identify leading indicators, the process seems to create an imperative to bolster the factors that influence those indicators. For example, as our first case showed, the identification of leading indicators of students dropping out led the district to support students at key school transition points, which was an important reallocation of educational resources to improve system supports. In the example of identifying leading indicators of college eligibility, efforts to have students take more challenging mathematics classes resulted in more supports for these students to meet the more rigorous expectations.
Attention to leading indicators also has several important implications. One is the need for appropriate data. Because of the limited data available to most school districts, looking for meaningful leading indicators may lead to the proverbial search for the key under the lamp post (because that's where the light is). The search for leading indicators is largely an exploratory process, as it should be. But it consequently runs the risk of focusing on things for which variables are readily available, rather than on root causes. As we saw in the search for ways to measure student engagement, important variables are not always easily accessible. Settling for what exists may impede efforts to identify truly meaningful variables.
A second implication is that we need to broaden our definition of which data can serve as indicators. We tend to think of indicators as quantitative, and they usually are. But we saw several examples in our case studies of the collection of qualitative data. For example, in the search for predictors of dropping out, interviews with students "to find out their stories" played an important role. In the search for indicators of student engagement, classroom walk-throughs produced a qualitative perspective. As these examples illustrate, data for identifying leading indicators can come from a range of sources, not just quantitative measures.
A third implication is that leading indicators can be ephemeral. If districts respond effectively to a leading indicator, then its relationship to the lagging outcome may fade over time. In this sense, the search for leading indicators is a process that shifts and changes as districts adjust resources to shore up areas where they find that additional supports are needed.
A fourth implication is that leading indicators aren't always single indicators. The study districts were not only prioritizing indicators, they were figuring out ways to combine them, as in the district that tied dropout rates to over-age and under-credited students. The district put together two pieces of data that it regularly collected; doing so gave the district's leaders a new perspective on how to target their education resources. Identifying powerful leading indicators requires this kind of exploration and synthesis.
A fifth implication is that leading indicators will be used by central office and school-based staff only if adequate support infrastructure is provided. This goes beyond the technical work of building data systems and data warehouses to building a culture of data use. That culture is supported through training on how to use indicators effectively and, as importantly, through time for central office and school staff to collaborate on how best to interpret and use data to improve student and system outcomes. The districts in this study had to a significant degree created the support infrastructure and data culture to use indicators effectively, but such districts are still rare.
The concept of leading indicators, often used in business and economics, should become more central in education as data use becomes more sophisticated. Leading indicators can move us from using data descriptively to focusing on the relationships among variables. An emphasis on leading indicators also prompts education leaders to more actively explore the factors that contribute to important outcomes, rather than less productively focusing on the outcomes themselves. Finally, increased focus on leading indicators should lead to improvements in the data systems themselves, for effective policy is only as accurate as the data upon which it is based.

Appendix A (excerpt): Interview Protocol

a. What other kinds of data don't you have access to that would be helpful to have?
7. How, if at all, has the data system influenced the ways in which you work with families and community groups?
8. How, if at all, do you learn about best practices from other teachers or schools in the district?
9. Do you think the benefits of using data are worth the investment of effort you are required to make? Why or why not?
10. What do you think are the biggest challenges for your district in using data to inform decision-making?