Technical and Ethical Issues in Indicator Systems


  • Carol Taylor Fitz-Gibbon, University of Durham
  • Peter Tymms, University of Durham



Keywords: Educational Indicators, Educational Research, Elementary Secondary Education, Foreign Countries, Higher Education, Research Utilization, Technology


Most indicator systems are top-down, published management systems that address primarily the issue of public accountability. In contrast, we describe here a university-based suite of "grass-roots," research-oriented indicator systems that are now subscribed to, voluntarily, by about 1 in 3 secondary schools and over 4,000 primary schools in England. The systems are also being used by groups in New Zealand, Australia and Hong Kong, and with international schools in 30 countries. These systems would not have grown had they not been cost-effective for schools. That demanded the technical excellence needed to provide one hundred percent accurate data in a very timely fashion. An infrastructure of powerful hardware and ever-improving software is needed, along with extensive programming to provide carefully chosen graphical and tabular presentations of data, giving at-a-glance comparative information. Highly skilled staff, always learning new techniques, have been essential, especially as we move into computer-based data collection. It has been important to adopt transparent, readily understood methods of data analysis where we are satisfied that these are accurate, and to model the processes that produce the data. This can mean, for example, modelling separate regression lines for 85 different examination syllabuses for one age group, because any aggregation can be shown to produce unfair comparisons. Ethical issues surprisingly often lurk in technical decisions. For example, reporting outcomes from a continuous measure in terms of the percentage of students who surpassed a certain level produces unethical behavior: a concentration of teaching on borderline students. Distortion of behavior and data corruption are ever-present concerns in indicator systems. The systems we describe would probably have failed to thrive had they not addressed schools' ongoing concerns about education.
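The threshold distortion described above can be sketched in a few lines (the scores and pass level below are invented for illustration, not drawn from any school data): a percent-above-level indicator is unmoved by large genuine gains among the weakest students, yet jumps when only borderline students are nudged over the line.

```python
# Illustrative only: why "percent above level X" rewards borderline coaching.
# All scores are hypothetical; the pass level is arbitrarily set at 50.
scores = [20, 35, 48, 49, 49, 60, 72, 85]
LEVEL = 50

def pct_above(xs):
    """Percentage of students at or above the pass level."""
    return 100 * sum(x >= LEVEL for x in xs) / len(xs)

# Strategy A: large genuine gains for the weakest students.
gains_weakest = [x + 10 if x < 40 else x for x in scores]
# Strategy B: tiny nudges for the borderline students only.
gains_borderline = [x + 2 if 48 <= x < LEVEL else x for x in scores]

print(pct_above(scores))           # baseline
print(pct_above(gains_weakest))    # unchanged despite real improvement
print(pct_above(gains_borderline)) # jumps, despite a trivial total gain
```

The indicator is blind to Strategy A and richly rewards Strategy B, which is exactly the incentive problem the authors flag; a mean of the continuous scores would register both strategies in proportion to the actual learning gain.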
Moreover, data interpretation can only be completed in the schools, by those who know all the factors involved. Thus a commitment to working closely and collaboratively with schools in "distributed research" is important, along with "measuring what matters"... not only achievement. In particular, the too-facile interpretation of correlation as causation that characterized much school effectiveness research had to be avoided, and the need for experimentation promoted and demonstrated. Reasons for the exceptionally warm welcome from the teaching profession may include both threats (such as the unvalidated inspection regime run by the Office for Standards in Education) and opportunities (such as site-based management).
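The abstract's point about fitting separate regression lines per examination syllabus can be illustrated with a small simulation (entirely hypothetical; this is not the CEM Centre's model or data): two syllabuses share the same intake-to-outcome relationship but one grades more severely, so a single pooled regression makes students on the harder syllabus look like under-performers, while per-syllabus lines remove the artefact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two examination syllabuses with identical underlying
# intake-outcome relationships, but syllabus 1 grades about 8 points lower.
n = 200
intake = rng.uniform(40, 80, size=n)           # prior-attainment score
syllabus = rng.integers(0, 2, size=n)          # 0 = "easier", 1 = "harder"
severity = np.where(syllabus == 1, -8.0, 0.0)  # grading severity offset
outcome = 0.9 * intake + severity + rng.normal(0, 3, size=n)

# Pooled model: one regression line fitted across both syllabuses.
b1, b0 = np.polyfit(intake, outcome, 1)
pooled_resid = outcome - (b0 + b1 * intake)

# Separate models: one regression line per syllabus.
sep_resid = np.empty(n)
for s in (0, 1):
    mask = syllabus == s
    c1, c0 = np.polyfit(intake[mask], outcome[mask], 1)
    sep_resid[mask] = outcome[mask] - (c0 + c1 * intake[mask])

# Under the pooled model the harder-syllabus group's mean residual is
# clearly negative (an unfair "under-performance" verdict); under the
# per-syllabus models it is close to zero.
print(round(pooled_resid[syllabus == 1].mean(), 1))
print(round(sep_resid[syllabus == 1].mean(), 1))
```

With 85 syllabuses instead of two, the same logic applies: residuals are only comparable once each syllabus's own intake-outcome line has been removed.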



Author Biographies

Carol Taylor Fitz-Gibbon, University of Durham

After 10 years of teaching physics and mathematics in a variety of schools in the U.K. and then the U.S., Carol Fitz-Gibbon conducted a study for the U.S. Office of Education on the identification of mentally gifted, inner-city students and then became a Research Associate for Marvin C. Alkin at the Center for the Study of Evaluation, UCLA. She completed a Ph.D. in Research Methods and Evaluation, obtained a grant on the design of compensatory education, co-authored a series of textbooks and returned to the U.K. in 1978 planning to continue work on cross-age and peer tutoring. But the success of an indicator system she developed with 12 schools in 1983 led her into other areas. Much of this work is described in the prize-winning book Monitoring Education: Indicators, Quality and Effectiveness (1996).

Peter Tymms, University of Durham

After taking a degree in natural sciences, Peter Tymms taught in a wide variety of schools from Central Africa to the north-east of England before starting an academic career. He was "Lecturer in Performance Indicators" at Moray House, Edinburgh, before moving to Newcastle University and then to Durham University, where he is presently Professor of Education. His main research interests are in monitoring, assessment, school effectiveness and research methodology. He is Director of the PIPS project within the CEM Centre, which involves monitoring the progress and attitudes of pupils in about 4,000 primary schools. He has published many academic articles, and his book Baseline Assessment and Monitoring in Primary Schools has recently appeared.




How to Cite

Fitz-Gibbon, C. T., & Tymms, P. (2002). Technical and Ethical Issues in Indicator Systems. Education Policy Analysis Archives, 10, 6.