Practical Outcomes …


The signed chi-square measure was used in People in Britain – a census atlas – and by some other research students, who were not dependent on GIS systems for their analyses and mapping.  In the 1990s, the University of Manchester broke with tradition and used signed chi-squares to calculate the primitive statistical indicators on which the Department of the Environment's 1991 Indices of Local Conditions were based (Bradford et al, CHECK).  These Indices were used in many area-based programmes throughout the 1990s, and it has been estimated that some 8 billion pounds have been allocated using this measure under the Single Regeneration Budget and other programmes (Connelly and Chisholm, CHECK).






Every time we change the metrics, it is like a lottery for people out there, since there will be a different set of winners and losers.  Unfortunately, we cannot have winners without losers.  So there have been criticisms of the new methodology, especially from the losers.  To some extent their criticisms are justified.  Owing to their inexperience with the new methodology, the Manchester team made some mistakes.  However, it is more important to get the methodology for the analysis of the 2001 census correct.






The University of Oxford undertook a major review of social indicators on behalf of the Department of the Environment, Transport and the Regions (DETR).  The DETR report may be downloaded from their site, which also contains a link to a summary report:
Indices of Deprivation 2000 – Summary (.pdf 159kb)





The review rejected the signed chi-square measure in favour of a return to ratios, with some averaging to reduce the impact of the small-number problem.  There is clearly a need for an independent evaluation of the DETR 2000 Indices of Deprivation, but this is impossible since the small area statistics on which the analyses were based are confidential.  However, it should be possible to evaluate the methodology using other data in the public domain.  So here is a challenging project for several PhD students.  If you decide to tackle this problem, I would urge you to think beyond ratios and signed chi-squares and scrutinise more fundamental philosophical issues.  I believe that the static ranking provided by ratios has encouraged a mechanical (mindless?) approach to data analysis.  Ratio-based analyses are grounded in the philosophical assumptions underpinning quantitative empiricism.
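The small-number problem that the averaging is meant to dampen is easy to illustrate.  The sketch below is a hypothetical example (the 8% national rate, the function name and the area counts are invented for illustration, not taken from the DETR methodology): a ratio of observed to expected counts is far more volatile in a small area, where a single extra case moves the indicator dramatically.

```python
NATIONAL_RATE = 0.08  # hypothetical national incidence rate

def deprivation_ratio(observed, population, rate=NATIONAL_RATE):
    """Ratio of observed count to the count expected at the national rate."""
    expected = population * rate
    return observed / expected

# In a 25-person area, one extra case moves the ratio from 1.0 to 1.5;
# in a 2500-person area, the same change barely registers.
small_before = deprivation_ratio(2, 25)      # 1.0
small_after = deprivation_ratio(3, 25)       # 1.5
large_before = deprivation_ratio(200, 2500)  # 1.0
large_after = deprivation_ratio(201, 2500)   # 1.005
```

A ranking built on raw ratios therefore lets the smallest areas swing to the extremes of the scale on the strength of one or two cases, which is why the review resorted to averaging.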





Professor Johnston articulated these assumptions some twenty years ago.  He stated that spatial variations in the human condition can be measured on a generally acceptable metric, irrespective of how that metric is interpreted by the value system of the perceiver.  He went on to postulate that each proxy measure, however crude, bears at the very least a monotonic relationship to a selected dimension of the underlying theme; i.e. there is at the very least a correspondence of rank between the metric and a symptom, such as male unemployment.





I have attempted to show that as soon as we use more than a single variable, there are multiple ways in which we could rank areas on a good-to-bad scale!  In my papers, I provide some simple examples to show how we can dramatically change the ranks of areas by changing the values for expectation.  Statistical averages can be very misleading and inappropriate in social engineering.  Most analyses use a set of variables, which are treated as if they were dichotomous.  If we only wanted to use such two-category variables, we could use the z-score for ratios*, which gives identical results to the signed chi-square measure.  However, even more interesting problems are encountered when we venture with signed chi-squares beyond two-category formulations into multi-category data, which I will consider next.  Incidentally, please do not confuse the z-score for ratios with the z-scores used by the DoE for computing the Jarman scores; those were used only for standardising data to zero mean and unit variance prior to summation, and they do not change the rank of areas on individual indicators.
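The equivalence claimed above for two-category variables can be checked numerically.  The sketch below (the 8% national rate, the function names and the area counts are hypothetical illustrations) computes both measures for a dichotomous variable, say unemployed versus not unemployed: the signed chi-square turns out to be exactly the signed square of the z-score for the ratio, so the two measures rank areas identically.

```python
import math

NATIONAL_RATE = 0.08  # hypothetical national male unemployment rate

def z_score_for_ratio(observed, n, p=NATIONAL_RATE):
    """z-score of the observed count against a binomial expectation n*p."""
    expected = n * p
    return (observed - expected) / math.sqrt(n * p * (1 - p))

def signed_chi_square(observed, n, p=NATIONAL_RATE):
    """Signed chi-square for a dichotomous variable.

    Sums (O - E)^2 / E over both categories (unemployed and not
    unemployed) and attaches the sign of the deviation from expectation.
    """
    in_cat, out_cat = observed, n - observed
    e_in, e_out = n * p, n * (1 - p)
    chi2 = (in_cat - e_in) ** 2 / e_in + (out_cat - e_out) ** 2 / e_out
    return math.copysign(chi2, in_cat - e_in)

# For any (observed, n) pair, signed chi-square == sign(z) * z**2,
# so ranking areas by either measure gives the same order.
for observed, n in [(12, 100), (40, 600), (3, 90)]:
    z = z_score_for_ratio(observed, n)
    s = signed_chi_square(observed, n)
    assert abs(s - math.copysign(z * z, z)) < 1e-9
```

The identity follows algebraically: for two categories the chi-square collapses to (O − np)² / (np(1 − p)), which is the square of the z-score, so squaring (a monotonic transformation on each side of zero) cannot reorder the areas.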