UPV/EHU researchers have examined the quality and methodological good practice of 21 rankings, indexes and similar tools used to classify and monitor urban sustainability. They conclude that these tools neglect complex causalities in their design and lack methodological transparency with regard to data gathering, weighting and aggregation processes; they also tend to be biased, ignoring poorly ranked cities and reinforcing existing stereotypes.
Lack of transparency in urban sustainability rankings
An article by the UPV/EHU-University of the Basque Country identifies methodological weaknesses in rankings, benchmarks and indexes of urban sustainability
First publication date: 07/02/2020
“The last two decades have seen significant growth in the spread of tools to classify and measure urban performance (rankings, indexes, etc.) across the public and private institutions that use them, in response to different kinds of pressure towards uniformity. Naturally, all these tools are useful for guiding and assessing the policies implemented by local authorities in various fields of action, and they are particularly prolific in the area of sustainability. Yet little is known about the actual methodological base underpinning them, which is supposed to legitimize their use,” explained Lucía Sáez-Vegas, PhD holder in the UPV/EHU’s Department of Financial Economics II.
"With the aim of analysing and assessing quality and good practices in urban measuring and monitoring, and while devoting special attention to the methodological aspects, we took hundreds of measuring tools and selected a set of 21 similar rankings, indexes and tools designed to rank and monitor urban sustainability (understood in a very broad sense) so that we could study them in depth and thus adapt and apply an analysis methodology tested in another field, that of university rankings,” added Dr Sáez.
The significance of methodological aspects
In each of the rankings, indexes and similar tools analysed, the researchers explored four main aspects: the aim and target group they are geared towards; the methodology and weighting used in their design; transparency regarding data gathering and information processing; and, finally, the presentation of the results. As Dr Sáez explained, “of these four aspects, the information on the first and the last, in other words the descriptive information, is the most accessible. It is specified by all the classifications analysed; yet the same does not hold when it comes to accessing the information on the methodological aspects, data gathering and information processing. This gives rise to what is known as the black box, an artefact whose results are studied and disseminated without its inner workings being questioned”.
This is how the researchers confirmed various methodological weaknesses in all the rankings analysed. The researcher insists that “tools of this type tend to neglect complex causalities and lack transparency with respect to the data gathering, weighting and aggregation processes in their design; they tend to be biased and, as a result, to ignore poorly ranked cities and to reinforce existing stereotypes”.
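The study itself publishes no code, but a minimal, purely hypothetical sketch in Python can illustrate why undisclosed weighting and aggregation choices matter: with the same normalised indicator data, two plausible weighting schemes produce different city orderings. The cities, indicator values and weights below are invented for the example and are not taken from the study.

```python
# Purely illustrative: hypothetical normalised indicator values (0-1) for three
# invented cities, aggregated under two different weighting schemes.

indicators = {
    #           air quality, green space, public transport
    "City A": (0.90, 0.40, 0.70),
    "City B": (0.50, 0.90, 0.60),
    "City C": (0.70, 0.60, 0.75),
}

def composite_score(values, weights):
    """Weighted arithmetic aggregation of normalised indicators."""
    return sum(v * w for v, w in zip(values, weights))

def rank(weights):
    scores = {city: composite_score(vals, weights) for city, vals in indicators.items()}
    return sorted(scores, key=scores.get, reverse=True)

# The same data yields different orderings under different weightings.
print(rank((0.50, 0.25, 0.25)))  # emphasises air quality  -> City A comes first
print(rank((0.20, 0.50, 0.30)))  # emphasises green space  -> City B comes first
```

If neither the weights nor the aggregation rule is disclosed, a reader of the final ranking has no way of judging how sensitive the positions are to these choices, which is precisely the “black box” the authors describe.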
“The possibility of ranking and comparing cities of different sizes may help to spot those that appear to perform better in various urban aspects. That is why these tools are sometimes used by urban managers and public decision-makers to develop an action plan, even though one needs a clear idea of how the ranking or index has been drawn up and should exercise caution when using it, above all if insufficient information is provided about the methodological aspects and the robustness of its results. We believe these tools should be used more as a source of information, and even inspiration, and less as a road map for action,” she added.

“These rankings attract the interest of the general public because they measure concepts of a complex nature and present them by means of a ranking, generally numerical, which is very easy to understand. From our academic viewpoint, the fact that the results are presented in the form of a final ranking with a mention of the principal findings, but with little or no regard for the methodological aspects that ultimately underpin the score or position, is a clear weakness of these tools when used to measure and monitor urban performance,” said Sáez.
Additional information
The study was conducted by the following researchers in the UPV/EHU’s Faculty of Economics and Business: Lucía Sáez-Vegas, Iñaki Heras-Saizarbitoria and Estibaliz Rodríguez-Núñez (of the Department of Financial Economics II and the Department of Business Organisation), who belong to the Basque Government’s GIC 15/176 consolidated research group.
Bibliographic reference
- Sustainable city rankings, benchmarking and indexes: Looking into the black box
- Sustainable Cities and Society
- DOI: 10.1016/j.scs.2019.101938