Abstract
The increasing focus on international university rankings reflects the fact that global competitiveness is ever more driven by knowledge. Ranking systems condense a vast amount of information and data collected to measure the knowledge-producing and talent-attracting capacity of universities. Easy-to-recall league tables facilitate communication to stakeholders and customers. However, ranking systems emphasise vertical differences between institutions while masking their horizontal differences. There are enormous methodological differences in ranking criteria, weightings, proxies for quality, choice of indicators, data sources, and use of surveys. Among the more prominent ranking agencies, the Times Higher Education focuses more on international reputation, combining subjective inputs with quantitative data, whereas the ARWU relies exclusively on objective indicators. The validity of some of these measurements is questionable, and there appears to be a bias towards larger institutions, which have greater resources and stronger reputations. Nevertheless, the rankings have highlighted reputational differentiation and intensified competition for students, faculty, researchers and funding. More importantly, rankings shape institutional strategic policy and direction as well as university missions. Increasingly, the visibility and influence of a global university are measured less by the size of its physical campus or the importance of its home city than by its presence and prominence on the Web. The Webometrics Ranking of Universities offers an alternative ranking system that rates universities based on their Web presence and accessibility.
Keywords: Activity, City ranking, Data sources, Indicators, Inlinks, Maximum rank difference, Proxies for quality, Ranking analytics, Ranking criteria, Surveys, Visibility, Webometrics, Weightings.