The Australian government is currently considering entirely scrapping the use of academic research outputs as a determinant of university block grant funding, in favour of a measure based on non-academic ‘impact’ with industry collaborators. In general, I am a big fan of the ‘impact agenda’ – the idea that publicly funded universities should explicitly and accountably articulate their research towards non-academic benefits, whether economic, social, or cultural. The Australian interpretation of this, however, seems to be veering towards a very narrow definition of impact, framed primarily in economic terms and oriented towards the private sector. My concern in this piece is the effect that radically devaluing academic publications would have on Australia’s higher education sector. Put simply, I think there is a very clear case that this would have a significant deleterious effect on Australian universities’ international standing, with consequent knock-on effects on the country’s ability to recruit high-quality academics and international students alike.
The UK experience of academic impact assessment
I will blog more extensively later about what I think an impact agenda for Australia could look like. Suffice it here to say that the UK experience with the impact agenda is, in my view, pretty good, if not perfect. The UK has incorporated assessment of research impact as part of the government block funding of universities through the Research Excellence Framework (REF), which periodically evaluates and ranks the research of different subjects in UK universities. The 2014 REF attributed 20 per cent of overall results to impact, compared with 65 per cent to academic publications and 15 per cent to overall research ‘environment’. Impact was broadly defined as ‘any effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’. This was the first time that impact was evaluated, and the stated intention was to raise the impact proportion to 25 per cent in future exercises.
For the 2014 REF, impact was assessed through ‘impact case studies’, in which submitting institutions collated together a track record of research and engagement by multiple scholars in a particular area, and documented how it had contributed to such positive change. These case studies provide rich accounts of the positive impacts that academic research has on the wider population, and are well worth browsing. Impact case studies ranged from contributing to improving international agency policy in conflict-affected countries to contributing to broadening the reception and understanding of Shakespeare in Hong Kong.
But this was undoubtedly a time-consuming and costly exercise; one estimate suggests that UK universities spent £55 million preparing their case studies. HEFCE, the body that implements the REF, commissioned an independent report on the possibility of using metrics for impact assessment, but the report concluded that it is ‘not currently feasible to use quantitative indicators in place of narrative case studies’.
In contrast, the indications are that the Australian government is looking to develop an impact agenda that is narrowly tied to economic interests and the private sector. The Go8 joint submission to the current Review of Research Policy and Funding Arrangements for Higher Education cautioned precisely that ‘care needs to be taken to ensure that “impact” and “engagement” are not defined solely in relation to commercial or industrial benefits, which would devalue the broad scope and variety of ways that research can contribute to Australia’s cultural, social and economic wellbeing’. This submission also reflected favourably on the UK’s experience with impact case studies.
Impact and engagement are a fundamental part of the activities of a modern public-sector university. But the current proposals would undermine the international standing of Australian universities, with deleterious consequences for the sector. This is largely because of the academic and student incentive structures in the wider international higher education sector, which is becoming ever more internationally competitive. Malcolm Turnbull states that he wants to end the ‘publish or perish’ incentive structure for academics, but when that incentive structure (however flawed) is the international norm, radical changes to the academic incentive structure in Australia will place us in a poor position to compete for high-flying researchers and high fee-paying international students.
Publishing and perishing in the global higher education market
For the international academic community, peer-reviewed academic publications, especially in highly-cited disciplinary journals, are the gold standard of our research activity. This is the core of how we judge ourselves. Certainly there is variation between disciplines – some place a greater value on book-length monographs with prestigious university presses; others place a greater value on presentation of results at elite international conferences. But all these modes of publication are currently captured in the Australian funding system, and this is what the government is proposing to devalue in favour of industry engagement.
For a country with a small population, Australia currently punches well above its weight in terms of academic research output. A very good predictor of a country’s overall research output is simply the overall size of its economy (see Figure). On this measure, Australia currently accounts for less than 2 per cent of total world GDP, but around 2.6 per cent of academic publications and 3.3 per cent of academic citations. Likewise, 3.5 per cent of the 3,000-strong Thomson Reuters list of highly-cited researchers (‘HiCis’) have Australian university affiliations.
Highly-cited researchers can clearly command higher salaries internationally. Whether we like it or not, from a purely economic perspective it is a sensible investment for universities to pay over the norm for a highly-cited researcher, because their output and citations will feed into the international rankings that attract international students (see below). Indeed, at the extreme, the Jiao Tong ranking includes a 20 per cent weight directly attributed to the number of HiCis affiliated with the university. Significantly reducing the domestic returns to publishing will reduce the incentives for Australian universities to compete in the international labour market for academics.
The proposed changes also risk driving highly-cited Australia-born researchers away from their home country. Australian academics are already itchy-footed by international standards. An international survey of academics in 2007 found that over 30 per cent of respondents at Australian universities had taken ‘concrete action’ to find an academic position in another country. Only academics in Italy showed a higher desire for an international move (39 per cent); the average across all countries surveyed was 20 per cent. Among Australian institutions, the figure was even higher for academics within the Go8 research-intensive universities (36.5 per cent). Turnbull may be seeking to end the ‘publish or perish’ incentive structure, but he risks promoting a ‘publish and prosper overseas’ culture in its place.
Put simply, an ambitious young Australian academic with a couple of publications out of her PhD in high-ranking international journals could easily compete for jobs at any number of prestigious universities internationally. Why would she not look overseas if the Australian system does not place such high worth on those publications?
If Australia radically downgrades the importance of academic publication in its domestic incentive structure in favour of industry linkages, there would hence be a potential triple-whammy effect on Australian research output: academics currently employed at Australian universities would have less incentive to publish; Australia-born academics would have more incentive to migrate overseas; and Australian universities would have less incentive to attract high-calibre foreign researchers to their institutions.
The consequences for international academic rankings
One immediate and direct consequence of this triple-whammy effect would likely be a decline in the standing of Australian universities in international university rankings. These rankings typically score universities directly on journal output. The main international university rankings are the Shanghai Jiao Tong (ARWU), QS, and Times Higher rankings. All of these give considerable weight to the volume and citation rate of academic publications. Any decline in the incentives to publish would directly affect Australia’s standing in these rankings.
This is all the more the case because none of the international rankings have any direct weighting for ‘impact’. Certainly, industry-relevant research might feed into categories such as ‘employer reputation’, which is measured in the QS ranking through surveys of large employers. But this category is typically weighted very low compared to direct academic output, if it is counted at all: the Jiao Tong ARWU ranking has no category for employer reputation; the QS methodology attributes 10 per cent to employer reputation; and the Times Higher methodology attributes 2.5 per cent to university income from industry (compared with 6 per cent for traditional research grant income).
This in turn is likely to have a direct impact on the ability of Australian universities to attract international students, who are a significant contributor to their revenue and a major export commodity for Australia. International students at Australian universities contributed export revenue of $18 billion to the Australian economy in 2014/15, supporting 130,000 jobs. Although there is as yet (to my knowledge) no systematic evidence on this, it is intuitively uncontroversial that in an increasingly competitive international market for higher education, with increasingly mobile students, international rankings are likely to play an increasingly important role in determining international student recruitment. Indeed, a number of countries are already using such international rankings to determine international scholarships.
This is all the more problematic for Australia because of the existing paucity of government funding for research and the failure to reform domestic fee structures, whether through increased public contributions or by shifting more of the cost to the students themselves. Australia’s public funding of its universities is the second worst in the OECD as a proportion of GDP. This means that Australian research-intensive universities are increasingly looking to international student recruitment in order to subsidise research.
I have painted something of a Doomsday scenario for Australian universities, in which government changes that radically devalue the role of publications in the domestic higher education system drive an academic brain drain, contributing to a decline in the standing of Australian universities in international rankings, in turn reducing their ability to attract the international students who are an increasingly vital source of revenue. The reality may not be quite so bad, but I do not think the picture I have drawn is unrealistic.
None of this is to defend normatively the structure of the international higher education market. Nor is it an argument against the impact agenda in its entirety, of which I am a strong supporter. But we must recognise that, because of geography and current funding levels, Australian universities occupy a particularly precarious position within this international market. Radical changes to the domestic incentive structure that put us out of kilter with the international market entail a serious risk of instigating a rapid downward spiral, with serious adverse consequences for the higher education sector and the wider society – ultimately undermining universities’ capacity for precisely the kind of impact that the government seeks to promote.