Dr Barbara Ribeiro of the Manchester Institute of Innovation Research reflects on what the Industrial Strategy White Paper means for artificial intelligence and considers whether prioritising AI funding is really in the public interest.
In its autumn budget of 2017 and the recently published industrial strategy white paper, the UK Government put forward an ambitious target for the development and implementation of artificial intelligence (AI). Simply put, AI refers to human-made digital systems which, once trained, can become independent learners. These systems support smartphones, driverless cars and scientific research in general. In 13 years’ time, the Government expects AI to have a positive impact on household budgets, delivering ‘extra spending power’ of up to £2,300 per year, and to contribute 10% to the country’s GDP (an additional £232bn).
We must recognise that these are impressive numbers. To put them into perspective, recent data on household spending shows an annual average just below £33,000, suggesting that, today, AI could increase your budget by 7%. As for GDP, we have seen average annual growth of 1.49% over the last thirteen years and accumulated growth of 19.4% over the same period. The 10% figure indicated by the government would amount to more than half of this number, should similar growth rates hold over the next decade.
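The arithmetic behind these comparisons can be checked with a quick sketch. The figures below are the approximations quoted above; the underlying PwC methodology is not public, and the accumulated-growth figure is treated here as a simple (non-compounded) sum, which is what reproduces the 19.4%:

```python
# Back-of-the-envelope check of the headline figures (illustrative only).

extra_spending = 2_300        # projected extra household spending power per year (£)
avg_household_spend = 33_000  # approximate current annual household spending (£)
uplift = extra_spending / avg_household_spend
print(f"Household uplift: {uplift:.1%}")  # ≈ 7.0%

annual_growth = 1.49               # average annual GDP growth over the last 13 years (%)
accumulated = annual_growth * 13   # simple, non-compounded accumulation (%)
ai_share = 10                      # government's projected AI contribution to GDP (%)
print(f"Accumulated growth: {accumulated:.1f}%")          # ≈ 19.4%
print(f"AI share vs. accumulated: {ai_share / accumulated:.0%}")  # just over half
```

Compounding the 1.49% instead would give roughly 21%, which makes the government’s 10% projection look only slightly less dramatic.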
Because expectations are high and promises are bold, it is worth taking a closer look at the evidence underpinning the government’s strategy for AI. The government’s vision and plans for this sector are informed by two key reports: one commissioned from PwC in June 2017, from which the numbers mentioned above were extracted, and another published in October 2017 on the potential social and economic benefits of AI to the UK, led by academic and industrial actors in the field. These reports also underpin the recently published industrial strategy white paper, where AI represents one of the four grand challenges for industry in the years to come.
Do the numbers add up?
Starting with the first report, it is unclear whether the optimistic numbers put forward by the government take into account rising inflation and wage stagnation. What today seems like a considerable increase in household spending power might translate into a more modest one by 2030. Any expectations about economic growth in general, and its impact on GDP, should be presented in terms of different possible scenarios and contexts, e.g. a post-Brexit UK. This has not been the case for the economic modelling reported. In the absence of a detailed and transparent methodology for the study, the assumptions behind the model also seem problematic.
For example, when evaluating the potential of AI on the production side (increased ‘labour productivity through automation’), a number of assumptions about the future pervasiveness of automation, and about the potential for increasing human capacity and productivity – what the authors call ‘augmented automation’ – are not discussed. On the demand side, an increase in household budgets in terms of extra spending power is measured through an obscure variable called ‘product enhancements’. When we look closely at the meaning of ‘enhancement’ we find it is conceptualised in terms of three dimensions: quality, personalisation and increased time available for leisure or work. According to PwC, this is based on a qualitative assessment drawing on the opinions of industry representatives and AI experts who estimated the scale of product enhancements by 2030, but there is no comment on the vested interests of these experts when asked about their expectations of advancements in the sector.
The explanation of how product enhancements translate into ‘extra spending power’ at the household level relates to the potential availability of more affordable and bespoke goods, brought about by stimulated consumption and greater consumer choice (motivated by these enhancements within a market-competition logic). Social welfare follows because people are assumed to save time through using automated systems. While the outputs of the model – the ‘net’ effect of AI’s impact on the economy – are all positive, there is a mention of ‘other secondary effects’ estimated by the model, but what these effects consist of is not clear in the document. Job creation, which PwC did not model, is extensively discussed despite there being no evidence to back up bold statements about the potential of AI to generate jobs (both for humans and robots).
Differing projections and the rhetoric of AI impact
This short report was followed by an ‘independent review’, commissioned this time from Professor of Computer Science Wendy Hall and Jérôme Pesenti, the CEO of an AI firm, who led the research. Here, the authors put AI’s estimated contribution to the growth of the UK economy at £630bn by 2035 (only five years beyond the PwC horizon, but a value almost three times greater). The large discrepancy between the two assessments is noted later in the report, but the problem is not addressed.
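To make the scale of that discrepancy concrete, here is a quick comparison of the two headline figures, taking the numbers as quoted and without adjusting for their different horizons or (undisclosed) methodologies:

```python
# The two official headline estimates, as quoted in the respective reports.
pwc_2030 = 232       # PwC: additional GDP from AI by 2030 (£bn)
review_2035 = 630    # Hall/Pesenti independent review: contribution by 2035 (£bn)

ratio = review_2035 / pwc_2030
print(f"The review's figure is {ratio:.1f}x the PwC estimate, "
      f"for a horizon only 5 years longer")  # ratio ≈ 2.7
```

A gap of this size between two reports informing the same policy is exactly the kind of issue one would expect the later report to reconcile, rather than merely note.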
The recommendations put forward by the independent report, now incorporated into the industrial strategy white paper, suggest greater public and private investment in AI infrastructure (including improved access to data), training and capability. When describing the benefits of AI, the report emphasises a boost in productivity and in our ability to perform more tasks in less time, from drafting emails more quickly to exploring larger sets of data. AI represents an industrial opportunity, as it helps create new products and services, especially in healthcare, automotive and financial services.
There seems to be no room for doubt about the trade-offs of expanding AI capability and penetration in the UK. This must be a win-win situation. Potentially controversial issues are framed as ‘barriers to overcome’, limited to public and regulatory acceptance and technological improvements. Indeed, ethical and societal issues are intentionally left out of the report, based on the authors’ understanding that such issues fall outside the scope of an ‘industry-focused review’ and cannot be ‘resolved’ in the short time frame of the study.
This seems to be the remit of the new Centre for Data Ethics and Innovation promoted by the government, which has already mobilised responses from some key organisations, but whose role and place within the political and regulatory landscape for AI have yet to be defined. Interestingly, the name of the centre conflates ‘data ethics’ and ‘innovation’ (not necessarily ‘responsible innovation’, ‘ethical innovation’ or ‘innovation ethics’), so we have yet to learn more about its mission and agenda.
Another mechanism to ensure public benefit from AI, the new Industrial Strategy Challenge Fund (ISCF) promises to deliver value by focusing on grand challenges. For AI, this could mean using data held by public organisations, as mentioned in the report. However, the agenda for the ISCF is also unclear at this point.
Are we rushing into funding AI?
Recommendations on how to mobilise resources to advance AI in the UK come before any anticipatory work on its implications. They also come before deliberation on the societal needs AI should and could aim to address. The former is left in the hands of future public bodies and funding mechanisms, while the latter seems to be taken for granted by the government (or voiced by experts at best). A fundamental question of public interest is not only how much impact AI could have on the economy in terms of big numbers – an exercise that is in essence already highly speculative – but what this impact means: how it could translate into public benefits, and how any potential benefits might be distributed across different social groups. If we were to take a responsible approach to the development of AI in the UK that focuses on the public interest, what questions should we be asking?
The public interest
Most of the points highlighted in the two reports and in the government’s industrial strategy white paper seem to be concerned with data access, as data is the engine that powers AI. But questions should not be limited to those around privacy. We will also want to ask what kind of research will be supported by AI applications such as machine learning and data mining. What will be their objectives and direct beneficiaries? Will public institutions be partnering with private organisations? If so, how will data be used, and for which purposes?
Another very important question relates to the distribution of any potential benefits. Which social groups will be able to afford the AI-enabled solutions mentioned in the reports, such as personalised financial planning? How will access to healthcare technologies that enable more timely and precise diagnoses be guaranteed to patients from lower-income backgrounds? How will driverless technologies benefit public transport in cities in a way that is affordable to most citizens? How will they improve urban air quality and mobility, including in poorer regions which typically get less attention from planners? Finally, regarding a benefit highlighted in both reports, how could AI help us have more free time for our families and leisure activities when digital technologies can have the exact opposite effect?
Further advancements in AI could benefit those social groups who are able to pay for ‘enhanced’, personalised products and services. However, as investments in AI will come at a cost to all taxpayers, the key question is how exactly these investments will translate into societal benefits, and how those benefits will be distributed. Investment in AI, as in any other emerging technology, is underpinned by an assumption of economic growth as a driver of social welfare. This assumption has been challenged before.
Fairly distributed societal benefits do not flow automatically from economic growth or technological and industrial development, and the government should not hide behind the promises of big data and AI. The questions above are just as relevant to AI as to any other sector prioritised by the UK’s industrial strategy.