
Insulating Robotics from an AI Winter

By Michael Fisher Filed Under: All posts, Science and Engineering, Science and Technology Posted: November 21, 2025

Artificial Intelligence (AI) has been promoted as a force that will transform the lives of working people, and the UK government is trying to position our country as one of the great AI superpowers. However, the “AI can solve everything” mantra has led to the development of an “AI Bubble”, buoyed by stock market and government sentiment. And we all know what happens to bubbles – they burst. Here, Professor Michael Fisher, from The University of Manchester, explores how we might insulate the work and deployment of autonomous robotics from any fallout of the AI bubble bursting, such as another “AI Winter”.

  • Over-promising the benefits of AI is likely to lead to a collapse in confidence in this technology.
  • A significant loss in confidence around AI could lead to another “AI Winter” that might also stall the development of autonomous robotics for societal benefit.
  • To insulate robotics, policymakers should emphasise its unique role by resisting the characterisation of robotics as a sub-part of AI and should implement stronger guidelines for responsible technological innovation.

Can we rely on AI?

Over the last 20 years, the focus of AI development has been on data-driven, or sub-symbolic, AI in the guise of Machine Learning, Deep Learning, Generative AI (GenAI), or Large Language Models (LLMs). These data-driven forms of AI can be very useful in certain application areas, such as recognising or generating patterns in large data sets. However, their key drawback is that any correctness arguments are inherently probabilistic: they are usually based on unknown data distributions and are therefore susceptible to errors (sometimes termed “hallucinations”). So, from a technical point of view, we cannot rely on data-driven AI in safety-critical areas such as autonomous decision-making in robots.
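
To make that drawback concrete, here is a toy sketch in Python, using entirely hypothetical data: a minimal “learned” classifier built by nearest-neighbour lookup answers queries far outside its training distribution just as firmly as familiar ones, with no built-in notion of uncertainty.

```python
# Toy illustration (hypothetical data): a 1-nearest-neighbour "model"
# labels sensor readings by the closest training example. It has no
# notion of uncertainty, so out-of-distribution inputs are answered
# just as confidently as in-distribution ones.

train = [(0.1, "safe"), (0.2, "safe"), (0.9, "obstacle"), (1.0, "obstacle")]

def predict(reading: float) -> str:
    """Return the label of the nearest training point."""
    return min(train, key=lambda pair: abs(pair[0] - reading))[1]

print(predict(0.15))   # "safe" -- close to the training data
print(predict(-50.0))  # "safe" -- far outside it, stated with equal confidence
```

Any guarantee about such a component can only be statistical, relative to the (usually unknown) distribution the training data came from.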

Consequently, the practical uses of AI appear limited to obvious pattern recognition and generation tasks. While the science is clear, the AI hype continues, sustained by stock market and government sentiment. However, the “AI can solve everything” mantra does not seem to have led, so far, to a revolution across industry: the number of AI startups appears to be down, while the use of GenAI only marginally improves productivity on general tasks and significantly decreases expert productivity. By now, even the financial sector has recognised the “AI Bubble” and is warning of its potential negative effects.

The impact of AI hype on robotics

There is a danger that further development in robotics will be stalled by issues with AI. Any massive loss of confidence in AI will also affect robotics, and could seriously impact government support, industry investment, research funding, and public sentiment – another “AI Winter”. Even if the AI Bubble continues growing, artificially stimulated by stock market sentiment and a stream of government and industry announcements, talk of “Physical AI” or “Embodied AI” replacing robotics will also undermine the area.

Our work at The University of Manchester suggests that there are a number of steps that can be taken now to prevent robotics from being tainted by issues with the AI Bubble.

Firstly, it should be explained that while robots, particularly Autonomous Robots, use some AI components, there is much more to robotics than AI. Even within the software, there are likely to be modules for adaptive control, fault diagnosis, sensor fusion, localisation, reporting, or decision-making, none of which need to be AI. If data-driven AI suddenly did not exist, robots could still be constructed and operated in a number of fields.
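
As a minimal sketch of this point (in Python, with hypothetical sensor names, weights, and thresholds), the core loop of a simple mobile robot can be built entirely from deterministic, non-AI modules: fixed-weight sensor fusion feeding a proportional speed controller with a hard safety stop.

```python
# Minimal sketch of non-AI robot software: deterministic sensor fusion
# and control. Sensor names, weights, and thresholds are hypothetical.

def fuse_range(sonar_m: float, lidar_m: float, lidar_weight: float = 0.7) -> float:
    """Sensor fusion as a fixed weighted average of two range sensors."""
    return lidar_weight * lidar_m + (1.0 - lidar_weight) * sonar_m

def speed_command(distance_m: float, stop_at_m: float = 0.5, max_speed: float = 1.0) -> float:
    """Proportional control: slow as the obstacle nears; stop at the limit."""
    if distance_m <= stop_at_m:
        return 0.0  # explicit safety rule -- trivially checkable, no learning involved
    return min(max_speed, 0.5 * (distance_m - stop_at_m))

# One iteration of the control loop with example readings.
fused = fuse_range(sonar_m=1.2, lidar_m=1.0)
print(f"fused: {fused:.2f} m -> speed: {speed_command(fused):.2f} m/s")
```

Every behaviour here is a fixed function of its inputs, so it can be tested exhaustively or verified formally, with no training data required.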

Secondly, we should acknowledge that robots are tangible, compared with forms of AI whose code is hard to visualise, and can help us in our everyday lives. Part of the communication around robotics should therefore emphasise its practical benefits. An obvious domestic example is the robot vacuum cleaner, but robots do, and will increasingly, impact our lives. This impact extends beyond robots on production lines in factories to robot surgery, robots for infrastructure (maintaining roads and sewers), and the growing use of robots in hazardous environments. All can potentially provide economic, environmental, and social benefits.

Thirdly, we should emphasise that there are several varieties of AI. Besides the currently popular data-driven approach, the other main category is Symbolic AI. This captures “intelligence” in some explicit, symbolic form, making it easier to analyse and verify. Consequently, an important approach is to ensure that all critical decisions are made not by data-driven AI but by symbolic components, allowing us to rigorously verify their decision-making. This “Symbolic AI” category includes rule-based systems, expert systems, logical deduction, and new directions in “Neuro-Symbolic AI” that can bring benefit to robotics in the future without the drawbacks of purely data-driven AI.
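
A minimal sketch of what such a symbolic component might look like (in Python; the rules, state fields, and action names are hypothetical): an explicit rule set that vetoes actions proposed by any planner, data-driven or otherwise, before they are executed.

```python
# Minimal sketch of a symbolic decision gate: explicit rules that veto
# actions proposed by any (possibly data-driven) planner. Rules, state
# fields, and action names are hypothetical.

RULES = [
    # Each rule is a named, inspectable condition over the robot's state.
    ("no fast motion near humans",
     lambda s: s["human_nearby"] and s["speed"] > 0.3),
    ("return to dock when battery low",
     lambda s: s["battery"] < 0.1 and s["action"] != "return_to_dock"),
]

def approve(state: dict) -> bool:
    """Allow the proposed action only if no safety rule fires."""
    return not any(cond(state) for _, cond in RULES)

state = {"human_nearby": True, "speed": 0.5, "battery": 0.8, "action": "move"}
print(approve(state))  # False: the first rule fires and the action is vetoed
```

Because the rule set is finite and explicit, each rule can be reviewed, tested, and formally verified, which is precisely what a purely data-driven decision-maker does not offer.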

There will, of course, be some effects from any “AI Crash” that robotics will not be able to avoid. There has been a stream of pronouncements telling us how fantastic AI is, how our lives will be better for it, and how massive investment in AI can help us out of economic and social problems. With any “AI Crash”, the public would certainly question these pronouncements and would very likely lose further confidence in politicians and scientists, especially when they come to the next “big thing” in technology. Roboticists probably cannot avoid this damaged perception of scientists.

So, finally, it is important that we follow a responsible approach to future technological development in areas such as robotics. Our work on Responsible Robotics aims to build responsibility into all aspects of robotic systems design, development, deployment and use. This should help future robotics to integrate into human life, and ensure that robotic systems co-evolve with human societies, promoting human agency and humane values rather than ignoring them or, worse, undermining them. Importantly, this also means using tempered and reasoned language when promoting the benefits of a technology.

Policy Recommendations

  • The Government should explicitly avoid the merging, and especially the subsumption, of Robotics and Autonomous Systems within (data-driven) AI. Clear policy statements that the areas are linked but clearly distinct are needed to reinforce the steps above.
  • If an “AI Crash” occurs, there will likely be significant excess resource, not only computational but also, unfortunately, human: large AI infrastructures and very many trained AI professionals may no longer be needed. The government should have a plan – even if it believes an AI crash to be unlikely – for retraining AI professionals and, where possible, re-purposing AI hardware, as this is not a straightforward task.
  • AI is not unique. There have been many technology “bubbles” in the past. So, how we manage and deploy emerging technologies is very important. Ensuring a responsible approach to technology adoption is central. In 2024, the government introduced a new tool to help teams across the public sector to innovate responsibly with data and AI, so that risks can be rapidly identified and mitigated. The government should encourage private sector organisations to follow a similar approach, and this should be extended to broader forms of responsibility and a wider range of technologies.


About Michael Fisher

Michael is Professor of Computer Science at The University of Manchester, where he holds a Royal Academy of Engineering Chair in Emerging Technologies in the Department of Computer Science. He is also a Fellow of both BCS and IET, and sits on the EPSRC's Strategic Advisory Network.

He also chairs the BSI Committee on Sustainable Robotics, co-chairs the IEEE Technical Committee on the Verification of Autonomous Systems, and is a member of both the BSI AMT/10 committee on Robotics and the IEEE P7009 Standards committee on Fail-Safe Design of Autonomous Systems.


