Emerging technologies – from Artificial Intelligence (AI) and big data to digital platforms and driverless vehicles – are transforming everyday life in the UK. These innovations promise significant benefits: improving efficiency, reducing resource use, and enabling new jobs and services. Yet alongside their promise, these technologies bring new challenges. Concerns are rising about data privacy, cybersecurity threats, job displacement through automation, and the impact of “always-online” digital culture on mental health and wellbeing. In this article, Pawan Srikanth and Dr Peng Khoon Gerald Chan explore the often-overlooked nexus between technology use and mental health, and outline principles for making tech more responsible.
- As technology accelerates, UK regulation remains decentralised, spread across numerous bodies, departments and regulators.
- The impacts of immersive technology and social media on mental health raise concerns – but those impacts are still being evaluated.
- Policymakers can embed “wellbeing by design” in legislation while strengthening coordination among regulators.
Emerging technologies and why they matter
The UK is at a crossroads: digital infrastructure and adoption are accelerating, but issues like privacy breaches, geopolitical conflicts over tech (such as debates around apps like TikTok), and a growing mental health crisis are reaching new heights.
“Emerging technologies” generally refers to a cluster of fast-evolving innovations that have gained prominence in the 21st century. These include developments in AI and machine learning, the Internet of Things (IoT) and smart devices, digital platforms and social media, advanced materials and biotechnology, renewable energy tech, and more. Crucially, these are not isolated tools – they operate in technology clusters, supported by networks of data, infrastructure, and markets. Not all emerging technologies affect mental wellbeing equally. Social media, immersive environments, gaming and generative AI have direct psychological impacts, while others, such as autonomous vehicles or renewable tech, influence wellbeing more indirectly.
Individually or combined, they are reshaping how we consume, communicate, travel, work and care for the environment. They often operate within strong commercial, profit-driven frameworks that prioritise user engagement and growth. The pace of change is unprecedented – everyday life today involves far more human-technology interaction points than even a few decades ago.
The “great rewiring” of society brings trade-offs: while technology makes life more convenient in many ways, it can erode privacy, concentrate corporate power, and alter how people interact and form communities. As the UK rapidly expands its digital infrastructure (aiming for nationwide 5G coverage by 2030) and encourages tech innovation, it must pay attention to the side-effects. It needs an approach that looks beyond the hype of each technology or gadget and considers the broader system of technology-human interactions – a systems-thinking approach that examines not just technical performance but also societal and psychological impacts.
The UK regulates emerging technologies through a patchwork of sectoral regulators rather than a single “tech regulator”. This decentralised, pro-innovation model, consisting of numerous government bodies, departments and regulators, allows domain-specific expertise but risks gaps or overlaps.
Technology and the mental health nexus
One area of growing concern – and a prime example of why “responsible or humane technology” is crucial – is the impact of modern digital technologies on mental health and wellbeing. Britons are more connected than ever: as of 2024, two-thirds of UK adults say being online benefits them personally, but only a minority believe it is good for society or their own mental health.
The “always-on” culture of notifications, algorithmic feeds and social pressure can contribute to stress, anxiety and loneliness. While digital overload is not entirely new, today’s AI-driven feeds and constant connectivity mark a sharper, more personalised form of “always on” engagement than earlier tech eras. Social media is a key focus: a UK parliamentary report found heavy users aged 14-24 were more than twice as likely to show mental illness symptoms as occasional users, with risks linked to body-image pressure, bullying and low self-esteem.
Beyond social media, gaming, generative AI and immersive technologies raise questions about addiction, emotional dependency and cognitive effects. Recent real-world harms, such as AI deepfake scandals, further illustrate how misuse can affect mental wellbeing and privacy. Overall, the impact of these technologies on UK society is far more systemic and still not fully understood. On the other hand, digital platforms can support wellbeing by reducing isolation and offering mental-health apps, as shown during COVID-19. Government bodies and organisations such as The University of Manchester and NHS Mindtech aim to clarify these interconnections and guide policy so the general public gains the benefits of evolving technologies while harms are minimised.
Globally, countries are experimenting with ways to make technology more “brain-healthy”. The EU’s Digital Services Act and AI Act impose duties on platforms to mitigate mental-health risks, while France, Spain and Italy restrict phone use in schools. China limits gaming for minors. These approaches contrast with the UK’s guidance-based model but show a shared shift towards holding tech firms accountable for user wellbeing.
Steps towards responsible innovation
Our analysis of emerging technologies advocates several key policy steps. This connects with The University of Manchester’s ongoing research on the UK’s Connected and Automated Mobility (CAM) transition, examining how macro-emotional currents shape governance and public trust in emerging technologies.
- Embed wellbeing by design: To ensure emerging technologies serve the public good without unintended harm, the UK needs a multi-pronged approach, especially on the tech-mental health nexus. Embed “wellbeing by design” into legislation and regulation, and encourage or require providers to build features that protect users’ mental health, such as banning addictive defaults (endless scrolling, autoplay) and introducing break nudges or opt-in usage timers. Regulators like Ofcom and the ICO, working with DSIT and public-health experts, could set design standards for attention-intensive services.
- Strengthen unified regulation: Strengthen coordination among regulators. The UK’s diffuse network needs clearer authority, pooled expertise and resources so cross-cutting risks do not slip between silos. In the short term, bolster bodies like the Digital Regulation Cooperation Forum; longer term, evaluate whether a unified Digital Authority is necessary.
- Prioritise research and education: Government could support the expansion of independent, longitudinal studies on technology’s effects and translate findings into guidance, while boosting digital literacy and mental-health tools through schools and public campaigns.
- Embed multidisciplinary ethics: Finally, embed ethics in frontier technologies. Create multidisciplinary advisory groups, ensure the AI Safety Institute includes mental-health considerations, and champion wellbeing metrics in international AI governance.
By strengthening coordination, demanding user-centric design, supporting beneficial innovation and collaborating globally, the UK can be both a tech leader and a leader in protecting mental and social health.