As COVID-19 sees us become increasingly reliant on the digitisation of healthcare data, how have the UK public previously reacted to the use of technology in healthcare? In this blog, originally published in our On Digital Trust publication, Dr Barbara Ribeiro examines previous approaches to integrating data into care, the impact on public trust, and what policymakers can learn from them.
- Digital technologies are being integrated at all levels of healthcare: preventative, predictive and personalised
- The Care.data incident led to the assumption that acceptance is the same as trust, and can be gained by informing the public about how their data will be used
- However, more nuanced understandings of trust are needed across different groups and contexts, as is greater attention to how digitisation will affect our relationships with healthcare practitioners
Digitisation has been transforming the way we live and work. Today, ubiquitous devices that generate, interpret and share digital data increasingly mediate our social relationships and our interactions with organisations. While innovation often promises a brighter future, the use of digital technologies raises challenging questions about its effects on everyday life, public benefit and, ultimately, public trust in these systems. The digital healthcare sector is no exception.
New technologies and trends
The term digital healthcare refers to forms of health or social care delivery that are mediated by digital technologies, such as telecommunications and sensing systems that assist patients, for example by prompting a control centre to make contact and call for help if there is no answer. It is supported by tools such as electronic medical records, wearables and data analytics software.
Digital technologies support three approaches to healthcare: preventative, predictive and personalised. Preventative healthcare consists of taking measures that avoid the development of health conditions through monitoring of things such as blood glucose, medication adherence and physical activity; predictive approaches make use of data analytics to assess the likelihood of developing conditions such as dementia; while personalised healthcare seeks to combine our genetic and clinical information to deliver tailored treatment on an individual basis.
The informatisation of medicine and the rise of data
Digitisation in healthcare is part of a broader process of informatisation of medicine; one in which digital data plays not just a fundamental but a leading role. This process is underpinned by trends in genomics and the physical and behavioural sciences, and by the assumption that our bodies – and their ‘illnesses’ – are best assessed via our DNA and various forms of metrics.
In the context of digitisation, our understandings of health and healthcare risk being reduced to health data alone. Policymakers reinforce this focus by prioritising issues such as confidentiality, anonymity, privacy and security, which then dominate public debate. As a consequence, the ethics of digital healthcare – that is, what we deem matters of interest and concern to society – are framed mainly in data-centric terms. For instance, a policy paper produced by the UK Government in 2018 on the future of digital healthcare puts forward an ambitious vision for the implementation of digital technologies. This vision is accompanied by principles of social inclusion and a focus on user needs, which are genuine matters of social interest. However, aligned to the idea that health means data, potential public concerns are reduced in the Government’s parlance to privacy and security issues, at the expense of much wider questions about how technology is affecting healthcare and what impact this is having on patients.
Are we confusing trust with agreement to share?
How is this happening? Despite the undeniable importance of privacy and security, public trust has also become primarily defined in these data-centric terms. The short-lived Care.data programme launched by the NHS a few years ago, which was the subject of a public backlash over the use of patients’ data by the NHS and its partner organisations, helped frame the debate on trust and the future of healthcare around privacy and security.
Care.data followed a trend of supply-driven, top-down approaches to implementing health innovation in the NHS. Here we risk confusing a complex concept such as trust with acceptability: we assume that people will either reject or embrace a programme imposed from the top down, like Care.data, based on how far they agree to share their data with public and private organisations.
A 2018 survey found, for example, that older people are generally more reluctant than younger groups to use an app or fitness tracker to self-collect lifestyle data and to have their data shared with private organisations for research purposes. This tells us nothing about why these people might be reluctant. But because most of us know very little about, or are unaware of, how organisations use our health data (something the same survey shows), we too quickly conclude that this lack of knowledge is the main reason behind a lack of public acceptance and, therefore, of public trust (a conclusion the same survey draws).
When thinking about trust in digital healthcare, we therefore tend to take two shortcuts that support and justify the focus on privacy and security, like that adopted by the UK Government in its vision for the future of healthcare. The first is the assumption that trust means acceptability, or people’s agreement to share their data; the second, which follows from the first, is the simplified explanation that people are wary of sharing their data only because they lack knowledge of how it will be used.
Trust, assumptions and what’s missing
So, what’s missing here? I’d argue that what we rarely do is ask how the very nature of healthcare is being transformed by delivery through digital technologies and how people make sense of and value these new forms of healthcare. We need more nuanced and rich understandings of trust across different publics, practices and spaces.
Take, for instance, the case of insulin pumps, a technology that supports self-care among people living with Type 1 diabetes. Research found that some users experienced a change in their relationship with healthcare practitioners and family caregivers after adopting the technology; others felt frustrated when their high expectations did not materialise; some perceived a lack of control over the technology; and some even struggled with a heightened awareness of their own body. Insulin pumps are not digital technologies, but these are the kinds of issues that shape people’s acceptance of new healthcare technologies and, ultimately, their trust in these technologies and, importantly, in those involved in their care.
Digital technologies not only influence the relationship between caregivers and patients; they can also transform the way practitioners work and, fundamentally, change the nature of healthcare. Research has shown that these technologies change where care takes place and how people come into contact with one another, producing new forms of monitoring, communicating, controlling, advising and attributing responsibility in care practices.
We should challenge the assumption that digital healthcare technologies are simply new means of delivering the same form of care and learn more about what new forms of care are being created.
Because trust is embedded in our social relationships and our relationships with technological systems, these are the areas health policymakers must pay more attention to, in addition to data privacy and security issues.