How does coronavirus impact data privacy? If we give up private information to help fight COVID-19, can we be sure it won't be misused? Find out from our data governance expert

Our lives have changed with the coronavirus crisis. But have they changed forever? In Does This Change Everything? European Investment Bank experts examine the implications of the COVID-19 crisis for sectors from education and digitalisation to urban mobility and medicine, and for your everyday life.

To find out what coronavirus means for the privacy of your data, we spoke to Dennis Kessler, head of the data governance unit at the European Investment Bank, the EU Bank.


Read Does This Change Everything? from the European Investment Bank, the EU bank. Subscribe to the podcast on iTunes, Acast, PlayerFM and Spotify.

Does the coronavirus crisis change the future of digital privacy?

It doesn’t necessarily change it, but change is inevitable. As far as privacy is concerned, there is legislation already in place, most notably the European Union’s General Data Protection Regulation, or GDPR, which is the global gold standard for such legislation. A lot of people are concerned that the measures needed to combat and control the virus involve increasing amounts of surveillance and intrusion into people’s private lives and their movements, especially through the tracking applications being developed for installation on people’s smartphones. That shouldn’t cause concern in itself, because everyone agrees that the preservation of life and public health is absolutely the priority. But the big question is what happens afterwards, once a lot of safeguards have been eroded or compromised and a lot more data is in the hands of private corporations and governments. What will happen to that data?

A medical expert on another of the episodes in this podcast series said that he hopes medical treatment will become more digitalised, because it will help doctors deal with pandemics—as well as other diseases. That means doctors accessing our medical data digitally, and you can’t get much more private than medical data. Do you think it’s inevitable that we will accept a broader digitalisation of our privacy?

I think it is inevitable. The real question is not the broader digitalisation of our privacy, but of all aspects of our lives: the ways in which we live, the services we consume and the services that are provided to us. Data protection regulation and legislation is in place to try to make sure that these services are provided in ways that don’t compromise our privacy, while we still seek a balance with all the benefits that we get.

We take for granted interacting with our banks online, shopping online, storing personal documents, storing photos, consuming music. All of those activities leave traces. And that’s not even to mention social media. It seems obvious that there are data traces of our digital activities and we seem to be happy with knowing that the data is out there, but we seek reassurance from government authorities that the data’s not being abused and it’s being stored and protected in ways that we would expect from the kind of society that we live in.

It seems absolutely obvious that anyone seeking medical treatment would want medical practitioners to have their medical records available, containing both their medical history and alerts about any medication they’re taking or any allergies. That’s fine if you can recall all that information yourself when you’re conscious and seeking emergency treatment. But it’s very beneficial for practitioners to know a person’s medical history, especially if the patient isn’t able to recall or share it themselves.

There’s no question that the future of medical care lies with digitalisation—digitalisation of personal health data at its heart. People receiving improved medical treatment would support that, but they would still expect that the powers that be are making sure that the right safeguards are in place to ensure that sensitive data is only made available when it’s needed and to the people that need it to provide those services.

You talked about the apps that some governments are thinking of introducing as a way to manage the end of the coronavirus lockdown. How broadly are those being considered and how likely are we to see them introduced?

It’s happening right now. Singapore was an early example, developing an app that traces the contacts of people who are diagnosed as positive. It has been well accepted by society in Singapore, although adoption is low. It’s estimated that adoption needs to reach about 60% to be effective, but the actual take-up is much lower than that.

The UK government and the National Health Service are developing a contact-tracing app that’s about to be launched. It will use Bluetooth technology. The idea is that if someone is found to be positive, the app can identify who has had recent contact with that person and notify them, so that people who might not be experiencing any symptoms can find out, get tested and start self-isolating.
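
To illustrate the kind of matching such an app performs, here is a minimal sketch in Python, assuming a simplified decentralised design rather than the NHS app’s actual implementation: each phone broadcasts rotating random identifiers, logs the identifiers it hears nearby over Bluetooth, and checks that log against identifiers published when someone tests positive. All class and function names here are hypothetical.

```python
# Illustrative sketch only: a simplified decentralised contact-tracing match.
# The identifier rotation schedules, cryptography, Bluetooth I/O and privacy
# protections of real systems are omitted; all names here are hypothetical.
import secrets
from dataclasses import dataclass, field

@dataclass
class Phone:
    # Random ephemeral IDs this phone has broadcast, and IDs heard from others.
    broadcast_ids: list = field(default_factory=list)
    heard_ids: set = field(default_factory=set)

    def new_broadcast_id(self) -> str:
        """Generate and remember a fresh random identifier to broadcast."""
        ephemeral_id = secrets.token_hex(8)
        self.broadcast_ids.append(ephemeral_id)
        return ephemeral_id

    def record_nearby(self, ephemeral_id: str) -> None:
        """Store an identifier overheard from a nearby phone via Bluetooth."""
        self.heard_ids.add(ephemeral_id)

    def check_exposure(self, published_positive_ids: set) -> bool:
        """True if any overheard identifier belongs to someone who tested positive."""
        return bool(self.heard_ids & published_positive_ids)

# Usage: Alice and Bob are near each other; later Alice tests positive.
alice, bob = Phone(), Phone()
bob.record_nearby(alice.new_broadcast_id())   # Bob's phone hears Alice's beacon
published = set(alice.broadcast_ids)          # Alice's IDs are published on diagnosis
print(bob.check_exposure(published))          # True: Bob is notified to get tested
```

In a design like this, the matching happens on the phone and only random identifiers leave the device, which is precisely the kind of safeguard the privacy debate around these apps turns on.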

It’s generally accepted by epidemiologists and medical professionals that quick contact tracing is the key to containing the virus. The challenge is, what will happen to all the data? What kind of data is being captured? Is it personally identifiable data that could later be used by private companies to identify a person’s movements, what they consume, where they’ve been and what they like to do? Where will it be kept and for what purposes will it be used?

Is that really what this is all about—how private companies would use it? Because in newspapers there’s always this Orwellian fear that the government will say, “You’re a political dissident, so we’re going to tell your coronavirus app that you’re infected and therefore you have to stay home,” effectively a house arrest. But it seems you’re saying it’s more a matter of private companies getting our data and using it to make money out of us.

There are two aspects to this. The example that you’ve just given is no longer science fiction. It’s already happened in China, where classification based on data profiles has been applied to minorities to restrict their movement. China’s system gives people a green status that allows them to leave home or move out of their registered city, and there is already anecdotal evidence of erroneous yellow or red ratings, which restrict movement, being assigned due to technical errors; people haven’t been able to get them removed because the processes for making a complaint or an appeal are not clear. In addition, there would be very little accountability in such a society if a politician or the police, who have access to this data by the way, decided to put a red marker on someone, even if it had nothing to do with their health status. So these are not fears about the future; this is happening now in some parts of the world. The question is, what are the safeguards to prevent those steps being introduced in supposedly liberal Western democracies, if the platforms and infrastructure are already being built and adopted by citizens?

We’re talking now about a time of emergency. But once the threat of coronavirus recedes, will it be possible to roll back these changes in the privacy of our data? Should we even want to roll them back?

You began by asking about our data being in the public domain. The whole point of GDPR and similar legislation, such as the California Consumer Privacy Act introduced last year, which is increasingly being respected in other parts of the US, is to make sure that individuals, consumers and citizens have control over what data is stored by private companies, how it’s used and for what purpose, and that they have the right to have it removed. If we lose that control, our personal data can be used not just by governments, but by private organisations that are driven by profit.

People will accept this if they feel they’re getting some benefit. But when it comes to the motivations of private companies, there’s an incentive to extract insights from personal data for profit. For example, if there were aspects of your lifestyle that you didn’t want disclosed to an insurance company, but the insurance company found out and put you into a higher-risk category, you could end up being charged higher insurance premiums in a way that wasn’t transparent.

Private companies selecting candidates for a particular job could get access to personal data, your private history and activities, that you haven’t authorised, and use it to influence their decision about the kind of person they’re looking for.

The goal of this legislation is to let people control what data gets shared, what can be done with it and what can’t be done with it, and their right to have it withdrawn from unauthorised use.

Let me ask you about when this goes wrong. Just this week in Wales, the government sent 13 000 letters to the wrong addresses, telling people they were at risk from coronavirus based on their medical history. How often does that kind of thing happen and how damaging is it to our privacy?

Most governments have a data protection authority, and GDPR says that organisations and authorities need to have a responsible officer. Such breaches need to be reported and disclosed within a particular timeframe, and there are penalties for not doing so. The fact is that awareness of the value of data, both commercially and for people’s personal freedoms and privacy, is becoming more widespread. There’s a growing awareness of the tension this current crisis brings between the need to protect public health, which is unquestioned when people’s lives are at stake, and the need to make sure that any information gathered in the current situation is carefully managed and not used for other purposes, whether deliberately or accidentally, once the crisis is over.

What effect might these changes have on everyday life for citizens once this crisis is over? How much will our data privacy have changed?

We can expect that the use of these contact-tracing apps is going to be extremely widespread starting right now. It’s already happening. It opens the door to people being more accepting of the fact that their movements are tracked. They will trust governments and responsible authorities to use that information in a responsible way.

But think about the way the use of social media has exploded: people take for granted sharing more and more insights into their private lives on forums, using platforms and infrastructure where they don’t know who has access to the data or where the data is stored. Very few people, unless they’re brilliant technical specialists, understand the complexity of the infrastructure that Facebook uses to manage its services, let alone Amazon or eBay with all the shopping activity.

What this opens the door for is people being more accepting of sharing more and more information and using more and more applications which are offered ostensibly to make their lives easier.

The danger is that people come to accept carrying around a smartphone with these tracking applications without paying attention, and there isn’t enough scrutiny of who has access to that data and the purposes for which the insights are being used.

We have smartphones by choice. When we talk about a smartphone, it’s not really a phone. It’s an incredibly powerful handheld computer that just happens to have a phone built into it. In parts of China people have to have a smartphone so that their movements are tracked. They’re not asked if they want one. It’s hard to imagine, but we’re potentially coming closer to a situation where we are told we must have a smartphone to conduct day-to-day life, and that the phone must have certain applications activated in order to interact legally in civil society.

If that takes place, it has to be something that people choose to do, not something imposed by stealth by governments, especially when driven by private-sector interests.

In all this, what role can the European Investment Bank take?

The EIB is one of the high-profile European institutions that, especially as a member of the Eurogroup, is emerging as one of the key players in the rescue plan that is being put together to try to help the European economy to recover.

Something that is little understood and has been overshadowed by this crisis is that in February the European Commission published a new multi-year data strategy encouraging the digitalisation of every aspect of EU society for the benefit of civil and economic prosperity. Hardly anyone noticed this because the news was starting to come in about the impact of the virus.

A huge component of this is a very big focus on data and on artificial intelligence. The goal of the data strategy is to create a single market for data, in which data flows across the EU and across sectors with full respect for privacy and data protection, where rules of access are fair, and where there is an enormous benefit to the European economy as a global player because of this new data economy.

Given the fears about the misuse of data, the EU also launched a strategy on the use of artificial intelligence, with a working group investigating how to ensure trust in its use. These were published in two white papers that seem to have been overlooked in all the focus on the crisis. The one on AI is called A European Approach to Excellence and Trust.

The EIB can play a really important role by ensuring that, when it evaluates the loans it provides to counterparties, it takes into account not just the usual Know Your Counterparty and anti-money-laundering due diligence about where the money is going, but also whether the organisations that benefit from the lending are aware of and respect European standards and regulations on the use of data, so that legal and ethical practices are upheld.

The EIB can emerge as a leader in the area of data ethics, to ensure that the letter and the spirit of these regulations and the principles of preserving data privacy and protection are not just respected but established throughout the European economy and society.
