Can we protect our data in the artificial intelligence era?

Europe wants to be a leader in tech revolutions like AI. This ambition, however, contrasts sharply with Brussels’ desire to protect the right to privacy, since AI needs data to develop. A new European regulation promises to make both objectives compatible, though it does not resolve the problem.

Published On: January 10th, 2022

Image: Geralt / Pixabay

It is 2016. Donald Trump has won the United States presidency and Brexit has promised to take the United Kingdom out of the European Union. Both campaigns employ Cambridge Analytica, which harvests the data of millions of Facebook users to personalise electoral messaging and sway voting intentions. Millions of people begin to ask themselves whether, in the digital era, they have lost something they valued dearly: their privacy.

Two years later, millions of email inboxes across Europe would fill up with messages from companies asking users for permission to keep processing their data, in compliance with the new General Data Protection Regulation (GDPR). Despite its imperfections, the law has served as a point of reference for legislation in Brazil and Japan, and it began the era of data protection in earnest.

Nevertheless, what was once seen as a triumph for privacy is now viewed as a roadblock in Europe’s quest to develop digital technologies, especially artificial intelligence. Can European law protect citizens’ privacy when faced with a technology that is not always transparent?

Do we prioritise digital rights or innovation?

An artificial intelligence (AI) system is a computational tool that applies algorithms to data in order to produce correlations, predictions, decisions and recommendations. This capacity to inform, and even reinterpret, human decisions puts AI at the very heart of the data economy.
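
To make that definition concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library (neither the language nor the library is mentioned in the article): a model learns correlations from a handful of historical records and then offers a prediction and a recommendation for a new case.

    # Minimal sketch of an AI system in the sense above: a tool that
    # learns correlations from data and turns them into predictions.
    # (Hypothetical toy example; no real system or dataset is implied.)
    from sklearn.linear_model import LogisticRegression

    # Toy historical data: [age, income in thousands] -> loan repaid (1) or not (0).
    X = [[25, 1.8], [40, 3.2], [35, 2.5], [52, 4.1], [23, 1.5], [46, 3.9]]
    y = [0, 1, 1, 1, 0, 1]

    model = LogisticRegression().fit(X, y)

    # The trained model now offers a decision and a graded recommendation
    # for a new applicant it has never seen.
    print(model.predict([[30, 2.1]]))        # predicted class
    print(model.predict_proba([[30, 2.1]]))  # probabilities behind it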

The capacity to make more efficient decisions thanks to AI also has geopolitical consequences. States are investing more and more in this type of technology, driven by the motto coined by Vladimir Putin in 2017: “Whoever dominates artificial intelligence dominates the world”. By 2019, the US was investing almost 200% more in AI than it did in 2015, and Japan 1,109% more.

This sense of urgency has also spilled over into other areas, such as digital rights in Europe. European lawmakers legislate in favour of privacy, fight big tech monopolies and create spaces in which personal data is kept safe and secure. These advances in digital rights, however, could threaten the continent’s economic prosperity.

When GDPR came into force in 2018, companies were already warning that complying with its strict data protection demands would hamper technological innovation. Among the most common arguments against GDPR are that it reduces competition, that compliance is too complicated, and that it limits Europe’s capacity to create ‘unicorns’: young startups valued at more than a billion dollars, which tend to seek out lightly regulated places in which to base themselves and invest.

Brussels, on the other hand, argues that its market of more than 500 million people, with guarantees of political stability and economic freedom, will keep attracting investors. Europe’s own Commissioner for Competition, Margrethe Vestager, added this year that the Commission would only intervene when the fundamental rights of European citizens were being endangered.

Reconciling artificial intelligence and privacy

Complying with GDPR can pose an additional problem for the development of artificial intelligence. AI systems need large amounts of data to train on, but European law limits businesses’ capacity to obtain, share and use that data. Conversely, without such regulation, the mass harvesting of data would compromise citizens’ privacy. In practice a balance has emerged: according to the pro-privacy European Digital Rights group, the sometimes vague wording of GDPR has left a margin for AI development.

As might be expected, delicate aspects of this precarious balance remain. One of them is the principle of transparency, which gives every citizen the right to access their data and to receive a clear and concise explanation of what is being done with it. That transparency can be difficult to maintain, however, when the data is being processed by AI systems.

Businesses and AI research institutions have spent years working on so-called ‘explainability’ and ‘interpretability’: the idea that a non-expert should be able to understand an AI system in layman’s terms, recognising why it takes certain decisions and not others. It is not an easy task: many of these systems work like ‘black boxes’, a metaphor commonly employed in the industry, because neither those who build the algorithm nor those who implement the decisions it recommends understand how it arrives at them.
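
As a hedged illustration of what interpretability tooling can look like in practice, the sketch below (in Python with scikit-learn; the article names no specific tool, so this is purely an assumed example) uses permutation importance, a common technique that estimates how much a model relies on each input feature by shuffling that feature and measuring how much accuracy drops.

    # One common interpretability technique: permutation importance.
    # Shuffling a feature the model relies on should hurt accuracy;
    # shuffling an irrelevant one should not. (Illustrative sketch only;
    # the article does not prescribe any particular method.)
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train an opaque, "black box" style model.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Ask which features actually drive its decisions.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

Techniques like this offer only a partial view: they rank inputs by influence, but they do not fully open the black box.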

Another dilemma is the ‘right to be forgotten’. Celebrated as a GDPR victory for privacy, it obliges businesses to delete the data of anyone who requests it. In the case of AI systems, a business could, in theory, delete the data of a person whose records were used to train the algorithm, but not the trace that data has left on the system, making total ‘forgetting’ impossible.
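
To see why the ‘trace’ persists, consider this toy sketch (again in Python with scikit-learn, an assumed rather than sourced example): deleting a person’s record from the stored training data leaves an already-trained model untouched, because its learned parameters still reflect that record; only retraining from scratch without the record produces a different model.

    # Toy illustration: deleting a training record does not erase its
    # trace from a model that was already trained on it.
    # (Hypothetical sketch; not a procedure described in the article.)
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = (X[:, 0] + rng.normal(scale=0.5, size=100) > 0).astype(int)

    # A model trained on all 100 records, including person 0's data.
    model_before = LogisticRegression().fit(X, y)

    # "Forget" person 0 by deleting their raw record...
    X_rest, y_rest = np.delete(X, 0, axis=0), np.delete(y, 0)

    # ...but model_before still encodes the deleted record in its
    # coefficients. Only retraining without it yields a different model.
    model_after = LogisticRegression().fit(X_rest, y_rest)
    print("coefficients before deletion: ", model_before.coef_)
    print("coefficients after retraining:", model_after.coef_)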

Is new European regulation the solution?

Although privacy and innovation may seem irreconcilable, all is not lost. In April 2021 the European Commission published a proposal to regulate artificial intelligence. Despite drawing criticism on several points, such as its refusal to prohibit facial recognition systems, it is an innovative piece of legislation that obliges companies to open the black box to ensure transparency. As ever, a victory for data protection activists also angers those who argue that transparency obligations restrain innovation and drive businesses elsewhere.

In parallel with this initiative, European institutions reached an agreement in October 2021 on the Data Governance Act, which covers data reuse and creates public data pools and cooperatives so that businesses can benefit from innovating in Europe. The law will allow businesses to find the data they need in these spaces rather than buying it from other companies or obtaining it through unethical channels such as users’ online traces. It is a groundbreaking vision: it permits ‘data donation’ as a means of filling these pools, breaking with the consensus that treats data as a commodity.

The world has still not come to an agreement on AI regulation, but the EU could become a pioneer, with a possible law expected in 2022 or 2023 that would apply across its twenty-seven Member States. That law would establish a risk classification for AI systems: those used in healthcare, for example, would be classed as ‘high risk’, bringing new obligations for those who develop and implement them. Although bodies such as the European Data Protection Board claim that the new regulation would still leave room for innovation, we will only see its true effect if it manages to solve the great dilemmas of transparency and the right to be forgotten.

This article has been produced within the Panelfit project, supported by the Horizon 2020 programme of the European Commission (grant agreement n. 788039). The Commission did not take part in the production of the article and is not responsible for its content. The article is part of the independent journalistic production of EDJNet.
