The European Data Protection Board (EDPB), an expert steering body which advises EU lawmakers on how to interpret rules wrapping citizens’ personal data, has warned the bloc’s legislators that a package of incoming digital regulations risks damaging people’s fundamental rights unless “decisive action” is taken to amend the suite of proposals.
The reference is to draft rules covering digital platform governance and accountability (the Digital Services Act; DSA); proposals for ex ante rules for internet gatekeepers (the Digital Markets Act; DMA), the Data Governance Act (DGA), which aims to encourage data reuse as an engine for innovation and AI; and the Regulation on a European approach for Artificial Intelligence (AIR), which sets out a risk-based framework for regulating applications of AI.
The EDPB’s analysis further suggests that the package of pan-EU digital rules updates will be hampered by fragmented oversight and legal inconsistencies — potentially conflicting with existing EU data protection law unless clarified to avoid harmfully inconsistent interpretations.
Most notably, in a statement published today following a plenary meeting yesterday, the EDPB makes a direct call for EU legislators to tighten the rules on targeted advertising in favor of alternatives that do not require the tracking and profiling of internet users — going on to call for lawmakers to consider “a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking”.
Furthermore, the EDPB statement urges that the profiling of children for ad targeting should “overall be prohibited”.
As it happens, the European Parliament’s internal market and consumer protection (IMCO) committee was today holding a hearing to discuss targeted advertising, as MEPs consider amendments to the DSA.
There has been a push by a number of MEPs for an outright ban on tracking-based ads to be added to the DSA package — given rising concern about the myriad harms flowing from surveillance-based ads, from ad fraud to individual manipulation and democratic erosion (to name a few).
However, MEPs speaking during the IMCO committee hearing today suggested there would not be overall support in the Parliament to ban tracking ads — despite compelling testimony from a range of speakers articulating the harms of surveillance-based advertising and calling out the adtech industry for misleading lobbying on the issue by seeking to conflate targeting and tracking.
While retail lobbyist Ilya Bruggeman did speak up for tracking and profiling — parroting the big adtech platforms’ claim that SMEs rely on privacy invasive ads — other speakers at the committee session aligned with civil society in challenging the line.
Johnny Ryan, a former adtech industry insider (now a fellow at the Irish Council for Civil Liberties, aka the ICCL) — who has filed numerous GDPR complaints against real-time bidding (RTB)’s rampant misuse of personal data, dubbing it the biggest security breach in history — kicked off his presentation with a pointed debunking of industry spin, telling MEPs that the issue isn’t, as the title of the session had it, “targeted ads”; rather the problem boils down to “tracking-based ads”.
“You can have targeting, without having tracking,” he told MEPs, warning: “The industry that makes money from tracking wants you to think otherwise. So let’s correct that.”
The direction of travel of the European Parliament on behavioral ads (i.e. tracking-based targeting) in relation to another key digital package, the gatekeeper-targeting DMA, also looks like it will eschew a ban for general users in favor of beefing up consent requirements. Which sounds like great news for purveyors of dark pattern design.
That said, MEPs do appear to be considering a prohibition on tracking and profiling of minors for ad targeting — which raises questions of how that could be implemented without robust age verification also being implemented across all Internet services… Which, er, is not at all the case currently — nor in most people’s favored versions of the Internet. (The U.K. government might like it though.)
So, if that ends up making it into the final version of the DMA, one way for services to comply/shrink their risk (i.e. of being accused of ad-targeting minors) could be for them to switch off tracking ads for all users by default — unless they really have robustly age-verified a specific user is an adult. (So maybe adtech platforms like Facebook would start requiring users to upload a national ID to use their “free” services in this version of the future… )
In light of MEPs’ tentativeness, the EDPB’s intervention looks significant — although the body does not have lawmaking powers itself.
But by urging EU co-legislators to take “decisive action” it’s firing a clear shot across the Council, Parliament and Commission’s bows to screw their courage to the sticking place and avoid the bear-pit of lobbying self-interest; and remember that alternative forms of (contextually targeted) online advertising are available. And profitable.
“Our concerns consist of three categories: (1) lack of protection of individuals’ fundamental rights and freedoms; (2) fragmented supervision; and (3) risks of inconsistencies,” the Board writes in the statement, going on to warn that it “considers that, without further amendments, the proposals will negatively impact the fundamental rights and freedoms of individuals and lead to significant legal uncertainty that would undermine both the existing and future legal framework”.
“As such, the proposals may fail to create the conditions for innovation and economic growth envisaged by the proposals themselves,” it also warns.
The EDPB’s concerns for citizens’ fundamental rights also encompass the Commission’s proposal to regulate high-risk applications of artificial intelligence, with the body saying the draft AI Regulation does not go far enough to prevent the development of AI systems that are intended to categorize individuals — such as by using their biometrics (e.g. facial recognition) to sort people according to ethnicity, gender, political or sexual orientation, or other prohibited grounds of discrimination.
“The EDPB considers that such systems should be prohibited in the EU and calls on the co-legislators to include such a ban in the AIR,” it writes. “Furthermore, the EDPB considers that the use of AI to infer emotions of a natural person is highly undesirable and should be prohibited, except for certain well-specified use-cases, namely for health or research purposes, subject to appropriate safeguards, conditions and limits.”
The Board has also reiterated its earlier call for a ban on the use of AI for remote biometric surveillance in public places — following a joint statement with the European Data Protection Supervisor back in June.
MEPs have also previously voted for a ban on remote biometric surveillance.
The Commission proposal offered a very tepid, caveated restriction which has been widely criticized as insufficient.
“[G]iven the significant adverse effect for individuals’ fundamental rights and freedoms, the EDPB reiterates that the AIR should include a ban on any use of AI for an automated recognition of human features in publicly accessible spaces — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — in any context,” the Board writes in the statement.
“The proposed AIR currently allows for the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement in certain cases. The EDPB welcomes the recently adopted EP Resolution where the significant risks are highlighted.”
On oversight, the EDPB sounds concerned about data protection bodies being bypassed by the bloc’s ambitious flotilla of digital regulations updates — urging “complementarity in oversight” to enhance legal certainty, as well as emphasizing the need for DPAs to be provided with “sufficient resources to perform these additional tasks”. (A perennial problem in an age of ever bigger data.)
Legal certainty would also be improved by including explicit references to existing data protection legislation (such as the GDPR and ePrivacy Directive), it argues, to avoid the risk of incoming data packages weakening core concepts of the GDPR such as consent to data processing.
“It also creates the risk that certain provisions could be read as deviating from the GDPR or the ePrivacy Directive. Consequently, certain provisions could easily be interpreted in a manner that is inconsistent with the existing legal framework and subsequently lead to legal uncertainty,” the Board warns.
So far from the EU’s much vaunted digital regulation reboot strengthening protections for citizens — to boost their trust in data-driven services — there is, failing some very key amendments, a risk of death by a thousand cuts (and/or regulatory complexity) to fundamental rights, with potentially ruinous consequences for the bloc’s much proclaimed “European values”.