What firms need to know about the ‘right to be forgotten’

If you haven’t come across the ‘right to be forgotten’ yet, here’s the brief: a Spanish man asked Google to remove from its search results certain links to Spanish newspaper pages which referenced various debts he had in 1998.

He contended that this information was out of date and that Google should therefore not display the links. So he brought proceedings against Google Spain and Google, Inc. The court ruled in his favour, holding the following:

(1)       A search engine provider is a ‘data controller’, with obligations under the Data Protection Directive. Merely processing information does not in itself make a business a ‘controller’, but Google – which finds, stores and indexes information – is clearly more than a simple processor of data.

(2)       A non-EU company like Google is deemed to be a data controller when it sets up an EU branch intended to promote and sell the advertising space offered by its search engine, directed at people in the EU.

(3)       A search engine has to remove links to web pages published by third parties which contain information about an individual, where that individual objects to the results returned by a search against their name – even if that information is lawful.

(4)       A balance must be struck between an individual’s right to privacy and both the economic interest of the search engine operator (here, Google) and the interest of the general public in accessing the information.

The broad premise is that the less ‘public’ a person is, the more likely they are to have their right to privacy upheld.

Implications for non-EU companies in Europe

Perhaps the most significant part of the decision is that it captures non-EU companies as ‘EU data controllers’ when they have operations in the EU of the kind described above.

The ramifications of this are huge: any entity with sales and marketing operations in the EU is caught and needs to pay attention to the ECJ’s ruling.

Implications for search engine providers

The implications for search engine providers are also weighty: clearly they cannot make an informed decision on whether the information they link to contains personal data which is no longer relevant, or is incomplete or inaccurate.

The processing of big data on the internet is almost entirely automated, yet automating the filtering out of such personal data seems, frankly, a little like science fiction at this point.

Does this mean that search engines will therefore comply with all requests to block the processing of an individual’s data?

Perhaps, but opponents argue this could lead to censorship. Suppose an individual seeks to block links to published information which relates to a crime they committed. Should they have the right to censor that information?

Arguably, current laws on defamation already adequately protect those who seek to block libellous information. But why should information which is true (but not libellous) be censored? That seems very 1984.

So what about the balance between this and the public interest test referred to above? For a business like Google to make that decision, no small amount of human intervention is needed – will search engines have to increase resources to manage the potential floodgates here?

Google’s initial response has been to employ a small army of paralegals to address take-down requests. It appears to be taking a highly cautious approach, preferring to grant requests rather than spend time analysing their merits and applying the public interest test. This may also partly stem from the volume of requests, with some commentators estimating more than 10,000 so far.

Its attempt to deal with the thousands of requests received since the decision involves a form-based approach designed to filter out frivolous requests. Broadly speaking, Google’s position is: take it down if someone complains.

What now?

The decision cannot be appealed. The floodgates have opened, and Google has responded with a ‘human automation’ approach – i.e. processing requests will be an administrative exercise rather than a qualitative assessment of each individual request. It’s the cheaper, more time-efficient and least painful option, but, disappointingly, it makes it unlikely that requests will be given the scrutiny the public interest demands.

Sam Jardine, partner at commercial law firm Watson Burton LLP
