More than 18 months on from the advent of the EU-wide General Data Protection Regulation (GDPR), organisations are not only getting far more sophisticated in their handling of data but are also increasingly embarking on much more advanced data strategies.

As companies get more adept at handling large volumes of data in a compliant manner, a new challenge has emerged: data ethics. It highlights the difference between what companies CAN do with data and what they SHOULD do.

Riccardo Abbate, partner in the corporate department at Trowers & Hamlins, explains: “The term data ethics, which is now being used a lot, is really a flag to say you have got to think about a lot more than just the mechanical aspects of processing personal data. As well as thinking about where and how personal data is stored, and who has access to it, you now also need to think about the underlying core of data protection legislation, which is purpose.”

He adds, “Data protection legislation is not written in a prescriptive way but is instead underpinned by principles. As well as looking at the physicality of data, it is saying you must process personal data lawfully and fairly, and that you must only collect it for a legitimate purpose.

"With ever-increasing advances in computer power and capabilities, it is becoming more challenging for people to not be blinded by the practical aspects and make sure they keep sight of those ethical angles.”

One of the changes of emphasis that came with the introduction of GDPR and the UK’s Data Protection Act 2018 was the suggestion that organisations should periodically ask themselves about the purposes for which they are collecting, storing and processing personal data. On a practical level, that requires much greater understanding and coordination between the people in a business who are responsible for compliance and those at the coalface developing software and other processes that rely on data.

“That’s the challenge,” says Abbate, “making sure that the software people, the sales people and everyone else across the organisation understands and really appreciates what is actually going on. All too often there is a disconnect and people on the front line are doing things for the benefit of the business without keeping compliance in mind.”

The way that data is used is increasingly throwing up ethical issues of its own, as algorithms, artificial intelligence and machine learning take on more and more responsibility for corporate decision-making. There have been a number of high-profile incidents of computers delivering outcomes that would fall foul of equality legislation, with tech giant Apple among the culprits. The algorithms behind its Apple Card credit card hit the headlines for offering men far higher credit limits than women, even within married couples where the pair shared all their bank accounts, credit cards and assets.

While computers cannot discriminate off their own bat, they can produce decisions that are racist, sexist or in some other way discriminatory if the bank of data they are analysing, or the programmers who made the initial inputs, are already biased, even unintentionally.

Abbate says: “What’s happening is that algorithms are processing data in a purely statistical way, but there needs to be an additional layer of scrutiny beyond the raw processing to say that while the data might say X, we have to draw our own conclusions and potentially come up with the answer Y.”
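Abbate’s “additional layer of scrutiny” can be made concrete with a simple outcome audit. The sketch below is a minimal, hypothetical illustration in Python, not any lender’s actual methodology: it assumes a list of credit decisions tagged with a protected attribute (the decisions data and the 20% tolerance are both invented for this example) and flags a large gap in approval rates for human review rather than accepting the raw statistical output.

```python
# Minimal, hypothetical outcome audit: compare approval rates across a
# protected attribute and flag a large gap for human review. Illustrative
# only; the data, group labels and threshold are invented for this sketch.
from collections import defaultdict

# Hypothetical model outputs: (protected_group, approved)
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

def approval_rates(decisions):
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'men': 0.75, 'women': 0.25}
if gap > 0.2:  # arbitrary tolerance; a real policy would have to justify this
    print(f"Approval-rate gap of {gap:.0%} exceeds tolerance; escalate for human review")
```

The point is not the arithmetic but the workflow: the algorithm’s output (“the data might say X”) is checked against a separate, human-owned standard before it is acted on (“the answer Y”).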

Decisions involving data ethics are affecting more and more areas of our daily lives. In the US, artificial intelligence is being used to reduce some of the burden on the criminal courts, including informing decisions on custodial sentences. In the UK, the Serious Fraud Office has started using a document review service backed by artificial intelligence to analyse documents in large cases, piloting it on the high-profile Rolls-Royce bribery case. Capable of processing half a million documents a day, the system operated 2,000 times faster than a human lawyer, according to the SFO.

Mark Kenkre, partner in commercial litigation at Trowers & Hamlins, says: “The Law Society is now looking at situations where you rely on algorithms to analyse documents and asking whether that increases the potential for missing something that could be critical to a case. If there’s a risk of someone going to prison for a long time, is AI really good enough? Or could it be better?”

There are further complexities when it comes to the use of employee data, where it is increasingly possible to track the movements of staff throughout the working day and collate data on their habits and preferences. There is a growing business in the field of worktech, which can be used to monitor time use and attendance, identify signs of fatigue, manage the allocation of overtime or spotlight incidents of stress.

Christopher Recker, an associate in commercial litigation, says: “Organisations need to be embracing the fact that all this technology is out there, whether it is worktech, regtech, wearable tech or fintech.

“But the question that has to be asked when it is being used to produce enormous amounts of data is why? Why are you creating all of that data?”

Emma Burrows, partner in the Employment and Pensions team, says: “If you are collating information on your staff’s exercise or eating habits and you haven’t told them that is happening, the implications are potentially very serious. That data may be very valuable for identifying signs of stress or spotting challenges at home, and that may well be beneficial to the employee, but if you are surreptitiously generating or analysing that information the impact could be hugely damaging. It could give rise to employment claims, because an employee who is being monitored may well have claims for constructive dismissal and discrimination, as well as reputational issues at a time when the modern workplace puts emphasis on employees’ flexibility and autonomy. On balance, harvesting that data may not be seen as beneficial.”

One key way to avoid such failings is to return to the point about everyone in the organisation fully understanding what an ethical data policy looks like.

“Organisations must continuously challenge themselves to ensure they truly understand the purpose for which they are generating and using personal data,” says Abbate. “The really key thing is to make sure the people actually developing the software and working with the client development teams are aware of what the law has to say on data protection. It is not so much at board level, where senior management of course has to understand this, but about getting everyone in the business to break down internal silos and work together to make sure the products that are being developed are compliant.”

He adds: “The laws continue to evolve and GDPR demonstrates that everyone is now really serious about data protection. We now see the Information Commissioner’s Office showing they mean business and using their new powers to impose large fines and that’s the big message – the supervisory authorities are now coming in with fines that are commensurate with the powers they have been given.”

A failure to understand the ethics behind data privacy rules could be very serious indeed.