The radical change caused by the pandemic requires new approaches to data privacy practice, says Daniel Gordon.
Over the past 17 months, business operations and consumer demand have changed forever. The shift to remote working has been swift and more permanent than anyone initially envisaged. Governments have embraced technology to gain insight into critical health issues, from mass testing to advanced analytical models of disease spread, and to inform policy creation and decision-making.
Although these are positive changes, the speed of the transformation has created some long-term privacy and ethical considerations.
For most professionals, this rapid innovation in the way we work has been visible through the use of videoconferencing and new collaboration tools. However, some organisations have taken this a step further and are increasing the automated monitoring of employees. This monitoring of behaviour can be done for many reasons, such as health and wellbeing or performance management.
At one of the largest accounting firms, staff were recently invited to participate in a wearable tech trial. This monitoring of staff activity can help with safeguarding and ensuring that staff are happy, healthy and productive. It could also be used to enable organisations to meet their duty of care to workers in remote locations.
However, the same wearables could also be used to track how often employees leave their desks, when they clock off or how long they take for lunch. The technology is identical, but the two scenarios are very different.
The ethical and privacy considerations of this are significant and organisations looking to adopt these approaches should ensure the incorporation of privacy by design. They should also consult with data subjects to understand the feelings and concerns of stakeholder groups. This will help to ensure that the product or outcome is readily accepted by users, the public and advocacy groups.
Mass monitoring may be here to stay
At the start of the pandemic, it would have been hard to believe that governments would be leading some of the most cutting-edge data science and analytics on the planet. Now they are innovating at pace, developing advanced capabilities to monitor their populations for the spread of Covid-19.
Many of these activities have been highly scrutinised by privacy groups and regulators, with the current consensus being that they are necessary and beneficial to society. That may not remain the case in future, but this capability has been costly to build and will not be willingly given up.
As advanced analytics provide such deep insight into complex issues in our new world, governments are likely to want to expand on the platforms they have created. Possible use cases that have been touted include expanded public-private research partnerships into disease and better management of the healthcare system through an improved understanding of current and future demand.
Such initiatives provide an excellent opportunity. However, they could pose a substantial risk to the rights and freedoms of citizens. Governments will need to increase their ability to weigh up ethical, legal and privacy-related concerns on a case-by-case basis.
One way of ensuring that safeguards are integrated by default in the future is to use techniques such as anonymisation and data minimisation. By building these techniques tightly into the development cycle, organisations can continue to apply today's analytical methods while limiting the personal data they expose.
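To make the idea concrete, a minimal sketch of data minimisation combined with pseudonymisation might look like the following. The field names, salt and retained columns are purely illustrative assumptions, not taken from the article:

```python
import hashlib

# Illustrative only: these field names are assumptions for the example.
RETAINED_FIELDS = {"postcode_district", "test_result", "test_date"}

def pseudonymise(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimise(record: dict, salt: str) -> dict:
    """Keep only the fields the analysis needs; pseudonymise the identifier."""
    out = {k: v for k, v in record.items() if k in RETAINED_FIELDS}
    out["subject_id"] = pseudonymise(record["patient_id"], salt)
    return out

record = {
    "name": "Jane Doe",              # dropped: not needed for analysis
    "patient_id": "9434765919",      # replaced with a pseudonym
    "postcode_district": "SW1A",
    "test_result": "negative",
    "test_date": "2021-06-01",
}
print(minimise(record, salt="per-project-secret"))
```

Note that salted hashing is pseudonymisation rather than full anonymisation: records can still be linked across datasets that share the salt, so the salt itself must be protected and rotated per project.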
Data ethics must also be deeply integrated into the development of new analytics and monitoring activities. Integration can be achieved by building an ethics framework that provides development teams with guardrails for their work while systematically evaluating ethical issues relating to new and emerging technology. Having a robust framework and processes in place is especially important when initiatives could have a real-world impact, such as the ability to receive treatment or service.
Regulators are already thinking about the future
Regulators across the EU and within the UK have already started to work on protecting citizens. Most notably, recent proposals related to artificial intelligence regulation in the EU would help to ensure that future advances are secure, unbiased and monitored. Within the UK, the Information Commissioner’s Office is working on new anonymisation guidelines that will clarify options for organisations that want to limit their use of personal data.
These kinds of activities seem to be accelerating, so it is reasonable to anticipate new regulatory announcements on advanced technologies, analytics and monitoring in the near future.
This underlines the need to prepare for future developments carefully, including: