Online Harms legislation: what to expect


Chris Priebe, CEO of Two Hat Security, explores how business, government and users will be affected by Online Harms and Duty of Care legislation in the UK and beyond.

First, it is important for companies to understand what the UK’s Online Harms legislation is: it seeks to protect internet users from exposure to readily available illegal or harmful content. The legislation will demand much more transparency and accountability from social networks, and it will provide users with new tools to protect themselves online.

Certainly, there will be a learning curve for both regulators and companies, so it is important for all parties to co-operate in order to streamline the implementation of the legislation. All parties share a common goal: to protect users from harm and to support an online environment where that protection is increasingly seamless, and virtually invisible. At Two Hat, one of our many goals is to assist in achieving and maintaining that level of protection.

A good place to start is by setting expectations for how online harms will come to be defined, the risk-based regulatory approach to enforcement that will be broadly adopted, and the unique culture of transparency the new legislation will create for users, companies and governments alike.

Definitions of harms will be standardised

A higher level of standardisation gives companies more confidence that they are, in fact, compliant. This allows them to implement improved protection in alignment with regulations, rather than respond in fear of them.

We are currently helping to co-ordinate multiple international efforts to standardise the definition of each category of harmful content, such as grooming, sextortion or cyberbullying, with the UK leading the way. Moving forward, it is essential to have a shared definition (as opposed to a single government’s definition), so that reports compare apples to apples.

I also believe there should be a mechanism for approving third-party companies to perform audits and submit an annual report on behalf of organisations. A safe harbour programme wherein companies can get certified as compliant, such as that offered by PRIVO, may provide a good framework for regulators in the UK and elsewhere.

Duty of Care will be proportional

As part of the Duty of Care component, companies will need to provide effective, user-friendly functions that allow users to register complaints; these complaints will be overseen by regulators and must be responded to by the company in a timely manner.
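
To make the "timely manner" requirement auditable, a complaint record needs at minimum a receipt timestamp and a response deadline that can be checked against it. A minimal sketch in Python, assuming a hypothetical 48-hour service-level target (the legislation does not specify one, and all field names here are illustrative):

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    @dataclass
    class Complaint:
        # A single user complaint, with the fields an auditor would need.
        user_id: str
        content_id: str
        reason: str
        received_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

        def respond_by(self, sla_hours: int = 48) -> datetime:
            # Deadline for a first response; the 48-hour window is illustrative.
            return self.received_at + timedelta(hours=sla_hours)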

It is my understanding that the regulator, through Department for Digital, Culture, Media and Sport (DCMS) oversight, will take a risk-based and proportionate approach across business types. This will probably mean the regulator’s initial focus will be on those companies that pose the biggest and clearest risk of harm to users, whether because of the scale of their platforms or because of known issues with serious harms.

As mentioned, we are working with academic and industry partners to propose international definitions for online harms categories such as cyberbullying, and we also hope to release an international test set. That way, companies everywhere can report on their accuracy levels against the same benchmark: a level playing field for assessing potential harms.
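
To illustrate what reporting "accuracy levels" against a shared test set could look like, here is a minimal Python sketch. It assumes a hypothetical test set of labelled messages and a classifier that returns the set of harm categories it detects in each one; none of these names come from any published standard:

    from collections import defaultdict

    def evaluate(classifier, test_set):
        # Score a moderation classifier against a shared, labelled test set,
        # reporting per-category precision and recall.
        tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
        for item in test_set:
            predicted = classifier(item["text"])  # set of detected categories
            actual = set(item["labels"])          # ground-truth categories
            for cat in predicted & actual:
                tp[cat] += 1
            for cat in predicted - actual:
                fp[cat] += 1
            for cat in actual - predicted:
                fn[cat] += 1
        report = {}
        for cat in set(tp) | set(fp) | set(fn):
            precision = tp[cat] / (tp[cat] + fp[cat]) if (tp[cat] + fp[cat]) else 0.0
            recall = tp[cat] / (tp[cat] + fn[cat]) if (tp[cat] + fn[cat]) else 0.0
            report[cat] = {"precision": round(precision, 3), "recall": round(recall, 3)}
        return report

Because every company would run the same test set through the same scoring, the resulting numbers would be directly comparable: the apples-to-apples comparison described above.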

Regardless of a company’s risk level, however, one component of the Online Harms legislation matters to every company: information, both in terms of what is collected and what is shared.

Transparent public reporting will become law

As a starting point, the regulator will have the power to require annual transparency reports from platform operators, outlining the prevalence of harmful content on their platforms and the countermeasures operators are taking to address the problem. These reports will be made public, so that users – in many cases parents – can make better-informed decisions about internet use. It is extremely important that these reports are digital and readily accessible in app stores for each app.
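
As an illustration of the quantitative core such a report might contain, consider a simple prevalence summary over a platform’s moderation log. This is a sketch under assumed field names; the legislation does not prescribe a report format:

    from collections import defaultdict

    def prevalence_report(moderation_log, total_items):
        # Summarise flagged and removed content per harm category, plus each
        # category's share of everything posted in the reporting period.
        counts = defaultdict(lambda: {"flagged": 0, "removed": 0})
        for record in moderation_log:
            entry = counts[record["category"]]
            entry["flagged"] += 1
            if record["action"] == "removed":
                entry["removed"] += 1
        return {
            category: {
                "flagged": c["flagged"],
                "removed": c["removed"],
                "prevalence_pct": round(100 * c["flagged"] / total_items, 4),
            }
            for category, c in counts.items()
        }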

For the most serious online offences, such as incidents of child sexual exploitation and abuse (CSEA) and terrorist content, companies will be expected to go much further and demonstrate the steps they have taken to combat the dissemination of any associated content and illegal behaviours. To support this, the regulator will work closely with UK Research and Innovation (UKRI) and other partners to improve the evidence base for assessing such cases.

The regulator will also set out expectations for companies to do what is reasonably practicable to counter harmful activity or content, depending on the nature of the harm, the risk of the harm occurring on their services, and the resources and technology available to them. Our expectation is that regulators in other countries will also adopt this risk-based approach.

Companies are already adapting

Platform operators have seen a much greater Duty of Care on the horizon for some time. Many have already started examining their roles in protecting users from harmful content that finds its way onto their platforms.

MovieStarPlanet is an outstanding example of a company that has been taking action and applying best practices for a long time. Based in Denmark, MovieStarPlanet produces online games for children – and it takes user safety seriously. The company has produced detailed documentation on its safety practices, including the deployment of tools and human moderators, as well as comprehensive child safeguarding policies.

With online safety practices led by Head of Safety Vernon Jones, a veteran in the industry, MovieStarPlanet has worked with the UK Home Office and DCMS for some time in relation to the Online Harms white paper, and has been extremely proactive in leading the way on collaborations pertaining to user safety.

Others are redesigning their products, or planning new ones, built on Safety by Design principles: a global movement to create online products and communities purposefully built for a safe user experience. The Australian eSafety Commissioner is a leading force in this area.

Most companies already have procedures in place to monitor the content on their platforms; however, many still fall woefully behind due to the sheer volume of content to be monitored. To meet their requirements, it is extremely important that proactive moderation techniques, such as Artificial Intelligence (AI)-based models, continue to evolve; otherwise, content moderation as a whole will not scale.
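
To show why proactive, AI-assisted triage is what makes moderation scale, here is a minimal sketch of the common pattern: score content before it is published, act automatically only on high-confidence cases, and route the uncertain middle band to human moderators. The classifier and thresholds are placeholders, not a description of any specific product:

    def triage(message, classify, block_threshold=0.95, review_threshold=0.60):
        # classify(message) is assumed to return a probability (0..1) that
        # the message is harmful; both thresholds are illustrative and would
        # be tuned per platform and per harm category.
        score = classify(message)
        if score >= block_threshold:
            return "block"         # high confidence: act automatically
        if score >= review_threshold:
            return "human_review"  # uncertain: queue for a moderator
        return "publish"           # low risk: allow immediately

Only the uncertain band reaches human moderators, which is what keeps review queues manageable as content volume grows.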

In addition to supporting the creation and growth of these AI tools through public investment, regulators must also collaborate with private industry to publish educational resources to help end users understand the new laws. This is an established best practice and will be a critical component of online harms initiatives globally.

In summary

Technology, and how people use it, is constantly evolving, so policies to manage online safety must evolve as well; the alternative is obsolete or ineffective legislation. As new regulations emerge, definitions of harm are standardised and failures under the Duty of Care become public knowledge, the public-private partnership that has for so long concerned itself with driving digital and economic scale will adapt to become a purposeful custodian of our online experience.

Chris Priebe

CEO

Two Hat Security

British Columbia, Canada

