Digital & data | September 13 2023

Robots, regulation and real people: managing AI in the workplace

Although AI is already touched upon by existing financial, intellectual property and data protection regulation, the rise of generative AI has prompted regulatory bodies to craft bespoke rules for the technology.

Artificial intelligence (AI) is fuelling a new wave of digital change. ChatGPT and generative AI have soared in popularity, bringing new momentum to the adoption of AI and driving organisations to explore its potential to transform their business.

This rush of interest and investment in generative AI has also brought renewed focus on the risks of AI, including its potential impact on the workforce. As both the potential of AI and the associated risks become more apparent, governments and regulators are seeking to regulate the fast-developing technology.

The changing regulatory landscape

AI is already regulated under a myriad of rules, including intellectual property, data protection, anti-trust and financial regulation, product liability and consumer protection. However, the development of AI-specific regulation has accelerated rapidly as governments race to respond to the risks highlighted by the rise of generative AI.

The EU is advancing the world’s first bespoke legislation for AI regulation with its AI Act, expected to be passed later this year and to come into force around 2026. The act will impose specific requirements, including transparency and fairness obligations, accountability requirements, and rules regarding training, validation and testing data. The UK will host the first major global summit on AI safety later this year, but is currently adopting a lighter touch than the EU, whose comprehensive regime is likely to have an impact felt well beyond Europe.

Through the employment lifecycle

Increasingly, AI is becoming a feature throughout the employment lifecycle as part of recruitment, ongoing line management, promotion and appraisal decisions, and even termination. Using AI in this way is not problematic per se, but it is crucial that employers adopt policies and practices which ensure that there is human input at each stage in the process. 

AI tools can inherit bias from the underlying data set. This means they can perpetuate existing stereotypes and biases, treating certain groups less favourably. This may result in discrimination claims being brought by employees with protected characteristics, such as ethnicity or sex, if they are subject to biased decision-making.

While employers may be able to show objective justification to defend discrimination claims (for example, that using AI was a proportionate means of achieving a legitimate aim), there are two significant problems: first, explaining algorithmic decision-making can be difficult; and second, an AI system can obscure the stage at which bias entered the process.

Guardrails and training

To establish clearly what constitutes acceptable use of AI and to mitigate the risks it poses, firms should implement clear policies and regular training for employees, covering how AI use interacts with the firm’s data protection and confidentiality obligations. Employees need to understand what acceptable use looks like, the limitations of the tools, and where human oversight is required.

Firms may also embrace a pro-innovation approach by running ideas campaigns to share best practice and evaluate the most productive ways of harnessing AI.

Complement, not replace

The perceived threat of robots eradicating human jobs is nothing new, and while AI tools can drive huge efficiencies, they are not (yet) sophisticated enough to replace human input. This is especially true in a role where individuals are expected to apply their own specialist expertise as an overlay to any machine-generated output.

The financial services regulatory regime is centred on strengthening individual accountability and responsibility. The regulator is likely to take a dim view of firms using AI as a means to shift or dilute that accountability.

Mitigating risk while embracing opportunities

Existing rules and rapidly evolving regulation create a complex matrix of risks, challenges and opportunities for firms to navigate.

However, the key takeaways are that firms must: first, provide oversight and accountability for their AI; second, be able to explain to customers — and regulators — how AI is being used; third, validate initial findings and monitor products as they evolve; and fourth, retain an element of human oversight and input.

Sonia Cissé and Sinead Casey are partners in Linklaters’ global technology sector practice.

