Following on from our inaugural Data, privacy & security in the workplace: key issues in 2023 event, we are happy to share our top takeaways from our AI and the world of work session, where Bryony Long and David Lorimer were joined by Adriano Koshiyama, Co-founder at Holistic AI. We discussed the uses already being made of AI in the workplace, the use cases likely to emerge in the future, and the do’s and don’ts from both a data & privacy and an employment law perspective.

Here are our top 5 takeaways:

  1. Myriad of rules: While AI-specific regulation is still being finalised across the globe, there is already a plethora of existing legislation, at least in the UK, that needs to be taken into account when deploying AI in the workplace. For example, there is existing case law ensuring employees are treated fairly, the Equality Act ensuring that decisions are not made on the basis of protected characteristics, complex data protection laws, and rights to privacy, to name but a few.
  2. Assess purpose: When deploying AI, always assess compliance in the context of the use case and the associated risk to the business. This means the first question anyone should ask when deploying AI is: what is the purpose of the AI tool, and/or what problem is it trying to solve? Once you come to a view on risk, you can put in place the relevant compliance strategy and appropriate safeguards. Typically, a higher level of compliance will be required where AI tools are involved in decision making or are likely to have an impact on the privacy or status of your workforce. Even greater care must be taken where those tools involve the processing of special category data and/or involve decision making that may give rise to bias and discrimination.
  3. Carry out risk assessments: Consider whether a data protection/algorithmic impact assessment is required. In most cases when deploying AI, one will be, although as mentioned above the level of risk assessment will depend on the purpose of the tool. Any impact assessment should explain how the tool is fair and proportionate (consider whether employees or others would feel uneasy about it); that it is transparent; that a lawful basis is in place; that data subjects can exercise their rights; and that you are accountable. If your organisation is a vendor that supplies AI tools, you may not have ultimate responsibility for risk assessments, but consider carrying one out to assist your customers. This can be a useful part of the sales team’s toolkit!
  4. Human in the loop: De-risk AI by always ensuring there is a trained human in the loop who is able to make meaningful decisions. This is particularly important where the AI tool is being used to help make decisions that are likely to have a legal or other significant impact on your workforce. Remember that in the event of an employee claim, if you cannot explain the decision-making process, you leave yourself exposed.
  5. Robust policies and procedures: When deploying any form of AI, always have policies and procedures in place. Ensure there is a robust governance structure and that every decision about AI rollout is documented. It is also imperative that staff have a clear set of ‘do’s and don’ts’ for using AI tools (e.g. what data can be put into the tool and what can, and should, be done with the output).

If you are considering using AI in your business, or are already doing so, and would like to discuss any issues in more detail or have any questions, please do get in touch with Bryony Long, David Lorimer or your usual Lewis Silkin contact.