
Responsible AI: Why ethics matter in artificial intelligence

Nicola Cain, Handley Gill Limited

Posted: Tue 7th May 2024

While artificial intelligence (AI) is an emerging technology and is not currently subject to specific regulation in the UK, that doesn't mean existing laws and regulations don't apply.

The development and use of AI models must comply with existing laws and regulations on data protection, human rights (including privacy and the right to be free from discrimination) and intellectual property.

The mere fact that an AI model is available to use in the UK doesn't mean that using it is lawful or compliant or that it doesn't expose your organisation to liability.

Many AI developers offer free access to their tools, often subject to terms and conditions that allow them to use prompts and uploaded data to train the AI and that make users wholly liable for any adverse consequences.

Staff can be tempted to submit confidential, commercially sensitive information and/or personal data to AI models that may be operated overseas, which can unwittingly expose their employers to legal liability.

Ensuring that your organisation has considered whether and how staff may use AI, that staff receive appropriate training, and that controls, governance and oversight mechanisms are in place will empower your organisation to capitalise on the opportunities AI offers, while doing so safely, responsibly and ethically.

It is sensible to take a risk-based approach to the use of AI models and identify preferred suppliers where staff are permitted to use AI. For low-risk uses of AI, such as those which don't involve any personal data, IP, confidential information or decision-making, an approach limited to staff training and appropriate controls may be sufficient. For higher-risk uses of AI, a more in-depth approach to compliance and governance will be required.
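To make the risk-based triage above more tangible, here is a minimal illustrative sketch (in Python, and not legal advice) of how use cases might be tiered by the factors mentioned: personal data, intellectual property, confidential information and decision-making. The class, field names and tiering rule are assumptions for illustration, not a prescribed methodology.

```python
# Illustrative sketch only: tier AI use cases into "low" and "higher" risk
# based on the factors discussed above. Field names and the rule itself are
# assumptions, not a formal risk-assessment methodology.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    involves_personal_data: bool
    involves_ip: bool
    involves_confidential_info: bool
    involves_decision_making: bool


def risk_tier(use_case: AIUseCase) -> str:
    """Return "higher" if any sensitive factor applies, otherwise "low"."""
    sensitive = (
        use_case.involves_personal_data
        or use_case.involves_ip
        or use_case.involves_confidential_info
        or use_case.involves_decision_making
    )
    # Higher-risk uses call for in-depth compliance and governance;
    # low-risk uses may only need staff training and appropriate controls.
    return "higher" if sensitive else "low"


# Example: drafting generic marketing copy with no personal or confidential data
print(risk_tier(AIUseCase("Draft marketing copy", False, False, False, False)))  # -> low
```

In practice, the assessment for each use case would be recorded and kept under review as part of your wider governance framework (see the steps below).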

Being transparent about your use of AI and your approach to compliance can help build trust among those affected by it, such as your employees, customers or third parties.

Watch this webinar to learn how to harness the power of AI:


These are steps you can take to ensure that your organisation's approach to AI is safe, responsible, ethical and legally compliant:

  1. Brief the board or other organisational leadership on artificial intelligence (AI) opportunities and risks

  2. Secure a strategic decision as to the organisation’s AI risk appetite

  3. Prepare and publish a policy on the use of AI, detailing what AI tools can be used, for what use cases, upon what conditions and subject to what safeguards

  4. Provide training and wider awareness-raising for staff on AI and on the specific AI tools they can access

  5. Block access via your network to unauthorised AI tools

  6. Identify each intended AI use case and conduct an AI risk assessment in relation to each

  7. Only authorise the use of AI tools appropriate to the risk of each relevant use case

  8. Design and procure AI models with built-in safeguards

  9. Establish a gateway process for AI procurement/deployments

  10. Implement safeguards appropriate to the nature and scale of risk

  11. Consult with affected individuals in advance where possible, or with appropriate representatives or stewards to represent their interests

  12. Establish a beta phase of testing AI tools, running them alongside traditional approaches to identify benefits, as well as any divergence which may reveal potential deficiencies

  13. Be transparent about the use of AI, particularly where individuals are affected

  14. Test and monitor the operation of the AI model to confirm its accuracy, reliability and propriety

  15. Establish a reporting mechanism to enable users to report unexpected or inappropriate outcomes and act upon reports (a minimal illustrative sketch follows this list)

  16. Monitor the way AI is being used in practice and whether it is impacting user behaviour in unexpected ways

  17. Ensure AI is not the sole mechanism for decision-making impacting individuals

  18. Establish a governance and oversight mechanism

  19. Consider the impact of the use of AI by your supply chain, both for your staff and for your organisation’s risk profile

  20. Remember to consider and meet wider legal and regulatory compliance obligations specific to your industry and/or use of AI

  21. Adopt an iterative approach, reviewing and revising your safeguards and governance to reflect emerging risks and regulation

  22. Establish a process for individuals to raise concerns and seek redress
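As a purely hypothetical illustration of step 15, the sketch below shows one way a simple internal log of reported AI outcomes could be captured so that reports can be reviewed and acted upon. The field names, severity labels and file location are assumptions; a real mechanism would need to fit your own tools and governance processes.

```python
# Hypothetical sketch of a simple report log for unexpected or inappropriate
# AI outcomes (step 15). Field names, severity labels and the CSV path are
# assumptions for illustration only.
import csv
from datetime import datetime, timezone

REPORT_LOG = "ai_outcome_reports.csv"  # assumed location
FIELDS = ["timestamp", "reporter", "ai_tool", "use_case", "description", "severity"]


def report_outcome(reporter: str, ai_tool: str, use_case: str,
                   description: str, severity: str = "medium") -> None:
    """Append a user report to the log so it can be reviewed and acted upon."""
    with open(REPORT_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new or empty log file: write the header row first
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reporter": reporter,
            "ai_tool": ai_tool,
            "use_case": use_case,
            "description": description,
            "severity": severity,
        })


# Example report (hypothetical)
report_outcome("j.smith", "ChatGPT", "customer email drafting",
               "Response appeared to include another customer's details", "high")
```

Whatever form the mechanism takes, the essential point is that reports are logged, reviewed and acted upon, and feed back into the monitoring and governance steps above.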


Nicola Cain, Handley Gill Limited
At Handley Gill, our experienced, legally qualified consultants offer pragmatic and robust data protection, privacy and wider legal advice, compliance and assurance services to our clients, which range from micro-entities to SMEs, multi-national corporations and public bodies in industries spanning marketing, regulated services, recruitment, tech, content providers, political parties and lobbying groups, charities, law enforcement, sport and fitness, and healthcare.

Our services include:

· Establishing and implementing data protection compliance frameworks;
· Conducting data mapping exercises;
· Advising on the lawful basis for personal data processing;
· Advising on the need for, and providing, outsourced data protection officer (DPO) services;
· Conducting data protection impact assessments (DPIAs), advising on high-risk processing and prior consultation obligations;
· Conducting legitimate interests assessments (LIAs);
· Drafting privacy, data protection and cookie policies and notices;
· Drafting data handling and management policies and standards;
· Drafting, advising on and negotiating data processing agreements;
· Drafting, advising on and negotiating data sharing agreements;
· Advising on compliant marketing practices and campaigns;
· Advising on and conducting vendor and supply chain risk assessments;
· Conducting international data transfer risk assessments (TRAs);
· Drafting, advising on and negotiating international data transfer agreements and other safeguards;
· Advising on and preparing responses to data subject rights requests, including data subject access requests (DSARs);
· Preparing and rehearsing data breach and cyber incident response preparedness plans;
· Advising on data breach notification obligations;
· Designing and delivering standard and bespoke data protection training;
· Advising on the application of the Age Appropriate Design Code (Children’s Code);
· Providing independent data stewardship representation to support consultation obligations;
· Advising on the ethical design and implementation of machine learning and Artificial Intelligence (AI);
· Conducting data protection audits;
· Advising and representing in regulatory and enforcement action brought by the Information Commissioner (ICO) and other regulators;
· Advising and representing in appeals to the First-Tier Tribunal (Information Rights);
· Advising and supporting preparations for the implementation of the Online Safety Bill.