
US and UK Collaboration: Guidelines for Secure AI System Development

Bola Ogbara
2 min. read

As Artificial Intelligence (AI) becomes a more integrated part of our world, it is increasingly clear that guidance on how to use it safely and ethically is needed. The Guidelines for Secure AI System Development are a result of international collaboration to address these concerns.

US and UK Collaboration

The Cybersecurity and Infrastructure Security Agency (CISA) released a roadmap for AI in November 2023 outlining its plans for AI, including its use within the agency to fight cybercrime and the steps it would take to counter AI-powered cyber threats. Later that month, CISA and the U.S. Department of Homeland Security (DHS) collaborated with international partners at the UK’s National Cyber Security Centre (NCSC) to jointly release the Guidelines for Secure AI System Development.


The document centers on making sure that every stage of an AI system’s lifecycle (design, development, deployment, and finally, operation and maintenance) is secure. Here’s a brief overview of the guidance in each section:


  1. Secure Design: Raise awareness of threats and vulnerabilities, prioritize cybersecurity alongside product performance, and carefully weigh AI model options
  2. Secure Development: Build with security in mind, use secure tools and technologies, track and protect AI-related assets, document models, datasets, and prompts, and manage the technical debt you take on
  3. Secure Deployment: Deploy into secure environments, implement robust access controls and cybersecurity best practices, protect access credentials, test models before release, prepare incident response procedures, and give users a comprehensive guide to the model
  4. Secure Operation and Maintenance: Continuously monitor your system’s behaviors and inputs (a minimal monitoring sketch follows this list), include automatic updates in all your products, and collect and share lessons learned
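
To make the monitoring guidance a bit more concrete, here is a minimal, purely illustrative sketch of logging a model’s inputs and outputs so unusual activity can be reviewed. The guidelines themselves do not prescribe any particular implementation; the names model_fn, monitored_call, and MAX_PROMPT_CHARS below are hypothetical and stand in for whatever your system actually uses.

```python
import hashlib
import logging
import time

# Illustrative sketch of the "Secure Operation and Maintenance" guidance:
# log every prompt and response so unusual inputs can be reviewed later.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("ai-monitor")

MAX_PROMPT_CHARS = 2000  # assumed limit; used only to flag unusually long inputs


def model_fn(prompt: str) -> str:
    """Placeholder for an actual model call."""
    return f"echo: {prompt}"


def monitored_call(prompt: str) -> str:
    """Wrap a model call with basic input/output logging and a simple anomaly flag."""
    prompt_hash = hashlib.sha256(prompt.encode()).hexdigest()[:12]
    if len(prompt) > MAX_PROMPT_CHARS:
        logger.warning("prompt %s exceeds %d chars (%d)",
                       prompt_hash, MAX_PROMPT_CHARS, len(prompt))
    start = time.monotonic()
    response = model_fn(prompt)
    logger.info("prompt %s handled in %.3fs, response length %d",
                prompt_hash, time.monotonic() - start, len(response))
    return response


if __name__ == "__main__":
    monitored_call("What is the capital of France?")
```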


The document aligns well with CISA's Secure by Design initiative, which aims to ensure that software manufacturers release digital products with minimal cybersecurity risk, while incorporating the agency's goals for improved global collaboration. Jen Easterly, the director of CISA, said in a press release about the guidelines that “domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution.”


“International unity” has certainly become a theme in UK and US relations. CISA and the NCSC teamed up to host a strategic dialogue in September 2023 that included seven other countries (Australia, Canada, Estonia, France, Japan, New Zealand, and Norway) to find ways of improving cybersecurity in the face of cyberattacks from non-democratic countries. The US and UK are also collaborating to disrupt Russian cybercrime and ransomware, and have sanctioned members of Trickbot, a Russia-based cybercrime group that has targeted health care providers in the US.


The release of the Guidelines for Secure AI System Development, a collaborative effort between CISA, DHS, and NCSC, marks a significant step towards a safer and more ethical future for AI. By working together, nations can not only ensure the safe and ethical development of AI, but also address other cybersecurity threats and bring about change in an impactful way.