
CISA’s 2023-2024 Roadmap for AI

Bola Ogbara
2 min. read

CISA's 2023-2024 Roadmap for AI lays out how the agency will use AI responsibly and protect critical infrastructure from the technology's potential risks and malicious use, with an emphasis on collaboration and continuous evaluation.


Artificial Intelligence (AI) has taken the world by storm. Even before the advent of ChatGPT, simpler forms of AI were already in widespread use, powering facial recognition, digital voice assistants like Siri and Alexa, and most recommendation algorithms (from your Twitter feed to your Amazon home page). Advancements in large language models have dramatically expanded what AI can do, and it now has applications everywhere.

 

But just as AI has lowered the time and specialized know-how needed for many legitimate tasks, it has also streamlined the path for cybercriminals to create malicious tools. In November 2023, the Cybersecurity and Infrastructure Security Agency (CISA) released a roadmap specifically for AI to address these concerns and to set standards for its own use of AI.

 

The 2023-2024 CISA Roadmap for Artificial Intelligence discusses CISA’s relationship with AI, noting that the technology has the potential to help the agency accomplish the goals outlined in its strategic plan, released earlier that year, but could also be used to impede them, depending on whose hands it falls into. In response, the roadmap is organized around five lines of effort:

 

1. “Responsibly use AI to support our mission”:

CISA plans to promote responsible AI use by establishing ethical and safety processes that govern its adoption of AI. This will involve reviewing cases where AI is already in use and assessing the data those systems rely on in order to develop guidelines and privacy controls. The agency also says it will continuously evaluate its AI models to ensure they are secure and integrate well with existing IT security practices, taking care to limit bias in AI systems employed in its cybersecurity mission.

 

2. “Assure AI systems”:

CISA will assess the potential risks that AI-based software poses to critical infrastructure sectors and consider ways to mitigate them. The agency will inform relevant stakeholders about the use of AI, develop guidance that encourages secure AI systems, and create tools for testing them. AI security will also become part of the Secure by Design program to ensure that AI-based software and products are released without avoidable vulnerabilities.

 

3. “Protect Critical Infrastructure from malicious use of AI”: 

Building on the collaboration goals in its strategic plan, CISA will work with the Information Technology Sector Coordinating Council’s AI Working Group and other industry stakeholder partners. The Joint Cyber Defense Collaborative (JCDC) will stand up a dedicated effort, JCDC.AI, to improve collaboration on AI threats, so that industry, federal, and international partners share information with one another and with the broader community. CISA will also release materials on emerging AI risks to critical infrastructure, along with appropriate risk management approaches to counter them.

 

4. “Collaborate with and communicate on key AI efforts with the interagency, international partners and the public”:

CISA will work closely with the Department of Homeland Security (DHS), particularly the DHS AI Task Force, on AI policy issues. Interagency meetings on AI policy will support cooperation across federal departments and with international partners, and CISA will develop policy positions to help shape a consistent national approach to AI use and to ensure that AI policies do not harm critical infrastructure systems.

 

5. “Expand AI expertise in our workforce”: 

Lastly, CISA will recruit AI experts, along with interns, fellows, and staff with AI expertise, to build a strong cybersecurity workforce through pathways like the Cyber Talent Management System (CTMS). The agency will also provide training and education opportunities for its employees so they understand both the policy aspects of AI and the technical capabilities it affords.

 

CISA’s Roadmap for AI emphasizes collaboration across federal agencies as well as continued evaluation of AI uses and risks. With such a robust blueprint, it’s clear that CISA is working hard to use AI wisely and protect critical infrastructure sectors from AI-powered attacks.