By Dr. Darren Death, Vice President of Information Security, Chief Information Security Officer, ASRC Federal
AI security is a hot topic in today’s cybersecurity landscape due to the increasing integration of AI systems in essential areas like healthcare, finance, transportation, and national security. My company advocates for the responsible design and implementation of AI by taking an ethical, human-on-the-loop approach to AI operations.
While offering immense benefits, these systems also present new vulnerabilities and threats, including data breaches, manipulation of algorithms, and misuse of AI technologies. Ensuring AI security is crucial for maintaining trust in AI systems, protecting sensitive data, and safeguarding against malicious activities. This involves addressing unique challenges such as protecting AI models, building robustness against adversarial attacks, and ensuring the ethical use of AI. The overarching goal is to create AI systems that are not only intelligent and efficient but also resilient and trustworthy.
CISA’s Guidelines for Secure AI System Development provide a comprehensive framework for building AI systems securely, covering how to mitigate AI-specific security risks, integrate security from the start, and protect AI models and infrastructure. The guidelines advocate ‘secure by design’ principles and aim to help organizations understand and mitigate AI-specific threats, maintain supply chain security, and promote responsible AI deployment with continuous monitoring.
The guidelines’ key themes center on the secure design, development, deployment, and operation of AI systems:
- Secure Design: CISA emphasizes integrating security at the foundational level of AI systems and advocates threat modeling to identify and assess security risks specific to AI, including evaluating how AI systems might be exploited or manipulated (a minimal threat-modeling sketch appears after this list).
Additionally, the guidelines call for designing AI models and systems with inherent resilience to a variety of security threats, especially those unique to AI technologies. That preparedness begins with educating staff about AI-related threats and risks, building the level of awareness across the team needed to support a secure design. This approach embeds security as a core aspect of the AI system’s architecture and functionality, leading to a more secure and reliable deployment.
- Secure Development: For secure development, CISA emphasizes supply chain security, documentation, and asset management. This involves ensuring the security of every component and service in the AI system’s supply chain; a sketch of one such control, artifact integrity checking, also appears after this list. Comprehensive documentation, particularly of data, models, and prompts, is crucial for maintaining clarity, traceability, transparency, and accountability throughout development.
Asset management, covering the digital and physical assets that make up AI systems, is key to maintaining overall system security. The guidelines also address technical debt, stressing that any technical compromises made during development must be managed to avoid future security and operational challenges.
- Secure Deployment: To deploy AI systems securely, technologists must provide robust infrastructure and continuously protect both the AI models and the infrastructure and services that support them. The guidelines stress developing incident management processes to handle potential security issues effectively.
Releasing AI responsibly is also emphasized, with deployment practices that include security testing to confirm the environment’s security controls are operating as expected. The guidelines further highlight designing systems that encourage users to adopt secure practices by shipping options in their secure configuration by default (see the configuration sketch after this list), protecting both the deployed AI system and the user experience.
- Secure Operation: Lastly, CISA emphasizes the ongoing maintenance and operation of AI systems. This involves continuous monitoring to detect and address new threats, vulnerabilities, and irregularities in the system.
Continuous monitoring activities include regular software updates to maintain the system’s security posture against evolving threats. Monitoring the system’s inputs is equally critical, since it helps identify and mitigate malicious or anomalous input data that could compromise the system; a minimal input-monitoring sketch appears below. This comprehensive approach ensures the long-term security and reliability of AI systems.
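To make the threat-modeling theme concrete, here is a minimal sketch of how a team might enumerate and rank AI-specific threat scenarios before design decisions are locked in. The components, scenarios, and scores are hypothetical illustrations, not values prescribed by CISA.

```python
# Illustrative threat-modeling sketch for an AI system. All component
# names, scenarios, and scores are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Threat:
    component: str   # part of the AI system under analysis
    scenario: str    # how the system might be exploited or manipulated
    likelihood: int  # 1 (rare) .. 5 (expected)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def risk(self) -> int:
        # Simple likelihood x impact score; mature programs often layer
        # richer frameworks (e.g., STRIDE, MITRE ATLAS) on top.
        return self.likelihood * self.impact

threats = [
    Threat("training pipeline", "data poisoning via untrusted sources", 3, 5),
    Threat("model API", "model extraction through repeated queries", 4, 3),
    Threat("LLM front end", "prompt injection bypassing guardrails", 4, 4),
]

# Review the highest-risk scenarios first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"[risk={t.risk:2d}] {t.component}: {t.scenario}")
```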
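For the supply chain theme, one simple but effective control is refusing to load a model artifact whose cryptographic digest does not match a manifest pinned when the artifact was first vetted. The manifest layout, file name, and digest below are placeholders, not a format from the guidelines.

```python
# Minimal artifact integrity check, assuming a hand-maintained manifest
# of SHA-256 digests. The file name and digest are placeholders.
import hashlib
from pathlib import Path

MANIFEST = {
    "models/classifier-v2.onnx":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(path: str) -> None:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != MANIFEST.get(path):
        # Refuse to load anything that does not match the manifest.
        raise RuntimeError(f"integrity check failed for {path}")

# verify_artifact("models/classifier-v2.onnx")  # run before loading the model
```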
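For secure deployment, ‘secure by default’ can be as simple as a configuration object whose no-argument construction yields the safest settings, so that relaxing security requires a deliberate, visible choice. The field names here are illustrative assumptions, not settings named in the guidelines.

```python
# Sketch of a secure-by-default service configuration: the defaults are
# the safest settings, and any relaxation must be explicit in code.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceConfig:
    require_auth: bool = True           # authentication on by default
    log_prompts: bool = False           # avoid retaining sensitive inputs
    allow_plugins: bool = False         # riskier integrations are opt-in
    max_tokens_per_request: int = 1024  # conservative resource limit

default_cfg = ServiceConfig()                    # secure out of the box
relaxed_cfg = ServiceConfig(allow_plugins=True)  # deviation is deliberate and visible
```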
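Finally, for secure operation, input monitoring can start with something as basic as flagging requests whose size deviates sharply from what the system normally sees. The statistic and thresholds below are illustrative assumptions; a production system would watch far richer signals.

```python
# Sketch of lightweight input monitoring: flag payloads whose length is a
# statistical outlier relative to recent traffic. Thresholds are examples.
import statistics

class InputMonitor:
    def __init__(self, window: int = 1000, z_threshold: float = 4.0):
        self.lengths: list[int] = []
        self.window = window
        self.z_threshold = z_threshold

    def check(self, payload: str) -> bool:
        """Return True if the input looks anomalous and deserves review."""
        n = len(payload)
        anomalous = False
        if len(self.lengths) >= 30:  # build a baseline before judging
            mean = statistics.fmean(self.lengths)
            spread = statistics.pstdev(self.lengths) or 1.0
            anomalous = abs(n - mean) / spread > self.z_threshold
        self.lengths.append(n)
        self.lengths = self.lengths[-self.window:]  # keep a rolling window
        return anomalous
```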
In my next article, I’ll address how agencies can take heed of these guidelines and implement best practices for securing AI systems.