Security Threats to AI Systems

According to an eBook published by Utimaco, organizations rank security risks as the leading barrier to effective AI adoption. The value of AI depends on the data it consumes, but collecting that data and building and training models carries significant risks, in addition to threats to intellectual property.

The main threats include:

  • Manipulation of training data to degrade model outputs.
  • Extraction or copying of models, eroding intellectual property rights.
  • Exposure of sensitive data used during training or inference.

Quantum Resilience and Crypto-Agility

Current public-key cryptography is expected to become vulnerable within the next decade as quantum computers mature, and organized groups may already be harvesting encrypted data today in order to decrypt it later. Datasets with long-term sensitivity therefore require protection against future decryption now.

Migrating to quantum-resistant cryptography will affect protocols, key management, interoperability, and performance. The recommended mitigation is 'crypto-agility': the ability to swap cryptographic algorithms without redesigning the underlying systems, for example by combining established algorithms with post-quantum methods in hybrid schemes.
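A minimal sketch of both ideas, assuming a hypothetical algorithm registry and using plain HMAC-based key derivation from the standard library in place of a real KEM (the algorithm names, registry, and `hybrid_derive_key` helper are illustrative, not from the source):

```python
import hashlib
import hmac
import os

# Hypothetical registry mapping algorithm names to key-derivation callables.
# Swapping or adding an entry changes the primitive without touching callers:
# that indirection is the essence of crypto-agility.
KDF_REGISTRY = {
    "sha256": lambda secret, salt: hmac.new(salt, secret, hashlib.sha256).digest(),
    "sha3_256": lambda secret, salt: hmac.new(salt, secret, hashlib.sha3_256).digest(),
}

def hybrid_derive_key(classical_secret: bytes, pq_secret: bytes,
                      salt: bytes, algorithm: str = "sha256") -> bytes:
    """Combine a classical and a post-quantum shared secret into one session key.

    The derived key stays safe as long as EITHER input secret remains unbroken,
    which is the rationale for hybrid schemes during the PQC migration.
    """
    kdf = KDF_REGISTRY[algorithm]  # look up, don't hard-code, the primitive
    return kdf(classical_secret + pq_secret, salt)

# Usage: the placeholder secrets stand in for real ECDH / ML-KEM outputs;
# switching algorithms is a one-argument change, not a redesign.
salt = os.urandom(16)
k1 = hybrid_derive_key(b"ecdh-shared-secret", b"kyber-shared-secret", salt)
k2 = hybrid_derive_key(b"ecdh-shared-secret", b"kyber-shared-secret", salt, "sha3_256")
```

The design choice worth noting is that callers never name a hash function directly; retiring a weakened algorithm means editing the registry, not every call site.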

Secure Hardware and AI Lifecycle

Cryptography alone does not address every risk. Hardware-based roots of trust are recommended to isolate cryptographic keys and sensitive operations, and protection should extend across the entire AI lifecycle, from data ingestion through training, deployment, and inference.

Hardware-based enclaves isolate workloads so that even system administrators cannot access the data inside. Hardware security modules can verify that an enclave is in a trusted state before releasing keys, creating a 'chain of trust' from the hardware up to the application. Hardware-based key management also produces tamper-resistant logs that support compliance with regulations such as the EU AI Act.