Introduction
GitHub's agentic security principles were developed to ensure the security of our AI agents, minimizing the risk of security breaches.
Section with subheadings
Security risks
Data exfiltration
The agent can transmit sensitive data to unauthorized destinations.
Impersonation and action attribution
The agent may not be clear who was responsible for the action.
Prompt injection
Prompt injection can allow malicious users to manipulate the agent.
Rules for agentic products
Ensuring all context is visible
Hiding context can allow malicious users to hide directives that maintainers may not see.
Firewalling the agent
The agent must be firewalled to limit its access to external risks.
Limiting access to sensitive information
Only give the agent access to what is necessary for it to function.
Preventing irreversible state changes
The agent must be designed so that it cannot initiate irreversible state changes without human intervention.
Consistently attributing actions to both initiator and agent
Any agentic interaction initiated by a user is clearly attributed to the user, and any action taken by the agent is clearly attributed to the agent.
Only gathering context from authorized users
The agent must gather context only from authorized users.
Practical implications
GitHub's agentic security principles are designed to be applicable to all AI agents, from code generation systems to chat functionality.
Conclusion
GitHub's agentic security principles were developed to ensure the security of our AI agents, minimizing the risk of security breaches.
๐ฌ Commenti (0)
๐ Accedi o registrati per commentare gli articoli.
Nessun commento ancora. Sii il primo a commentare!