CVE-2026-34450 Overview
A vulnerability has been identified in the Anthropic Python SDK affecting the local filesystem memory tool. In versions 0.86.0 up to, but not including, 0.87.0, memory files were created with insecure permissions (mode 0o666), making them world-readable on systems with a standard umask and world-writable in environments with permissive umask settings, such as many Docker base images. This insecure-permissions vulnerability (CWE-276) allows local attackers on shared hosts to read persisted agent state and, in containerized deployments, to modify memory files to influence subsequent model behavior.
Critical Impact
Local attackers can read sensitive persisted agent state and potentially modify memory files to manipulate AI model behavior in shared or containerized environments.
Affected Products
- Anthropic Python SDK versions >= 0.86.0 and < 0.87.0
- Both synchronous and asynchronous memory tool implementations
Discovery Timeline
- 2026-03-31 - CVE-2026-34450 published to NVD
- 2026-04-01 - Last updated in NVD database
Technical Details for CVE-2026-34450
Vulnerability Analysis
The vulnerability exists in the local filesystem memory tool implementation within the Anthropic Python SDK. When the SDK creates memory files for persisting agent state, it uses mode 0o666, which grants read and write permissions to all users on the system. This behavior bypasses the protective effects of a properly configured umask on standard systems and becomes particularly dangerous in containerized environments where default umask settings are often more permissive.
The issue affects both the synchronous and asynchronous implementations of the memory tool, meaning any application using either API pattern to persist agent state is vulnerable. In shared hosting environments, this allows any local user to read potentially sensitive agent conversation history, context, and state information. In Docker deployments—where base images frequently ship with permissive umask configurations—attackers can not only read but also write to these memory files, potentially injecting malicious context that could influence subsequent model responses.
Root Cause
The root cause is the use of overly permissive file creation mode (0o666) when writing memory files to the local filesystem. Instead of respecting system umask settings or using restrictive permissions such as 0o600 (owner read/write only), the SDK explicitly sets world-readable and potentially world-writable permissions on sensitive data files.
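The safe pattern is straightforward to sketch. The following is an illustrative example of restrictive file creation, not the SDK's actual code; the file name and payload are hypothetical:

```python
import os
import stat
import tempfile

def write_memory_file_securely(path: str, data: bytes) -> None:
    """Create a memory file readable/writable by the owner only (0o600).

    Passing the mode to os.open caps permissions at creation time, so the
    file is never exposed to other users even for an instant. O_EXCL makes
    the call fail rather than follow a pre-existing (possibly attacker-
    planted) file at the same path.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

# Demo on a scratch directory (POSIX semantics assumed):
tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, "memory.json")
write_memory_file_securely(target, b'{"state": "example"}')
print(oct(stat.S_IMODE(os.stat(target).st_mode)))
```

Because 0o600 grants nothing to group or other, any umask can only tighten it further, never loosen it, which is why it is the recommended mode for sensitive state files.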
Attack Vector
This vulnerability requires local access to the system where the Anthropic Python SDK is running. An attacker with local user privileges on a shared host or within the same container environment can exploit this vulnerability through direct filesystem access. The attack does not require any user interaction and can be performed with low privilege levels. In read-only scenarios, the attacker gains access to persisted agent state which may contain sensitive conversation data. In write-enabled scenarios (permissive umask environments), the attacker can modify memory files to inject malicious context that may influence future model behavior, potentially leading to prompt injection or manipulation of AI agent actions.
The vulnerability mechanism involves the memory tool creating files with explicit 0o666 permissions when persisting agent state to the local filesystem. For detailed technical information, see the GitHub Security Advisory GHSA-q5f5-3gjm-7mfm.
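The umask interaction described above can be made concrete. On POSIX systems the permissions actually applied at file creation are the requested mode ANDed with the complement of the process umask; the helper below is illustrative arithmetic, not SDK code:

```python
def effective_mode(requested: int, umask: int) -> int:
    """Permission bits actually applied at creation: requested & ~umask."""
    return requested & ~umask & 0o777

# Standard host umask 0o022: write bits for group/other are stripped,
# but the file remains world-READABLE.
print(oct(effective_mode(0o666, 0o022)))  # 0o644

# Permissive umask 0o000 (seen in some minimal container images):
# the file ends up world-WRITABLE as well.
print(oct(effective_mode(0o666, 0o000)))  # 0o666

# A restrictive request such as 0o600 stays safe under any umask.
print(oct(effective_mode(0o600, 0o000)))  # 0o600
```

This is why the same bug yields a read-only exposure on typical shared hosts but a read-write exposure in permissive container environments.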
Detection Methods for CVE-2026-34450
Indicators of Compromise
- Memory files in the SDK storage directory with world-readable permissions (-rw-rw-rw- or similar)
- Unexpected access patterns to memory files from non-owner user accounts
- Modified timestamps on memory files that don't correlate with legitimate application activity
- Anomalous agent behavior suggesting external modification of persisted state
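A quick sweep for the first indicator above can be scripted. This is a minimal sketch; the directory it is pointed at would be wherever your application persists agent memory (the demo below uses a temporary directory with one deliberately exposed file):

```python
import os
import stat
import tempfile

def find_exposed_files(root: str):
    """Yield (path, mode) for regular files readable or writable by 'other'."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = stat.S_IMODE(os.stat(path).st_mode)
            if mode & (stat.S_IROTH | stat.S_IWOTH):
                yield path, mode

# Demo: one world-accessible file and one properly restricted file.
scan_root = tempfile.mkdtemp()
open(os.path.join(scan_root, "exposed.json"), "w").close()
os.chmod(os.path.join(scan_root, "exposed.json"), 0o666)
open(os.path.join(scan_root, "private.json"), "w").close()
os.chmod(os.path.join(scan_root, "private.json"), 0o600)

for path, mode in find_exposed_files(scan_root):
    print(path, oct(mode))  # only exposed.json is reported
```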
Detection Strategies
- Monitor file permission audits for files created by applications using the Anthropic Python SDK
- Implement file integrity monitoring on directories containing agent memory files
- Review access logs for unauthorized reads or writes to SDK memory storage locations
- Deploy auditd rules to track access to memory file directories by non-owner processes
Monitoring Recommendations
- Enable filesystem access auditing on directories used by the Anthropic Python SDK for memory storage
- Configure alerts for permission changes or unexpected access to memory files
- Monitor container environments for umask misconfigurations that could exacerbate the vulnerability
- Track SDK version deployments across your infrastructure to identify vulnerable installations
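For the last point, a version check can be automated. The helper below assumes plain X.Y.Z version strings (pre-release suffixes would need extra parsing) and uses the standard-library `importlib.metadata` to inspect the installed package:

```python
from importlib.metadata import PackageNotFoundError
from importlib.metadata import version as pkg_version

def is_vulnerable(version: str) -> bool:
    """True if an anthropic SDK version falls in the affected
    range [0.86.0, 0.87.0). Assumes a plain X.Y.Z version string."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return (0, 86, 0) <= parts < (0, 87, 0)

# Check the locally installed SDK, if any:
try:
    installed = pkg_version("anthropic")
    status = "VULNERABLE" if is_vulnerable(installed) else "not affected"
    print(f"anthropic {installed}: {status}")
except PackageNotFoundError:
    print("anthropic SDK not installed")
```

Running this across hosts (or baking it into a CI check) identifies installations that still need the 0.87.0 upgrade.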
How to Mitigate CVE-2026-34450
Immediate Actions Required
- Upgrade the Anthropic Python SDK to version 0.87.0 or later immediately
- Audit existing memory files and reset permissions to 0o600 (owner read/write only)
- Review containerized deployments for permissive umask configurations
- Assess whether any sensitive agent state may have been exposed or modified
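The permission-reset step can be scripted in Python as well as with `find`/`chmod` (a shell variant appears in the Workarounds section below). This sketch walks a memory directory and tightens every file to owner-only access; the demo directory is illustrative:

```python
import os
import stat
import tempfile

def lock_down(root: str) -> int:
    """Reset every regular file under root to 0o600; return count changed."""
    changed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if stat.S_IMODE(os.stat(path).st_mode) != 0o600:
                os.chmod(path, 0o600)
                changed += 1
    return changed

# Demo on a scratch directory with one over-permissive file:
demo = tempfile.mkdtemp()
memfile = os.path.join(demo, "memory.json")
open(memfile, "w").close()
os.chmod(memfile, 0o666)
print(lock_down(demo))  # 1
print(oct(stat.S_IMODE(os.stat(memfile).st_mode)))  # 0o600
```

Note that this only remediates permissions going forward; it cannot undo any exposure that occurred while the files were world-accessible.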
Patch Information
The vulnerability has been patched in version 0.87.0 of the Anthropic Python SDK. The fix addresses the insecure file permissions by implementing proper file mode settings when creating memory files. For technical details on the fix, see the GitHub Commit. The patched version is available via the GitHub Release v0.87.0.
Workarounds
- Manually correct file permissions on existing memory files using chmod 600
- Configure a restrictive umask (0077) for processes running the SDK
- Isolate SDK deployments in dedicated containers or environments without shared user access
- Implement filesystem access controls or SELinux/AppArmor policies to restrict access to memory directories
# Fix permissions on existing memory files
find /path/to/sdk/memory -type f -exec chmod 600 {} \;
# Set restrictive umask before running SDK applications
umask 0077