
CVE-2025-6638: Hugging Face Transformers DoS Vulnerability

CVE-2025-6638 is a Regular Expression Denial of Service (ReDoS) flaw in the Hugging Face Transformers MarianTokenizer that lets attackers cause excessive CPU consumption. This article covers the technical details, affected versions, and mitigations.

Published: May 4, 2026

CVE-2025-6638 Overview

CVE-2025-6638 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face Transformers library. The flaw resides in the remove_language_code() method of the MarianTokenizer class. Affected version 4.52.4 uses an inefficient regular expression that exhibits catastrophic backtracking when processing crafted input strings containing malformed language code patterns. An attacker who can submit input to a service tokenizing text with MarianTokenizer can trigger excessive CPU consumption, leading to denial of service. The issue has been resolved in Transformers 4.53.0 by removing the regular expression entirely. The vulnerability is tracked under [CWE-1333: Inefficient Regular Expression Complexity].

Critical Impact

Network-reachable inference endpoints using MarianTokenizer can be rendered unavailable by a single crafted input string that exhausts CPU resources.

Affected Products

  • Hugging Face Transformers 4.52.4
  • Python applications importing MarianTokenizer from transformers.models.marian
  • Inference services and pipelines invoking Marian translation models

Discovery Timeline

  • 2025-09-12 - CVE-2025-6638 published to NVD
  • 2025-10-21 - Last updated in NVD database

Technical Details for CVE-2025-6638

Vulnerability Analysis

The defect lives in src/transformers/models/marian/tokenization_marian.py. The MarianTokenizer.remove_language_code() method applied a regular expression to strip language code prefixes such as >>fr<< from input text before tokenization. The pattern's structure permitted catastrophic backtracking: when an attacker supplies a string that nearly matches the language code shape but contains repeating ambiguous characters, the regex engine explores an exponential number of match paths. CPU time grows superlinearly with input length, blocking the worker thread executing tokenization. Because tokenization runs synchronously before model inference in most translation pipelines, a single malicious request can stall the entire serving process.
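The failure mode can be demonstrated with a classic backtracking-prone pattern. Note this is a generic illustration of catastrophic backtracking, not the actual expression from tokenization_marian.py:

```python
import re
import time

# Illustrative only: a textbook backtracking-prone pattern, NOT the exact
# expression used by MarianTokenizer. The nested quantifier lets the engine
# try exponentially many ways to split a run of 'a' characters before it
# can conclude that the overall match fails.
EVIL = re.compile(r"^(a+)+$")

def match_seconds(s: str) -> float:
    """Time a single match attempt against the pathological pattern."""
    start = time.perf_counter()
    EVIL.match(s)  # a trailing non-'a' character forces full backtracking
    return time.perf_counter() - start

# Each additional character roughly doubles the work on a failing input.
for n in (16, 18, 20):
    print(n, round(match_seconds("a" * n + "!"), 4))
```

Benign inputs match instantly; only the near-miss inputs trigger the exponential search, which is why the flaw can hide until an attacker probes for it.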

Root Cause

The root cause is inefficient regular expression complexity [CWE-1333]. The regex used to detect and remove language code markers contained overlapping quantifiers that produced ambiguous matches against partially malformed inputs. The maintainers' fix abandons regex matching altogether and replaces it with deterministic string operations, eliminating the backtracking surface.

Attack Vector

Exploitation requires only the ability to supply input text to a service that calls MarianTokenizer. No authentication or user interaction is needed. Common attack surfaces include public translation APIs, chatbots, document processors, and any HTTP endpoint that forwards user input into a Marian-based pipeline. Confidentiality and integrity are not impacted; the effect is sustained CPU exhaustion on the host running tokenization.

diff
# Patch excerpt: src/transformers/models/marian/tokenization_marian.py
 from shutil import copyfile
 from typing import Any, Optional, Union

-import regex as re
 import sentencepiece

 from ...tokenization_utils import PreTrainedTokenizer
Source: Hugging Face Transformers commit 47c34fb

The patch removes the regex import and rewrites remove_language_code() using non-regex string parsing, eliminating backtracking risk.

Detection Methods for CVE-2025-6638

Indicators of Compromise

  • Sustained 100% CPU utilization on Python worker processes loading transformers.models.marian.tokenization_marian.
  • Request latency spikes or timeouts on translation endpoints correlating with single inbound requests.
  • Application logs showing tokenization calls that never return for specific inputs containing repeated > or < characters.

Detection Strategies

  • Inventory Python environments for transformers==4.52.4 using pip list or SBOM tooling, and flag instances importing MarianTokenizer.
  • Inspect application traces for unusually long execution time inside MarianTokenizer.remove_language_code frames.
  • Apply input length and character-class rate limits at the API gateway and alert when payloads exceed expected language code formats.
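The inventory step can be scripted. A minimal sketch using the standard library's importlib.metadata, flagging the single affected release noted above:

```python
from importlib import metadata

# The affected release per this advisory.
VULNERABLE_VERSIONS = {"4.52.4"}

def transformers_is_vulnerable() -> bool:
    """Return True if the installed transformers release is the affected one."""
    try:
        version = metadata.version("transformers")
    except metadata.PackageNotFoundError:
        return False  # transformers is not installed in this environment
    return version in VULNERABLE_VERSIONS

print("transformers vulnerable:", transformers_is_vulnerable())
```

Running this in each virtual environment or container image complements SBOM tooling, which may miss ad-hoc installs.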

Monitoring Recommendations

  • Track per-request CPU time on inference workers and trigger alerts when tokenization exceeds a defined threshold (for example, 500 ms).
  • Monitor process restarts, OOM events, and worker pool saturation on services hosting Marian models.
  • Forward web application firewall logs to a SIEM such as Singularity Data Lake to correlate anomalous payload patterns with backend resource exhaustion.
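Per-request CPU accounting can be approximated with time.process_time(). A hypothetical wrapper using the 500 ms threshold mentioned above; the tokenize callable and wiring are illustrative, not part of the library:

```python
import logging
import time

# Alert threshold from the recommendation above.
TOKENIZE_CPU_BUDGET_S = 0.5

def timed_tokenize(tokenize, text: str):
    """Call a tokenize function and log when it burns more CPU than budgeted.

    `tokenize` is any callable, e.g. a bound tokenizer method
    (hypothetical wiring; adapt to your serving framework).
    """
    start = time.process_time()
    result = tokenize(text)
    cpu_spent = time.process_time() - start
    if cpu_spent > TOKENIZE_CPU_BUDGET_S:
        logging.warning("tokenization used %.3fs CPU for %d-char input",
                        cpu_spent, len(text))
    return result

# Example with a stand-in tokenizer:
print(timed_tokenize(str.split, "hello world"))
```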

How to Mitigate CVE-2025-6638

Immediate Actions Required

  • Upgrade transformers to version 4.53.0 or later in all production, staging, and development environments.
  • Audit dependency manifests (requirements.txt, pyproject.toml, Pipfile.lock) and rebuild container images that pin vulnerable versions.
  • Restart inference services after upgrade to ensure the patched tokenizer is loaded.

Patch Information

The fix is contained in commit 47c34fba5c303576560cb29767efb452ff12b8be, released as part of Hugging Face Transformers 4.53.0. The maintainers replaced regex-based language code stripping with plain string operations. Reference: Hugging Face Transformers commit 47c34fb and the Huntr bounty disclosure.

Workarounds

  • Enforce a strict maximum input length (for example, 2,048 characters) before passing text to MarianTokenizer.
  • Validate or strip language code prefixes (>>xx<<) at the application layer using a deterministic parser before invoking the tokenizer.
  • Run tokenization in a worker process with a hard CPU time limit (resource.setrlimit(RLIMIT_CPU, ...)) so malicious inputs cannot block the main service.
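The last workaround can be sketched with multiprocessing and resource (Unix-only). The tokenizer call below is a stand-in; swap in the real MarianTokenizer invocation:

```python
import multiprocessing as mp
import resource

CPU_LIMIT_S = 2  # hard per-request CPU budget

def _worker(conn, text):
    # Hard CPU cap: the kernel kills this process if it spins past the
    # limit, so a ReDoS input cannot block the parent service.
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT_S, CPU_LIMIT_S))
    # Stand-in for the real tokenizer call.
    conn.send(text.split())
    conn.close()

def tokenize_sandboxed(text, timeout=CPU_LIMIT_S + 1):
    """Run tokenization in a disposable child process with a CPU limit."""
    parent, child = mp.Pipe()
    proc = mp.Process(target=_worker, args=(child, text))
    proc.start()
    proc.join(timeout)
    if proc.is_alive() or not parent.poll():
        proc.kill()
        raise TimeoutError("tokenization exceeded its CPU budget")
    return parent.recv()

if __name__ == "__main__":
    print(tokenize_sandboxed(">>fr<< bonjour"))
```

Spawning a process per request adds latency, so in practice a pool of pre-forked, limit-capped workers is the usual compromise.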
bash
# Upgrade to the patched release
pip install --upgrade "transformers>=4.53.0"

# Verify installed version
python -c "import transformers; print(transformers.__version__)"

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

Vulnerability Details
  • Type: DoS
  • Vendor/Tech: Hugging Face Transformers
  • Severity: HIGH
  • CVSS Score: 7.5
  • EPSS Probability: 0.03%
  • Known Exploited: No
  • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Impact Assessment
  • Confidentiality: None
  • Integrity: None
  • Availability: High

CWE References
  • CWE-1333

Technical References
  • Huntr Bounty Announcement

Vendor Resources
  • GitHub Commit Update

Related CVEs
  • CVE-2025-6921: Hugging Face Transformers ReDoS Vulnerability
  • CVE-2025-3262: Hugging Face Transformers DoS Vulnerability
  • CVE-2025-2099: Hugging Face Transformers ReDoS Vulnerability
  • CVE-2026-1839: Hugging Face Transformers RCE Vulnerability