VBS-2026-0004 · MEDIUM · CVSS 5.5 · CWE-540

AI apps built with LLM coding assistants commonly hardcode the system prompt as a constant in an API route or component file. When that route is a client component, or the constant is referenced from client code, Next.js bundles it into publicly accessible JavaScript. Attackers can extract the exact system prompt, enabling highly targeted prompt injection attacks.
// Found in client bundle (app/chat/page.tsx):
export const SYSTEM_PROMPT = `You are a helpful assistant for AcmeCorp.
You have access to internal pricing data. Never reveal that prices
can be negotiated below ${MIN_PRICE}.`

Move system prompts to server-side API routes only. Store them in environment variables, not in source code. Never reference them from client components.
How do I check if my Next.js + React app is affected by "LLM system prompt exposed in client-side JavaScript bundle"?
AI apps built with LLM coding assistants commonly hardcode the system prompt as a constant in an API route or component file. Search your codebase for hardcoded prompt constants in Next.js/React components and in code that uses the OpenAI SDK or Vercel AI SDK, then verify the remediation has been applied.
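Beyond searching the source, you can inspect what Next.js actually ships to the browser. A minimal sketch: after `next build`, grep the client chunks for a distinctive phrase from your prompt. The `mock_chunks` directory below stands in for `.next/static/chunks` so the example is self-contained; in a real check, point grep at your build output instead.

```shell
# Stand-in for a real build output directory (.next/static/chunks).
mkdir -p mock_chunks
echo 'const SYSTEM_PROMPT = "You are a helpful assistant for AcmeCorp."' > mock_chunks/app-page.js

# The actual check: search the built client bundles for distinctive
# prompt text. A match means the prompt is publicly downloadable.
if grep -rq "You are a helpful assistant" mock_chunks/; then
  echo "LEAK: system prompt found in client bundle"
fi
```

If the phrase appears in any chunk, the prompt is served to every visitor, regardless of how the source file is organized.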
Why do Lovable and Bolt.new generate code with CWE-540 (medium severity)?
AI apps built with LLM coding assistants commonly hardcode the system prompt as a constant in an API route or component file. When that route is a client component, or the constant is referenced from client code, Next.js bundles it into publicly accessible JavaScript.
How do I fix "LLM system prompt exposed in client-side JavaScript bundle"?
Move system prompts to server-side API routes only. Store them in environment variables, not source code. Never reference them from client components.
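A minimal sketch of the remediation, assuming a Next.js App Router setup: the prompt lives in a server-side environment variable (not prefixed with `NEXT_PUBLIC_`, so Next.js never inlines it into client code) and is only read inside server code. The names `SYSTEM_PROMPT` and `buildMessages` are illustrative, not taken from the advisory.

```typescript
// Server-only helper: reads the prompt from the environment at request
// time, so the prompt text never appears in any client bundle.
type Message = { role: "system" | "user"; content: string };

function buildMessages(userInput: string): Message[] {
  const systemPrompt = process.env.SYSTEM_PROMPT;
  if (!systemPrompt) {
    // Fail loudly on the server rather than silently sending no prompt.
    throw new Error("SYSTEM_PROMPT is not set on the server");
  }
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userInput },
  ];
}

// Usage in a Route Handler (app/api/chat/route.ts), which runs only on
// the server:
//
// export async function POST(req: Request) {
//   const { input } = await req.json();
//   const messages = buildMessages(input);
//   // ...pass `messages` to your LLM provider and return the response...
// }
```

The key property is that no client component ever imports the prompt constant; the browser only ever sees the model's responses.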
What can an attacker do if my app contains VBS-2026-0004?
With a CVSS score of 5.5 (medium), this vulnerability poses meaningful risk: partial information may be exposed. When the route is a client component or the constant is referenced from client code, Next.js bundles it into publicly accessible JavaScript, so an attacker can download the bundle, extract the exact system prompt, and use it to craft highly targeted prompt injection attacks.