Incident Response
My API Key Got Leaked — What Do I Do? 30-Minute Incident Plan
April 22, 2026 · 9 min read · PolyDefender Research Team
A calm, practical response plan for leaked OpenAI, Stripe, Supabase, and other production secrets.
Discovering that an API key has been leaked is one of the most stressful moments in building a product. But most of the damage from a leaked key happens not at the moment of discovery but in the hours that follow, when the response is slow or disorganized. This guide gives you a clear, time-boxed plan to contain the damage and close the vulnerability within 30 minutes.
Why Speed Is Everything
Automated scanners continuously sweep GitHub, npm packages, Pastebin, and public Discord servers for leaked credentials. Honeypot experiments have repeatedly shown that secrets pushed to a public repository are discovered and tested within minutes, sometimes in under a minute. If your key has been exposed for more than a few hours, assume it has already been tested by automated tooling.
This means the moment you detect a leak, the first action is rotation—not investigation. Rotate first, investigate second. An inactive key cannot be abused.
Minutes 0–5: Rotate Immediately
For each provider, go directly to their key management console and invalidate the exposed key. Generate a new key at the same time so you can restore service immediately after rotation.
- OpenAI: platform.openai.com → API keys → Delete the exposed key, create a new one
- Stripe: dashboard.stripe.com → Developers → API keys → Roll the key
- Supabase: supabase.com → your project → Settings → API → Regenerate the anon or service_role key
- AWS: IAM → Users → Security credentials → Deactivate the access key, create a replacement
- GitHub: Settings → Developer settings → Personal access tokens → Delete and regenerate
- Resend / SendGrid / Mailgun: Account → API keys → Delete and regenerate
After rotation, update the environment variables in your deployment platform (Vercel, Railway, Render, etc.) and redeploy. The service will be unavailable for as long as the deploy takes; that is acceptable. An active leaked key is worse than a brief outage.
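As a concrete example, here is what that swap can look like on Vercel with its CLI. This is a minimal sketch, assuming the Vercel CLI is installed and linked to your project; OPENAI_API_KEY and $NEW_OPENAI_KEY are placeholder names for whichever secret you rotated.

```bash
# Replace the burned key and redeploy (adjust names to your project).
vercel env rm OPENAI_API_KEY production -y        # remove the compromised value
printf '%s' "$NEW_OPENAI_KEY" | \
  vercel env add OPENAI_API_KEY production        # add the rotated value from stdin
vercel --prod                                     # redeploy so the new key takes effect
```

Railway and Render expose equivalent flows in their dashboards; the shape is the same everywhere: remove, re-add, redeploy.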
Minutes 5–15: Assess the Damage
Once the key is rotated and service is restored, open the provider's usage logs and look for abuse. You are looking for three things: requests from unexpected IP addresses or geographies, a spike in usage or spend in the period before rotation, and actions that your application would not have taken (deleting data, creating users, sending bulk email).
- OpenAI: platform.openai.com → Usage → Filter by the last 24–48 hours and look for model calls with unusual prompts or high token counts
- Stripe: Check the Events log for charges, refunds, payout changes, or webhook registrations you did not initiate
- Supabase: Check the Logs → API and Logs → Auth sections for unusual query patterns, new user registrations, or bulk reads
- AWS CloudTrail: Check for unusual IAM activity, EC2 instance launches, or S3 bucket access
Document what you find. The log review serves both the immediate incident response and any downstream obligation to notify users whose data was accessed.
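For AWS and Stripe, these checks can be scripted. A hedged sketch, assuming the AWS CLI and Stripe CLI are installed and authenticated, with AKIAXXXXXXXX standing in for the leaked access key ID:

```bash
# List the last 24 hours of API calls made with the leaked AWS access key.
# (GNU date shown; on macOS use: date -u -v-24H +%Y-%m-%dT%H:%M:%SZ)
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAXXXXXXXX \
  --start-time "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --query 'Events[].{time:EventTime,name:EventName,source:EventSource}' \
  --output table

# Pull the most recent Stripe events to spot charges, refunds, or
# webhook changes you did not initiate.
stripe events list --limit 100
```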
Minutes 15–30: Fix the Root Cause
Rotating the key stops the bleeding, but if you do not fix why the key was exposed, the same thing will happen again. The three most common root causes in AI-built apps are: a secret committed to a public or private repository, a secret in a NEXT_PUBLIC_ or VITE_ environment variable that ships to the browser, and a secret logged to a third-party logging service.
- Search your repository history: `git log -S "your_key_prefix"` — even deleted files remain in git history unless the history is rewritten (see the sketch after this list)
- Check all NEXT_PUBLIC_ and VITE_ prefixed variables in your .env files — anything with those prefixes is readable in the browser
- Check your logging configuration for any middleware that logs request headers or environment variables
- If the key was in a public repository at any point, assume it was scanned and treat the incident as confirmed compromise
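A minimal sketch of the first two checks, plus a full-history scanner pass. It assumes a POSIX shell and gitleaks v8 on PATH; the "sk-proj-" prefix is just an OpenAI-style example, so substitute your own key's prefix.

```bash
# 1. Search every branch's full history for the leaked key's prefix.
git log --all --oneline -S "sk-proj-"

# 2. List browser-exposed variables: these prefixes are inlined into
#    client bundles by Next.js and Vite builds.
grep -RHnE '^(NEXT_PUBLIC_|VITE_)' .env* 2>/dev/null

# 3. Run a full-history secret scan (gitleaks v8; subcommand and flag
#    names vary slightly across versions).
gitleaks detect --source . --redact
```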
After the 30 Minutes: Harden Against Recurrence
Once the immediate incident is closed, add preventive controls so this cannot happen again:
- Install a git pre-commit hook or CI secret scanner (Gitleaks, TruffleHog, or GitHub's built-in secret scanning) that blocks commits containing credential patterns (a minimal hook is sketched below)
- Add a PolyDefender scan to your deploy pipeline — it catches exposed keys in compiled JavaScript, source maps, and build artifacts that git scanners miss
- Set spending caps and rate limits on every provider so that even if a key leaks, the attacker cannot generate a large bill or send bulk requests before being throttled
- Audit all team members' access to production secrets and remove anyone who no longer needs it
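For the first item, a minimal pre-commit hook is enough to start. A sketch assuming gitleaks v8 is installed; save it as .git/hooks/pre-commit and make it executable:

```bash
#!/usr/bin/env sh
# Scan only the staged changes before every commit.
# (`protect --staged` is the gitleaks v8 spelling; newer releases
# rename the subcommand, so check `gitleaks --help` for your version.)
if ! gitleaks protect --staged --redact; then
  echo "Commit blocked: possible secret detected. Remove it, then commit again." >&2
  exit 1
fi
```

Hook scripts live per-clone, so a CI-side scan (or a shared pre-commit framework config) is still worth adding so the whole team is covered.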
Need a fast security baseline?
Run a free scan to detect secrets, auth bypass, RLS exposure, injection paths, and dependency risk in minutes.