Most security breaches don't happen because attackers are brilliant. They happen because developers made well-known, preventable mistakes under deadline pressure, with incomplete context, or simply without anyone telling them better.
This article covers the ten most common security mistakes developers make — not to shame anyone, but to build the kind of awareness that prevents the next breach. Each section includes what goes wrong, why it's dangerous, and how to fix it.
1. Exposing API Keys and Secrets in Code
What happens
A developer commits a .env file to a public GitHub repository. Or hardcodes a Stripe secret key directly in the frontend JavaScript. Or pushes AWS credentials into a Docker image. These are not rare edge cases — they happen constantly, and automated bots scan GitHub 24/7 looking for exactly these patterns.
// ❌ Never do this
const stripe = require('stripe')('sk_live_4eC39HqLyjWDarjtT1zdp7dc');
const response = await fetch(`https://api.openai.com/v1/chat/completions`, {
  headers: { Authorization: `Bearer sk-proj-...` } // hardcoded secret
});
Why it's dangerous
Once a secret is in a public repository — even for 30 seconds before you delete it — it should be considered compromised. Bots index commits faster than humans can react. Attackers with your AWS keys can spin up thousands of GPU instances in minutes. Your bill will be enormous; the damage may be irreversible.
How to fix it
Store secrets in environment variables, never in source code
Use .gitignore to exclude .env files from day one
Use secret scanning tools: GitHub Secret Scanning, GitGuardian, or truffleHog
Rotate any key that has ever been committed, even briefly
For production, use a secrets manager: AWS Secrets Manager, HashiCorp Vault, or Doppler
# .gitignore — add these from the very first commit
.env
.env.local
.env.production
*.pem
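Loading the key from the environment is then a one-line change at startup; a minimal sketch (STRIPE_SECRET_KEY is an illustrative name, match whatever your deployment actually sets):

```python
import os

# ✅ Read the secret from the environment, never from source code.
# STRIPE_SECRET_KEY is an illustrative name; match your deployment config.
stripe_key = os.environ.get("STRIPE_SECRET_KEY", "")

if not stripe_key:
    # Fail fast at startup instead of crashing on the first payment call
    print("STRIPE_SECRET_KEY is not set; refusing to start")
```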
2. Storing Passwords Without Proper Hashing
What happens
A developer stores user passwords as plain text in the database. Or uses MD5/SHA-1 — which are not password hashing algorithms, despite what Stack Overflow answers from 2009 suggest. When the database is breached (and for any sufficiently successful product, it eventually will be), every user's password is immediately compromised.
# ❌ Plain text — catastrophic
user.password = request.form['password']
# ❌ SHA-256 — still wrong for passwords (fast = bad for hashing passwords)
import hashlib
user.password = hashlib.sha256(password.encode()).hexdigest()
Why it's dangerous
Passwords stored without proper hashing are recovered instantly via precomputed rainbow tables. Even a salted fast hash like SHA-256 can be brute-forced at billions of guesses per second on modern GPUs. The goal isn't just "not plain text" — it's making brute-force attacks computationally infeasible.
How to fix it
Use a purpose-built, slow password hashing algorithm. The three acceptable choices today are:

| Algorithm | When to use |
|-----------|-------------|
| bcrypt | Most languages, widely supported, battle-tested |
| Argon2id | Preferred for new systems; winner of the Password Hashing Competition |
| scrypt | Memory-hard alternative where Argon2 support is unavailable |
Never implement your own hashing scheme. The library authors have spent years thinking about this; you haven't.
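As a dependency-free illustration, Python's standard library exposes scrypt, one of the acceptable memory-hard KDFs; a sketch with a per-user random salt (the cost parameters are commonly cited starting points, tune them for your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash with scrypt, a memory-hard KDF; returns (salt, digest)."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)  # commonly cited cost parameters
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

In a real codebase, prefer a maintained wrapper (the bcrypt package, argon2-cffi, or passlib) over hand-rolled parameter choices.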
3. Not Enforcing HTTPS
What happens
The API runs on HTTP. The login form submits credentials over an unencrypted connection. The production app uses HTTPS, but the staging environment doesn't — and someone tests with production credentials on staging. Mixed content warnings are silently ignored.
Why it's dangerous
On any network the user doesn't fully control — a coffee shop Wi-Fi, a corporate proxy, a compromised router — HTTP traffic is trivially intercepted and modified. Credentials, session tokens, and sensitive data travel in plain text. Even worse: an attacker performing a man-in-the-middle attack can inject malicious scripts into your HTML responses before they reach the user.
How to fix it
Obtain a TLS certificate — Let's Encrypt provides them for free
Redirect all HTTP traffic to HTTPS at the server or load balancer level
Set the Strict-Transport-Security (HSTS) header
Mark session cookies as Secure
# nginx — force HTTPS redirect
server {
    listen 80;
    return 301 https://$host$request_uri;
}
# HSTS header — tells browsers to always use HTTPS
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
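The Secure cookie flag from the list above can be set in any web framework; a dependency-free sketch using Python's http.cookies to build the header:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-random-token"
cookie["session"]["secure"] = True      # only ever sent over HTTPS
cookie["session"]["httponly"] = True    # hidden from JavaScript
cookie["session"]["samesite"] = "Lax"   # basic CSRF hardening

set_cookie_header = cookie.output()  # a ready-to-send Set-Cookie line
```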
4. Trusting Client-Side Validation Alone
What happens
The frontend form validates that an email address looks correct, that a price is positive, that a quantity doesn't exceed stock. The backend receives the data and trusts it completely — because "the frontend already checked." An attacker bypasses the UI entirely and sends raw HTTP requests with arbitrary payloads.
// ❌ Backend that trusts frontend validation
app.post('/purchase', (req, res) => {
  const { itemId, quantity, price } = req.body;
  // Assumes price came from the UI and is correct
  chargeUser(req.user, price * quantity);
});
Why it's dangerous
Client-side validation exists for user experience. Backend validation exists for security. They are not interchangeable. Any data that travels over the network can be modified — by the user, by a proxy, by automated tooling. A single missing backend check can allow an attacker to purchase items for $0.00, submit negative quantities, or inject malicious strings.
How to fix it
Treat every incoming request as if it came from a hostile actor. Validate every field server-side: type, range, format, length, and business logic constraints.
# ✅ FastAPI with Pydantic — server-side validation
from fastapi import Depends, FastAPI
from pydantic import BaseModel, conint

app = FastAPI()

class PurchaseRequest(BaseModel):
    item_id: int
    quantity: conint(gt=0, le=100)  # must be 1–100

@app.post("/purchase")
def purchase(req: PurchaseRequest, user: User = Depends(get_current_user)):
    item = get_item_or_404(req.item_id)
    price = item.price  # price comes from the database, never from the client
    charge_user(user, price * req.quantity)
5. SQL Injection via Unsanitized Input
What happens
User input is concatenated directly into SQL queries. This is one of the oldest vulnerabilities in web development — it has been on the OWASP Top 10 for over two decades — and it still appears in production code every year.
# ❌ SQL injection waiting to happen
username = request.args.get('username')
query = f"SELECT * FROM users WHERE username = '{username}'"
db.execute(query)
# Attacker sends: username = ' OR '1'='1
# Resulting query: SELECT * FROM users WHERE username = '' OR '1'='1'
# Returns all users in the database
Why it's dangerous
SQL injection can allow an attacker to bypass authentication, read arbitrary data from the database, modify or delete records, and in some configurations, execute operating system commands. A single vulnerable endpoint can expose your entire data layer.
How to fix it
Always use parameterized queries or an ORM. Never concatenate user input into query strings.
# ✅ Parameterized query
cursor.execute("SELECT * FROM users WHERE username = %s", (username,))
# ✅ SQLAlchemy ORM
user = db.query(User).filter(User.username == username).first()
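The difference is easy to verify locally; a self-contained sketch with the stdlib sqlite3 driver, where the classic payload is bound as a plain string and matches nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# The payload from above arrives as user input
payload = "' OR '1'='1"

# ✅ Bound as a parameter, it is just an odd username, so no rows match
rows = conn.execute("SELECT * FROM users WHERE username = ?", (payload,)).fetchall()
```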
6. Insecure File Uploads
What happens
An application allows users to upload files without validating type, content, or destination. An attacker uploads a PHP file disguised as a profile picture. The server executes it. Alternatively, a user uploads a 10GB file and exhausts disk space, or uploads an SVG containing embedded JavaScript that executes in other users' browsers.
// ❌ Accepting uploads without any validation
app.post('/upload', upload.single('file'), (req, res) => {
  // Trusts the filename and MIME type from the client
  fs.renameSync(req.file.path, `./uploads/${req.file.originalname}`);
  res.json({ url: `/uploads/${req.file.originalname}` });
});
Why it's dangerous
Unrestricted file upload is a direct path to Remote Code Execution (RCE) — the worst possible outcome in application security. Even without RCE, attackers can use uploads for stored XSS, path traversal, or denial-of-service attacks.
How to fix it
Validate file type by inspecting magic bytes (file content), not just the extension or MIME type header
Generate a random filename; never trust the client-provided name
Store uploaded files outside the web root, or better yet, in object storage (S3, GCS)
Set strict file size limits
Serve user-uploaded files with Content-Disposition: attachment to prevent browser execution
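The first two items can be combined into one gatekeeper function; a sketch that recognizes a few common image signatures (the allow-list here is illustrative, extend it to the types you actually accept):

```python
import secrets

# Magic-byte signatures for the image types we accept; anything else is rejected
ALLOWED_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": ".png",
    b"\xff\xd8\xff": ".jpg",
    b"GIF87a": ".gif",
    b"GIF89a": ".gif",
}

def safe_upload_name(data: bytes):
    """Return a random server-chosen filename, or None to reject the upload."""
    for magic, ext in ALLOWED_SIGNATURES.items():
        if data.startswith(magic):
            return secrets.token_hex(16) + ext  # never the client's filename
    return None
```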
7. Broken Authentication
What happens
Session tokens are short and predictable. JWTs use the none algorithm. Tokens never expire. Session IDs aren't regenerated after login. Password reset tokens are valid indefinitely. These mistakes are individually subtle but collectively devastating.
// ❌ Accepting "none" algorithm in JWT — a famous vulnerability
const decoded = jwt.verify(token, secret, {
  algorithms: ['HS256', 'none'] // 'none' allows unsigned tokens
});
Why it's dangerous
Broken authentication lets attackers impersonate legitimate users — including administrators. Session fixation attacks, token theft, and credential stuffing all exploit weaknesses in how applications issue and validate identity tokens.
How to fix it
Use a well-maintained authentication library rather than rolling your own
Issue cryptographically random session tokens (minimum 128 bits of entropy)
Set session expiry; implement sliding windows for active sessions
Regenerate session IDs after login and privilege escalation
Explicitly whitelist JWT algorithms — never allow none
Implement rate limiting on authentication endpoints
// ✅ Explicit algorithm whitelist for JWT
const decoded = jwt.verify(token, secret, {
  algorithms: ['HS256'] // only allow what you intend
});
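On the token-entropy point, a language's CSPRNG module is the right tool; in Python, 32 bytes from secrets comfortably clears the 128-bit floor:

```python
import secrets

# ✅ 32 random bytes = 256 bits of entropy, well above the 128-bit minimum
session_token = secrets.token_urlsafe(32)
```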
8. Cross-Site Scripting (XSS)
What happens
User-supplied content is rendered in HTML without sanitization. A malicious user posts a comment containing a <script> tag that sends document.cookie to the attacker's server. Everyone who views that comment has their session cookie stolen. This is stored XSS — but reflected and DOM-based XSS variants are equally common.
<!-- ❌ Dangerous — renders arbitrary HTML from user input -->
<div dangerouslySetInnerHTML={{ __html: userComment }} />
Why it's dangerous
XSS allows an attacker to execute arbitrary JavaScript in the context of your application, in your users' browsers. This means stealing session tokens, performing actions on behalf of the user, redirecting to phishing pages, or logging keystrokes — all without any visible indication to the user.
How to fix it
Use a framework that escapes output by default (React, Vue, and Angular all do this — unless you explicitly bypass them)
Never use dangerouslySetInnerHTML, innerHTML, eval(), or document.write() with untrusted content
If you must render rich user content, sanitize with a trusted library like DOMPurify
Implement a strict Content Security Policy (CSP) header
// ✅ Sanitize before rendering rich HTML
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
return <div dangerouslySetInnerHTML={{ __html: clean }} />;
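When you don't need rich HTML at all, escaping on output is simpler still; the server-side equivalent of framework auto-escaping, using Python's stdlib:

```python
import html

comment = "<script>steal(document.cookie)</script>"
safe = html.escape(comment)  # entities instead of executable tags
```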
9. Verbose Error Messages in Production
What happens
A request hits an unhandled exception and the server returns a full stack trace, database schema details, or internal file paths. Sometimes the error includes the SQL query that failed — including the table name, column names, and query structure. This information is a gift to an attacker performing reconnaissance.
// ❌ What a verbose production error looks like
{
  "error": "PG::UndefinedColumn: ERROR: column users.password_hash does not exist",
  "query": "SELECT id, email, password_hash FROM users WHERE email = $1",
  "backtrace": [
    "app/models/user.rb:42:in `authenticate'",
    "app/controllers/sessions_controller.rb:17:in `create'"
  ]
}
Why it's dangerous
Stack traces reveal your technology stack, file structure, dependency versions, and internal logic. This dramatically reduces the effort required to find and exploit other vulnerabilities. Information leakage is often the first step in a targeted attack chain.
How to fix it
Return generic error messages to clients in production
Use different error handling for development and production environments
Never expose internal identifiers, paths, or query details in API responses
// ✅ Generic client error, detailed server log
app.use((err, req, res, next) => {
  logger.error({ err, requestId: req.id }); // detailed log — server only
  res.status(500).json({ error: 'Something went wrong', requestId: req.id });
});
10. Missing or Misconfigured Security Headers
What happens
The application works perfectly. It's on HTTPS. Passwords are hashed. The SQL queries are parameterized. But the HTTP response headers are missing a dozen security directives that browsers rely on to protect users. No Content-Security-Policy. No X-Frame-Options. No Referrer-Policy. The application is technically functional but defensively naked.
Why it's dangerous
Security headers are the browser-side line of defense. Without them:
Clickjacking attacks can embed your app in an invisible iframe and trick users into clicking things
MIME sniffing attacks can cause browsers to execute files as scripts
Cross-origin data leaks can expose sensitive information to third-party scripts
Referrer leakage can expose internal URLs and session tokens to third parties
How to fix it
Add these headers to every response. Use the Helmet middleware in Node.js or equivalent in your framework:
// ✅ Node.js / Express — using Helmet
const helmet = require('helmet');
app.use(helmet()); // sets sensible defaults for all major security headers
Use securityheaders.com to audit your current headers against best practices.
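In frameworks without a Helmet equivalent, the same defaults can be attached by hand; a starting-point sketch (the values are common recommendations, and the CSP in particular must be tuned per application):

```python
# Baseline response headers; tighten Content-Security-Policy for your app
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",       # disable MIME sniffing
    "X-Frame-Options": "DENY",                 # block clickjacking via iframes
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Content-Security-Policy": "default-src 'self'",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge the baseline security headers into an outgoing response."""
    response_headers.update(SECURITY_HEADERS)
    return response_headers
```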
Quick Reference: The Full Checklist
| # | Mistake | Fix |
|---|---------|-----|
| 1 | API keys in code | Environment variables + secrets manager |
| 2 | Passwords without hashing | bcrypt or Argon2id |
| 3 | No HTTPS | Let's Encrypt + HSTS header |
| 4 | Client-only validation | Always validate server-side |
| 5 | SQL injection | Parameterized queries / ORM |
| 6 | Insecure file uploads | Validate magic bytes, randomize name, use object storage |
| 7 | Broken authentication | Crypto-random tokens, explicit JWT algorithms |
| 8 | XSS | Framework escaping + DOMPurify + CSP |
| 9 | Verbose error messages | Generic client errors, detailed server logs |
| 10 | Missing security headers | Helmet (Node) or framework equivalent |
Closing Thoughts
Security isn't glamorous work. It doesn't ship features. It doesn't impress anyone in a demo. But the absence of it — discovered after a breach — is one of the most devastating things that can happen to a product and the people who trust it with their data.
The good news: most of these mistakes are easy to fix once you know about them. The hard part isn't the implementation. It's building the habit of asking "what could go wrong here?" before you merge.
Start with this list. Add automated checks to your CI pipeline. Run tools like OWASP ZAP, Snyk, or Semgrep on your codebase. Make security review a normal part of code review, not an afterthought.
The attacker only needs to find one mistake. You need to avoid all of them.