Kuboid Secure Layer
February 19, 2026 · Vinay Kumar · Secure Coding

Why Developers Write Insecure Code — And How to Fix It


TL;DR: The first vulnerability I found during a pen test was code I had written myself. Same pattern. Same shortcut. Same assumption made at 2am trying to meet a deadline. I just never imagined someone would attack it. Insecure code is almost never the result of carelessness or incompetence. It's the result of a system — deadlines, missing training, unclear requirements — that quietly prioritises shipping over security. This post breaks down why it happens, what the most common mistakes actually look like, and what engineering teams and business leaders can do about it before those vulnerabilities become headlines.


The Vulnerability I Wrote Myself

Early in my career, I was brought in to assess a web application for a mid-sized SaaS company. Standard scope — authentication flows, API endpoints, data handling. Within the first hour, I found a classic insecure direct object reference. A user could manipulate a simple ID parameter in the URL and pull records belonging to any other user in the system. No authentication check. No ownership validation. Just raw database access, dressed up with a clean frontend.

I went to flag it in my report and stopped.

I had written something almost identical six months earlier. Different project, same logic. I was under pressure to ship a feature before a product demo. The validation logic was on my list — I just hadn't gotten to it yet. I assumed we'd harden it before go-live. We didn't. I moved to the next ticket.

That moment reframed how I think about application security. The developer who wrote the vulnerable code at that SaaS company wasn't reckless. I'd have bet money they were talented. They were just operating inside a system that made security easy to defer and hard to prioritise.

That's the real story of insecure code. And it's one that most security conversations completely miss.


The Real Reasons Developers Write Insecure Code

The lazy narrative is that developers just don't care about security. That they cut corners. That they need more discipline. This is wrong, and believing it will cause you to solve the wrong problem.

Here's what's actually happening.

Security was never part of their education. The vast majority of computer science and software engineering degree programmes teach almost nothing about secure coding. Students learn data structures, algorithms, object-oriented design, and system architecture. They do not learn how SQL injection works, what a race condition looks like in authentication logic, or why storing passwords in plaintext is catastrophic. They graduate, join a company, and inherit a codebase with existing patterns — some of which are insecure — and replicate those patterns because that's what professional development looks like when you're learning on the job.

A 2022 survey by Secure Code Warrior found that fewer than half of developers had received any formal secure coding training. That's not a developer failure. That's an industry-wide curriculum failure.

Deadlines create pressure that security cannot survive. When a sprint ends on Friday and a feature needs to ship, security tasks get moved to the backlog. The backlog grows. The security tasks age. They're eventually closed as "won't fix" or forgotten entirely when priorities shift. This isn't laziness — it's rational behaviour in a system that measures developers on delivery velocity and rarely on security outcomes. You get what you measure.

Security requirements are almost never written down. Most product requirements documents describe what a feature should do. They almost never describe what a feature should prevent. If a user story says "As a user, I can view my invoices," the developer builds exactly that: a way for a user to view invoices. Whether that same user can also view someone else's invoices by modifying a URL parameter is not in the story, so it's not in the developer's frame of reference when they build the solution.
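
One lightweight way to capture the "should prevent" half is to write it down as an abuse-case test next to the acceptance criteria. The sketch below is hypothetical: get_invoice, the billing module, and the Forbidden exception are stand-ins for whatever your codebase actually calls them. The value is that the negative expectation now lives somewhere a developer and a CI pipeline will both see it.

# Hypothetical abuse-case test: encodes "a user must NOT be able to
# view someone else's invoice" as an executable requirement.
import pytest
from billing import get_invoice, Forbidden   # assumed names, adjust to your codebase

def test_user_can_view_own_invoice():
    invoice = get_invoice(requesting_user_id=42, invoice_id="INV-1001")
    assert invoice.owner_id == 42

def test_user_cannot_view_someone_elses_invoice():
    # invoice INV-2002 belongs to user 7, not user 42
    with pytest.raises(Forbidden):
        get_invoice(requesting_user_id=42, invoice_id="INV-2002")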

The feedback loop is broken. When a developer writes a bug, the QA process catches it relatively quickly. When a developer writes a security vulnerability, it may sit undetected for months or years — until a penetration test, a bug bounty report, or an actual breach surfaces it. By then, the developer has moved on to different work, different projects, sometimes different companies. There is no learning moment. The feedback loop that normally accelerates skill development simply doesn't exist for security.

Security tooling often treats developers as the enemy. Legacy security tools — the kind that generate 900-line vulnerability reports full of CWE codes and CVSS scores — are not built for developers. They're built for security teams. A developer who receives a wall of findings with no prioritisation, no code-level context, and no remediation guidance will do what any rational person does when handed an incomprehensible document: they'll close the tab and keep shipping.


The 5 Most Common Developer Security Mistakes

These are not exotic. They are not theoretical. They appear consistently in real codebases — including those at companies that believe their engineering standards are high.

1. SQL Injection

Still. In 2026. SQL injection happens when database queries are built by concatenating raw user input into the SQL string rather than using parameterised queries or prepared statements. An attacker who figures this out can rewrite your queries entirely: extracting data, bypassing authentication, or deleting records. The fix has been known for decades. The pattern persists because developers learn to build queries by concatenating strings, and no one corrects it.

-- Vulnerable: user input becomes part of the SQL statement
"SELECT * FROM users WHERE email = '" + userInput + "'"

-- Secure: user input is supplied separately as a bound parameter
"SELECT * FROM users WHERE email = ?"

2. Broken Access Control

This was the number one vulnerability in the OWASP Top 10 2025 — the industry's definitive list of the most critical web application security risks. Broken access control covers situations where users can act outside their intended permissions: viewing another user's data, accessing admin functionality, modifying records they don't own.

The Uber breach, which we covered in our previous post, partly involved an attacker moving laterally through internal systems because access controls between services were insufficiently enforced. This is an application architecture problem as much as it is a coding problem.
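
At the code level, the fix for the insecure direct object reference described earlier is a single ownership check. A minimal sketch, written without any particular web framework; the record store and field names are illustrative:

# Hypothetical request handler. The fix is the ownership check, not the lookup.
from dataclasses import dataclass

@dataclass
class Record:
    id: int
    owner_id: int
    data: str

DB = {
    1: Record(1, owner_id=42, data="alice's record"),
    2: Record(2, owner_id=7, data="bob's record"),
}

def get_record(requesting_user_id: int, record_id: int) -> Record:
    record = DB.get(record_id)
    if record is None:
        raise LookupError("not found")
    # The line that was missing in the code I wrote at 2am:
    if record.owner_id != requesting_user_id:
        raise PermissionError("not your record")
    return record

In practice, many teams deliberately return the same "not found" response for both missing and forbidden records, so an attacker can't use the difference to enumerate which IDs exist.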

3. Hardcoded Credentials and Secrets in Code

Developers regularly commit API keys, database passwords, private tokens, and cloud credentials directly into source code. This is especially catastrophic when that code lives in a public GitHub repository — and it happens more often than any engineering team would like to admit. Automated scanners crawl public repositories looking for exactly this pattern, and they find it constantly.

The 2025 GitGuardian State of Secrets Sprawl Report found over 23 million secrets exposed in public GitHub commits during 2024. The majority were developer-owned credentials.
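
The habit that prevents most of this is boring: secrets live in the environment or a secret manager, never in the source tree. A minimal sketch; the variable name and the error handling are illustrative, not prescriptive:

import os

# Hardcoded credentials end up in git history and in every clone of the repo:
# STRIPE_API_KEY = "sk_live_..."

# Read from the environment instead, and fail loudly if it's missing
STRIPE_API_KEY = os.environ.get("STRIPE_API_KEY")
if STRIPE_API_KEY is None:
    raise RuntimeError("STRIPE_API_KEY is not set; configure it via your secret manager")

Pair this with the secrets scanning tools described later in this post, so the credentials that do slip through are caught before they reach a public repository.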

4. Insecure Dependency Management

Modern applications are built on layers of third-party libraries and open source packages. A typical Node.js application might have hundreds of transitive dependencies — packages that your packages depend on, that you've never directly reviewed. When a vulnerability is discovered in one of those packages (as happened with Log4Shell in 2021, which affected an enormous portion of the world's enterprise software), every application that depends on it becomes vulnerable overnight.

Most development teams do not have a formal process for tracking dependency vulnerabilities or ensuring packages are updated when CVEs are published.

5. Verbose Error Messages and Stack Traces in Production

When an application throws an error and returns a full stack trace — including file paths, library versions, database schema details, or internal variable names — that information is a map for an attacker. It tells them exactly what the technology stack looks like, where files live, and which inputs cause the application to behave unexpectedly. This is information they would otherwise have to work hard to discover.

Developers leave verbose error handling in production because it's enormously useful during development and testing. Removing or sanitising it before go-live is a step that frequently gets skipped.
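
A common middle ground is to log the full detail server-side and hand the client a generic message plus a reference ID. A hedged Python sketch; process() stands in for whatever your application actually does:

import logging
import traceback
import uuid

logger = logging.getLogger("app")

def handle_request(payload: dict) -> dict:
    try:
        return process(payload)   # assumed application logic, not defined here
    except Exception:
        # Full stack trace goes to the server-side log, keyed by a reference id...
        error_id = uuid.uuid4().hex[:8]
        logger.error("error %s\n%s", error_id, traceback.format_exc())
        # ...while the client only ever sees a generic message
        return {"error": "Something went wrong", "reference": error_id}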


The Security Mindset — Thinking Like an Attacker

The shift from writing code to writing secure code is fundamentally a shift in perspective. Developers are trained to think about what their code should do. Security requires thinking about what their code could be made to do — by someone with intent to misuse it.

This is sometimes called threat modelling, but it doesn't have to be a formal process to be effective. It can start with a single question applied at every design decision: "What happens if someone tries to abuse this?"

What happens if a user submits a negative quantity on a purchase form? What happens if they modify the user ID in the API request to a different number? What happens if they submit 50,000 characters in a field that expects a postcode? What happens if they replay an authentication token after it should have expired?

These are not exotic attack scenarios. They are the first questions an attacker asks when they encounter an application. Developers who build this habit of questioning their own assumptions create code that is substantially harder to exploit — not because they've learned every vulnerability class, but because they've stopped assuming that users will behave the way the interface expects them to.
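
Turning even two of those questions into explicit checks usually costs a few lines of code. A sketch, with illustrative field names and limits:

# A couple of the questions above, turned into explicit checks.
# The limits are illustrative, not prescriptive.

MAX_POSTCODE_LENGTH = 16

def validate_order(quantity: int, postcode: str) -> None:
    if quantity <= 0:
        raise ValueError("quantity must be a positive integer")   # the negative-quantity case
    if len(postcode) > MAX_POSTCODE_LENGTH:
        raise ValueError("postcode is unreasonably long")         # the 50,000-character case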

Security champions programmes — where one or two developers per team are given dedicated security training and become the internal point of reference for their colleagues — have been shown to dramatically improve security outcomes without requiring every developer to become a security specialist.


Practical Tools That Actually Help

The good news is that the tooling available to development teams today is significantly better than it was even five years ago. The key is integrating it into the development workflow rather than bolting it on at the end.

Static Application Security Testing (SAST). SAST tools analyse source code without executing it, looking for known vulnerability patterns — SQL injection, command injection, use of deprecated cryptographic functions, and so on. Tools like Semgrep, SonarQube, and Snyk Code can be integrated directly into a CI/CD pipeline, flagging issues before code is ever merged. The critical factor is choosing a tool that surfaces findings in the developer's IDE or pull request workflow — not in a separate security dashboard that no one checks.

Software Composition Analysis (SCA). SCA tools specifically analyse your third-party dependencies, comparing them against known vulnerability databases like the National Vulnerability Database (NVD). Tools like OWASP Dependency-Check, Snyk Open Source, and GitHub's built-in Dependabot alerts can surface vulnerable packages and, in many cases, suggest the exact version upgrade required to resolve them.

Secrets Scanning. Tools like TruffleHog, GitGuardian, and GitHub's native secret scanning detect credentials and tokens committed to repositories — in real time, before they're pushed to remote, or retrospectively across commit history. These should be running on every repository, always.

Secure Code Review. Automated tools catch patterns. They don't catch logic flaws. An application that correctly validates inputs but implements a flawed authentication flow, a broken business logic check, or an insecure multi-step process will pass most automated scans without issue. This is where human-led secure code review becomes essential — a security professional who reads your code with the specific goal of finding what automation misses.

Developer Security Training. Platforms like Secure Code Warrior, HackTheBox, and OWASP WebGoat offer hands-on, language-specific training that teaches developers to identify and fix vulnerabilities in realistic coding scenarios. This is orders of magnitude more effective than slideshow-based compliance training.


What Business Leaders Actually Need to Understand

If you are a CEO, CTO, or engineering manager reading this, here is the practical implication of everything above.

Your developers are not your security problem. Your processes are. If your team has no security requirements in their sprint stories, no security tooling in their pipeline, no security training in their onboarding, and no security review before code ships to production — you have created a system that reliably produces insecure software. Hiring more developers won't change that. Pressuring the team won't change that. Only changing the system will.

The investment required to shift left on security — to catch vulnerabilities during development rather than after deployment — is a fraction of the cost of remediating them post-breach. IBM's Cost of a Data Breach Report 2025 found that a vulnerability identified in development costs around $80 to fix, while the same vulnerability identified post-breach costs an average of $7,600. The maths isn't complicated.


How Kuboid Secure Layer Can Help

At Kuboid Secure Layer, we work closely with engineering teams — not around them. Our approach to application security is deliberately developer-friendly: findings come with clear explanations, code-level context, and actionable remediation guidance rather than raw vulnerability dumps.

Our secure code review and application penetration testing services are designed to surface the vulnerabilities that automated scanning misses — the logic flaws, the broken access controls, the subtle authentication weaknesses that only emerge when someone reads your code the way an attacker would. We also work with organisations to integrate the right automated tooling into their development pipelines so that the next codebase starts with better foundations.

If you want to understand the security posture of your existing codebase — or build the right practices into a new one — we'd be glad to have that conversation.


Final Thought

The vulnerability I found in my own code years ago wasn't the result of not caring. I cared deeply. It was the result of a system — a deadline, a missing requirement, a gap in my training — that made it easy to ship something insecure and hard to notice until later.

Most insecure code has that same story behind it. And the companies that understand that story are the ones that fix the right things.

Your developers aren't writing insecure code because they don't know better. They're writing insecure code because no one has given them the time, the tools, the training, or the requirements to do otherwise. Fix those things, and the code takes care of itself.


Kuboid Secure Layer provides application security assessments, secure code reviews, and developer security programmes for growing businesses. Learn more at www.kuboid.in.

Vinay Kumar
Security Researcher @ Kuboid
Get In Touch

Let's find your vulnerabilities before they do.

Tell us about your product and we'll tell you what we'd attack first. Free consultation, no commitment.

  • 📧support@kuboid.in
  • ⏱️Typical response within 24 hours
  • 🌍Serving clients globally from India
  • 🔒NDA available before any discussion