JWT Decoder JavaScript: atob(), TextDecoder & jose
Use the free online JWT Decoder directly in your browser, with no install required.
Try JWT Decoder Online

Every authentication flow I've built eventually reaches the same point: you have a JWT sitting in a cookie, a header, or an OAuth callback URL, and you need to read what's inside it. A JWT decoder in JavaScript does not require any npm package. The token's header and payload are just Base64url-encoded JSON, and both the browser and Node.js ship with everything needed to decode them. This guide covers the full JavaScript text decoder pipeline for JWTs: splitting the token, normalizing base64url to standard Base64, atob() and TextDecoder for proper UTF-8 handling, Node.js Buffer.from(), signature verification with jose, and the common mistakes that trip up developers every day. For a quick one-off inspection, try the online JWT Decoder instead. All examples target ES2020+ and Node.js 18+.
- Split the JWT on ".": index 0 is the header, index 1 is the payload, index 2 is the signature.
- atob() decodes Base64 but returns Latin-1, not UTF-8. Use TextDecoder or Buffer.from() for non-ASCII claims.
- Buffer.from(segment, "base64url") handles base64url natively in Node.js; no manual character replacement needed.
- Decoding is NOT verification. Never trust claims from a decoded JWT without checking the signature server-side.
- The jose library does both: it verifies HS256/RS256/ES256 signatures and returns the decoded payload in one call.
What is JWT Decoding?
A JSON Web Token is three Base64url-encoded segments separated by dots. The first segment is the header, the second is the payload (the claims you actually care about), and the third is the cryptographic signature. The header is a small JSON object that describes the token itself. Its most important field is alg, the signing algorithm (e.g., HS256, RS256, ES256). The typ field is almost always "JWT", and the optional kid field identifies which key was used to sign the token, which is critical when an identity provider rotates keys and publishes a JWKS endpoint with multiple public keys.
The payload carries the claims. RFC 7519 defines seven registered claim names: sub (subject, usually the user ID), iss (issuer, the auth server URL), aud (audience, the API the token is intended for), iat (issued-at timestamp), exp (expiration timestamp), nbf (not-before timestamp), and jti (JWT ID, used to prevent replay attacks). All timestamps are Unix epoch seconds, not milliseconds. The signature segment is raw binary: a keyed HMAC digest or an asymmetric digital signature. It is Base64url-encoded like the other segments, but its bytes are not JSON and have no human-readable structure.
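As a concrete illustration, here is a payload object carrying all seven registered claims. Every value is invented for this sketch; real tokens rarely carry all seven at once.

```javascript
// Hypothetical payload showing all seven RFC 7519 registered claims.
// Every value here is invented for illustration.
const claims = {
  iss: "https://auth.example.com", // issuer: the auth server URL
  sub: "usr_921f",                 // subject: the user ID
  aud: "billing-api",              // audience: the API the token is for
  iat: 1711610000,                 // issued-at, Unix seconds
  nbf: 1711610000,                 // not valid before, Unix seconds
  exp: 1711613600,                 // expiration, Unix seconds
  jti: "f3b1c2d4",                 // unique token ID for replay prevention
};

// Timestamps are seconds, so multiply by 1000 before building a Date
console.log(new Date(claims.exp * 1000).toISOString());
// "2024-03-28T08:13:20.000Z"
```

Note that exp minus iat is 3600 here: a one-hour token lifetime.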
In practice, you decode JWTs in JavaScript for three common reasons. First, debugging: you have a token from an OAuth flow or a test environment and you want to confirm the claims match what the auth server should have issued. Second, reading user claims for display purposes on the client side: showing the logged-in user's name, avatar URL, or role badge from the token payload without an extra API call. Third, checking expiry before attempting a refresh: if exp is within the next 60 seconds, trigger a silent refresh before the next API call rather than waiting for a 401 response.
Decoding does not check whether the token is valid or tampered with. That is a separate operation called verification, which requires the HMAC secret or the RSA/ECDSA public key. Anyone can decode a JWT. Only the holder of the correct key can verify one. This distinction trips up many developers, especially when building client-side auth flows where decoded claims are displayed but must never be trusted for authorization decisions without a verified backend check.
eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c3JfOTIxZiIsInJvbGUiOiJhZG1pbiIsImlhdCI6MTcxMTYxMDAwMH0.dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk
// Header
{ "alg": "HS256" }
// Payload
{
"sub": "usr_921f",
"role": "admin",
"iat": 1711610000
}

atob() + TextDecoder: Browser-Native JWT Decode
The browser-native pipeline for decoding a JWT has four steps. First, split the token string on "." to get the three segments. Second, normalize the base64url segment by replacing - with + and _ with /, then padding with = characters until the length is a multiple of 4. Third, call atob() to decode the Base64 into a binary string. Fourth, convert the binary string to proper UTF-8 using TextDecoder. That last step matters because atob() returns Latin-1. Multi-byte characters (emoji, CJK text, accented characters beyond the Latin-1 range) come out garbled without the JavaScript text decoder step.
const token = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c3JfOTIxZiIsInJvbGUiOiJhZG1pbiIsImlhdCI6MTcxMTYxMDAwMH0.dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk";
function decodeJwtPayload(jwt) {
const base64Url = jwt.split(".")[1];
const base64 = base64Url.replace(/-/g, "+").replace(/_/g, "/");
const padded = base64.padEnd(base64.length + (4 - (base64.length % 4)) % 4, "=");
const binary = atob(padded);
const bytes = Uint8Array.from(binary, ch => ch.charCodeAt(0));
const json = new TextDecoder("utf-8").decode(bytes);
return JSON.parse(json);
}
console.log(decodeJwtPayload(token));
// { sub: "usr_921f", role: "admin", iat: 1711610000 }The padding step is easy to overlook. JWTs strip trailing = characters from their Base64url segments because the JWT specification (RFC 7515) defines base64url without padding. But atob() in some browser engines throws an InvalidCharacterError if the input length is not divisible by 4. Padding defensively with padEnd() avoids that edge case across all environments. Here is a reusable version that decodes both header and payload into separate objects:
function decodeBase64Url(segment) {
const base64 = segment.replace(/-/g, "+").replace(/_/g, "/");
const padded = base64.padEnd(base64.length + (4 - (base64.length % 4)) % 4, "=");
const binary = atob(padded);
const bytes = Uint8Array.from(binary, ch => ch.charCodeAt(0));
return new TextDecoder("utf-8").decode(bytes);
}
function decodeJwt(token) {
const [headerB64, payloadB64] = token.split(".");
return {
header: JSON.parse(decodeBase64Url(headerB64)),
payload: JSON.parse(decodeBase64Url(payloadB64)),
};
}
const { header, payload } = decodeJwt(token);
console.log("Algorithm:", header.alg); // "HS256"
console.log("Subject:", payload.sub); // "usr_921f"
console.log("Role:", payload.role); // "admin"Once you have these two functions, it is worth placing them in a shared utility module rather than copy-pasting the logic across files. A src/lib/jwt.ts or utils/jwt-decode.ts file with a typed return shape makes intent explicit across the codebase. In TypeScript, you can type the return as { header: JwtHeader; payload: JwtPayload } where JwtHeader includes alg, typ, and optional kid, and JwtPayload extends the RFC 7519 registered claims with an index signature for custom claims. Centralizing the decode logic means that when you later want to add error handling (catching malformed segments) or telemetry (logging decode failures), you only have one place to update.
The TextDecoder step is what makes this pipeline safe for non-ASCII claims. Without it, atob() returns a Latin-1 string where multi-byte UTF-8 sequences are split across characters. You will see garbage instead of emoji or CJK text. Always pipe through new TextDecoder("utf-8") after atob().

Decoding UTF-8 JWT Claims with Multi-Byte Characters
JWT payloads are UTF-8 JSON encoded as base64url. Most payloads contain ASCII-only fields like user IDs and timestamps, so developers never notice that atob() returns Latin-1 instead of UTF-8. The problem surfaces the moment a claim contains emoji, Japanese characters, Cyrillic, or any code point above U+00FF. The JavaScript decode UTF-8 pattern requires converting the binary string to a byte array first, then running it through TextDecoder.
// Simulating a JWT payload with emoji and CJK characters
const payloadObj = {
sub: "usr_e821",
display_name: "η°δΈε€ͺι",
team: "Platform π",
region: "ap-northeast-1"
};
// Encode: object β JSON β UTF-8 bytes β base64url
const jsonStr = JSON.stringify(payloadObj);
const utf8Bytes = new TextEncoder().encode(jsonStr);
const base64 = btoa(String.fromCharCode(...utf8Bytes))
.replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
// Decode: base64url β base64 β binary string β bytes β UTF-8 string
const base64Std = base64.replace(/-/g, "+").replace(/_/g, "/");
const binary = atob(base64Std);
const bytes = Uint8Array.from(binary, c => c.charCodeAt(0));
const decoded = new TextDecoder("utf-8").decode(bytes);
const result = JSON.parse(decoded);
console.log(result.display_name); // "田中太郎" (correct)
console.log(result.team); // "Platform 🚀" (correct)

There is a legacy fallback pattern you will see in older codebases that uses decodeURIComponent combined with a percent-encoding trick. This JavaScript decodeURIComponent approach works because it re-encodes each byte as a percent-hex pair, then decodeURIComponent reassembles the multi-byte UTF-8 sequences:
function decodeBase64UrlLegacy(segment) {
const base64 = segment.replace(/-/g, "+").replace(/_/g, "/");
const binary = atob(base64);
// Convert each char to %XX hex, then decodeURIComponent reassembles UTF-8
const utf8 = decodeURIComponent(
binary.split("").map(c =>
"%" + c.charCodeAt(0).toString(16).padStart(2, "0")
).join("")
);
return utf8;
}
// Works for non-ASCII claims without TextDecoder
const payload = decodeBase64UrlLegacy(token.split(".")[1]);
console.log(JSON.parse(payload));

You may still see the decodeURIComponent(escape(atob(segment))) pattern in older JWT utility snippets. The escape() function is deprecated and non-standard. Replace it with the TextDecoder approach shown above. The JavaScript unescape decoder pattern has the same problem: unescape() is deprecated. Both functions may be removed from future JavaScript engines.

JWT Decode Pipeline: Step Reference
Each step in the browser-native JWT decode pipeline, with the JavaScript API used and what it produces:

1. Split the token on "." (String.prototype.split): three base64url segments.
2. Normalize base64url to Base64 (replace() plus padEnd()): a standard, padded Base64 string.
3. Decode the Base64 (atob()): a Latin-1 binary string.
4. Convert to UTF-8 (Uint8Array.from plus TextDecoder): the JSON claims string, ready for JSON.parse.
The Node.js equivalent collapses steps 2 through 4 into a single call: Buffer.from(segment, "base64url").toString("utf-8"). The "base64url" encoding option handles the alphabet conversion and padding internally.
Buffer.from(): The Node.js String Decoder for JWTs
Node.js has a much simpler path. The Buffer class accepts a "base64url" encoding directly, so you skip the manual character replacement and padding. This is the JavaScript string decoder path for server-side code. One line turns a JWT segment into a UTF-8 string, and it handles multi-byte characters correctly without any extra steps.
const token = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c3JfOTIxZiIsIm9yZyI6ImFjbWUtY29ycCIsInJvbGUiOiJiaWxsaW5nIiwiaWF0IjoxNzExNjEwMDAwfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c";
function decodeJwt(jwt) {
const segments = jwt.split(".");
return {
header: JSON.parse(Buffer.from(segments[0], "base64url").toString("utf-8")),
payload: JSON.parse(Buffer.from(segments[1], "base64url").toString("utf-8")),
};
}
const { header, payload } = decodeJwt(token);
console.log(header);
// { alg: "HS256" }
console.log(payload);
// { sub: "usr_921f", org: "acme-corp", role: "billing", iat: 1711610000 }This is the approach I reach for in every Node.js project. It is shorter, faster, and already handles UTF-8 correctly. No TextDecoder needed, no character replacement, no padding math. The Buffer class is a JavaScript string decoder that handles the base64url alphabet natively, which eliminates an entire class of bugs related to character substitution. If your code needs to run in both the browser and Node.js, check the FAQ at the bottom for an isomorphic wrapper function that detects the environment at runtime.
Here is a more complete example showing how to extract common JWT claims and convert timestamps to readable dates, which is the pattern you will use most often in middleware and API route handlers:
function inspectToken(token) {
const segments = token.split(".");
if (segments.length !== 3) {
throw new Error("Not a valid JWT β expected 3 dot-separated segments");
}
const header = JSON.parse(Buffer.from(segments[0], "base64url").toString("utf-8"));
const payload = JSON.parse(Buffer.from(segments[1], "base64url").toString("utf-8"));
const inspection = {
algorithm: header.alg,
tokenType: header.typ || "JWT",
subject: payload.sub,
issuer: payload.iss || "(not set)",
audience: payload.aud || "(not set)",
issuedAt: payload.iat ? new Date(payload.iat * 1000).toISOString() : "(not set)",
expiresAt: payload.exp ? new Date(payload.exp * 1000).toISOString() : "(never)",
isExpired: payload.exp ? payload.exp < Math.floor(Date.now() / 1000) : false,
customClaims: Object.keys(payload).filter(
k => !["sub", "iss", "aud", "iat", "exp", "nbf", "jti"].includes(k)
),
};
return inspection;
}
console.log(inspectToken(process.env.ACCESS_TOKEN));
// {
// algorithm: "RS256",
// tokenType: "JWT",
// subject: "usr_921f",
// issuer: "https://auth.internal",
// audience: "billing-api",
// issuedAt: "2026-03-10T14:00:00.000Z",
// expiresAt: "2026-03-10T15:00:00.000Z",
// isExpired: true,
// customClaims: ["role", "scope", "org"]
// }

In production Node.js services, the Buffer.from() decode pattern shows up in three recurring places. The first is request logging middleware: you decode the incoming Authorization header to attach userId and org to every structured log entry without an extra network round-trip to the auth server. The second is debugging: you print decoded token claims to the console during development to confirm the correct scopes were issued before writing test assertions. The third is proactive token refresh in API gateways. Rather than forwarding a token upstream and letting the downstream service return a 401 when the token expires mid-request, the gateway decodes the token at the edge, reads the exp claim, and triggers a refresh if expiry is within the next 30 seconds. This eliminates a class of transient auth failures that are difficult to reproduce and frustrating to debug.
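That edge-side expiry check fits in a few lines. This is a sketch: the shouldRefresh name and the 30-second window are illustrative choices, and refreshAccessToken() is a hypothetical function you would supply.

```javascript
// Sketch of a proactive refresh check at a gateway or API client.
// The 30-second window is an arbitrary example threshold.
function shouldRefresh(token, windowSeconds = 30) {
  const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64url").toString("utf-8")
  );
  if (!payload.exp) return false; // no expiry claim: nothing to refresh on
  const nowSeconds = Math.floor(Date.now() / 1000);
  return payload.exp - nowSeconds < windowSeconds;
}

// Usage (hypothetical refresh function):
// if (shouldRefresh(accessToken)) {
//   accessToken = await refreshAccessToken();
// }
```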
"base64url" encoding was added in Node.js 15.7.0. If you are stuck on Node.js 14 or earlier, fall back to Buffer.from(segment.replace(/-/g, "+").replace(/_/g, "/"), "base64") which works the same way but requires the manual character swap.Decode JWT from a File and API Response
Two scenarios come up constantly. The first is reading a JWT from a local file: a saved token during development, a test fixture, or a file dumped during an incident for post-mortem analysis. The second is extracting a JWT from an HTTP response, typically the access_token field in an OAuth token response body or an Authorization header. Both need error handling because malformed tokens, truncated files, and network errors are everyday realities. A token that was valid last week might have trailing whitespace or newlines from copy-paste. A response body might be HTML instead of JSON if the auth server returned an error page.
Read JWT from a File (Node.js)
import { readFileSync } from "node:fs";
function decodeJwtFromFile(filePath) {
const raw = readFileSync(filePath, "utf-8").trim();
const segments = raw.split(".");
if (segments.length !== 3) {
throw new Error(`Invalid JWT: expected 3 segments, got ${segments.length}`);
}
try {
return {
header: JSON.parse(Buffer.from(segments[0], "base64url").toString("utf-8")),
payload: JSON.parse(Buffer.from(segments[1], "base64url").toString("utf-8")),
};
} catch (err) {
throw new Error(`Failed to decode JWT from ${filePath}: ${err.message}`);
}
}
try {
const { header, payload } = decodeJwtFromFile("./test-fixtures/access-token.txt");
console.log("Algorithm:", header.alg);
console.log("Expires:", new Date(payload.exp * 1000).toISOString());
} catch (err) {
console.error(err.message);
}

Extract JWT from an API Response (fetch)
async function fetchAndDecodeToken(loginUrl, credentials) {
const response = await fetch(loginUrl, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(credentials),
});
if (!response.ok) {
throw new Error(`Login failed: ${response.status} ${response.statusText}`);
}
const { access_token } = await response.json();
if (!access_token || access_token.split(".").length !== 3) {
throw new Error("Response does not contain a valid JWT");
}
const payload = access_token.split(".")[1];
const json = Buffer.from(payload, "base64url").toString("utf-8");
return JSON.parse(json);
}
// Usage
try {
const claims = await fetchAndDecodeToken(
"https://auth.internal/oauth/token",
{ username: "deploy-bot", password: process.env.DEPLOY_TOKEN }
);
console.log("Token subject:", claims.sub);
console.log("Token scopes:", claims.scope);
console.log("Expires at:", new Date(claims.exp * 1000).toISOString());
} catch (err) {
console.error("Token decode error:", err.message);
}

Command-Line JWT Decoding
Sometimes you just want to peek at a token from the terminal without writing a script. Node.js is available on most developer machines, so a one-liner works well. jq handles the pretty-printing.
# Decode JWT payload with Node.js one-liner
echo "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c3JfOTIxZiIsInJvbGUiOiJhZG1pbiJ9.dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk" \
| cut -d. -f2 \
| node -e "process.stdin.on('data', d => console.log(JSON.parse(Buffer.from(d.toString().trim(), 'base64url').toString('utf-8'))))"
# Pipe to jq for pretty output
echo "$JWT_TOKEN" | cut -d. -f2 \
| node -e "process.stdin.on('data', d => process.stdout.write(Buffer.from(d.toString().trim(), 'base64url').toString('utf-8')))" \
| jq .
# Decode both header and payload
echo "$JWT_TOKEN" | node -e "
process.stdin.on('data', d => {
const parts = d.toString().trim().split('.');
console.log('Header:', JSON.parse(Buffer.from(parts[0], 'base64url').toString()));
console.log('Payload:', JSON.parse(Buffer.from(parts[1], 'base64url').toString()));
});
"If you prefer pure bash without Node.js, pipe the segment through base64 -d after fixing the base64url characters with tr:
# Pure bash: decode JWT payload without Node.js
echo "$JWT_TOKEN" | cut -d. -f2 | tr '_-' '/+' | base64 -d 2>/dev/null | jq .
# macOS variant (base64 -D instead of -d)
echo "$JWT_TOKEN" | cut -d. -f2 | tr '_-' '/+' | base64 -D 2>/dev/null | jq .
For quick visual inspection without any terminal at all, paste your token into the ToolDeck JWT Decoder for a side-by-side breakdown of all three segments with color-coded claim labels and expiration status.
jose: Verification and Decoding in One Library
For production authentication middleware, you need signature verification, not just decoding. The jose library is the best option. It works in both Node.js and browsers (via the Web Crypto API), supports HS256, RS256, ES256, EdDSA, and JWE (encrypted tokens), and has zero native dependencies. Install with npm install jose.
import * as jose from "jose";
const secret = new TextEncoder().encode("k8s-webhook-signing-secret-2026");
const token = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c3JfOTIxZiIsInNjb3BlIjoiYmlsbGluZzpyZWFkIiwiaWF0IjoxNzExNjEwMDAwLCJleHAiOjE3MTE2MTM2MDB9.abc123";
try {
const { payload, protectedHeader } = await jose.jwtVerify(token, secret);
console.log("Algorithm:", protectedHeader.alg); // "HS256"
console.log("Subject:", payload.sub); // "usr_921f"
console.log("Scope:", payload.scope); // "billing:read"
} catch (err) {
if (err.code === "ERR_JWT_EXPIRED") {
console.error("Token expired at:", err.payload.exp);
} else {
console.error("Verification failed:", err.message);
}
}

For tokens signed with an asymmetric algorithm such as RS256, verify against the identity provider's JWKS endpoint instead of a shared secret:

import * as jose from "jose";
// Fetch the public key set from the identity provider
const jwks = jose.createRemoteJWKSet(
new URL("https://auth.internal/.well-known/jwks.json")
);
const token = req.headers.authorization?.split(" ")[1];
if (!token) {
return res.status(401).json({ error: "Missing token" });
}
try {
const { payload } = await jose.jwtVerify(token, jwks, {
issuer: "https://auth.internal",
audience: "billing-api",
});
// payload.sub, payload.scope, etc. are now verified
req.userId = payload.sub;
} catch (err) {
return res.status(401).json({ error: "Invalid token" });
}

When deciding between jose and the older jsonwebtoken package, the key difference is runtime scope. jsonwebtoken is Node.js-only: it relies on the crypto built-in and will not bundle for the browser. jose is fully isomorphic: it uses the Web Crypto API, which is available in all modern browsers, Node.js 16+, Deno, Bun, and Cloudflare Workers. If your auth logic lives in a Next.js middleware file (which runs in the Edge Runtime), or in a Cloudflare Worker, or in a shared utility that is imported by both server and client code, jose is the correct choice because it has zero native dependencies and installs without a build step. jsonwebtoken remains reasonable for pure Node.js server applications where you need its broader ecosystem of signing helpers and you are not planning to run the code in an edge environment. In a greenfield project in 2026, default to jose unless you have a specific reason to prefer the older API.
If you only need decode without verification, jose provides jose.decodeJwt(token), which returns the payload, and jose.decodeProtectedHeader(token) for the header. These are convenience functions that do the Base64url decoding internally. But the whole reason to reach for jose is that you should rarely decode without also verifying. If you are on the client side and just need to show the user their own display name or avatar URL from the token claims, decode-only is fine. On the server side, always verify. I have seen production systems that decoded JWT claims for access control decisions without checking the signature, and that is an open door for any attacker who understands the JWT format.
import * as jose from "jose";
// Decode-only: no secret needed, no verification
const payload = jose.decodeJwt(token);
console.log(payload.sub); // "usr_921f"
console.log(payload.scope); // "billing:read"
const header = jose.decodeProtectedHeader(token);
console.log(header.alg); // "HS256"
console.log(header.typ); // "JWT"
// Check expiry without verification (client-side display)
if (payload.exp && payload.exp < Math.floor(Date.now() / 1000)) {
console.log("Token has expired β redirect to login");
}Terminal Output with Syntax Highlighting
When debugging JWT tokens in a Node.js CLI tool or during an incident, color-coded output makes a real difference. The chalk library paired with JSON.stringify gets the job done. Install with npm install chalk.
import chalk from "chalk";
function printJwt(token) {
const segments = token.split(".");
if (segments.length !== 3) {
console.error(chalk.red("Invalid JWT: expected 3 segments"));
return;
}
const header = JSON.parse(Buffer.from(segments[0], "base64url").toString("utf-8"));
const payload = JSON.parse(Buffer.from(segments[1], "base64url").toString("utf-8"));
console.log(chalk.bold.cyan("\n=== JWT Header ==="));
console.log(chalk.gray(JSON.stringify(header, null, 2)));
console.log(chalk.bold.green("\n=== JWT Payload ==="));
console.log(chalk.gray(JSON.stringify(payload, null, 2)));
// Highlight expiration status
if (payload.exp) {
const expiresAt = new Date(payload.exp * 1000);
const isExpired = expiresAt < new Date();
console.log(
chalk.bold("\nExpires:"),
isExpired
? chalk.red(`EXPIRED at ${expiresAt.toISOString()}`)
: chalk.green(`Valid until ${expiresAt.toISOString()}`)
);
}
console.log(chalk.dim("\nSignature: " + segments[2].substring(0, 20) + "..."));
}
printJwt(process.argv[2]);
// Run: node jwt-debug.mjs "eyJhbGci..."

Processing JWTs from Large Log Files
Modern API infrastructure emits structured access logs in NDJSON format: one JSON object per line, with each line containing the request path, response status, latency, and the decoded or raw Authorization header. In a busy service these files grow quickly: a gateway handling 10,000 requests per minute produces over 14 million log entries per day. Security and compliance use cases regularly require scanning these files after the fact: identifying every request made by a compromised service account (post-incident analysis), confirming that a specific user's tokens expired before a data-access window (compliance audit), or extracting the full set of subjects who accessed a sensitive endpoint during a maintenance window. Because a single log file can exceed several gigabytes, loading it into memory with readFileSync is not viable. Node.js readline streams process the file one line at a time with constant memory overhead, making it practical to scan arbitrarily large logs on a standard developer laptop.
You will not hit the "file too large for memory" problem with individual JWTs, since a single token is rarely more than a few kilobytes. The scenario that does come up is scanning a large access log or audit trail for JWT tokens, decoding each one, and extracting specific claims. Node.js streams handle this without loading the entire file.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";
async function scanLogsForExpiredTokens(logPath) {
const fileStream = createReadStream(logPath, { encoding: "utf-8" });
const rl = createInterface({ input: fileStream, crlfDelay: Infinity });
let lineCount = 0;
let expiredCount = 0;
const nowSeconds = Math.floor(Date.now() / 1000);
for await (const line of rl) {
lineCount++;
try {
const entry = JSON.parse(line);
if (!entry.authorization_token) continue;
const segments = entry.authorization_token.split(".");
if (segments.length !== 3) continue;
const payload = JSON.parse(
Buffer.from(segments[1], "base64url").toString("utf-8")
);
if (payload.exp && payload.exp < nowSeconds) {
expiredCount++;
const expDate = new Date(payload.exp * 1000).toISOString();
console.log("Line " + lineCount + ": expired token for " + payload.sub + ", exp=" + expDate);
}
} catch {
// Skip malformed lines
}
}
console.log(`\nScanned ${lineCount} lines, found ${expiredCount} expired tokens`);
}
scanLogsForExpiredTokens("./logs/api-access-2026-03.ndjson");import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";
async function extractUniqueSubjects(logPath) {
const rl = createInterface({
input: createReadStream(logPath, { encoding: "utf-8" }),
crlfDelay: Infinity,
});
const subjects = new Set();
const jwtRegex = /eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/g;
for await (const line of rl) {
const matches = line.match(jwtRegex);
if (!matches) continue;
for (const token of matches) {
try {
const payload = JSON.parse(
Buffer.from(token.split(".")[1], "base64url").toString("utf-8")
);
if (payload.sub) subjects.add(payload.sub);
} catch {
// Not a valid JWT
}
}
}
console.log(`Found ${subjects.size} unique subjects:`);
for (const sub of subjects) console.log(` ${sub}`);
}
extractUniqueSubjects("./logs/gateway-2026-03.log");readFileSync will pin memory and trigger GC pauses. The readline approach processes one line at a time with constant memory usage.Common Mistakes
Problem: atob() returns a Latin-1 string. Multi-byte UTF-8 characters (emoji, CJK, accented characters) are split across characters and come out garbled.
Fix: Convert the atob() output to a Uint8Array, then pass it through new TextDecoder('utf-8').
// Breaks on non-ASCII payload claims
const payload = JSON.parse(atob(token.split(".")[1]));
// display_name appears as "ç\x94°ä¸\xADå¤ªé\x83\x8E" instead of "田中太郎"

const binary = atob(token.split(".")[1].replace(/-/g, "+").replace(/_/g, "/"));
const bytes = Uint8Array.from(binary, c => c.charCodeAt(0));
const payload = JSON.parse(new TextDecoder("utf-8").decode(bytes));
// display_name correctly shows "田中太郎"

Problem: atob() throws "InvalidCharacterError" because base64url uses - and _ instead of + and /.
Fix: Replace - with + and _ with / before calling atob(). Node.js Buffer.from() with 'base64url' handles this automatically.
// Throws: InvalidCharacterError: String contains an invalid character
const payload = atob(token.split(".")[1]);

const segment = token.split(".")[1];
const base64 = segment.replace(/-/g, "+").replace(/_/g, "/");
const payload = atob(base64); // now works

Problem: Anyone can create a JWT with any payload. Decoding only reads the data; it does not prove the token was issued by your auth server.
Fix: On the server side, always verify the signature using jose.jwtVerify() or jsonwebtoken.verify(). Decode-only is acceptable for client-side display of user claims.
// DANGEROUS: decoded but not verified
const claims = JSON.parse(atob(token.split(".")[1]));
if (claims.role === "admin") {
grantAdminAccess(); // attacker can forge this
}

import * as jose from "jose";
const { payload } = await jose.jwtVerify(token, secretKey);
if (payload.role === "admin") {
grantAdminAccess(); // safe β signature is verified
}

Problem: JWT exp is in seconds since epoch, but Date.now() returns milliseconds. The comparison will always say the token is valid because the millisecond timestamp is 1000x larger.
Fix: Divide Date.now() by 1000 and floor the result before comparing to exp.
// Bug: Date.now() is milliseconds, exp is seconds
if (payload.exp > Date.now()) {
console.log("Token is valid"); // always true β wrong!
}

const nowSeconds = Math.floor(Date.now() / 1000);
if (payload.exp > nowSeconds) {
console.log("Token is valid"); // correct comparison
}

JWT Decode Methods: Quick Comparison
Use atob() + TextDecoder for browser-side decode when you just need to display claims to the user. Use Buffer.from() in Node.js scripts and CLI tools. Reach for jose the moment you need to verify a signature, which is any server-side auth middleware. The jwt-decode package is a lightweight alternative if you want a one-function API for decode-only in the browser. For quick visual inspection without writing code, paste your token into the JWT Decoder tool.
Frequently Asked Questions
How do I decode a JWT token in JavaScript without a library?
Split the token on ".", take the second segment (the payload), normalize the base64url encoding by replacing - with + and _ with /, pad with = characters, then call atob() followed by TextDecoder to get the UTF-8 JSON string. Pipe the result through JSON.parse() and you have the claims object. No npm package required. This approach works in all modern browsers and in Node.js 18+. If you also need to read the header, apply the same decoding steps to the first segment. Keep in mind that this gives you the raw data without any signature verification β treat the result as display-only unless you verify the signature server-side.
const token = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c3JfOTIxZiIsInJvbGUiOiJhZG1pbiJ9.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c";
const payload = token.split(".")[1];
const base64 = payload.replace(/-/g, "+").replace(/_/g, "/");
const padded = base64.padEnd(base64.length + (4 - (base64.length % 4)) % 4, "=");
const json = atob(padded);
const claims = JSON.parse(json);
console.log(claims);
// { sub: "usr_921f", role: "admin" }What is the difference between atob() and Buffer.from() for JWT decoding?
atob() is a browser API that decodes standard Base64 into a Latin-1 binary string. It does not understand base64url encoding directly, so you must replace - and _ characters first. Buffer.from(segment, "base64url") is a Node.js API that handles the base64url alphabet natively and returns a Buffer you can call .toString("utf-8") on. Use atob() in the browser, Buffer.from() in Node.js. A third option (slower but historically common) is the decodeURIComponent percent-encoding trick, but that pattern relies on the deprecated escape() function in some older snippets and should be avoided in new code. For isomorphic code that runs in both environments, check for typeof Buffer !== "undefined" and branch accordingly.
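That runtime branch can be sketched as a small helper. This is a minimal sketch, not a published API; the function names are illustrative.

```javascript
// Minimal isomorphic decode helper. Branches on the presence of Buffer:
// Node.js takes the base64url fast path, browsers fall back to
// atob() + TextDecoder.
function decodeSegment(segment) {
  if (typeof Buffer !== "undefined") {
    return Buffer.from(segment, "base64url").toString("utf-8");
  }
  const base64 = segment.replace(/-/g, "+").replace(/_/g, "/");
  const padded = base64.padEnd(base64.length + (4 - (base64.length % 4)) % 4, "=");
  const bytes = Uint8Array.from(atob(padded), c => c.charCodeAt(0));
  return new TextDecoder("utf-8").decode(bytes);
}

function decodeJwtPayload(token) {
  return JSON.parse(decodeSegment(token.split(".")[1]));
}
```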
// Browser
const json = atob(payload.replace(/-/g, "+").replace(/_/g, "/"));
// Node.js
const json2 = Buffer.from(payload, "base64url").toString("utf-8");

Why does atob() break on non-ASCII JWT claims?
atob() returns a Latin-1 string where each character maps to a single byte. Multi-byte UTF-8 sequences (emoji, CJK characters, accented letters beyond Latin-1) get split across multiple characters, producing garbled output. The fix is to convert the binary string to a Uint8Array first, then pass that array to new TextDecoder("utf-8").decode(). The TextDecoder API reassembles multi-byte sequences correctly. This issue is easy to miss in development because most JWT payloads only contain ASCII user IDs, timestamps, and role names; the bug only surfaces when a claim contains a non-ASCII display name or a localized string. Always use the TextDecoder path in new code even when your current payloads are ASCII-only, since claims may change as the application evolves.
// Broken: atob returns Latin-1, multi-byte chars are garbled
const broken = atob(base64); // "ð\x9F\x8E\x89" instead of the emoji
// Fixed: convert to byte array, then TextDecoder
const bytes = Uint8Array.from(atob(base64), c => c.charCodeAt(0));
const fixed = new TextDecoder("utf-8").decode(bytes);

Can I verify a JWT signature in JavaScript?
Decoding and verifying are different operations. Decoding just reads the payload, which is not encrypted. Verification checks the signature against a secret (HMAC) or public key (RSA/ECDSA). The jose library supports both in the browser via the Web Crypto API and in Node.js. The jsonwebtoken package works in Node.js only. Never trust decoded claims without verifying the signature on the server side. On the client side it is acceptable to decode a JWT to read the user's display name or expiration time, but any access control decision (checking whether a user has a particular role or permission) must happen in server-side code after verification. An attacker who understands the JWT format can craft a token with arbitrary claims and your client-side check will pass.
import * as jose from "jose";
const secret = new TextEncoder().encode("your-256-bit-secret");
const { payload } = await jose.jwtVerify(token, secret);
console.log(payload.sub); // verified claims

How do I check if a JWT is expired in JavaScript?
Decode the payload and read the exp claim, which is a Unix timestamp in seconds. Compare it to the current time using Math.floor(Date.now() / 1000). If the current time is greater than exp, the token is expired. Remember: the exp value is seconds since epoch, not milliseconds, so dividing Date.now() by 1000 is required. In practice, build in a small clock-skew buffer: checking whether the token expires within the next 30 seconds, rather than strictly in the past, prevents edge cases where the token is still technically valid when you decode it but expires by the time the next downstream API call processes it. Also handle the case where exp is absent entirely, which means the token never expires.
function isTokenExpired(token, skewSeconds = 30) {
  const payload = JSON.parse(
    atob(token.split(".")[1].replace(/-/g, "+").replace(/_/g, "/"))
  );
  if (payload.exp === undefined) return false; // no exp claim: token never expires
  const nowSeconds = Math.floor(Date.now() / 1000);
  // Treat tokens expiring within the skew window as already expired
  return payload.exp < nowSeconds + skewSeconds;
}
console.log(isTokenExpired(myToken)); // true or false

How do I write isomorphic JWT decode code that works in both Node.js and the browser?
Check for the existence of globalThis.Buffer. If it exists, you are in Node.js and can use Buffer.from(segment, "base64url").toString("utf-8"). If it does not exist, you are in a browser and should use atob() with the TextDecoder approach. Wrap this check in a single decodeBase64Url function and use it everywhere. This matters most for utility packages, design system components, and any shared code living in a monorepo package that is imported by both a Next.js server component and a browser React component. Keeping the environment detection in one place means you only need to update it in one spot if the runtime changes, for example when Deno adds full Buffer support or a new edge runtime requires a different code path.
function decodeBase64Url(segment) {
if (typeof Buffer !== "undefined") {
return Buffer.from(segment, "base64url").toString("utf-8");
}
const base64 = segment.replace(/-/g, "+").replace(/_/g, "/");
const bytes = Uint8Array.from(atob(base64), c => c.charCodeAt(0));
return new TextDecoder("utf-8").decode(bytes);
}
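Putting the pieces together, here is a small decodeJwt sketch that returns both the header and the payload as objects. The decodeBase64Url helper from above is repeated so the snippet is self-contained:

```javascript
// Environment-aware base64url decoder (same pattern as above,
// repeated here so this sketch runs on its own).
function decodeBase64Url(segment) {
  if (typeof Buffer !== "undefined") {
    return Buffer.from(segment, "base64url").toString("utf-8");
  }
  const base64 = segment.replace(/-/g, "+").replace(/_/g, "/");
  const bytes = Uint8Array.from(atob(base64), c => c.charCodeAt(0));
  return new TextDecoder("utf-8").decode(bytes);
}

// Decode (NOT verify) a JWT into its header and payload objects.
function decodeJwt(token) {
  const parts = token.split(".");
  if (parts.length !== 3) throw new Error("Not a JWT: expected 3 segments");
  return {
    header: JSON.parse(decodeBase64Url(parts[0])),
    payload: JSON.parse(decodeBase64Url(parts[1])),
  };
}
```

Reading decodeJwt(token).header.alg before verification tells you which algorithm and key to expect, but the claims themselves should still only be trusted after the signature check.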
About the Authors

Marcus specialises in JavaScript performance, build tooling, and the inner workings of the V8 engine. He has spent years profiling and optimising React applications, working on bundler configurations, and squeezing every millisecond out of critical rendering paths. He writes about Core Web Vitals, JavaScript memory management, and the tools developers reach for when performance really matters.
Sophie is a full-stack developer focused on TypeScript across the entire stack β from React frontends to Express and Fastify backends. She has a particular interest in type-safe API design, runtime validation, and the patterns that make large JavaScript codebases stay manageable. She writes about TypeScript idioms, Node.js internals, and the ever-evolving JavaScript module ecosystem.