Base64 Decode JavaScript — atob() & Buffer
Use the free Base64 Decoder directly in your browser — no install required.
When I debug a production auth issue, the first thing I reach for is a Base64 decoder — JWT payloads, webhook signatures, and encoded config values all hide inside Base64 strings. JavaScript provides two primary built-in approaches to Base64 decoding: atob() (browser + Node.js 16+) and Buffer.from(encoded, 'base64').toString() (Node.js) — and they behave very differently when the original data contained Unicode characters. For a quick one-off decode without writing any code, ToolDeck's Base64 Decoder handles it instantly in your browser. This guide covers both environments — targeting Node.js 16+ and modern browsers (Chrome 80+, Firefox 75+, Safari 14+) — with production-ready examples: UTF-8 recovery, URL-safe variants, JWT decoding, files, API responses, Node.js streams, and the four mistakes that consistently produce garbled output in real codebases.
- atob(encoded) is browser-native and available globally in Node.js 16+, but returns a binary string — use TextDecoder to recover UTF-8 text for any content above ASCII.
- Buffer.from(encoded, "base64").toString("utf8") is the idiomatic Node.js approach and handles UTF-8 automatically with no extra steps.
- URL-safe Base64 (used in JWTs) replaces + with -, / with _, and strips = padding. Restore these before calling atob(), or use Buffer.from(encoded, "base64url").toString() in Node.js 18+.
- Strip whitespace and newlines before decoding — the GitHub Contents API and many MIME encoders wrap Base64 output at 60–76 characters per line.
- Uint8Array.fromBase64() (a static method from the TC39 Stage 3 proposal) is already available in Node.js 22+ and Chrome 130+ and will eventually unify both environments.
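The last bullet's proposal can already be tried today behind a feature check — a minimal sketch, assuming only the proposal's basic signature; decodeBase64Portable is our own helper name, not a standard API:

```typescript
// Sketch: prefer the proposed Uint8Array.fromBase64() when the runtime ships it,
// and fall back to atob() + TextDecoder everywhere else.
function decodeBase64Portable(encoded: string): string {
  const U8 = Uint8Array as unknown as {
    fromBase64?: (s: string, opts?: { alphabet?: 'base64' | 'base64url' }) => Uint8Array
  }
  const bytes = U8.fromBase64
    ? U8.fromBase64(encoded) // native path (Node.js 22+, Chrome 130+)
    : Uint8Array.from(atob(encoded), ch => ch.charCodeAt(0)) // portable fallback
  return new TextDecoder().decode(bytes)
}
console.log(decodeBase64Portable('ZGVwbG95LWJvdA==')) // deploy-bot
```

Either branch yields the same Uint8Array, so callers don't need to care which path ran.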
What is Base64 Decoding?
Base64 decoding is the inverse of encoding — it converts the 64-character ASCII representation back to the original binary data or text. Every 4 Base64 characters map back to exactly 3 bytes. The = padding characters at the end of an encoded string tell the decoder how many bytes the final 3-byte group is short: one = means the last group carries two bytes of real data, two mean it carries one.
Base64 is not encryption — the operation is completely reversible by anyone with the encoded string. Its purpose is transport safety: protocols and storage formats designed for 7-bit ASCII text cannot handle arbitrary binary bytes, and Base64 bridges that gap. Common JavaScript decoding scenarios include inspecting JWT payloads, unpacking Base64-encoded JSON configs from environment variables, extracting binary file content from REST APIs, and decoding data URIs in the browser.
ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=
deploy-bot:sk-prod-a7f2c91e4b3d8
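That 4-to-3 mapping means the decoded size is predictable before you decode anything — a handy sanity check. A minimal sketch (decodedByteLength is our own helper name, not a built-in):

```typescript
// Sketch: every 4 Base64 chars decode to 3 bytes, minus one byte per '=' pad.
function decodedByteLength(encoded: string): number {
  const padding = (encoded.match(/=+$/)?.[0] ?? '').length
  return (encoded.length / 4) * 3 - padding
}
const encoded = 'ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg='
console.log(decodedByteLength(encoded)) // 32
console.log(atob(encoded).length) // 32 — matches the actual decode
```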
atob() — The Browser-Native Decoding Function
atob() (ASCII-to-binary) has been available in browsers since IE10 and became a global in Node.js 16.0 as part of the WinterCG compatibility initiative. It also works natively in Deno, Bun, and Cloudflare Workers — no import required.
The function returns a binary string: a JavaScript string where each character has a code point equal to one raw byte value (0–255). This matters: if the original data was UTF-8 text containing characters above U+007F (accented letters, Cyrillic, CJK, emoji), the returned string is the raw byte sequence, not readable text. Use TextDecoder to recover it (covered in the next section).
Minimal working example
// Decoding an HTTP Basic Auth credential pair received in a request header
// Authorization: Basic ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=
function parseBasicAuth(header: string): { serviceId: string; apiKey: string } {
const base64Part = header.replace(/^Basic\s+/i, '')
const decoded = atob(base64Part)
const [serviceId, apiKey] = decoded.split(':')
return { serviceId, apiKey }
}
const auth = parseBasicAuth('Basic ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=')
console.log(auth.serviceId) // deploy-bot
console.log(auth.apiKey) // sk-prod-a7f2c91e4b3d8
Round-trip verification
// Verify lossless recovery for ASCII-only content
const original = 'service:payments region:eu-west-1 env:production'
const encoded = btoa(original)
const decoded = atob(encoded)
console.log(encoded) // c2VydmljZTpwYXltZW50cyByZWdpb246ZXUtd2VzdC0xIGVudjpwcm9kdWN0aW9u
console.log(decoded === original) // true
atob() and btoa() are part of the WinterCG Minimum Common API — the same spec that governs Fetch, URL, and crypto in non-browser runtimes. They behave identically in Node.js 16+, Bun, Deno, and Cloudflare Workers.
Recovering UTF-8 Text After Decoding
The most common atob() pitfall is misunderstanding its return type. When the original text was encoded as UTF-8 before Base64, atob() returns a Latin-1 binary string, not the readable text:
// 'Алексей Иванов' was UTF-8 encoded then Base64 encoded before transmission
const encoded = '0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy'
// ❌ atob() returns the raw UTF-8 bytes as a Latin-1 string — garbled output
console.log(atob(encoded)) // "ÐÐ»ÐµÐºÑÐµÐ¹ ÐÐ²Ð°Ð½Ð¾Ð²" — byte values misread as Latin-1
The correct approach uses TextDecoder to interpret those raw bytes as UTF-8:
TextDecoder approach — safe for any Unicode output
// Unicode-safe Base64 decode utilities
function fromBase64(encoded: string): string {
const binary = atob(encoded)
const bytes = Uint8Array.from(binary, ch => ch.charCodeAt(0))
return new TextDecoder().decode(bytes)
}
function toBase64(text: string): string {
const bytes = new TextEncoder().encode(text)
const chars = Array.from(bytes, byte => String.fromCharCode(byte))
return btoa(chars.join(''))
}
// Works with any language or script
const orderNote = 'Confirmed: 田中太郎 → São Paulo warehouse, qty: 250'
const encoded = toBase64(orderNote)
const decoded = fromBase64(encoded)
console.log(decoded === orderNote) // true
console.log(decoded)
// Confirmed: 田中太郎 → São Paulo warehouse, qty: 250
In Node.js, prefer Buffer.from(encoded, 'base64').toString('utf8'). It interprets the decoded bytes as UTF-8 automatically and is faster for large inputs.
Buffer.from() in Node.js — Complete Decoding Guide
In Node.js, Buffer is the idiomatic API for all binary operations including Base64 decoding. It handles UTF-8 natively, returns a proper Buffer (binary-safe), and since Node.js 18 supports the 'base64url' encoding shortcut for URL-safe variants.
Decoding an environment variable config
// Server config stored as Base64 in an env variable (avoids JSON escaping in shell)
// DB_CONFIG=eyJob3N0IjoiZGItcHJpbWFyeS5pbnRlcm5hbCIsInBvcnQiOjU0MzIsImRhdGFiYXNlIjoiYW5hbHl0aWNzX3Byb2QiLCJtYXhDb25uZWN0aW9ucyI6MTAwfQ==
const raw = Buffer.from(process.env.DB_CONFIG!, 'base64').toString('utf8')
const dbConfig = JSON.parse(raw)
console.log(dbConfig.host) // db-primary.internal
console.log(dbConfig.port) // 5432
console.log(dbConfig.maxConnections) // 100
Restoring a binary file from a .b64 file
import { readFileSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'
// Read the Base64-encoded certificate and restore the original binary
const encoded = readFileSync(join(process.cwd(), 'dist', 'cert.b64'), 'utf8').trim()
const certBuf = Buffer.from(encoded, 'base64')
writeFileSync('./ssl/server.crt', certBuf)
console.log(`Restored ${certBuf.length} bytes`)
// Restored 2142 bytes
Async decoding with error handling
import { readFile, writeFile } from 'node:fs/promises'
async function decodeBase64File(
encodedPath: string,
outputPath: string,
): Promise<number> {
try {
const encoded = await readFile(encodedPath, 'utf8')
const binary = Buffer.from(encoded.trim(), 'base64')
await writeFile(outputPath, binary)
return binary.length
} catch (err) {
const code = (err as NodeJS.ErrnoException).code
if (code === 'ENOENT') throw new Error(`File not found: ${encodedPath}`)
if (code === 'EACCES') throw new Error(`Permission denied: ${encodedPath}`)
throw err
}
}
// Restore a PDF stored as Base64
const bytes = await decodeBase64File('./uploads/invoice.b64', './out/invoice.pdf')
console.log(`Decoded ${bytes} bytes → PDF restored`)
Base64 Decoding Functions — Parameters Reference
Quick reference for the parameters of the two primary native decoding APIs, formatted for use as a lookup when writing or reviewing code.
atob(encodedData)
| Parameter | Type | Required | Description |
|---|---|---|---|
| encodedData | string | Yes | Standard Base64 string using the +, /, = characters. URL-safe variants (-, _) throw InvalidCharacterError. Modern engines strip ASCII whitespace first (WHATWG forgiving-base64), but older implementations reject it — strip it yourself to be safe. |
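Because invalid characters throw, untrusted input belongs in a try/catch. A defensive sketch (safeAtob is a hypothetical helper name, not a built-in):

```typescript
// Sketch: defensive atob() for untrusted input. Stripping whitespace first keeps
// behaviour consistent across runtimes; genuinely invalid characters still throw,
// so the exception is converted into a null return.
function safeAtob(encoded: string): string | null {
  try {
    return atob(encoded.replace(/\s/g, ''))
  } catch {
    return null // DOMException with name "InvalidCharacterError"
  }
}
console.log(safeAtob('aGVsbG8='))          // hello
console.log(safeAtob('not!valid@base64#')) // null
```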
Buffer.from(input, inputEncoding) / .toString(outputEncoding)
| Parameter | Type | Default | Description |
|---|---|---|---|
| input | string \| Buffer \| TypedArray \| ArrayBuffer | required | The Base64-encoded string to decode, or a buffer containing encoded bytes. |
| inputEncoding | BufferEncoding | "utf8" | Set to "base64" for standard Base64 (RFC 4648 §4), or "base64url" for URL-safe Base64 (RFC 4648 §5, Node.js 18+). |
| outputEncoding | string | "utf8" | Encoding for .toString() output. Use "utf8" for readable text, "binary" for a Latin-1 binary string compatible with atob() output. |
| start | integer | 0 | Byte offset within the decoded Buffer to begin reading. Passed to .toString() as the second argument. |
| end | integer | buf.length | Byte offset to stop reading (exclusive). Passed to .toString() as the third argument. |
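The start and end offsets from the table let you read a byte range of the decoded Buffer directly, without slicing it first — a small sketch using a made-up credential string:

```typescript
// Sketch: .toString(encoding, start, end) reads only the given byte range.
const buf = Buffer.from('c2VydmljZTpwYXltZW50cw==', 'base64') // "service:payments"
console.log(buf.toString('utf8'))       // service:payments
console.log(buf.toString('utf8', 0, 7)) // service   (bytes 0–6)
console.log(buf.toString('utf8', 8))    // payments  (byte 8 to end)
```

Note the offsets count decoded bytes, not Base64 characters — with multi-byte UTF-8 content, slicing mid-character yields replacement characters.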
URL-safe Base64 — Decoding JWTs and URL Parameters
JWTs use URL-safe Base64 (RFC 4648 §5) for all three segments. URL-safe Base64 replaces + with - and / with _, and strips trailing = padding. Passing this directly to atob() without restoration produces incorrect output or throws.
Browser — restore characters and padding before decoding
function decodeBase64Url(input: string): string {
const base64 = input.replace(/-/g, '+').replace(/_/g, '/')
const padded = base64 + '==='.slice(0, (4 - base64.length % 4) % 4)
const binary = atob(padded)
const bytes = Uint8Array.from(binary, ch => ch.charCodeAt(0))
return new TextDecoder().decode(bytes)
}
// Inspect a JWT payload segment (the middle part between the two dots)
const payloadSegment = 'eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciIsIndvcmtzcGFjZUlkIjoid3NfM2E3ZjkxYzIiLCJleHAiOjE3MTcyMDM2MDB9'
const payload = decodeBase64Url(payloadSegment)
const claims = JSON.parse(payload)
console.log(claims.userId) // usr_9f2a1c3e8b4d
console.log(claims.role) // editor
console.log(claims.workspaceId) // ws_3a7f91c2
Node.js 18+ — native 'base64url' encoding
// Node.js 18 added 'base64url' as a first-class Buffer encoding — no manual replace needed
function decodeJwtSegment(segment: string): Record<string, unknown> {
const json = Buffer.from(segment, 'base64url').toString('utf8')
return JSON.parse(json)
}
const token = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciIsIndvcmtzcGFjZUlkIjoid3NfM2E3ZjkxYzIiLCJleHAiOjE3MTcyMDM2MDB9.SIGNATURE'
const [headerB64, payloadB64] = token.split('.')
const header = decodeJwtSegment(headerB64)
const payload = decodeJwtSegment(payloadB64)
console.log(header.alg) // HS256
console.log(payload.role) // editor
console.log(payload.workspaceId) // ws_3a7f91c2
Decoding Base64 from Files and API Responses
In production code, Base64 decoding most often happens when consuming external APIs that deliver content in encoded form. Both scenarios have important gotchas around whitespace and binary vs text output. If you just need to inspect an encoded response during debugging, paste it directly into the Base64 Decoder — it handles standard and URL-safe modes instantly.
Decoding content from the GitHub Contents API
// GitHub Contents API returns file content as Base64, wrapped at 60 chars per line
async function fetchDecodedFile(
owner: string,
repo: string,
path: string,
token: string,
): Promise<string> {
const res = await fetch(
`https://api.github.com/repos/${owner}/${repo}/contents/${path}`,
{ headers: { Authorization: `Bearer ${token}`, Accept: 'application/vnd.github.v3+json' } }
)
if (!res.ok) throw new Error(`GitHub API ${res.status}: ${res.statusText}`)
const data = await res.json() as { content: string; encoding: string }
if (data.encoding !== 'base64') throw new Error(`Unexpected encoding: ${data.encoding}`)
// ⚠️ GitHub wraps at 60 chars — strip newlines before decoding
const clean = data.content.replace(/\n/g, '')
return Buffer.from(clean, 'base64').toString('utf8')
}
const openApiSpec = await fetchDecodedFile('acme-corp', 'platform-api', 'openapi.json', process.env.GITHUB_TOKEN!)
const spec = JSON.parse(openApiSpec)
console.log(`API version: ${spec.info.version}`)
Decoding a Base64-encoded binary from an API (browser)
// Some APIs return binary content (images, PDFs) as Base64 JSON fields
async function downloadDecodedFile(endpoint: string, authToken: string): Promise<void> {
const res = await fetch(endpoint, { headers: { Authorization: `Bearer ${authToken}` } })
if (!res.ok) throw new Error(`Download failed: ${res.status}`)
const { filename, content, mimeType } = await res.json() as {
filename: string; content: string; mimeType: string
}
// Decode Base64 → binary bytes → Blob
const binary = atob(content)
const bytes = Uint8Array.from(binary, ch => ch.charCodeAt(0))
const blob = new Blob([bytes], { type: mimeType })
// Trigger browser download
const url = URL.createObjectURL(blob)
const a = Object.assign(document.createElement('a'), { href: url, download: filename })
a.click()
URL.revokeObjectURL(url)
}
await downloadDecodedFile('/api/reports/latest', sessionStorage.getItem('auth_token')!)
Command-Line Base64 Decoding in Node.js and Shell
For CI/CD scripts, debugging sessions, or one-off decoding tasks, shell tools and Node.js one-liners are faster than a full script. Note that the flag name differs between macOS and Linux.
# ── macOS / Linux system base64 ──────────────────────────────────────
# Standard decoding (macOS uses -D, Linux uses -d)
echo "ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=" | base64 -d # Linux
echo "ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=" | base64 -D # macOS
# Decode a .b64 file to its original binary
base64 -d ./dist/cert.b64 > ./ssl/server.crt # Linux
base64 -D -i ./dist/cert.b64 -o ./ssl/server.crt # macOS
# URL-safe Base64 — restore + and / before decoding
echo "eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ" | tr '-_' '+/' | base64 -d
# ── Node.js one-liner — works on Windows too ─────────────────────────
node -e "process.stdout.write(Buffer.from(process.argv[1], 'base64').toString())" "ZGVwbG95LWJvdA=="
# deploy-bot
# URL-safe (Node.js 18+)
node -e "process.stdout.write(Buffer.from(process.argv[1], 'base64url').toString())" "eyJhbGciOiJIUzI1NiJ9"
# {"alg":"HS256"}base64 uses -D to decode (uppercase D), while Linux uses -d(lowercase). This breaks CI scripts silently β use a Node.js one-liner when the target platform isn't guaranteed to be Linux.High-Performance Alternative: js-base64
The main reason to reach for a library is cross-environment consistency. If you ship a package that runs in both browser and Node.js without bundler configuration, Buffer requires environment detection and atob() requires the TextDecoder workaround. js-base64 (one of the most-downloaded Base64 packages on npm) handles both transparently.
npm install js-base64 # or pnpm add js-base64
import { fromBase64, fromBase64Url, isValid } from 'js-base64'
// Standard decoding — Unicode-safe, works in browser and Node.js
const raw = fromBase64('eyJldmVudElkIjoiZXZ0XzdjM2E5ZjFiMmQiLCJ0eXBlIjoiY2hlY2tvdXRfY29tcGxldGVkIiwiY3VycmVuY3kiOiJFVVIiLCJhbW91bnQiOjE0OTAwfQ==')
const event = JSON.parse(raw)
console.log(event.type) // checkout_completed
console.log(event.currency) // EUR
// URL-safe decoding — no manual character replacement needed
const jwtPayload = fromBase64Url('eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciJ9')
const claims = JSON.parse(jwtPayload)
console.log(claims.role) // editor
// Validate before decoding untrusted input
const untrusted = 'not!valid@base64#'
if (!isValid(untrusted)) {
console.error('Rejected: invalid Base64 input')
}
// Binary output — second argument true returns Uint8Array
const pngBytes = fromBase64('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg==', true)
console.log(pngBytes instanceof Uint8Array) // true
Terminal Output with Syntax Highlighting
When writing CLI debugging tools or inspection scripts, plain console.log output is hard to read for large JSON payloads. chalk (the most downloaded npm package for terminal coloring) combined with Base64 decoding produces readable, scannable terminal output — useful for JWT inspection, API response debugging, and config auditing.
npm install chalk # chalk v5+ is ESM-only — use import, not require
import chalk from 'chalk'
// Decode and display any Base64 value with smart type detection
function inspectBase64(encoded: string, label = 'Decoded value'): void {
let decoded: string
try {
decoded = Buffer.from(encoded.trim(), 'base64').toString('utf8')
} catch {
console.error(chalk.red('❌ Invalid Base64 input'))
return
}
console.log(chalk.bold.cyan(`\n── ${label} ──`))
// Attempt JSON pretty-print
try {
const parsed = JSON.parse(decoded)
console.log(chalk.green('Type:'), chalk.yellow('JSON'))
for (const [key, value] of Object.entries(parsed)) {
const display = typeof value === 'object' ? JSON.stringify(value) : String(value)
console.log(chalk.green(` ${key}:`), chalk.white(display))
}
return
} catch { /* not JSON */ }
// Plain text fallback
console.log(chalk.green('Type:'), chalk.yellow('text'))
console.log(chalk.white(decoded))
}
// Inspect a Base64-encoded JWT payload
const tokenParts = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciIsImV4cCI6MTcxNzIwMzYwMH0.SIGNATURE'.split('.')
inspectBase64(tokenParts[0], 'JWT Header')
inspectBase64(tokenParts[1], 'JWT Payload')
// ── JWT Header ──
// Type: JSON
// alg: HS256
// typ: JWT
//
// ── JWT Payload ──
// Type: JSON
// userId: usr_9f2a1c3e8b4d
// role: editor
// exp: 1717203600
Decode Large Base64 Files with Node.js Streams
When a Base64-encoded file exceeds ~50 MB, loading it entirely into memory with readFileSync() becomes a problem. Node.js streams let you decode data in chunks — but Base64 requires multiples of 4 characters per chunk (each 4-char group decodes to exactly 3 bytes) to avoid padding errors at chunk boundaries.
import { createReadStream, createWriteStream } from 'node:fs'
import { pipeline } from 'node:stream/promises'
async function streamDecodeBase64(inputPath: string, outputPath: string): Promise<void> {
const readStream = createReadStream(inputPath, { encoding: 'utf8', highWaterMark: 4 * 1024 * 192 })
const writeStream = createWriteStream(outputPath)
let buffer = ''
await pipeline(
readStream,
async function* (source) {
for await (const chunk of source) {
buffer += (chunk as string).replace(/\s/g, '') // strip any whitespace/newlines
// Decode only complete 4-char groups to avoid mid-stream padding issues
const remainder = buffer.length % 4
const safe = buffer.slice(0, buffer.length - remainder)
buffer = buffer.slice(buffer.length - remainder)
if (safe.length > 0) yield Buffer.from(safe, 'base64')
}
if (buffer.length > 0) yield Buffer.from(buffer, 'base64')
},
writeStream,
)
}
// Decode a 200 MB video that was stored as Base64
await streamDecodeBase64('./uploads/product-demo.b64', './dist/product-demo.mp4')
console.log('Stream decode complete')
The highWaterMark of 4 × 1024 × 192 = 786,432 characters (768 KB) keeps each chunk a multiple of 4. For files under 50 MB, readFile() + Buffer.from(content.trim(), 'base64') is simpler and fast enough.
Common Mistakes
I've seen these four mistakes in JavaScript codebases repeatedly — they tend to stay hidden until a non-ASCII character or a line-wrapped API response reaches the decoding path in production.
Mistake 1 — Using atob() without TextDecoder for UTF-8 content
Problem: atob() returns a binary string where each character is one raw byte value. UTF-8 multi-byte sequences (Cyrillic, CJK, accented characters) appear as garbled Latin-1 characters. Fix: wrap the output in TextDecoder.
// ❌ atob() returns the raw UTF-8 bytes as a Latin-1 string
const encoded = '0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy'
const decoded = atob(encoded)
console.log(decoded) // "ÐÐ»ÐµÐºÑÐµÐ¹ ÐÐ²Ð°Ð½Ð¾Ð²" — wrong
// ✅ Use TextDecoder to correctly interpret the UTF-8 bytes
const encoded = '0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy'
const binary = atob(encoded)
const bytes = Uint8Array.from(binary, ch => ch.charCodeAt(0))
const decoded = new TextDecoder().decode(bytes)
console.log(decoded) // Алексей Иванов ✅
Mistake 2 — Passing URL-safe Base64 directly to atob()
Problem: JWT segments use - and _ instead of + and /, with no padding. atob() may return wrong data or throw. Fix: restore standard characters and re-add padding first.
// ❌ URL-safe JWT segment passed directly — unreliable
const jwtPayload = 'eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ'
const decoded = atob(jwtPayload) // May produce wrong result or throw
// ✅ Restore standard Base64 chars and padding first
function decodeBase64Url(input: string): string {
const b64 = input.replace(/-/g, '+').replace(/_/g, '/')
const pad = b64 + '==='.slice(0, (4 - b64.length % 4) % 4)
const bin = atob(pad)
const bytes = Uint8Array.from(bin, ch => ch.charCodeAt(0))
return new TextDecoder().decode(bytes)
}
const decoded = decodeBase64Url('eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ')
// {"userId":"usr_9f2a1c3e"} βMistake 3 β Not stripping newlines from line-wrapped Base64
Problem: The GitHub Contents API and MIME encoders wrap Base64 output at 60–76 characters per line. Strict decoders and older atob() implementations throw InvalidCharacterError on the embedded \n characters (modern engines strip ASCII whitespace per the WHATWG forgiving-base64 algorithm, but don't rely on it). Fix: strip all whitespace before decoding.
// ❌ GitHub API content field contains embedded newlines
const data = await res.json()
const decoded = atob(data.content) // may throw InvalidCharacterError
// ✅ Strip newlines (and any other whitespace) before decoding
const data = await res.json()
const clean = data.content.replace(/\s/g, '')
const decoded = atob(clean) // ✅
Mistake 4 — Calling .toString() on decoded binary content
Problem: When the original data is binary (images, PDFs, audio), calling .toString('utf8') replaces unrecognised byte sequences with U+FFFD, silently corrupting the output. Fix: keep the result as a Buffer — don't convert to a string.
// ❌ .toString('utf8') corrupts binary content
import { readFileSync, writeFileSync } from 'node:fs'
const encoded = readFileSync('./uploads/invoice.b64', 'utf8').trim()
const corrupted = Buffer.from(encoded, 'base64').toString('utf8') // ❌
writeFileSync('./out/invoice.pdf', corrupted) // ❌ unreadable PDF
// ✅ Keep the Buffer as binary — do not convert to a string
import { readFileSync, writeFileSync } from 'node:fs'
const encoded = readFileSync('./uploads/invoice.b64', 'utf8').trim()
const binary = Buffer.from(encoded, 'base64') // ✅ raw bytes preserved
writeFileSync('./out/invoice.pdf', binary) // ✅ valid PDF
JavaScript Base64 Decoding Methods — Quick Comparison
| Method | UTF-8 output | Binary output | URL-safe | Environments | Requires install |
|---|---|---|---|---|---|
| atob() | ⚠️ needs TextDecoder | ⚠️ binary string | ❌ manual restore | Browser, Node 16+, Bun, Deno | No |
| TextDecoder + atob() | ✅ UTF-8 | ✅ via Uint8Array | ❌ manual restore | Browser, Node 16+, Deno | No |
| Buffer.from().toString() | ✅ utf8 | ✅ keep as Buffer | ✅ base64url (Node 18+) | Node.js, Bun | No |
| Uint8Array.fromBase64() (TC39) | ⚠️ via TextDecoder | ✅ native | ✅ alphabet option | Chrome 130+, Node 22+ | No |
| js-base64 | ✅ always | ✅ Uint8Array | ✅ built-in | Universal | npm install |
| js-base64 | β always | β Uint8Array | β built-in | Universal | npm install |
Choose atob() only when the decoded content is guaranteed to be ASCII text. For any user-provided or multi-language text in a browser, use TextDecoder + atob(). For Node.js server-side code, Buffer is the right default β it handles UTF-8 automatically and keeps binary data intact. For cross-environment libraries, js-base64 removes all edge cases.
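Those rules of thumb can be folded into a single cross-environment helper — a sketch (decodeUniversal is our own name, not a standard API) that prefers Buffer when the runtime provides it and falls back to atob() + TextDecoder:

```typescript
// Sketch: runtime-dispatching Base64 → UTF-8 string decoder.
function decodeUniversal(encoded: string): string {
  const clean = encoded.replace(/\s/g, '') // tolerate line-wrapped input
  if (typeof Buffer !== 'undefined') {
    // Node.js / Bun: Buffer handles UTF-8 natively
    return Buffer.from(clean, 'base64').toString('utf8')
  }
  // Browsers / workers: atob() + TextDecoder for Unicode safety
  const binary = atob(clean)
  return new TextDecoder().decode(Uint8Array.from(binary, ch => ch.charCodeAt(0)))
}
console.log(decodeUniversal('0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy')) // Алексей Иванов
```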
Related Tools
For a one-click decode without writing any code, paste your Base64 string directly into the Base64 Decoder — it handles standard and URL-safe modes with immediate output in your browser.
Alex is a front-end and Node.js developer with extensive experience building web applications and developer tooling. He is passionate about web standards, browser APIs, and the JavaScript ecosystem. In his spare time he contributes to open-source projects and writes about modern JavaScript patterns, performance optimisation, and everything related to the web platform.
Sophie is a full-stack developer focused on TypeScript across the entire stack β from React frontends to Express and Fastify backends. She has a particular interest in type-safe API design, runtime validation, and the patterns that make large JavaScript codebases stay manageable. She writes about TypeScript idioms, Node.js internals, and the ever-evolving JavaScript module ecosystem.