Base64 Decode in JavaScript: atob() & Buffer

Alex Chen · Front-end & Node.js Developer · Reviewed by Sophie Laurent

Use the free Base64 Decoder directly in your browser; no install required.

Try Base64 Decode Online →

When I debug a production auth issue, the first thing I reach for is a Base64 decoder: JWT payloads, webhook signatures, and encoded config values all hide inside Base64 strings. JavaScript provides two primary built-in approaches to Base64 decoding: atob() (browser and Node.js 16+) and Buffer.from(encoded, 'base64').toString() (Node.js), and they behave very differently when the original data contained Unicode characters. For a quick one-off decode without writing any code, ToolDeck's Base64 Decoder handles it instantly in your browser. This guide covers both environments, targeting Node.js 16+ and modern browsers (Chrome 80+, Firefox 75+, Safari 14+), with production-ready examples: UTF-8 recovery, URL-safe variants, JWT decoding, files, API responses, Node.js streams, and the four mistakes that consistently produce garbled output in real codebases.

  • ✓ atob(encoded) is browser-native and available globally in Node.js 16+, but it returns a binary string; use TextDecoder to recover UTF-8 text for any content above ASCII.
  • ✓ Buffer.from(encoded, "base64").toString("utf8") is the idiomatic Node.js approach and handles UTF-8 automatically with no extra steps.
  • ✓ URL-safe Base64 (used in JWTs) replaces + with -, / with _, and strips = padding. Restore these before calling atob(), or use Buffer.from(encoded, "base64url").toString() in Node.js 18+.
  • ✓ Strip whitespace and newlines before decoding; the GitHub Contents API and many MIME encoders wrap Base64 output at 60-76 characters per line.
  • ✓ Uint8Array.fromBase64(), a static method from a TC39 proposal, is starting to ship in modern engines and will eventually unify both environments.

What is Base64 Decoding?

Base64 decoding is the inverse of encoding: it converts the 64-character ASCII representation back to the original binary data or text. Every 4 Base64 characters map back to exactly 3 bytes. The = padding characters at the end of an encoded string tell the decoder how many filler bytes were added to complete the final 3-byte group, so it can drop them and recover the exact original length.
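The 4-to-3 mapping means you can compute the decoded size directly from the encoded string. A minimal sketch (decodedLength is an illustrative helper name, not a built-in):

```javascript
// Each complete 4-char group decodes to 3 bytes; trailing '=' marks filler bytes.
function decodedLength(b64) {
  const padding = (b64.match(/=+$/) || [''])[0].length
  return (b64.length / 4) * 3 - padding
}

console.log(decodedLength('Zm9v')) // 3 bytes ("foo", no padding)
console.log(decodedLength('Zm8=')) // 2 bytes ("fo", one '=')
console.log(decodedLength('Zg==')) // 1 byte  ("f",  two '=')
```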

Base64 is not encryption: the operation is completely reversible by anyone with the encoded string. Its purpose is transport safety: protocols and storage formats designed for 7-bit ASCII text cannot handle arbitrary binary bytes, and Base64 bridges that gap. Common JavaScript decoding scenarios include inspecting JWT payloads, unpacking Base64-encoded JSON configs from environment variables, extracting binary file content from REST APIs, and decoding data URIs in the browser.
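The data-URI case is a one-line decode once the metadata prefix is split off. A quick sketch (the URI content here is made up for illustration):

```javascript
// A data: URI embeds the Base64 payload after the first comma;
// everything before the comma is MIME type and encoding metadata.
const dataUri = 'data:text/plain;base64,aGVsbG8sIHdvcmxk'
const encoded = dataUri.split(',')[1]

console.log(atob(encoded)) // hello, world
```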

Before (text):
ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=

After (text):
deploy-bot:sk-prod-a7f2c91e4b3d8

atob(): The Browser-Native Decoding Function

atob() (ASCII-to-binary) has been available in browsers since IE10 and became a global in Node.js 16.0 as part of the WinterCG compatibility initiative. It also works natively in Deno, Bun, and Cloudflare Workers, with no import required.

The function returns a binary string: a JavaScript string where each character has a code point equal to one raw byte value (0-255). This matters: if the original data was UTF-8 text containing characters above U+007F (accented letters, Cyrillic, CJK, emoji), the returned string is the raw byte sequence, not readable text. Use TextDecoder to recover it (covered in the next section).
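You can see the byte-per-character behaviour directly by decoding two known byte values:

```javascript
// 'AP8=' encodes exactly two bytes: 0x00 and 0xFF
const binary = atob('AP8=')

console.log(binary.length)        // 2 (one character per byte)
console.log(binary.charCodeAt(0)) // 0
console.log(binary.charCodeAt(1)) // 255
```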

Minimal working example

JavaScript (browser / Node.js 16+)
// Decoding an HTTP Basic Auth credential pair received in a request header
// Authorization: Basic ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=

function parseBasicAuth(header: string): { serviceId: string; apiKey: string } {
  const base64Part = header.replace(/^Basic\s+/i, '')
  const decoded    = atob(base64Part)
  const [serviceId, apiKey] = decoded.split(':')
  return { serviceId, apiKey }
}

const auth = parseBasicAuth('Basic ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=')

console.log(auth.serviceId) // deploy-bot
console.log(auth.apiKey)    // sk-prod-a7f2c91e4b3d8

Round-trip verification

JavaScript
// Verify lossless recovery for ASCII-only content
const original = 'service:payments region:eu-west-1 env:production'

const encoded = btoa(original)
const decoded = atob(encoded)

console.log(encoded)
// c2VydmljZTpwYXltZW50cyByZWdpb246ZXUtd2VzdC0xIGVudjpwcm9kdWN0aW9u

console.log(decoded === original) // true
Note: atob() and btoa() are part of the WinterCG Minimum Common API, the same spec that governs Fetch, URL, and crypto in non-browser runtimes. They behave identically in Node.js 16+, Bun, Deno, and Cloudflare Workers.

Recovering UTF-8 Text After Decoding

The most common atob() pitfall is misunderstanding its return type. When the original text was encoded as UTF-8 before Base64, atob() returns a Latin-1 binary string, not the readable text:

JavaScript
// 'Алексей Иванов' was UTF-8 encoded, then Base64 encoded before transmission
const encoded = '0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy'

// ❌ atob() returns the raw UTF-8 bytes as a Latin-1 string, so the output is garbled
console.log(atob(encoded))
// mojibake such as "Ð»ÐµÐºÑ…" ← each UTF-8 byte misread as one Latin-1 character

The correct approach uses TextDecoder to interpret those raw bytes as UTF-8:

TextDecoder approach: safe for any Unicode output

JavaScript (browser + Node.js 16+)
// Unicode-safe Base64 decode utilities
function fromBase64(encoded: string): string {
  const binary = atob(encoded)
  const bytes  = Uint8Array.from(binary, ch => ch.charCodeAt(0))
  return new TextDecoder().decode(bytes)
}

function toBase64(text: string): string {
  const bytes = new TextEncoder().encode(text)
  const chars = Array.from(bytes, byte => String.fromCharCode(byte))
  return btoa(chars.join(''))
}

// Works with any language or script
const orderNote = 'Confirmed: 田中太郎, São Paulo warehouse, qty: 250'
const encoded   = toBase64(orderNote)
const decoded   = fromBase64(encoded)

console.log(decoded === orderNote) // true
console.log(decoded)
// Confirmed: 田中太郎, São Paulo warehouse, qty: 250
Note: In Node.js, skip the TextDecoder step entirely and use Buffer.from(encoded, 'base64').toString('utf8'). It interprets the decoded bytes as UTF-8 automatically and is faster for large inputs.

Buffer.from() in Node.js: Complete Decoding Guide

In Node.js, Buffer is the idiomatic API for all binary operations including Base64 decoding. It handles UTF-8 natively, returns a proper Buffer (binary-safe), and since Node.js 18 supports the 'base64url' encoding shortcut for URL-safe variants.

Decoding an environment variable config

Node.js
// Server config stored as Base64 in an env variable (avoids JSON escaping in shell)
// DB_CONFIG=eyJob3N0IjoiZGItcHJpbWFyeS5pbnRlcm5hbCIsInBvcnQiOjU0MzIsImRhdGFiYXNlIjoiYW5hbHl0aWNzX3Byb2QiLCJtYXhDb25uZWN0aW9ucyI6MTAwfQ==

const raw = Buffer.from(process.env.DB_CONFIG!, 'base64').toString('utf8')
const dbConfig = JSON.parse(raw)

console.log(dbConfig.host)           // db-primary.internal
console.log(dbConfig.port)           // 5432
console.log(dbConfig.maxConnections) // 100

Restoring a binary file from a .b64 file

Node.js
import { readFileSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'

// Read the Base64-encoded certificate and restore the original binary
const encoded = readFileSync(join(process.cwd(), 'dist', 'cert.b64'), 'utf8').trim()
const certBuf  = Buffer.from(encoded, 'base64')

writeFileSync('./ssl/server.crt', certBuf)

console.log(`Restored ${certBuf.length} bytes`)
// Restored 2142 bytes

Async decoding with error handling

Node.js
import { readFile, writeFile } from 'node:fs/promises'

async function decodeBase64File(
  encodedPath: string,
  outputPath:  string,
): Promise<number> {
  try {
    const encoded = await readFile(encodedPath, 'utf8')
    const binary  = Buffer.from(encoded.trim(), 'base64')
    await writeFile(outputPath, binary)
    return binary.length
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code
    if (code === 'ENOENT') throw new Error(`File not found: ${encodedPath}`)
    if (code === 'EACCES') throw new Error(`Permission denied: ${encodedPath}`)
    throw err
  }
}

// Restore a PDF stored as Base64
const bytes = await decodeBase64File('./uploads/invoice.b64', './out/invoice.pdf')
console.log(`Decoded ${bytes} bytes β€” PDF restored`)

Base64 Decoding Functions: Parameters Reference

Quick reference for the parameters of the two primary native decoding APIs, formatted for use as a lookup when writing or reviewing code.

atob(encodedData)

Parameters:
  • encodedData (string, required): a standard Base64 string using the +, /, = alphabet. URL-safe characters (-, _) throw InvalidCharacterError. Whitespace handling varies by runtime, so strip it before decoding.

Returns: a binary string; each character's code point equals one raw byte value (0-255). It is not a Unicode string; pass it through TextDecoder to recover UTF-8 text.

Buffer.from(input, inputEncoding) / .toString(outputEncoding)

Parameters:
  • input (string | Buffer | TypedArray | ArrayBuffer, required): the Base64-encoded string to decode, or a buffer containing encoded bytes.
  • inputEncoding (BufferEncoding, default "utf8"): set to "base64" for standard Base64 (RFC 4648 §4), or "base64url" for URL-safe Base64 (RFC 4648 §5, Node.js 18+).
  • outputEncoding (string, default "utf8"): encoding for the .toString() output. Use "utf8" for readable text, or "binary" for a Latin-1 binary string compatible with atob() output.
  • start (integer, default 0): byte offset within the decoded Buffer to begin reading; passed to .toString() as the second argument.
  • end (integer, default buf.length): byte offset to stop reading (exclusive); passed to .toString() as the third argument.

Returns: a Buffer from .from(); a string from .toString(). Keep the result as a Buffer (don't call .toString()) when the decoded content is binary: images, PDFs, audio.
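The start and end offsets are handy when you only need a slice of the decoded bytes. A small sketch (the encoded string is simply "hello world"):

```javascript
// 'aGVsbG8gd29ybGQ=' decodes to the 11 bytes of "hello world"
const buf = Buffer.from('aGVsbG8gd29ybGQ=', 'base64')

// toString(encoding, start, end): read bytes 6 through 10 only
console.log(buf.toString('utf8', 6, 11)) // world
```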

URL-safe Base64: Decoding JWTs and URL Parameters

JWTs use URL-safe Base64 (RFC 4648 §5) for all three segments. URL-safe Base64 replaces + with - and / with _, and strips trailing = padding. Passing such a string directly to atob() without restoration produces incorrect output or throws.

Browser β€” restore characters and padding before decoding

JavaScript (browser)
function decodeBase64Url(input: string): string {
  const base64 = input.replace(/-/g, '+').replace(/_/g, '/')
  const padded = base64 + '==='.slice(0, (4 - base64.length % 4) % 4)
  const binary = atob(padded)
  const bytes  = Uint8Array.from(binary, ch => ch.charCodeAt(0))
  return new TextDecoder().decode(bytes)
}

// Inspect a JWT payload segment (the middle part between the two dots)
const payloadB64 = 'eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciIsIndvcmtzcGFjZUlkIjoid3NfM2E3ZjkxYzIiLCJleHAiOjE3MTcyMDM2MDB9'
const payload    = decodeBase64Url(payloadB64)
const claims     = JSON.parse(payload)

console.log(claims.userId)      // usr_9f2a1c3e8b4d
console.log(claims.role)        // editor
console.log(claims.workspaceId) // ws_3a7f91c2

Node.js 18+: native 'base64url' encoding

Node.js 18+
// Node.js 18 added 'base64url' as a first-class Buffer encoding; no manual replacement needed
function decodeJwtSegment(segment: string): Record<string, unknown> {
  const json = Buffer.from(segment, 'base64url').toString('utf8')
  return JSON.parse(json)
}

const token   = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciIsIndvcmtzcGFjZUlkIjoid3NfM2E3ZjkxYzIiLCJleHAiOjE3MTcyMDM2MDB9.SIGNATURE'
const [headerB64, payloadB64] = token.split('.')

const header  = decodeJwtSegment(headerB64)
const payload = decodeJwtSegment(payloadB64)

console.log(header.alg)          // HS256
console.log(payload.role)        // editor
console.log(payload.workspaceId) // ws_3a7f91c2

Decoding Base64 from Files and API Responses

In production code, Base64 decoding most often happens when consuming external APIs that deliver content in encoded form. Both scenarios have important gotchas around whitespace and binary vs text output. If you just need to inspect an encoded response during debugging, paste it directly into the Base64 Decoder; it handles standard and URL-safe modes instantly.

Decoding content from the GitHub Contents API

JavaScript
// GitHub Contents API returns file content as Base64, wrapped at 60 chars per line
async function fetchDecodedFile(
  owner: string,
  repo:  string,
  path:  string,
  token: string,
): Promise<string> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/contents/${path}`,
    { headers: { Authorization: `Bearer ${token}`, Accept: 'application/vnd.github.v3+json' } }
  )
  if (!res.ok) throw new Error(`GitHub API ${res.status}: ${res.statusText}`)

  const data = await res.json() as { content: string; encoding: string }
  if (data.encoding !== 'base64') throw new Error(`Unexpected encoding: ${data.encoding}`)

  // ⚠️ GitHub wraps at 60 chars per line; strip the newlines before decoding
  const clean = data.content.replace(/\n/g, '')
  return Buffer.from(clean, 'base64').toString('utf8')
}

const openApiSpec = await fetchDecodedFile('acme-corp', 'platform-api', 'openapi.json', process.env.GITHUB_TOKEN!)
const spec = JSON.parse(openApiSpec)
console.log(`API version: ${spec.info.version}`)

Decoding a Base64-encoded binary from an API (browser)

JavaScript (browser)
// Some APIs return binary content (images, PDFs) as Base64 JSON fields
async function downloadDecodedFile(endpoint: string, authToken: string): Promise<void> {
  const res = await fetch(endpoint, { headers: { Authorization: `Bearer ${authToken}` } })
  if (!res.ok) throw new Error(`Download failed: ${res.status}`)

  const { filename, content, mimeType } = await res.json() as {
    filename: string; content: string; mimeType: string
  }

  // Decode Base64 → binary bytes → Blob
  const binary = atob(content)
  const bytes  = Uint8Array.from(binary, ch => ch.charCodeAt(0))
  const blob   = new Blob([bytes], { type: mimeType })

  // Trigger browser download
  const url = URL.createObjectURL(blob)
  const a   = Object.assign(document.createElement('a'), { href: url, download: filename })
  a.click()
  URL.revokeObjectURL(url)
}

await downloadDecodedFile('/api/reports/latest', sessionStorage.getItem('auth_token')!)

Command-Line Base64 Decoding in Node.js and Shell

For CI/CD scripts, debugging sessions, or one-off decoding tasks, shell tools and Node.js one-liners are faster than a full script. Note that the flag name differs between macOS and Linux.

bash
# ── macOS / Linux system base64 ──────────────────────────────────────
# Standard decoding (macOS uses -D, Linux uses -d)
echo "ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=" | base64 -d   # Linux
echo "ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=" | base64 -D   # macOS

# Decode a .b64 file to its original binary
base64 -d ./dist/cert.b64 > ./ssl/server.crt       # Linux
base64 -D -i ./dist/cert.b64 -o ./ssl/server.crt   # macOS

# URL-safe Base64: restore + and / before decoding
# (re-add trailing = padding first if the length isn't a multiple of 4)
echo "eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ==" | tr '-_' '+/' | base64 -d

# ── Node.js one-liner (works on Windows too) ─────────────────────────
node -e "process.stdout.write(Buffer.from(process.argv[1], 'base64').toString())" "ZGVwbG95LWJvdA=="
# deploy-bot

# URL-safe (Node.js 18+)
node -e "process.stdout.write(Buffer.from(process.argv[1], 'base64url').toString())" "eyJhbGciOiJIUzI1NiJ9"
# {"alg":"HS256"}
Note: On macOS, base64 uses -D to decode (uppercase D), while Linux uses -d (lowercase). This difference breaks CI scripts that assume one platform; use a Node.js one-liner when the target isn't guaranteed to be Linux.

High-Performance Alternative: js-base64

The main reason to reach for a library is cross-environment consistency. If you ship a package that runs in both the browser and Node.js without bundler configuration, Buffer requires environment detection and atob() requires the TextDecoder workaround. js-base64, one of the most-downloaded Base64 packages on npm, handles both transparently.

bash
npm install js-base64
# or
pnpm add js-base64
JavaScript
import { decode, toUint8Array, isValid } from 'js-base64'

// Standard decoding: Unicode-safe, works in browser and Node.js
const raw   = decode('eyJldmVudElkIjoiZXZ0XzdjM2E5ZjFiMmQiLCJ0eXBlIjoiY2hlY2tvdXRfY29tcGxldGVkIiwiY3VycmVuY3kiOiJFVVIiLCJhbW91bnQiOjE0OTAwfQ==')
const event = JSON.parse(raw)
console.log(event.type)     // checkout_completed
console.log(event.currency) // EUR

// URL-safe decoding: decode() accepts URL-safe input as well, no manual replacement needed
const jwtPayload = decode('eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciJ9')
const claims     = JSON.parse(jwtPayload)
console.log(claims.role) // editor

// Validate before decoding untrusted input
const untrusted = 'not!valid@base64#'
if (!isValid(untrusted)) {
  console.error('Rejected: invalid Base64 input')
}

// Binary output: toUint8Array() returns the raw decoded bytes
const pngBytes = toUint8Array('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg==')
console.log(pngBytes instanceof Uint8Array) // true

Terminal Output with Syntax Highlighting

When writing CLI debugging tools or inspection scripts, plain console.log output is hard to read for large JSON payloads. chalk (the most-downloaded npm package for terminal coloring) combined with Base64 decoding produces readable, scannable terminal output, useful for JWT inspection, API response debugging, and config auditing.

bash
npm install chalk
# chalk v5+ is ESM-only: use import, not require
Node.js
import chalk from 'chalk'

// Decode and display any Base64 value with smart type detection
function inspectBase64(encoded: string, label = 'Decoded value'): void {
  let decoded: string
  try {
    decoded = Buffer.from(encoded.trim(), 'base64').toString('utf8')
  } catch {
    console.error(chalk.red('✗ Invalid Base64 input'))
    return
  }

  console.log(chalk.bold.cyan(`\n── ${label} ──`))

  // Attempt JSON pretty-print
  try {
    const parsed = JSON.parse(decoded)
    console.log(chalk.green('Type:'), chalk.yellow('JSON'))
    for (const [key, value] of Object.entries(parsed)) {
      const display = typeof value === 'object' ? JSON.stringify(value) : String(value)
      console.log(chalk.green(`  ${key}:`), chalk.white(display))
    }
    return
  } catch { /* not JSON */ }

  // Plain text fallback
  console.log(chalk.green('Type:'), chalk.yellow('text'))
  console.log(chalk.white(decoded))
}

// Inspect a Base64-encoded JWT payload
const tokenParts = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciIsImV4cCI6MTcxNzIwMzYwMH0.SIGNATURE'.split('.')
inspectBase64(tokenParts[0], 'JWT Header')
inspectBase64(tokenParts[1], 'JWT Payload')
// ── JWT Header ──
// Type:   JSON
//   alg:  HS256
//   typ:  JWT
//
// ── JWT Payload ──
// Type:   JSON
//   userId: usr_9f2a1c3e8b4d
//   role:   editor
//   exp:    1717203600
Note: Use chalk only for terminal/CLI output, never for content written to files, API responses, or log aggregators. ANSI escape codes corrupt non-terminal consumers: log platforms (Datadog, Splunk), JSON log parsers, and CI log viewers all display them as unreadable character sequences.

Decode Large Base64 Files with Node.js Streams

When a Base64-encoded file exceeds roughly 50 MB, loading it entirely into memory with readFileSync() becomes a problem. Node.js streams let you decode the data in chunks, but each decoded chunk must contain a whole number of 4-character groups (each group decodes to exactly 3 bytes) to avoid padding errors at chunk boundaries.

Node.js
import { createReadStream, createWriteStream } from 'node:fs'
import { pipeline } from 'node:stream/promises'

async function streamDecodeBase64(inputPath: string, outputPath: string): Promise<void> {
  const readStream  = createReadStream(inputPath, { encoding: 'utf8', highWaterMark: 4 * 1024 * 192 })
  const writeStream = createWriteStream(outputPath)

  let buffer = ''

  await pipeline(
    readStream,
    async function* (source) {
      for await (const chunk of source) {
        buffer += (chunk as string).replace(/\s/g, '') // strip any whitespace/newlines

        // Decode only complete 4-char groups to avoid mid-stream padding issues
        const remainder = buffer.length % 4
        const safe      = buffer.slice(0, buffer.length - remainder)
        buffer          = buffer.slice(buffer.length - remainder)

        if (safe.length > 0) yield Buffer.from(safe, 'base64')
      }
      if (buffer.length > 0) yield Buffer.from(buffer, 'base64')
    },
    writeStream,
  )
}

// Decode a 200 MB video that was stored as Base64
await streamDecodeBase64('./uploads/product-demo.b64', './dist/product-demo.mp4')
console.log('Stream decode complete')
Note: Keep the read chunk size a multiple of 4 characters when reading Base64 text so that most chunks contain only complete 4-character groups; the remainder buffer in the example carries any leftover characters to the next chunk. The example uses 4 × 1024 × 192 = 786,432 characters (768 KB). For files under 50 MB, readFile() + Buffer.from(content.trim(), 'base64') is simpler and fast enough.

Common Mistakes

I've seen these four mistakes in JavaScript codebases repeatedly; they tend to stay hidden until a non-ASCII character or a line-wrapped API response reaches the decoding path in production.

Mistake 1: Using atob() without TextDecoder for UTF-8 content

Problem: atob() returns a binary string where each character is one raw byte value. UTF-8 multi-byte sequences (Cyrillic, CJK, accented characters) appear as garbled Latin-1 characters. Fix: wrap the output in TextDecoder.

Before (JavaScript):
// ❌ atob() returns the raw UTF-8 bytes as a Latin-1 string
const encoded = '0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy'
const decoded = atob(encoded)
console.log(decoded)
// mojibake such as "Ð»ÐµÐºÑ…" ← UTF-8 bytes misread as Latin-1, wrong

After (JavaScript):
// ✅ Use TextDecoder to correctly interpret the UTF-8 bytes
const encoded = '0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy'
const binary  = atob(encoded)
const bytes   = Uint8Array.from(binary, ch => ch.charCodeAt(0))
const decoded = new TextDecoder().decode(bytes)
console.log(decoded) // Алексей Иванов ✓

Mistake 2: Passing URL-safe Base64 directly to atob()

Problem: JWT segments use - and _ instead of + and /, with no padding. atob() may return wrong data or throw. Fix: restore standard characters and re-add padding first.

Before (JavaScript):
// ❌ URL-safe JWT segment passed directly (unreliable)
const jwtPayload = 'eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ'
const decoded    = atob(jwtPayload) // may produce a wrong result or throw

After (JavaScript):
// ✅ Restore standard Base64 chars and padding first
function decodeBase64Url(input: string): string {
  const b64   = input.replace(/-/g, '+').replace(/_/g, '/')
  const pad   = b64 + '==='.slice(0, (4 - b64.length % 4) % 4)
  const bin   = atob(pad)
  const bytes = Uint8Array.from(bin, ch => ch.charCodeAt(0))
  return new TextDecoder().decode(bytes)
}
const decoded = decodeBase64Url('eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ')
// {"userId":"usr_9f2a1c3e"} ✓

Mistake 3: Not stripping newlines from line-wrapped Base64

Problem: The GitHub Contents API and MIME encoders wrap Base64 output at 60-76 characters per line. Depending on the runtime, atob() may throw InvalidCharacterError on the embedded \n characters. Fix: strip all whitespace before decoding.

Before (JavaScript):
// ❌ GitHub API content field contains embedded newlines
const data    = await res.json()
const decoded = atob(data.content) // may throw InvalidCharacterError in some runtimes

After (JavaScript):
// ✅ Strip newlines (and any other whitespace) before decoding
const data    = await res.json()
const clean   = data.content.replace(/\s/g, '')
const decoded = atob(clean) // ✓

Mistake 4: Calling .toString() on decoded binary content

Problem: When the original data is binary (images, PDFs, audio), calling .toString('utf8') replaces unrecognised byte sequences with U+FFFD, silently corrupting the output. Fix: keep the result as a Buffer; don't convert it to a string.

Before (JavaScript):
// ❌ .toString('utf8') corrupts binary content
import { readFileSync, writeFileSync } from 'node:fs'
const encoded   = readFileSync('./uploads/invoice.b64', 'utf8').trim()
const corrupted = Buffer.from(encoded, 'base64').toString('utf8') // ❌
writeFileSync('./out/invoice.pdf', corrupted) // ❌ unreadable PDF

After (JavaScript):
// ✅ Keep the Buffer as binary; do not convert to a string
import { readFileSync, writeFileSync } from 'node:fs'
const encoded = readFileSync('./uploads/invoice.b64', 'utf8').trim()
const binary  = Buffer.from(encoded, 'base64') // ✓ raw bytes preserved
writeFileSync('./out/invoice.pdf', binary)     // ✓ valid PDF

JavaScript Base64 Decoding Methods: Quick Comparison

  • atob(): returns a binary string; needs TextDecoder for UTF-8; URL-safe input must be restored manually; runs in browsers, Node 16+, Bun, Deno; no install.
  • TextDecoder + atob(): full UTF-8 support; binary output via Uint8Array; URL-safe input must be restored manually; browsers, Node 16+, Deno; no install.
  • Buffer.from().toString(): UTF-8 via 'utf8'; binary by keeping the Buffer; URL-safe via 'base64url' (Node 18+); Node.js, Bun; no install.
  • Uint8Array.fromBase64() (TC39 proposal): UTF-8 via TextDecoder; native binary output; URL-safe via the alphabet option; newest engines only; no install.
  • js-base64: UTF-8 always; binary via Uint8Array; URL-safe built in; universal; requires npm install.

Choose atob() only when the decoded content is guaranteed to be ASCII text. For any user-provided or multi-language text in a browser, use TextDecoder + atob(). For Node.js server-side code, Buffer is the right default: it handles UTF-8 automatically and keeps binary data intact. For cross-environment libraries, js-base64 removes all edge cases.
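Those rules can be folded into a single helper when one snippet has to run unchanged in both environments. A minimal sketch (decodeBase64Utf8 is my own name; it assumes standard, non-URL-safe input):

```javascript
// Prefer Buffer when the runtime provides it; otherwise fall back to
// atob() + TextDecoder. Whitespace is stripped up front in both paths.
function decodeBase64Utf8(encoded) {
  const clean = encoded.replace(/\s/g, '')
  if (typeof Buffer !== 'undefined') {
    return Buffer.from(clean, 'base64').toString('utf8')
  }
  const binary = atob(clean)
  const bytes  = Uint8Array.from(binary, ch => ch.charCodeAt(0))
  return new TextDecoder().decode(bytes)
}

console.log(decodeBase64Utf8('aGVsbG8=')) // hello
```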

Frequently Asked Questions

Why does atob() return garbled characters instead of readable text?
atob() returns a binary string where each character represents one raw byte (0-255), not a Unicode code point. If the original text was encoded as UTF-8, any character above U+007F (Cyrillic, Arabic, CJK ideographs, accented letters) will appear as two or more garbled Latin-1 characters. The fix: pass the output through TextDecoder: const bytes = Uint8Array.from(atob(encoded), ch => ch.charCodeAt(0)); const text = new TextDecoder().decode(bytes). In Node.js, use Buffer.from(encoded, 'base64').toString('utf8'), which handles this automatically.
How do I decode a JWT token payload in JavaScript?
A JWT has three URL-safe Base64 segments separated by dots: header.payload.signature. To decode the payload: const [, payloadB64] = token.split('.'). In the browser: restore standard chars, add padding, decode with atob() and TextDecoder. In Node.js 18+: Buffer.from(payloadB64, 'base64url').toString('utf8'). Important: decoding only reveals the claims; it does NOT verify the signature. Use a proper JWT library (jsonwebtoken, jose) for verified decoding in production.
What is the difference between atob() and Buffer.from() for decoding?
atob() is available in all JavaScript environments (browser, Node.js 16+, Bun, Deno) with no imports, but returns a binary string; you need TextDecoder to convert UTF-8 content to readable text. Buffer.from(encoded, 'base64') is Node.js / Bun only, returns an actual Buffer (binary-safe), natively handles UTF-8, and supports 'base64url' in Node.js 18+. For server-side code, Buffer is simpler. For browser code, atob() + TextDecoder is the standard. For cross-environment libraries, js-base64 abstracts the difference.
How do I decode URL-safe Base64 in the browser?
URL-safe Base64 replaces + with -, / with _, and strips = padding. Restore them before calling atob(): const b64 = input.replace(/-/g, '+').replace(/_/g, '/'); const padded = b64 + '==='.slice(0, (4 - b64.length % 4) % 4); const text = new TextDecoder().decode(Uint8Array.from(atob(padded), c => c.charCodeAt(0))). In Node.js 18+: Buffer.from(input, 'base64url').toString('utf8') handles it in one call.
How do I decode Base64 content from the GitHub API in JavaScript?
The GitHub Contents API returns file content as standard Base64 with newline characters every 60 characters. Strip them before decoding: const clean = data.content.replace(/\n/g, ''). In the browser: new TextDecoder().decode(Uint8Array.from(atob(clean), c => c.charCodeAt(0))). In Node.js: Buffer.from(clean, 'base64').toString('utf8'). For binary files (images, PDFs), keep the Buffer without calling .toString(); pass it directly to writeFile or the response stream.
Can I decode a Base64-encoded image in JavaScript without a library?
Yes. In the browser: const binary = atob(encoded); const bytes = Uint8Array.from(binary, ch => ch.charCodeAt(0)); const blob = new Blob([bytes], { type: 'image/png' }); const url = URL.createObjectURL(blob). For an img src, build a data URI instead: const src = 'data:image/png;base64,' + encoded, which skips the decode step entirely. In Node.js: Buffer.from(encoded, 'base64') followed by writeFileSync('./out.png', buffer). The key rule: never call .toString() on the decoded Buffer when the content is binary.

For a one-click decode without writing any code, paste your Base64 string directly into the Base64 Decoder; it handles standard and URL-safe modes with immediate output in your browser.

Also available in: Python · Go · Java · C#

Alex Chen · Front-end & Node.js Developer

Alex is a front-end and Node.js developer with extensive experience building web applications and developer tooling. He is passionate about web standards, browser APIs, and the JavaScript ecosystem. In his spare time he contributes to open-source projects and writes about modern JavaScript patterns, performance optimisation, and everything related to the web platform.

Sophie Laurent · Technical Reviewer

Sophie is a full-stack developer focused on TypeScript across the entire stack β€” from React frontends to Express and Fastify backends. She has a particular interest in type-safe API design, runtime validation, and the patterns that make large JavaScript codebases stay manageable. She writes about TypeScript idioms, Node.js internals, and the ever-evolving JavaScript module ecosystem.