Base64 Encode JavaScript β€” btoa() & Buffer

Front-end & Node.js Developer Β· Reviewed by Sophie Laurent Β· Published

Use the free Base64 Encode Online tool directly in your browser β€” no install required.

Try Base64 Encode Online β†’

When you embed an image in a CSS data URI, pass credentials in an HTTP Authorization header, or store a binary certificate in an environment variable, you need to base64 encode JavaScript data reliably across both browser and Node.js. JavaScript provides two distinct built-in APIs: btoa() for browser environments (also available in Node.js 16+) and Buffer.from() for Node.js β€” each with different constraints around Unicode, binary data, and URL safety. For a quick one-off encoding without writing any code, ToolDeck's Base64 Encoder handles it instantly in the browser. This guide covers both environments with production-ready examples: Unicode handling, URL-safe variants, file and API response encoding, CLI usage, and the four mistakes that consistently cause bugs in real codebases.

  • βœ“ btoa() is browser-native and available in Node.js 16+ globally, but only accepts Latin-1 (code points 0–255) β€” Unicode input throws a DOMException
  • βœ“ Buffer.from(text, "utf8").toString("base64") is the Node.js equivalent and handles Unicode natively with no extra steps
  • βœ“ URL-safe Base64 replaces + β†’ -, / β†’ _, and strips = padding β€” use Buffer.from().toString("base64url") in Node.js 18+ for a one-liner
  • βœ“ For binary data (ArrayBuffer, Uint8Array, files), use Buffer in Node.js or the arrayBuffer() + Uint8Array approach in the browser β€” never response.text()
  • βœ“ Uint8Array.prototype.toBase64() (TC39 Stage 3) is already available in Node.js 22+ and Chrome 130+ and will unify both environments

What is Base64 Encoding?

Base64 converts arbitrary binary data into a string built from 64 printable ASCII characters: A–Z, a–z, 0–9, +, and /. Every 3 bytes of input map to exactly 4 Base64 characters; if the input length isn't a multiple of 3, one or two = padding characters are appended. The encoded output is always about 33% larger than the original.
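The 3-bytes-to-4-characters rule and the padding behaviour are easy to verify in a console. A minimal sketch, runnable anywhere btoa() is global (any browser, Node.js 16+):

```javascript
// 3 input bytes always become 4 Base64 characters;
// incomplete final groups are filled out with '=' padding.
for (const s of ['abc', 'abcd', 'abcde', 'abcdef']) {
  console.log(`${s.length} bytes -> ${btoa(s).length} chars: ${btoa(s)}`)
}
// 3 bytes -> 4 chars: YWJj
// 4 bytes -> 8 chars: YWJjZA==
// 5 bytes -> 8 chars: YWJjZGU=
// 6 bytes -> 8 chars: YWJjZGVm
```

Note that 4, 5, and 6 input bytes all produce 8 output characters: the encoded length only grows in steps of 4.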

Base64 is not encryption β€” it provides no confidentiality. Anyone with the encoded string can decode it in one function call. Its purpose is transport safety: many protocols and storage formats were designed for 7-bit ASCII text and can't handle arbitrary binary bytes. Base64 bridges that gap. Common JavaScript use cases include data URIs for inlining assets, HTTP Basic Auth headers, JWT token segments, email MIME attachments, and storing binary blobs in JSON APIs.

Before Β· text
After Β· text
deploy-bot:sk-prod-a7f2c91e4b3d8
ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=

btoa() β€” The Browser-Native Encoding Function

btoa() (binary-to-ASCII) has been available in every browser since IE10 and became a global in Node.js 16.0; it is now covered by the WinterCG Minimum Common API. It also works natively in Deno, Bun, and Cloudflare Workers. No import is needed.

The function takes a single string argument and returns its Base64-encoded form. The symmetric counterpart atob() (ASCII-to-binary) decodes it back. Both are synchronous; the output string is roughly 4/3 the length of the input.

Minimal working example

JavaScript (browser / Node.js 16+)
// Encoding an API credential pair for an HTTP Basic Auth header
const serviceId  = 'deploy-bot'
const apiKey     = 'sk-prod-a7f2c91e4b3d8'

const credential = btoa(`${serviceId}:${apiKey}`)
// β†’ 'ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg='

const headers = new Headers({
  Authorization: `Basic ${credential}`,
  'Content-Type': 'application/json',
})

console.log(headers.get('Authorization'))
// Basic ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=

Decoding with atob()

JavaScript
// Round-trip: encode, transmit, decode
const payload = 'service:payments region:eu-west-1 env:production'

const encoded = btoa(payload)
const decoded = atob(encoded)

console.log(encoded)
// c2VydmljZTpwYXltZW50cyByZWdpb246ZXUtd2VzdC0xIGVudjpwcm9kdWN0aW9u

console.log(decoded === payload) // true
Note: btoa() and atob() are part of the WinterCG Minimum Common API β€” the same spec that governs Fetch, URL, and crypto in non-browser runtimes. They behave identically in Node.js 16+, Bun, Deno, and Cloudflare Workers.

Handling Unicode and Non-ASCII Characters

The most common btoa() pitfall is its strict Latin-1 boundary. Any character with a code point above U+00FF causes an immediate exception:

JavaScript
btoa('ΠŸΡ€ΠΈΠ²Π΅Ρ‚, ΠΌΠΈΡ€')   // ❌ Throws β€” Cyrillic code points are above U+00FF
btoa('rΓ©sumΓ©')         // βœ… 'Γ©' is U+00E9 = 233 β€” within Latin-1, this one actually works
btoa('η”°δΈ­ε€ͺιƒŽ')         // ❌ Throws β€” all CJK characters are above U+00FF

The correct approach is to encode the string to UTF-8 bytes first, then Base64-encode those bytes. JavaScript provides TextEncoder for exactly this purpose:

TextEncoder approach β€” safe for any Unicode input

JavaScript (browser + Node.js 16+)
// Utility functions for Unicode-safe Base64
function toBase64(text: string): string {
  const bytes = new TextEncoder().encode(text)
  const chars = Array.from(bytes, byte => String.fromCharCode(byte))
  return btoa(chars.join(''))
}

function fromBase64(encoded: string): string {
  const binary = atob(encoded)
  const bytes  = Uint8Array.from(binary, ch => ch.charCodeAt(0))
  return new TextDecoder().decode(bytes)
}

// Works with any language or script
const orderNote = 'Confirmed: η”°δΈ­ε€ͺιƒŽ β€” SΓ£o Paulo warehouse, qty: 250'
const encoded   = toBase64(orderNote)
const decoded   = fromBase64(encoded)

console.log(encoded)
// Q29uZmlybWVkOiDnlLDkuK3lpKrpg44g4oCUIFPDo28gUGF1bG8gd2FyZWhvdXNlLCBxdHk6IDI1MA==

console.log(decoded === orderNote) // true
Note: If you're already in Node.js, skip the TextEncoder workaround entirely β€” use Buffer.from(text, 'utf8').toString('base64'). It handles Unicode natively and is faster for large strings.

Buffer.from() in Node.js β€” Complete Guide with Examples

In Node.js, Buffer is the idiomatic API for all binary data operations, including encoding conversions. It predates TextEncoder by years and remains the preferred choice for server-side code. Key advantages over btoa(): native UTF-8 support, binary data handling, and the 'base64url' encoding shortcut available since Node.js 18.

Basic text encoding and decoding

Node.js
// Encoding a server configuration object for storage in an env variable
const dbConfig = JSON.stringify({
  host:           'db-primary.internal',
  port:           5432,
  database:       'analytics_prod',
  maxConnections: 100,
  ssl:            { rejectUnauthorized: true },
})

const encoded = Buffer.from(dbConfig, 'utf8').toString('base64')
console.log(encoded)
// eyJob3N0IjoiZGItcHJpbWFyeS5pbnRlcm5hbCIsInBvcnQiOjU0MzIsImRhdGFiYXNlIjoiYW5h...

// Decoding back
const decoded = Buffer.from(encoded, 'base64').toString('utf8')
const config  = JSON.parse(decoded)

console.log(config.host)           // db-primary.internal
console.log(config.maxConnections) // 100

Encoding binary files from disk

Node.js
import { readFileSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'

// Read a TLS certificate and encode it for embedding in a config file
const certPem     = readFileSync(join(process.cwd(), 'ssl', 'server.crt'))
const certBase64  = certPem.toString('base64')

// Store as a single-line string β€” suitable for env vars or JSON configs
writeFileSync('./dist/cert.b64', certBase64, 'utf8')

console.log(`Certificate encoded: ${certBase64.length} characters`)
// Certificate encoded: 2856 characters

// Restore the binary cert from the encoded value
const restored = Buffer.from(certBase64, 'base64')
console.log(restored.equals(certPem)) // true

Async file encoding with error handling

Node.js
import { readFile } from 'node:fs/promises'

async function encodeFileToBase64(filePath: string): Promise<string> {
  try {
    const buffer = await readFile(filePath)
    return buffer.toString('base64')
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code
    if (code === 'ENOENT') throw new Error(`File not found: ${filePath}`)
    if (code === 'EACCES') throw new Error(`Permission denied: ${filePath}`)
    throw err
  }
}

// Encode a PDF for an email attachment payload
const reportBase64 = await encodeFileToBase64('./reports/q1-financials.pdf')

const emailPayload = {
  to:          'finance-team@company.internal',
  subject:     'Q1 Financial Report',
  attachments: [{
    filename:    'q1-financials.pdf',
    content:     reportBase64,
    encoding:    'base64',
    contentType: 'application/pdf',
  }],
}

console.log(`Attachment: ${reportBase64.length} chars`)

JavaScript Base64 Functions β€” Parameters Reference

Unlike Python's base64 module, JavaScript has no single unified Base64 function. The API depends on the target environment. Here's the complete reference for all native approaches:

| Function | Input type | Unicode | URL-safe | Available in |
| --- | --- | --- | --- | --- |
| btoa(string) | string (Latin-1) | ❌ throws above U+00FF | ❌ manual replace | Browser, Node 16+, Bun, Deno |
| atob(string) | Base64 string | ❌ returns binary string | ❌ manual replace | Browser, Node 16+, Bun, Deno |
| Buffer.from(src, enc).toString(enc) | string / Buffer / Uint8Array | βœ… utf8 encoding | βœ… base64url in Node 18+ | Node.js, Bun |
| TextEncoder().encode(str) + btoa() | string (any Unicode) | βœ… via UTF-8 bytes | ❌ manual replace | Browser, Node 16+, Deno |
| Uint8Array.toBase64() (TC39) | Uint8Array | βœ… binary | βœ… omitPadding + alphabet | Chrome 130+, Node 22+ |

The Buffer.from(src, enc).toString(enc) signature accepts several encoding values relevant to Base64:

"base64"
Standard Base64 (RFC 4648 Β§4). Uses + and / with = padding.
"base64url"
URL-safe Base64 (RFC 4648 Β§5, Node.js 18+). Uses - and _ with no padding.
"utf8"
Default for string sources. Use when the source is human-readable text.
"binary"
Latin-1 / ISO-8859-1. Used when the source is a raw binary string (e.g., from atob()).

URL-safe Base64 β€” Encoding for JWTs, URLs, and Filenames

Standard Base64 uses + and /, which are reserved in URLs β€” + is decoded as a space in query strings, and / is a path separator. JWTs, URL parameters, filenames, and cookie values all require the URL-safe variant: + β†’ -, / β†’ _, trailing = removed.

Browser β€” manual character replacement

JavaScript (browser)
function toBase64Url(text: string): string {
  // For ASCII-safe input (e.g., JSON with only ASCII chars)
  return btoa(text)
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=/g, '')
}

function fromBase64Url(encoded: string): string {
  // Restore standard Base64 characters and padding before decoding
  const base64  = encoded.replace(/-/g, '+').replace(/_/g, '/')
  const padded  = base64 + '==='.slice(0, (4 - base64.length % 4) % 4)
  return atob(padded)
}

// JWT header β€” must be URL-safe Base64
const header  = JSON.stringify({ alg: 'HS256', typ: 'JWT' })
const encoded = toBase64Url(header)
console.log(encoded) // eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

const decoded = fromBase64Url(encoded)
console.log(decoded) // {"alg":"HS256","typ":"JWT"}

Node.js 18+ β€” native 'base64url' encoding

Node.js 18+
// Node.js 18 added 'base64url' as a first-class Buffer encoding
const sessionPayload = JSON.stringify({
  userId:     'usr_9f2a1c3e8b4d',
  role:       'editor',
  workspaceId:'ws_3a7f91c2',
  exp:        Math.floor(Date.now() / 1000) + 3600,
})

const encoded = Buffer.from(sessionPayload, 'utf8').toString('base64url')
// No + or / or = characters in the output β€” e.g. (exp varies with the current time):
// eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2U4YjRkIiwicm9sZSI6ImVkaXRvciIsIndvcmtzcGFjZUlkIjoid3NfM2E3ZjkxYzIiLCJleHAiOjE3MTcyMDM2MDB9

const decoded = Buffer.from(encoded, 'base64url').toString('utf8')
console.log(JSON.parse(decoded).role) // editor

Encoding Files and API Responses in JavaScript

In production code, Base64 encoding is most often applied to files being transmitted and to responses from external APIs that deliver binary content. The patterns differ between browser and Node.js, and binary data requires special care.

Browser β€” encode a file from an input element

JavaScript (browser)
// Modern approach: File.arrayBuffer() (Chrome 76+, Firefox 69+, Safari 14+)
async function encodeFile(file: File): Promise<string> {
  const buffer = await file.arrayBuffer()
  const bytes  = new Uint8Array(buffer)
  const chars  = Array.from(bytes, b => String.fromCharCode(b))
  return btoa(chars.join(''))
}

const uploadInput = document.getElementById('avatar') as HTMLInputElement

uploadInput.addEventListener('change', async (e) => {
  const file = (e.target as HTMLInputElement).files?.[0]
  if (!file) return

  try {
    const encoded = await encodeFile(file)
    const dataUri = `data:${file.type};base64,${encoded}`

    // Preview the image inline
    const img   = document.getElementById('preview') as HTMLImageElement
    img.src     = dataUri
    img.hidden  = false

    console.log(`Encoded ${file.name} (${file.size} bytes) β†’ ${encoded.length} Base64 chars`)
  } catch (err) {
    console.error('Encoding failed:', err)
  }
})

Fetching a Base64-encoded binary from an API

JavaScript
// GitHub Contents API returns file content as Base64 with embedded newlines
async function fetchRepoFile(
  owner: string,
  repo:  string,
  path:  string,
  token: string,
): Promise<string> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/contents/${path}`,
    {
      headers: {
        Authorization: `Bearer ${token}`,
        Accept: 'application/vnd.github.v3+json',
      },
    }
  )

  if (!res.ok) throw new Error(`GitHub API ${res.status}: ${res.statusText}`)

  const data = await res.json() as { content: string; encoding: string; size: number }

  if (data.encoding !== 'base64') {
    throw new Error(`Unexpected encoding from GitHub: ${data.encoding}`)
  }

  // GitHub wraps output at 60 chars β€” strip newlines before decoding
  const clean = data.content.replace(/\n/g, '')
  return atob(clean)
}

const openApiSpec = await fetchRepoFile(
  'acme-corp', 'platform-api', 'openapi.json', process.env.GITHUB_TOKEN!
)
const spec = JSON.parse(openApiSpec)
console.log(`API version: ${spec.info.version}`)

When you just need to inspect an encoded response during API debugging, without setting up a script, paste the Base64 value directly into the Base64 Encoder β€” it decodes as well, with immediate output. Useful for inspecting GitHub API responses, JWT payloads, and webhook signatures.

Command-Line Base64 Encoding in Node.js and Shell

For CI/CD scripts, Makefile targets, or one-off debugging, you rarely need a full script. Both system tools and Node.js one-liners cover most cases cross-platform.

bash
# ── macOS / Linux system base64 ───────────────────────────────────────
# Standard encoding
echo -n "deploy-bot:sk-prod-a7f2c91e4b3d8" | base64
# ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=

# URL-safe variant (replace chars and strip padding)
echo -n "deploy-bot:sk-prod-a7f2c91e4b3d8" | base64 | tr '+/' '-_' | tr -d '='

# Encode a file inline on a single line
# Linux (GNU coreutils wraps at 76 chars by default; disable it):
base64 --wrap=0 ./config/production.json
# macOS (no wrapping by default; -b 0 makes it explicit):
base64 -b 0 ./config/production.json

# Decode
echo "ZGVwbG95LWJvdDpzay1wcm9kLWE3ZjJjOTFlNGIzZDg=" | base64 --decode

# ── Node.js one-liner β€” works on Windows too ───────────────────────────
node -e "process.stdout.write(Buffer.from(process.argv[1]).toString('base64'))" "my:secret"
# bXk6c2VjcmV0

# URL-safe from Node.js 18+
node -e "process.stdout.write(Buffer.from(process.argv[1]).toString('base64url'))" "my:secret"
# bXk6c2VjcmV0  (same here since there are no special chars)

# Decode in Node.js
node -e "console.log(Buffer.from(process.argv[1], 'base64').toString())" "ZGVwbG95LWJvdA=="
Note: On Linux, GNU coreutils base64 wraps output at 76 characters by default. This breaks downstream parsing. Always add --wrap=0 when you need a single-line result β€” for example, when writing to an environment variable or a config field. The macOS base64 does not wrap by default, but passing -b 0 makes that explicit in cross-platform scripts.

High-Performance Alternative: js-base64

The built-in APIs are fine for most use cases. The main reason to reach for a library is cross-environment consistency: if you ship a package that runs in both browser and Node.js, using Buffer requires either environment detection or bundler configuration, while btoa() requires the Unicode workaround. js-base64 (tens of millions of weekly npm downloads) handles both transparently.

bash
npm install js-base64
# or
pnpm add js-base64
JavaScript
import { toBase64, fromBase64, toBase64Url, fromBase64Url, isValid } from 'js-base64'

// Standard encoding β€” Unicode-safe, works in browser and Node.js
const telemetryEvent = JSON.stringify({
  eventId:   'evt_7c3a9f1b2d',
  type:      'checkout_completed',
  currency:  'EUR',
  amount:    14900,
  userId:    'usr_4e2b8d6a5c',
  timestamp: 1717200000,
})

const encoded    = toBase64(telemetryEvent)
const urlEncoded = toBase64Url(telemetryEvent) // No +, /, or = characters

const decoded = fromBase64(encoded)
console.log(JSON.parse(decoded).type) // checkout_completed

// Binary data β€” pass a Uint8Array as second argument
const pngMagicBytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a])
const binaryEncoded = toBase64(pngMagicBytes, true) // true = binary mode

// Validation before decoding
const suspicious = 'not!valid@base64#'
console.log(isValid(suspicious)) // false

Under the hood, js-base64 uses the native Buffer when available and falls back to a pure-JS implementation in the browser. It is noticeably faster than the TextEncoder + btoa approach for large Unicode strings, and the symmetric API (toBase64 / fromBase64) eliminates the mental overhead of remembering which direction btoa and atob go.

Encoding Large Binary Files with Node.js Streams

When you need to encode files larger than ~50 MB, loading the entire file into memory with readFileSync() becomes a problem. Node.js streams let you process the data in chunks β€” but Base64 encoding has a constraint: you must feed the encoder in multiples of 3 bytes to avoid incorrect padding at chunk boundaries.

Node.js
import { createReadStream, createWriteStream } from 'node:fs'
import { pipeline } from 'node:stream/promises'

// Stream a large binary file to a Base64-encoded output file
async function streamEncodeToBase64(
  inputPath:  string,
  outputPath: string,
): Promise<void> {
  const readStream  = createReadStream(inputPath, { highWaterMark: 3 * 1024 * 256 }) // 768 KB chunks (multiple of 3)
  const writeStream = createWriteStream(outputPath, { encoding: 'utf8' })

  let buffer = Buffer.alloc(0)

  await pipeline(
    readStream,
    async function* (source) {
      for await (const chunk of source) {
        buffer = Buffer.concat([buffer, chunk as Buffer])

        // Encode in complete 3-byte groups to avoid mid-stream padding
        const remainder = buffer.length % 3
        const safe      = buffer.subarray(0, buffer.length - remainder)
        buffer          = buffer.subarray(buffer.length - remainder)

        if (safe.length > 0) yield safe.toString('base64')
      }
      // Flush remaining bytes (may add 1 or 2 '=' padding chars)
      if (buffer.length > 0) yield buffer.toString('base64')
    },
    writeStream,
  )
}

// Usage: encode a 200 MB video attachment
await streamEncodeToBase64(
  './uploads/product-demo.mp4',
  './dist/product-demo.b64',
)
console.log('Stream encoding complete')
Note: The chunk size must be a multiple of 3 bytes to avoid spurious = padding in the middle of the output. The example uses 3 * 1024 * 256 = 786,432 bytes (768 KB) β€” adjust highWaterMark based on your memory budget. For files under 50 MB, readFile() + Buffer.toString('base64') is simpler and fast enough.

Common Mistakes

I've reviewed many JavaScript codebases with Base64 encoding, and these four mistakes appear consistently β€” often undiscovered until a non-ASCII character or a binary file reaches the encoding path in production.

Mistake 1 β€” Passing Unicode directly to btoa()

Problem: btoa() only accepts characters with code points 0–255. Characters like Γ±, emoji, or CJK ideographs cause an immediate DOMException. Fix: encode with TextEncoder first, or use Buffer.from(text, 'utf8').toString('base64') in Node.js.

Before Β· JavaScript
After Β· JavaScript
// ❌ DOMException: The string to be encoded contains
//    characters outside of the Latin1 range
const username = 'АлСксСй Иванов'
const encoded  = btoa(username)  // throws
// βœ… Encode as UTF-8 bytes first
function safeEncode(text: string): string {
  const bytes = new TextEncoder().encode(text)
  const chars = Array.from(bytes, b => String.fromCharCode(b))
  return btoa(chars.join(''))
}
const encoded = safeEncode('АлСксСй Иванов')
// 0JDQu9C10LrRgdC10Lkg0JjQstCw0L3QvtCy

Mistake 2 β€” Forgetting to restore padding before atob()

Problem: URL-safe Base64 substitutes - and _ and strips the = padding. atob() rejects - and _ outright, and its handling of missing padding is inconsistent: strings whose length % 4 === 1 always throw, and older or non-WHATWG implementations reject any unpadded input. Fix: restore + and / and re-add the correct amount of padding before calling atob().

Before Β· JavaScript
After Β· JavaScript
// ❌ Throws once '-' or '_' appear, and strings
//    with length % 4 === 1 are rejected outright
const jwtSegment = 'eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ'
const decoded    = atob(jwtSegment) // Fragile across engines
// βœ… Restore characters and padding first
function decodeBase64Url(input: string): string {
  const b64 = input.replace(/-/g, '+').replace(/_/g, '/')
  const pad = b64 + '==='.slice(0, (4 - b64.length % 4) % 4)
  return atob(pad)
}
const decoded = decodeBase64Url('eyJ1c2VySWQiOiJ1c3JfOWYyYTFjM2UifQ')
// {"userId":"usr_9f2a1c3e"}

Mistake 3 β€” Concatenating encoded chunks instead of raw buffers

Problem: btoa() and .toString('base64') pad each output to its own 4-character boundary, so a trailing = can land in the middle of the concatenated string, which is not valid Base64. Fix: concatenate the raw data before encoding.

Before Β· JavaScript
After Β· JavaScript
// ❌ part1 carries trailing '=' padding β€”
//    the combined string is not valid Base64
const part1 = Buffer.from('webhook-secret').toString('base64')
// d2ViaG9vay1zZWNyZXQ=  ← has padding
const part2 = Buffer.from('-v2').toString('base64')
// LXYy                  ← correct in isolation
const combined = part1 + part2 // ❌ Invalid β€” padding in the middle
// βœ… Concatenate raw Buffers before encoding
const combined = Buffer.concat([
  Buffer.from('webhook-secret'),
  Buffer.from('-v2'),
]).toString('base64')
// d2ViaG9vay1zZWNyZXQtdjI= β€” single valid Base64 string

Mistake 4 β€” Using response.text() to read binary API data before encoding

Problem: response.text() interprets the raw bytes as UTF-8 and replaces unrecognised byte sequences with the replacement character U+FFFD. Any binary content β€” images, PDFs, audio β€” is silently corrupted before it reaches btoa(). Fix: use response.arrayBuffer() to get raw bytes.

Before Β· JavaScript
After Β· JavaScript
// ❌ response.text() corrupts binary data
const res     = await fetch('/api/exports/invoice.pdf')
const text    = await res.text()   // ❌ PDF bytes mangled as UTF-8
const encoded = btoa(text)         // ❌ Corrupted Base64
// βœ… arrayBuffer() preserves raw bytes
const res     = await fetch('/api/exports/invoice.pdf')
const buffer  = await res.arrayBuffer()
const bytes   = new Uint8Array(buffer)
const chars   = Array.from(bytes, b => String.fromCharCode(b))
const encoded = btoa(chars.join('')) // βœ… Valid Base64

JavaScript Base64 Methods β€” Quick Comparison

| Method | Unicode | Binary data | URL-safe | Environments | Requires install |
| --- | --- | --- | --- | --- | --- |
| btoa() / atob() | ❌ Latin-1 | ❌ workaround needed | ❌ manual replace | Browser, Node 16+, Bun, Deno | No |
| TextEncoder + btoa() | βœ… UTF-8 | βœ… via Uint8Array | ❌ manual replace | Browser, Node 16+, Deno | No |
| Buffer.from().toString() | βœ… utf8 | βœ… native | βœ… base64url (Node 18+) | Node.js, Bun | No |
| Uint8Array.toBase64() (TC39) | βœ… binary | βœ… native | βœ… alphabet option | Chrome 130+, Node 22+ | No |
| js-base64 | βœ… always | βœ… Uint8Array | βœ… built-in | Universal | npm install |

Choose btoa() only when the input is provably ASCII-only β€” hex digests, numeric IDs, or pre-validated Latin-1 strings. For user-provided text in a browser, use TextEncoder + btoa(). For all Node.js server-side code, Buffer is the right default. For libraries that need to run in both environments without bundler configuration, js-base64 removes all the edge cases.

Frequently Asked Questions

Why does btoa() throw "InvalidCharacterError" on my string?
btoa() only accepts characters with code points in the range 0–255 (Latin-1 / ISO-8859-1). Any character above U+00FF β€” including most Cyrillic, Arabic, CJK ideographs, and many emoji β€” causes a DOMException. The fix depends on your environment: in the browser, encode to UTF-8 bytes with TextEncoder first, convert each byte to a character with String.fromCharCode(), then call btoa(). In Node.js, use Buffer.from(text, 'utf8').toString('base64') which handles Unicode natively.
Is btoa() available in Node.js without any import?
Yes, since Node.js 16.0. Both btoa() and atob() are registered as global functions β€” no import required. They behave identically to their browser counterparts, including the Latin-1 restriction. For Node.js server code, Buffer.from() is still preferred over btoa() because it handles UTF-8 natively, supports binary data without workarounds, and has the 'base64url' encoding option added in Node.js 18.
What is the difference between standard Base64 and URL-safe Base64?
Standard Base64 (RFC 4648 Β§4) uses + for value 62, / for value 63, and = for padding. These characters have special meaning in URLs: + is interpreted as a space in query strings, and / is a path separator. URL-safe Base64 (RFC 4648 Β§5) substitutes - for + and _ for /, and typically omits the = padding entirely. JWTs use URL-safe Base64 for all three segments. In Node.js 18+, Buffer.from(text).toString('base64url') produces the URL-safe format directly.
How do I encode an image to Base64 for a CSS data URI in JavaScript?
In a browser: use file.arrayBuffer() to read the binary, convert to Uint8Array, then call btoa(Array.from(bytes, b => String.fromCharCode(b)).join('')). Build the data URI as 'data:' + file.type + ';base64,' + encoded. In Node.js: const encoded = fs.readFileSync('./image.png').toString('base64') and prepend the MIME type. For SVG files you can often skip Base64 entirely and use a URL-encoded data URI instead, which is more readable and slightly smaller.
Can I Base64-encode and decode without any npm library in the browser?
Yes. For ASCII-only input, btoa() and atob() work directly. For Unicode, the TextEncoder / TextDecoder pair gives you the complete toolset β€” both are built into all modern browsers and Node.js 16+. The only case where a library genuinely adds value is cross-environment consistency: if you write a utility that must work identically in both browser and Node.js without bundler configuration, js-base64 removes the environment detection logic.
How do I decode Base64 content from the GitHub API?
The GitHub Contents API returns the file content as Base64 with embedded newline characters (the API wraps output at 60 chars). Strip them before decoding: const clean = data.content.replace(/\n/g, ''); const text = atob(clean);. In Node.js: const text = Buffer.from(data.content.replace(/\n/g, ''), 'base64').toString('utf8');. GitHub always uses standard Base64 (not URL-safe), so no + β†’ - or / β†’ _ substitution is needed.

For a one-click encode or decode without writing any code, paste your string or binary directly into the Base64 Encoder β€” it handles standard and URL-safe modes instantly in your browser.

Also available in: Python Β· Java
Alex ChenFront-end & Node.js Developer

Alex is a front-end and Node.js developer with extensive experience building web applications and developer tooling. He is passionate about web standards, browser APIs, and the JavaScript ecosystem. In his spare time he contributes to open-source projects and writes about modern JavaScript patterns, performance optimisation, and everything related to the web platform.

Sophie LaurentTechnical Reviewer

Sophie is a full-stack developer focused on TypeScript across the entire stack β€” from React frontends to Express and Fastify backends. She has a particular interest in type-safe API design, runtime validation, and the patterns that make large JavaScript codebases stay manageable. She writes about TypeScript idioms, Node.js internals, and the ever-evolving JavaScript module ecosystem.