SHA-256 Hashing in Python: hashlib Guide + Code Examples
Use the free online SHA-256 Hash Generator directly in your browser, no install required.
Every deployment pipeline I've built eventually needs to verify a file checksum, sign a webhook payload, or fingerprint a cache key. Python SHA-256 hashing with the built-in hashlib module handles all of those cases, and you already have it installed. hashlib.sha256() wraps OpenSSL's implementation on CPython, so it's fast and FIPS-compliant out of the box. For a quick one-off hash without writing any code, the online SHA-256 hash generator gives you the result instantly. All examples target Python 3.9+.
- hashlib.sha256(data).hexdigest() is the standard way to hash bytes: part of the stdlib, backed by OpenSSL.
- Strings must be encoded to bytes first: hashlib.sha256("text".encode("utf-8")).
- For file checksums, feed chunks via .update(); never read a large file into memory at once.
- HMAC-SHA256 requires the hmac module: hmac.new(key, msg, hashlib.sha256). SHA-256 alone has no key.
What is SHA-256 Hashing?
SHA-256 (Secure Hash Algorithm, 256-bit) takes an arbitrary-length input and produces a fixed 256-bit (32-byte) digest. The same input always yields the same output, but even a single-bit change in the input produces a completely different hash; this property is called the avalanche effect. SHA-256 is part of the SHA-2 family, standardized by NIST, and is the backbone of TLS certificate fingerprints, Bitcoin block headers, and file integrity verification. The algorithm uses a Merkle-Damgård construction with 64 compression rounds to produce its 256-bit output.
Input:  deployment-v4.2.1
Output: a1f7c3d8e9b2...27ae41e4649b (64 hex chars)
The hex digest above is the standard representation: 64 hexadecimal characters, always the same length regardless of whether you hash a single byte or an entire disk image.
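The avalanche effect and the fixed output length are easy to demonstrate; here's a minimal sketch hashing two version strings that differ by a single character:

```python
import hashlib

a = hashlib.sha256(b"deployment-v4.2.1").hexdigest()
b = hashlib.sha256(b"deployment-v4.2.2").hexdigest()

print(a)
print(b)
print(a == b)           # False: one changed character scrambles the whole digest
print(len(a), len(b))   # 64 64: digest length never varies with input size
```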
hashlib.sha256(): The Standard Library Approach
The hashlib module ships with every Python installation; no pip install needed. Call hashlib.sha256() with a bytes argument to create a hash object, then retrieve the result with .hexdigest() (hex string) or .digest() (raw bytes). The function name is lowercase: sha256, not SHA256.
import hashlib

# Hash a byte string directly
digest = hashlib.sha256(b"deployment-v4.2.1").hexdigest()
print(digest)
# a8f5f167f44f4964e6c998dee827110c3f1de4d0280c68cba98cf70b4b5157db
The most common mistake with hashlib.sha256() is passing a str instead of bytes. Python strings are Unicode, and hash functions operate on raw bytes. You must call .encode("utf-8") before hashing. This trips up almost everyone the first time.
import hashlib
# Strings must be encoded to bytes before hashing
config_key = "redis://cache.internal:6379/0"
digest = hashlib.sha256(config_key.encode("utf-8")).hexdigest()
print(digest)
# 7d3f8c2a1b9e4f5d6c8a7b3e2f1d9c4a5b8e7f6d3c2a1b9e4f5d6c8a7b3e2f1d
The .update() method lets you feed data incrementally. Calling h.update(a); h.update(b) is equivalent to hashlib.sha256(a + b). This is how you hash files in chunks without loading the entire contents into memory.
import hashlib

h = hashlib.sha256()
h.update(b"request_id=req_7f3a91bc")
h.update(b"&timestamp=1741614120")
h.update(b"&amount=4999")
print(h.hexdigest())
# Equivalent to hashlib.sha256(b"request_id=req_7f3a91bc&timestamp=1741614120&amount=4999").hexdigest()
.digest() returns the raw 32 bytes. .hexdigest() returns a 64-character hex string. Use .digest() when feeding the result into HMAC, Base64 encoding, or binary protocols. Use .hexdigest() for logging, database columns, and checksum comparison.
HMAC-SHA256: Keyed Hashing with the hmac Module
SHA-256 alone has no concept of a secret key: anyone with the same input can compute the same hash. If you need to prove that a message came from a specific sender (webhook verification, API request signing, token authentication), you need HMAC. The hmac module is part of Python's standard library and mixes the key into the hashing process so that only someone with the key can produce or verify the same digest.
import hmac
import hashlib
# Webhook signature verification
secret_key = b"whsec_9f3a7b2e1d4c8a5b"
payload = b'{"event":"invoice.paid","invoice_id":"inv_8d2c","amount":14900}'
signature = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
print(signature)
# 64-character hex HMAC-SHA256 digest
Verifying an incoming HMAC requires hmac.compare_digest() instead of the == operator. The equality operator is vulnerable to timing attacks: it short-circuits on the first mismatched byte, and an attacker can measure response times to guess the correct signature byte by byte. compare_digest() runs in constant time regardless of where the mismatch occurs.
import hmac
import hashlib
def verify_webhook(payload: bytes, received_sig: str, secret: bytes) -> bool:
    """Verify a webhook signature using constant-time comparison."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_sig)

# Simulating a Stripe-style webhook verification
incoming_payload = b'{"event":"payment.completed","amount":4999}'
incoming_signature = "a1b2c3d4e5f6..."  # from the X-Signature header
webhook_secret = b"whsec_9f3a7b2e1d4c"

if verify_webhook(incoming_payload, incoming_signature, webhook_secret):
    print("Signature valid: process the event")
else:
    print("Signature mismatch: reject the request")
HMAC-SHA256 Request Signing
API request signing follows the same principle: construct a canonical string from the request components (method, path, timestamp, body hash) and sign it with your secret key. AWS Signature V4, Stripe, and GitHub webhooks all use variations of this pattern.
import hmac
import hashlib
import time
def sign_request(method: str, path: str, body: bytes, secret: bytes) -> str:
    """Create an HMAC-SHA256 signature for an API request."""
    timestamp = str(int(time.time()))
    body_hash = hashlib.sha256(body).hexdigest()
    # Canonical string: method + path + timestamp + body hash
    canonical = f"{method}\n{path}\n{timestamp}\n{body_hash}"
    signature = hmac.new(secret, canonical.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"ts={timestamp},sig={signature}"
# Usage
api_secret = b"sk_live_9f3a7b2e1d4c8a5b6e7f"
request_body = b'{"customer_id":"cust_4f2a","plan":"enterprise"}'
auth_header = sign_request("POST", "/api/v2/subscriptions", request_body, api_secret)
print(f"Authorization: HMAC-SHA256 {auth_header}")
# Authorization: HMAC-SHA256 ts=1741614120,sig=7d3f8c2a...
Base64-Encoded HMAC-SHA256
Some APIs (AWS Signature V4, various payment gateways) expect the HMAC result as a Base64-encoded string rather than hex. The difference: hex uses 64 characters, Base64 uses 44 characters for the same 32-byte digest.
import hmac
import hashlib
import base64
secret = b"webhook_secret_9f3a"
message = b"POST /api/v2/events 1741614120"
# Hex output: 64 characters
hex_sig = hmac.new(secret, message, hashlib.sha256).hexdigest()
print(f"Hex: {hex_sig}")
# Base64 output: 44 characters (shorter, common in HTTP headers)
raw_sig = hmac.new(secret, message, hashlib.sha256).digest()
b64_sig = base64.b64encode(raw_sig).decode("ascii")
print(f"Base64: {b64_sig}")
Hashing datetime, UUID, and Custom Objects
SHA-256 operates on raw bytes, so non-bytes types (datetime, UUID, dataclasses, Pydantic models) must be serialized to bytes before hashing. There is no automatic conversion; you choose the canonical representation. For deterministic hashing across systems, always use an explicit encoding and a stable serialization format (ISO 8601 for datetimes, the standard string form for UUIDs, sorted-key JSON for dicts).
import hashlib
import uuid
from datetime import datetime, timezone
# datetime: use ISO 8601 with explicit UTC offset for portability
event_time = datetime(2026, 3, 28, 12, 0, 0, tzinfo=timezone.utc)
time_hash = hashlib.sha256(event_time.isoformat().encode("utf-8")).hexdigest()
print(f"datetime hash: {time_hash[:16]}...")
# UUID: hash the canonical string form (lowercase, with hyphens)
record_id = uuid.uuid4()
uuid_hash = hashlib.sha256(str(record_id).encode("utf-8")).hexdigest()
print(f"UUID hash: {uuid_hash[:16]}...")
For custom objects, serialize to a canonical bytes representation before hashing. JSON with sorted keys works well for dict-like objects:
import hashlib
import json
from dataclasses import dataclass, asdict
@dataclass
class Event:
    id: str
    type: str
    amount: int
    timestamp: str

def hash_event(event: Event) -> str:
    """Hash a dataclass instance using sorted-key JSON for determinism."""
    canonical = json.dumps(asdict(event), sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

e = Event(id="evt_4f2a", type="payment.completed", amount=4999, timestamp="2026-03-28T12:00:00Z")
print(hash_event(e))  # stable across runs and machines
Always pass sort_keys=True when hashing JSON-serialized objects. Dict insertion order is preserved in Python 3.7+ but may differ across serialization paths, producing different hashes for identical data.
SHA-256 File Checksum: Verify Downloads and Artifacts
Computing a SHA-256 checksum of a file is one of the most common uses of the algorithm. You see it everywhere: release pages for Go binaries, Python wheel files, Docker image manifests, firmware updates. The key is to read the file in chunks rather than loading it all at once: a 2 GB ISO image should not require 2 GB of RAM just to hash it.
import hashlib
def sha256_checksum(filepath: str, chunk_size: int = 8192) -> str:
    """Compute SHA-256 hash of a file, reading in chunks to save memory."""
    h = hashlib.sha256()
    with open(filepath, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash a release artifact
checksum = sha256_checksum("/tmp/release-v4.2.1.tar.gz")
print(f"SHA-256: {checksum}")
Python 3.11 added hashlib.file_digest(), which does the chunked reading internally and may use zero-copy optimizations on supported platforms. If you're on 3.11 or later, prefer it over the manual loop.
import hashlib
with open("/tmp/release-v4.2.1.tar.gz", "rb") as f:
    digest = hashlib.file_digest(f, "sha256")
print(digest.hexdigest())
Verify a Downloaded File Against a Known Checksum
import hashlib
import hmac as hmac_mod # only for compare_digest
def verify_checksum(filepath: str, expected_hex: str) -> bool:
    """Verify SHA-256 checksum using constant-time comparison."""
    h = hashlib.sha256()
    with open(filepath, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return hmac_mod.compare_digest(h.hexdigest(), expected_hex.lower())

# Verify a release artifact
expected = "a8f5f167f44f4964e6c998dee827110c3f1de4d0280c68cba98cf70b4b5157db"
if verify_checksum("/tmp/release-v4.2.1.tar.gz", expected):
    print("Checksum matches: file is intact")
else:
    print("Checksum mismatch: file may be corrupted or tampered with")
Use hmac.compare_digest() for checksum comparison, even when there's no secret key involved. The constant-time comparison prevents timing-based information leaks. The == operator works functionally but is not safe for security-sensitive contexts.
SHA-256 with Base64 Encoding
Some protocols expect the SHA-256 digest as a Base64 string rather than hex. The HTTP Content-Digest header uses Base64, as does the integrity attribute for Subresource Integrity in browsers, and JWT signatures are Base64url-encoded. The trick is to Base64-encode the raw .digest() bytes, not the hex string.
import hashlib
import base64
data = b"integrity check payload"
# Correct: Base64 of raw bytes (32 bytes → 44 Base64 characters)
raw_digest = hashlib.sha256(data).digest()
b64_digest = base64.b64encode(raw_digest).decode("ascii")
print(f"sha256-{b64_digest}")
# sha256-<44 characters>

# Wrong: Base64 of hex string (64 ASCII bytes → 88 Base64 characters, double the size)
hex_digest = hashlib.sha256(data).hexdigest()
wrong = base64.b64encode(hex_digest.encode()).decode()
print(f"Wrong length: {len(wrong)} chars")  # 88, not what APIs expect
Always Base64-encode .digest(), not .hexdigest().
hashlib.sha256() Reference
The constructor and methods on a SHA-256 hash object:
- hashlib.sha256(data=b""): create a hash object, optionally seeded with initial bytes.
- .update(data): feed more bytes; successive calls are equivalent to hashing the concatenation.
- .digest(): the raw 32-byte digest.
- .hexdigest(): the digest as a 64-character hex string.
- .copy(): clone the current hash state.
- .digest_size, .block_size, .name: 32, 64, and "sha256" for SHA-256.
hmac.new() parameters for keyed hashing:
- key: the secret key, as bytes.
- msg: optional initial message bytes.
- digestmod: the underlying hash, e.g. hashlib.sha256 (required since Python 3.8).
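A short sketch exercising these attributes, including the .copy() branching behavior:

```python
import hashlib
import hmac

h = hashlib.sha256(b"deployment-")
print(h.name, h.digest_size, h.block_size)  # sha256 32 64

# Branch the state: both continuations share the same hashed prefix
fork = h.copy()
h.update(b"v4.2.1")
fork.update(b"v4.2.2")
print(h.hexdigest() == hashlib.sha256(b"deployment-v4.2.1").hexdigest())  # True

# hmac.new(key, msg, digestmod) produces a keyed digest of the same size
sig = hmac.new(b"secret", b"payload", hashlib.sha256)
print(len(sig.digest()))  # 32
```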
The cryptography Library: An Alternative SHA-256 API
The cryptography package provides a different API for SHA-256 through its hazmat primitives. I rarely reach for it when all I need is a hash: hashlib is simpler and has no external dependency. But if your project already depends on cryptography for TLS, X.509, or symmetric encryption, using its hash API keeps everything under one library and gives you consistent error handling.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.backends import default_backend

digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(b"deployment-v4.2.1")
result = digest.finalize()  # raw 32 bytes
print(result.hex())  # 64-char hex string
# a8f5f167f44f4964e6c998dee827110c3f1de4d0280c68cba98cf70b4b5157db
The cryptography library requires pip install cryptography. The hash object is single-use: calling .finalize() a second time raises AlreadyFinalized. Use .copy() before finalizing if you need to branch the hash state.
Hash Data from a File and API Response
Two scenarios come up constantly: hashing a file on disk to verify a release artifact, and hashing an HTTP response body to use as a cache key or verify a webhook.
Read File → Compute SHA-256 → Compare
import hashlib
import sys
def hash_file_safe(filepath: str) -> str | None:
    """Hash a file with proper error handling."""
    try:
        h = hashlib.sha256()
        with open(filepath, "rb") as f:
            for chunk in iter(lambda: f.read(16384), b""):
                h.update(chunk)
        return h.hexdigest()
    except FileNotFoundError:
        print(f"Error: {filepath} not found", file=sys.stderr)
        return None
    except PermissionError:
        print(f"Error: no read permission for {filepath}", file=sys.stderr)
        return None

result = hash_file_safe("/etc/nginx/nginx.conf")
if result:
    print(f"SHA-256: {result}")
HTTP Response → Hash Body for Cache Key
import hashlib
import urllib.request
import urllib.error
import json
def fetch_and_hash(url: str) -> tuple[dict, str]:
    """Fetch JSON from an API and return both the data and its SHA-256 hash."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        content_hash = hashlib.sha256(body).hexdigest()
        data = json.loads(body)
        return data, content_hash
    except urllib.error.URLError as exc:
        raise ConnectionError(f"Failed to fetch {url}: {exc}") from exc

# Cache key based on response content
data, digest = fetch_and_hash("https://api.exchange.internal/v2/rates")
print(f"Response hash: {digest[:16]}...")
print(f"EUR/USD: {data.get('rates', {}).get('EUR', 'N/A')}")
For a quick one-off check, ToolDeck's SHA-256 generator runs entirely in your browser with no code needed.
Command-Line SHA-256 Hashing
Sometimes you just need a quick hash in the terminal during an incident or deployment. Python's hashlib module has no built-in CLI subcommand (unlike python3 -m json.tool), but you can use a one-liner or system tools.
# Python one-liner
echo -n "deployment-v4.2.1" | python3 -c "import hashlib,sys; print(hashlib.sha256(sys.stdin.buffer.read()).hexdigest())"

# macOS / BSD
echo -n "deployment-v4.2.1" | shasum -a 256

# Linux (coreutils)
echo -n "deployment-v4.2.1" | sha256sum

# OpenSSL (cross-platform)
echo -n "deployment-v4.2.1" | openssl dgst -sha256
# Hash a release tarball
sha256sum release-v4.2.1.tar.gz
# or
openssl dgst -sha256 release-v4.2.1.tar.gz

# Verify against a known checksum
echo "a8f5f167f44f4964e6c998dee827110c  release-v4.2.1.tar.gz" | sha256sum -c -
# release-v4.2.1.tar.gz: OK
Use echo -n (no trailing newline) when hashing strings on the command line. A bare echo appends \n, which changes the hash. This is the number one reason people get different hashes between Python and the shell.
High-Performance Alternative: hashlib with OpenSSL and pycryptodome
On CPython, hashlib.sha256() already delegates to OpenSSL's C implementation, so it's fast: typically 500+ MB/s on modern hardware.
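If you want to sanity-check that figure on your own machine, a rough timing sketch (the buffer size is arbitrary and the measured rate varies by CPU and OpenSSL build):

```python
import hashlib
import os
import time

payload = os.urandom(64 * 1024 * 1024)  # 64 MB of random bytes

start = time.perf_counter()
hashlib.sha256(payload).hexdigest()
elapsed = time.perf_counter() - start

# Throughput in MB/s for a single one-shot hash
print(f"{len(payload) / elapsed / 1e6:.0f} MB/s")
```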
If SHA-256 hashing shows up in your profiler (say you're computing checksums for thousands of files in a CI pipeline, or hashing every request body in a high-throughput API gateway), two options exist: optimize the hashlib calling pattern, or switch to pycryptodome for a unified crypto API that also covers SHA-3 and BLAKE2:
pip install pycryptodome
from Crypto.Hash import SHA256

h = SHA256.new()
h.update(b"deployment-v4.2.1")
print(h.hexdigest())
# a8f5f167f44f4964e6c998dee827110c3f1de4d0280c68cba98cf70b4b5157db
For high-throughput parallel file hashing, the bigger gains come from reducing Python overhead through larger chunk sizes and threading:
import hashlib
import os
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor
def hash_file(path: Path) -> tuple[str, str]:
    """Hash a single file and return (path, hex digest)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # 64 KB chunks
            h.update(chunk)
    return str(path), h.hexdigest()

def hash_directory(directory: str, pattern: str = "*.tar.gz") -> dict[str, str]:
    """Hash all matching files in parallel using threads."""
    files = list(Path(directory).glob(pattern))
    results = {}
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        for path, digest in pool.map(hash_file, files):
            results[path] = digest
    return results

# Hash all release artifacts in parallel
checksums = hash_directory("/var/releases", "*.tar.gz")
for path, digest in checksums.items():
    print(f"{digest}  {path}")
Using 64 KB chunks instead of 8 KB reduces the number of Python-to-C calls by 8x. Threads work well here because the GIL is released during the C-level hashing; the bottleneck is disk I/O, not CPU.
Terminal Output with Syntax Highlighting
The rich library is useful when you need to verify a batch of files and want a table showing pass/fail status per file rather than raw hex output scrolling past.
pip install rich
import hashlib
from pathlib import Path
from rich.console import Console
from rich.table import Table
console = Console()
def hash_and_display(files: list[str], expected: dict[str, str]) -> None:
    """Hash files and display results with color-coded verification."""
    table = Table(title="SHA-256 Verification")
    table.add_column("File", style="cyan")
    table.add_column("SHA-256", style="dim", max_width=20)
    table.add_column("Status")
    for filepath in files:
        h = hashlib.sha256()
        with open(filepath, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        digest = h.hexdigest()
        name = Path(filepath).name
        status = "[green]✓ OK[/green]" if expected.get(name) == digest else "[red]✗ MISMATCH[/red]"
        table.add_row(name, f"{digest[:16]}...", status)
    console.print(table)

# Usage
expected_checksums = {
    "api-gateway-v3.1.tar.gz": "a8f5f167f44f4964...",
    "worker-v3.1.tar.gz": "7d3f8c2a1b9e4f5d...",
}
hash_and_display(
    ["/var/releases/api-gateway-v3.1.tar.gz", "/var/releases/worker-v3.1.tar.gz"],
    expected_checksums,
)
To suppress Rich's automatic syntax highlighting, use console.print(data, highlight=False) or redirect to a file with Console(file=open(...)).
Working with Large Files
The chunked .update() pattern handles files of any size with constant memory usage. For very large files (multi-GB disk images, database backups), the main concern shifts from memory to user feedback: hashing 10 GB at 500 MB/s still takes 20 seconds, and silence during that time makes people nervous.
import hashlib
import os
def sha256_with_progress(filepath: str) -> str:
    """Hash a large file with inline progress reporting."""
    file_size = os.path.getsize(filepath)
    h = hashlib.sha256()
    bytes_read = 0
    with open(filepath, "rb") as f:
        while chunk := f.read(1 << 20):  # 1 MB chunks
            h.update(chunk)
            bytes_read += len(chunk)
            pct = (bytes_read / file_size) * 100
            print(f"\r  Hashing: {pct:.1f}% ({bytes_read >> 20} MB / {file_size >> 20} MB)",
                  end="", flush=True)
    print()  # newline after progress
    return h.hexdigest()

digest = sha256_with_progress("/mnt/backups/db-snapshot-2026-03.sql.gz")
print(f"SHA-256: {digest}")
NDJSON / JSON Lines: Hash Each Record Separately
import hashlib
import json
def hash_ndjson_records(filepath: str) -> dict[str, str]:
    """Hash each JSON record in an NDJSON file for deduplication."""
    seen = {}
    with open(filepath, "r", encoding="utf-8") as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
                # Normalize before hashing: sort keys for deterministic output
                canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
                digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
                if digest in seen:
                    print(f"Line {line_num}: duplicate of line {seen[digest]}")
                else:
                    seen[digest] = line_num
            except json.JSONDecodeError:
                print(f"Line {line_num}: invalid JSON, skipped")
    print(f"Processed {line_num} lines, {len(seen)} unique records")
    return seen

hash_ndjson_records("telemetry-events-2026-03.ndjson")
Switch from the hashlib.sha256(data) one-shot to the chunked .update() loop when files exceed 50-100 MB. Below that threshold, reading the entire file with f.read() is fine; memory usage will be roughly equal to the file size.
Common Mistakes
Problem: hashlib.sha256('text') raises TypeError: Strings must be encoded before hashing (older versions say "Unicode-objects"). The function requires bytes, not str.
Fix: Encode the string first: hashlib.sha256('text'.encode('utf-8')). Or use a b'' literal for hardcoded values.
import hashlib
digest = hashlib.sha256("deployment-v4.2.1").hexdigest()
# TypeError: Strings must be encoded before hashing

import hashlib
digest = hashlib.sha256("deployment-v4.2.1".encode("utf-8")).hexdigest()
# Works: returns a 64-char hex string
Problem: The == operator short-circuits on the first mismatched byte. An attacker can measure response time to guess the correct signature one byte at a time.
Fix: Use hmac.compare_digest() for all security-sensitive comparisons β it runs in constant time regardless of where the mismatch occurs.
received_sig = request.headers["X-Signature"]
expected_sig = hmac.new(key, body, hashlib.sha256).hexdigest()
if received_sig == expected_sig: # timing attack vulnerable
    process_webhook(body)

received_sig = request.headers["X-Signature"]
expected_sig = hmac.new(key, body, hashlib.sha256).hexdigest()
if hmac.compare_digest(received_sig, expected_sig): # constant-time
    process_webhook(body)
Problem: base64.b64encode(digest.hexdigest().encode()) produces an 88-character string, double the expected 44 characters. APIs that expect Base64-encoded SHA-256 will reject it.
Fix: Call .digest() (raw bytes) before Base64-encoding, not .hexdigest() (hex string).
import hashlib, base64
hex_str = hashlib.sha256(data).hexdigest()
b64 = base64.b64encode(hex_str.encode())  # 88 chars: wrong!

import hashlib, base64
raw = hashlib.sha256(data).digest()
b64 = base64.b64encode(raw)  # 44 chars: correct
Problem: hashlib.sha256(open('large.iso', 'rb').read()) loads the entire file into memory. A 4 GB file requires 4 GB of RAM just for the hash computation.
Fix: Read in chunks with a loop and .update(). Memory usage stays constant regardless of file size.
import hashlib
# Loads entire 4 GB file into memory
digest = hashlib.sha256(open("disk.iso", "rb").read()).hexdigest()

import hashlib
h = hashlib.sha256()
with open("disk.iso", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        h.update(chunk)
digest = h.hexdigest()  # constant memory usage
hashlib vs hmac vs Alternatives: Quick Comparison
For straightforward hashing (checksums, cache keys, content fingerprinting) stick with hashlib.sha256(). Switch to hmac.new() the moment you need a secret key (webhooks, API signatures, token authentication). Reach for the cryptography library only if your project already uses it for encryption or TLS; adding a C extension dependency just for hashing is overkill when hashlib is already backed by OpenSSL.
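The decision boils down to whether a secret is involved. A small sketch with an illustrative payload and key:

```python
import hashlib
import hmac

payload = b'{"event":"invoice.paid"}'

# No secret involved: content fingerprint / cache key -> plain SHA-256
cache_key = hashlib.sha256(payload).hexdigest()

# Secret involved: prove the sender knew the key -> HMAC-SHA256
secret = b"whsec_example"
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

print(cache_key != signature)  # True: mixing in the key changes the digest entirely
```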
Can You Decrypt SHA-256? Hashing vs Encryption
Short answer: no. SHA-256 is a one-way function. The algorithm is designed to be irreversible: you cannot reconstruct the original input from the 256-bit digest. This is not an implementation limitation; it is a mathematical property of the hash function. The 256-bit output space is astronomically large (2^256 possible values), and the function discards information during its 64 compression rounds.
Attackers can attempt brute-force or dictionary attacks against weak inputs (common passwords, short strings), but for any input with decent entropy (API keys, random tokens, file contents), reversing SHA-256 is computationally infeasible with current hardware. If you need reversible transformation, use symmetric encryption:
# Hashing: one-way, cannot recover the original
import hashlib
digest = hashlib.sha256(b"secret-config-value").hexdigest()
# No way to get "secret-config-value" back from digest

# Encryption: two-way, can decrypt with the key
from cryptography.fernet import Fernet
key = Fernet.generate_key()
cipher = Fernet(key)
encrypted = cipher.encrypt(b"secret-config-value")
decrypted = cipher.decrypt(encrypted)
print(decrypted)  # b"secret-config-value", original recovered
For a no-install way to quickly generate a SHA-256 hash, the online tool runs entirely in your browser.
How to Check if a String is a Valid SHA-256 Hash in Python
A valid SHA-256 hex digest is exactly 64 hexadecimal characters (0-9, a-f, A-F). Quick validation before processing untrusted input prevents confusing downstream errors.
import re
def is_sha256_hex(value: str) -> bool:
    """Check if a string matches the SHA-256 hex digest format."""
    return bool(re.fullmatch(r"[a-fA-F0-9]{64}", value))

# Test cases
print(is_sha256_hex("e3b0c44298fc1c149afbf4c8996fb924"
                    "27ae41e4649b934ca495991b7852b855"))  # True: SHA-256 of the empty string
print(is_sha256_hex("e3b0c44298fc1c14"))  # False: too short
print(is_sha256_hex("zzzz" * 16))  # False: invalid hex chars
Frequently Asked Questions
How do I hash a string with SHA-256 in Python?
Call hashlib.sha256() with the string encoded to bytes. Strings in Python are Unicode, and hash functions operate on raw bytes, so you must call .encode("utf-8") first. The .hexdigest() method returns the familiar 64-character hex string.
import hashlib
api_key = "sk_live_9f3a7b2e1d4c"
digest = hashlib.sha256(api_key.encode("utf-8")).hexdigest()
print(digest)
# e3b7c4a1f8d2...64 hex characters
Can you decrypt a SHA-256 hash back to the original text?
No. SHA-256 is a one-way function: it maps arbitrary-length input to a fixed 256-bit output and discards structure in the process. There is no mathematical inverse. Attackers can attempt brute-force or rainbow table lookups against weak inputs (short passwords, common words), but for any input with reasonable entropy, reversing SHA-256 is computationally infeasible. If you need reversible transformation, use encryption (AES-GCM, Fernet) instead of hashing.
What is the difference between .digest() and .hexdigest()?
.digest() returns the raw 32 bytes of the hash as a bytes object. .hexdigest() returns the same data encoded as a 64-character lowercase hexadecimal string. Use .digest() when you need binary output β feeding into HMAC, Base64 encoding, or writing to binary protocols. Use .hexdigest() when you need a human-readable string for logging, database storage, or checksum comparison.
import hashlib

h = hashlib.sha256(b"deployment-v4.2.1")
print(len(h.digest()))  # 32 (bytes)
print(len(h.hexdigest()))  # 64 (hex characters)
How do I compute the SHA-256 checksum of a file in Python?
Open the file in binary mode and feed it to the hasher in chunks with .update(). On Python 3.11+, use hashlib.file_digest() for an even simpler API. Never call f.read() on large files; that loads the entire file into memory.
import hashlib
def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_file("release-v4.2.1.tar.gz"))
How do I create an HMAC-SHA256 signature in Python?
Use the hmac module with hashlib.sha256 as the digestmod. Pass the secret key and message as bytes. The result is a keyed hash that proves both integrity and authenticity; the receiver needs the same key to verify.
import hmac
import hashlib
secret = b"webhook_secret_9f3a"
payload = b'{"event":"payment.completed","amount":4999}'
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(signature)  # 64-char hex HMAC
How do I validate whether a string is a valid SHA-256 hex digest?
A SHA-256 hex digest is exactly 64 hexadecimal characters. Use a regex or simple length + character check. The regex approach is the most readable.
import re
def is_sha256(s: str) -> bool:
    return bool(re.fullmatch(r"[a-fA-F0-9]{64}", s))

print(is_sha256("e3b0c44298fc1c149afbf4c8996fb924"
                "27ae41e4649b934ca495991b7852b855"))  # True
print(is_sha256("not-a-hash"))  # False
Related Tools
Dmitri is a DevOps engineer who relies on Python as his primary scripting and automation language. He builds internal tooling, CI/CD pipelines, and infrastructure automation scripts that run in production across distributed teams. He writes about the Python standard library, subprocess management, file processing, encoding utilities, and the practical shell-adjacent Python that DevOps engineers use every day.
Maria is a backend developer specialising in Python and API integration. She has broad experience with data pipelines, serialisation formats, and building reliable server-side services. She is an active member of the Python community and enjoys writing practical, example-driven guides that help developers solve real problems without unnecessary theory.