Convert CSV to JSON in Python – json.dumps() Guide

Backend Developer · Reviewed by Priya Sharma · Published

Use the free online CSV to JSON converter directly in your browser – no install required.

Try CSV to JSON Online →

CSV files show up everywhere – exported reports, database dumps, log extracts – and sooner or later you need to convert that CSV to JSON in Python. The standard library handles this with two modules: csv.DictReader turns each row into a Python dict, and json.dumps() serializes those dicts to a JSON string. For a quick one-off conversion without code, the CSV to JSON converter does it instantly in the browser. This guide covers the full programmatic path: json.dump() vs json.dumps(), writing JSON to files, dataclass serialization, type coercion for CSV values, handling datetime and Decimal, and high-performance alternatives like orjson. All examples target Python 3.10+.

  • ✓ csv.DictReader produces a list of dicts – serialize the full list with json.dump(rows, f, indent=2) to write a JSON file.
  • ✓ json.dump() writes directly to a file object. json.dumps() returns a string. Pick the right one and you avoid an unnecessary copy.
  • ✓ CSV values are always strings. Cast numeric columns explicitly (int(), float()) before serializing to JSON.
  • ✓ Pass ensure_ascii=False to json.dumps() to preserve Unicode characters – accented names, CJK text – in the output.
  • ✓ For datetime, UUID, or Decimal from CSV, use the default= parameter with a custom fallback function.
Before · CSV
After · JSON
order_id,product,quantity,price
ORD-7291,Wireless Keyboard,2,49.99
ORD-7292,USB-C Hub,1,34.50
[
  {
    "order_id": "ORD-7291",
    "product": "Wireless Keyboard",
    "quantity": "2",
    "price": "49.99"
  },
  {
    "order_id": "ORD-7292",
    "product": "USB-C Hub",
    "quantity": "1",
    "price": "34.50"
  }
]
Note: Notice that quantity and price appear as JSON strings ("2", "49.99") in the raw output. CSV has no type system – every value is a string. Fixing this is covered in the type coercion section below.

json.dumps() – Serialize a Python Dict to a JSON String

The json module ships with every Python installation – no pip install required. json.dumps(obj) takes a Python object (dict, list, string, number, bool, or None) and returns a str containing valid JSON. A Python dictionary looks similar to a JSON object, but they are fundamentally different: a dict is a Python data structure in memory, and a JSON string is serialized text. Calling json.dumps() bridges that gap.

Minimal Example – Single CSV Row to JSON

Python 3.10+
import json

# A single CSV row represented as a Python dict
server_entry = {
    "hostname": "web-prod-03",
    "ip_address": "10.0.12.47",
    "port": 8080,
    "region": "eu-west-1"
}

# Convert dict to JSON string
json_string = json.dumps(server_entry)
print(json_string)
# {"hostname": "web-prod-03", "ip_address": "10.0.12.47", "port": 8080, "region": "eu-west-1"}
print(type(json_string))
# <class 'str'>

That produces compact, single-line JSON – good for payloads and storage, terrible for reading. Add indent=2 to get human-readable output:

Python 3.10+ – pretty-printed output
import json

server_entry = {
    "hostname": "web-prod-03",
    "ip_address": "10.0.12.47",
    "port": 8080,
    "region": "eu-west-1"
}

pretty_json = json.dumps(server_entry, indent=2)
print(pretty_json)
# {
#   "hostname": "web-prod-03",
#   "ip_address": "10.0.12.47",
#   "port": 8080,
#   "region": "eu-west-1"
# }

Two more parameters I use on nearly every call: sort_keys=True alphabetizes dictionary keys (great for diffing JSON files across versions), and ensure_ascii=False preserves non-ASCII characters instead of escaping them to \uXXXX sequences.

Python 3.10+ – sort_keys and ensure_ascii
import json

warehouse_record = {
    "sku": "WH-9031",
    "location": "München Lager 3",
    "quantity": 240,
    "last_audit": "2026-03-10"
}

output = json.dumps(warehouse_record, indent=2, sort_keys=True, ensure_ascii=False)
print(output)
# {
#   "last_audit": "2026-03-10",
#   "location": "München Lager 3",
#   "quantity": 240,
#   "sku": "WH-9031"
# }

Quick note on the separators parameter: when indent is None, the default is (", ", ": "), which adds a space after each comma and colon. For the most compact possible output (useful when embedding JSON in URL parameters or squeezing bytes out of API responses), pass separators=(",", ":").
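A short comparison (with a hypothetical database-config dict) makes the savings concrete:

```python
import json

db_config = {"host": "db-primary", "port": 5432, "replicas": 3}

# Default separators insert a space after every comma and colon
default_out = json.dumps(db_config)
# Compact separators strip every optional space
compact_out = json.dumps(db_config, separators=(",", ":"))

print(default_out)  # {"host": "db-primary", "port": 5432, "replicas": 3}
print(compact_out)  # {"host":"db-primary","port":5432,"replicas":3}
```

Five bytes saved on a three-key dict is trivial; across millions of rows it adds up.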

Note: A Python dict and a JSON object look almost identical when printed. The difference: json.dumps() converts Python True to JSON true, None to null, and wraps strings in double quotes (Python allows single quotes, JSON does not). Always use json.dumps() to produce valid JSON – do not rely on str() or repr().

csv.DictReader to JSON File – The Complete Pipeline

The most common real-world task is reading an entire CSV file and saving it as JSON. Here is the end-to-end script in under 10 lines. csv.DictReader produces an iterator of dict objects – one per row, using the first line as keys. Wrapping it in list() collects all rows into a Python list, which serializes to a JSON array.

Python 3.10+ – full CSV to JSON conversion
import csv
import json

# Step 1: Read CSV rows into a list of dicts
with open("inventory.csv", "r", encoding="utf-8") as csv_file:
    rows = list(csv.DictReader(csv_file))

# Step 2: Write the list as a JSON file
with open("inventory.json", "w", encoding="utf-8") as json_file:
    json.dump(rows, json_file, indent=2, ensure_ascii=False)

print(f"Converted {len(rows)} rows to inventory.json")

Two open() calls: one for reading the CSV, one for writing the JSON. That is the whole pattern. Notice this uses json.dump() (without the s) – it writes directly to the file handle. Using json.dumps() would return a string that you would then need to write separately with f.write(). json.dump() is more memory-efficient because it streams the output instead of building the entire string in memory first.

When you need the JSON as a string rather than a file – for embedding in an API payload, printing to stdout, or inserting into a database column – switch to json.dumps():

Python 3.10+ – CSV rows as JSON string
import csv
import json

with open("sensors.csv", "r", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Get the JSON as a string instead of writing to a file
json_payload = json.dumps(rows, indent=2)
print(json_payload)
# [
#   {
#     "sensor_id": "TMP-4401",
#     "location": "Building 7 - Floor 2",
#     "reading": "22.4",
#     "unit": "celsius"
#   },
#   ...
# ]

Single row vs. full dataset: if you call json.dumps(single_dict) you get a JSON object ({...}). Call json.dumps(list_of_dicts) and you get a JSON array ([{...}, {...}]). The outer container shape depends on what you pass in. Most downstream consumers expect an array for tabular data.
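A minimal sketch of the shape difference, using a made-up sensor row:

```python
import json

row = {"sensor_id": "TMP-4401", "reading": "22.4"}

# A single dict becomes a JSON object
print(json.dumps(row))
# {"sensor_id": "TMP-4401", "reading": "22.4"}

# A list of dicts becomes a JSON array - even with one element
print(json.dumps([row]))
# [{"sensor_id": "TMP-4401", "reading": "22.4"}]
```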

Handling Non-String Values – Type Coercion from CSV

Here is the thing that catches everyone the first time: csv.DictReader returns every value as a string. The number 42 in your CSV becomes the string "42" in the dict. If you serialize that directly with json.dumps(), your JSON will have "quantity": "42" instead of "quantity": 42. APIs that validate types will reject this. You need to cast values explicitly.

Python 3.10+ – type coercion for CSV rows
import csv
import json

def coerce_types(row: dict) -> dict:
    """Convert string values to appropriate Python types."""
    return {
        "sensor_id": row["sensor_id"],
        "location": row["location"],
        "temperature": float(row["temperature"]),
        "humidity": float(row["humidity"]),
        "battery_pct": int(row["battery_pct"]),
        "active": row["active"].lower() == "true",
    }

with open("sensor_readings.csv", "r", encoding="utf-8") as f:
    rows = [coerce_types(row) for row in csv.DictReader(f)]

print(json.dumps(rows[0], indent=2))
# {
#   "sensor_id": "TMP-4401",
#   "location": "Building 7 - Floor 2",
#   "temperature": 22.4,
#   "humidity": 58.3,
#   "battery_pct": 87,
#   "active": true
# }

Now temperature is a float, battery_pct is an integer, and active is a boolean in the JSON output. The coercion function is specific to your CSV schema – there is no generic way to guess types from CSV data, so I write one function per CSV format.
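If you only need a rough general-purpose fallback, one option is a best-effort cast that tries int, then float, then keeps the string. The auto_coerce helper below is a hypothetical sketch, not a library function – it will cheerfully turn ZIP codes or IDs with leading zeros into integers, so a per-schema function remains the safer choice:

```python
import csv
import io
import json

def auto_coerce(value: str):
    """Best-effort cast: int, then float, then leave the string alone."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

# Inline CSV for a self-contained demo
csv_text = "sensor_id,temperature,battery_pct\nTMP-4401,22.4,87\n"
rows = [
    {key: auto_coerce(val) for key, val in row.items()}
    for row in csv.DictReader(io.StringIO(csv_text))
]
print(json.dumps(rows[0]))
# {"sensor_id": "TMP-4401", "temperature": 22.4, "battery_pct": 87}
```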

Serializing Custom Objects and Non-Standard Types

Python's json module cannot serialize datetime, UUID, Decimal, or custom classes out of the box. Calling json.dumps() on any of these raises a TypeError. Two approaches handle this.

Approach 1: The default= Parameter

Pass a function to default= that converts unknown types to something serializable. This function is called only for objects that the JSON encoder does not know how to handle.

Python 3.10+ – default= for datetime, UUID, Decimal
import json
from datetime import datetime
from decimal import Decimal
from uuid import UUID

def json_serial(obj):
    """Fallback serializer for non-standard types."""
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, UUID):
        return str(obj)
    if isinstance(obj, Decimal):
        return float(obj)
    raise TypeError(f"Type {type(obj).__name__} is not JSON serializable")

transaction = {
    "txn_id": UUID("a1b2c3d4-e5f6-7890-abcd-ef1234567890"),
    "amount": Decimal("149.99"),
    "currency": "EUR",
    "processed_at": datetime(2026, 3, 15, 14, 30, 0),
    "gateway": "stripe",
}

print(json.dumps(transaction, indent=2, default=json_serial))
# {
#   "txn_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
#   "amount": 149.99,
#   "currency": "EUR",
#   "processed_at": "2026-03-15T14:30:00",
#   "gateway": "stripe"
# }
Warning: Always raise TypeError at the end of your default= function for unrecognized types. If you return None or silently skip them, you get null in the output with no indication that data was lost.

Approach 2: Dataclasses with asdict()

Python dataclasses give your CSV rows a proper type definition. Use dataclasses.asdict() to convert a dataclass instance to a plain dict, then pass it to json.dumps().

Python 3.10+ – dataclass serialization
import json
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class ShipmentRecord:
    tracking_id: str
    origin: str
    destination: str
    weight_kg: float
    shipped_at: datetime

def json_serial(obj):
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Not serializable: {type(obj).__name__}")

shipment = ShipmentRecord(
    tracking_id="SHP-9827",
    origin="Rotterdam",
    destination="Singapore",
    weight_kg=1240.5,
    shipped_at=datetime(2026, 3, 12, 8, 0, 0),
)

print(json.dumps(asdict(shipment), indent=2, default=json_serial))
# {
#   "tracking_id": "SHP-9827",
#   "origin": "Rotterdam",
#   "destination": "Singapore",
#   "weight_kg": 1240.5,
#   "shipped_at": "2026-03-12T08:00:00"
# }
Note: asdict() recursively converts nested dataclasses into dicts. If your dataclass contains a list of other dataclasses, the whole tree gets converted – no extra code needed.
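For example, a sketch with a hypothetical Order dataclass holding a list of LineItem entries – a single asdict() call converts both levels:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class LineItem:
    sku: str
    qty: int

@dataclass
class Order:
    order_id: str
    items: list[LineItem] = field(default_factory=list)

order = Order("ORD-7291", [LineItem("WH-9031", 2), LineItem("KB-1100", 1)])

# asdict() recurses into the nested dataclasses automatically
print(json.dumps(asdict(order)))
# {"order_id": "ORD-7291", "items": [{"sku": "WH-9031", "qty": 2}, {"sku": "KB-1100", "qty": 1}]}
```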

json.dumps() Parameters Reference

Full list of keyword arguments accepted by json.dumps() and json.dump(). Both functions accept the same keyword arguments – json.dump() additionally takes the file object as its second positional argument.

  • obj (Any, required) – the Python object to serialize: dict, list, str, int, float, bool, or None
  • indent (int | str | None, default None) – number of spaces (or a string) per indentation level; None produces compact single-line output
  • sort_keys (bool, default False) – sort dictionary keys alphabetically in the output
  • ensure_ascii (bool, default True) – escape all non-ASCII characters as \uXXXX; set False to emit UTF-8 directly
  • default (Callable | None, default None) – function called for objects not serializable by default; return a serializable value or raise TypeError
  • separators (tuple[str, str] | None, default None) – override (item_separator, key_separator); use (",", ":") for compact output with no spaces
  • skipkeys (bool, default False) – skip dict keys that are not str, int, float, bool, or None instead of raising TypeError
  • allow_nan (bool, default True) – allow float("nan"), float("inf"), float("-inf"); set False to raise ValueError on these values
  • cls (Type[JSONEncoder] | None, default None) – custom JSONEncoder subclass to use instead of the default

csv.DictReader β€” Reading CSV into Python Dicts

csv.DictReader is the other half of the CSV-to-JSON pipeline. It wraps a file object and yields one dict per row, using the first line as field names. Compared to csv.reader (which yields plain lists), DictReader gives you named access to columns – no magic indexes like row[3].
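A side-by-side sketch with a small inline CSV (via io.StringIO) shows the difference:

```python
import csv
import io

csv_text = "sku,location,qty\nWH-9031,Aisle 4,240\n"

# csv.reader: positional access, header row included as row 0
plain = list(csv.reader(io.StringIO(csv_text)))
print(plain[1][2])  # 240 - but which column was 2 again?

# csv.DictReader: named access, header row consumed automatically
named = list(csv.DictReader(io.StringIO(csv_text)))
print(named[0]["qty"])  # 240 - self-documenting
```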

Python 3.10+ – DictReader with custom delimiter
import csv
import json

# Tab-separated file from a database export
with open("user_sessions.tsv", "r", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t")
    sessions = list(reader)

print(json.dumps(sessions[:2], indent=2))
# [
#   {
#     "session_id": "sess_8f2a91bc",
#     "user_id": "usr_4421",
#     "started_at": "2026-03-15T09:12:00Z",
#     "duration_sec": "342",
#     "pages_viewed": "7"
#   },
#   {
#     "session_id": "sess_3c7d44ef",
#     "user_id": "usr_1187",
#     "started_at": "2026-03-15T09:14:22Z",
#     "duration_sec": "128",
#     "pages_viewed": "3"
#   }
# ]
Warning: csv.DictReader reads the file lazily – it yields rows one at a time. Calling list(reader) loads all rows into memory. For files with millions of rows, process rows in a streaming fashion instead of collecting them all.

Convert CSV from a File and API Response

Two production scenarios: reading a CSV file from disk and converting it, and fetching CSV data from an API endpoint (plenty of reporting services return CSV). Both need proper error handling.

Read CSV File → Convert → Write JSON

Python 3.10+ – file conversion with error handling
import csv
import json
import sys

def csv_to_json_file(csv_path: str, json_path: str) -> int:
    """Convert a CSV file to JSON. Returns the number of rows written."""
    try:
        with open(csv_path, "r", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))
    except FileNotFoundError:
        print(f"Error: {csv_path} not found", file=sys.stderr)
        sys.exit(1)
    except csv.Error as e:
        print(f"CSV parse error in {csv_path}: {e}", file=sys.stderr)
        sys.exit(1)

    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2, ensure_ascii=False)

    return len(rows)

count = csv_to_json_file("fleet_vehicles.csv", "fleet_vehicles.json")
print(f"Wrote {count} records to fleet_vehicles.json")

Fetch CSV from API → Parse → JSON

Python 3.10+ – API CSV response to JSON
import csv
import io
import json
import urllib.error
import urllib.request

def fetch_csv_as_json(url: str) -> str:
    """Fetch CSV from a URL and return it as a JSON string."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            raw = resp.read().decode("utf-8")
    except urllib.error.URLError as e:
        raise RuntimeError(f"Failed to fetch {url}: {e}") from e

    reader = csv.DictReader(io.StringIO(raw))
    rows = list(reader)

    if not rows:
        raise ValueError("CSV response was empty or had no data rows")

    return json.dumps(rows, indent=2, ensure_ascii=False)

# Example: export endpoint that returns CSV
try:
    result = fetch_csv_as_json("https://reports.internal/api/v2/daily-metrics.csv")
    print(result)
except (RuntimeError, ValueError) as e:
    print(f"Error: {e}")

Both examples use explicit encoding="utf-8" on every file open. This matters for CSV files with non-ASCII characters – accented names, addresses with special characters, CJK text. Without explicit encoding, Python falls back to the system default, which on Windows is often cp1252 and will silently garble multibyte characters.

Verifying JSON Output with json.loads()

After converting CSV to a JSON string, you can verify the result by parsing it back with json.loads(). This round-trip catches encoding issues, broken escape sequences, or accidental string concatenation that would produce invalid JSON. Wrap the call in a try/except block.

Python 3.10+ – round-trip validation
import json

json_string = json.dumps({"order_id": "ORD-7291", "total": 129.99})

# Verify it is valid JSON by parsing it back
try:
    parsed = json.loads(json_string)
    print(f"Valid JSON with {len(parsed)} keys")
except json.JSONDecodeError as e:
    print(f"Invalid JSON: {e}")
# Valid JSON with 2 keys

Command-Line CSV to JSON Conversion

Quick conversions from the terminal – no script file needed. Python's -c flag runs inline code, and you can pipe the result through python3 -m json.tool for pretty-printing.

bash – one-liner CSV to JSON
python3 -c "
import csv, json, sys
rows = list(csv.DictReader(sys.stdin))
json.dump(rows, sys.stdout, indent=2)
" < inventory.csv > inventory.json
bash – pipe CSV file and format with json.tool
python3 -c "import csv,json,sys; print(json.dumps(list(csv.DictReader(sys.stdin))))" < data.csv | python3 -m json.tool
bash – convert and validate with jq
python3 -c "import csv,json,sys; json.dump(list(csv.DictReader(sys.stdin)),sys.stdout)" < report.csv | jq .
Note: python3 -m json.tool is the built-in JSON formatter. It reads JSON from stdin, validates it, and prints it with 4-space indentation. Useful for verifying that your CSV-to-JSON conversion produced valid output. If you prefer 2-space indent or need filtering, use jq instead.

High-Performance Alternative – orjson

The built-in json module works fine for most CSV files. But if you are processing datasets with tens of thousands of rows in a loop, or your API needs to serialize CSV-derived data on every request, orjson is 5–10x faster. It is written in Rust, returns bytes instead of str, and natively serializes datetime, UUID, and numpy arrays without a custom default= function.

bash – install orjson
pip install orjson
Python 3.10+ – CSV to JSON with orjson
import csv
import orjson

with open("telemetry_events.csv", "r", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# orjson.dumps() returns bytes, not str
json_bytes = orjson.dumps(rows, option=orjson.OPT_INDENT_2)

with open("telemetry_events.json", "wb") as f:  # note: "wb" for bytes
    f.write(json_bytes)

print(f"Wrote {len(rows)} events ({len(json_bytes)} bytes)")

The API is slightly different: orjson.dumps() returns bytes and uses option= flags instead of keyword arguments. Open files in binary write mode ("wb") when writing orjson output. If you need a string, call .decode("utf-8") on the result.

Terminal Output with Syntax Highlighting – rich

Debugging CSV-to-JSON conversions in the terminal gets easier with colored output. The rich library renders JSON with syntax highlighting – keys, strings, numbers, and booleans each get their own color.

bash – install rich
pip install rich
Python 3.10+ – rich JSON output
import csv
import json
from rich.console import Console
from rich.syntax import Syntax

console = Console()

with open("deployment_log.csv", "r", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

json_output = json.dumps(rows[:3], indent=2, ensure_ascii=False)
syntax = Syntax(json_output, "json", theme="monokai", line_numbers=True)
console.print(syntax)
Warning: rich adds ANSI escape codes to the output. Do not write rich-formatted output to a file or an API response – it will contain invisible control characters. Use rich only for terminal display.

Working with Large CSV Files

Loading a 500 MB CSV file with list(csv.DictReader(f)) allocates the entire dataset in memory before json.dump() even starts writing. For files larger than 50–100 MB, switch to a streaming approach or write NDJSON (newline-delimited JSON) – one JSON object per line.

NDJSON – One JSON Object Per Line

Python 3.10+ – streaming CSV to NDJSON
import csv
import json

def csv_to_ndjson(csv_path: str, ndjson_path: str) -> int:
    """Convert CSV to NDJSON, processing one row at a time."""
    count = 0
    with open(csv_path, "r", encoding="utf-8") as infile, \
         open(ndjson_path, "w", encoding="utf-8") as outfile:
        for row in csv.DictReader(infile):
            outfile.write(json.dumps(row, ensure_ascii=False) + "\n")
            count += 1
    return count

rows_written = csv_to_ndjson("access_log.csv", "access_log.ndjson")
print(f"Wrote {rows_written} lines to access_log.ndjson")
# Each line is a standalone JSON object:
# {"timestamp":"2026-03-15T09:12:00Z","method":"GET","path":"/api/v2/orders","status":"200"}
# {"timestamp":"2026-03-15T09:12:01Z","method":"POST","path":"/api/v2/payments","status":"201"}

Streaming with ijson for Large JSON Input

Python 3.10+ – ijson for reading large JSON
import ijson  # pip install ijson

def count_high_value_orders(json_path: str, threshold: float) -> int:
    """Count orders above a threshold without loading the full file."""
    count = 0
    with open(json_path, "rb") as f:
        for item in ijson.items(f, "item"):
            if float(item.get("total", 0)) > threshold:
                count += 1
    return count

# Process a 2 GB JSON file with constant memory usage
high_value = count_high_value_orders("all_orders.json", 500.0)
print(f"Found {high_value} orders above $500")
Note: Switch to NDJSON or streaming when the CSV exceeds 50–100 MB. ijson is for reading large JSON files back – for the writing side, the NDJSON pattern above keeps memory usage constant regardless of file size.

Common Mistakes

❌ Using json.dumps() then writing to a file separately

Problem: json.dumps() returns a string. Writing it with f.write() works but creates an unnecessary intermediate string in memory – wasteful for large datasets.

Fix: Use json.dump(data, f) to write directly to the file object. It streams the output without building the full string first.

Before · Python
After · Python
json_string = json.dumps(rows, indent=2)
with open("output.json", "w") as f:
    f.write(json_string)  # unnecessary intermediate string
with open("output.json", "w", encoding="utf-8") as f:
    json.dump(rows, f, indent=2, ensure_ascii=False)  # direct write
❌ Forgetting to cast CSV string values to numbers

Problem: csv.DictReader returns all values as strings. JSON output contains "quantity": "5" instead of "quantity": 5, which breaks typed API consumers.

Fix: Cast numeric columns explicitly with int() or float() before serializing.

Before · Python
After · Python
rows = list(csv.DictReader(f))
json.dumps(rows)
# [{"port": "8080", "workers": "4"}]  ← strings, not numbers
rows = list(csv.DictReader(f))
for row in rows:
    row["port"] = int(row["port"])
    row["workers"] = int(row["workers"])
json.dumps(rows)
# [{"port": 8080, "workers": 4}]  ← proper integers
❌ Omitting encoding='utf-8' on file open

Problem: On Windows, the default encoding is cp1252. Non-ASCII characters (accented names, CJK text) silently get garbled or raise UnicodeDecodeError.

Fix: Always pass encoding='utf-8' to open() for both CSV reading and JSON writing.

Before · Python
After · Python
with open("locations.csv", "r") as f:  # uses system default encoding
    rows = list(csv.DictReader(f))
with open("locations.csv", "r", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))
❌ Using str() or repr() instead of json.dumps()

Problem: str(my_dict) produces Python syntax (single quotes, True, None) which is not valid JSON. APIs and JSON parsers reject it.

Fix: Always use json.dumps() to produce valid JSON. It converts True to true, None to null, and uses double quotes.

Before · Python
After · Python
output = str({"active": True, "note": None})
# "{'active': True, 'note': None}"  ← NOT valid JSON
output = json.dumps({"active": True, "note": None})
# '{"active": true, "note": null}'  ← valid JSON

json.dumps() vs Alternatives – Quick Comparison

  • json.dumps() – returns str; valid JSON; custom types via default=; baseline speed; stdlib, no install
  • json.dump() – writes to a file object; valid JSON; custom types via default=; baseline speed; stdlib, no install
  • csv.DictReader + json – str or file; valid JSON; custom types via default=; baseline speed; stdlib, no install
  • pandas to_json() – str or file; valid JSON; native datetime support; ~2x faster for large data; pip install pandas
  • orjson.dumps() – returns bytes; valid JSON; native datetime/UUID support; 5–10x faster; pip install orjson
  • dataclasses.asdict() + json – str; valid JSON; custom types via default=; baseline speed; stdlib, no install
  • polars write_json() – str or file; valid JSON; native datetime support; ~3x faster for large data; pip install polars

For most CSV-to-JSON conversions, the standard library csv + json combination is the right choice: zero dependencies, ships with Python, works everywhere. Reach for orjson when profiling shows serialization is a bottleneck – the speed difference is real at scale. Use pandas when you also need data cleaning, filtering, or aggregation before converting to JSON. If you just need a quick conversion without writing code, the online CSV to JSON converter handles it instantly.

Frequently Asked Questions

What is the difference between json.dump() and json.dumps() in Python?

json.dump(obj, file) writes the JSON output directly to a file-like object (anything with a .write() method). json.dumps(obj) returns a JSON-formatted string. Use json.dump() when writing to a file, json.dumps() when you need the JSON as a Python string for logging, embedding in a payload, or sending through a socket. Both accept the same keyword arguments (indent, sort_keys, ensure_ascii, default).
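A quick sanity check of the equivalence – json.dump() into an in-memory io.StringIO buffer produces exactly the string json.dumps() returns:

```python
import io
import json

data = {"host": "api.internal", "port": 8443}

# json.dump() writes to anything with a .write() method
buffer = io.StringIO()
json.dump(data, buffer)

print(buffer.getvalue() == json.dumps(data))  # True
```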

How do I convert a Python dictionary to a JSON string?

Call json.dumps(your_dict). The return value is a str containing valid JSON. Add indent=2 for readable output. If your dict contains non-ASCII values, pass ensure_ascii=False to preserve characters like accented letters or CJK text.

Python 3.10+
import json

server_config = {"host": "api.internal", "port": 8443, "debug": False}
json_string = json.dumps(server_config, indent=2)
print(json_string)
# {
#   "host": "api.internal",
#   "port": 8443,
#   "debug": false
# }

How do I save a Python list of dicts as a JSON file?

Open a file in write mode with UTF-8 encoding, then call json.dump(your_list, f, indent=2, ensure_ascii=False). Always use json.dump() (not json.dumps()) for file output – it writes directly to the file handle without creating an intermediate string in memory.

Python 3.10+
import json

records = [
    {"order_id": "ORD-4821", "total": 129.99, "currency": "USD"},
    {"order_id": "ORD-4822", "total": 89.50, "currency": "EUR"},
]

with open("orders.json", "w", encoding="utf-8") as f:
    json.dump(records, f, indent=2, ensure_ascii=False)

Why does json.dumps() turn True into true and None into null?

Python booleans (True, False) and None are not valid JSON tokens. The JSON spec uses lowercase true, false, and null. json.dumps() handles this mapping automatically – True becomes true, False becomes false, None becomes null. You do not need to convert these manually. Going the other direction, json.loads() maps them back to Python types.
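A round trip makes the mapping visible:

```python
import json

flags = {"active": True, "archived": False, "deleted_at": None}

encoded = json.dumps(flags)
print(encoded)
# {"active": true, "archived": false, "deleted_at": null}

# json.loads() maps the JSON tokens back to Python values
decoded = json.loads(encoded)
print(decoded == flags)  # True
```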

How do I handle datetime objects when converting CSV data to JSON?

Pass a default= function to json.dumps() that converts datetime objects to ISO 8601 strings. The default function is called for any object that json cannot serialize natively. Return obj.isoformat() for datetime instances and raise TypeError for anything else.

Python 3.10+
import json
from datetime import datetime

def json_default(obj):
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Not serializable: {type(obj)}")

event = {"action": "login", "timestamp": datetime(2026, 3, 15, 9, 30, 0)}
print(json.dumps(event, default=json_default))
# {"action": "login", "timestamp": "2026-03-15T09:30:00"}

Can I convert CSV to JSON without pandas?

Yes. The Python standard library has everything you need. Use csv.DictReader to read each row as a dictionary, collect the rows into a list, and serialize with json.dump() or json.dumps(). No third-party libraries required. Pandas is only worth adding if you also need data cleaning, type inference, or are already using it elsewhere in the project.

Python 3.10+
import csv
import json

with open("inventory.csv", "r", encoding="utf-8") as csv_file:
    rows = list(csv.DictReader(csv_file))

with open("inventory.json", "w", encoding="utf-8") as json_file:
    json.dump(rows, json_file, indent=2, ensure_ascii=False)

For a one-click alternative without writing any Python, try the CSV to JSON converter – paste your CSV data and get formatted JSON output immediately.

Maria Santos · Backend Developer

Maria is a backend developer specialising in Python and API integration. She has broad experience with data pipelines, serialisation formats, and building reliable server-side services. She is an active member of the Python community and enjoys writing practical, example-driven guides that help developers solve real problems without unnecessary theory.

Priya Sharma · Technical Reviewer

Priya is a data scientist and machine learning engineer who has worked across the full Python data stack β€” from raw data ingestion and cleaning to model deployment and monitoring. She is passionate about reproducible research, Jupyter-based workflows, and the practical engineering side of ML. She writes about NumPy, Pandas, data serialisation, and the Python patterns that make data pipelines reliable at scale.