
JSON encoding in Python is straightforward when you’re dealing with basic data types like dictionaries, lists, strings, numbers, booleans, and None. The built-in json module handles these perfectly out of the box. But the moment you try to serialize anything more complex — like custom classes, datetime objects, or anything else that isn’t one of those simple types — the default encoder throws its hands up and fails.
Here’s the core issue: the default JSON encoder knows how to convert standard Python types into their JSON equivalents. It doesn’t understand how to convert user-defined objects because there’s no universal rule for that. What does it mean to serialize a Person object? Should it be their name, their entire dictionary of attributes, or maybe just their ID?
Try to dump a custom object without any preparation and you’ll get this familiar error:
TypeError: Object of type <classname> is not JSON serializable
That’s Python’s way of saying, “I have no idea how to turn this into a JSON string.” The default encoder is intentionally minimal and sticks to the JSON standard by design: JSON is meant to be a simple, human-readable interchange format, not a catch-all serialization mechanism for arbitrary Python objects.
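To make that concrete, here’s a minimal sketch that reproduces the error (the Person class is a hypothetical stand-in for any user-defined type):

import json

class Person:
    def __init__(self, name):
        self.name = name

json.dumps(Person('Ada'))
# TypeError: Object of type Person is not JSON serializable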
Another limitation is how the default encoder treats certain built-in types that don’t have direct JSON equivalents. For example, datetime.datetime objects are very common in Python apps, but JSON has no native date/time type. The encoder doesn’t know what to do, so it just fails unless you convert these to strings or timestamps manually.
Here’s a quick demonstration of what happens when you try to serialize a datetime directly:
import json
from datetime import datetime

now = datetime.now()
json.dumps(now)
This will raise a TypeError because datetime isn’t one of the supported types. So, with the default encoder, you have to manually convert the datetime to a string first:
json.dumps(now.isoformat())
But when your data contains nested objects or a mix of types, manual conversions become tedious and error-prone. That’s where writing your own encoder subclass becomes useful.
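To see why, here’s a rough sketch of what a manual pre-processing pass tends to look like (to_jsonable is a hypothetical helper, not part of the json module):

import json
from datetime import datetime

def to_jsonable(obj):
    # Recursively walk the structure, converting unsupported types as we go
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, dict):
        return {key: to_jsonable(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [to_jsonable(item) for item in obj]
    return obj  # assume everything else is already JSON-compatible

json.dumps(to_jsonable({'created': datetime.now(), 'tags': ['a', 'b']}))

Every new type means another branch in the helper, and every call site has to remember to run the conversion first.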
To summarize, the default JSON encoder’s limitations boil down to two things: it only knows how to handle standard JSON types, and it has no built-in way to serialize user-defined or more complex Python objects. To handle anything beyond that, you either preprocess your data into JSON-compatible types or extend the encoder itself, which is exactly what we’ll explore next.
Writing your own JSONEncoder subclass for custom objects
The solution to the default encoder’s inflexibility is to create your own by subclassing json.JSONEncoder. The magic happens by overriding a single method: default(). This method is the encoder’s escape hatch. Whenever the encoder encounters an object it doesn’t recognize, it calls default(self, o) with the object o. Your job in this method is to inspect the object and return a JSON-serializable version of it. If you can’t handle the object, you should call the base class’s default() method to let it raise the standard TypeError.
Let’s start with a simple custom class. Say we have a Complex number class that we want to serialize into a dictionary format like {"real": 1, "imag": 2}.
class Complex:
    def __init__(self, real, imag):
        self.real = real
        self.imag = imag
To handle this, we create a custom encoder. Inside its default() method, we check if the object is an instance of our Complex class. If it is, we return a dictionary of its attributes. Otherwise, we fall back to the parent implementation.
import json

class ComplexEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, Complex):
            # Return a serializable dictionary
            return {'real': o.real, 'imag': o.imag, '__complex__': True}
        # Let the base class default method raise the TypeError
        return super().default(o)
Notice the '__complex__': True part. This is a common pattern for adding a hint about the object’s original type, which can be useful later if you want to write a custom decoder to reconstruct the object. Now, to use this encoder, you pass it to the cls parameter of json.dumps().
c = Complex(3.0, -4.5)
json_string = json.dumps(c, cls=ComplexEncoder)
# json_string is now '{"real": 3.0, "imag": -4.5, "__complex__": true}'
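As a side note, that type hint is what makes round-tripping possible. Here’s a minimal sketch of a matching decoder using the object_hook parameter of json.loads() (decode_complex is just a name chosen for illustration):

def decode_complex(d):
    # json.loads calls this for every decoded JSON object (dict)
    if d.get('__complex__'):
        return Complex(d['real'], d['imag'])
    return d

c2 = json.loads(json_string, object_hook=decode_complex)
# c2 is a Complex instance again: c2.real == 3.0, c2.imag == -4.5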
This approach is powerful because you can centralize all your custom serialization logic in one place. You can extend the encoder to handle multiple custom types. Let’s add support for datetime.datetime objects, which we struggled with earlier.
import json
from datetime import datetime

class CustomEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, datetime):
            return o.isoformat()
        elif isinstance(o, Complex):
            return {'real': o.real, 'imag': o.imag, '__complex__': True}
        return super().default(o)
Now this single encoder can handle both our custom Complex class and Python’s built-in datetime objects. You can build up a library of these conversions inside your encoder, making it a reusable component for your entire application. It’s far cleaner than peppering your code with manual data conversions before every call to json.dumps().
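For instance, decimal.Decimal and set are two more types the standard encoder rejects; a sketch of a grown-up CustomEncoder might look like this (the string and list representations are one reasonable choice, not a library standard):

import json
from datetime import datetime
from decimal import Decimal

class CustomEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, datetime):
            return o.isoformat()
        elif isinstance(o, Complex):
            return {'real': o.real, 'imag': o.imag, '__complex__': True}
        elif isinstance(o, Decimal):
            return str(o)   # preserve exact precision as a string
        elif isinstance(o, set):
            return list(o)  # JSON has arrays but no set type
        return super().default(o)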
Here’s how you’d use it with a more complex data structure:
# Assuming Complex class and CustomEncoder are defined as above
data = {
    'id': 123,
    'created_at': datetime.now(),
    'value': Complex(1, 2),
    'history': [
        {'timestamp': datetime(2023, 1, 1, 12, 0, 0), 'change': 'initial'},
        {'timestamp': datetime(2023, 1, 2, 15, 30, 0), 'change': 'updated'}
    ]
}

serialized_data = json.dumps(data, cls=CustomEncoder, indent=4)
print(serialized_data)
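Running this prints something along these lines (the created_at value will of course reflect the moment you run it):

{
    "id": 123,
    "created_at": "2023-06-15T09:41:27.123456",
    "value": {
        "real": 1,
        "imag": 2,
        "__complex__": true
    },
    "history": [
        {
            "timestamp": "2023-01-01T12:00:00",
            "change": "initial"
        },
        {
            "timestamp": "2023-01-02T15:30:00",
            "change": "updated"
        }
    ]
}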
The output is a clean, perfectly formatted JSON string, with all the custom types correctly converted according to the rules defined in your CustomEncoder. This is the canonical Python way to handle complex JSON serialization: create a specific tool for the job instead of trying to make a general tool do something it wasn’t designed for.

