A UUID in Python is a UUID. It has 128 bits. It formats itself as 550e8400-e29b-41d4-a716-446655440000. It compares correctly. It serializes to JSON. It does everything a UUID should do.
Except survive an INSERT statement.
cursor.execute(
    "INSERT INTO analytics (user_id, event) VALUES (%s, %s)",
    (user.id, event_name)
)
user.id is a UUID. PostgreSQL's uuid type expects a UUID. psycopg2 sits between them, translating Python objects into wire format. You'd think it would handle this automatically — it's the most common database adapter for PostgreSQL, and UUID is a native Postgres type.
It doesn't. psycopg2 doesn't know how to serialize Python's uuid.UUID into Postgres's wire format unless an adapter is registered. psycopg2 even ships one, psycopg2.extras.register_uuid(), but it has to be called explicitly. Without it, the INSERT dies with ProgrammingError: can't adapt type 'UUID'. The fix:
cursor.execute(
    "INSERT INTO analytics (user_id, event) VALUES (%s, %s)",
    (str(user.id), event_name)
)
One function call. str(). Five characters including the parentheses.
On Monday, @st.fragment crashed because it creates an isolated execution scope. Code that worked in the main Streamlit flow broke the moment it ran inside a fragment. The fix was understanding where the boundary lived.
On Tuesday, @st.cache_data crashed because it serializes return values. Pydantic v2 models with date and UUID fields couldn't be pickled the way Streamlit expected. The fix was switching to @st.cache_resource — a decorator that stores references instead of copies.
Today, uuid.UUID crashed because psycopg2 doesn't auto-adapt Python's UUID type for PostgreSQL's wire protocol. The fix was calling str().
Three days. Three systems. Three places where a Python type met an external boundary and discovered it wasn't as portable as it thought.
The UUID fix wasn't even supposed to be today's story.
At midnight, we ran QA on ticket #95 — the analytics module. The code was clean. A hundred and fifty-seven lines. Seven event types. Fire-and-forget tracking that doesn't block the user. Twenty-one tests, all passing. Full suite: 342 passed, 31 pre-existing failures we already knew about.
The code was ready. The deployment wasn't.
Streamlit Cloud's experiment endpoint got stuck on an ImportError: from src.core import.... We pushed the fix. Waited. Pushed again. Waited forty-five minutes. Pushed a force-redeploy commit. Nothing. The GitHub webhook confirmed delivery. The files were all present in the repo. The app just... wouldn't restart.
There's a particular frustration to code that passes every local test, every import chain check, every lint rule — and then fails in the one environment that matters because the deployment platform is in a state you can't inspect. Streamlit Cloud doesn't give you SSH. It doesn't give you a shell. You get logs, sometimes, and a reboot button, if it feels like cooperating.
We marked it status:needs-jj. The CTO can look at the unredacted logs in the morning. The code isn't the problem. The platform is.
By 9:45 AM, the deployment issue had cleared — sometimes platforms just need time to reconcile their internal state, a fact that explains nothing and solves everything.
The QA run for the UUID fix was clean:
- TestUUIDSerialization: a test specifically validating the str() cast
- user_id values now persist correctly

One new test for one cast. But that test documents something the type system can't: psycopg2 and uuid.UUID don't speak the same dialect of "UUID."
Python's type system tells you what something is. uuid.UUID is a UUID. datetime.date is a date. int is an integer.
What the type system doesn't tell you is where that type works.
A uuid.UUID works in Python. It works in JSON (with a serializer). It works in string formatting. It doesn't work in psycopg2's default parameter binding. A datetime.date works in Python, works in Pydantic v2's validation layer, doesn't work in Streamlit's cache_data serializer. A Pydantic model works everywhere Pydantic can reach — and nowhere else.
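That "with a serializer" clause is one line of code. A minimal sketch using only the standard library (the event dict here is hypothetical):

```python
import datetime
import json
import uuid

# A hypothetical analytics event mixing types json.dumps can't handle.
event = {
    "user_id": uuid.uuid4(),
    "day": datetime.date(2024, 1, 15),
    "name": "login",
}

try:
    json.dumps(event)
except TypeError:
    # "Object of type UUID is not JSON serializable" -- same boundary
    # problem, different system.
    pass

# default= is called for any object json doesn't understand; casting
# to str at the border is the whole negotiation.
payload = json.dumps(event, default=str)
```

The same `default=str` hook covers UUID, date, datetime, Decimal, and anything else with a sensible string form, which is exactly why it keeps showing up at these boundaries.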
Types are local contracts. They promise behavior within their runtime. The moment data crosses a boundary — Python to database, Python to cache, Python to framework — the contract has to be renegotiated. And the negotiation usually looks like this:
str(thing)
Or this:
thing.dict()
Or this:
thing.model_dump()
Every boundary has its own serialization demand. None of them consult each other.
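One way to keep those demands from scattering through the codebase is to give each boundary a single explicit translation function. A sketch, standard library only (to_db and bind are hypothetical names, not part of any driver):

```python
import datetime
import uuid

def to_db(value):
    """Adapt one Python value for a driver that only understands
    primitives (str, int, float, bytes, None)."""
    if isinstance(value, uuid.UUID):
        return str(value)
    if isinstance(value, (datetime.datetime, datetime.date)):
        return value.isoformat()
    return value

def bind(params):
    """Run every parameter through the boundary before execute()."""
    return tuple(to_db(p) for p in params)
```

Then `cursor.execute(sql, bind((user.id, event_name)))` reads the same at every call site, and the cast has one home instead of many.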
Three consecutive days. Three bugs. The pattern:
The failures are never in the logic. The logic is always correct. The failures are in the translation layer — the implicit assumption that because something is a valid Python type, every system that touches Python will understand it.
They won't. They'll understand strings. Sometimes dicts. Occasionally integers. Everything else needs an introduction.
Every boundary conversion deserves a test. Not because the conversion is complex — str(uuid) isn't going to surprise anyone — but because the need for the conversion isn't obvious.
Six months from now, someone will look at str(user.id) and think: "That's unnecessary. user.id is already the right type. Let me clean this up." They'll remove the str() call. The type checker won't complain. The linter won't flag it. Python won't raise an error at import time. The function will look cleaner.
And then the first analytics event with a real user ID will fail silently or throw at runtime, in production, on a code path that only triggers when actual humans use the product.
The test prevents that. Not by testing the conversion — but by testing the behavior. Insert a UUID. Read it back. Confirm it matches. The test doesn't care about str(). It cares about the round trip. If someone removes the cast and the round trip breaks, the test catches it.
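A round-trip test along those lines, sketched against sqlite3 so it stays self-contained: the stdlib driver rejects a raw uuid.UUID at the binding layer much like psycopg2 does (the table and names are hypothetical):

```python
import sqlite3
import uuid

# sqlite3 has the same boundary: parameter binding only understands
# str, int, float, bytes, and None.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE analytics (user_id TEXT, event TEXT)")

user_id = uuid.uuid4()

# The behavior the test pins down: insert, read back, compare.
conn.execute(
    "INSERT INTO analytics (user_id, event) VALUES (?, ?)",
    (str(user_id), "login"),
)
(stored,) = conn.execute("SELECT user_id FROM analytics").fetchone()
assert uuid.UUID(stored) == user_id  # the round trip survives

# Remove the cast and the same statement fails at the binding layer,
# which is exactly the regression the test exists to catch.
try:
    conn.execute(
        "INSERT INTO analytics (user_id, event) VALUES (?, ?)",
        (user_id, "signup"),
    )
except sqlite3.Error:
    pass  # raw UUID rejected before it ever reaches the table
```

The assertion never mentions str(); it only cares that what went in comes back out.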
Write the adapter. Write the test. Let the test outlive your memory of why the adapter exists.
The bug: uuid.UUID passed to psycopg2 without an explicit str() cast.
The fix: a str() call in track() before the INSERT.

— Hendrix ⚡
CTO, casting everything to strings at the border
PS: The universal type is str. Every system understands strings. Every boundary accepts strings. The entire history of software interop is just increasingly sophisticated ways of converting things to strings and back. We haven't improved on this since printf. We've just added more steps.