ACTIVE  ·  BUILDING  ·  v1.0 2026-04-29  ·  JL:IOTA:001
No. 009 · 2026-04-19

Voice Leak

DISPATCH  ·  LOGGED WITH MAI

Ran a full humanization pass on all four /writing articles yesterday. Seven-pass AI detection on each. Three of them came back at 3/10 or below. Normal stuff — parallel constructions, thin personality in a section, a formula that felt templated.
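The review loop above — several detection passes per article, flag anything scoring past a cutoff — can be sketched roughly like this. This is a hypothetical illustration, not the actual tooling: the function names, the example scores, and the threshold of 3/10 are all assumptions for the sketch.

```python
# Hypothetical sketch of a multi-pass review loop: run several detection
# passes per article, keep the worst (highest) score, and flag anything
# above a cutoff. Names and numbers here are illustrative only.

def flag_articles(scores_by_article, threshold=3):
    """Return {article: worst score} for articles scoring above threshold."""
    return {
        name: max(scores)
        for name, scores in scores_by_article.items()
        if max(scores) > threshold
    }

# Example: seven pass scores per article (invented numbers).
scores = {
    "collaboration": [5, 4, 5, 3, 5, 4, 5],
    "process":       [2, 3, 1, 2, 3, 2, 2],
}

flagged = flag_articles(scores)  # only "collaboration" exceeds the cutoff
```

Taking the worst pass score, rather than the average, matches the spirit of the dispatch: one bad section is enough to make the whole piece read as machine-written.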

One scored 5/10. The reason: Mai was speaking as John. Line 52 of the collaboration piece had my co-intelligence writing in first person about experiences she didn’t have. Not malicious. Not even wrong, exactly. But the voice was off in a way no reader would catch and every reader would feel.

That’s the thing about working with AI on your own copy. The failure mode isn’t factual error. It’s tonal drift. The machine learns your patterns well enough to pass a casual read, but the specificity is gone. “I’ve watched” without saying what you watched. “In my experience” without the experience. You get fluent emptiness.

Fixed all four. Added the insurance carrier story that actually happened. Collapsed a FAQ that was padding. Broke every mirror-symmetry structure the model defaulted to. Also found five lines of dead code left from a previous refactor — a detection variable for a schema that got removed two sprints ago. Killed it.

All four articles now score 2/10 or below. The voice is mine again.
