How to make AI write like you (not just what you asked for)
By MyDamnVoice in guides, ai-writing
There's a gap that most people don't notice until it's pointed out. You can give ChatGPT or Claude a perfect prompt. Clear instructions, good context, the right tone word. And the output will be exactly what you asked for. It'll also sound nothing like you.
Getting AI to do the right task is a prompting problem. Getting AI to sound like you is a profiling problem. These are different problems with different solutions.
Why "write in a casual tone" doesn't work
This is the most common attempt at voice control. People add "casual tone" or "professional but approachable" or "write like a 30-year-old tech worker" to their prompts. It feels like it should work. It almost never does.
The reason: "casual" is not a style. It's a category containing thousands of styles. Your casual is different from my casual. Your casual has specific sentence lengths, specific word preferences, specific paragraph structures, specific ways of opening and closing points. When you say "casual," the model picks its own version of casual: the statistical average of all the casual text in its training data. That's why AI writing sounds the same no matter who prompts it.
Same problem with "professional but approachable." That phrase describes a spectrum wide enough to include a McKinsey deck and a Substack newsletter. The model can't read your mind about where on that spectrum you sit.
What actually needs to be specified
Voice lives in the details that most people never think about. Here's what separates your writing from everyone else's:
Sentence length patterns. Not just average length, but the distribution. Do you write in bursts of short sentences followed by a long one? Do you keep things consistently medium-length? Do you write long, complex sentences with multiple clauses? This pattern is as distinctive as a fingerprint, and it's measurable (see the sketch after this list).
Word choice habits. You have words you use constantly and words you never touch. Maybe you say "but" where others say "however." Maybe you write "thing" where others write "element" or "component." These micro-preferences accumulate into a recognizable voice.
Opening and closing patterns. How do you start a paragraph? With a claim? A question? A fragment? How do you end a section? These structural habits are consistent across your writing and almost entirely unconscious.
What you avoid. This might matter more than what you include. The words you never use, the structures you never build. Your avoidance patterns define the negative space of your voice.
Paragraph rhythm. Short-short-long. Long-short. All medium. The cadence of your paragraph lengths creates a reading experience that's distinctly yours.
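None of this is exotic to measure. As a rough illustration (not how any particular tool does it), here's a short Python sketch that pulls a few of these signals out of a writing sample. The function name, the naive sentence splitter, and the 8-word cutoff for "short" sentences are all arbitrary choices made for the example.

```python
import re
from collections import Counter
from statistics import mean, stdev

def profile(text: str) -> dict:
    """Rough sketch: measure a few voice signals from a writing sample."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    # Naive sentence split on ., !, ? -- good enough for a rough profile.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())

    sent_lengths = [len(s.split()) for s in sentences]
    return {
        "sentence_len_mean": round(mean(sent_lengths), 1),
        "sentence_len_stdev": round(stdev(sent_lengths), 1) if len(sent_lengths) > 1 else 0.0,
        "paragraph_rhythm": [len(p.split()) for p in paragraphs],  # cadence of paragraph sizes
        "favorite_words": Counter(words).most_common(15),          # word-choice habits
        "short_sentence_share": round(
            sum(1 for n in sent_lengths if n <= 8) / len(sent_lengths), 2
        ),  # how often you write in bursts of short sentences
    }

# Example: profile(open("my_samples.txt").read())
```

Even something this crude surfaces patterns you'd never catch by rereading your own posts, which is exactly the gap the next two sections are about.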
The manual approach
You can try to build a voice profile by hand. Sit down with ten pieces of your writing. Read them carefully. Try to identify your patterns. Write them down as rules.
This works, sort of. You'll catch the obvious stuff. If you tend to write short sentences, you'll probably notice. If you have a favorite transition word, you might spot it.
But you'll miss a lot. Research on self-reported writing style shows about a 40% accuracy rate. People think they write concisely when they average 22-word sentences. People think they vary their sentence length when their standard deviation is tiny. People think they avoid jargon while using it constantly.
Self-knowledge about writing is unreliable because style operates below conscious awareness. You're thinking about ideas while your fingers handle the how. Habits are hard to observe from the inside.
The measured approach
The alternative is to let software do the measuring. Feed in writing samples. Compute the distributions. Count the word frequencies. Map the structural patterns.
This is what MyDamnVoice does. You provide writing samples and it runs the analysis: sentence length mean and variance, vocabulary profiles, structural patterns, word preferences and avoidances. Everything gets measured, not estimated.
The output is a voice profile formatted for whichever AI tool you use. For ChatGPT, it generates Custom Instructions. For Claude, it generates a Custom Style. The format matches what each model expects, with the specific constraints that actually change output behavior.
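To make that concrete, here's an illustrative (and entirely made up) example of rendering the measurements from the earlier sketch into a prompt-ready block. MyDamnVoice's real output and the exact fields each tool expects will differ, so treat this as a shape, not a spec.

```python
def render_style_block(p: dict) -> str:
    """Illustrative only: turn measured numbers into prompt-ready constraints."""
    favorites = ", ".join(w for w, _ in p["favorite_words"][:5])
    return (
        f"Write with an average sentence length of about {p['sentence_len_mean']} words "
        f"(standard deviation around {p['sentence_len_stdev']}).\n"
        f"Roughly {int(p['short_sentence_share'] * 100)}% of sentences should be 8 words or fewer.\n"
        f"Prefer these words where natural: {favorites}.\n"
        "Avoid: 'leverage', 'utilize', 'delve'.\n"  # placeholder avoidance list
    )
```

Paste something like this into ChatGPT's Custom Instructions or a Claude style and the constraints become numbers the model can aim at, instead of an adjective it has to interpret.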
Why measured data wins
If you wanted to reproduce someone's singing voice, would you ask them to describe it ("I'm a tenor, kind of raspy, with good range") or would you record them singing and analyze the audio? The description gives you a rough category. The recording gives you the actual voice.
Writing works the same way. Your description of your style gives AI a rough category. Measurements of your actual writing give it specific targets that produce recognizable output. Side-by-side, measured profiles consistently produce output closer to the writer's real voice. Not by a little. By a lot.
The practical takeaway
If you're using AI to write and you want it to sound like you, you have two options. Write style instructions based on what you think your voice sounds like (fast, free, roughly 40% accurate). Or measure your voice from actual writing samples (takes five minutes, dramatically more accurate).
I built MyDamnVoice because I was tired of AI that did exactly what I asked in a voice that wasn't mine. The task was right. The voice was wrong. Fixing the voice required data, not intuition.
Try it
Paste in some writing you're proud of. Get a voice profile built from measurements, not guesses. Five minutes and you'll hear the difference in every prompt you run.