The science behind MyDamnVoice
Your voice has a shape.
AI is sanding it smooth.
Everyone writes differently. The differences show up in sentence structure, word choice, punctuation habits. Sixty years of research backs this up. And right now, AI is erasing all of it.
The problem
The AI accent
Open any email thread at work. Half the messages start the same way now. Same hedge phrases, same filler, same polite opening, same polite close. People didn't change how they write. They just stopped writing.
Linguists call it convergence. When everyone uses the same tool, the output collapses into one style. Every AI model has its own default voice. And when you paste their output into your Slack, you start sounding like them too.
AI can write. The question is whether your emails still sound like they came from you.
You already know the tells. Every message opens with "I'd be happy to help." Every paragraph closes with "please don't hesitate to reach out." The word "delve" shows up in places no human would put it. That's the AI accent.
The mechanism
How LLMs flatten your voice
A language model predicts the next most likely token. That's literally all it does. And "most likely" means "most common in the training data." So the rare word gets replaced by the common one. Your three-word sentence gets padded to twelve. That weird punctuation habit you have? Gone.
Think of it like a low-pass filter on an audio signal. Your writing has peaks and valleys. Short punchy fragments, long run-on thoughts, odd word choices that are uniquely yours. The model clips the peaks and fills the valleys. What comes out is smoother, sure. But also flatter. The part that sounded like you is gone.
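Here's a toy version of that flattening. The distribution below is invented, and real models sample rather than always taking the top choice, but the pull is the same: the generic continuation beats the one that's actually yours.

```python
# Toy illustration only: a made-up next-phrase distribution, not a real model.
next_phrase_probs = {
    "reach out": 0.46,    # generic, very common in training data
    "follow up": 0.31,
    "circle back": 0.18,
    "ping me": 0.05,      # the phrase you would actually use
}

def most_likely(probs: dict[str, float]) -> str:
    # Picking the single most probable continuation always returns the
    # most common phrase, so the rare-but-personal one never surfaces.
    return max(probs, key=probs.get)

print(most_likely(next_phrase_probs))  # "reach out", every single time
```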
Your sentence rhythm is irregular and personal. LLMs compress it toward the mean. MyDamnVoice encodes your patterns into the system prompt so the model reproduces your rhythm, not its own.
Researchers measured this in academic writing after AI tools launched. Papers started converging. Same hedging patterns, same transitions. Individual variation dropped. The writing got cleaner but nobody could tell whose it was anymore.
The metric for this is called burstiness: the coefficient of variation of sentence lengths. Human writing is bursty. You write a three-word sentence, then a forty-word sentence. The rhythm is irregular and personal. AI writing has low burstiness. Sentences come out at roughly the same length, with the same structure. When you use AI to draft your emails, your burstiness drops too.
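You can compute it yourself. A minimal sketch, using the textbook definition rather than MyDamnVoice's actual measurement code:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.
    Higher means a more irregular, more human rhythm."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

human = ("Ship it. I know the header still overlaps the nav on mobile, and I "
         "know design will have opinions, but we have been sitting on this for "
         "three weeks. Ship it today.")
ai = ("I wanted to follow up on the release. The header has a small issue on "
      "mobile. We can address it in a future update. Please let me know if you "
      "have any questions.")

print(round(burstiness(human), 2))  # high: 2-word, 27-word, 3-word sentences
print(round(burstiness(ai), 2))     # low: every sentence is 8-9 words long
```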
What MyDamnVoice measures
When you paste writing samples, MyDamnVoice runs them through synthesizer functions that compute: formality (contraction rate, hedging frequency, pronoun mix), energy (exclamation density, sentence length patterns), vocabulary richness (type-token ratio, hapax legomena), sentence profiles (average length, variance, dominant bucket), punctuation habits, pronoun orientation, and function word deviations from English baselines. These become the hard constraints for profile generation.
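To make those measurements concrete, here's roughly what two of them look like. The function names are illustrative, not the product's actual code:

```python
import re
from collections import Counter

def vocabulary_richness(text: str) -> dict:
    """Two classic lexical-richness measures from stylometry."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        "type_token_ratio": len(counts) / total,  # distinct words / total words
        "hapax_rate": sum(1 for c in counts.values() if c == 1) / total,  # words used exactly once
    }

def punctuation_habits(text: str) -> dict:
    """How often each punctuation mark appears, per 100 words."""
    total = max(len(text.split()), 1)
    marks = Counter(ch for ch in text if ch in ",;:!?()")
    return {mark: round(100 * count / total, 1) for mark, count in marks.items()}
```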
The research
Sixty years of writing fingerprints
Writing style as a measurable, unique-per-person fingerprint is not new. None of this is original research. Computational linguists have been studying it since before computers could spell-check.
Mosteller & Wallace, 1963
Inference and Disputed Authorship: The Federalist Papers
They needed to figure out which Founding Father wrote twelve disputed essays. The answer was in the boring words: articles, prepositions, conjunctions. "The," "of," "but," "however." These function words are used unconsciously. You can't fake them. That's what makes them useful.
John Burrows, 2002
Delta: A Measure of Stylistic Difference and a Guide to Likely Authorship
Burrows turned the Mosteller-Wallace insight into an algorithm. His Delta method computes the stylistic distance between two texts by comparing how often each one uses the most common words, normalized against a reference corpus. It's been tested across languages, genres, and centuries of writing. It keeps working.
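The formula is public, so here's a compressed sketch of it in Python (not Burrows's original implementation): z-score each common word's frequency against a reference corpus, then average the absolute differences between two texts.

```python
from collections import Counter
import statistics

def _freqs(text: str, vocab: list[str]) -> list[float]:
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in vocab]

def burrows_delta(text_a: str, text_b: str, corpus: list[str], top_n: int = 150) -> float:
    """Mean absolute difference of z-scored common-word frequencies.
    Smaller Delta means closer style. The corpus needs at least two texts."""
    vocab = [w for w, _ in Counter(" ".join(corpus).lower().split()).most_common(top_n)]
    corpus_freqs = [_freqs(t, vocab) for t in corpus]
    means = [statistics.mean(col) for col in zip(*corpus_freqs)]
    stdevs = [statistics.stdev(col) or 1e-9 for col in zip(*corpus_freqs)]
    z_a = [(f - m) / s for f, m, s in zip(_freqs(text_a, vocab), means, stdevs)]
    z_b = [(f - m) / s for f, m, s in zip(_freqs(text_b, vocab), means, stdevs)]
    return statistics.mean(abs(a - b) for a, b in zip(z_a, z_b))
```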
Abbasi & Chen, 2008
Writeprints: A Stylometric Approach to Identity-Level Identification and Similarity Detection in Cyberspace
Writeprints took stylometry online. They stacked feature layers (lexical patterns, syntax, sentence structure) and built a fingerprint precise enough to identify people across platforms. Even when the author was actively trying to disguise their writing.
How you build a sentence says more about you than what the sentence is about.
The difference
Voice profiles are not humanizers
There are dozens of tools that promise to make AI text sound human. They're solving a completely different problem. Here's what a humanizer does:
Evade AI detectors
The goal is to fool a classifier. Whether the text sounds like you doesn't matter to them.
Post-process the output
Take finished AI output and paraphrase it. Swap synonyms, restructure sentences, add some noise. Your voice was already gone before they even started.
Make your voice worse
Random variation is not personal style. The text gets harder to detect, but it also sounds less like you.
It's an arms race
Generators vs detectors. Neither side cares about preserving how you actually write.
Here's what a voice profile does instead:
Sound like yourself
Make AI output sound like you wrote it. Your sentence rhythm, your word choices, your instincts about what belongs and what doesn't.
Work at the system-prompt level
Your voice gets captured before generation starts. The AI writes in your style from the first word. Nothing to clean up after.
Your voice stays intact
Your writing fingerprint, sentence rhythm, function word patterns, vocabulary range. All of it gets encoded into the profile. The AI writes like you instead of like itself.
Not trying to fool anyone
The output sounds like you because it was generated from your patterns. Not disguised after the fact.
The approach
How MyDamnVoice captures your voice
Interview
A structured interview captures your tone, values, anti-patterns, and stylistic instincts. What words you use. What phrases annoy you. How you open an email vs how you close one.
Synthesizer functions
Your writing samples run through a set of deterministic synthesizer functions that compute a skeleton: formality score (from contraction rate, hedging frequency, pronoun mix), energy level, vocabulary richness, sentence length profiles, punctuation habits, pronoun orientation, and function word deviations from English baselines. If the measurements contradict what you said in the interview, the skeleton flags it. Measurements always win over self-reporting.
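For a sense of how "measurements always win," here's a toy version of one skeleton dimension. The weights, names, and thresholds are illustrative; the real synthesizer functions are more involved.

```python
HEDGES = ("maybe", "perhaps", "i think", "sort of", "kind of", "possibly")

def formality_score(text: str) -> float:
    """Rough 0-10 formality estimate from contraction and hedging rates."""
    lowered = text.lower()
    words = lowered.split()
    total = max(len(words), 1)
    contraction_rate = sum("'" in w for w in words) / total
    hedge_rate = sum(lowered.count(h) for h in HEDGES) / total
    # More contractions and hedges -> more casual -> lower score.
    return max(0.0, min(10.0, 10 - 60 * contraction_rate - 40 * hedge_rate))

def skeleton_formality(samples: str, interview_claim: float, tolerance: float = 2.0) -> dict:
    """If the samples contradict the interview answer, flag it.
    The measured number goes into the skeleton either way."""
    measured = formality_score(samples)
    return {"formality": measured,
            "contradicts_interview": abs(measured - interview_claim) > tolerance}

print(skeleton_formality("can't wait, let's ship it, maybe friday?", interview_claim=8.0))
# {'formality': 0.0, 'contradicts_interview': True}
```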
Profile generation
The interview data and skeleton go to an AI model, which generates a full voice profile. The skeleton constrains the output. The model fills in the qualitative parts (your voice DNA description, tone, quirks) but can't contradict the numbers. If the skeleton says your formality is 3/10, the profile respects that.
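A hypothetical shape for that handoff (the field names are made up for illustration, not the product's actual schema): the skeleton's numbers are copied into the profile verbatim, and the model only writes the qualitative fields around them.

```python
skeleton = {
    "formality": 3,                      # measured, non-negotiable
    "avg_sentence_length": 11.4,
    "sentence_length_cv": 0.9,
    "exclamations_per_100_words": 1.2,
}

profile = {
    **skeleton,          # hard constraints, copied in as-is
    "voice_dna": "",     # the model fills these qualitative fields...
    "tone": "",
    "quirks": [],        # ...but cannot contradict the numbers above
}
```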
Cross-model verification
The generated profile gets verified by two independent AI models against the skeleton. Different model families catch different biases. Each one scores every dimension and flags inconsistencies. If both verifiers agree the profile drifted from the skeleton, something went wrong.
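The agreement rule itself is simple. A sketch, assuming each verifier returns a per-dimension score of the generated profile (the data shape is hypothetical):

```python
def drifted_dimensions(skeleton: dict, verifier_reports: list[dict], tolerance: float = 1.0) -> list[str]:
    """A dimension counts as drifted only when every verifier reads the
    profile as off-target relative to the skeleton."""
    return [
        dim for dim, target in skeleton.items()
        if all(abs(report.get(dim, target) - target) > tolerance
               for report in verifier_reports)
    ]

skeleton = {"formality": 3.0, "sentence_length_cv": 0.9}
reports = [{"formality": 6.5, "sentence_length_cv": 1.0},   # verifier A's read of the profile
           {"formality": 6.0, "sentence_length_cv": 0.8}]   # verifier B's read
print(drifted_dimensions(skeleton, reports))  # ['formality'] -> something went wrong
```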
Five minutes. Your voice, preserved.
Take the interview, paste a few writing samples, and get a voice profile you can use with any AI.
Start the interview