I ran my hand-written essay through 4 AI detectors.
3 flagged it as AI.

This essay was written entirely by a human. No AI tools. No paraphrasing software. Just a student, a reading list, and a deadline. Three out of four detectors still said it was machine-generated.

The experiment

Here's what we did. We took a genuine 1,500-word essay written for a second-year sociology module at a UK university. The student who wrote it confirmed — in writing — that no AI tools were used at any stage. No ChatGPT for ideas. No Grammarly for rewrites. Not even a spell-checker beyond Word's built-in one.

The essay was well-structured, cited properly using Harvard referencing, and received a 2:1 grade from the module tutor. By every reasonable standard, this was a legitimate piece of student work.

We submitted it, unchanged, to four AI detection tools that UK students commonly encounter or use themselves. We wanted to see whether a clearly human essay would pass every detector cleanly.

It didn't.

What each detector said

| Detector | Verdict | Confidence | Correct? |
|---|---|---|---|
| Detector A (free tier) | "Likely AI-generated" | 72% | ✗ Wrong |
| Detector B (free tier) | "Mixed — AI and human" | 58% | ✗ Wrong |
| Detector C (paid) | "Human-written" | 84% | ✓ Correct |
| Detector D (free tier) | "Likely AI-generated" | 67% | ✗ Wrong |

Three out of four tools gave the wrong answer. One of them was 72% confident that a genuinely human essay was written by AI. That's not a marginal error — that's a number high enough for a university to open an investigation.

Only one detector got it right, and it was the only paid tool in the test.

Why false positives happen

AI detectors work by measuring how "predictable" your writing is. The theory: AI text is more predictable than human text because language models optimise for probable word sequences. If your writing scores as highly predictable, the detector assumes a machine wrote it.

The problem is that good academic writing is also predictable. When you follow a clear essay structure, use proper academic vocabulary, write in formal register, and avoid slang — you're doing exactly what your lecturers tell you to do. But you're also matching the statistical profile of AI-generated text.

Students who write well are penalised by the very tools designed to catch people who didn't write at all. It's a cruel irony, and it's not hypothetical — it's happening now at UK universities.
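To make the "predictability" idea concrete, here is a toy sketch. Real detectors score text with large language models (roughly, perplexity); this illustrative stand-in uses simple word-bigram overlap against a reference of stock academic phrasing. Every name and the reference text here are hypothetical, not taken from any actual detector.

```python
# Toy illustration of "predictability" scoring. Real detectors use language
# models; this sketch counts how many word bigrams in a text also appear in
# a reference corpus. Higher score = more predictable phrasing.
from collections import Counter

def predictability_score(text: str, reference: str) -> float:
    """Fraction of word bigrams in `text` that also occur in `reference`."""
    def bigrams(s: str) -> list[tuple[str, str]]:
        words = s.lower().split()
        return list(zip(words, words[1:]))

    ref_bigrams = Counter(bigrams(reference))
    text_bigrams = bigrams(text)
    if not text_bigrams:
        return 0.0
    hits = sum(1 for b in text_bigrams if b in ref_bigrams)
    return hits / len(text_bigrams)

# Formal academic prose reuses common constructions, so it overlaps the
# reference far more than casual writing does.
reference = "this essay will argue that the evidence suggests that the data shows that"
formal = "this essay will argue that the evidence suggests a clear pattern"
casual = "honestly I reckon the whole thing is a bit of a mess tbh"
print(predictability_score(formal, reference))  # higher
print(predictability_score(casual, reference))  # lower
```

The point of the sketch is the asymmetry: the formal sentence scores higher purely because it follows conventional academic phrasing, which is exactly the trap described above.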

The specific triggers in this essay

When we analysed the falsely flagged essay ourselves, we found three features that likely triggered the detectors:

Consistent formality. The student maintained academic register throughout. No contractions, no colloquialisms, no first-person asides. This is exactly what good academic writing looks like — and exactly what AI produces by default.

Well-structured paragraphs. Each paragraph opened with a topic sentence, developed the point with evidence, and closed with analysis. Textbook essay structure. Also textbook AI structure.

Smooth transitions. The student used connective phrases between sections — "building on this argument," "a contrasting perspective suggests" — which raised the text's predictability score. Human students who write fluently get caught by this constantly.

Worried about being falsely flagged?
SafeGrade shows exactly which sections of your essay look AI-generated — and which look genuinely human. Check before you submit.
Scan my essay →

What this means for students

If you're a competent writer, you're at higher risk of a false positive than someone who writes casually. That sounds absurd, but it's exactly what this experiment shows. Students who follow their lecturers' advice — write formally, structure clearly, cite properly — produce text that looks more like AI output than students who write messily.

And here's the problem: you may not get the chance to explain. Many UK universities have automated AI detection built into their submission pipelines. If Turnitin flags your essay, your tutor may see an "AI probability" score before they even read your work. That number creates a bias — even subconsciously — that's difficult to reverse.

Some universities require you to attend a formal academic misconduct hearing if the AI score exceeds a threshold. You'll be asked to prove you wrote your own work. How do you prove that? Draft versions help. Browser history helps. But not every student saves their drafts, and nobody expects to be interrogated for writing well.

How to protect yourself before submitting

Keep your drafts. Save multiple versions of your essay as you work. Name them with dates. If you're questioned, a clear revision history — from messy notes to polished final — is your strongest defence. AI doesn't produce drafts; it produces finished text in one go.
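One low-effort way to build that dated revision history is a small snapshot script you run whenever you finish a writing session. This is a hypothetical helper, not a SafeGrade feature, and the filenames in it are placeholders:

```python
# Minimal sketch: copy a working document into a "drafts" folder with a
# timestamped name (e.g. essay_2024-05-01_1430.docx), preserving file
# metadata. The filename "essay.docx" below is a placeholder.
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(path: str, archive_dir: str = "drafts") -> Path:
    """Copy `path` into `archive_dir` with a date-and-time suffix."""
    src = Path(path)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M")
    dest = dest_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves timestamps as well as content
    return dest

# Usage: snapshot("essay.docx") after each writing session.
```

A folder of timestamped copies, from rough notes to polished final, is precisely the kind of evidence a misconduct hearing asks for.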

Write in your voice. This doesn't mean writing informally. It means including observations, opinions, and reflections that are clearly yours. Refer to specific lectures. Mention a tutorial discussion. Use a phrase or metaphor that comes from your own thinking. These are things AI can't convincingly replicate.

Vary your rhythm. If every paragraph is roughly the same length and every sentence flows smoothly into the next, consider breaking the pattern deliberately. A short, blunt sentence after a long analytical one. A two-sentence paragraph that makes a point and moves on. Rhythm variation is one of the strongest signals that human writing is genuine.
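Rhythm variation of this kind is sometimes called "burstiness". As a rough self-check — a hypothetical sketch, not how any institutional detector actually computes its score — you can approximate it as the spread of your sentence lengths:

```python
# Rough "burstiness" self-check: the standard deviation of sentence lengths
# measured in words. A naive regex split stands in for real sentence
# segmentation; actual detector metrics are more sophisticated.
import re
from statistics import pstdev

def sentence_lengths(text: str) -> list[int]:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    lengths = sentence_lengths(text)
    return pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = ("The essay argues one point. The essay makes one claim. "
           "The essay ends one way.")
varied = ("Consider the evidence carefully before committing to any "
          "single interpretation. It fails. Why?")
print(burstiness(uniform))  # low: every sentence the same length
print(burstiness(varied))   # high: long sentence, then two short ones
```

A near-zero score means your sentences are all roughly the same length, which is the flat rhythm described above; mixing long analytical sentences with short blunt ones pushes the number up.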

Check before you submit. Run your essay through a reliable detection tool yourself — before your university does. SafeGrade's Deep Scan shows you section-by-section which parts of your essay look human and which carry AI signals. If something's flagged, you have time to revise it. If everything's clean, you submit with confidence.

The goal isn't to game detection. It's to make sure your genuine work is recognised as genuine — because right now, the tools aren't reliable enough to do that on their own.

Know what your university's detector will see
before they see it.
SafeGrade's Deep Scan analyses your essay the same way institutional detectors do — but shows you the results first. Your first scan is free.
Scan my essay free →