Is This Written by ChatGPT? 5 Quick Checks

You have a document in front of you and something feels off. Here is how to check in under 60 seconds: five fast manual signals that require no tools and no accounts.

Gut instinct is a reasonable starting point when reading suspicious text, but instinct alone is not actionable. What you need is a rapid, systematic check: something you can run in under a minute that either confirms your suspicion or gives you enough signal to justify a more thorough investigation. These five checks are ordered from fastest to slightly slower. Run them in sequence and stop as soon as you have enough signal.

The 60-Second Manual Check

Check 1: Search for "delve" or "meticulous" (10 seconds)

Open the document in any word processor or text editor. Use Ctrl+F (or Cmd+F on Mac) to search for "delve." If it appears even once in a non-literary context (a business email, a student essay, a product description, a report), that is a significant signal. Follow up by searching for "meticulous." Finding both in a single document that is not academic prose puts the probability of ChatGPT authorship well above 70%.

Why these two words specifically? The Kobak et al. (2025) research in Science Advances tracked word frequencies across millions of academic abstracts before and after ChatGPT's November 2022 launch. "Delve" increased to approximately 28 times its baseline rate. "Meticulous" showed a similarly dramatic increase. Before ChatGPT, these words were confined to niche formal writing. Now they appear in everything AI touches. See our full ChatGPT writing patterns guide for the complete list.
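If you run this check often, the word search is trivial to script. Here is a minimal Python sketch; the word list is illustrative, drawn from the tells discussed in this guide, not an official vocabulary:

```python
import re

# Illustrative tell-word list based on this guide; extend with your own.
TELL_WORDS = ["delve", "delves", "delving", "meticulous", "meticulously"]

def count_tell_words(text):
    """Count whole-word, case-insensitive occurrences of each tell-word."""
    lowered = text.lower()
    return {
        word: len(re.findall(r"\b" + re.escape(word) + r"\b", lowered))
        for word in TELL_WORDS
    }

sample = "Let us delve into the data and conduct a meticulous review."
print(count_tell_words(sample))
```

Two or more distinct tell-words in a short, non-academic document is exactly the pattern the manual check is looking for.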

Check 2: Read the last paragraph (10 seconds)

Skip to the final paragraph. Does it begin with any of these phrases?

  • "In conclusion,"
  • "In summary,"
  • "To summarize,"
  • "Ultimately,"
  • "To conclude,"

If the document is not a formal academic essay and the final paragraph begins with one of these, you are almost certainly looking at ChatGPT output. Human writers do not write formal conclusions in emails, blog posts, cover letters, or casual reports. This reflex is trained into ChatGPT from the academic writing that dominated its training corpus. It is one of the most reliable single-signal tells for GPT-series models.
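This check is also easy to mechanise. A rough sketch that flags a stock conclusion opener in the final paragraph, assuming paragraphs are separated by blank lines:

```python
# Stock conclusion openers from the list above, lowercased for comparison.
CONCLUSION_OPENERS = ("in conclusion", "in summary", "to summarize",
                      "ultimately", "to conclude")

def flags_conclusion_opener(text):
    """True if the final non-empty paragraph starts with a stock conclusion phrase."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return bool(paragraphs) and paragraphs[-1].lower().startswith(CONCLUSION_OPENERS)

doc = "Sales rose in Q3.\n\nIn conclusion, the outlook remains positive."
print(flags_conclusion_opener(doc))  # True
```

Remember the caveat from the check itself: a formal academic essay may legitimately end this way, so apply this only to emails, blog posts, and similar informal writing.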

Check 3: Count paragraph-opening transitions (15 seconds)

Skim through the document and note how many paragraphs begin with words like "Furthermore," "Moreover," "Additionally," "Consequently," or "Nevertheless." In a document of five paragraphs or more, having three or more paragraphs open with formal transitions is extremely unusual for a human writer and very typical for ChatGPT.

Human writers use these words mid-sentence, mid-paragraph, or not at all in casual contexts. ChatGPT uses them as paragraph openers because it was rewarded during RLHF training for producing text that sounds logically structured, and explicit transition markers are the surface-level signal of structure.
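Counting transition openers can be scripted the same way. In this sketch, the threshold (three or more transition-opened paragraphs in a document of five or more) is the rule of thumb from the check above:

```python
# Formal transitions from the check above, used as paragraph-opener tests.
TRANSITIONS = ("furthermore", "moreover", "additionally",
               "consequently", "nevertheless")

def transition_opener_count(text):
    """Count paragraphs whose opening word is a formal transition."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return sum(p.lower().startswith(TRANSITIONS) for p in paragraphs)

doc = ("Moreover, sales rose.\n\nThe team grew.\n\n"
       "Furthermore, costs fell.\n\nAdditionally, churn dropped.\n\nWe will hire.")
paragraph_count = len([p for p in doc.split("\n\n") if p.strip()])
suspicious = paragraph_count >= 5 and transition_opener_count(doc) >= 3
print(transition_opener_count(doc), suspicious)
```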

Check 4: Check for sentence length uniformity (15 seconds)

Read any three consecutive paragraphs and pay attention to sentence endings. Do most sentences feel roughly the same length? Count a few: if they all land around 15–25 words, that is a strong signal. Human writers naturally produce fragments, very long compound sentences, and everything in between within a single paragraph. AI prose has a metronomic rhythm that becomes obvious once you're listening for it.

A quick trick: paste the text into a free sentence-length visualiser (several exist online). Human text produces a jagged histogram with high variance. AI text produces a smooth, narrow distribution. If you do not have a tool handy, simply read three sentences aloud and note whether they feel roughly the same "weight"; that feeling of uniformity is what the statistics show.
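If you would rather compute the spread than eyeball it, a crude sketch is below. The naive split on sentence-ending punctuation mishandles abbreviations and decimals, which is acceptable for a rough signal:

```python
import re
import statistics

def sentence_length_spread(text):
    """Return per-sentence word counts and their population standard deviation."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    return lengths, statistics.pstdev(lengths)

human = ("No. That was never the plan, and honestly I should have seen it "
         "coming a mile away. We adjusted.")
lengths, spread = sentence_length_spread(human)
print(lengths, round(spread, 1))
```

A human paragraph like this one typically shows a large spread; uniform AI prose clusters tightly around its mean, producing a standard deviation closer to zero.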

Check 5: Ask for a specific personal example (10 seconds, requires access to the author)

If you have access to the person who supposedly wrote the document, ask them to tell you about a specific real example from their experience that relates to the content. Something like: "In your report you mention supply chain challenges. Can you give me one specific situation from your own work where this happened, with a rough date and company name?" or "You wrote about the importance of exercise. What's an actual workout you did this week?"

ChatGPT cannot supply real personal memories, so a document that claims personal experience will fail this test immediately. The person either produces a real specific example or cannot, and their inability to do so is diagnostic. Even if the original document was AI-generated, the author will often admit it when directly pressed with a specific factual question they cannot answer.

When to Use a Tool Instead

The manual checks above are designed for speed and work best on medium-length documents (300–2,000 words). There are three situations where you should skip straight to a tool:

  • Short texts under 100 words: Statistical signals need sample size. A single paragraph is too short for reliable manual pattern recognition. Paste it into our free AI detector, which uses sentence-level statistical analysis better suited to short text.
  • When you need a documented score: If you're an educator or manager who needs to record evidence, a detector score is more defensible than "I ran a word search." Use the tool and note the score.
  • When the manual check is inconclusive: If checks 1–4 are all borderline, the text may be humanised AI, AI with extensive human editing, or just a formal human writer. A full statistical analysis will usually resolve the uncertainty. Our ChatGPT detection page explains how the tool handles these edge cases.

What if the Score is 50–70%?

The uncertain zone, 50% to 70%, is where interpretation matters most. A score in this range typically indicates one of three things: lightly humanised AI text, AI text that was then substantially edited by a human, or a human writer whose style happens to be unusually formal. In this zone, you should:

  1. Run the manual vocabulary check (check 1) to see if any specific tells are present
  2. Look for personal specifics: dates, names, places, numbers from personal experience
  3. Check if the overall structure is suspiciously comprehensive (covers every angle without clear prioritisation)
  4. If the context warrants it, put the check 5 question to the author

Do not treat a 50–70% score as proof of AI authorship. Treat it as a prompt for closer manual review. AI detection is probabilistic, and this range is genuinely uncertain territory.