Write Insight Newsletter · 9 min read

Why most PhDs misuse AI writing tools

(And 6 strategies to use them smarter)

[Image: people writing together. Caption: "The ideal writing scenario (without a computer)?"]

When ChatGPT first emerged, I spent three weeks in my university office, door locked for most of the day, testing what it could actually do for academic writing. As a PhD deeply curious about how experts integrate AI into their workflows, I needed to know: was this the academic equivalent of discovering fire, or just another shiny distraction? So I went to work on it, hard. Until then, I was mostly known for my games and gamification research in HCI, but over the last year, I’ve been obsessed with understanding generative AI writing.

So, what’s my current verdict? Most AI-generated academic text feels like fast food (sorry, my intellectual mavens): convenient, predictable, and utterly forgettable. You could call it a faster way to mediocrity. It averages out whatever intellectual spark you bring to it.

That’s fine if you’re just drafting routine emails. Not so great if you’re trying to contribute meaningful research to your field. But, hey, it has become hugely popular all the same.

Here’s what most academics miss, though: AI doesn’t have to dilute your writing. Used strategically, it can actually elevate it. I’ve been analyzing hundreds of research papers, and I keep discovering fascinating patterns in how successful scholars use these tools well. So, of course, I had to compile a list.

Why most AI-generated academic writing falls short

Let’s be honest about what happens when most academics throw a prompt at ChatGPT for the first time (I was clueless once, too):

  1. Prompt: Write a discussion about cognitive load in educational video games.
  2. Result: Generic zombie text that sounds vaguely academic but contains no original ideas.
  3. Outcome: If you were to submit that anywhere, reviewers would likely flag a lack of theoretical depth and weak links to the literature.

It’s like asking a parrot who’s memorized a textbook to develop a new scientific theory. Not happening, Captain Flint (wait, you haven’t read Treasure Island yet? Maybe fix that this summer?).

The problem isn’t the AI or its capabilities (although more expensive models generally give much better results); it’s how we’re using it. And after three years studying this intersection, I’ve identified six strategies that separate mediocre AI use from genuine research acceleration. Let me share them with you.

1. Master prompt engineering

Remember in The Matrix when Neo couldn’t jump the first time because his mind wasn’t ready? That’s most academics using AI prompts. They’re setting themselves up for failure before they even begin.

Let me share with you my top three AI prompts that I’ve tested and refined over time. They’re simple but powerful. I use them every week and they’ve made a huge difference in my writing:

Prompt 1: Developing research questions

(upgrade to paid subscription for full prompt)

This prompt helps you find great research questions. I tested it extensively in Perplexity AI with the academic feature turned on, which helps keep the AI from making things up. It also works well with ChatGPT Pro when you turn on web search. But here’s a tip: skip the free AI models, since they often invent fake sources. What I love about this prompt is how it turns your big ideas into specific questions worth studying. It works so well because you tell the AI exactly what you want: tough questions (not basic stuff), open questions (that need real research to answer), focused topics (not too broad), and (maybe most important) questions that connect to real research that’s already out there. I’ve used it to explore the deceptive design of AI bots and received five great research questions that sparked an entire research project.

Prompt 2: Justifying methodological choices

(upgrade to paid subscription for full prompt)

This prompt is like your methodological defence attorney. We all know that moment when Reviewer 2 attacks your methodology choices like they’re personal insults. This prompt helps you build a bulletproof case for why your chosen method isn’t just acceptable but actually optimal for your work. Works best with reasoning models like o1 Pro or Claude with extended thinking.

The secret ingredient of this prompt is that it forces a comparison with alternatives. I watched a colleague struggle through three painful revisions because he couldn’t clearly articulate why ethnography was better than surveys for his specific research question. This prompt would have saved him months of academic purgatory. Save yourself the hassle.

Prompt 3: Editing tone and style

(I'm giving you the one below as a freebie and preview for paid)

Act as an academic editor. You prefer active over passive voice. Review the following text: "[Paste text]"
Rewrite it to enhance clarity, conciseness, and formality, ensuring an objective and academic tone, but not to make it sound too stiff. It should be formal but easy to read and lively through variation of sentence and paragraph lengths. Replace jargon where appropriate, simplify complex sentences, and ensure consistent terminology. Clarity is your main objective. Avoid passive voice where active voice is stronger.

Consider this your personal academic stylist. It transforms your “just poured my thoughts onto the page at 2 AM” draft into something that sounds like you wrote it after eight hours of sleep and three cups of perfectly brewed morning latte (skinny, of course, because we’re cutting calories for that summer bod).

The strength of this prompt is its balance. Academic writing doesn’t have to be the linguistic equivalent of a beige wall. This prompt preserves formality while injecting readability. To me, that is how you write papers people actually want to read, not just drive-by cite.

I’ve tested these prompts with clients across multiple fields — from neuroscience to architecture — and they consistently outperform generic requests. The key is specificity. You’re not just asking for output but establishing parameters for quality.
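
And if you want to apply the editing prompt above to more than a paragraph or two, a few lines of scripting help. Here’s a minimal sketch, assuming the official OpenAI Python SDK and an OPENAI_API_KEY in your environment; the model name and function are placeholders, not a recommendation of one specific setup:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The free editing prompt from above, with a placeholder for your text.
EDIT_PROMPT = (
    'Act as an academic editor. You prefer active over passive voice. '
    'Review the following text: "{text}" '
    'Rewrite it to enhance clarity, conciseness, and formality, ensuring an '
    'objective and academic tone, but not to make it sound too stiff. '
    'Clarity is your main objective. Avoid passive voice where active voice is stronger.'
)

def edit_passage(text: str) -> str:
    # Send the filled-in prompt to a chat model and return its rewrite.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whichever capable model you have access to
        messages=[{"role": "user", "content": EDIT_PROMPT.format(text=text)}],
    )
    return response.choices[0].message.content

print(edit_passage("The experiment was conducted by the authors in 2023."))

Nothing fancy, but it keeps your prompt wording consistent across a whole manuscript instead of retyping it in a chat window.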

2. Treat AI drafts as raw material

The most successful academic AI users I’ve interviewed approach first drafts as exploratory material. They don’t ask: “Is this good enough to use?” They ask: “What useful elements can I extract and reshape?”

Here’s my typical workflow for this (sketched in code right after the list):

  1. Generate multiple versions of the same section with different prompts
  2. Extract useful phrases, transitions, or structural elements
  3. Ask the AI to critique its own output: “What critical perspectives are missing from this analysis?”
  4. Request iterative improvements: “Rewrite this paragraph to better connect with the theoretical framework I outlined earlier”
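
If you script your AI use, the whole loop fits in a few lines. Below is a minimal sketch of steps 1 to 4, again assuming the official OpenAI Python SDK; the prompts and model name are illustrative placeholders, not my exact wording:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    # One round trip to a chat model; the model name is a placeholder.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

topic = "cognitive load in educational video games"

# 1. Generate multiple versions of the same section with different prompts.
drafts = [ask(p.format(topic=topic)) for p in (
    "Write a discussion section on {topic} grounded in cognitive load theory.",
    "Write a discussion section on {topic} aimed at a sceptical reviewer.",
)]

# 2. Extract useful phrases by hand, then 3. ask the AI to critique itself.
critique = ask("What critical perspectives are missing from this analysis?\n\n" + drafts[0])

# 4. Request an iterative improvement informed by that critique.
revision = ask("Rewrite this draft to address the critique.\n\nDraft:\n"
               + drafts[0] + "\n\nCritique:\n" + critique)
print(revision)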

I recently worked with a professor who found a creative way to use AI. She asks the AI to write three different viewpoints on her research questions, which helps her think through her ideas better. Think of it as a peer study group that’s available 24/7. She told me that reading these AI perspectives helps her sharpen her thoughts because she sees what’s missing and where she can add something new.

3. Verify every single thing

We all remember the times ChatGPT cited a completely fabricated 2022 meta-analysis, possibly even on your exact research topic. Such nonexistent papers usually have a plausible title, a journal name, and even detailed findings. But they’re all conjured from the digital ether. So, unless you’re running your references through a grounded tool like Consensus (watch my video and use code LENNARTNACKE100 for 1 year of free premium), Elicit, SciSpace, or Scite, don’t trust a single citation.

Here’s something wild. Great professors get fooled by AI because they treat it like a walking encyclopedia. But AI is more like a smart echo chamber: it strings words together beautifully without truly knowing whether they’re true. We’ve all heard about people who cite AI-generated papers in their research, only to discover later that the paper never existed. Retraction Watch lives for that drama.

Let’s make this simple for you. Check everything. Always. When AI mentions research, look it up. When it gives you numbers, find the real source. When it talks about theories, double-check who came up with them first. These small steps will save you from big headaches later. Web search and Perplexity AI reduce hallucinations, but it never hurts to double-check.

I follow a simple protocol: highlight all factual claims in AI-generated text and verify each one before incorporating it into anything. Tedious? Yes. Necessary? You betcha.
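
You can even semi-automate the “does this paper exist?” part of the check. Here’s a minimal sketch that queries Crossref’s free public REST API for a suspicious title; the example title and function name are made up for illustration:

import requests

def find_on_crossref(title: str, rows: int = 3) -> list:
    # Query Crossref's public works endpoint (no API key required)
    # and return (DOI, title) pairs for the closest matches.
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [(item.get("DOI"), (item.get("title") or [""])[0]) for item in items]

# A made-up title of the kind AI likes to invent. If nothing plausible
# comes back, treat the citation as fabricated until proven otherwise.
for doi, title in find_on_crossref("Cognitive load in educational video games: a meta-analysis"):
    print(doi, "-", title)

This only confirms that a paper exists, not what it says; you still have to read it before citing it.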

4. Infuse your own identity into the output

AI writing has a particular voice: often somewhat formal, overly balanced, and distinctly lacking in intellectual courage or disciplinary perspective. No whoom bazzle in those outputs, let me tell ya. It’s the bland elevator music of academic writing. Hey, don’t get me wrong. I know academic writing has to be somewhat dry sometimes, but it really doesn’t have to be boring.

I’ve found that smart researchers build their own special style on top of AI drafts. It’s like taking mom’s recipe and adding your own secret ingredients (because, you know, she left some out on purpose, too). In fact, when I write this newsletter, I take AI text and add my own stories, ideas, and a little bit of crayfish to it. I feel like this makes my writing more real and readable.

Here’s what I recommend:

  • Rewrite key sections to include methodological perspectives specific to your corner of the field
  • Add subtle critiques that reflect your research lineage
  • Incorporate personal research experiences or takes on related work that contextualizes the discussion
  • Restructure arguments to reflect your intellectual priorities

I use AI to write the parts that sound like everyone else so I can focus my energy on writing the parts that could only come from me. I think that’s a good way to go.

5. Expand beyond drafting

Researchers often fixate on using AI to generate draft text, but there are far more powerful applications throughout the research process. My most productive colleagues use AI for:

  • Creating structured outlines for grant proposals (especially useful for breaking through writer’s block)
  • Using research-paper-summary GPTs to quickly assess relevance (though they always read the full paper before citing). I’ve got one of those in today’s issue for paid subscribers.
  • Proofreading for clarity, consistency, coherence, and compellingness
  • Brainstorming research questions from different theoretical angles
  • Converting dense theoretical concepts into accessible explanations for interdisciplinary audiences

I recently used a custom GPT to analyze patterns across several literature reviews in my field, quickly identifying methodological gaps that would usually have taken me twice as long to spot. But it’s good to think about what you should and should not automate in literature reviews. We actually published a framework for this called the INSPIRE framework (check it out).

6. Maintain integrity through transparency

Some academics hide their AI use like it’s academic steroids. And sometimes with good reason, because some reviewers simply desk-reject papers that declare AI use. It’s the Wild West out there right now. But secrecy not only creates unnecessary anxiety; it also prevents the development of shared best practices. We wrote about this in our AI witch hunt article.

Our research indicates that transparent AI use builds rather than diminishes scholarly credibility when:

  • The researcher’s intellectual contribution remains substantive and original
  • Institutional AI guidelines are followed
  • AI assistance is appropriately disclosed
  • The quality of the final product demonstrates scholarly rigour

Every field is developing norms around AI use. Be part of shaping those norms rather than pretending the tools don’t exist. Or experience the wrath of a dying generation of academics.

Academic writing will not be AI or human but simply both

I’ve seen both extremes here: Luddite professors who refuse to use any AI tools, struggling with tasks that could be optimized, and AI enthusiasts who generate entire manuscripts with minimal human input, producing technically correct but intellectually vacant work. Neither of these is the right way to go, in my opinion.

The sweet spot, as usual, lies in the grey zone: using AI to handle routine aspects of academic writing while preserving the uniquely human elements that make scholarship valuable, namely theoretical insight, methodological innovation, and intellectual creativity.

Here’s my challenge to you, then: experiment with one of these 6 strategies in your next writing task. Don’t outsource your thinking, but don’t waste your cognitive resources on tasks AI can actually handle competently.

AI writing assistance is neither academic miracle nor intellectual apocalypse. It’s simply a tool. One of many. And like any tool, its value depends entirely on how skillfully you go about using it.

Prompts and Cheat Sheets (for paid subscribers)