Write Insight Newsletter · 12 min read

The Brutal Truth About Rigour in HCI

The seven rigour upgrades every HCI researcher needs but most still ignore

A young researcher with the committee in a bubble, watching whether he’s rigorous in his approach.
Becoming a rigorous researcher is not easy.

Your paper got rejected from your favourite high-quality venue again, and you’re wondering why.

Let me tell you the brutal truth. In my field, human-computer interaction (HCI), most papers fail peer review because they lack methodological rigour. It’s everyone’s favourite reason for rejection. Many junior researchers make the mistake of thinking they can fix this just by working harder, when in reality they first have to understand what rigour actually means to HCI reviewers. Most researchers think rigour means being thorough, but that’s only about 20% of the equation. The real definition? Rigour is the systematic application of methodological principles that ensures your research is credible, trustworthy, and reproducible. Without it, even trailblazing, innovative ideas get relegated to the rejection pile, your h-index stays flat, and that tenure clock keeps ticking louder and louder.

What Is Rigour in HCI Research?

Rigour in HCI is the systematic application of methodological principles that ensure your research is credible, trustworthy, and reproducible. Unlike simply following checklists, rigour in HCI requires methodological sophistication that demonstrates you understand the complexity of human-centred research.

Key Definition

Rigour = Credibility + Transparency + Reflexivity + Context-Sensitive Quality

To help you combat that frustration, I’m showing you 7 concrete ways to inject rigour into your next paper: the same tactics that help me and my team consistently publish 3–5 high-quality papers in top-tier venues every year. Let’s check them out.

1. Start with a positionality statement for reflexivity that acknowledges your biases upfront

Most researchers skip positionality or reflexivity statements unless they’re doing marginalization research, but that’s a rookie mistake that signals methodological naivety to HCI reviewers. Yes, I know I’m getting field-specific here, but at CHI this is extremely common and I feel many social science venues will follow.

Every HCI paper, whether it’s about algorithm performance, user interfaces, or social justice, now needs a positionality statement that addresses four critical dimensions. Some claim this is political correctness in action, but really the purpose is to demonstrate that you understand how your perspective shapes every research decision, from problem formulation to data interpretation. It can’t hurt to document this.

Here’s what to include in your positionality statement:

  • Values driving your work: “We approach this research valuing efficiency and scalability in system design.”
  • Ethical points beyond the ethics board review: “We considered how our facial recognition research might enable surveillance.”
  • Biases affecting interpretation: “Our industry backgrounds may privilege commercial viability over community needs.”
  • Identity disclosure (but only if comfortable): “As researchers from Western elite institutions, we acknowledge our distance from resource-constrained contexts.”

Place this statement, roughly 150 words, early in your methodology section. For example: “Our team’s computer science training emphasizes quantitative metrics, potentially undervaluing qualitative user experiences. We consciously adopted mixed methods to counteract this bias, though we acknowledge our comfort with statistical analysis may still privilege numerical findings.” Or: “As researchers embedded in Western academic traditions, we acknowledge our interpretive lens may influence our analysis of global user behaviours.”

Most HCI reviewers gobble that right up. They want us as authors to be as transparent as we can about our assumptions, and that transparency strengthens our work’s contribution. Reviewers see this as a sign you understand the complexity of knowledge production, not just data collection.

2. Build an explicit audit trail that reviewers can actually follow

Your methods section probably says “we analyzed the data thematically,” but that tells reviewers nothing about your actual process, even if you’re citing Braun & Clarke.

Instead, create a supplementary materials document with your complete audit trail. Include details of your coding process, your affinity diagrams, excerpts from any research memos, and decision logs showing how themes evolved. Upload this to OSF or GitHub and link it prominently in your paper. Even we don’t do this for every paper even though we should.

Top researchers maintain three types of documentation:

  • Raw data files with original timestamps
  • Process memos documenting decisions and insights
  • Analysis evolution showing how codes became themes

Quantitative researchers may think audit trails are just for qualitative work, but that’s wrong. Your quantitative audit trail should document stuff like:

  • Why you removed those 47 data points (with the specific criteria used)
  • How you handled missing data and why you chose that method
  • Alternative analyses you tried and why you rejected them
  • Power analysis calculations and assumption checks

For example: “We initially planned ANOVA but switched to Kruskal-Wallis after normality tests failed (Shapiro-Wilk, p < 0.001). See the ‘Quantitative Decisions’ document for distribution plots and test statistics.”
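If you want to make that decision log concrete, here is a minimal sketch in Python (hypothetical data and a hypothetical log file name, assuming NumPy and SciPy are available) of how you might run the assumption check and append the resulting decision to your audit trail:

```python
# Minimal sketch: log an assumption check and the resulting analysis decision.
# The data and the file name are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical task-completion times for three interface conditions.
groups = [rng.lognormal(mean=3.0, sigma=0.4, size=20) for _ in range(3)]

# Shapiro-Wilk normality check for every condition.
all_normal = all(stats.shapiro(g).pvalue >= 0.05 for g in groups)

if all_normal:
    stat, p = stats.f_oneway(*groups)   # planned one-way ANOVA
    decision = f"ANOVA retained: F = {stat:.2f}, p = {p:.3f}"
else:
    stat, p = stats.kruskal(*groups)    # non-parametric fallback
    decision = f"Switched to Kruskal-Wallis: H = {stat:.2f}, p = {p:.3f}"

# Append the decision, with test statistics, to the decision log.
with open("quantitative_decisions.md", "a") as log:
    log.write(decision + "\n")
```

The specific tests matter less than the habit: every deviation from your planned analysis leaves a trace that reviewers can follow.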

This transparency earns you that “rigorous methodology” praise. Reviewers want to see your messy research process and be able to double-check it, not just your clean results. Always provide documentation of analytical decisions, including abandoned approaches, to demonstrate exceptional methodological rigour.

3. Triangulate everything but do it strategically, not randomly

Triangulation isn’t just using multiple methods. Instead, you strategically combine approaches to address different aspects of your research question.

Map each research question to at least two data sources. If you’re studying user behaviour, combine system logs (what people do) with interviews (why they do it) and observations (how they do it). But here’s the trick: explicitly state in your paper how each method addresses a specific weakness of the others.

For instance: “While interviews showed participants’ preferences, system logs provided behavioural ground truth, which addressed potential social desirability bias.”

Don’t look at this as more work for you, but as smarter work that turns decent studies into rigorous investigations.

4. Replace vague quality criteria with paradigm-specific standards

Stop using generic terms like “validity” for qualitative research. It screams “I don’t understand my paradigm.”

Instead, use paradigm-appropriate criteria. For qualitative HCI, implement these four:

  • Credibility (internal validity equivalent): Establish this through member checking: Send interview summaries back to participants asking “Did I capture your experience accurately?” Also use prolonged engagement (minimum 3–6 months in field studies) and persistent observation (multiple visits to research sites).
  • Transferability (external validity equivalent): Achieve through thick description (i.e., document context so thoroughly that readers can judge applicability to their settings). Include participant demographics, cultural norms, technological infrastructure, organizational structures, and temporal factors.
  • Dependability (reliability equivalent): Demonstrate via audit trails showing how your analysis evolved. Document every coding decision, theme merger, and interpretive pivot.
  • Confirmability (objectivity equivalent): Establish through reflexivity and data triangulation. Show how findings emerge from the data, not from researcher assumptions.

For design research, operationalize these three:

  • Relevance: Does your solution address genuine user needs? Show evidence from formative studies and stakeholder feedback.
  • Legitimacy: Is your design process credible to practitioners? Document design rationale, iteration history, and expert reviews.
  • Effectiveness: Does it work in practice? Provide deployment data, usage metrics, and outcome assessments.

For quantitative work, address all four validities:

  • Internal validity: Show causal relationships are real, not confounded. Document randomization procedures, control variables, and manipulation checks.
  • External validity: Demonstrate generalizability through diverse samples, replication studies, or ecological validity arguments.
  • Construct validity: Prove you’re measuring what you claim. Include validation studies, factor analyses, and convergent/discriminant validity evidence.
  • Statistical conclusion validity: Make sure your statistical inferences are sound. Report power analyses, assumption checks, effect sizes, and correction procedures (see the sketch after this list).
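For that last bullet, here is a minimal sketch in Python (hypothetical numbers, assuming NumPy and statsmodels are available) of the kind of a-priori power analysis and effect-size calculation you would report:

```python
# Minimal sketch: a-priori power analysis and effect-size reporting.
# Effect size, alpha, power, and the sample data are hypothetical.
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Participants per group needed to detect a medium effect (d = 0.5)
# at alpha = .05 with 80% power, two-sided independent-samples t-test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required n per group: {int(np.ceil(n_per_group))}")  # about 64

# Cohen's d for two hypothetical conditions (equal n, pooled SD).
rng = np.random.default_rng(7)
a, b = rng.normal(5.0, 1.0, 64), rng.normal(5.5, 1.0, 64)
pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
print(f"Cohen's d = {(np.mean(b) - np.mean(a)) / pooled_sd:.2f}")
```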

Then (and this is crucial) operationalize each criterion. Don’t just claim “transferability,” but explain it like this: “We provide thick descriptions of our research context, including participant demographics, technological infrastructure, and cultural factors, which our readers can use to assess the applicability of our research to their contexts.”

Once you pick a paradigm, make sure to use the specific criteria I mentioned here, or find others that you can operationalize for your context. As a result, you’ll get fewer methodological concerns in your reviews.

5. Report your constraints as design decisions, not limitations

Every study has constraints, but framing matters immensely for how rigorous your study is perceived to be.

Reviewers perceive papers as having the highest rigour when the authors demonstrate a clear awareness of their position within their paradigm. Change “We only studied 12 participants” into “We deliberately selected 12 information-rich cases for deep analysis, consistent with phenomenological traditions prioritizing depth over breadth.” Framing the study as aligned with phenomenological traditions isn’t spin, because you’re showing reviewers that you understand different research paradigms have different standards for rigour. That’s accurate methodological positioning. Always use paradigm-appropriate criteria to reduce methodological concerns. Also, calibrate your certainty (i.e., how confident you are in your findings, presented in their appropriate context) to indicate to reviewers how rigorous your approach is. When you explicitly state “we deliberately selected” rather than “we only had,” you demonstrate the intentional, theory-driven nature of your methodological choices.

Structure your constraints discussion in three parts:

  • The methodological tradition you’re following and its assumptions
  • Why your choices align with that tradition
  • What your approach shows that other approaches might miss

Rigour, in this context, means acknowledging what each approach shows you: what you gain from it and what you might miss by using it. You’re not hiding the limitations of any approach; you’re indicating to your reviewers that you understand the epistemological trade-offs that any methodological choice brings with it.

For example, you can discuss trade-offs between:

  • Depth vs. breadth (qualitative vs. quantitative samples)
  • Internal vs. external validity (controlled vs. naturalistic settings)
  • Precision vs. relevance (laboratory vs. field studies)

Instead of: “Due to resource constraints, we interviewed 8 participants.” Write: “Following information-rich case selection principles (Smith, 2002), we interviewed 8 participants who represented maximum variation across our key dimensions of interest, which allowed us to deeply explore the phenomenon while keeping the data volume analytically manageable for rigorous thematic analysis.”

Remember that rigorous research isn’t research without constraints. Instead, you aim to help reviewers understand not just what you did, but why those choices were theoretically and practically justified within your research paradigm. Explain that you’re doing research where constraints are thoughtfully chosen and explicitly justified.

6. Create a reproducibility package before you even submit

Most researchers scramble to create reproducibility materials after acceptance, but rigorous papers ship with complete packages.

Before submission, prepare: analysis scripts with inline documentation, anonymized raw data, detailed protocol documents, and instrument validation evidence. Host everything on a repository like OSF, figshare, FRDR (if you’re in Canada) or Zenodo with a DOI, and include a “Reproducibility Statement” section in your paper.

Your reproducibility statement should specify:

  • What materials are available and where
  • What can and cannot be reproduced and why
  • Environmental dependencies and version requirements
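For the environmental dependencies item above, here is a minimal Python sketch (hypothetical file and package names) of how you might record version information into the package automatically rather than typing it by hand:

```python
# Minimal sketch: write an environment summary for a reproducibility package.
# The file name and package list are hypothetical placeholders.
import sys
import platform
from importlib.metadata import version

packages = ["numpy", "pandas", "scipy"]  # whatever your analysis actually uses

with open("ENVIRONMENT.md", "w") as f:
    f.write(f"Python {sys.version.split()[0]} on {platform.platform()}\n\n")
    for pkg in packages:
        f.write(f"- {pkg} == {version(pkg)}\n")
```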

Reviewers increasingly see pre-registration and reproducibility packages as a baseline expectation, not a bonus. So it’s good to get into the habit early.

7. Write certainty-calibrated claims that match your evidence

The fastest way to seem non-rigorous? Just overclaim your findings. Ok, I’m not suggesting that you have to hedge every single statement in your discussion section. Rather, you should think carefully about how you discuss the impact of your findings, what you can say with certainty, and what you cannot claim.

Develop a certainty vocabulary: “strongly suggests” for converging evidence, “indicates” for clear patterns, “may suggest” for emerging insights, and “warrants further investigation” for preliminary findings. Then map each claim properly to specific evidence in your results.

Never write “proves” in HCI research. That’s like drenching your hand in high-fructose apple juice and then trying to smash a wasp’s nest with it. Enjoy the stings. Never write “all users” when you mean “all participants.” Never claim generalization without explicit evidence of transferability.

Here’s a rigour-enhancing trick you can use: create a claims table in your appendix or supplementary materials that maps each discussion point to supporting evidence, certainty level, and alternative explanations. Reviewers love this because it shows methodological maturity.

We often wish that rigour were just a checklist we could follow, and I am certainly providing some checklists in this issue for paid subscribers, but more often than not, rigour at its heart means demonstrating methodological sophistication through systematic transparency. Ideally, you’ll implement these seven strategies in your next paper, and, as a result, your rejection rate will plummet and your paper quality will increase.

Let me know if you feel I’ve missed anything or if rigour has even more specific properties in your own field.

P.S.: Curious to explore how we can tackle your research struggles together? I’ve got three suggestions that could be a great fit: a seven-day email course that teaches you the basics of research methods, or the recordings of our AI research tools webinar and PhD student fast track webinar.

8 LLM Prompts, 4 Templates, 2 Checklists

For paying subscribers, I have 4 statement templates (qualitative research quality, design research quality, reflexivity for individual researcher and research team), 8 LLM prompts (quality criteria selection, reflexivity/positionality statements, audit trail planning, constraint reframing, reproducibility package planning, calibrating research claims, study rigour assessment, method section review), and 2 checklists (audit trail, general paper rigour) today:
