
How to Document UX Research Sessions and User Interview Synthesis Reports
A practical guide for UX researchers on structuring session documentation from first note to final report. Covers user interview note-taking, usability test documentation, synthesis templates, affinity mapping notes, and insight readouts. Includes fictional examples and a documentation checklist.
Why UX Research Documentation Is a Different Kind of Problem
Most documentation guides assume you are writing about a single, bounded event: one therapy session, one medical visit, one legal consultation. UX research does not work that way. A single study can generate twelve user interviews, three usability sessions, forty pages of raw notes, and hundreds of sticky notes from an affinity mapping workshop, all of which need to become a coherent insight readout that a product team can act on by Friday.
The documentation problem is not a shortage of information. It is the opposite. UX researchers are often drowning in qualitative data and under pressure to produce structured, stakeholder-ready outputs faster than the synthesis process naturally allows.
UX research documentation covers the full arc from session capture to deliverable, and each stage has its own structural requirements. This guide walks through each one: how to take notes during user interviews, how to document usability sessions, how to build a synthesis report, how to capture affinity mapping work, and how to structure insight readouts that actually get used. Every stage is illustrated with fictional examples.
If you are a solo freelance researcher or an in-house researcher working without a dedicated research ops function, this guide is written for you. The examples assume you are doing this work yourself, not delegating to a team of note-takers.
Stage 1: User Interview Note-Taking
The Core Tension
The hardest part of taking notes during a user interview is staying present while also capturing enough detail to work with later. Researchers who try to transcribe everything end up with walls of text that are nearly impossible to code. Researchers who take sparse notes end up with impressionistic summaries that lose the specificity that makes qualitative data useful.
The middle path is a structured note-taking template that prompts you to capture specific types of information without becoming a transcript. Every interview in a study should use the same template so that comparison across participants is possible during synthesis.
Recommended Interview Note Structure
A practical user interview note template has four zones:
Zone 1: Session header. Capture participant ID (never real names in your working documents), session date, interviewer, observer if any, and the participant's relevant attributes (for example: "P07, mobile-first, 3+ years using expense tools, mid-size company").
Zone 2: Verbatim quotes. Reserve a column or section specifically for direct quotes. When a participant says something in their own words that captures an insight, attitude, or experience precisely, mark it as a quote. Do not paraphrase here. These verbatim passages become your evidence in synthesis.
Zone 3: Observations. Capture what you noticed that the participant did not say. Tone shifts, moments of hesitation, places where they lost the thread, body language if it was visible, any divergence between what they said and what they did. Observations are data too.
Zone 4: Emerging questions and prompts. A running sidebar for questions that occur to you mid-interview but are not yet in your discussion guide. These feed back into future sessions.
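If your notes live in a tool that exports structured data (JSON, YAML, a spreadsheet), the four-zone template can be sketched as a record shape. This is a minimal, illustrative sketch, not a standard schema; every field name here is an assumption to adapt to your own tooling.

```python
# Illustrative sketch of the four-zone interview note template as structured
# data. Field names are assumptions, not a standard; adapt them to the
# note-taking tool you actually use.

def new_interview_note(participant_id, date, interviewer, attributes, observer=None):
    """Return an empty note record with the four zones pre-structured."""
    return {
        "header": {                    # Zone 1: session header
            "participant_id": participant_id,  # never a real name
            "date": date,
            "interviewer": interviewer,
            "observer": observer,
            "attributes": attributes,  # e.g. "mobile-first, 3+ years using expense tools"
        },
        "quotes": [],                  # Zone 2: verbatim quotes only, no paraphrase
        "observations": [],            # Zone 3: what you saw, not what you inferred
        "emerging_questions": [],      # Zone 4: prompts for future sessions
    }

note = new_interview_note(
    participant_id="P04",
    date="2024-03-12",
    interviewer="researcher-1",
    attributes="freelance designer, 6 years in business, invoices via spreadsheet",
)
note["quotes"].append(
    "I send the invoice and then I just kind of hold my breath until I see the money."
)
```

Keeping all four zones in every record is what makes cross-participant comparison mechanical later: every session has the same shape, so synthesis never has to guess where the quotes live.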
A Fictional Example
Study: Freelance invoicing workflow research. Participant: P04, a freelance graphic designer, 6 years in business, invoices using a spreadsheet.
Zone 2 (verbatim quote): "I send the invoice and then I just kind of hold my breath until I see the money. There's no actual visibility into what's happening."
Zone 3 (observation): Described the payment wait as something that physically affected her. Used "hold my breath" twice. This is a visceral anxiety signal, not just a workflow complaint. Worth probing in the next session with a similar participant.
Zone 4 (follow-up prompt for future sessions): Ask: "When you say you don't have visibility, what would visibility actually look like for you?"
This level of specificity is what separates notes you can actually use for synthesis from notes that just remind you roughly what was said.
What Not to Capture
Avoid summarizing judgment into your notes during the session. "She seemed frustrated with the product" is an interpretation, not an observation. "She clicked three times in the wrong place and then said 'wait, where did it go?'" is an observation. Keep inference out of the capture layer. You will do interpretation during synthesis.
Stage 2: Usability Test Documentation
What Makes Usability Notes Different
In a user interview, you are primarily listening. In a usability test, you are watching someone interact with a product or prototype in real time. The note-taking task shifts accordingly. You are documenting task performance, not just participant perception.
A usability session note template needs to track outcomes per task, not just impressions across the session.
Recommended Usability Session Structure
Session header: Same fields as user interview (participant ID, date, device/environment context, prototype version tested).
Task log: One section per task in the test protocol. For each task, capture:
- Task name and start time
- Task completion: yes, no, or partial (with what "partial" means in this context)
- Time on task (rough, if you are not using screen capture software)
- Errors and error recovery: what went wrong and how the participant responded
- Verbal commentary: key quotes from think-aloud narration
- Observations: hesitations, unexpected paths, moments of confusion or delight
Post-task rating: If you are using a rating scale like the Single Ease Question (SEQ) or task-level confidence rating, capture the raw score and any spontaneous explanation the participant offered.
End-of-session summary: A brief narrative (three to five sentences) capturing the session's overall character and any themes that felt significant. Write this immediately after the session ends, not the next morning.
A Fictional Example
Study: E-learning platform navigation test. Task: "Find a course you started last week and pick up where you left off."
Completion: Partial. Participant found the correct course but landed on the course overview page rather than the last lesson viewed.
Time on task: Approximately 2 minutes 40 seconds (no automation; timed manually).
Errors: Clicked "My Learning" first, then "Dashboard," then "Browse" before returning to "My Learning." Tried three navigation options before landing on the correct section.
Verbal commentary (think-aloud): "I know I was doing something last week... I feel like there should be a 'continue where I left off' thing somewhere? This feels like I have to remember where I was, which I don't."
Observation: Scrolled past the "Resume" button on the dashboard twice without registering it. Button is labeled in gray text at 12px, visually underweighted relative to the surrounding course cards.
SEQ score: 3 out of 7. Participant said, "It works, I guess, but I had to work for it."
This format makes it possible to aggregate findings across participants systematically. When you have eight sessions documented this way, finding the most frequent task failures is a matter of scanning the task log columns, not re-reading eight narrative summaries.
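If the task logs are kept in structured form (a spreadsheet export, or one JSON record per session), the aggregation step described above can be done mechanically. A hedged sketch, assuming each session is a list of task records with `task` and `completion` fields; the record shape and the sample data are illustrative, not from a real study.

```python
from collections import Counter

# Sketch: count non-completions per task across structured session logs.
# The record shape ("task" / "completion" keys) is an assumption, not a standard.
sessions = [
    [{"task": "resume-course", "completion": "partial"},
     {"task": "find-certificate", "completion": "yes"}],
    [{"task": "resume-course", "completion": "no"},
     {"task": "find-certificate", "completion": "yes"}],
    [{"task": "resume-course", "completion": "partial"},
     {"task": "find-certificate", "completion": "no"}],
]

failures = Counter(
    rec["task"]
    for session in sessions
    for rec in session
    if rec["completion"] != "yes"  # "no" and "partial" both count as non-completion here
)

# Most frequent task failures, highest first
for task, count in failures.most_common():
    print(f"{task}: {count} of {len(sessions)} sessions")
```

Whether "partial" counts as a failure is a judgment call that depends on how you defined partial completion in the protocol; whatever you decide, encode it once in the filter rather than deciding ad hoc per session.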
Stage 3: Building a Synthesis Report
What Synthesis Actually Is
Synthesis is the step where raw session data becomes research findings. It is also the step most researchers find hardest to document well, because synthesis is a process, not just a product. The temptation is to skip straight to the output, a slide deck or a report, without documenting the reasoning that got you there.
Documenting your synthesis process matters for two reasons. First, it makes your findings defensible. When a stakeholder asks "how do you know that?", your synthesis documentation is the answer. Second, it makes your research reusable. If someone on your team needs to revisit this study six months from now, a documented synthesis trail is infinitely more useful than a final slide deck.
Synthesis Report Structure
A research synthesis report should include:
Study overview section
- Research questions the study was designed to answer
- Methodology: number of sessions, participant criteria, session format, dates
- Limitations: what the study could not answer, known biases in the sample
Data summary
- Total sessions, total participants, key demographic breakdown
- Overview of tasks (usability) or topics (interview), whichever applies
- Raw data location (link to session notes, recordings if applicable, or note archive)
Findings section. This is the core. Each finding should be stated as a declarative sentence, not a category label. Not "navigation issues" but "Participants could not reliably locate the resume function because it is visually indistinct from surrounding elements." Each finding needs:
- Evidence: the specific data points, quotes, or observations that support it
- Frequency: how many participants showed this behavior or expressed this sentiment
- Severity (for usability studies): how much does this finding affect task completion?
- Implication: what does this finding suggest the team should investigate or change?
Recommendations section. Keep this separate from findings. Findings are what you observed. Recommendations are what you suggest. Conflating them makes the report harder to use: product teams may agree with a finding and disagree with the recommendation, or vice versa.
Open questions. Document what this study could not answer and what follow-up research would clarify. This is often more valuable than the recommendations section, because it shows the team what they are still operating without evidence for.
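The findings structure above can be kept honest with a lightweight record shape: one entry per finding, with its evidence and counts attached, so that "most participants" claims never drift away from their numbers. A sketch, with all field names and sample content as assumptions:

```python
# Sketch of a finding record that enforces the structure above: each finding
# carries statement, evidence, frequency, severity, and implication together.
# Field names and the sample evidence are illustrative, not a standard schema.

finding = {
    "statement": ("Participants could not reliably locate the resume function "
                  "because it is visually indistinct from surrounding elements."),
    "evidence": [
        "P03 scrolled past the Resume button twice (session notes, task 1)",
        'P08: "I feel like there should be a continue thing somewhere"',
    ],
    "frequency": {"observed": 7, "total": 9},
    "severity": "high",  # usability studies only
    "implication": "Investigate the visual weight of the Resume control.",
}

def frequency_phrase(f):
    """Render the count as report-ready text: counts, not impressions."""
    obs, total = f["frequency"]["observed"], f["frequency"]["total"]
    return f"{obs} of {total} participants"

print(frequency_phrase(finding))  # "7 of 9 participants"
```

Generating the frequency phrase from the stored counts, rather than typing "most participants" by hand, is a small discipline that pays off when a stakeholder asks "how many, exactly?"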
A Fictional Example
Study: Freelance invoicing workflow research (same study as Stage 1).
Finding 3: Freelancers experience payment tracking as emotionally, not just operationally, disruptive.
Evidence: P04: "I just hold my breath until I see the money." P07: "It's the worst part of the job. The doing is fine. The waiting to get paid is when I feel like I made a mistake going freelance." P11 described checking her bank account "probably twenty times" after sending an invoice.
Frequency: 8 of 12 participants described the post-invoice wait in language that indicated anxiety or uncertainty, not just inconvenience.
Implication: Payment status visibility is not a feature request from power users. It is an emotionally salient pain point for the majority of the target segment. This finding warrants investigation of notification and status-tracking design.
Notice that the finding names the emotional quality participants themselves voiced ("hold my breath," "the worst part of the job"), is supported by direct evidence, and is scoped by frequency before any recommendation appears.
Stage 4: Affinity Mapping Documentation
Why Affinity Maps Need Their Own Documentation
Affinity mapping (also called affinity diagramming) is the process of organizing individual observations, quotes, or data points into clusters that reveal patterns. It is one of the most powerful synthesis techniques in qualitative research, and one of the most poorly documented.
A physical affinity map on a whiteboard disappears when the workshop ends. A digital affinity map in a tool like FigJam or Miro is better, but it is still a visual artifact that is hard to interpret outside the context of the session. Without documentation, the reasoning behind your clusters (why certain notes belong together, what each cluster is named and why) is lost.
What to Document During and After Affinity Mapping
Before the session:
- The data set being used: which session notes, which quotes, which observations are being mapped
- The research questions you are using affinity mapping to answer
- Who is participating in the session and their roles
During the session:
- Photographs or screenshots of the map at each stage: before clustering, during clustering, after clusters are named
- A running log of cluster names as they emerge, with brief rationale notes ("we called this 'invisible progress' because three participants said they could not tell if their action had worked")
After the session:
- A written summary of each cluster: its name, the data points it contains (list them or link to them), and the team's interpretation of what this cluster represents
- Any clusters that were debated or split: document the disagreement, not just the resolution
- The hierarchy if you used one: which clusters are major themes and which are sub-themes
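If the post-session cluster summaries are kept as structured data rather than free text, they stay queryable long after the workshop: you can list clusters, count their data points, and trace each one back to its source notes. A minimal sketch, with all field names and sample IDs assumed for illustration:

```python
# Sketch: written cluster summaries as structured data, so each cluster can be
# listed, sized, and linked back to session notes after the workshop.
# Field names and note IDs are illustrative, not a standard affinity schema.

clusters = [
    {
        "name": "I don't know where I am",
        "data_points": ["P02-note-3", "P05-obs-1", "P09-note-7"],  # IDs into session notes
        "rationale": ("Data points are about continuity across sessions, "
                      "not single-session navigation."),
        "debated": "Proposed merge with 'progress tracking'; kept separate.",
        "parent_theme": "orientation",  # hierarchy, if one was used
    },
    {
        "name": "progress tracking",
        "data_points": ["P01-note-2", "P09-note-9"],
        "rationale": "Notes are about learning outcomes, not session continuity.",
        "debated": None,
        "parent_theme": "orientation",
    },
]

# Which cluster gathered the most data points?
largest = max(clusters, key=lambda c: len(c["data_points"]))
print(largest["name"], len(largest["data_points"]))
```

Recording the `debated` field even when its value is "no debate" preserves the distinction the text above insists on: the disagreement itself is part of the research record, not just its resolution.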
A Fictional Example
Study: E-learning platform research, affinity mapping session.
Cluster: "I don't know where I am." Data points: 14 notes across 8 sessions. Includes P02: "I opened the app and I couldn't remember where to go"; P05 clicking back to the homepage three times during task 2; P09: "There should be something that tells me what I was doing"; observer note from P11's session ("searched 'continue' in the in-app search bar before checking navigation").
Rationale: This cluster emerged quickly and had the most notes of any cluster in the map. It is distinct from "navigation is hard to understand" (a separate cluster) because these data points are specifically about continuity across sessions, not about single-session navigation. Participants are not confused about how to use the app; they are confused about where they left off.
Team debate: One researcher argued that this cluster should merge with "progress tracking." We kept them separate because "progress tracking" notes were about learning outcomes ("have I completed this? have I improved?") while "I don't know where I am" notes are about session continuity. The distinction matters for design implications.
This level of documentation turns an affinity mapping session from a whiteboard exercise into a defensible, reusable research artifact.
Stage 5: Insight Readouts
The Difference Between a Finding and an Insight
A finding is what happened in your data. An insight is what it means for the product. Findings are research outputs. Insights are the translation of those outputs into language that moves product teams toward decisions.
An insight readout (sometimes called a research readout or research presentation) is where your documentation work pays off. If you have documented your synthesis well, the readout almost writes itself: your findings become slides, your evidence becomes speaker notes, your recommendations become a discussion agenda.
Structuring the Readout Document
A written insight readout (separate from any presentation) should have:
Context section: Why this research was done, when, what questions it was designed to answer. One paragraph. This section saves you from re-explaining the study's origin every time someone reads the document six months later.
Method summary: Two to three sentences. Enough for a stakeholder to understand the data source without reading the full synthesis report.
Key insights: Three to five insights, no more. Each insight is one sentence. Below each insight: two to three data points as evidence, a severity or confidence rating if applicable, and a "so what" sentence connecting the insight to a product or design implication.
What we still do not know: The honest acknowledgment of what this study did not cover. Researchers who include this section build more credibility with product teams than researchers who imply their study answered every relevant question.
Recommended next steps: Separated from the insights. Specific and actionable, not vague.
A Fictional Example
Study: E-learning platform insight readout.
Insight 2: Users treat the dashboard as a starting point, not a home base.
Evidence: P03 navigated away from the dashboard within 10 seconds in all three tasks. P06 said "I always just search for what I want, the dashboard doesn't really help me." P08's session log shows zero dashboard interactions after the first 30 seconds.
Confidence: High. Consistent across 7 of 9 participants.
So what: The dashboard is absorbing design investment but not delivering navigation utility. The team should investigate whether the "Resume" and "Recommended" elements on the dashboard are solving a real user need or a product team assumption.
What we still do not know: Whether power users (participants with 50+ hours on the platform) use the dashboard differently from newer users. This study included only participants with 5-30 hours of usage history.
Common UX Research Documentation Mistakes
Mixing Observation and Interpretation in Session Notes
The single most damaging documentation habit is writing interpretations into session notes as if they are observations. "The participant was frustrated" is an interpretation. "The participant said 'this is so annoying' and closed the tab" is an observation. When interpretation appears in raw notes, it is impossible to separate what the participant actually did and said from the researcher's in-session reading of it. Synthesis becomes contaminated before it starts.
Using Generic Cluster Names in Affinity Maps
Naming a cluster "navigation issues" or "user needs" means the cluster could contain almost anything. Good cluster names are specific enough to be wrong: "users cannot locate the resume function" is a cluster name that tells you exactly what is in the cluster and could be disproved by evidence. "Navigation issues" cannot be disproved by anything. The specificity of your cluster names determines the specificity of your findings.
Reporting Frequency Without Evidence
Saying "most participants" or "many users" without the underlying data is one of the most common readout weaknesses. "Seven of twelve participants" is a finding. "Most participants" is an impression. When your findings are written with counts and evidence, they survive stakeholder scrutiny. When they are written with hedged language, they invite reinterpretation by everyone in the room.
Writing Recommendations Into Findings
A finding is what the data showed. A recommendation is what you think should happen as a result. When these are combined ("we found that users struggle with the dashboard, so we should redesign it"), the team cannot disagree with the recommendation without appearing to reject the finding. Keeping them separate is not just structural neatness. It creates space for productive disagreement about the right response to a genuine finding.
Losing the Synthesis Trail
The most common long-term documentation failure is delivering a polished readout with no record of the synthesis process that produced it. Six months later, no one can explain why a particular finding was prioritized, what evidence was set aside, or how the affinity clusters were constructed. Building a synthesis trail, even a lightweight one, is the difference between research that can be defended and research that has to be re-done.
Documentation Checklist for UX Research Studies
User Interview Notes
- Participant ID used (not real name)
- Session header complete: date, interviewer, observer, participant attributes
- Verbatim quotes captured in a dedicated section, marked as direct quotes
- Observations kept separate from interpretations in the notes
- Emerging follow-up questions logged for future sessions
- Notes filed in a consistent location immediately after the session
Usability Session Notes
- Task log includes one entry per test task
- Each task entry has: completion status, time on task, errors, verbal commentary, observations
- Post-task ratings captured with raw scores
- End-of-session summary written immediately after the session
- Prototype version and device context documented
Synthesis Report
- Study overview includes research questions, methodology, and limitations
- Each finding stated as a declarative sentence, not a category label
- Each finding has evidence, frequency count, and implication
- Recommendations appear in a separate section from findings
- Open questions documented
Affinity Mapping
- Data set and research questions documented before the session
- Photographs or screenshots captured at each stage of the map
- Each cluster has a written summary with: name, data points, rationale
- Debated clusters documented with the disagreement noted, not just the outcome
- Cluster hierarchy (major themes vs. sub-themes) made explicit
Insight Readout
- Context section explains why the research was done
- Key insights limited to three to five, each stated in one sentence
- Each insight has specific evidence (counts, quotes), not hedged language
- "What we still do not know" section included
- Recommendations clearly separated from insights
Good documentation does not slow down research. It is what makes research usable past the week it was conducted. A study with thorough session notes, a documented synthesis trail, and a structured readout stays useful for months. A study that exists only as a presentation deck is useful until the presenter leaves the room.
If you are managing research documentation across multiple studies, NotuDocs lets you build reusable templates for each stage of the research process, so your interview note structure, synthesis report format, and insight readout template stay consistent across every project without rebuilding them from scratch.


