Synthesize User Research Into Findings Your Team Will Act On
You did the interviews. Now eight hours of notes are sitting in a folder. Paste them into Claude and get structured insights with supporting evidence, design implications, and an executive summary your team will actually use.
How it works
Step 1 of 6
You ran the interviews. The notes are sitting untouched.
Five sessions. Eight hours. A folder full of raw transcripts that nobody has looked at since the calls ended.
Step 2 of 6
You paste the notes into Claude.
All of it. Transcripts, observation notes, survey responses, whatever you captured. Raw is fine.
Step 3 of 6
Claude asks what question the research was meant to answer.
Not everything you heard is equally relevant. The research question focuses the synthesis.
Step 4 of 6
Claude writes a structured findings report.
3 to 5 themes, each with supporting quotes, an interpretation, and a design implication. Plus an executive summary.
Step 5 of 6
You dispute, add nuance, cut the nonsense.
Claude will sometimes over-generalize or miss context you have. Push back. The conversation is the editor.
Step 6 of 6
You walk into design review knowing exactly what to say.
Not 'users said a lot of things.' Real findings. Specific evidence. Clear implications.
The prompt
Paste this into Claude.ai with your notes in the placeholder. The more raw your notes, the better; Claude handles messy input well.
I need to synthesize user research into a findings report.

The research question we were trying to answer: [one sentence]

Here are my notes:
[paste interview transcripts, session notes, survey responses, or a plain-English summary of what you observed and heard]

Please:
1. Group what you find into 3 to 5 major themes
2. For each theme, write: what users said (with 1 to 2 direct quotes if they exist), what it means (your interpretation), and what we should consider doing about it
3. Add an executive summary at the top (3 to 4 sentences: what we learned and what it means for our decisions)
4. Flag anything where different users said contradictory things. Don't resolve those tensions; just surface them clearly.

Keep it to one page per theme maximum. This needs to be shareable with a team that won't read long documents.
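If you run this synthesis after every research round, you can also fill the same template and send it programmatically. Here is a minimal sketch: the file layout, helper names, and model string are assumptions, and the commented-out API call needs the `anthropic` package plus an `ANTHROPIC_API_KEY` in your environment to actually run.

```python
from pathlib import Path

# Template mirroring the prompt above; {question} and {notes} get filled in.
PROMPT_TEMPLATE = """\
I need to synthesize user research into a findings report.
The research question we were trying to answer: {question}
Here are my notes:
{notes}
Please:
1. Group what you find into 3 to 5 major themes
2. For each theme, write: what users said (with 1 to 2 direct quotes if they
   exist), what it means (your interpretation), and what we should consider
   doing about it
3. Add an executive summary at the top (3 to 4 sentences)
4. Flag anything where different users said contradictory things; surface the
   tensions without resolving them.
Keep it to one page per theme maximum."""


def build_prompt(question: str, note_paths: list[Path]) -> str:
    """Concatenate raw note files (transcripts, survey exports) into one prompt.

    Files are separated with '---' so Claude can tell sessions apart.
    """
    notes = "\n\n---\n\n".join(p.read_text(encoding="utf-8") for p in note_paths)
    return PROMPT_TEMPLATE.format(question=question, notes=notes)


# Sending it with the Anthropic Python SDK (illustrative, not executed here):
# import anthropic
# client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
# reply = client.messages.create(
#     model="claude-sonnet-4-20250514",  # assumed model name; use a current one
#     max_tokens=4096,
#     messages=[{"role": "user", "content": build_prompt(question, paths)}],
# )
# print(reply.content[0].text)
```

Raw is still fine here: the point of the helper is only to gather files, not to clean them.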
What you get back
Executive summary
Users can complete the core task, but not without anxiety. The friction isn't in the flow; it's in the feedback: they don't know if what they did worked until well after they expected confirmation. Three of five participants described feeling like they "weren't sure if it went through." The clearest design implication: earlier, more explicit confirmation at every consequential action.
Theme 1: Confirmation anxiety is the dominant experience
What users said:
"I submitted it and then just sat there waiting. I didn't know if I should refresh or if it was already done." (P2)
"There was a spinner, but it went away really fast and I still wasn't sure." (P4)
What it means: Users aren't failing to complete the task. They're completing it and then second-guessing themselves. The problem isn't discoverability or flow; it's the gap between action and confirmed outcome.
Design implication: Introduce a persistent success state that stays visible until the user navigates away. The spinner resolving is not enough.
Theme 2: Users don't read the confirmation email
What users said: All five participants ignored or forgot about the confirmation email. Two checked it later only when they wanted proof of completion.
What it means: Email is a receipt, not a primary signal. Designing for email confirmation as the main source of truth sets up friction for users who act immediately.
Design implication: In-product confirmation needs to be the primary signal. Email is secondary.
Theme 3: Contradictory findings on error recovery
Tension to surface: P1 and P3 said error messages were clear and they recovered easily. P4 said the error message on the same step "didn't tell me what to fix." This may be a prior-experience effect: P1 and P3 had used similar tools before; P4 had not. Worth a follow-up round with less experienced users before assuming error messages are working.
Variations
Same shape, different research types.
I ran a usability test on [product or feature]. Here are my observation notes: [paste notes] The task participants were trying to complete: [describe the task] For each participant, note whether they succeeded, where they got stuck, and what they said while working through it. Then synthesize across participants: what were the 3 most common friction points, and what's the severity of each (did it block completion, slow them down, or just cause confusion)?
I've gathered public reviews and feedback about [competitor or product category]. Here's what I collected: [paste review excerpts, support forum posts, Reddit threads, app store reviews] Synthesize these into: the top 3 things users love about these products, the top 3 frustrations or gaps, and any patterns that suggest an unmet need our product could address. Be honest about the limitations: this is public review data, not a controlled study. Flag where you're inferring intent versus quoting directly.
My research findings contradict a strong team belief about [assumption or hypothesis]. Here's what the team believes: [describe the assumption] Here's what I found: [describe what your research surfaced] Help me write a findings report that presents this honestly without triggering defensiveness. I want to lead with what we learned, not with "we were wrong." Suggest how to frame the implication section so it reads as an invitation to reconsider, not an indictment.
Tips and gotchas
Start with the contradictions, not the consensus. The themes where all users agreed are rarely where the insight lives. The contradictions are where your research has something real to say.
Keep findings and recommendations separate. "Users struggled to find the submit button" is a finding. "We should make the button larger" is a recommendation. Claude will sometimes blend these: push back and ask it to separate them. They belong in different sections because different people make those decisions.
Three interviews is not enough for "users feel." If your sample is small, tell Claude and ask it to caveat accordingly. "Two of three participants said" is honest. "Users feel" from three people is not. The report will be more credible if it's accurate about its own limits.
Granola captures the raw material. If you're running meetings or interviews and want automatic transcripts to paste into this workflow, Granola records and summarizes sessions in the background. The transcript is what you paste here.