Page Laubheimer
Senior UX Research Leader

Research that
moves products
forward.

Nearly 15 years leading mixed-methods UX research across complex products and domains — from healthcare to AI to developer tools. I translate messy, ambiguous user behavior into decisions teams can act on. Now seeking an embedded team where research drives real outcomes.

15
Years of Research
Qual · Quant · Mixed-Methods
15+
Countries
Research conducted globally
25
Studies Per Year
At peak: 15–25 annually
50+
VP-Level Presentations
Roadmap-shaping findings

I make complexity
understandable.

I'm a senior UX researcher with nearly 15 years of experience leading research across some of the most complex, high-stakes digital products in the world — spanning healthcare, eCommerce, developer tools, enterprise SaaS, and AI.

My work spans the full research spectrum: from in-depth contextual inquiry with expert users in specialized domains to large-scale quantitative benchmarking studies analyzed in R — often within the same year.

What makes me different isn't just methodological fluency. It's that I know how to turn findings into action. I've presented to VP-level stakeholders at 50+ organizations, built research programs from scratch, mentored junior researchers through their first independent studies, and developed training courses that have shaped how thousands of practitioners think about research.


After more than 10 years consulting, I'm ready to go deep. I want to be embedded in a product team, see my research implemented, own a research practice over time, and build something lasting — whether as a staff-level IC or a research manager growing the next generation of researchers.

I'm based in Portland, Oregon, and hold an M.S. in Library Science from UNC Chapel Hill — a background that fundamentally shaped how I think about information architecture, user mental models, and knowledge organization.

Research that changed
what teams built.

Three studies across different domains — each illustrating a different dimension of how I work.

Quantitative Benchmarking
Stakeholder Verdict
"This is an exceptional piece of work. It will define our roadmap for the next two years."
— VP of Product
⚙️ Developer Tools 📊 40-participant quant study 🔬 + Competitive analysis

Benchmarking Usability for a Low-Code IDE

A developer-tools client needed to measure their low-code IDE's user experience and build the business case for UX investment. Mid-project, the platform turned out to be incompatible with unmoderated testing tools, and the legal team blocked recruiting existing users — requiring a full study redesign without losing momentum or rigor.

40
Participants in quant study
2yr
Roadmap shaped by findings
3
Researchers trained on new protocol
Quant Usability Testing · Competitive Benchmarking · SUS & UMUX-Lite · Attitudinal Survey · Statistical Analysis in R · Subgroup Analysis

The Challenge

The client's IDE was incompatible with unmoderated testing platforms — a limitation discovered after the project began. Their legal team also prohibited recruiting existing users, forcing a pivot in the study population. And an open-ended IDE poses an inherently difficult task-definition problem: what counts as a representative workflow?

My Approach

  • Pivoted to moderated testing, which was not in the original scope, and trained junior researchers on a facilitation protocol I developed to ensure consistency across 40 sessions
  • Reframed the population: shifted from existing users to professional developers exploring low-code for the first time, aligning with the product's growth-phase goals
  • Ran a collaborative workshop with the client team to surface the top developer workflows for the first hour of use — used these to define representative tasks
  • Coordinated a zero-state test environment with the technical team so every participant started with the same conditions
  • Led a qualitative pilot (n=10) to surface mental models and pain points; it also served as a live training ground for junior researchers before the quant phase

Research Process

  • Qualitative pilot study: 10 participants, mental model elicitation + think-aloud
  • Quantitative study: 40 participants, task success, time-on-task, error rates, SUS scores, desirability word selection
  • Attitudinal survey: competitive benchmarking against comparable platforms
  • Statistical analysis in R: subgroup analysis across developer experience levels, tool preferences, attitudinal clusters
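For readers unfamiliar with SUS: it follows a fixed scoring formula, converting ten 1–5 Likert responses to a 0–100 score. A minimal illustrative sketch (in Python rather than the R used in the study, and not code from the engagement itself):

```python
def sus_score(responses):
    """Compute one participant's System Usability Scale score (0-100).

    `responses` is a list of ten 1-5 Likert ratings in questionnaire
    order (item 1 first).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded: contribute (r - 1).
        # Even-numbered items are negatively worded: contribute (5 - r).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5


# A participant answering 4 on every positive item and 2 on every
# negative item scores 75.0.
print(sus_score([4, 2] * 5))  # 75.0
```

Per-participant scores like these are then averaged and compared against competitive benchmarks, as in the study above.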

Outcomes & Impact

  • Established baseline UX metrics and a repeatable benchmarking framework for tracking future improvements
  • Delivered a nuanced competitive analysis triangulating SUS scores, behavioral data, and attitudinal responses
  • Roadmap prioritization directly influenced: redesigned feedback system and clearer terminology shipped
  • Earned the VP of Product verdict quoted above — findings that set roadmap direction for two years
  • Research operations benefit: refined junior researcher training process now used across NN/g client engagements

This project is a good example of how I approach constraints: not as blockers but as design problems. Every limitation — the testing platform, the recruitment ban, the open-ended interface — had a methodological solution that still served the research goals. The result was more rigorous, not less.

Consultative Research & Culture Change
Scale of Impact
Redesign deployed to 18 registry locations — on a 20-year-old platform that had never seen UX research.
🏥 Healthcare / Public Health 🔄 Iterative research & design 👥 Engineering culture change

Transforming a Public Health Cancer Data Platform

A 20-year-old cancer data reporting system used by trained ICD coders had never undergone usability research. The engineering-led team was skeptical of UX, PHI constraints ruled out conventional research approaches, and the domain was highly specialized. I led the end-to-end research and redesign — and left behind a team that now runs research on their own.

18
Registry locations on deployment
20yr
Platform age — first UX research ever
1
Day bootcamp built for product team
Contextual Inquiry · Stakeholder Interviews · Iterative Prototype Testing · Heuristic Evaluation · Workshop Facilitation · Simulated Data (PHI workaround)

The Challenge

Three simultaneous hard problems: (1) A domain so specialized that observing real work meant understanding ICD coding at a clinical level. (2) PHI constraints that made conventional contextual research legally off-limits. (3) An engineering-led product team that didn't believe UX would tell them anything useful. Stakeholder buy-in was far from guaranteed going in.

My Approach

  • Built domain expertise first: invested heavily in relationships with ICD coding experts and co-created the research approach with them rather than around them
  • PHI workaround: used contextual inquiry for real-world observation, then switched to carefully constructed simulated data for iterative prototype testing — maintaining research validity while staying compliant
  • Turned skeptics into champions: brought engineers into the research process directly, making them observers and collaborators rather than audience members
  • Built a one-day research bootcamp for the product team — they left able to participate meaningfully in future studies
  • Mentored a junior researcher through their first end-to-end client engagement

Research Process

  • Stakeholder interviews to scope the project and map organizational priorities
  • Contextual inquiry: observed real coders working in their actual environment
  • Heuristic evaluation of the existing interface to catalog friction points
  • Iterative prototype testing using simulated PHI-compliant data across multiple design rounds
  • Workshop facilitation to align the team on findings and co-prioritize solutions

Outcomes & Impact

  • New visual design system replacing a 20-year-old interface — shipped to 18 registry locations nationally
  • Redesigned workflows lowered reliance on working memory and measurably reduced error rates
  • Post-launch feedback showed significant improvements in user satisfaction across the registry network
  • Produced a 3–5 year strategic roadmap for continued UX investment
  • Cultural shift: a previously skeptical engineering org now runs iterative UX research as standard practice

The most durable impact here wasn't the redesign — it was the culture change. Turning an engineering-led team into genuine UX advocates required meeting them where they were, not where I wished they were. That's a skill I've developed across many client engagements: making the research process legible and valuable to people who didn't ask for it.

Independent Research Initiative
Research Focus
Investigating how interaction design patterns shape user trust in generative AI — building new methods where established ones fall short.
🤖 Generative AI / LLMs 🔬 Novel methodology development 📝 Publication track

AI & User Trust: Independent Research Study

AI interfaces are proliferating faster than the field's ability to evaluate them. Traditional usability heuristics don't map cleanly onto probabilistic, open-ended systems. I launched this independent initiative to fill the methodological gap — developing new frameworks for studying trust calibration, mental model formation, and prompt-generation strategies in generative AI.

New
Research methods for AI evaluation
Mixed
Usability testing + longitudinal diary
Pub.
Series of articles in development
Mental Model Elicitation · Think-Aloud Protocol · Diary Studies · Controlled Prompting Scenarios · Novel Method Development

The Challenge

Generative AI poses fundamental research challenges: output is probabilistic and variable, making consistent stimuli difficult to construct. User mental models of AI are often wildly inaccurate. And much of the most consequential design territory — explainability features, confidence indicators, failure-mode transparency — involves design patterns that don't yet widely exist. How do you study user reactions to interfaces that haven't been built yet?

My Approach

  • Developed controlled prompting scenarios to give participants consistent experiences despite the probabilistic nature of AI output
  • Combined mental model elicitation with think-aloud usability sessions to capture both beliefs and behavior
  • Designed forward-looking research protocols that test reactions to hypothetical explainability features and confidence indicators
  • Planned a longitudinal diary phase to capture how trust in AI features changes over multiple weeks of real-world use
  • Collaborated with a co-researcher, using structured alignment checkpoints to keep session protocols consistent

Research Design

  • Phase 1 (current): Usability testing and mental model elicitation — foundational trust beliefs and prompt-generation strategies
  • Phase 2 (planned): Longitudinal diary study — how users integrate AI features into real information-seeking over time
  • Methodological contribution: sample-size guidelines for AI UX research, handling output variability, designing stimuli for non-deterministic systems
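One concrete example of the statistics behind small-sample guidelines: task success rates from usability-sized samples are typically reported with an adjusted-Wald (Agresti-Coull) confidence interval rather than a raw percentage, because the plain interval misbehaves near 0% and 100%. A hypothetical sketch, not code from this study:

```python
import math


def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a
    binomial task success rate; z=1.96 gives a ~95% interval.

    Recommended for the small samples typical of usability testing.
    """
    n_adj = n + z ** 2                      # inflate n by z^2
    p_adj = (successes + z ** 2 / 2) / n_adj  # shift p toward 0.5
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)


# Hypothetical example: 32 of 40 participants complete a task.
lo, hi = adjusted_wald_ci(32, 40)
print(f"success rate 80%, 95% CI roughly {lo:.0%} to {hi:.0%}")
```

Intervals like this make clear to stakeholders how much uncertainty a 40-participant estimate actually carries.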

Anticipated Outputs

  • Design principles for interaction patterns that foster appropriate trust calibration
  • Mental model taxonomy: common user beliefs about AI that designers need to design for (or against)
  • Research methodology contributions: reusable frameworks for AI UX studies
  • Publication series building on my existing NN/g AI writing (hallucinations, explainability, agentic interfaces)

This project sits at the edge of what UX research currently knows how to do — which is exactly why I find it compelling. The methodological challenges here are the same ones in-house researchers at AI companies are wrestling with right now. I'm working on them independently because I think the field needs better answers than it currently has.

Writing the field
reads.

80+ articles and videos published at Nielsen Norman Group. 1.1 million unique views. 11 professional training courses delivered across 6 continents.

11 Courses Created & Taught

Designed and delivered globally to 5,000+ UX professionals in-person and virtually across North America, Europe, Asia, and the Middle East.

Designing AI Experiences · Complex Application Design · Information Architecture · Analytics & UX · Survey Design · Measuring UX & ROI · Personas · Lean UX & Agile UX · Team of One · Application Design · UX Deliverables

Industry-Defining Reports

Authored NN/g's foundational eCommerce UX research series — used as the standard reference by product and design teams worldwide.

  • Ecommerce UX: Shopping Carts, Checkout & Registration
  • B2B Website Usability for Converting Leads
  • Effective Agile UX Product Development
  • Agentic AI UI Framework (in development)

15 years of building
research that lands.

Senior UX Research Consultant
2025 – Present
Nielsen Norman Group & Select Clients
  • Conduct strategic research for eCommerce, healthcare, and AI platforms on a consulting basis
  • Deliver global training seminars on eCommerce UX, AI experience design, and research methodology
  • Advise enterprise product teams on research strategy and insight operationalization
Senior User Experience Specialist
2020 – 2025
Nielsen Norman Group
  • People manager: conducted year-end evaluations, compensation conversations, development plans, and guided direct reports through PIP processes; served on hiring committee
  • Team lead across analytics, exam systems, and learning technology functions in a matrixed org structure
  • Led 15–25 research studies per year across eCommerce, healthcare, developer tools, and enterprise SaaS using qual, quant, and mixed-methods approaches
  • Authored NN/g's definitive eCommerce UX series (Shopping Carts, Checkout & Registration) — industry-standard references used by product teams globally
  • Pioneered AI experience research at NN/g: led studies on trust calibration, transparency, and failure modes; created the Designing AI Experiences training course
  • Presented strategic research to VP-level stakeholders at 50+ organizations, directly influencing product roadmaps and feature prioritization
  • Developed Agentic AI UI Framework — research-backed guidelines for designing agentic AI interfaces
User Experience Specialist
2015 – 2020
Nielsen Norman Group
  • Designed and executed end-to-end research programs across 15+ countries in diverse industries
  • Developed and taught NN/g training courses attended by thousands of UX professionals globally
  • Published 80+ articles and videos with 1.1M+ unique views — establishing thought leadership on mobile UX, data visualization, information architecture, and AI design
  • Pioneered approaches for evaluating cross-device experiences, complex enterprise workflows, and emerging AI interfaces
  • Led international research engagements across North America, Europe, Asia, and the Middle East
Project Manager & UX Strategist
2012 – 2015
Newfangled · Durham, NC
  • Managed end-to-end digital projects for B2B clients: UX design, content strategy, IA, and development workflows
  • Led stakeholder interviews, competitive analyses, and user research to inform website redesigns
  • Coordinated cross-functional teams of designers, developers, and content specialists

Education & recognition.

🎓
University of North Carolina at Chapel Hill

M.S. in Library Science

Focus on information organization, retrieval, and user behavior — foundational training in research methodology, taxonomy design, and human-information interaction.

🎤
NN/g UX Conference

Regular Speaker & Workshop Leader

Delivered research-backed presentations on eCommerce UX, AI experience design, and research methodology to audiences of 100–500+ practitioners across US and international events.

🌍
Global Research Leadership

15+ Countries, 6 Continents

Led research engagements and training across North America, Europe, Asia, and the Middle East — adapting methods for cultural and linguistic contexts.

🔬
Methodology Innovation

Frameworks Built for the Field

Created the CASTLE framework for workplace UX measurement, developed the Agentic AI UI Framework, and authored guidance on the statistical foundations of 40-participant quantitative usability studies.

Let's talk about
your research needs.

I'm actively exploring full-time Staff / Principal Researcher and Research Manager roles. If you're building a research practice and want someone who can both do the work and grow the team, I'd love to connect.

Currently Seeking

Open to full-time roles in the US (Portland-based, open to remote or hybrid).

Staff / Principal UX Researcher
Research Manager / Director
Product Owner / Product Manager
Consulting / Advisory (selective)

Industries of greatest interest: AI / ML products, healthcare, enterprise SaaS, developer tools, eCommerce — anything with genuine complexity and a team that takes research seriously.