Project snapshot
• My role: UX Research
• Team: Research, Product Strategy, AI Product Team
• Timeline: Feb – Mar 2024
• Platforms: Developer tools, Gen AI APIs, infrastructure platforms
• Tools: Dscout, Figma, Miro, Google Workspace
Context
As Generative AI was becoming part of mainstream development stacks, there was still little clarity on how developers actually worked with these technologies beyond headlines and product demos. The team wanted to move past assumptions and understand, at a system level, how developers were integrating Gen AI into their applications, what was working, and where friction still remained.
I led the end-to-end research study, partnering with PMs, AI platform teams, and research ops. The goal was to generate actionable insights that could guide both product direction and infrastructure investments, ultimately helping the team design better, more developer-centered AI tooling.
The challenge
While many companies were investing heavily in Gen AI, there was still limited understanding of the real developer experience behind adoption, and the team had identified several gaps in its understanding that needed first-hand evidence to fill.
Through 60-minute qualitative interviews with web and mobile developers, we collected first-hand narratives on both the technical and cognitive frictions developers faced. Patterns began to emerge around configuration complexity, documentation gaps, testing variability, and a heavy reliance on manual customization for use-case-specific models.
How might we...
Research & Discovery
The study combined qualitative depth with pattern-driven synthesis to capture how developers were actually navigating Gen AI integration. We recruited a diverse sample of 20 developers across industries, seniority levels, and stages of AI maturity, from early adopters to teams already scaling AI-powered products.
Methods applied:
• 60-minute in-depth interviews (remote, moderated)
• Task flow deconstruction of actual implementation workflows
• Pain point mapping using journey audits
• Cross-case behavioral analysis for pattern synthesis
Through this approach, we uncovered three distinct developer archetypes, each with its own mindset and needs.
Key pain points included:
• Heavy trial-and-error loops during model tuning and evaluation.
• Fragmented documentation across models, APIs, and use cases.
• Limited sandbox environments for safe experimentation.
• Lack of accessible benchmarking or performance guidance for domain-specific models.
Using friction mapping and behavioral audits, we identified both short-term design fixes and longer-term product opportunities to support more intuitive Gen AI integration across developer segments.
Strategy & Ideation
Building on the distinct mindsets identified, we structured the findings into an actionable adoption framework that mapped how needs and friction evolve across key integration stages. Rather than treating pain points as isolated usability fixes, we framed the problem as one of progressive complexity management.
This led us to define four priority opportunity spaces, each targeting an integration stage where friction was most acute.
Each recommendation was carefully balanced to preserve user autonomy while introducing scaffolding where friction was highest. These became the foundation for internal product briefs used to guide roadmap discussions.
Design & Execution
While the team didn’t produce high-fidelity UI at this stage, the research was translated into design enablement assets that allowed product and design teams to explore solutions grounded in validated user behavior.
• Flow maps illustrating simplified developer onboarding, evaluation journeys, and safe experimentation loops.
• Experience blueprints outlining how support content, evaluation tools, and configuration wizards could reduce friction at different integration stages.
• Behaviorally informed design principles to guide future interaction design decisions, balancing flexibility with scaffolding where needed.
Rather than prescribing fixed interaction patterns, these assets empowered teams to experiment within a shared, research-backed structure, ensuring alignment between user needs, technical feasibility, and product strategy.
These assets were delivered alongside the adoption framework and friction maps, giving the product, AI platform, and design teams a concrete, research-backed starting point for the next stage of solution exploration.
Testing & Iteration
Since the study focused on early-stage product strategy, iteration centered on internal alignment and validation loops rather than direct usability testing.
We conducted multiple feedback sessions with product managers, AI engineers, and design leads, using the adoption framework and flow models as facilitation tools to test for feasibility, completeness, and alignment with known technical constraints. These conversations surfaced edge cases related to:
• Variability of developer expertise across different industries.
• Trade-offs between opinionated onboarding and preserving developer flexibility.
• Technical dependencies that could complicate safe sandboxing approaches.
As a result, we refined several of the early recommendations; for example, we shifted from fixed onboarding templates to more modular, opt-in integration scaffolds that could adapt to different user profiles and use cases.
This iterative refinement ensured that the strategic direction remained grounded, actionable, and adaptable as product exploration continued.
Outcomes & Impact
The study gave the product team a much sharper lens on how developers experience Gen AI beyond initial integration, revealing where platform design needed to shift from simply providing access to actively supporting adoption. The behavioral segmentation and adoption framework established a shared decision-making foundation that continues to guide both product and infrastructure conversations.
Learnings
Working in an emerging space like Gen AI challenged me to structure research while both the technology and its use cases were still rapidly evolving. The experience strengthened my ability to navigate strategic ambiguity while delivering actionable frameworks that help teams make early product decisions.