Project snapshot

• My role: UX Research
• Team: Research, Product Strategy, AI Product Team
• Timeline: Feb - Mar 2024
• Platforms: Developer tools, Gen AI APIs, Infrastructure Platforms
• Tools: Dscout, Figma, Miro, Google Workspace

Context

As Generative AI was becoming part of mainstream development stacks, there was still little clarity on how developers actually worked with these technologies beyond headlines and product demos. The team wanted to move past assumptions and understand, at a system level, how developers were integrating Gen AI into their applications, what was working, and where friction still remained.

I led the end-to-end research study, partnering with PMs, AI platform teams, and research ops. The goal was to generate actionable insights that could guide both product direction and infrastructure investments, ultimately helping the team design better, more developer-centered AI tooling.

Guiding themes for the study:
• What's working
• Go beyond assumptions
• Possible frictions
• Understand the integration
• Lack of real clarity

The challenge

While many companies were investing heavily in Gen AI, there was still limited understanding of the real developer experience behind adoption. The team identified gaps around:

• Lack of clear visibility into end-to-end developer workflows when integrating Gen AI.
• Uncertainty about technical blockers, learning curves, and real-world use cases across industries.
• Few data points on what infrastructure choices (APIs, SDKs, frameworks, databases) were actually being used, and why.

Through 60-minute qualitative interviews with web and mobile developers, we collected first-hand narratives on both the technical and cognitive frictions developers faced. Patterns began to emerge around configuration complexity, documentation gaps, testing variability, and the high dependency on manual customization for use case-specific models.
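
To make that configuration friction concrete, here is a minimal, purely illustrative sketch of the kind of per-use-case tuning participants described. The client, model identifier, and parameter defaults are hypothetical and do not reference any specific provider SDK; they simply mirror the generation settings developers reported adjusting by hand.

```python
# Illustrative only: a hypothetical Gen AI text-generation call, sketching the
# configuration surface participants described tuning manually per use case.
# Parameter names mirror common Gen AI APIs but target no specific SDK.
from dataclasses import dataclass

@dataclass
class GenerationConfig:
    model: str = "general-purpose-llm"   # hosted model id (hypothetical)
    temperature: float = 0.7             # randomness; tuned by trial and error
    top_p: float = 0.95                  # nucleus sampling cutoff
    max_output_tokens: int = 512         # response length budget
    stop_sequences: tuple = ()           # use-case-specific stop tokens
    system_prompt: str = ""              # behavioral framing, hand-crafted per domain

def generate(prompt: str, config: GenerationConfig) -> str:
    """Stand-in for a hosted model call; a real integration would send
    `config` plus `prompt` to a provider API and parse the response."""
    return f"[{config.model} @ temp={config.temperature}] response to: {prompt!r}"

# A support-chat use case and a summarization use case each end up with their
# own hand-tuned config, which is where much of the reported friction lived.
support_cfg = GenerationConfig(temperature=0.3, system_prompt="You are a support agent.")
summarize_cfg = GenerationConfig(temperature=0.2, max_output_tokens=256)

print(generate("Where is my order?", support_cfg))
print(generate("Summarize this ticket thread.", summarize_cfg))
```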

How might we...

Simplify early-stage adoption for developers without specialized ML backgrounds?

Reduce the configuration and trial-and-error burden during model integration?

Research & Discovery

The study combined qualitative depth with pattern-driven synthesis to capture how developers were navigating Gen AI integration at the time. We recruited a diverse sample of 20 developers across industries, seniority levels, and stages of AI maturity, from early adopters to teams already scaling AI-powered products.

Methods applied:
• 60-minute in-depth interviews (remote, moderated)
• Task flow deconstruction of actual implementation workflows
• Pain point mapping using journey audits
• Cross-case behavioral analysis for pattern synthesis

Through this approach, we uncovered three developer archetypes:

• Explorers: experimenting, testing, often overwhelmed by tooling choices.
• Integrators: actively embedding models into production apps but facing scaling and testing gaps.
• Optimizers: refining models and pipelines but dealing with edge cases and model governance.

Key pain points included:
• Heavy trial-and-error loops during model tuning and evaluation (sketched below).
• Fragmented documentation across models, APIs, and use cases.
• Limited sandbox environments for safe experimentation.
• Lack of accessible benchmarking or performance guidance for domain-specific models.
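
The trial-and-error loop behind the first pain point can be pictured as a manual grid search over prompts and sampling settings, judged by ad hoc spot checks rather than a shared benchmark. The sketch below is an assumption-laden illustration of that loop; the prompts, the acceptance check, and the `call_model` stand-in are hypothetical.

```python
# Illustrative only: the ad hoc evaluation loop participants described,
# sweeping prompts and sampling settings and eyeballing outputs, because no
# shared benchmark or sandbox existed for their domain. Names are hypothetical.
from itertools import product

prompt_variants = [
    "Summarize the ticket in one sentence.",
    "Summarize the ticket for a non-technical reader.",
]
temperatures = [0.2, 0.5, 0.8]

def call_model(prompt: str, temperature: float) -> str:
    """Stand-in for a hosted model call in a real integration."""
    return f"(temp={temperature}) summary for: {prompt}"

def looks_acceptable(output: str) -> bool:
    """Hand-rolled spot check standing in for the missing benchmark;
    in practice this was often a manual read-through of sample outputs."""
    return len(output) < 200

# Manual grid search: every combination is run, inspected, and re-run after
# each prompt tweak, which is the trial-and-error loop the study surfaced.
for prompt, temp in product(prompt_variants, temperatures):
    output = call_model(prompt, temp)
    status = "OK  " if looks_acceptable(output) else "FAIL"
    print(f"{status} temp={temp} prompt={prompt!r}")
```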

Using friction mapping and behavioral audits, we identified both short-term design fixes and longer-term product opportunities to support more intuitive Gen AI integration across developer segments.

Strategy & Ideation

Building on the distinct mindsets identified, we structured the findings into an actionable adoption framework that mapped how needs and friction evolve across key integration stages. Rather than treating pain points as isolated usability fixes, we framed the problem as one of progressive complexity management.

This led us to define four priority opportunity spaces:

1. Lowering early setup barriers with pre-built integration kits tailored to common use cases.
2. Providing guided evaluation pathways to help developers compare model suitability for domain-specific needs.
3. Consolidating fragmented documentation and resources into unified, cross-platform knowledge centers.
4. Enabling safe experimentation environments to de-risk early iteration cycles before production deployment.

Each recommendation was carefully balanced to preserve user autonomy while introducing scaffolding where friction was highest. These became the foundation for internal product briefs used to guide roadmap discussions.

Design & Execution

While the team didn’t produce high-fidelity UI at this stage, the research was translated into design enablement assets that allowed product and design teams to explore solutions grounded in validated user behavior.

• Flow maps illustrating simplified developer onboarding, evaluation journeys, and safe experimentation loops.
• Experience blueprints outlining how support content, evaluation tools, and configuration wizards could reduce friction at different integration stages.
• Behaviorally informed design principles to guide future interaction design decisions, balancing flexibility with scaffolding where needed.

Rather than prescribing fixed interaction patterns, these assets empowered teams to experiment within a shared, research-backed structure, ensuring alignment between user needs, technical feasibility, and product strategy.

These assets were delivered alongside the internal product briefs, giving product and design teams actionable direction for the next stages of the Gen AI developer experience.

Testing & Iteration

Since the study focused on early-stage product strategy, iteration centered on internal alignment and validation loops rather than direct usability testing.

We conducted multiple feedback sessions with product managers, AI engineers, and design leads, using the adoption framework and flow models as facilitation tools to test for feasibility, completeness, and alignment with known technical constraints. These conversations surfaced edge cases related to:

• Variability of developer expertise across different industries.
• Trade-offs between opinionated onboarding vs. preserving flexibility.
• Technical dependencies that could complicate safe sandboxing approaches.

As a result, we refined some of the early recommendations; for example, we shifted from fixed onboarding templates to more modular, opt-in integration scaffolds that could adapt to different user profiles and use cases.
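
As a rough illustration of the difference between a fixed template and modular, opt-in scaffolds, the sketch below composes setup steps from a small registry of optional modules. The module names and the `compose_scaffold` helper are hypothetical, not an existing product surface.

```python
# Illustrative only: one reading of "modular, opt-in integration scaffolds"
# versus a single fixed onboarding template. All names are hypothetical.
SCAFFOLD_MODULES = {
    "auth_setup": "wire API keys and environment configuration",
    "prompt_starter": "seed prompts for a chosen use case",
    "eval_harness": "small evaluation loop with sample inputs",
    "sandbox_env": "isolated environment for safe experimentation",
}

def compose_scaffold(selected: list[str]) -> list[str]:
    """Return setup steps only for the modules a developer opts into,
    so Explorers can start small while Integrators pull in more."""
    unknown = [name for name in selected if name not in SCAFFOLD_MODULES]
    if unknown:
        raise ValueError(f"unknown scaffold modules: {unknown}")
    return [f"{name}: {SCAFFOLD_MODULES[name]}" for name in selected]

# An Explorer might opt into just the basics; an Integrator adds evaluation
# and sandboxing on top of the same scaffold, rather than a different template.
print(compose_scaffold(["auth_setup", "prompt_starter"]))
print(compose_scaffold(["auth_setup", "prompt_starter", "eval_harness", "sandbox_env"]))
```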

This iterative refinement ensured that the strategic direction remained grounded, actionable, and adaptable as product exploration continued.

Outcomes & Impact

The study gave the product team a much sharper lens on how developers experience Gen AI beyond initial integration, revealing where platform design needed to shift from simply providing access to actively supporting adoption. The behavioral segmentation and adoption framework established a shared decision-making foundation that continues to guide both product and infrastructure conversations.

Key impacts included:

• Enabled product leads to confidently prioritize onboarding, guided model evaluation, and sandboxing as roadmap focus areas.
• Helped engineering teams identify where developer autonomy should be preserved vs. where additional scaffolding would create leverage.
• Provided a validated framework used internally across teams to shape upcoming Gen AI developer experience prototypes and platform evolution discussions.

Learnings

Working in an emerging space like Gen AI challenged me to structure research while both the technology and its use cases were rapidly evolving. This experience strengthened my ability to navigate strategic ambiguity while still delivering actionable frameworks that help teams make early product decisions.

What I would do differently:

In future studies, I would seek deeper longitudinal engagement with developers to track how friction evolves over multiple product cycles, capturing not only first-time integration, but also how teams scale, monitor, and optimize models in real-world conditions.

What’s next for the product:

The research continues to serve as a foundation for upcoming prototypes focused on reducing integration friction, improving model evaluation workflows, and shaping a more guided onboarding experience for Gen AI platforms.
