Case Study: Experimenting with AI to Guide Better Care

The Challenge

When people turn to Dialogue for care, they often don’t know exactly how to describe their health concern or what kind of help they need. Our intake flow was structured around clear-cut categories and multiple-choice options, but this rigid format didn’t work for everyone.

We started exploring whether AI could create a more natural, conversational way for members to explain their needs. The goal was to see if this approach would help people feel more understood from the very beginning.

The idea was exciting, but the challenge was clear: how could we explore this new direction without disrupting the core experience or over-investing in an unproven idea?

The Approach

1) Framing the Experiment

We launched a small-scale test available only to Dialogue employees. It was designed to explore desirability, not deliver a full AI intake solution. Before launch, we aligned on what success would look like: a strong majority of users finding the free-text input helpful, expressing interest in using it again, and reaching the same endpoint as the structured flow.

2) Designing the Flow

We added a screen inviting members to describe what was going on in their own words. Behind the scenes, an AI model would interpret their input and recommend a care path, while the structured intake remained as a fallback.
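
To illustrate the routing pattern described above, here is a minimal sketch in Python. The function names (classify_concern, route_member), the care-path labels, and the confidence threshold are all assumptions made for the example; this shows the general shape of a free-text step with a structured fallback, not Dialogue's actual implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Suggestion:
        care_path: str      # e.g. "mental_health", "primary_care" (illustrative labels)
        confidence: float   # model's confidence in the recommendation

    def classify_concern(free_text: str) -> Optional[Suggestion]:
        """Placeholder for the AI model call; returns None when the input can't be interpreted."""
        ...  # assumed model integration, not shown here

    CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for trusting the AI suggestion

    def route_member(free_text: str) -> str:
        """Recommend a care path from free text, falling back to the structured intake."""
        suggestion = classify_concern(free_text)
        if suggestion and suggestion.confidence >= CONFIDENCE_THRESHOLD:
            return suggestion.care_path
        return "structured_intake"  # fallback keeps the existing multiple-choice flow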

We also consulted with our clinical and legal teams early on to ensure the AI’s suggestions aligned with medical guidance and privacy expectations.

3) Gathering Feedback

After using the feature, members were prompted to answer a few questions to help us learn:

  • Was this helpful?

  • Did it feel more natural?

  • Would you want to use it again?

We paired this qualitative data with funnel analytics to understand where people dropped off, where they converted, and whether they followed the AI’s suggestion.
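
As a rough sketch of how that pairing could work, the snippet below rolls survey answers and funnel events into a few rates: drop-off before a recommendation, follow-through on the AI's suggestion, and perceived helpfulness. The record shape and field names (reached_recommendation, found_helpful, and so on) are assumptions for illustration, not our actual analytics pipeline.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class IntakeSession:
        member_id: str
        reached_recommendation: bool   # funnel: saw an AI care-path suggestion
        followed_recommendation: bool  # funnel: continued along the suggested path
        found_helpful: Optional[bool]  # survey: "Was this helpful?" (None if unanswered)

    def summarize(sessions: list[IntakeSession]) -> dict[str, float]:
        """Combine funnel and survey signals into a few illustrative rates."""
        total = len(sessions)
        reached = [s for s in sessions if s.reached_recommendation]
        followed = [s for s in reached if s.followed_recommendation]
        answered = [s for s in sessions if s.found_helpful is not None]
        return {
            "drop_off_rate": (1 - len(reached) / total) if total else 0.0,
            "follow_rate": len(followed) / len(reached) if reached else 0.0,
            "helpful_rate": (sum(s.found_helpful for s in answered) / len(answered)
                             if answered else 0.0),
        }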

Learnings

  • Most users gave brief, symptom-based inputs rather than detailed narratives

  • When the AI recommendation felt accurate, it built trust and confidence

  • If the suggestion missed the mark, it created confusion

  • Members wanted more clarity about what the AI was doing and what would happen next

  • The prompt framing influenced how much users shared, suggesting tone and context matter more than we expected

Outcomes

This wasn’t something we rolled out widely, and that was the point. It gave us early insight into how our members think and type, and how much they trust AI, which helped us refine our direction for the future.

The experiment also revealed key areas to improve, like adding more context, rethinking tone, and showing clearer next steps. It helped us define our longer-term vision for hybrid intake experiences that blend structure with the flexibility of natural conversation.

What this demonstrates

This project was a strong example of designing with intention rather than chasing excitement around new tech.

By starting small and listening closely, we uncovered the right questions to ask before going bigger. This work laid the foundation for more meaningful AI experiences, rooted in user trust, emotional clarity, and thoughtful experimentation.

Next time, I’d test multiple prompt styles and tone variations to better understand what unlocks more natural sharing. This experiment reminded me that language design is just as important as functionality, especially when people are talking about their health.