How We Harnessed AI to Amplify, Not Replace, the Human Touch

Role: Sole Senior UX Researcher - I independently designed, recruited, facilitated, synthesized, and delivered this study. No research team.

The Problem I Was Asked to Solve

Incomplete caregiver profiles were a major bottleneck in our marketplace, leading to fewer successful matches. We saw an opportunity to leverage AI to help caregivers craft compelling bios at scale. However, this introduced a high-stakes risk: the care industry is built on human trust and authenticity.

The challenge wasn't technical; it was deeply human. My task was to answer the critical question:

“How do we harness the efficiency of AI without sacrificing the authenticity that is the very currency of our platform?”

My Research Approach

To get rapid, behavioral feedback on this disruptive technology, I conducted an unmoderated usability test with 10 caregivers, using a high-fidelity Figma prototype. This method was the fastest way to observe visceral, unfiltered reactions to using AI for such a personal task.

My research focused on assessing the usability, effectiveness, and overall user perception of the AI tool, from the initial questions to the final consent screen.

Prototype of the AI Flow

The 'Aha!' Moment: The Authenticity Paradox

The research revealed a powerful paradox: while an overwhelming 9 out of 10 users preferred the AI-assisted option, they fundamentally rejected the idea of AI as an author. They wanted a partner, not a ghostwriter.

1. The Value: AI as an Efficiency Catalyst

Caregivers saw the AI as a powerful tool to overcome writer's block and save valuable time in their demanding schedules. They viewed the generated bio not as a finished product, but as an excellent "starting point" that empowered them to build upon.

"My first impression is that it's great. It's a great starting point, but it doesn't have my personal flair to it. So, what I would do is click 'Yes, it sounds like me,' but I hope that there is a space for me to edit this."

2. The Risk: AI as an Authenticity Threat

The single greatest fear was that the AI would create a "generic" persona, stripping away the unique voice that builds trust with families. Users were deeply concerned that the label "Crafted with help from AI" could act as a 'scarlet letter,' undermining their credibility and making families question their honesty.

"Concerned about losing the voice and personality, which is valued in the care industry... but definitely more favorable with the time-saving aspect."

My Recommendations: From Automation to Collaboration

My findings provided a clear strategy to resolve this paradox: empower the caregiver to collaborate with the AI, giving them full control and ownership of the final product.

  • I recommended repositioning the feature as a "Bio Co-pilot." Because users wanted to feel in control, I proposed adding an explicit "Edit" step immediately after generation.

  • Users found the initial questions too generic. I recommended expanding the question set and allowing for multiple-choice answers to better capture the nuance of their experience.

  • The "Crafted with help from AI" label was perceived negatively. I provided strong, user-backed evidence to the product and legal teams to remove this public-facing label.

Impact

This rapid usability study provided critical insights that fundamentally shaped the product's direction before a single line of code was written. My research directly influenced the product strategy to pivot from a simple "automation" feature to a more sophisticated "collaboration" tool. This user-centered approach de-risked the project by ensuring the final feature would enhance, not compromise, the authenticity that drives trust and successful matches on our platform.

Why This Transfers

The 'Authenticity Paradox' I surfaced here, users wanting AI collaboration rather than AI authorship, is the defining tension in enterprise AI adoption. The same trust dynamics that govern caregiver bio generation govern how knowledge workers adopt AI-assisted drafting, decision-support, and agentic tools. This study demonstrates my ability to research novel AI behaviors before established frameworks exist, which is exactly the capability senior AI product teams are hiring for right now.
