Not Just a Chatbot: What It Means to Design an AI Agent
Designing Sprounix taught us that agentic AI isn’t just about chatbots—it’s about building trust, supporting users, and rethinking the entire experience around their intent. This post shares key lessons from that journey.
Xiaoshi Dai, Product Designer
Jun 2, 2025
Through designing Sprounix—an AI agent that supports job seekers and professionals—I’ve gained firsthand insight into what it truly means to create an agentic user experience. This wasn’t just about adding a chatbot or layering AI into existing flows. It was about rethinking the foundation: redefining the AI’s role, reframing user interactions, and reshaping the entire product architecture around trust, intention, and support.
This piece captures a few core reflections that guided our design decisions. I hope these ideas resonate with others designing in this space—and spark deeper discussion on what Agentic AI could and should become.
Agents as Supporters, Not Screeners
From the beginning, we defined the role of our AI agent as a supporter—not a screener. This core principle shaped everything from tone to flow across the entire design process.
In the real world, job seekers face constant pressure. They're endlessly evaluated—moving through one filter after another: from applications and recruiter calls to multiple interview rounds and the stress of negotiating offers. The experience is often built around judgment and exclusion.
Our commitment to support over scrutiny shaped the user journey at every touchpoint:
Set the tone: During onboarding, the agent clearly communicates its purpose—to understand and support, not to judge.
Listen, see, understand: Our AI interviews aren’t designed to detect flaws. They’re designed to surface strengths.
Do the job for them: The agent doesn’t just advise—it acts. It connects users with matching roles, proactively surfaces profiles to companies, and opens doors.
Stay adaptive: The agent grows with the user. Whether someone is looking for a first job or a career shift, the agent offers context-aware guidance and encouragement.
This mindset—designing an AI agent that empowers, not critiques—should apply far beyond job search. I’ve heard people express frustration with financial advisors who are overly critical of their decisions—so much so that they’d prefer using an AI advisor. Users want to feel supported and confident, not judged or diminished. In any domain, Agentic AI must uplift, not override.
Designing Agents for Intention
One of the most complex challenges in designing an agentic system is enabling the AI to understand what users actually mean. People rarely show up with neatly defined tasks. Instead, they arrive with loosely formed goals, vague frustrations, or incomplete thoughts. They say things like, “I’m looking for something more stable,” or “This resume just doesn’t feel right.” These aren’t form inputs—they’re expressions of intent.
Rather than designing around structured inputs, we reframed our approach around intention-to-action flows. The mental model I developed looked like this:
Intent → Clarification → Data Collection → Action.
Instead of asking “What field does this go into?” we now ask:
What is the user really trying to achieve?
How would they express that naturally?
What kind of response will actually help them move forward?
Through this lens, we stopped treating users as data providers and repositioned the AI agent from a passive tool waiting for commands to an active collaborator working alongside the user. The result was a more fluid, human-centered experience, one where the agent could listen, interpret, and act. And in doing so, the experience didn’t just feel smarter; it felt more human.
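To make the loop concrete, here is a minimal sketch of how it could be modeled in code. Everything below is hypothetical: the type names, the confidence threshold, and the required fields are invented for illustration and are not from the Sprounix codebase.

```typescript
// Hypothetical sketch of the Intent → Clarification → Data Collection → Action flow.
// Illustrative only; not the Sprounix implementation.

type Intent =
  | { kind: "find_role"; confidence: number }       // e.g. "I'm looking for something more stable"
  | { kind: "improve_resume"; confidence: number }  // e.g. "This resume just doesn't feel right"
  | { kind: "unknown" };

type AgentStep =
  | { type: "clarify"; question: string }              // ask a follow-up in plain language
  | { type: "collect"; fields: string[] }              // gather the data the action needs
  | { type: "act"; action: string; payload: unknown }; // do the work on the user's behalf

// Decide the next step from what we know so far, rather than
// demanding a fully specified task up front.
function nextStep(intent: Intent, known: Record<string, unknown>): AgentStep {
  if (intent.kind === "unknown" || intent.confidence < 0.6) {
    return { type: "clarify", question: "What would 'more stable' look like for you?" };
  }
  const missing = requiredFields(intent).filter((f) => !(f in known));
  if (missing.length > 0) {
    return { type: "collect", fields: missing };
  }
  return { type: "act", action: intent.kind, payload: known };
}

function requiredFields(intent: Intent): string[] {
  return intent.kind === "find_role" ? ["location", "salaryRange"] : ["resumeSection"];
}
```

The point is the shape of the loop, not the details: the agent never blocks on a perfectly formed request. At every turn it decides whether to clarify, collect, or act.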
Beyond Chat vs. GUI
Throughout the process, we kept returning to one core vision: create interactions that feel natural—like working with a human agent. We quickly realized that designing for Agentic AI isn’t simply a matter of choosing between conversation or GUI.
One of our longest-standing debates centered on a deceptively simple task: How should users update their resume and job preferences? We explored multiple approaches. In a purely conversational model, users would tell the agent what they wanted to change, and the AI would respond with follow-up questions. This felt natural and flexible but quickly became inefficient when specific, structured information was needed. At the other end of the spectrum, we considered traditional form-based editing. This made precise input easier but broke the flow of the conversation.
We ultimately landed on a hybrid approach. For open-ended or subjective content—like describing preferred team culture or long-term goals—the chat worked best. For critical, high-accuracy inputs like job location or salary expectations, structured forms embedded in the chat provided clarity. And for reviewing and editing an entire profile, a direct panel with contextual guidance from the agent gave users a high-level view without breaking the experience.
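One way to picture this hybrid is a single message schema that can carry all three modes inside the same conversation. The sketch below is hypothetical; the type names and fields are invented to illustrate the idea, not our actual schema.

```typescript
// Hypothetical message schema for the hybrid approach: a single chat
// stream carries plain conversation, an inline structured form for
// high-accuracy fields, or a directive that opens the profile panel.

type ChatMessage =
  | { role: "agent" | "user"; kind: "text"; body: string }
  | { role: "agent"; kind: "form"; fields: FormField[] }   // rendered inline, inside the chat
  | { role: "agent"; kind: "open_panel"; panel: "profile_review" };

type FormField =
  | { name: "location"; label: string; type: "text" }
  | { name: "salary_expectation"; label: string; type: "number"; currency: string };

// Example: the agent asks for structured input without leaving the conversation.
const salaryPrompt: ChatMessage = {
  role: "agent",
  kind: "form",
  fields: [
    { name: "location", label: "Preferred job location", type: "text" },
    { name: "salary_expectation", label: "Expected salary", type: "number", currency: "USD" },
  ],
};
```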
This same principle led us to rethink our entire information architecture. Initially, we followed standard SaaS conventions—side menus, tabs, and task-specific pages. But we found these patterns clashed with a conversation-first model. Users would engage meaningfully with the agent, only to be pulled out into static screens to take action—breaking context and flow.
So we pivoted. What if every workflow began within the conversation? What if the agent didn’t just assist, but orchestrated the experience end-to-end? We replaced traditional navigation with contextual panels triggered by chat. The result was not only more coherent—it felt like a truly unified system. This wasn’t a chatbot bolted onto a product. It was the product—a real Agentic experience.
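As a rough sketch of what conversation-first orchestration might look like in code, imagine agent replies carrying an optional panel directive that drives the UI. The panel names and the applyReply helper below are hypothetical, invented for illustration.

```typescript
// Hypothetical sketch of conversation-first navigation: UI state is
// driven by directives attached to agent replies, not by a side menu.

type PanelId = "profile_review" | "job_matches" | "interview_prep";

interface AgentReply {
  text: string;
  openPanel?: PanelId; // optional directive: show a contextual panel
}

interface UiState {
  transcript: string[];
  activePanel: PanelId | null;
}

// Every workflow begins in the conversation; panels appear and
// disappear in context rather than living behind static navigation.
function applyReply(state: UiState, reply: AgentReply): UiState {
  return {
    transcript: [...state.transcript, reply.text],
    activePanel: reply.openPanel ?? state.activePanel,
  };
}

// Example: the agent surfaces matches and opens the relevant panel.
const next = applyReply(
  { transcript: [], activePanel: null },
  { text: "I found three roles that fit your preferences.", openPanel: "job_matches" },
);
```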
Final Thoughts
Designing for Agentic AI means thinking beyond screens and interfaces. It’s about shaping a relationship—one that feels intuitive, supportive, and trustworthy.
There’s still so much we’re learning about what makes a great agent:
How can AI better interpret human intention and act in users’ best interest?
Where do we draw the line between AI autonomy and user control?
What level of transparency builds trust without overwhelming the experience?
And as we move toward an AI-to-AI (A2A) future—where agents interact with other agents—how do we design for those invisible, behind-the-scenes dynamics?
These are not just UX questions. They’re ethical, strategic, and deeply human, and they’re shaping the next era of product design. If you’re working on Agentic AI, I hope these thoughts help you feel a little more seen, and maybe a little more inspired.