Agentic AI

CLIENT:

Dify

YEAR:

2025-2026

MY ROLE

Product Design Consultant

Case Study: Agentic AI & LLM Orchestration

As a consulting initiative for an AI startup, this project faced typical high-growth constraints: limited time and restricted access to direct user interviews. To navigate these challenges, I pioneered an AI-Augmented Research workflow. By leveraging LLMs to synthesize fragmented online user feedback and technical documentation, I rapidly mapped out the user's mental model and identified friction points. This agile approach allowed me to move from raw data to actionable design strategy in a fraction of the traditional research cycle.

Cognitive Gap: Error feedback relies on raw system logs (JSON) that provide no actionable guidance for business users.

Context Loss: Directing users to "Tracing" creates a fragmented workflow, forcing them to exit the design context to find solutions.

Poor Observability: The lack of visual mapping between error messages and node states makes it difficult to pinpoint issues quickly.

About this Project

Stage 1: Seamless Preview (Focus on Output)

Minimalist validation environment. Interact with the bot immediately to verify the business outcome without being distracted by backend logic.

Stage 2: On-Demand Configuration (Contextual Editing)

Surgical logic adjustment. When the output misaligns with expectations, the user can call up the specific node configuration. Advanced settings remain hidden to maintain clarity.

Stage 3: Deep Debugging (Guided Troubleshooting)

Transparent Error Handling. If a system failure occurs, the "Debug" tab provides granular insights. We transform raw JSON into actionable data, mapping errors directly to node states to speed up recovery.
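To make the "raw JSON to actionable data" idea concrete, here is a minimal sketch of the translation layer. Dify's actual error schema and node model are internal, so the field names (`node_id`, `error_type`) and the guidance strings below are illustrative assumptions, not the shipped implementation.

```python
# Illustrative sketch only: the error fields and guidance copy below are
# assumptions, not Dify's real schema.
import json

# Hypothetical mapping from machine error codes to plain-language guidance.
GUIDANCE = {
    "rate_limit_exceeded": "The model provider is throttling requests. Retry in a minute or lower the concurrency.",
    "invalid_api_key": "The API key for this node is missing or expired. Re-enter it in the node's credentials panel.",
    "timeout": "The node took too long to respond. Check the upstream service or raise the timeout setting.",
}

def translate_error(raw_log: str) -> dict:
    """Turn a raw JSON error log into a node-anchored, actionable message."""
    event = json.loads(raw_log)
    code = event.get("error_type", "unknown")
    return {
        "node_id": event.get("node_id"),  # lets the UI highlight the failing node
        "state": "failed",
        "message": GUIDANCE.get(code, "An unexpected error occurred. Open the Debug tab for the full trace."),
    }

result = translate_error('{"node_id": "llm_3", "error_type": "rate_limit_exceeded"}')
print(result["node_id"], "->", result["message"])
```

The key design move is anchoring every message to a `node_id`, which is what lets the Debug tab highlight the failing node instead of dumping an undifferentiated log.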

Context Synthesizer: I acted as the Curator and Strategist. While I used AI to automate the heavy lifting—such as scanning competitor feature sets and synthesizing user archetypes from raw community data—my focus remained on Synthesizing Insights. I refined the AI-generated persona, "Alex Chen," by injecting real-world empathy and business constraints that AI often overlooks, ensuring the final redesign solved the actual "trust" issue in AI automation.

Brainstorming Collaborator: To gain a competitive edge in the fast-moving AI Agent space, I leveraged LLMs to conduct a Rapid Feature Synthesis. This allowed me to analyze the agentic workflows of three major players simultaneously, focusing my human energy on identifying the 'Mental Model Gap' that these tools failed to address.

Prompt: Act as a Senior Product Strategist specializing in No-code/AI platforms.

Task: Conduct a competitive analysis of Coze, Make.com, and Zapier, focusing specifically on their 'Agentic Workflow' UX.

Analysis Dimensions:

  1. User Barrier: How do they handle 'technical noise' (e.g., JSON, API keys) for non-tech users?

  2. Error Recovery: What is the UX when an Agentic loop fails? (Self-healing vs. manual debug)

  3. Observability: How visually clear is the 'thought process' of the AI Agent during execution?

  4. Flexibility vs. Rigidity: Is the workflow a linear DAG (Directed Acyclic Graph) or a true autonomous loop?

Output Requirement: Provide a comparison table and a summary of 'Design Patterns' used by each to bridge the gap for business users.

Through AI-assisted competitive analysis, I uncovered a strategic divergence: while Coze prioritizes its 'plugin ecosystem' and Zapier masters 'app connectivity,' Dify’s true competitive edge lies in 'Visual Orchestration.' This insight directly informed my redesign of the Debug Workflow, pivoting the focus toward making complex logic transparent and manageable for non-tech users.

Current UI Problems: 

Visual Clutter: Non-essential technical variables distract users from primary input tasks.

Suggested Solution:

Empowerment through Configuration: Implementing a toggle-based control to enable or disable chat features, allowing users to customize the interface based on their specific use case.

Current UI Problems: 

High Cognitive Barrier: Technical features are difficult to interpret for non-tech personas, leading to confusion and lack of confidence.

Functional Overwhelm: An excessive number of advanced settings are exposed simultaneously, cluttering the workspace and paralyzing decision-making.


Suggested Solution:

Strategic Pruning: Removed low-frequency features or defaulted them to hidden, reducing visual noise.

Taxonomy Reorganization: Grouped features based on User Mental Models, ensuring logically connected tools are physically adjacent.

Progressive Disclosure: Implemented a tiered interface where advanced configurations are tucked away behind an "Advanced Settings" fold.

Contextual Guidance: Integrated clear, in-place tooltips and semantic descriptions for complex features to bridge the knowledge gap.
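The tiered interface above can be sketched as a simple visibility model: each setting carries a tier, and the "Advanced Settings" fold toggles which tier is rendered. The setting names and tiers here are invented for illustration and are not Dify's actual configuration surface.

```python
# Sketch of progressive disclosure as data: settings tagged by tier,
# filtered by the current disclosure state. Names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Setting:
    key: str
    label: str
    tier: str  # "basic" is always visible; "advanced" sits behind the fold

SETTINGS = [
    Setting("model", "Model", "basic"),
    Setting("prompt", "Prompt", "basic"),
    Setting("temperature", "Temperature", "advanced"),
    Setting("top_p", "Top P", "advanced"),
]

def visible_settings(show_advanced: bool) -> list[str]:
    """Return the setting keys to render for the current disclosure state."""
    return [s.key for s in SETTINGS if s.tier == "basic" or show_advanced]

print(visible_settings(False))  # basic tier only
print(visible_settings(True))   # everything, once "Advanced Settings" is opened
```

Modeling disclosure as data rather than hard-coded panels is what makes the taxonomy easy to reorganize as the mental-model research evolves.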

Prompt: Please read through a list of raw user feedback from Discord and GitHub. Your task is to act as a UX Researcher.

  1. Identify users who demonstrate 'Business Logic' but struggle with 'Technical Implementation' (e.g., complaining about JSON, API, or coding terms).

  2. Filter out comments from advanced developers or engineers.

  3. Categorize their pain points into: UI Confusion, Terminology Barrier, and Mental Model Gap.
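As a rough illustration of the triage the prompt delegates to the LLM, the same three-bucket categorization can be approximated with a keyword heuristic. The keywords below are illustrative assumptions; in practice the LLM handles the nuance this toy version misses.

```python
# Toy keyword-based approximation of the LLM triage: bucket raw feedback
# into the three pain-point categories. Keywords are illustrative only.
CATEGORIES = {
    "Terminology Barrier": ["json", "api", "token", "webhook", "schema"],
    "UI Confusion": ["can't find", "where is", "hidden", "menu", "button"],
    "Mental Model Gap": ["why", "expected", "thought it would"],
}

def categorize(comment: str) -> str:
    """Return the first category whose keywords appear in the comment."""
    text = comment.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Uncategorized"

print(categorize("What is a JSON body? I just want to send an email."))
print(categorize("I can't find the publish button anywhere."))
```

Even this crude pass is useful for sampling: it surfaces candidate comments for the LLM (and the researcher) to examine more closely.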

AI-Integrated User Research

Building Users' Mental Model

Design Explorations

Suggested Workflow

Current Workflow

Thank you for scrolling :)

If you want to see more detail about this project, feel free to reach out!