
Agentic UI: Building Frontend Interfaces for Multi-Step AI Agent Workflows

Single-turn LLM calls — send a prompt, stream a response — are a solved UI problem. Agent workflows, where an LLM plans and executes multi-step tasks using tools, are not. The frontend challenges are different in kind, not just in degree.

The previous article on LLM integration covered streaming chat responses, abort controllers, and optimistic UI for single-turn interactions. That article is still relevant — those patterns are still the majority of what ships. But in the past year, the products we build at Symfio have started to include agent workflows: the user gives a high-level instruction, and the model plans, calls tools, and produces a structured outcome across multiple steps. The frontend for these workflows is a genuinely different design problem.

What Makes Agentic UI Different

In a single-turn interaction, the latency is predictable (a few seconds), the output is a text stream, and the failure modes are limited: the model gives a bad answer, or the request times out. The UI maps naturally onto a chat bubble with a streaming cursor.

In an agent workflow:

  • Latency can be 30 seconds to several minutes
  • The output is a sequence of tool calls, each with its own inputs and outputs
  • Some tool calls have side effects — writing files, calling external APIs, modifying data
  • The model can get stuck, loop, or call the wrong tool entirely
  • Users need to understand what is happening and why, and they need a way out

Each of these changes how you build the interface.

Streaming Tool Call Progress

The server-sent event stream for an agent workflow contains more than text tokens. It contains structured events: tool calls starting, tool results arriving, planning steps, errors. The frontend needs to handle all of these as first-class events, not as text to be displayed:

// types/agent-events.ts

type AgentEvent =
  | { type: 'thinking';    content: string }
  | { type: 'tool_call';   id: string; name: string; input: unknown }
  | { type: 'tool_result'; id: string; output: unknown; error?: string }
  | { type: 'text';        content: string }
  | { type: 'done' }
  | { type: 'error';       message: string };

// hooks/use-agent.ts

import { useRef, useState } from 'react';

export function useAgent() {
  const [events, setEvents] = useState<AgentEvent[]>([]);
  const abortRef = useRef<AbortController | null>(null);

  const run = async (instruction: string) => {
    abortRef.current = new AbortController();
    setEvents([]);

    const response = await fetch('/api/agent/run', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ instruction }),
      signal: abortRef.current.signal,
    });

    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    let buffer = ''; // an SSE line can be split across network chunks

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() ?? ''; // keep the trailing partial line for the next chunk

      for (const line of lines) {
        if (!line.startsWith('data: ')) continue;
        const event = JSON.parse(line.slice(6)) as AgentEvent;
        setEvents(prev => [...prev, event]);
      }
    }
  };

  const cancel = () => abortRef.current?.abort();
  return { events, run, cancel };
}

The UI renders each event type differently. A tool_call event renders as a "calling tool" indicator; a tool_result renders its output inline; a thinking event may be shown as a collapsible reasoning block:

// components/AgentTimeline.tsx

function AgentTimeline({ events }: { events: AgentEvent[] }) {
  // Tool calls whose results have already arrived are no longer running
  const finished = new Set(
    events.filter(e => e.type === 'tool_result').map(e => e.id)
  );

  return (
    <ol aria-label="Agent steps" aria-live="polite" aria-atomic="false">
      {events.map((event, i) => (
        <li key={i}>
          {event.type === 'tool_call' && (
            <ToolCallCard
              name={event.name}
              input={event.input}
              status={finished.has(event.id) ? 'done' : 'running'}
            />
          )}
          {event.type === 'tool_result' && (
            <ToolResultCard id={event.id} output={event.output} error={event.error} />
          )}
          {event.type === 'text' && (
            <p className="agent-text">{event.content}</p>
          )}
        </li>
      ))}
    </ol>
  );
}

Human-in-the-Loop Confirmations

Some tool calls should not execute automatically. Deleting records, sending emails, making payments — these deserve an explicit user confirmation before the agent proceeds. The server pauses the workflow and emits a confirmation_required event; the UI renders a confirmation card and waits:

// types/agent-events.ts — extended
type AgentEvent =
  | { type: 'confirmation_required';
      id: string;
      tool: string;
      description: string;
      consequence: string }  // "This will delete 47 records permanently."
  // ... other event types

// components/ConfirmationGate.tsx
function ConfirmationGate({
  event,
  onConfirm,
  onCancel,
}: {
  event: Extract<AgentEvent, { type: 'confirmation_required' }>;
  onConfirm: (id: string) => void;
  onCancel: () => void;
}) {
  return (
    <div role="alertdialog" aria-labelledby="confirm-title" aria-modal="false">
      <h3 id="confirm-title">Confirm: {event.tool}</h3>
      <p>{event.description}</p>
      <p><strong>{event.consequence}</strong></p>
      <button onClick={() => onConfirm(event.id)}>Proceed</button>
      <button onClick={onCancel}>Cancel agent</button>
    </div>
  );
}

Design rule: Always describe the consequence in plain language, not the tool name. "Delete 47 records permanently" is a confirmation prompt. "execute_bulk_delete with ids=[...]" is not. The user confirming an action needs to understand what they are confirming, not what the model decided to call.
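The card's onConfirm handler still has to tell the server to resume the paused run. The transport is backend-specific; here is a minimal sketch, assuming a hypothetical /api/agent/confirm endpoint — the endpoint name and payload shape are illustrative, not a fixed API:

```typescript
// Hypothetical payload the client posts back with the user's decision.
// Kept as a pure function so the shape is easy to test and reuse.
function confirmationResponse(runId: string, confirmationId: string, approved: boolean) {
  return { runId, confirmationId, approved };
}

// Illustrative wiring for onConfirm / onCancel: the server resumes the paused
// run (or abandons it) and keeps streaming events on the original connection,
// so no new stream needs to be opened here.
async function respondToConfirmation(
  runId: string,
  confirmationId: string,
  approved: boolean,
): Promise<void> {
  await fetch('/api/agent/confirm', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(confirmationResponse(runId, confirmationId, approved)),
  });
}
```

Whether the server resumes on the same stream or requires the client to reconnect depends on how the run is persisted server-side; the point is that the decision travels on a separate request from the event stream.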

Cancellation Mid-Agent

Cancellation for agent workflows is more complex than for single-turn streams. An abort signal cancels the HTTP request, but the server-side agent may still be running. You need to:

  1. Abort the client-side stream (abort controller)
  2. Send a separate cancellation signal to the server to stop the agent execution
  3. Handle the partial state on the client — show what ran and what did not

const cancel = async () => {
  abortRef.current?.abort(); // stop reading the stream

  // Tell the server to stop the agent run itself
  await fetch('/api/agent/cancel', {
    method: 'POST',
    body: JSON.stringify({ runId: currentRunId }),
  });

  // Mark only the tool calls that never produced a result as cancelled
  setEvents(prev => {
    const finished = new Set(
      prev.filter(e => e.type === 'tool_result').map(e => e.id)
    );
    return prev.map(e =>
      e.type === 'tool_call' && !finished.has(e.id)
        ? { ...e, status: 'cancelled' }
        : e
    );
  });
};

Not Misleading Users About "Thinking"

The temptation is to show a generic "AI is thinking..." spinner for the duration of an agent run. This is harmful for two reasons: it makes a 90-second wait feel like a malfunction, and it hides information the user might need to understand whether the agent is doing something sensible.

Better patterns:

  • Show the current step — "Searching for recent invoices (step 2 of 4)" is more trustworthy than a spinner
  • Show elapsed time — a visible timer signals that time is passing intentionally, not that the request is stalled
  • Make thinking expandable, not invisible — the model's reasoning chain can be shown in a collapsed section; users who want to verify the logic can, users who do not can ignore it
  • Show what has already completed — completed tool calls should render their results immediately, not wait for the whole run to finish

Accessibility note: Use aria-live="polite" on the agent timeline so screen reader users receive updates as steps complete. Use aria-live="assertive" only for confirmation prompts and errors that require immediate attention.
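The "show the current step" pattern can be driven directly from the event log — no extra server state needed. A sketch of a helper that derives a status line from the events already in the timeline (the event shapes mirror the AgentEvent union from earlier; the wording of the status strings is illustrative):

```typescript
// Minimal event shapes, mirroring the AgentEvent union defined earlier.
type ToolCall = { type: 'tool_call'; id: string; name: string };
type ToolResult = { type: 'tool_result'; id: string };
type Event = ToolCall | ToolResult | { type: 'text' } | { type: 'done' };

// Derive a human-readable status line from the event log instead of
// showing a generic spinner: name the running tool, count completed
// steps, and surface elapsed time.
function statusLine(events: Event[], elapsedSeconds: number): string {
  const calls = events.filter((e): e is ToolCall => e.type === 'tool_call');
  const finished = new Set(
    events.filter((e): e is ToolResult => e.type === 'tool_result').map(e => e.id),
  );
  const running = calls.find(c => !finished.has(c.id));
  const done = calls.filter(c => finished.has(c.id)).length;
  return running
    ? `Running ${running.name} (step ${done + 1}, ${elapsedSeconds}s elapsed)`
    : `${done} steps completed (${elapsedSeconds}s elapsed)`;
}
```

In a component, elapsedSeconds would come from a timer started when run() is called; the status line re-derives on every event, so it stays accurate without any separate progress bookkeeping.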

Error Handling When a Tool Fails

Agent tool failures are different from API errors. A tool can fail while the overall agent run is still proceeding — the model may retry, use a fallback tool, or decide the task is uncompletable. The UI should reflect this nuance:

  • A failed tool call that the model recovers from: show the failure inline on the tool card, do not interrupt the flow
  • A failed tool call that stops the agent: show the error prominently and offer a "retry from this step" option if the server supports it
  • A timeout: distinguish between "the tool timed out" and "the whole agent timed out" — they have different recovery paths
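The recoverable-versus-fatal distinction can often be read off the event log itself: if the agent emitted further tool calls or text after a failed tool result, it recovered; if the failure is the last meaningful thing in the stream, it stopped the run. A sketch of that heuristic (the event shapes mirror the earlier union; the severity labels are illustrative):

```typescript
// Minimal event shapes, mirroring the AgentEvent union defined earlier.
type Ev =
  | { type: 'tool_call'; id: string }
  | { type: 'tool_result'; id: string; error?: string }
  | { type: 'text' }
  | { type: 'error' }
  | { type: 'done' };

// Heuristic: a tool failure the agent recovered from is rendered inline on
// the tool card; a failure the run never moved past is rendered prominently.
function failureSeverity(events: Ev[], failedId: string): 'inline' | 'prominent' {
  const idx = events.findIndex(
    e => e.type === 'tool_result' && e.id === failedId && e.error !== undefined,
  );
  if (idx === -1) return 'inline'; // no recorded failure for this id
  // Did the agent keep working after the failure?
  const recovered = events
    .slice(idx + 1)
    .some(e => e.type === 'tool_call' || e.type === 'text');
  return recovered ? 'inline' : 'prominent';
}
```

A server that knows why the run ended can of course send that verdict explicitly; this client-side heuristic is a fallback for streams that only carry the raw events.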

Key Takeaways

  • Agent workflow events are structured (tool calls, tool results, thinking) — parse and render them as typed events, not raw text
  • Human-in-the-loop confirmations should describe the consequence in plain language, not the internal tool name
  • Cancellation requires both aborting the client stream and signalling the server to stop execution
  • Never show a generic spinner for long agent runs — show current step, elapsed time, and completed results progressively
  • Distinguish recoverable tool failures from agent-stopping failures — they require different UI treatment