HIGH PRIORITY: Bug Report: Manual Test Interface - Agent Transfer Transition Failure

Date: December 20, 2025
Reporter: Ken Corse
Severity: High (blocks manual testing workflow for multi-agent systems)

Summary

The manual test interface incorrectly handles node transitions after agent-to-agent transfers, causing conversations to route to the “Begin” node instead of following the configured transitions. Voice calls work correctly with identical agent configurations, confirming this is a platform UI bug, not an agent configuration issue.

Environment

  • Platform: Retell.ai Conversation Flow

  • Test Method: Manual Test Interface (web-based testing tool)

  • Agents Involved:

    • Main Agent (ID: agent_9ee27e595a6ef9c72bb6efc0d9)

    • FAQ Agent (ID: agent_5f3085ed225e54ec9b63a109d8)

Expected Behavior

When a user responds to a question in a conversation node after being transferred from another agent:

  1. Node should evaluate the transition condition

  2. Flow should proceed to the next configured node

  3. Manual test should behave identically to voice calls

Actual Behavior (Manual Test Interface)

After agent transfer, when user responds to conversation node:

  1. Transition condition is not evaluated

  2. Flow incorrectly routes to “Begin” node instead of configured next node

  3. Debug panel shows contradictory information (see evidence below)

Steps to Reproduce

  1. Start manual test in Main Agent

  2. Trigger FAQ transfer:

    • User provides name

    • At phone number prompt, say: “I have a question”

    • Main Agent global node captures user_input

    • Main Agent transfers to FAQ Agent via agent_swap

  3. FAQ Agent receives control:

    • Says: “One moment, please”

    • Extracts user_input and resume_node from context

    • Routes through logic flow

    • Reaches “Offer Callback” node

    • Asks: “If I can’t provide all the information you need, would you like to have someone get in touch with you?”

  4. User responds: “Yes” or “No”

  5. Bug occurs:

    • Flow goes to “Begin” instead of “OK. Got It!” node

    • Conversation effectively ends/restarts
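The intended routing in steps 3–5 can be sketched as a minimal flow-graph simulation. This is a hypothetical stand-in for Retell's transition engine, not its actual implementation; the node names and the “User replies.” condition come from the report, and the condition is modeled here simply as “any non-empty reply”:

```python
# Hypothetical sketch of the intended flow after the agent_swap transfer.
# Node names are taken from the bug report; the evaluation logic below is
# a simplified illustration, not Retell's actual engine.

FLOW = {
    "Offer Callback": {
        "prompt": ("If I can't provide all the information you need, "
                   "would you like to have someone get in touch with you?"),
        "transitions": [
            # "User replies." is modeled as: any non-empty response matches.
            {"condition": lambda reply: bool(reply.strip()),
             "destination": "OK. Got It!"},
        ],
    },
    "OK. Got It!": {"prompt": "OK. Got it!", "transitions": []},
}

def next_node(current: str, user_reply: str) -> str:
    """Return the destination of the first matching transition,
    staying on the current node if no condition matches."""
    for edge in FLOW[current]["transitions"]:
        if edge["condition"](user_reply):
            return edge["destination"]
    return current

# Expected behavior (what voice calls do): "yes" and "no" both satisfy
# "User replies." and route to "OK. Got It!" -- never to "begin".
print(next_node("Offer Callback", "yes"))  # OK. Got It!
print(next_node("Offer Callback", "no"))   # OK. Got It!
```

Under this model there is no path from “Offer Callback” to “begin” at all, which is why the manual test's routing looks like the condition was never evaluated.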

Evidence

1. Debug Panel Shows Contradictory Information

Screenshot evidence (Screenshot_2025-12-20_at_5_07_25_PM.png):

  • Debug panel shows: “Node Transition: OK. Got It!” with frequency 10/10

  • This indicates the system knows it should transition to “OK. Got It!”

  • However, actual conversation shows: “new node: begin”

This is definitive proof of a platform bug - the system is logging the correct transition internally but executing the wrong one.

2. Manual Test Conversation Log (Screenshot_2025-12-20_at_5_07_35_PM.png)

Agent: If I can’t provide all the information you need, would you like to have someone get in touch with you?

User: yes

Node Transition

previous node: Offer Callback

new node: begin ← INCORRECT (should be “OK. Got It!”)

3. Voice Call Works Correctly

Call ID: call_723ff620dbce28657006c3356de
Timestamp: 2025-12-20

Transcript excerpt:

Agent: If I can’t provide all the information you need, would you like to have someone get in touch with you?

User: Yes.

Agent: What would you like to know? ← CORRECT flow continuation

User: I’d like to know how long the company has been in business.

Agent: I found the information. The company has been in business for 12 years, since 2013.

The voice call demonstrates that the agent configuration is correct - the exact same agents that fail in manual testing work perfectly in production voice calls.

Node Configuration

Offer Callback Node:

  • Type: Conversation (static text)

  • Prompt: “Say EXACTLY: ‘If I can’t provide all the information you need, would you like to have someone get in touch with you?’ -Repeat question ONLY if needed to get either an affirmative or negative reply.”

  • Transition condition: “User replies.”

  • Destination: “OK. Got It!” node (ID: node-1766233519228)

OK. Got It! Node:

  • Type: Conversation (static text)

  • Text: “OK. Got it!”

  • Skip Response Edge → Variable-Set callback_requested node
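For reference, the failing edge can be expressed as a minimal JSON sketch. The field names here are illustrative and are not Retell's exact export schema; only the node name, condition text, and destination ID are taken from the configuration above:

```json
{
  "name": "Offer Callback",
  "type": "conversation",
  "edges": [
    {
      "condition": "User replies.",
      "destination_node_id": "node-1766233519228"
    }
  ]
}
```

There is a single outgoing edge, so any correct evaluation of the node should land on `node-1766233519228` (“OK. Got It!”).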

Troubleshooting Attempted

We attempted multiple variations to isolate the issue:

Transition Condition Variations (all failed in manual test):

  • “User replies.”

  • “User provides any response.”

  • “User says yes”

  • “User says no”

  • “Always”

  • Blank/empty condition

Node Structure Variations (all failed in manual test):

  • Adding intermediate conversation node between Offer Callback and EV node

  • Converting to Skip Response node

  • Various combinations of edges and conditions

Result: All variations failed in manual test but worked in voice calls, confirming platform bug.

Impact

High severity because:

  1. Blocks efficient development workflow - developers cannot reliably test multi-agent flows manually

  2. Wastes development time - requires publishing and making voice calls for every test iteration

  3. Reduces debugging capability - voice calls lack the debug panel, step-through, and detailed inspection available in manual tests

  4. Creates false negatives - developers may abandon working configurations thinking they’re broken

  5. Affects confidence - contradictory debug information undermines trust in the platform

Workaround

Until fixed, developers must:

  1. Use manual tests for basic flow validation only

  2. Use voice calls for validating any agent transfer functionality

  3. Accept slower iteration cycles and reduced debugging visibility

  4. Document that manual test results are unreliable for multi-agent scenarios

Requested Action

  1. Investigate why manual test interface handles node transitions differently than voice calls after agent transfers

  2. Fix the transition evaluation logic in manual test interface to match production behavior

  3. Fix the debug panel to show accurate information (or explain why it shows “OK. Got It!” while routing to “begin”)

  4. Test across multiple agent transfer scenarios to ensure comprehensive fix

  5. Document any known limitations of manual test interface vs. voice calls

Additional Context

  • This issue specifically manifests after agent_swap tool invocations

  • Direct testing within the FAQ Agent (without transfer) works correctly in manual tests

  • Both agents use Conversation Flow (not Retell LLM type)

  • Issue is reproducible 100% of the time in manual tests

  • Issue has 0% occurrence in voice calls (works perfectly)

Supporting Files

Available upon request:

  • Complete agent JSON exports (Main Agent & FAQ Agent)

  • Additional screenshots of configuration

  • Full conversation logs from manual tests

  • Call recordings demonstrating correct behavior

Contact: Ken Corse
Priority Request: High - blocking development workflow

Based on your detailed bug report, this appears to be a known issue with node transitions in Conversation Flow agents, particularly after agent transfers. The documentation and community discussions provide several relevant insights and troubleshooting approaches.

Known Node Transition Issues

Your issue matches patterns documented in the community where nodes transition incorrectly or fail to evaluate conditions properly(1) - specifically, cases where a flow like “A → B → C → D” jumps unexpectedly, or where transition conditions are met but the agent doesn’t advance(2).

Agent Transfer Behavior

When using Agent Transfer, the destination agent receives the full conversation history(3). However, the documentation notes that certain settings from the first agent persist throughout the call, while others (like language, voice, and voiceModel) reflect the currently active agent(3).

Debugging Node Transitions

For transition issues, the documentation recommends(4):

Review transition conditions - Ensure they precisely match your intended triggers. Consider prompt engineering or breaking down complex conditions into multiple simpler ones(4).

Add transition finetune examples - Provide examples to help the model understand your expectations(4)(2). For your specific case, you would provide a transcript context showing the user responding “yes” or “no” to the callback question, and specify “OK. Got It!” as the transition result(2).

Make conditions more specific - Instead of generic conditions like “User replies,” try more explicit conditions such as “User responds to question of their age” or “User indicates they want to book a meeting”(5).

Split nodes - If a node contains multiple responsibilities, split it into separate nodes to improve consistency(4).
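As an illustration of the finetune-example suggestion above, a transcript snippet like the following could be attached to the “Offer Callback” transition. The layout here is illustrative, not Retell's exact finetune-example format; the dialogue and expected destination are taken from the report:

```
Transcript context:
  Agent: If I can't provide all the information you need, would you
         like to have someone get in touch with you?
  User: yes

Expected transition: OK. Got It!
```

A second example with the user answering “no” would cover the negative branch, since both replies should take the same edge.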

Testing Differences

You can test agent transfer in both web call and playground(3). However, the discrepancy you’re seeing between manual testing and voice calls suggests a platform-specific issue with how the manual test interface handles post-transfer transitions.

Recommended Actions

Since your voice calls work correctly with identical configurations, this confirms your agent setup is valid. For the manual testing issue:

  1. Document the working voice call behavior as your baseline
  2. Add transition finetune examples to the “Offer Callback” node showing the exact user responses and expected “OK. Got It!” transition(2)
  3. Tighten the transition condition - replace “User replies” with something more specific like “User responds affirmatively or negatively to callback question”(1)
  4. Check for global nodes that might be interfering with transitions(1)
  5. Review call logs for the specific condition that fired during the incorrect transition(1)

Reporting the Bug

Since this is a platform UI discrepancy between manual testing and production calls, you should create a ticket in the bug report channel with your call ID (call_723ff620dbce28657006c3356de) and agent IDs to help the support team investigate the manual test interface behavior(6).

The contradictory debug information (showing “OK. Got It!” internally while routing to “begin”) is particularly valuable evidence for the support team to diagnose the root cause.

Hello @kenjcorse2,

Please write a quick summary without using an LLM, because it’s not always accurate.

Also, share the call ID for the call that has this issue.