Has anyone experienced node transitions completely stopping when switching from GPT-4o to GPT-4.1?
We have a multi-node agent that works perfectly on GPT-4o — all transition tool calls fire as expected and the agent moves through nodes correctly. But when we use the exact same prompts on GPT-4.1, the agent never calls the transition tool at all. It just treats everything as one big conversation and flows through all sections without ever transitioning between nodes.
The prompts haven’t changed, so it seems like something specific to how GPT-4.1 handles tool calls on Retell’s platform.
Anyone running into the same thing, or found a workaround? Would love to hear if this is a known issue or just us.
Based on the available documentation, tool-calling behavior differences between models are a known consideration in Retell AI.(1)
When tools aren’t being triggered as expected, the recommended approach is to make your tool-calling instructions more explicit in your prompts.(1)(2) For Single/Multi Prompt agents, you should define clear triggers for tool usage rather than relying on tool descriptions alone.(1)
- **Specify exact conditions for tool usage:** reference tools by their exact names and list specific trigger words or phrases that should activate them.(1)
- **Define sequences:** clarify when tools should be called and in what order.(1)
- **Set boundaries:** explain when NOT to call certain tools.(1)
- **Provide context:** explain why each tool is being called.(1)
For example, structure your tool instructions like this:(1)
```
## Tool Usage Instructions
1. Gather initial information about the customer's issue.
2. Determine the type of request:
   - If customer mentions "refund" or "money back":
     → Call function `transfer_to_support` immediately
   - If customer needs order status:
     → Call function `check_order_status` with order_id
```
Our LLM does not have “tool_call_strict_mode” set. We’d like to enable it. The states and edges are correctly configured, but the model is not triggering any state transitions during calls. Could enabling tool_call_strict_mode resolve this?
Yes, enabling tool_call_strict_mode could help with your state transition issues. This setting is available when creating or updating a Retell LLM and affects how tool calls are handled.(1)(2)
According to the documentation, tool_call_strict_mode is described as “Whether to use strict mode for tool calls. Only applicable when using certain supported models.”(1)(2) When set to true, it ensures stricter adherence to the tool calling schema.(1)(2)
How to enable it:
You can enable tool_call_strict_mode on your existing Retell LLM using the Update Retell LLM API.(2)
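As a rough sketch, the update request could be built like this. The endpoint path, the `llm_` id, and the API key are placeholders/assumptions here; check the Update Retell LLM API reference for the exact URL and auth scheme before sending anything.

```python
import json

# Hypothetical values -- substitute your own LLM id and API key.
LLM_ID = "llm_xxxxxxxxxxxx"
API_KEY = "YOUR_RETELL_API_KEY"

# Request body for the Update Retell LLM call: only the field being changed.
payload = {"tool_call_strict_mode": True}

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Assumed endpoint shape -- verify against the current API reference.
url = f"https://api.retellai.com/update-retell-llm/{LLM_ID}"

body = json.dumps(payload)
print(body)  # the JSON the API would receive

# To actually send it, something like the following (not executed here):
# import requests
# resp = requests.patch(url, headers=headers, data=body)
# resp.raise_for_status()
```

Because only the fields you include are changed, sending just `tool_call_strict_mode` leaves the rest of the LLM configuration (states, edges, prompts) untouched.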
- This feature is "only applicable when using certain supported models".(1)(2)
- Since you're using GPT-4.1, which is a supported model option in Retell AI, this should be available to you.(1)(2)
- State transitions in multi-prompt agents rely on tool calls (edges between states are implemented as tools), so stricter tool-calling behavior could improve transition reliability.(1)
However, while tool_call_strict_mode may help, it primarily ensures the LLM sticks to the JSON parameter schema rather than guaranteeing the tool will be called more frequently.(3) If your transitions still aren’t triggering after enabling this, you’ll likely need to make your edge descriptions and state prompts more explicit about when transitions should occur.
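For instance, an explicit transition rule inside the state prompt might look like the following (state names and wording are purely illustrative, not from the docs):

```
## Transition Rules
- As soon as the caller confirms their identity, call the transition tool
  to move to the `collect_payment` state. Do not keep the conversation in
  this state after identity is confirmed.
- If the caller asks about anything unrelated to identity verification,
  do NOT transition; answer briefly and return to verification.
```

Spelling out both when to transition and when not to tends to matter more than the edge description alone, since the model sees the state prompt on every turn.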