Anyone else having node transitions not working on GPT-4.1?

Hey everyone,

Has anyone experienced node transitions completely stopping when switching from GPT-4o to GPT-4.1?

We have a multi-node agent that works perfectly on GPT-4o — all transition tool calls fire as expected and the agent moves through nodes correctly. But when we use the exact same prompts on GPT-4.1, the agent never calls the transition tool at all. It just treats everything as one big conversation and flows through all sections without ever transitioning between nodes.

The prompts haven’t changed, so it seems like something specific to how GPT-4.1 handles tool calls on Retell’s platform.

Anyone running into the same thing, or found a workaround? Would love to hear if this is a known issue or just us.

Thanks!

Based on the available documentation, tool-calling behavior differences between models are a known consideration in Retell AI.(1)

When tools aren’t being triggered as expected, the recommended approach is to make your tool-calling instructions more explicit in your prompts.(1)(2) For Single/Multi Prompt agents, you should define clear triggers for tool usage rather than relying on tool descriptions alone.(1)

Here’s what to include in your prompt:(1)

  • Specify exact conditions for tool usage: reference tools by their exact names and list specific trigger words or phrases that should activate them.(1)
  • Define sequences: clarify when tools should be called and in what order.(1)
  • Set boundaries: explain when NOT to call certain tools.(1)
  • Provide context: explain why each tool is being called.(1)

For example, structure your tool instructions like this:(1)

```
## Tool Usage Instructions
1. Gather initial information about the customer's issue.
2. Determine the type of request:
   - If customer mentions "refund" or "money back":
     → Call function `transfer_to_support` immediately
   - If customer needs order status:
     → Call function `check_order_status` with order_id
```

(1)

Additionally, GPT-4.1 supports tool_call_strict_mode, which can improve tool calling reliability when enabled.(3)(4)

If you continue experiencing issues after tightening your tool-calling instructions, consider these debugging steps from the community:(2)

  • Check the prompt section that should trigger the function
  • Treat it as a prompting/configuration problem rather than a platform bug
  • Test with different temperature settings
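
As a sketch of that last step, one way to A/B test temperature without hand-editing payloads is a small helper that builds the update body for the Update Retell LLM call. The `model_temperature` and `tool_call_strict_mode` field names are taken from the Retell LLM API docs; verify them against the current reference before relying on this.

```typescript
// Hypothetical helper: builds an Update Retell LLM payload for temperature
// experiments. Field names follow the Retell LLM API docs (verify against
// the current reference).
interface LlmUpdatePayload {
  model_temperature?: number;
  tool_call_strict_mode?: boolean;
}

function buildTemperatureTest(temperature: number): LlmUpdatePayload {
  // Clamp to the 0..1 range documented for model_temperature.
  const clamped = Math.min(1, Math.max(0, temperature));
  return { model_temperature: clamped };
}

// Usage (assuming a configured retell-sdk client):
//   await client.llm.update('YOUR_LLM_ID', buildTemperatureTest(0.2));
```

Lower temperatures tend to make tool-call decisions more deterministic, which is usually what you want while debugging missed transitions.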


Our LLM does not have “tool_call_strict_mode” set. We’d like to enable it. The states and edges are correctly configured, but the model is not triggering any state transitions during calls. Could enabling tool_call_strict_mode resolve this?

Yes, enabling tool_call_strict_mode could potentially help with your state transition issues. This setting is available when creating or updating a Retell LLM and affects how tool calls are handled.(1)(2)

According to the documentation, tool_call_strict_mode is described as “Whether to use strict mode for tool calls. Only applicable when using certain supported models.”(1)(2) When set to true, it ensures stricter adherence to the tool calling schema.(1)(2)

How to enable it:

You can enable tool_call_strict_mode by updating your existing Retell LLM using the Update Retell LLM API:(2)

```typescript
import Retell from 'retell-sdk';

const client = new Retell({
  apiKey: 'YOUR_RETELL_API_KEY',
});

const llmResponse = await client.llm.update('16b980523634a6dc504898cda492e939', {
  tool_call_strict_mode: true,
});
```

(2)

Or when creating a new Retell LLM:(1)

```typescript
import Retell from 'retell-sdk';

const client = new Retell({
  apiKey: 'YOUR_RETELL_API_KEY',
});

const llmResponse = await client.llm.create({
  tool_call_strict_mode: true,
});
```

(1)

Important notes:

  • This feature is “only applicable when using certain supported models”(1)(2)
  • Since you’re using GPT-4.1, which is a supported model option in Retell AI, this should be available to you(1)(2)
  • State transitions in multi-prompt agents rely on tool calls (edges between states are implemented as tools), so stricter tool calling behavior could improve transition reliability(1)

However, while tool_call_strict_mode may help, it primarily ensures the LLM sticks to the JSON parameter schema rather than guaranteeing the tool will be called more frequently.(3) If your transitions still aren’t triggering after enabling this, you’ll likely need to make your edge descriptions and state prompts more explicit about when transitions should occur.
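
As a sketch of what "more explicit" can look like at the config level, here is a hypothetical states/edges payload with trigger-based edge descriptions and a state prompt that names the transition condition outright. The shape follows the Retell LLM states/edges schema as I understand it from the docs; the state names and wording are illustrative, so check the field names against the current API reference.

```typescript
// Sketch: explicit edge descriptions so the model has an unambiguous signal
// for when to call the transition tool. Shape assumed from the Retell LLM
// states/edges schema (verify field names against the current reference).
interface Edge {
  destination_state_name: string;
  description: string;
}

interface State {
  name: string;
  state_prompt: string;
  edges: Edge[];
}

function buildStates(): State[] {
  return [
    {
      name: 'intro',
      state_prompt:
        'Greet the caller and collect their name. ' +
        'Once the name is confirmed, you MUST transition to company_info.',
      edges: [
        {
          destination_state_name: 'company_info',
          description:
            'Transition IMMEDIATELY after the caller confirms their name. ' +
            'Do not continue the conversation in this state.',
        },
      ],
    },
  ];
}

// Usage (assuming a configured retell-sdk client):
//   await client.llm.update('YOUR_LLM_ID', { states: buildStates() });
```

The point is that the transition condition appears twice: once in the state prompt (so the model plans for it) and once in the edge description (so the tool itself advertises when it should fire).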

Is it not enough to write the prompt like this?

Transition to the next section:
“Now, let me start by giving you some information about our company and the role you’ve applied for.”

  • Immediately proceed to Section 2 and begin the introduction of Section 2 content. Avoid pauses or waiting for additional prompts before continuing.
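
Going by the earlier advice in this thread (reference tools by their exact names and define explicit triggers), that wording may not be enough on its own for GPT-4.1, because it never names the transition tool. A hedged rewrite might look like the following, where `transition_to_section_2` is a hypothetical tool name standing in for whatever your edge's actual tool is called:

```
Transition to the next section:
"Now, let me start by giving you some information about our company and the role you've applied for."

- After saying the line above, you MUST call the function `transition_to_section_2` immediately.
- Do NOT continue answering in this section, and do not wait for the caller to speak before transitioning.
```

The difference is that the instruction ties the transition to an exact tool name and a concrete trigger point, rather than describing the flow in general terms.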