Block Interruptions on function nodes disappeared, with no alternative?

I have a function node with block_interruptions: true set via API, but the caller can still interrupt the agent during tool execution.

I also tried interruption_sensitivity: 0 on the same function node, with the same result.
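For clarity, here is a sketch of the settings I mean. The field names (block_interruptions, interruption_sensitivity) are the ones I'm setting via the API; the surrounding structure is illustrative only, not the exact payload shape:

```python
# Illustrative sketch only: field names are from my API calls, the rest of the
# node structure is made up for readability.
function_node = {
    "type": "function",
    "name": "check_order_zip",         # the node discussed later in this thread
    "tool": "verify_order_zip",
    "block_interruptions": True,       # set via API, but has no effect on calls
    "interruption_sensitivity": 0,     # also tried, same result
}

def interruptions_blocked(node: dict) -> bool:
    """What I expect: either setting alone should stop caller interruptions."""
    return bool(node.get("block_interruptions", False)) or \
        node.get("interruption_sensitivity", 1) == 0

print(interruptions_blocked(function_node))  # True
```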

The docs page (Function Node Overview - Retell AI) shows a “Block Interruptions” toggle. What happened to it? I’ve already had quite a few calls go terribly because the AI started hallucinating inside the function node.

Hi @maurizio

Could you please share the call IDs along with the model you’re currently using? This will help us investigate the issue more effectively.

Thank you!

Sure! Call IDs:

  • call_7e16ad79b5e04d3cb01ec2c45f5

  • call_ef5b6248479827348cc1fc65800

Model: gpt-5-mini (cascading)

Hi @maurizio

Have you tried GPT-4.1? It tends to provide more stable and reliable results.

No, I have not tried that. But the issue is that I don’t want the user to be able to interrupt the agent at the function node, and there used to be a Block Interruptions feature for exactly that. I’m not sure how switching to GPT-4.1 would prevent the user from interrupting the agent at the function node.

Hi @maurizio

I’ve forwarded your Call IDs to our internal team for review.

We’ll keep you updated as soon as we have more information.

Best regards

Hi @maurizio

In both calls, we did not observe any interruptions. Could you please share the exact timestamp of when this occurred, along with a screenshot showing the agent being interrupted? This will help us investigate the issue more accurately.

Thank you for your cooperation!

Call 1: call_7e16ad79b5e04d3cb01ec2c45f5

At ~19.7s, the agent is in the check_order_zip function node executing the verify_order_zip tool. The user says “I think.” at ~19.8s. The agent gets interrupted and starts generating at ~19.7s: “Thanks - I’ve got ZIP code 2002. Verifying now… Your order shows as shipped and the tracking says it’s out for delivery today.”

The tool result came back at ~20s saying the ZIP didn’t match, but the agent had already started hallucinating a response ~1 second before the result arrived. The user’s speech triggered the LLM to generate instead of waiting for the tool result.

Call 2: call_ef5b6248479827348cc1fc65800

At ~27.4s the user provides a second ZIP (“seven nine six eight”) and the function node fires again. At ~31.5s the user says “I think.” during tool execution, which interrupts the function node. At ~34.4s the agent hallucinates a response: “Okay - just to confirm, did you mean 07968, 79680, or 7968? ZIPs are five digits. Which one should I try?” - this is fabricated by the LLM, not based on any tool result. Then at ~45.7s the user says “What?” and the agent finally responds based on the actual tool result: “That ZIP didn’t match.”

The user’s “I think” at ~31.5s during function execution triggered the LLM to generate a hallucinated clarification response instead of waiting for the tool result.

Hi @maurizio

Thank you for your response. I’ve shared this with our team for further review.

We’ll keep you updated as soon as we have more information.

Best regards

Hello @maurizio,

In the first call:

The user finished the complete sentence, ending with “I think,” before the agent’s turn, so there was no interruption here.

The agent said, “Thanks - I’ve got ZIP code 2002. Verifying now.” This is not a hallucination; you have an option enabled that makes the agent talk while waiting for the function to execute.

Then you can see the result came back before the agent started speaking. So far, no hallucination or interruption had occurred at this point, from either the user or the agent; everyone waited for their turn.

The same thing happened in the second call.

Hey Omar, I just checked and I might have turned “speak during execution” on afterwards. Anyway, here is a new call where speak_during_execution is OFF:

Call ID: call_29343a92fbe99f7a7972c9c167d
Model: gpt-5-mini (cascading)
Function node: check_order_zip with speak_during_execution: false, wait_for_result: true
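To make the repro setup unambiguous, here is a sketch of the node config for this call. Only speak_during_execution and wait_for_result are the actual settings I changed; the rest of the structure is illustrative:

```python
# Illustrative sketch of the repro config; field names from this thread,
# everything else is made up for readability.
check_order_zip = {
    "type": "function",
    "name": "check_order_zip",
    "tool": "verify_order_zip",
    "speak_during_execution": False,  # now OFF for this call
    "wait_for_result": True,
}

def should_wait_silently(node: dict) -> bool:
    """Expected behavior: with speak_during_execution off and wait_for_result
    on, the agent should stay silent until the tool result arrives, even if
    the caller speaks mid-execution."""
    return (not node.get("speak_during_execution", True)) and \
        node.get("wait_for_result", False)

print(should_wait_silently(check_order_zip))  # True
```

With this config, nothing the agent says during tool execution should be possible at all, which is why the generation at ~42.93s below looks like a bug.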

Timestamps:

  • 34.09s - 38.70s: User provides ZIP code (“two nine nine two”)

  • Function node fires the verify_order_zip tool

  • 40.14s - 40.62s: User says “I think.” - this overlaps with the tool execution

  • 42.93s - 56.07s: Agent gets interrupted and generates a response before the tool result arrives. The generated text includes leaked system instructions and fabricated questions about billing vs shipping ZIP.

  • 57.62s: Agent then responds again, this time based on the actual tool result.

The agent was interrupted during function node execution even though speak_during_execution was set to false. The user’s speech (“I think”) triggered the LLM to generate while the tool was still processing.

@Omar_Ashraf anything on this yet?