During the function call, the “talk while waiting” option is disabled and the previous conversation node is set to skip user response. However, if the user says “thank you” or speaks during the function call execution, the system responds with a strange, fast-paced robotic voice.
Call ID 1 = call_26e4fdf795bdaaeb64cd34a3c2f (around the 0:32 mark, it should be silent, but the agent gives a fast-paced answer)
Call ID 2 = call_6b9f60a7a436b5caa0836dc88ba (around the 0:31 mark, it should be silent, but the agent gives a fast-paced answer)
The team checked, and what’s happening on both calls (around 0:31–0:32) is that as soon as the call enters the Waiting node, the runtime applies the agent’s default voice speed of 1.74×. Most of your other nodes override voiceSpeed down to ~0.9–1.06, but the Waiting node has no override, so it falls back to the 1.74 default.
“Talk while waiting = disabled” only stops the agent from speaking proactively during the function execution. It does not prevent a response when the user actively says something (like “thank you”). When that happens, the LLM generates a short reply (“Please hold while I connect you” / “Please hold while I transfer you”) and it’s rendered at 1.74× — that’s the robotic/fast voice you heard.
You can enable “skip response” on the Waiting node so the agent remains silent.
You may also want to lower the agent-level default voiceSpeed (currently 1.74), since that’s the value any node without an override will inherit.
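The fallback behavior described above can be sketched as follows. This is a minimal illustration, not the platform's actual API; the node names (`Greeting`, `LanguageRouting`) are hypothetical, and the speed values mirror the ones reported in this thread.

```python
# Sketch of per-node voiceSpeed resolution: a node's override wins,
# otherwise the agent-level default applies. (Hypothetical model, not
# the platform's real schema.)

AGENT_DEFAULT_VOICE_SPEED = 1.74  # agent-level default reported above

nodes = {
    "Greeting": {"voiceSpeed": 0.95},        # explicit override (hypothetical node)
    "LanguageRouting": {"voiceSpeed": 1.06}, # explicit override (hypothetical node)
    "Waiting": {},                           # no override -> inherits the default
}

def effective_voice_speed(node_name: str) -> float:
    """Return the node's voiceSpeed override if set, else the agent default."""
    return nodes[node_name].get("voiceSpeed", AGENT_DEFAULT_VOICE_SPEED)

print(effective_voice_speed("Greeting"))  # 0.95
print(effective_voice_speed("Waiting"))   # 1.74 -> the fast robotic voice
```

Either adding an override on the Waiting node or lowering the agent-level default closes the gap; the override is the more targeted fix.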
@mark1, I’m sorry, but this approach won’t work. We currently handle two languages, English and Latin American Spanish, and plan to expand to 4–5 more soon. Per client requirements, we need language-specific dialogues, with this node serving as a routing point for language detection before directing to the appropriate dialogue. These must remain separate nodes; otherwise, the LLM won’t be able to manage the complexity in a specific order. Dividing the workflow across different conversational nodes is necessary.
The issue we’re encountering is that during validation in the function call node, which triggers an API endpoint and can take around five seconds, the agent begins speaking randomly, and we have no control over it. We need the agent to either remain silent during this time or say something we specify, because this unexpected speech can occur at any point during any other function call as well.
This control is available in the conversation node, but not in the function call node.
Instead of having the node before it skip the user response, you can set it to “after user response.” Then, when the customer responds with “thank you,” the flow moves on to the waiting function, and the customer won’t be able to interrupt the agent again until the function finishes.
Even after trying that, the issue persists: the agent still generates fast, random responses if the user speaks during execution of the function node. Additionally, setting it to “After User Responds” makes the node wait indefinitely for input, even though the user isn’t always expected to reply; the caller is simply played a message asking them to wait while they’re transferred.
This appears to be a bug: even when the settings specify using a short phrase to fill silence, the agent is expected to remain silent during function execution, yet it speaks anyway.
As for the fast response, this happens because your default voice speed is very high, 1.62×. On the other nodes you overrode the voice speed to a normal value, but on this node you didn’t, which is why it fell back to the default speed.
If the customer speaks while the function is executing, the agent will talk, saying something like “one second, I’m checking that,” even if you turned off “talk while waiting.” That option only controls whether the AI proactively says something like “wait a second” while executing the function.
I will submit a feature request to make the AI fully silent while waiting, and I will update you on it.