I understand you’re experiencing a significant latency increase since yesterday. Based on the available troubleshooting guidance, here are the recommended steps to diagnose and resolve this issue:
Immediate Troubleshooting Steps
Monitor LLM Response Time(1)
Check your current LLM latency and compare it against the normal range of 500ms - 900ms.(1) If it is consistently above 900ms, consider using the fast tier, switching to an alternative LLM provider, or waiting for your current provider to resolve its performance issues.(1)
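As a rough illustration of "monitoring" in practice, the sketch below times a request loop and flags when the median exceeds the 900ms upper bound. The `call_llm` stub is a placeholder assumption, not a real SDK call; swap in your actual provider request.

```python
# Hedged sketch: sample LLM response latency and flag when it drifts
# above the 900 ms upper bound mentioned in the guidance.
import statistics
import time

LATENCY_UPPER_BOUND_MS = 900  # top of the normal 500-900 ms range

def call_llm(prompt: str) -> str:
    """Placeholder for the real LLM request; replace with your provider call."""
    time.sleep(0.01)  # simulated work
    return "ok"

def measure_latency_ms(prompt: str) -> float:
    start = time.perf_counter()
    call_llm(prompt)
    return (time.perf_counter() - start) * 1000

def is_consistently_slow(samples_ms, threshold_ms=LATENCY_UPPER_BOUND_MS):
    # "Consistently above" is interpreted here as the median sample
    # exceeding the threshold, so a single outlier does not alarm.
    return statistics.median(samples_ms) > threshold_ms

samples = [measure_latency_ms("ping") for _ in range(5)]
print(f"median latency: {statistics.median(samples):.0f} ms")
print("consistently slow:", is_consistently_slow(samples))
```

Using the median rather than a single sample avoids switching providers over one slow request.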
Check the Status Page(1)
Visit the Status Page to check for ongoing incidents.(1) If one is active, wait for it to resolve before making further changes.(1)
Consider Geographic Distance(1)
International phone calls may introduce additional latency due to geographic distance.(1) Check if you’re making calls between different countries or continents, and consider getting a local phone number in the same region as your users.(1)
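To see why distance alone matters, a back-of-the-envelope calculation helps: signals in optical fiber travel at roughly two-thirds the speed of light (~200,000 km/s), so every 100 km adds about 1 ms of round-trip time before any routing overhead. The distances below are illustrative examples, not measurements from your calls.

```python
# Physical floor on round-trip latency imposed by distance alone.
FIBER_SPEED_KM_PER_MS = 200.0  # ~2/3 of the speed of light, per millisecond

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical best-case RTT; real networks add routing and queuing."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for label, km in [("same city", 50), ("same continent", 2_000),
                  ("transatlantic", 6_000), ("transpacific", 10_000)]:
    print(f"{label:>15}: >= {min_round_trip_ms(km):5.1f} ms")
```

A transpacific call therefore starts at least ~100 ms behind a local one, which is why a phone number in the same region as your users helps.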
Turn-Taking Model Impact(2)
In some cases, extra end-to-end latency can be explained by turn-taking taking longer during non-standard interactions.(2) If you have many global nodes with long instructions, all of those instructions are added to the node transition prompt, which makes node transitions consistently slow.(2) Trimming your global node instructions will help with node transition latency.(2)
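The global-node effect above can be sketched as a simple audit: sum the instruction text that would be appended to the transition prompt and flag when it exceeds a budget. The node structure and the 1,500-character budget are illustrative assumptions, not platform limits; check your platform's documentation for actual figures.

```python
# Hedged sketch: audit combined global-node instruction size, since
# that combined length is what inflates the node transition prompt.
global_nodes = [  # hypothetical example nodes
    {"name": "handle_interrupt", "instructions": "x" * 400},
    {"name": "compliance_notice", "instructions": "y" * 700},
    {"name": "escalation_rules", "instructions": "z" * 900},
]

PROMPT_BUDGET_CHARS = 1_500  # assumed budget for illustration only

def transition_prompt_chars(nodes) -> int:
    return sum(len(n["instructions"]) for n in nodes)

total = transition_prompt_chars(global_nodes)
print(f"global-node instructions: {total} chars (budget {PROMPT_BUDGET_CHARS})")
if total > PROMPT_BUDGET_CHARS:
    print("over budget: trim instructions or demote some global nodes")
```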
Configuration Optimizations
Change to a Faster LLM(3)
Use a more efficient language model to reduce response time and improve performance.(3)
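Before switching, it is worth benchmarking the candidate against your current model on the same prompt. The two stub functions below simulate providers with different speeds and are assumptions standing in for real calls to the models you are evaluating.

```python
# Minimal A/B latency comparison sketch; replace the stubs with
# real requests to the two models under consideration.
import time

def model_current(prompt):    # stands in for today's model
    time.sleep(0.02)
    return "ok"

def model_candidate(prompt):  # stands in for the faster alternative
    time.sleep(0.005)
    return "ok"

def mean_latency_ms(model, prompt="hello", runs=5):
    start = time.perf_counter()
    for _ in range(runs):
        model(prompt)
    return (time.perf_counter() - start) * 1000 / runs

current = mean_latency_ms(model_current)
candidate = mean_latency_ms(model_candidate)
print(f"current: {current:.1f} ms, candidate: {candidate:.1f} ms")
print("switch" if candidate < current else "keep current")
```

Run the comparison with your real production prompts, since latency often scales with prompt and completion length.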
Change to a Faster TTS(3)
Implement a faster text-to-speech system for quicker audio output and smoother conversations.(3)
Better Turn-Taking Model(3)
Implement a more sophisticated turn-taking model to accurately detect the end of a user’s turn, preventing the AI from interrupting or prematurely responding.(3)
Get Support
If the above steps don't resolve your latency issues, reach out to support so the team can investigate further.(1)
Since you mentioned this started yesterday without any agent changes on your end, checking the Status Page and monitoring LLM response times should be your first priorities, to rule out a platform-wide issue.