Based on the available documentation, tool calling behavior differences between models are a known consideration in Retell AI.(1)
When tools aren’t being triggered as expected, the recommended approach is to make your tool-calling instructions more explicit in your prompts.(1)(2) For Single/Multi Prompt agents, you should define clear triggers for tool usage rather than relying on tool descriptions alone.(1)
Here’s what to include in your prompt:(1)
- Specify exact conditions for tool usage: reference tools by their exact names and list specific trigger words or phrases that should activate them.(1)
- Define sequences: clarify when tools should be called and in what order.(1)
- Set boundaries: explain when NOT to call certain tools.(1)
- Provide context: explain why each tool is being called.(1)
For example, structure your tool instructions like this:(1)
```
## Tool Usage Instructions
1. Gather initial information about the customer's issue.
2. Determine the type of request:
   - If customer mentions "refund" or "money back":
     → Call function `transfer_to_support` immediately
   - If customer needs order status:
     → Call function `check_order_status` with order_id
```
Additionally, GPT-4.1 supports `tool_call_strict_mode`, which can improve tool-calling reliability when enabled.(3)(4)
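Strict tool calling generally requires the function's JSON Schema to be fully specified. As an illustration only (the `check_order_status` name comes from the example above; the exact payload shape Retell expects may differ), a strict-compatible function definition might look like this: every property is listed in `required` and `additionalProperties` is set to `false`.

```python
# Hedged sketch: a function definition shaped for strict tool calling.
# In strict mode the schema must enumerate every property as required
# and forbid additional properties.
check_order_status_tool = {
    "name": "check_order_status",
    "description": "Look up the current status of a customer's order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The customer's order ID.",
            },
        },
        "required": ["order_id"],
        "additionalProperties": False,
    },
}
```

The stricter the schema, the less room the model has to emit malformed arguments, which is where much of the reliability gain comes from.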
If you continue experiencing issues after tightening your tool-calling instructions, consider these debugging steps from the community:(2)
- Check the prompt section that should trigger the function
- Treat it as a prompting/configuration problem rather than a platform bug
- Test with different temperature settings
For hands-on troubleshooting assistance, you can:
- Ask questions on the Retell Discord
- Hire a certified Retell partner