Do your LLM fallbacks work with Anthropic LLMs?

Hello! I was looking at your LLM fallback feature since we want to use Claude Sonnet for our bot, but Claude tends to be down pretty frequently. Do your fallback settings work with Anthropic LLMs as well? Just wanted to make sure, since your page on it either states that it only works with OpenAI models or is just using OpenAI as an example: How Retell AI Handles Outages with a 99.99% Uptime | Retell AI

It says: “Similarly, if your call is powered by an OpenAI LLM and OpenAI becomes temporarily unavailable, Retell AI will automatically route that conversation through a hosted backup model (e.g., Azure-hosted equivalent) to maintain conversational flow.”

Hi @caleb

LLM fallback and retries are automatically built in, and are not limited to any specific LLM provider. It is a general resilience feature of the platform.
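For anyone curious what this kind of retry-then-fallback behavior looks like conceptually, here is a minimal sketch. This is purely illustrative, not Retell's actual implementation; the provider functions, retry counts, and backoff values are all made-up assumptions:

```python
import time

def call_with_fallback(providers, prompt, retries=2, backoff=0.1):
    """Try each provider in order, retrying transient errors
    before falling back to the next. Illustrative sketch only."""
    last_error = None
    for provider in providers:
        for attempt in range(retries):
            try:
                return provider(prompt)
            except RuntimeError as err:  # stand-in for a provider outage
                last_error = err
                time.sleep(backoff * (attempt + 1))  # simple linear backoff
    raise RuntimeError("all providers failed") from last_error

# Hypothetical providers to demonstrate the flow:
def claude(prompt):
    raise RuntimeError("simulated Anthropic outage")

def backup_model(prompt):
    return f"backup answer to: {prompt}"

print(call_with_fallback([claude, backup_model], "hello"))
# prints "backup answer to: hello"
```

The point is just that the fallback chain is provider-agnostic: the primary model could be Anthropic, OpenAI, or anything else, and the same retry/fallback logic applies.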

Thank you