Gemini 3.0 Flash Error - "missing a 'thought_signature'" when handling parallel function call results

Environment

  • LLM Provider: Gemini 3.0 Flash (via Retell)

  • Issue Type: LLM Integration Bug

  • Severity: High (blocking voice conversations)

Problem Description

When Retell sends parallel function calls (multiple functions executed simultaneously) and our backend returns successful results for all of them, the Gemini 3.0 Flash LLM fails with the following error:

Unable to submit request because function call 'default_api:lock_state' in the 11. content block is missing a 'thought_signature'

This causes the voice conversation to hang/fail after the function calls complete successfully.

Steps to Reproduce

  1. Configure a Retell agent with Gemini 3.0 Flash as the LLM

  2. Define custom functions that can be called in parallel

  3. During a conversation, trigger multiple function calls simultaneously

  4. All functions execute successfully and return valid responses

  5. Error: Gemini rejects the function results with thought_signature error

Expected Behavior

The LLM should process the function call results and continue the conversation.

Actual Behavior

The conversation hangs. Dashboard logs show:

error: Error encountered in LLM call: Failed to get response from any LLM provider: 
{"error":{"message":"Unable to submit request because function call `default_api:lock_state` in the 11. content block is missing a `thought_signature`"}}

Additional Context

  • The thought_signature appears to be a Gemini-specific requirement tied to its thinking feature: Gemini attaches an opaque signature to function-call parts it emits, and expects that signature to be echoed back unchanged when the conversation history is resent with the function results

  • Individual function calls work fine; the issue only occurs with parallel function call results

  • Our backend successfully executes all functions and returns valid JSON responses
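To make the suspected failure mode concrete, here is a minimal sketch of what "echoing the signature back" looks like at the payload level. Field names (`functionCall`, `functionResponse`, `thoughtSignature`, `role`, `parts`) follow the public Gemini REST API; the `build_next_turn` helper and the signature values are hypothetical, for illustration only:

```python
# Sketch: preserving thought signatures when returning parallel function
# results to Gemini. build_next_turn is a hypothetical helper, not an SDK call.

def build_next_turn(history, model_parts, results):
    """Append the model's function-call turn (signatures intact) and a
    user turn carrying one functionResponse per parallel call."""
    # 1. Re-send the model's function-call parts exactly as received,
    #    including each part's thoughtSignature. Stripping or reordering
    #    them is the kind of change that can trigger
    #    "... is missing a 'thought_signature'".
    history = history + [{"role": "model", "parts": model_parts}]

    # 2. Return one functionResponse part per parallel call, in call order.
    response_parts = [
        {"functionResponse": {"name": name, "response": result}}
        for name, result in results
    ]
    return history + [{"role": "user", "parts": response_parts}]


# Example: two parallel calls, shaped the way Gemini might return them.
model_parts = [
    {"functionCall": {"name": "lock_state", "args": {}},
     "thoughtSignature": "opaque-sig-1"},   # must survive the round trip
    {"functionCall": {"name": "get_temperature", "args": {}},
     "thoughtSignature": "opaque-sig-2"},
]
contents = build_next_turn(
    [], model_parts,
    [("lock_state", {"locked": True}),
     ("get_temperature", {"celsius": 21})],
)
```

If the intermediary (here, Retell) rebuilds the history from its own transcript instead of replaying the parts verbatim, the signatures are lost and Gemini rejects the request.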


The Gemini 3.0 Flash thought_signature error you’re encountering with parallel function calls isn’t directly addressed in the available Retell AI documentation. However, I can provide context based on what’s documented about function calling behavior.

Retell AI’s function calling implementation has specific behaviors around parallel execution. When you create a Retell LLM, you can configure tool_call_strict_mode which affects how tool calls are handled(1).

Based on community discussions, there have been reported issues with the LLM calling functions without waiting for previous function completion(2). One user reported: “this directly conflicts with your documentation: When there’re tools being called, the model would not be able to make any new tool calls until the current tool call is completed”(2).

For your specific Gemini 3.0 Flash issue with the thought_signature error during parallel function calls, this appears to be a Gemini-specific requirement that may not be fully compatible with Retell’s current implementation.

Recommended actions:

  1. Try enabling tool_call_strict_mode when creating your Retell LLM(1)
  2. Consider switching to a different model: Retell supports multiple LLM options including GPT-4.1, GPT-5 models, and Claude variants(1)
  3. Contact Retell support directly through their Discord or Contact Sales(3)
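For step 1, a hedged sketch of the request shape, assuming Retell's create-retell-llm endpoint and the tool_call_strict_mode flag referenced in (1). The model identifier and prompt are placeholders; verify all field names against the current Retell API reference before use:

```python
import json

# Sketch: enabling tool_call_strict_mode when creating a Retell LLM.
# Endpoint and flag come from Retell's documented API; the model string
# and prompt below are assumed values, not verified identifiers.
payload = {
    "model": "gemini-3.0-flash",      # assumed model identifier
    "tool_call_strict_mode": True,    # the flag referenced in (1)
    "general_prompt": "You are a helpful voice agent.",
}

# The actual request, left commented so the sketch stays self-contained:
# import requests
# resp = requests.post(
#     "https://api.retellai.com/create-retell-llm",
#     headers={"Authorization": "Bearer YOUR_RETELL_API_KEY"},
#     json=payload,
# )

body = json.dumps(payload, indent=2)
```

Note that strict mode governs how tool calls are handled on Retell's side; whether it changes how conversation history is replayed to Gemini (and thus whether signatures survive) is exactly what needs testing here.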

The thought_signature requirement appears to be specific to Gemini’s extended thinking feature and may require Retell to update their Gemini integration to properly handle this parameter during parallel function call responses.

Hello @tyec

Can you share the call id that has this issue?