I need to inject a set of structured data of unknown length into a conversation prompt. Let’s say I have:
A question the customer has asked and we are responding to
The potential resolution status
ID of that thing in the database
There is a list of unknown length of these questions and their structured metadata, and we write them all into the prompt (along with individual dynamic variables). When the agent talks with the customer about the outstanding questions, the resolution statuses get updated and can be referenced, because they have been discussed; but the IDs can’t be referenced (even via a custom function), because they haven’t been discussed. I notice that if I explicitly ask about a question’s ID, it then populates properly when a webhook is called. The others do not.
Has anyone else run into this? Still trying to figure out a solution here.
Based on the available sources, there’s a known issue where dynamic variables in conversation flows may not populate correctly unless they’re explicitly referenced during the conversation.(1)
From the Discord community, one user reported: “I have some dynamic messages based on data from my webhook that is generated from the caller’s phone number however this doesn’t populate and the agent is unaware of the value of the variables, if my conversation is, ‘Read “{{something}}”’, the agent literally says the characters and the word something.”(1) The resolution noted was that dynamic variables should populate correctly when the webhook returns proper values.(1)
What’s happening in your case:
When you inject structured data with IDs that aren’t explicitly discussed in the conversation, the LLM may not “activate” or reference those variables in its context, preventing them from being accessible to webhooks or custom functions. Only when you explicitly ask about the ID does it become part of the active conversation context, making it available for webhook calls.
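One way to reduce the chance of this: render each question’s full metadata as explicit, labeled lines in the prompt rather than an opaque blob, so every ID is visibly present in the context from the start. A minimal sketch, assuming hypothetical field names (`question`, `status`, `id`) based on the description above:

```python
def render_questions(questions):
    """Render each question's full metadata as explicit prompt lines,
    so IDs appear in the context even before they are discussed."""
    lines = []
    for q in questions:
        lines.append(
            f"- Question: {q['question']} | Status: {q['status']} | ID: {q['id']}"
        )
    return "Outstanding customer questions:\n" + "\n".join(lines)

# Example with two questions of the shape described above
prompt_block = render_questions([
    {"question": "Refund for order?", "status": "open", "id": "Q-1042"},
    {"question": "Shipping delay?", "status": "resolved", "id": "Q-1043"},
])
```

The point is only that each ID is spelled out next to its question, giving the model an explicit token-level anchor for every record.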
Potential approaches:
Explicitly reference all IDs in the prompt - Instruct the agent to internally note or acknowledge all IDs at the start of the conversation, even if not spoken aloud to the customer
Use Custom Functions - Custom Functions in Retell AI can make GET, POST, PUT, PATCH, and DELETE requests with custom headers and query parameters.(2) You could structure your custom function to always pass the full list of IDs as parameters, regardless of what’s been discussed
Leverage Post-Call Analysis - Use Post-Call Analysis to extract and map the data you want from the conversation, which can capture information more reliably after the call ends(3)
Store data in webhook payload - When using webhooks, the system sends structured JSON-formatted data to your endpoint.(4) You could structure your initial webhook to include all IDs upfront, maintaining them in your backend rather than relying on the conversation context
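To illustrate the last approach, here is a minimal backend-side sketch. Everything in it is an assumption for illustration: the in-memory `CALL_STATE` store stands in for your database, and the response shape (a flat mapping of IDs and statuses) is hypothetical, not Retell AI’s documented payload format. The idea is that the webhook always returns the full ID list keyed by call, independent of what the conversation has surfaced:

```python
import json

# Hypothetical backend store keyed by call ID; in practice this
# would be your database, not an in-memory dict.
CALL_STATE = {
    "call_abc": {
        "questions": [
            {"question": "Refund for order?", "status": "open", "id": "Q-1042"},
            {"question": "Shipping delay?", "status": "resolved", "id": "Q-1043"},
        ]
    }
}

def build_webhook_response(call_id):
    """Return every question ID upfront in the webhook payload, so
    downstream functions never depend on conversation context."""
    state = CALL_STATE.get(call_id, {"questions": []})
    payload = {
        "question_ids": [q["id"] for q in state["questions"]],
        "statuses": {q["id"]: q["status"] for q in state["questions"]},
    }
    return json.dumps(payload)

response = build_webhook_response("call_abc")
```

Because the backend is the source of truth, a custom function can resolve any ID at any point in the call, whether or not it was mentioned aloud.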
The core issue appears to be that Retell AI’s context-persistent transformer maintains conversation state,(5) but variables not actively referenced in the dialogue may not be fully accessible to downstream functions until they’re explicitly mentioned.