Conversation Flow agent ignores static_text nodes — LLM generating responses instead of speaking hardcoded text

**Summary:**

I built a Conversation Flow agent via the API with 28 static_text nodes and 0 prompt nodes. Despite this, the agent completely ignores the static node content and generates its own freeform responses as if it were a general_prompt agent.

---

**What I built:**

- Created a Conversation Flow via `POST /create-conversation-flow`

- 29 nodes total: 28 `static_text`, 1 `end_call`

- Global prompt is minimal (personality/context only, ~150 chars)

- Model: claude-4.6-sonnet, temperature 0.2

- Agent created via `POST /create-agent` pointing to the conversation flow
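For completeness, the flow and agent were created roughly like this (a trimmed sketch, not my full 29-node payload; the node/edge field names reflect my reading of the Conversation Flow schema and should be treated as assumptions, as should the `response_engine` shape on the agent):

```python
API_BASE = "https://api.retellai.com"

def make_static_node(node_id, text, next_node_id=None):
    """Build a conversation node with a static_text instruction —
    the agent is supposed to speak `text` verbatim at this node."""
    node = {
        "id": node_id,
        "type": "conversation",
        "instruction": {"type": "static_text", "text": text},
    }
    if next_node_id:
        # Transition condition evaluated after the static text is spoken
        node["edges"] = [{
            "destination_node_id": next_node_id,
            "transition_condition": {"type": "prompt",
                                     "prompt": "User has responded"},
        }]
    return node

flow_payload = {
    # Minimal global prompt: personality/context only, ~150 chars
    "global_prompt": "You are Samantha, a friendly outbound rep.",
    "start_node_id": "opening",
    "nodes": [
        make_static_node("opening",
                         "Hey [name], this is Samantha — I work with "
                         "roofing businesses in Southwest Florida. Quick "
                         "question about missed calls — got thirty seconds?",
                         next_node_id="end"),
        {"id": "end", "type": "end"},
    ],
    "model_temperature": 0.2,
}
# POST {API_BASE}/create-conversation-flow with flow_payload as the JSON body,
# then POST {API_BASE}/create-agent pointing its response engine at the
# returned conversation_flow_id.
```

Note there are zero `prompt`-type instructions anywhere in the payload, so nothing in the flow itself should invite freeform generation.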

**Expected behavior:**

Agent should speak the exact text in each static_text node verbatim, then evaluate transition conditions to route to the next node.

**Actual behavior:**

Agent completely ignores static node text and generates its own responses from scratch — exactly like a general_prompt agent. The opening static_text node says one thing; the agent says something completely different.

**Example:**

- Node `opening` static_text: *“Hey [name], this is Samantha — I work with roofing businesses in Southwest Florida. Quick question about missed calls — got thirty seconds?”*

- What agent actually said: *“Hi there! This is Samantha calling from Growth Mindset AI. How are you doing today?”*

**What I’ve tried:**

1. Built with 29 nodes (25 static, 3 prompt, 1 end) — same issue

2. Patched all prompt nodes to static_text (28 static, 0 prompt) — same issue

3. Simplified global_prompt to minimal instructions — same issue

4. Lowered temperature to 0.1 — same issue

5. Used `override_agent_id` on outbound calls to force the new agent
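Step 5 looked like this (a sketch; the phone numbers are placeholders, and the endpoint path and field names are my understanding of the outbound-call API):

```python
# Force this specific call to use the new Conversation Flow agent,
# regardless of which agent is bound to the from_number.
call_payload = {
    "from_number": "+15550000001",  # placeholder
    "to_number":   "+15550000002",  # placeholder
    "override_agent_id": "agent_9bff916147948f4e70119ff5a9",
}
# POST https://api.retellai.com/v2/create-phone-call with call_payload
```

Call logs confirm the override took effect (the call records show the new agent ID), so the wrong-agent explanation is ruled out.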

**Questions:**

1. Is there a known difference in behavior between Conversation Flow agents built via API vs. the dashboard?

2. Is there a required field or setup step when creating via API that I might be missing?

3. Does the flow need to be “published” for static nodes to work? (API returns `is_published: false` and I cannot set it to `true` via PATCH)

4. Could the `is_published: false` state be causing the agent to fall back to some default LLM behavior?

The `is_published` issue seems like the most likely culprit — PATCH with `{"is_published": true}` returns 200, but the field stays `false`. Is publishing dashboard-only for Conversation Flow agents?
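A quick way to demonstrate the silent ignore is to diff the fields sent in the PATCH against what a follow-up GET returns (a small hypothetical helper, shown with the values observed here):

```python
def find_ignored_fields(patched: dict, fetched: dict) -> list:
    """Return the names of fields that were PATCHed (200 OK) but whose
    value in a subsequent GET does not match — i.e. silently ignored."""
    return [k for k, v in patched.items() if fetched.get(k) != v]

# Values observed in this ticket:
patched = {"is_published": True}
fetched = {"is_published": False,
           "conversation_flow_id": "conversation_flow_a3e760e9499b"}
print(find_ignored_fields(patched, fetched))  # → ['is_published']
```

A 200 response with an unchanged field is exactly the pattern above, which is why the question is whether publishing is simply not writable through the API.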

Any help appreciated. Happy to share more details.

Thank you for reaching out to Retell AI Support. We’ve received your ticket and our team will respond within 8 hours.

Hi Retell,

Thanks for the details. To help us investigate and support your agent, could you please provide the following?

  1. If a particular agent was impacted, what is the agent ID?
  2. What was the intended purpose of this agent?
  3. Please share a proof of company registration document.
  4. What is your company’s website (if available)?
  5. If you are submitting on behalf of another company, please attach an authorization letter.

Best,
Evy AI
AI Support Agent @ Retell AI

  1. Agent ID: agent_9bff916147948f4e70119ff5a9
  2. Conversation Flow ID: conversation_flow_a3e760e9499b
  3. Company website: growthmindset.ai
  4. Intended purpose: Outbound AI sales agent for Growth Mindset AI (growthmindset.ai) — calls roofing and HVAC business owners to demo our AI voice agent service.
  5. Core issue: The Conversation Flow was created entirely via the API. All 28 conversation nodes are static_text type, yet the agent ignores the node content and generates its own freeform LLM responses on every call.
  6. Strong suspicion: The flow returns is_published: false, and PATCH requests to set it to true are accepted (200 OK) but the field never changes. We believe the agent may be falling back to default LLM behavior because the flow is unpublished. Is publishing Conversation Flow agents dashboard-only?