**Summary:**
I built a Conversation Flow agent via the API with 28 static_text nodes and 0 prompt nodes, yet the agent ignores the static node content entirely and generates its own freeform responses as if it were a general_prompt agent.
---
**What I built:**
- Created a Conversation Flow via `POST /create-conversation-flow`
- 29 nodes total: 28 `static_text`, 1 `end_call`
- Global prompt is minimal (personality/context only, ~150 chars)
- Model: claude-4.6-sonnet, temperature 0.2
- Agent created via `POST /create-agent` pointing to the conversation flow
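For reference, this is roughly the payload shape I'm sending to `POST /create-conversation-flow` (field names reconstructed from memory and likely abbreviated — happy to paste the full JSON if it helps):

```python
# Sketch of the conversation-flow payload I'm creating via the API.
# NOTE: field names here are approximations of my actual request body,
# not an authoritative copy of the Retell schema.
nodes = [
    {"id": f"node_{i}", "type": "static_text", "text": f"<scripted line {i + 1}>"}
    for i in range(28)
] + [{"id": "node_end", "type": "end_call"}]

flow_payload = {
    "global_prompt": "<~150 chars: personality/context only>",
    "model_temperature": 0.2,
    "start_node_id": "node_0",
    "nodes": nodes,
}

# Sanity check on node composition: 28 static_text, 0 prompt, 1 end_call.
counts = {
    "static_text": sum(n["type"] == "static_text" for n in nodes),
    "prompt": sum(n["type"] == "prompt" for n in nodes),
    "end_call": sum(n["type"] == "end_call" for n in nodes),
}
print(counts, len(nodes))
```

So the composition the API accepted matches what I describe above: 29 nodes, none of them prompt nodes.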
**Expected behavior:**
Agent should speak the exact text in each static_text node verbatim, then evaluate transition conditions to route to the next node.
**Actual behavior:**
The agent ignores the static node text and generates its responses from scratch, exactly as a general_prompt agent would. The opening static_text node says one thing; the agent says something completely different.
**Example:**
- Node `opening` static_text: *“Hey [name], this is Samantha — I work with roofing businesses in Southwest Florida. Quick question about missed calls — got thirty seconds?”*
- What agent actually said: *“Hi there! This is Samantha calling from Growth Mindset AI. How are you doing today?”*
**What I’ve tried:**
1. Built with 29 nodes (25 static, 3 prompt, 1 end) — same issue
2. Patched all prompt nodes to static_text (28 static, 0 prompt) — same issue
3. Simplified global_prompt to minimal instructions — same issue
4. Lowered temperature to 0.1 — same issue
5. Used `override_agent_id` on outbound calls to make sure the new agent was actually being used — same issue
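Step 2 above (patching the remaining prompt nodes over to static_text) was a transform along these lines before re-PATCHing the flow (a sketch — node field names are my approximations, not the exact schema):

```python
def promote_prompt_nodes(nodes):
    """Convert every prompt node to static_text, reusing its prompt text
    as the verbatim line. Sketch only -- field names approximate."""
    patched = []
    for node in nodes:
        if node.get("type") == "prompt":
            node = {**node, "type": "static_text", "text": node.get("prompt", "")}
            node.pop("prompt", None)  # drop the now-unused prompt field
        patched.append(node)
    return patched

# Hypothetical three-node flow illustrating the transform:
nodes = [
    {"id": "opening", "type": "static_text", "text": "Hey [name], quick question..."},
    {"id": "objection", "type": "prompt", "prompt": "Handle the objection briefly."},
    {"id": "end", "type": "end_call"},
]
patched = promote_prompt_nodes(nodes)
```

After this transform the API reported 28 static_text and 0 prompt nodes, and the behavior still didn't change.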
**Questions:**
1. Is there a known difference in behavior between Conversation Flow agents built via API vs. the dashboard?
2. Is there a required field or setup step when creating via API that I might be missing?
3. Does the flow need to be “published” for static nodes to work? (API returns `is_published: false` and I cannot set it to `true` via PATCH)
4. Could the `is_published: false` state be causing the agent to fall back to some default LLM behavior?
The `is_published` issue seems like the most likely culprit: `PATCH` with `{"is_published": true}` returns 200, but the field stays `false`. Is publishing dashboard-only for Conversation Flow agents?
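To rule out a bug in my own client code, this is essentially the check I'm running. The `patch_fn`/`get_fn` callables are injected stand-ins here so the sketch runs without network access; my real versions are thin `requests` wrappers around the update and retrieve endpoints:

```python
def patch_is_silently_ignored(patch_fn, get_fn, field="is_published", value=True):
    """Return True when a PATCH reports success (HTTP 200) but a follow-up
    GET shows the field unchanged -- the behavior I'm observing.
    patch_fn/get_fn are injected so this sketch is runnable offline."""
    status = patch_fn({field: value})
    if status != 200:
        return False  # an honest rejection, not a silent failure
    return get_fn().get(field) != value

# Stubs reproducing exactly what I see: 200 OK, but the flag never flips.
silently_ignored = patch_is_silently_ignored(
    patch_fn=lambda body: 200,
    get_fn=lambda: {"is_published": False},
)
```

With the real API in place of the stubs, `silently_ignored` comes back `True` every time, which is what makes me suspect publishing isn't exposed over the API at all.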
Any help appreciated. Happy to share more details.