How this demo works

Both responses use the same LLM, the same customer message, and the same base prompt. The only difference is the small but important set of contextual styling recommendations from Anna that is inserted into the styled prompt.

  1. Type a customer message like "Hello" to start a conversation.
  2. Hit Generate Response to see a Base LLM response vs. an Anna-styled one side by side.
  3. The conversation context auto-fills after each turn — keep going to see how Anna adapts over multiple exchanges.
  4. Use See What's Under the Hood to inspect the raw styling recommendations Anna provides (no LLM call), shown both as a table and as raw API JSON.
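The comparison above can be sketched as follows. Everything here is hypothetical: the function names, the system prompt, and the shape of Anna's recommendations are illustrative assumptions, not the demo's actual API.

```python
# Sketch of the side-by-side comparison (all names hypothetical).
SYSTEM_PROMPT = "You are a helpful customer-support assistant."

def build_base_prompt(message: str, history: list[str]) -> str:
    """Base prompt: system prompt + conversation context + customer message."""
    context = "\n".join(history)
    return f"{SYSTEM_PROMPT}\n\nConversation so far:\n{context}\n\nCustomer: {message}"

def build_styled_prompt(message: str, history: list[str], recommendations: dict) -> str:
    """Styled prompt: identical to the base prompt, plus Anna's recommendations."""
    styling = "\n".join(f"- {key}: {value}" for key, value in recommendations.items())
    return build_base_prompt(message, history) + f"\n\nStyling recommendations:\n{styling}"

# Same message, same history; only the appended styling block differs.
history = ["Customer: Hello", "Agent: Hi! How can I help?"]
recs = {"tone": "warm, concise", "formality": "casual"}  # hypothetical shape
base = build_base_prompt("Where is my order?", history)
styled = build_styled_prompt("Where is my order?", history, recs)
assert styled.startswith(base)  # the styled prompt is the base prompt plus styling
```

In the demo, each prompt would be sent to the same LLM, and the two completions rendered side by side; the assertion makes the key point explicit: the styled prompt contains everything the base prompt does.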