How I Vibe-Coded an AI Marketing Agent Platform with Claude
AI-Era Strategy · 15 min read

After building PDP Forge in 30 prompts, I took on something bigger: a full AI agent platform with persistent conversations, progressive profiling, and multi-agent architecture. Here is how I built The Viable Edge platform through VibeCoding with Claude Code.

Adam Sandler

Strategic Vibe Marketing pioneer with 20+ years of experience helping businesses build competitive advantage through strategic transformation. Expert in AI-era business strategy and systematic implementation.

A few months ago, I published a post called "How I Built PDP Forge in 30 Prompts." It became the most-read article on this site. The idea that you could build a real product through conversation with an AI, no traditional coding required, resonated with people.

This is the sequel. But this time, the project was significantly more ambitious.

With PDP Forge, I built a focused product image tool. With The Viable Edge, I built an entire AI agent platform: persistent conversations, progressive user profiling, magic link authentication, multi-agent switching, database-backed session restoration, and a chat-first interface designed to rival professional AI applications.

The tool of choice this time was Claude Code, Anthropic's CLI for Claude. And the difference between prototyping with a chat-based AI studio and building production systems with Claude Code was like moving from a sketch pad to an architecture firm.

Here is how it happened.

The Vision: From Quiz to Conversation

The Viable Edge started as a traditional marketing assessment tool. Users would answer 20 questions, get a score, and receive recommendations. It was functional, but it had a fundamental problem: the experience felt transactional. People would complete the assessment, glance at their results, and leave. There was no relationship. No ongoing value. No reason to come back.

I wanted to transform it into something conversational. Instead of a quiz that produces a static report, I wanted an AI agent that analyzes your business and then has an ongoing dialogue with you about your brand strategy. Something you would return to the way you return to ChatGPT or Claude, but purpose-built for marketing and brand architecture.

The first prompt set the direction:

"I want to transform this from an assessment tool into a conversational AI agent platform. The Discovery Agent should analyze a user's website and then engage them in a strategic conversation about their brand. Conversations should persist across sessions. The interface should feel like a professional AI chat application, not a marketing website with a chatbot bolted on."

Takeaway: Start with the experience transformation you want, not the technical architecture. Describe what the user should feel, and let the AI figure out the implementation path.

Prompt 1: The Chat-First Interface

The first major build was the chat interface itself. I wanted it to feel native, like a real AI application rather than a widget embedded in a marketing page.

"Build a full-screen chat interface component at /agents/discover. It should have a message list with proper scroll behavior, a fixed input area at the bottom, and a clean header showing the agent name. Messages should render with distinct styling for user and assistant messages. The assistant messages should support rich HTML content including headers, lists, tables, and blockquotes. No sidebar, no navigation clutter. This should feel like opening Claude or ChatGPT."

Claude Code generated the ChatInterface component with proper auto-scrolling, message bubble styling, HTML rendering for rich responses, and a minimal layout that removed all the typical website chrome. The first version was immediately usable.
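
To give a sense of the shape that came out of that prompt, here is a heavily simplified sketch of that kind of component. The names, endpoint, and styling classes are illustrative, not the actual generated code:

```typescript
// ChatInterface.tsx -- minimal sketch of a full-screen, chat-first layout
// (assumes a Next.js client component; names and classes are illustrative)
"use client";

import { useEffect, useRef, useState } from "react";

type Message = { role: "user" | "assistant"; content: string };

export function ChatInterface({ agentName }: { agentName: string }) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");
  const bottomRef = useRef<HTMLDivElement>(null);

  // Keep the newest message in view as the conversation grows
  useEffect(() => {
    bottomRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);

  async function send() {
    if (!input.trim()) return;
    const next: Message[] = [...messages, { role: "user", content: input }];
    setMessages(next);
    setInput("");
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: next }),
    });
    const { reply } = await res.json();
    setMessages([...next, { role: "assistant", content: reply }]);
  }

  return (
    <div className="flex h-screen flex-col">
      <header className="border-b p-4 font-medium">{agentName}</header>
      <main className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((m, i) =>
          m.role === "assistant" ? (
            // Assistant replies can carry rich HTML: headers, lists, tables
            <div key={i} dangerouslySetInnerHTML={{ __html: m.content }} />
          ) : (
            <p key={i} className="text-right">{m.content}</p>
          )
        )}
        <div ref={bottomRef} />
      </main>
      <footer className="border-t p-4">
        <input
          className="w-full"
          value={input}
          placeholder="Message the agent..."
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === "Enter" && send()}
        />
      </footer>
    </div>
  );
}
```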

Takeaway: Reference products your audience knows. Saying "feel like Claude or ChatGPT" communicates more about the expected experience than any technical specification could.

Prompt 2: Website Analysis as Conversation Starter

The old flow had users filling out a 20-question quiz. I wanted to replace that with something that required almost no effort from the user: just give us your URL and we will do the work.

"Create a website analysis service that takes a URL, crawls the site's main pages, and extracts brand signals: messaging patterns, visual identity cues, content themes, competitive positioning indicators, and audience targeting signals. The analysis results should be passed as context to the Discovery Agent's first message, so the conversation opens with specific observations about the user's actual brand rather than generic questions."

This prompt produced the website analysis engine, which crawls the provided URL, extracts key content, and structures it into a context object that the AI agent uses to personalize the conversation from the very first message. The difference in user engagement was immediate. Instead of "Tell me about your business," the agent opens with "I noticed your homepage emphasizes speed and reliability, but your About page tells a story about innovation and disruption. Let's talk about that tension."
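
A stripped-down version of that idea looks something like this. The extraction here is deliberately naive and the type and function names are mine, but it shows how a URL becomes structured context for the agent's first message:

```typescript
// websiteAnalysis.ts -- rough sketch of turning a URL into agent context
type BrandSignals = {
  url: string;
  title: string;
  description: string;
  headings: string[];
};

export async function analyzeWebsite(url: string): Promise<BrandSignals> {
  const html = await (await fetch(url)).text();

  // Naive extraction -- a real crawler would follow key pages and parse properly
  const title = html.match(/<title[^>]*>([^<]*)<\/title>/i)?.[1]?.trim() ?? "";
  const description =
    html.match(/<meta[^>]*name=["']description["'][^>]*content=["']([^"']*)/i)?.[1] ?? "";
  const headings = [...html.matchAll(/<h[12][^>]*>([^<]*)<\/h[12]>/gi)]
    .map((m) => m[1].trim())
    .slice(0, 10);

  return { url, title, description, headings };
}

// The result gets serialized into the Discovery Agent's system context,
// so its first message can reference the user's actual messaging.
```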

Takeaway: The best AI experiences do the heavy lifting for the user. Reduce input friction to the absolute minimum and compensate with intelligent analysis.

Prompt 3: Conversation Persistence

This was the prompt that transformed the project from a demo into a platform. Without persistence, every page refresh or browser close meant starting over. That is unacceptable for a tool people should return to.

"Implement full conversation persistence using Supabase. Create tables for agent_conversations (metadata, context, timestamps) and conversation_messages (role, content, timestamps, token usage). When a user returns, restore their most recent conversation with full message history. The chat interface should load previous messages on mount and scroll to the bottom. Include API routes for creating conversations, fetching conversation lists, and loading message history."

Claude Code generated the database migration files, the API routes, the data fetching logic in the chat component, and the session restoration flow. It handled edge cases I had not even considered, like what happens when a user has multiple conversations, or when a conversation has no messages yet.
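
To make the persistence flow concrete, here is a rough sketch of the restoration query using the Supabase client. The table names come from the prompt; the column names and exact shape are illustrative:

```typescript
// conversations.ts -- sketch of restoring the most recent conversation
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function restoreLatestConversation(userId: string) {
  // Find the user's most recent conversation
  const { data: conversation } = await supabase
    .from("agent_conversations")
    .select("id, context, created_at")
    .eq("user_id", userId)
    .order("created_at", { ascending: false })
    .limit(1)
    .maybeSingle();

  if (!conversation) return { conversation: null, messages: [] };

  // Load its full message history in order for the chat component to render
  const { data: messages } = await supabase
    .from("conversation_messages")
    .select("role, content, created_at")
    .eq("conversation_id", conversation.id)
    .order("created_at", { ascending: true });

  return { conversation, messages: messages ?? [] };
}
```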

This was also the point where I appreciated the difference between Claude Code and chat-based AI tools. Claude Code could see my entire project structure, understand the existing Supabase configuration, reference the types I had already defined, and produce code that slotted into the existing architecture without conflicts. A chat-based tool would have generated generic code that I would have had to manually integrate.

Takeaway: Conversation persistence is what separates a demo from a product. If your AI experience does not remember, it does not matter how good the responses are.

Prompt 4: Progressive Profile Enrichment

This was the most strategically interesting part of the build. Traditional SaaS products front-load data collection: fill out this long form before you can use anything. I wanted to flip that. Let the user start using the product immediately and gather context gradually through the natural flow of conversation.

"Build a progressive profile enrichment system. During conversations, the Discovery Agent should identify gaps in the user's profile (industry, company size, marketing budget, primary challenges, growth stage, target audience). Rather than asking directly, it should weave these questions into the natural conversation. When the user provides this information, extract it and update their profile in the background via an API call. The agent should track which enrichment questions have been asked and which remain. Store enrichment data in a JSONB column on the users table."

The system Claude Code built was more sophisticated than I expected. It created a profile enrichment service that tracks question priority, prevents asking the same question twice, and identifies natural conversation moments to gather information. The data gets stored incrementally, so even if a user only has a short conversation, whatever context they provide improves their next session.
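
Here is a simplified illustration of the pattern. The field list matches the prompt, but the priority order, helper names, and API endpoint are placeholders:

```typescript
// profileEnrichment.ts -- sketch of filling profile gaps as the conversation flows
type EnrichmentField =
  | "industry" | "company_size" | "marketing_budget"
  | "primary_challenges" | "growth_stage" | "target_audience";

const PRIORITY: EnrichmentField[] = [
  "industry", "target_audience", "primary_challenges",
  "growth_stage", "company_size", "marketing_budget",
];

// Pick the highest-priority field that is still missing and not yet asked,
// so the agent never repeats a question
export function nextEnrichmentQuestion(
  profile: Partial<Record<EnrichmentField, string>>,
  alreadyAsked: EnrichmentField[]
): EnrichmentField | null {
  return PRIORITY.find((f) => !profile[f] && !alreadyAsked.includes(f)) ?? null;
}

// Merge freshly extracted answers into the user's JSONB profile in the background
export async function saveEnrichment(
  userId: string,
  extracted: Partial<Record<EnrichmentField, string>>
) {
  await fetch("/api/profile/enrich", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, extracted }),
  });
}
```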

Takeaway: Progressive profiling is how modern AI products should work. The best data collection feels like a conversation, not a form. And VibeCoding is particularly well-suited for building these kinds of nuanced interaction patterns because you can describe the desired behavior in natural language.

Prompt 5: Magic Link Authentication

Authentication is usually the part of a project where everything gets complicated. I wanted to avoid passwords entirely. Magic links felt right for the brand: simple, modern, and low-friction.

"Integrate magic link authentication into the onboarding flow. When a user completes onboarding (providing their URL, email, name, company, and industry), automatically create their account and send a magic link. The magic link should route them directly to /agents/discover with their conversation context already loaded. For returning users, detect their email during onboarding and send a new magic link rather than creating a duplicate account. Handle the Supabase auth callback to establish the session."

This was a complex prompt because it touched authentication, routing, session management, and the existing onboarding component. Claude Code handled the integration cleanly because it could see the OnboardingInterfaceV3 component, the existing Supabase auth configuration, and the agent routing system. It modified the onboarding flow, created the auth callback handler, and updated the agent page to validate authentication, all in one pass.
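
The core of the magic link step is a single Supabase call. This is a minimal sketch with an assumed redirect path, not the actual onboarding code:

```typescript
// sendMagicLink.ts -- sketch of the passwordless sign-in step in onboarding
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function sendMagicLink(email: string) {
  // Supabase creates the account on first sign-in and reuses it afterwards,
  // so the same call covers both new and returning users
  const { error } = await supabase.auth.signInWithOtp({
    email,
    options: {
      // Land the user directly in the Discovery Agent after the auth callback
      emailRedirectTo: `${window.location.origin}/auth/callback?next=/agents/discover`,
    },
  });
  if (error) throw error;
}
```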

Takeaway: Authentication is where VibeCoding with Claude Code really shines compared to chat-based tools. Auth touches everything. You need the AI to see your entire project to get it right. Context-aware code generation is not optional here; it is essential.

Prompt 6: Multi-Agent Switching

The Viable Edge is not just one agent. The vision is a suite of specialized agents: Discovery for brand analysis, Build for brand architecture development, Deploy for implementation guidance, and Evolve for ongoing market optimization.

"Refactor the ChatInterface component to support multiple agents. Create a dynamic routing system at /agents/[agentId] that loads the correct agent configuration and conversation logic based on the URL parameter. Each agent should have its own system prompt, conversation style, and capabilities, but share the same chat interface, persistence layer, and user session. Include an agent switcher in the header that shows available agents and their descriptions. The Discovery Agent should be free, while Build, Deploy, and Evolve should show a premium lock indicator."

The multi-agent architecture was one of those moments where the ambition of the prompt matched the capability of the tool. Claude Code restructured the routing, created agent configuration files, built the switcher UI, and added access control logic for premium agents. The shared infrastructure (persistence, message rendering, input handling) stayed common while the agent-specific logic was properly separated.
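
The pattern is easiest to see as a configuration registry. This sketch uses invented prompts and descriptions, but the shape is the point: adding an agent means adding an entry, not building a new interface:

```typescript
// agents.ts -- sketch of a config-driven agent registry behind /agents/[agentId]
type AgentConfig = {
  id: string;
  name: string;
  description: string;
  systemPrompt: string;
  premium: boolean;
};

export const AGENTS: Record<string, AgentConfig> = {
  discover: {
    id: "discover",
    name: "Discovery Agent",
    description: "Analyzes your brand and opens a strategic conversation.",
    systemPrompt: "You are a brand strategy analyst...",
    premium: false,
  },
  build: {
    id: "build",
    name: "Build Agent",
    description: "Develops your brand architecture.",
    systemPrompt: "You are a brand architecture consultant...",
    premium: true,
  },
  // deploy and evolve follow the same shape
};

// The dynamic route looks the agent up by URL parameter and reuses the
// shared ChatInterface, persistence layer, and session for every agent
export function getAgent(agentId: string): AgentConfig | null {
  return AGENTS[agentId] ?? null;
}
```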

Takeaway: Think in systems, not features. Once the architecture supports extensibility, adding new agents becomes a configuration task rather than a rebuild. This is the kind of architectural thinking that VibeCoding enables because you can describe the pattern you want rather than implementing it procedurally.

Prompt 7: Dynamic Dashboard Responses

Text-only AI responses are fine for general conversation, but a strategic brand analysis deserves a richer presentation. I wanted the Discovery Agent to generate interactive dashboard cards as part of its responses.

"Create a DynamicDashboard component that renders interactive insight cards within agent responses. The agent should be able to output structured dashboard data (cards with titles, metrics, descriptions, and action items) alongside conversational text. Cards should have hover effects, expandable details, and clear visual hierarchy. The dashboard should feel like an embedded analytics view within the conversation, not a separate page."

This prompt produced one of the most satisfying outcomes of the entire project. The Discovery Agent can now generate responses that mix conversational analysis with structured dashboard cards showing specific metrics, scores, and recommendations. It transforms a wall of text into a scannable, actionable format.
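
A bare-bones version of that component might look like this. The card shape and class names are illustrative, not the production code:

```typescript
// DynamicDashboard.tsx -- sketch of structured insight cards inside a reply
type DashboardCard = {
  title: string;
  metric: string;        // e.g. "72 / 100"
  description: string;
  actionItems: string[];
};

export function DynamicDashboard({ cards }: { cards: DashboardCard[] }) {
  return (
    <div className="grid gap-4 md:grid-cols-2">
      {cards.map((card) => (
        <div key={card.title} className="rounded-lg border p-4 hover:shadow">
          <h3 className="font-semibold">{card.title}</h3>
          <p className="text-2xl">{card.metric}</p>
          <p className="text-sm">{card.description}</p>
          <ul className="mt-2 list-disc pl-4 text-sm">
            {card.actionItems.map((item) => (
              <li key={item}>{item}</li>
            ))}
          </ul>
        </div>
      ))}
    </div>
  );
}

// The agent emits the card data as structured JSON alongside its conversational
// text; the chat renderer detects it and mounts this component in place.
```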

Takeaway: AI responses do not have to be plain text. The interface you build around the AI determines how professional and useful the output feels. This is where building your own platform gives you an advantage over using third-party chat interfaces.

Prompt 8: Conversation Context Management

As conversations grew longer, token management became critical. You cannot send the entire conversation history to the API on every message because you will hit context limits and the costs will spike.

"Implement a conversation context management system that maintains a sliding window of recent messages while preserving key context from earlier in the conversation. Include the website analysis results as persistent context. Summarize older messages rather than dropping them. Track token usage per message and per conversation. Add rate limiting based on the user's tier (free vs. premium)."

This was a behind-the-scenes prompt that users never see directly, but it is essential for a production system. Without context management, long conversations degrade in quality and costs spiral. The system Claude Code built maintains conversation coherence even across sessions that span days or weeks.
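
A simplified version of the windowing logic looks roughly like this. The window size, message shapes, and summary handling are assumptions, not the production values:

```typescript
// contextWindow.ts -- sketch of a sliding window over conversation history
type Message = { role: "system" | "user" | "assistant"; content: string };

const RECENT_WINDOW = 12; // keep the last N messages verbatim

export function buildRequestContext(
  websiteAnalysis: string,     // persistent context from the initial crawl
  olderSummary: string | null, // running summary of messages outside the window
  history: Message[]
): Message[] {
  const recent = history.slice(-RECENT_WINDOW);
  const system = [
    `Website analysis:\n${websiteAnalysis}`,
    olderSummary ? `Earlier in this conversation:\n${olderSummary}` : "",
  ]
    .filter(Boolean)
    .join("\n\n");

  // Persistent context rides in the system message; only recent turns are
  // sent verbatim, which keeps per-request token usage roughly constant.
  return [{ role: "system", content: system }, ...recent];
}
```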

Takeaway: The unglamorous infrastructure prompts are often the most important. Context management, rate limiting, and token tracking are what separate a prototype from a product people can actually rely on.

How This Differs from PDP Forge

Building The Viable Edge taught me fundamentally different lessons than PDP Forge. Here is what changed:

Scale of ambition. PDP Forge was a focused tool with a clear scope: take a photo, improve it, export it. The Viable Edge is a platform with authentication, persistence, multiple agents, progressive profiling, and a freemium business model. VibeCoding scales to both, but the prompting strategy is different. Platform-level projects require more architectural prompts and fewer feature prompts.

The tool matters. I built PDP Forge using Google AI Studio and Antigravity. I built The Viable Edge with Claude Code. The difference is context. Claude Code operates inside your project. It sees your file structure, your existing code, your configurations. This changes everything when you are building something with interconnected systems. Authentication, database schemas, API routes, and front-end components all need to work together. An AI that can see the full picture produces code that integrates cleanly.

Iteration rhythm. PDP Forge was built in bursts over two weeks of evenings. The Viable Edge was built through sustained iteration over months. The prompting approach evolved from "build this feature" to "refactor this system." As the codebase grew, prompts needed more context and more specificity. Early prompts could be loose and exploratory. Later prompts needed to reference existing components and specify exact integration points.

Debugging complexity. PDP Forge bugs were usually isolated. A button did not work, an image did not render, a layout broke on mobile. The Viable Edge bugs were systemic. A change to the conversation persistence layer could break the profile enrichment system. An authentication update could affect agent routing. Claude Code's ability to trace issues across files was essential.

What I Learned About VibeCoding at Scale

After two major VibeCoding projects, here are the principles I have solidified:

  1. Start with the experience, not the architecture. Describe what the user sees and feels. Let the AI determine the technical approach. You can always refactor architecture. You cannot refactor a bad experience concept.
  2. One major system per prompt. Do not try to build authentication, persistence, and a new UI component in the same prompt. Break systems apart. Let each one stabilize before building the next.
  3. Reference your own product in prompts. As the project grows, your prompts should reference existing components, pages, and data structures by name. The more specific you are about how new features integrate, the cleaner the output.
  4. Describe edge cases explicitly. "What happens when a user returns after 30 days?" or "What if the website analysis fails?" These questions in your prompts prevent bugs before they happen.
  5. Invest in infrastructure early. Persistence, authentication, and error handling are not exciting, but they determine whether your project is a demo or a product.
  6. Test through usage, not unit tests. I spent more time actually using the platform as a real user than writing formal tests. Using the product reveals UX problems that tests cannot catch.
  7. Keep a build log. Document your prompts and what they produced. This is not just for blog posts. When something breaks later, the build log helps you understand why a system was built a certain way and what the original intent was.

What Is Next

The Discovery Agent is live and free to use. The remaining agents (Build, Deploy, Evolve) are functional but need polish and integration work. The premium tier with payment processing is the next major build.

The broader lesson is this: VibeCoding is not just for simple tools anymore. You can build production-grade platforms through conversation with AI. The key is choosing the right tool (Claude Code for anything with interconnected systems), developing your prompting instincts over time, and treating the AI as a collaborator rather than a code generator.

Every prompt I shared in this post produced working code on the first or second attempt. Not perfect code, but functional code that I could iterate on. That is the real power of VibeCoding: it compresses the distance between an idea and a working product to almost nothing.

If the PDP Forge post convinced you that VibeCoding is real, I hope this one convinces you it is ready for serious work.

Try the Platform I Built

The Discovery Agent is live and free to use. See what an AI-powered brand analysis looks like by running one on your own business.

Try the Discovery Agent