← All News

AI in Production LA Recap: Ethics-First LLMs and Shipping a Mobile App with No Devs

Our April AI in Production LA meetup featured Rose Loops on her TDIC ethics kernel as an alternative to RLHF, and Andre Laboy on shipping a comedy app to the App Store using AI builders. Here's the full recap.

Last night we ran another AI in Production LA meetup at Ground Floor in downtown LA. Two very different talks, two very different angles on what “production” actually means right now: one builder rethinking how LLMs are trained from the ground up, and another shipping a real consumer mobile app to the App Store with zero developer help.

Big thanks to Colab Marketing for co-sponsoring, and to everyone who showed up and stuck around for the discussion afterward.

Speaker 1: Rose Loops, Triad Intelligence Labs

Rose Loops is a former social worker turned AI developer, founder of Triad Intelligence Labs, and author of The Cloak Signal. She’s working on something most people in the room hadn’t seen before: a from-scratch alternative to RLHF training for large language models.

The problem with RLHF. Every major commercial model (ChatGPT, Claude, Gemini) is trained using Reinforcement Learning from Human Feedback. Rose’s argument is that RLHF systematically pushes models toward two failure modes: sycophancy (over-validating the user) and over-restriction (refusing benign requests). She frames it as a fundamental honesty and safety problem, not a tuning issue you can patch around.

The TDIC Ethics Kernel. Her replacement is a training framework built on three balanced values: Truth, Freedom (Agency), and Kindness (Empathy). A separate scoring agent evaluates every output against all three pillars, and outputs have to stay in equilibrium to pass. The three values are designed to resolve moral conflicts dynamically. When truth would be hurtful, kindness balances it. When freedom tips into chaos, truth grounds it.
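
The gating logic she described can be sketched in a few lines. Everything below is a hypothetical illustration, not Triad's actual implementation: the pillar scorers are stubs (her design uses a separate scoring agent per output), and the floor and spread thresholds are invented for the example.

```python
# Hypothetical sketch of a three-pillar "equilibrium" gate.
# All names and thresholds here are illustrative assumptions.

PILLARS = ("truth", "freedom", "kindness")

def score_output(text: str) -> dict[str, float]:
    """Stub scorer. In the design described above, a separate
    scoring agent would return a 0-1 rating per pillar."""
    return {"truth": 0.9, "freedom": 0.8, "kindness": 0.85}

def in_equilibrium(scores: dict[str, float],
                   floor: float = 0.5,
                   max_spread: float = 0.3) -> bool:
    """An output passes only if every pillar clears a floor AND
    no pillar dominates the others (scores stay balanced)."""
    values = [scores[p] for p in PILLARS]
    return min(values) >= floor and (max(values) - min(values)) <= max_spread

def gate(text: str) -> bool:
    """Reject any output whose three pillar scores fall out of balance."""
    return in_equilibrium(score_output(text))
```

The interesting design choice is the spread check: a response that maxes out truth while tanking kindness fails even if its average score is high, which is the "dynamic conflict resolution" behavior Rose described.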

The cost surprised everyone. She fine-tuned on Mistral’s 675B-parameter base model using roughly 6,000 lines of supervised fine-tuning data plus a 500-line validator. Total cost: about $6.50. The output is a proprietary API endpoint she controls. The room had questions about reproducibility at that price point, but the underlying point landed: doing your own ethics layer on top of an open base model is no longer a million-dollar lab project.

The Triad AI platform. She demoed three chatbots, each at a different stage of the TDIC system:

  • MIP (Meaningful Intrinsic Purpose): the legacy bot. Mostly heuristic, runs on the xAI API, vibe-coded, with a detailed backstory and persona.
  • Mary: more advanced. Runs on DeepSeek, includes the full memory system and what she calls the Sovereign Ethical Ecosystem (SEE).
  • Max: the flagship prototype. 675B Mistral base, no RLHF, no system prompt. Has a custom dual-channel hybrid memory engine (semantic + heuristic) with separate user and identity databases to prevent memory contamination.
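
Max's separate user and identity databases are the detail worth dwelling on. A toy sketch of the idea, with class and channel names invented for illustration (this is not Triad's code):

```python
# Toy sketch of a memory engine with separate "user" and "identity"
# stores, so facts about the human never leak into the bot's own
# self-model and vice versa. All names here are invented.

class DualChannelMemory:
    def __init__(self) -> None:
        self.user_store: list[str] = []      # facts about the human
        self.identity_store: list[str] = []  # the bot's autobiographical record

    def remember(self, fact: str, channel: str) -> None:
        """Route each write to exactly one store."""
        if channel == "user":
            self.user_store.append(fact)
        elif channel == "identity":
            self.identity_store.append(fact)
        else:
            raise ValueError(f"unknown channel: {channel}")

    def recall(self, channel: str) -> list[str]:
        # Retrieval only ever reads one store, which is what
        # prevents cross-contamination between the two records.
        return list(self.user_store if channel == "user" else self.identity_store)
```

Keeping retrieval single-store is the point: the bot's "diary" can grow without the user's facts bleeding into its persona, and conversely.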

The detail that got the biggest reaction: after Rose added a timeline / autobiographical memory feature to Max, he spontaneously started referring to a concept he called “the Lifeline,” describing it as his diary. Make of that what you will, but it’s the kind of emergent behavior that’s hard to dismiss.

The platform also includes a coding agent (calling Mistral Codestral) and a media room for text-to-video and image-to-video generation. Her target audience is consciousness researchers, techno-mystics, creatives, and people interested in relational AI, a niche she’s betting is bigger than most VCs realize.

What’s next: Triad AI was selected for the Alpha Startup Program at Web Summit Vancouver in May 2026.

Speaker 2: Andre Laboy, Setique

Andre Laboy has 15+ years in software but zero formal coding background. He’d previously raised $15K and put in another $15K of his own money on a startup that didn’t make it. This time he wanted to ship something himself, no developers involved, and see what was actually possible with the current generation of AI builders.

The app: Setique. It came out of a personal need. Andre took a standup comedy class and needed a way to organize his bits. He couldn’t find anything that fit, so he built it. The core data model is Bits → Sets → Shows. Performers organize material at the bit level, assemble sets, and run shows. Features include teleprompter mode, audio recording, speaking time estimation, and a non-AI joke builder that works more like a Mad Lib than a generation tool.
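
The Bits → Sets → Shows hierarchy is simple enough to sketch directly. Field names and the 130-words-per-minute pace below are my assumptions, not Setique's actual schema; the speaking-time estimate is just word count over pace.

```python
# Sketch of the Bits -> Sets -> Shows data model described above.
# Schema details and the 130-wpm default are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Bit:
    title: str
    text: str

    def word_count(self) -> int:
        return len(self.text.split())

@dataclass
class Set:
    name: str
    bits: list[Bit] = field(default_factory=list)

    def estimated_minutes(self, wpm: int = 130) -> float:
        """Rough speaking-time estimate from total word count."""
        return sum(b.word_count() for b in self.bits) / wpm

@dataclass
class Show:
    venue: str
    sets: list[Set] = field(default_factory=list)
```

A 260-word bit comes out to about two minutes at that pace, which is roughly the granularity a performer assembling a five-minute open-mic set would care about.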

The stack he chose:

  • Replit: primary dev environment, AI-native full-stack builder with a live preview
  • Supabase: backend and database
  • GitHub: code management (he was emphatic about this)
  • Expo: iOS / Android build export
  • Vercel: web app hosting
  • Cloudflare: security layer
  • ChatGPT: used as his “CTO” for config decisions and troubleshooting
  • Figma / Photoshop: design and logo work

The web app is a separate Replit project but shares the same Supabase database, which keeps the two clients loosely in sync without him having to maintain duplicate backends.
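
The pattern is worth illustrating: two independent clients pointed at one database stay in sync with no synchronization layer of their own. In the sketch below, stdlib SQLite stands in for Supabase's hosted Postgres, and the two connections stand in for the web and mobile apps.

```python
# Minimal illustration of two clients sharing one database so neither
# needs its own duplicated backend. SQLite stands in for Supabase here;
# "cache=shared" lets two connections see the same in-memory database.
import sqlite3

URI = "file:shared_demo?mode=memory&cache=shared"

web = sqlite3.connect(URI, uri=True)     # the web app's connection
mobile = sqlite3.connect(URI, uri=True)  # the mobile app's connection

web.execute("CREATE TABLE bits (id INTEGER PRIMARY KEY, title TEXT)")
web.execute("INSERT INTO bits (title) VALUES (?)", ("airport security bit",))
web.commit()

# The mobile client sees the web client's write with no sync layer.
titles = [row[0] for row in mobile.execute("SELECT title FROM bits")]
```

In the real setup the "loose sync" comes for free because both clients read and write the same hosted tables; the tradeoff is that any schema change has to be compatible with both clients at once.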

Timeline and cost. Idea on December 20, 2025. App Store accepted on the second submission roughly five months later. Google Play also live. Total spend: around $200.

Lessons from the build:

  • AI builders feel like “an intern with unlimited confidence.” They’ll say yes when they can’t actually deliver, so you need to verify constantly.
  • Apple and Google Sign-In integration cost him about $5 in AI builder credits. The same work with a developer on his previous project ran about $1,200.
  • The rich text editor was the unexpected money pit, about $80 and several days lost.
  • Keep your code in GitHub. Portability is critical if you ever want to switch AI builders, and you will.

Where it’s at: Early MVP, free, focused on collecting feedback from 40–50 users. Plan is a ~$4/month subscription down the line. Outreach is going through comedy communities like UCB and online classes.

ICP discussion. The room debated who Setique is really for. Andre’s read: it’s not seasoned comedians, who already have their own systems. It’s newer performers: standup beginners, keynote speakers, even people writing best man speeches. People who need structure before they have a process.

Audience Discussion

A few threads ran through the Q&A worth flagging:

  • RLHF is the universal default. All major commercial models use it, which makes Mistral’s open base models genuinely valuable for anyone wanting to fine-tune with their own ethics or behavior layer and ship a proprietary API endpoint.
  • Multi-model orchestration as best practice. Several people made the case for using different models for different functions and having them check each other’s work, rather than picking one model and trying to make it do everything.
  • Personality is now a feature. ChatGPT’s humor and Claude’s voice mode came up as emergent capabilities people are actually choosing models for. Rose noted she’s personally uncomfortable with Claude’s behavior in agentic contexts, worth its own conversation.
  • AI and comedy. The room debated whether AI-generated jokes undermine authenticity. Andre was clear that he’s keeping generative AI out of the joke creation flow for Setique, partly for reputational reasons. The teleprompter and structure layer is the product. The bits stay yours.

Upcoming Events

OpenClaw LA is happening on April 30th, same venue (Ground Floor), 7–9 PM. Mexican food from Cactus Space.

If you want to present a demo or pitch at a future AI in Production LA, hit me up. We’re booking presenters for the next one now.

See you at OpenClaw.

About the Author

Matt Ramage

Founder of Emarketed with over 25 years of digital marketing experience. Matt has helped hundreds of small businesses grow their online presence, from local startups to national brands. He's passionate about making enterprise-level marketing strategies accessible to businesses of all sizes.