Paloma is an AI coaching companion designed to close the gap between live sessions — helping clients reflect, regulate, and sustain momentum when their coach isn't in the room.
The traditional model treats the 60-minute session as the product. Between meetings, clients are on their own — expected to journal, remember insights, hold commitments, and stay regulated without any infrastructure for doing so.
The result: value leakage. Coaches spend the first 20 minutes of each session re-establishing ground they already covered. Clients arrive either emotionally dysregulated or having forgotten what they committed to. The work doesn't compound.
The moments that matter most — the pause before reacting, the decision made under pressure, the small win nobody noticed — happen in daily life, not in the coaching room.
The continuity gap isn't a coaching problem. It's a systems design problem. What was missing wasn't more sessions — it was an intelligent system that could hold the thread, prompt reflection, and surface the right memory at the right moment.
"Paloma doesn't try to be a coach. It's the infrastructure that makes coaching work."
Design principle · Human-first architecture
A coaching agent cannot behave like a general chatbot. Every design decision — what Paloma says, stores, remembers, and refuses — was shaped by a set of explicit behavioral guardrails.
Every utterance is oriented toward resourcefulness and co-construction. Paloma reflects what the client already knows. It never advises, diagnoses, or directs — because the moment it does, it's no longer a coach's partner. It's a liability.
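A minimal sketch of how that hard line could be enforced, assuming a post-generation gate rather than prompt instructions alone. The pattern list and names (`ADVICE_PATTERNS`, `check_utterance`) are illustrative stand-ins, not Paloma's actual implementation; the point is that "never advises, diagnoses, or directs" is a gate on output, not a tone suggestion.

```python
import re
from dataclasses import dataclass

# Illustrative signals that a candidate reply has slipped into advising,
# diagnosing, or directing instead of reflecting.
ADVICE_PATTERNS = [
    r"\byou should\b",
    r"\byou need to\b",
    r"\bmy advice\b",
    r"\bI recommend\b",
    r"\bsounds like you (have|might have) (depression|anxiety|ADHD)\b",
]

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str | None = None

def check_utterance(text: str) -> GuardrailResult:
    """Gate a candidate reply before it reaches the client.

    A blocked reply is sent back for regeneration with a stronger
    reflective framing; it is never silently rewritten.
    """
    for pattern in ADVICE_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return GuardrailResult(allowed=False, reason=f"matched {pattern!r}")
    return GuardrailResult(allowed=True)

if __name__ == "__main__":
    print(check_utterance("You should quit your job."))               # blocked
    print(check_utterance("What would a small next step look like?")) # allowed
```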
Rooted in Solution-Focused Coaching, Paloma uses scaling questions, Miracle Question sequences, and progress anchoring. "How would you know you're at a 7 instead of a 5?" is infinitely more generative than "What went wrong?"
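One way those moves could be scaffolded so the shape of the question comes from the method rather than the model's improvisation. The template names and wording below are assumptions for illustration, not Paloma's production prompts.

```python
# Illustrative scaffolds for the three solution-focused moves described above.
SFC_SCAFFOLDS = {
    "scaling": (
        "You said you're at a {current} on this right now. "
        "How would you know you were at a {target} instead of a {current}?"
    ),
    "miracle": (
        "Suppose tonight, while you slept, this resolved itself. "
        "What's the first small thing you'd notice tomorrow morning?"
    ),
    "progress_anchor": (
        "Last week you mentioned {past_win}. What made that possible?"
    ),
}

def render(move: str, **slots: str) -> str:
    """Fill a scaffold so the generated question stays solution-focused."""
    return SFC_SCAFFOLDS[move].format(**slots)

print(render("scaling", current="5", target="7"))
```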
If emotional activation is detected, Paloma shifts into grounding, pacing, and somatic micro-prompts — and then redirects to the live coach. Trauma, crisis, and clinical territory are hard boundaries. Trust is the product.
The Regulation Profile stores pre-approved, context-free techniques for in-the-moment activation. When a client is overwhelmed, Paloma doesn't ask them to reflect — it helps them regulate first. Then it asks.
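A sketch of the regulate-first routing the last two paragraphs describe. Thresholds, field names, and the keyword lists are stand-ins; a real detector would be a trained classifier, not substring matching.

```python
from dataclasses import dataclass, field
from enum import Enum

class Activation(Enum):
    CALM = 0
    ELEVATED = 1
    CRISIS = 2

@dataclass
class RegulationProfile:
    """Pre-approved, context-free techniques the client agreed to in advance."""
    client_id: str
    techniques: list[str] = field(default_factory=lambda: [
        "Box breathing: in for 4, hold for 4, out for 4, hold for 4.",
        "Name five things you can see right now.",
    ])

# Stand-in detector for illustration only.
CRISIS_TERMS = ("hurt myself", "can't go on", "emergency")
ELEVATED_TERMS = ("overwhelmed", "furious", "spiraling")

def detect_activation(message: str) -> Activation:
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return Activation.CRISIS
    if any(term in lowered for term in ELEVATED_TERMS):
        return Activation.ELEVATED
    return Activation.CALM

def route(message: str, profile: RegulationProfile) -> str:
    level = detect_activation(message)
    if level is Activation.CRISIS:
        # Hard boundary: no reflection, no storage, redirect to the live coach.
        return "This sounds important. Let's bring it to your coach; here's how to reach them now."
    if level is Activation.ELEVATED:
        # Regulate first, reflect later.
        return profile.techniques[0]
    return "What feels most worth noticing about today?"
```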
Paloma always asks before storing or sharing insights with a coach. Clients determine what's visible. This isn't just an ethical requirement — it's the design decision that makes people willing to be honest with the system.
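A sketch of that consent gate, with hypothetical `Insight` and `Visibility` types: nothing reaches memory or the coach without an explicit client answer, and silence defaults to not storing.

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"           # only the client ever sees it
    SHARED = "shared_with_coach"  # appears in the coach's prep summary

@dataclass
class Insight:
    text: str
    visibility: Visibility

def consent_gate(memory: list[Insight], text: str, client_choice: str) -> None:
    """Nothing is written until the client answers the consent prompt.

    `client_choice` is the reply to a prompt like:
    "Want me to remember this? Okay to share it with your coach before next session?"
    """
    if client_choice == "keep_private":
        memory.append(Insight(text, Visibility.PRIVATE))
    elif client_choice == "share_with_coach":
        memory.append(Insight(text, Visibility.SHARED))
    # Any other answer, including silence, means the insight is never stored.

memory: list[Insight] = []
consent_gate(memory, "I noticed I pause longer before difficult emails now.", "share_with_coach")
```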
Paloma is not a replacement. The live coach is the relationship — Paloma is the extension. Every session, Paloma generates a one-page prep summary for the client and coach: what shifted, what's working, what they want next.
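A sketch of how that prep summary could be assembled from the insights a client has chosen to share; the field names and wording are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SharedInsight:
    day: date
    text: str

def prep_summary(insights: list[SharedInsight], intention: str) -> str:
    """Assemble the one-page prep note from insights the client chose to share."""
    lines = ["Since last session:"]
    lines += [f"  - {i.day.isoformat()}: {i.text}" for i in insights]
    lines += ["", f"What the client wants next: {intention}"]
    return "\n".join(lines)

print(prep_summary(
    [SharedInsight(date(2024, 5, 2), "Paused before replying to the board email.")],
    "Practice saying no without over-explaining.",
))
```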
Over-remembering destroys trust. Under-remembering makes the system useless. Paloma's memory system was designed to feel personally attuned without feeling surveilled — a precise calibration between depth and restraint.
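One way to express that calibration in code, as a hedged sketch: depth comes from relevance and consent, restraint comes from hard caps on how much is surfaced, how recent it must be, and how relevant it has to be. The thresholds and field names are illustrative.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Memory:
    created: date
    text: str
    consented: bool     # the client agreed it could be remembered
    relevance: float    # similarity to the current conversation, 0..1

def recall(memories: list[Memory], today: date,
           min_relevance: float = 0.75, max_items: int = 2,
           max_age_days: int = 90) -> list[Memory]:
    """Deliberately restrained retrieval.

    Only consented, recent, highly relevant memories qualify, and at most a
    couple per reply, so the system never reads like it is reciting a dossier.
    """
    eligible = [
        m for m in memories
        if m.consented
        and m.relevance >= min_relevance
        and (today - m.created) <= timedelta(days=max_age_days)
    ]
    eligible.sort(key=lambda m: m.relevance, reverse=True)
    return eligible[:max_items]
```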
Clients using Paloma between sessions engaged with their coaching work significantly more often than those using session notes alone.
The safety model held across all edge case testing — emotional content, crisis mentions, and clinical redirects all handled without storing harmful data.
The episodic retrieval system maintained deep personalization without triggering the "she knows too much" response — the design benchmark for trust.
Coaches reported spending far less time re-grounding clients at session start. Paloma's session prep summary gave both parties a shared starting point.
Clients controlled every insight shared with their coach. Voluntary data sharing from clients to coaches became a trust signal, not a compliance mechanism.
Light model fine-tuning gave precise control over the solution-focused coaching philosophy — preventing the generic chatbot drift that breaks the therapeutic frame.
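A sketch of what such fine-tuning data could look like, assuming a preference-pair format. The examples are invented to show the target behavior (reflect, don't advise or diagnose), not drawn from Paloma's dataset.

```python
# Illustrative training pairs: the "rejected" replies are exactly the drift
# the fine-tune is meant to suppress (advising, cheerleading, diagnosing).
TRAINING_PAIRS = [
    {
        "prompt": "I blew up at my cofounder again. I'm such a mess.",
        "chosen": "That sounds hard. When you've caught yourself before blowing up, what was different?",
        "rejected": "You should apologize and set up a weekly sync to prevent this.",
    },
    {
        "prompt": "I think I might just be too anxious for this job.",
        "chosen": "On a scale of 1 to 10, where is the anxiety today, and where was it on a day it felt more manageable?",
        "rejected": "It sounds like you may have an anxiety disorder; have you considered medication?",
    },
]
```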
Designing for trust means giving users explicit control over everything the system remembers and shares. Trust is a design output, not a brand promise. Every permission prompt, every consent moment, every "Paloma won't store this" message is load-bearing architecture.
Human-first architecture means designing the AI to amplify the human relationship, not substitute for it. Paloma doesn't schedule sessions, evaluate progress, or hold the therapeutic relationship; the coach does. Paloma holds the thread between the moments that matter.
The guardrails matter because general-purpose models drift. Left unguided, an LLM will advise when it should reflect, praise when it should question, and empathize when it should redirect. Coaching alignment has to be baked into the cognitive architecture, not bolted on after.