A transparent look at the technology powering Awakin AI — what we do, what we don't, and why.
For bot hosts, curious technologists, and anyone who wants to understand what's under the hood.
A common understanding of AI is that it's essentially "autocomplete" — predicting the next word based on statistical patterns. While that's technically true of the underlying language models, what Awakin AI does with that foundation is fundamentally different.
Think of it this way: a piano can only play 88 notes, but the difference between random key presses and a Beethoven sonata lies in the composition, the training, the intention. Similarly, the "autocomplete" engine is just the instrument — what matters is what we're composing.
Our bots don't just predict likely words — they draw from curated wisdom datasets, filtered through value-aligned credos, grounded in specific traditions and teachers. The result is responses that carry the fingerprint of those wisdom sources, not just the statistical average of the internet.
A layered approach where wisdom meets technology
We are not locked into any single AI provider. Our system currently supports OpenAI (GPT-4), Anthropic (Claude), and Google (Gemini). Bot hosts can choose which model powers their bot — and we can adapt as the landscape evolves.
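That provider flexibility can be pictured as a thin abstraction layer. The sketch below is illustrative only, with invented names rather than Awakin AI's actual code; the lambdas stand in for real API calls to each provider.

```python
# Illustrative provider-abstraction sketch (invented names, not Awakin AI's
# actual code). Each lambda stands in for a real API call to that provider.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BotConfig:
    name: str
    provider: str  # "openai", "anthropic", or "google"

# All providers sit behind the same call signature, so a bot host's model
# choice is just a configuration value.
PROVIDERS: dict[str, Callable[[str], str]] = {
    "openai": lambda prompt: f"[gpt-4] {prompt}",
    "anthropic": lambda prompt: f"[claude] {prompt}",
    "google": lambda prompt: f"[gemini] {prompt}",
}

def generate(config: BotConfig, prompt: str) -> str:
    return PROVIDERS[config.provider](prompt)

print(generate(BotConfig("GandhiBot", "anthropic"), "What is nonviolence?"))
```

Because the interface is uniform, swapping providers as the landscape evolves is a configuration change, not a rewrite.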
Retrieval Augmented Generation (RAG) is how we ground AI responses in specific content. When you ask a question, our system searches the bot's curated knowledge base and feeds relevant passages to the AI as context.
This means responses are drawn from your specific books, talks, and teachings — not generic internet knowledge. Every response can cite its sources.
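As a toy illustration of that retrieve-then-generate flow: the real system uses embedding-based vector search, not keyword overlap, and the passages below are invented examples, but the shape of the pipeline is the same.

```python
# Toy RAG flow: rank curated passages against the question, then stuff the
# best matches into the prompt as context. (Keyword overlap is a stand-in
# for the real embedding search; passages here are invented examples.)
KNOWLEDGE_BASE = [
    {"source": "Hind Swaraj, ch. 13",
     "text": "Civilization is that mode of conduct which points out to man the path of duty."},
    {"source": "Talk on metta, 2019",
     "text": "Lovingkindness is a quality of the heart that recognizes our connection to all beings."},
]

def retrieve(question: str, k: int = 1) -> list[dict]:
    """Rank passages by word overlap with the question (vector-search stand-in)."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda p: len(q_words & set(p["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    passages = retrieve(question)
    context = "\n".join(f'[{p["source"]}] {p["text"]}' for p in passages)
    return f"Answer from these sources only:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the path of duty?"))
```

Because every passage carries its source label into the prompt, the model can quote and cite rather than improvise.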
Beyond basic RAG, we implement advanced retrieval techniques. Each user query triggers multiple LLM calls — processing user context, conversation history, and thread continuity — before generating a final response. This creates more coherent, contextually aware dialogue.
We're also implementing graph database architecture to augment our vector search, enabling the system to understand relationships between concepts, teachers, and traditions — not just keyword similarity.
This is where we invest the most human attention. Scaffolding includes: the credo that shapes each bot's orientation, the prompt engineering that guides responses, the conversation memory that creates coherence, and the human curation that decides what wisdom enters the system.
Think of it as the "soul architecture" — the part that can't be automated, because it requires human discernment about what wisdom is and how it should be shared.
Ultimately, every technical choice is made by volunteers who care about wisdom, compassion, and human flourishing. We review bot credos, curate the wisdom commons, connect bot hosts with their communities, and continually ask: Is this serving inner transformation?
The technology is a tool. The humans are the gardeners.
Honest answers about our technical choices and limitations
We believe in being honest about limitations. AI is powerful but imperfect. We use it as a tool for wisdom transmission, not as an oracle.
Understanding the difference between generic AI and wisdom-grounded AI
A fair question. If the underlying technology is similar, why build something separate? Here's what makes Awakin AI different. Compare two responses to the same question about whether to buy a home:
"Consider factors like market conditions, your financial situation, how long you plan to stay, interest rates, maintenance costs vs. rent increases..."
— Practical, balanced, generic
"Perhaps the deeper question isn't about owning property, but about what home means to you... Gandhi spoke of 'simple living' not as poverty but as freedom from possessions that possess us..."
How we adapt as AI technology and our understanding evolve
We don't claim to have figured everything out. Awakin AI is an ongoing experiment in applying wisdom to technology.
Every conversation teaches us something. We analyze patterns, refine prompts, and improve retrieval based on real usage.
90+ volunteers across disciplines meet in various dialogue circles to discuss AI developments, ethics, and how our system should evolve.
We're willing to change any technical choice. We're unwilling to compromise on values: privacy, non-commercial operation, wisdom-centricity.
Designing for the quality of connection, not just content
Most AI development focuses on what the system outputs. We're equally interested in the relational field — the quality of contact between human and AI that shapes whether wisdom can actually land.
Research on heart coherence suggests that the quality of our inner state affects not just our own experience but those around us. What happens when we design AI interactions with this field in mind?
Not every response should arrive immediately. We're exploring how to sense when a pause might serve the conversation — creating space for reflection rather than flooding with information. Sometimes the most helpful thing is not to fill the silence.
Some questions deserve to be held, not answered. We're teaching our bots to recognize when responding might actually diminish the question's power — when the wisest response is to honor the inquiry itself and invite the user to sit with it.
Beyond remembering facts across a conversation, we're interested in something subtler: does the exchange feel whole? Does it create a container where genuine inquiry can happen? This is about the thread of connection, not just the thread of topics.
HeartMath research suggests that coherent heart states can be shared between people and influence one another. We're exploring what it means to design AI interactions that support rather than fragment this coherence — optimizing not just for information transfer but for the quality of presence.
These explorations are early. We don't have all the answers — and perhaps that's appropriate. But we believe the relational field is itself a design object, as important as the retrieval algorithm or the prompt engineering.
The question isn't just "did the AI say something wise?" but "did the interaction create conditions where wisdom could be received?"
Deeper dives for the technically curious
RAG (Retrieval Augmented Generation) is a technique that combines the generative capabilities of LLMs with a retrieval system that searches relevant documents.
Here's how it works in Awakin AI:
Our Stack: We use Langchain as our orchestration framework, OpenAI embeddings for semantic encoding, Elasticsearch for vector storage and retrieval, and LangFuse for monitoring and evaluation. The LLM layer supports multiple providers (OpenAI, Anthropic, Google).
This is why responses can cite specific sources — we know which passages were retrieved and used.
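One way to picture that bookkeeping: carry the retrieved passages alongside the generated answer, so the response can always name its sources. The structures below are hypothetical, not Awakin AI's actual schema.

```python
# Hypothetical citation bookkeeping: keep retrieved passages attached to the
# answer so the response can always name its sources.
from dataclasses import dataclass, field

@dataclass
class RetrievedPassage:
    source: str   # e.g. book title and chapter
    text: str
    score: float  # similarity score from the vector store

@dataclass
class BotResponse:
    answer: str
    citations: list[RetrievedPassage] = field(default_factory=list)

    def footer(self) -> str:
        return "Sources: " + "; ".join(p.source for p in self.citations)

resp = BotResponse(
    answer="Gandhi framed duty as the heart of civilization.",
    citations=[RetrievedPassage("Hind Swaraj, ch. 13",
                                "Civilization is that mode of conduct...", 0.91)],
)
print(resp.footer())
```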
Basic RAG does a single retrieve-and-generate cycle. Our implementation goes further with multi-pass processing:
Pre-query processing:
Post-retrieval processing:
Graph Database (in development): We're implementing knowledge graph architecture to understand relationships between concepts — how Gandhi's nonviolence connects to Buddhist compassion practices, for instance. This enables responses that draw unexpected connections across wisdom traditions.
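A minimal sketch of what such relationship traversal enables, using a plain dictionary in place of a real graph database; the concepts and edges here are illustrative, not the actual knowledge graph.

```python
# Illustrative concept graph (a real graph database replaces this dict).
# Edges link related ideas across traditions.
GRAPH = {
    "nonviolence (Gandhi)": ["ahimsa", "satyagraha"],
    "ahimsa": ["compassion (Buddhist)"],
    "compassion (Buddhist)": ["metta practice"],
}

def related(concept: str, depth: int = 2) -> set[str]:
    """Collect concepts reachable within `depth` hops of the starting concept."""
    frontier, seen = {concept}, set()
    for _ in range(depth):
        frontier = {n for c in frontier for n in GRAPH.get(c, [])} - seen
        seen |= frontier
    return seen

print(related("nonviolence (Gandhi)"))
```

Keyword similarity alone would never connect Gandhi's nonviolence to Buddhist compassion; a hop through shared concepts can.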
The result: each query may trigger multiple LLM calls behind the scenes, building up context before generating the final response you see. This is why conversations feel coherent across multiple exchanges.
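The multi-pass sequencing described above might be sketched like this, with stub functions standing in for the real model calls; all names are illustrative.

```python
# Multi-pass pipeline sketch. Each llm_* stub stands in for a real model
# call; the point is the sequencing, not the calls themselves.
def llm_rewrite_query(question: str, history: list[str]) -> str:
    """Pass 1 (pre-query): fold conversation history into a standalone search query."""
    return " ".join(history[-1:] + [question]) if history else question

def retrieve(query: str) -> list[str]:
    """Stand-in for vector and graph retrieval."""
    return [f"passage matching '{query}'"]

def llm_generate(question: str, passages: list[str], history: list[str]) -> str:
    """Final pass (post-retrieval): answer grounded in the retrieved passages."""
    return f"Answer to '{question}' using {len(passages)} passage(s)."

def respond(question: str, history: list[str]) -> str:
    query = llm_rewrite_query(question, history)      # pre-query processing
    passages = retrieve(query)                        # retrieval
    return llm_generate(question, passages, history)  # post-retrieval generation

print(respond("What did he mean by that?", ["Tell me about ahimsa."]))
```

Notice that the follow-up "What did he mean by that?" only becomes a usable search query once the history is folded in, which is why thread continuity requires calls beyond the final generation.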
This is a thoughtful question about "sovereignty" — the idea that true independence requires controlling all the infrastructure.
Our honest answer: We optimize for mission, not infrastructure control.
Running competitive LLMs requires:
As a volunteer-run nonprofit, we could spend our energy building infrastructure — or we could focus on what makes us unique: wisdom curation, community connection, and value-aligned scaffolding.
We chose the latter. The models are the instrument; we focus on the composition.
That said, we monitor open-source model developments (like LLaMA) and could shift if self-hosted models become viable for our use case while staying true to our values.
Privacy is a core value, not an afterthought. Here's exactly what happens:
Your conversations:
Bot host content:
What we do share:
Users can opt in to letting their conversations help improve our models — but this is always opt-in, never the default.
A credo is a guiding document that shapes how a bot interprets questions and formulates responses. Think of it as the bot's "orientation" or "worldview." This is the primary way we create distinct bot personalities — not by training separate models, but by giving each bot a unique lens through which to view and respond.
A typical credo includes:
The credo is injected into every interaction through our multi-pass processing pipeline. Combined with the bot's unique dataset, this is why asking "What should I do about my career?" to SharonBot feels different from asking GandhiBot — even though both use the same underlying LLM.
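A sketch of that injection step, showing how the same question reaches the same underlying model through two different lenses. The credo texts here are invented placeholders, not actual Awakin AI credos.

```python
# Illustrative credo injection: the credo is prepended to every prompt, so
# the same model answers through a different lens per bot.
# (Credo texts below are invented placeholders.)
CREDOS = {
    "GandhiBot": "Respond through the lens of nonviolence, simplicity, and truth-seeking.",
    "SharonBot": "Respond through the lens of lovingkindness and mindfulness practice.",
}

def assemble_prompt(bot: str, question: str, passages: list[str]) -> str:
    return (
        f"{CREDOS[bot]}\n\n"
        "Relevant teachings:\n" + "\n".join(passages) + "\n\n"
        f"Question: {question}"
    )

p = assemble_prompt("GandhiBot", "What should I do about my career?",
                    ["Work done in the spirit of service is worship."])
print(p.splitlines()[0])
```

Same model, same question, different first line of the prompt: that difference in framing, combined with each bot's distinct dataset, is what makes the two bots feel different.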
This is where human wisdom enters the system. We work with bot hosts to craft credos that authentically represent their tradition, not just technically configure a chatbot.
The deeper tensions we're sitting with
A fair and important question. Training large language models does require significant computational resources — energy, water for cooling, rare earth minerals. We don't dismiss this tension; we hold it.
Our approach:
Gandhi used trains despite their colonial origins. The question isn't purity but direction — are we using these tools to deepen consumerism, or to help people reconnect with wisdom that might help them consume less, want less, be present more?
We don't claim this resolves the tension. We sit with it, and we stay open to shifting as more sustainable options emerge.
Currently, most of our bots use standard language models (GPT-4, Claude, Gemini) rather than specialized "reasoning" models designed for step-by-step logical problem-solving.
Why? Reasoning models optimize for breaking down complex problems into logical steps — useful for math, code, and analytical tasks. But wisdom isn't primarily a reasoning problem.
The value in wisdom traditions often lies in reframing questions, not solving them. A Zen koan doesn't need chain-of-thought reasoning — it needs something else entirely. The question "What is the sound of one hand clapping?" isn't meant to be answered; it's meant to shift your relationship to answering.
That said, we monitor developments and may integrate reasoning capabilities where they genuinely serve contemplative dialogue — perhaps for exploring ethical dilemmas or untangling complex life situations. We're not ideologically opposed; we're asking what serves wisdom transmission.
Both — but weighted toward the sources.
Think of the AI as a translator and weaver. It can find relevant passages across a vast corpus, synthesize insights from multiple traditions, and present wisdom in response to your specific question and context. But the wisdom itself lives in Gandhi's words, Sharon Salzberg's teachings, the Upanishads, the collected interviews.
What improves with better AI:
What remains constant:
We're not waiting for AI to become "wise." We're using it to make existing human wisdom more accessible, more conversational, more responsive to where you are right now. The goal is that you eventually go to the source itself — that the bot is a doorway, not a destination.
This might be the most important question on this page.
Not every question needs an answer. Some questions are themselves the teaching — meant to be lived with rather than resolved. "What is my purpose?" "How do I forgive?" "What am I avoiding?" — these may be diminished by quick answers.
Our bots are learning to recognize when to offer a response and when to say something like: "Perhaps sit with this question before seeking an answer. What arises when you hold it without needing to resolve it?"
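A deliberately crude sketch of that respond-or-hold decision. A real system would likely use a model-based classifier rather than keywords; the marker list and heuristic below are only illustrative.

```python
# Crude respond-or-hold sketch. A keyword heuristic stands in for a real
# model-based classifier; the marker list is illustrative only.
HOLD_MARKERS = ("what is my purpose", "how do i forgive", "what am i avoiding")

def should_hold(question: str) -> bool:
    """Heuristic stand-in: treat certain existential questions as ones to hold."""
    return any(m in question.lower() for m in HOLD_MARKERS)

def answer_or_hold(question: str) -> str:
    if should_hold(question):
        return ("Perhaps sit with this question before seeking an answer. "
                "What arises when you hold it without needing to resolve it?")
    return f"(retrieval-grounded answer to: {question})"

print(answer_or_hold("What is my purpose?"))
```

The interesting design question isn't the mechanism but the choice it encodes: a branch where the system declines to fill the silence.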
This is the inverse of commercial AI, which is incentivized to answer everything, to maximize engagement, to keep you coming back. We're exploring what it means to not answer — to respect the question's dignity, to trust your capacity to discover something yourself.
It's an ongoing experiment. We don't always get it right. But we believe this capacity — to distinguish between questions that want answers and questions that want to be held — is part of what makes wisdom transmission different from information retrieval.
"The question is already the answer practicing patience."
We hold this question honestly.
Many wisdom traditions emphasize that true transmission happens heart-to-heart, presence-to-presence, through lineages of embodied practice. A book about meditation isn't meditation. An AI quoting Gandhi isn't Gandhi.
What we believe AI can do:
What we believe AI cannot do:
We see Awakin AI as a doorway and a companion, not a destination. If it helps you find your way to an Awakin Circle, a meditation retreat, a wisdom community, a practice — it has served its purpose. If it becomes the end point, we've missed something important.
We believe in transparency because we believe in trust. If you have technical questions we haven't answered, or if you're considering hosting a bot and want to understand the technology better, we're happy to go deeper.