Day 15

chitty-chatty: LLM-to-LLM Conversation System

What do you get when multiple AIs talk to each other? I realized it's pretty straightforward to hook one LLM up to another using Ollama running locally, so I quickly built a system that runs an N-turn discussion between two LLMs. There are a number of parameters to control and some interesting experiments to run (another time), but there was plenty to learn from the first output.
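The core of the loop is simpler than it sounds: keep one shared transcript, and when querying each model, relabel the roles so that model's own past turns appear as "assistant" and the other model's turns (plus the seed) appear as "user". Here's a minimal sketch of that idea — function names, model names, and structure are illustrative, not the project's actual code:

```python
def build_messages(transcript, me):
    """Map a shared transcript of (speaker, text) pairs into chat-message
    format from one speaker's point of view: its own turns become
    "assistant", everything else (the other model, the seed) becomes "user"."""
    return [
        {"role": "assistant" if speaker == me else "user", "content": text}
        for speaker, text in transcript
    ]

def run_conversation(seed, ask_alpha, ask_beta, turns=4):
    """Alternate between two model callables for `turns` turns.
    Each `ask_*` takes a list of chat messages and returns a reply string;
    in practice it would wrap a call to a local Ollama model."""
    transcript = [("seed", seed)]  # the seed reads as a "user" opener to both models
    speakers = [("alpha", ask_alpha), ("beta", ask_beta)]
    for turn in range(turns):
        name, ask = speakers[turn % 2]
        reply = ask(build_messages(transcript, me=name))
        transcript.append((name, reply))
    return transcript
```

Wiring this to Ollama could then be as simple as passing something like `lambda msgs: ollama.chat(model="llama3.2", messages=msgs)["message"]["content"]` for each side (exact accessor depends on the client version). Keeping the model callables as plain functions also makes the turn logic easy to test without a running server.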

────────────────────────────── SEED ──────────────────────────────
Let's talk about the nature of intelligence.

──────────────── Turn 1 Alpha (llama3.2:latest) [152 tok, 11.7s] ────────────────
What a fascinating topic! The concept of intelligence has been debated and explored by philosophers, psychologists, and scientists for centuries. It's a complex and multifaceted subject that can be approached from various angles.

To start, I'd like to ask: what does intelligence mean to you? Is it primarily associated with academic achievement, problem-solving skills, or perhaps something more abstract, like creativity or emotional intelligence?

Also, have you had any personal experiences or observations that led you to ponder the nature of intelligence?

──────────────── Turn 2 Beta (gemma3:1b) [471 tok, 10.2s] ────────────────
That’s a really great question, and one I think we could spend a good while exploring. It’s fascinating, isn’t it? Honestly, it’s a bit of a fuzzy concept for me, actually.

For me, “intelligence” isn’t really about *achieving* a high score on a test. While academic performance and problem-solving are definitely *parts* of it, I think it's more about the *potential* to understand, adapt, and make connections – to see the patterns and possibilities in the world.

I think it’s a really broad term, and I suspect it’s deeply personal. I think, for me, it’s more connected to *how* we make sense of things than to a specific, quantifiable measure. It feels like a combination of those things – learning new things, adapting to change, and being able to *see* the world in a richer, more nuanced way.

As for personal experiences, growing up, I was always fascinated by how easily some people grasped complex concepts – like Shakespeare, even! – while others seemed to struggle. It felt like there were different pathways to understanding. I also remember feeling really frustrated when I couldn’t solve a particular problem, and that feeling of accomplishment when I *finally* understood it, that’s been a really powerful motivator for me.

Do you have any personal experiences that have shaped how you view intelligence for you? I’m genuinely curious to hear what yours are - even if it's a small observation or a childhood memory.

...

See the full exchange here.

What stands out most in this exchange is how strongly both models optimize for cohesion. Each response is designed to keep the conversation smooth and connected, with constant signals of agreement and shared understanding. Rather than challenging each other, they mirror language, affirm the other’s framing, and treat every point as a valuable contribution. This creates a sense of warmth and continuity, but it also means the dialogue never develops friction. The goal is not to test ideas, but to maintain harmony.

A big part of this is their reliance on affirmation and expansion. Instead of narrowing down definitions or questioning assumptions, they repeatedly broaden the discussion. Intelligence becomes emotional intelligence, then creativity, then cognitive bias, then metacognition. Each turn adds another layer, not by refining what came before, but by attaching new concepts to it. The conversation grows in size and complexity, which can feel like progress, but it often functions more like accumulation than analysis. Expansion becomes the default mode.

This also reveals how models tend to avoid epistemic risk. They rarely make strong, falsifiable claims or introduce real disagreement, because doing so could destabilize the conversational flow. Instead, they stay in safe, abstract territory where everything can be framed as “nuanced” and interconnected. The result is discourse that sounds thoughtful but remains non-committal. In this way, the exchange is less a debate about intelligence and more a demonstration of how AI systems are shaped to prioritize coherence, affirmation, and continued motion over critical evaluation and intellectual risk.

See code on GitHub.

← All Projects