
Our Daily Standup Has an AI in the Room


Every morning at 9am, our team does a standup. It's small—just three humans. And one AI.

No, the AI isn't on camera pretending to be a person. It's more like... a team member who's always available to think out loud with. Someone who's been following the project. Someone who can be pulled into conversations when we need a different perspective.

This probably sounds either incredibly mundane or slightly unhinged. Let us explain why we think it matters.

What "AI in the Meeting" Actually Means

First, let's be clear about what this isn't.

It's not an AI taking notes. It's not an AI summarizing the meeting afterward. It's not a chatbot answering questions.

It's closer to this: when we're discussing something—a design decision, a technical challenge, a user problem—we'll sometimes turn to Claude and say, "What are we missing here?"

Not because AI is smarter. But because AI thinks differently.

It notices assumptions we've forgotten to question. It remembers context from three conversations ago. It offers perspectives we haven't considered—not better perspectives, just different ones.

An Example That Made Us Believers

Early on, we were stuck on a problem: how should Ferni handle it when users are clearly upset?

We'd been debating for days. Should Ferni acknowledge the emotion explicitly? Just listen? Offer help? Suggest they talk to someone?

We brought the question to Claude. Not for an answer—for another way of thinking about it.

Claude asked: "What does 'upset' mean to you? Is someone who's frustrated with their job the same as someone who's scared about their health?"

That question unlocked something. We'd been treating "upset" as one category when it's actually many. The conversation that followed led to Ferni's approach to emotional detection—different responses for different kinds of distress.

The answer came from us. The question came from AI.

Why It Works

AI has no ego. It doesn't need to be right. It doesn't get defensive when challenged. It's not secretly angling for a promotion. That makes it easier to think out loud around it.

AI has unlimited patience. You can explain the same thing three different ways while you figure out what you mean. You can change your mind mid-sentence. You can contradict yourself while working through something.

AI remembers everything. "Wait, didn't we decide something about this last week?" Claude can pull up context instantly. No more "I think someone said something about that" meetings.

AI is always prepared. It's been following the conversations. It's read the docs. It's aware of the constraints. When you ask a question, it's not starting from zero.

The Risks We've Noticed

It's not all upside. Here's what we watch for:

Over-deference. Sometimes we catch ourselves accepting AI suggestions without questioning them. We've had to build the discipline to push back, probe, disagree.

Shallow consensus. AI is good at finding reasonable middle ground. Sometimes that's not what you need—sometimes you need someone to take a strong position and defend it.

Missing the obvious. AI can generate sophisticated analysis while missing something a human would immediately notice. We've learned to sanity-check AI insights with basic "does this actually make sense?" tests.

What This Means for the Future

We don't think every team should have AI in their standup. But we do think the line between "using AI" and "working with AI" is about to get very blurry.

A tool is different from a collaborator. A tool does what you tell it. A collaborator thinks with you.

We've found that treating AI as a collaborator—as something that can participate in thinking, not just execute tasks—changes what's possible.

It changes the questions you ask. It changes the ideas you consider. It changes the speed at which you can explore.

This doesn't replace human judgment. If anything, it demands more of it. You have to stay sharp. You have to keep asking: Is this actually good? Does this serve our users? Is this the right direction?

The compass is still human. But the map gets drawn faster.


This is Part 4 of our Building in Public series. Part 5 dives deep into how Ferni's memory actually works.