
My Takeaways from the 2025 Missional AI Conference
Every leap forward in AI brings with it familiar tensions: excitement and unease, innovation and introspection. At the 2025 Missional AI Conference, the focus wasn’t just on new technologies or dazzling demos. It was on a deeper question:
How do we embrace AI in a way that leads to human flourishing?
It’s a question that forces us to identify our values. Whether those values are grounded in personal ethics, religious beliefs, or organizational principles, they inevitably shape the AI we build and the systems we trust.
This conference approached the topic from a faith-based perspective, but the themes and tensions it surfaced apply in every human context. This post is my summary of the sessions that stood out to me, shared in the spirit of curiosity and mutual sensemaking.
Pat Gelsinger (Former Intel CEO): Human Flourishing as a Mission
Pat Gelsinger’s keynote set the tone for the event. Known for his leadership at Intel, where he helped usher in technologies like Wi-Fi, USB, and multicore processors, Gelsinger offered a perspective that was both deeply technical and profoundly human.
His personal mission:
“I will work on technology that improves the lives of every person on Earth.”
One of his key points: With AI, computers are beginning to adapt to us, not the other way around. That’s a paradigm shift.
But today’s best models are still crude approximations of human cognition. And they are incredibly energy-intensive. For AI to become a net positive, Gelsinger argued, it must be trusted, useful, and open. The opportunity? To build something that genuinely serves all of humanity.
Richard Zhang (Google DeepMind): Teaching AI to Think Better
Zhang’s session looked into the mechanics of reasoning in large language models. One striking insight: simply adding phrases like “Let’s think step-by-step” improves reasoning outcomes.
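To make that concrete, here is a minimal sketch of zero-shot chain-of-thought prompting. Only the prompt construction is the point; the question is a stock example, and the actual model call is left as a stand-in for whatever LLM client you use.

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# The appended trigger phrase is the whole technique; the model
# call itself is a hypothetical stand-in for any LLM client.

COT_TRIGGER = "Let's think step by step."

def with_cot(question: str) -> str:
    """Append a chain-of-thought trigger to a plain question."""
    return f"{question}\n\n{COT_TRIGGER}"

if __name__ == "__main__":
    question = (
        "A bat and a ball cost $1.10 in total. The bat costs "
        "$1.00 more than the ball. How much does the ball cost?"
    )
    # Send this augmented prompt to your model of choice instead of
    # the bare question; the trigger nudges it to show its reasoning.
    print(with_cot(question))
```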
We’re also seeing the rise of “AI as a judge” systems—models that evaluate each other’s outputs, correcting and refining without human input. This opens a path toward self-improving agent flows where AI plans, critiques, and executes across different modalities (text, code, logic, etc.).
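A hedged sketch of what such a loop can look like is below. The `generate` and `judge` callables are hypothetical stand-ins for two separate model calls; only the draft, critique, and revise structure is being illustrated, not any particular system from the talk.

```python
# Sketch of an "AI as a judge" refinement loop. generate() and
# judge() are hypothetical stand-ins for two separate model calls;
# only the draft -> critique -> revise structure is illustrated.

from typing import Callable, Tuple

def refine(task: str,
           generate: Callable[[str], str],
           judge: Callable[[str, str], Tuple[bool, str]],
           max_rounds: int = 3) -> str:
    """Draft an answer, then let a judge model critique it and trigger revisions."""
    draft = generate(task)
    for _ in range(max_rounds):
        ok, critique = judge(task, draft)  # judge returns (passed?, feedback)
        if ok:
            break
        # Fold the judge's feedback into a revision prompt and try again.
        draft = generate(
            f"{task}\n\nPrevious answer:\n{draft}\n\n"
            f"Critique:\n{critique}\n\nRevise the answer accordingly."
        )
    return draft
```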
Zhang shared The AI Scientist paper, which demonstrates a workflow where agents autonomously conduct research, write papers, and review them.
Jean Millard (Meta Fundamental AI Research): A Model for Everyone
Meta’s SeamlessM4T model, named one of Time’s best inventions of 2023, supports speech and text translation in nearly 100 languages, including many with small digital footprints.
This matters. Models that can bridge communication gaps in underrepresented communities aren’t just impressive; they’re necessary. Equity in translation is part of equity in access.
Daniel Whitenack (Prediction Guard): Guardrails and Grace in the Age of AI
One of the more grounded voices at the conference was Daniel Whitenack, a data scientist, founder of Prediction Guard, and co-host of the long-running Practical AI podcast. Dan’s work focuses on making AI systems safe, accessible, and aligned with the people they serve.
Dan brings a rare blend of technical fluency and human depth to the conversation. He’s thinking seriously about how to build large language models that come with built-in guardrails, especially for sensitive domains like healthcare, finance, and education.
I recently had the pleasure of talking with him for a “Data in Chief” segment on The Sensemakers. We talked about AI safety, bias mitigation, and what it looks like to design systems that reflect our best intentions, not just our capabilities.
Liz Grennan (AI Trust and Ethics)
Grennan brought a unique perspective, combining legal expertise (she was formerly McKinsey’s lead analytics lawyer) with a deep grounding in responsible AI.
Her framework for trust includes:
- Technical trust: Does the system work as promised?
- Ethical trust: Is it fair and unbiased?
- Relational trust: Do people feel safe using it? Is it contestable and transparent?
She highlighted real-world failures:
- Insurance models that discriminated based on race and gender.
- Healthcare algorithms that assumed Black patients needed less care.
- Financial systems that scaled historical lending bias.
These aren’t edge cases. They’re reminders that bias at scale is still bias — just more efficient.
James Poulter: The Road to 2027 (and Beyond)
Poulter offered a glimpse into what’s next. OpenAI’s Sam Altman predicts that, if current trends hold, a month’s work may take just one hour by 2030. Poulter also highlighted the AI 2027 forecast, which suggests we will have superhuman AI researchers by mid-2027.
We saw a demo where ChatGPT wrote a product requirements doc, then Vercel’s V0 used it to generate a working website within minutes. No technical skills required.
The takeaway: How do we prepare people for the jobs AI can’t do?
We’ll need more “full-stack professionals” — people who bring leadership, empathy, and strategic thinking to teams augmented by AI. Poulter proposed new apprenticeship models focused on people skills and AI fluency—not just productivity.
Michael Arena: What Makes Us Human
Arena’s session was perhaps the most sobering. Drawing from research by Elon University and Pew, he pointed to growing challenges:
- Declining social intelligence
- Emotional detachment
- Fragmented identity (“Who am I?” becomes harder to answer)
Yet even as AI encroaches on more domains, spirituality, creativity, and social reflection remain uniquely human traits that enrich teams and organizations.
Arena’s message: Humans are still essential, but for reasons we don’t (or can’t) always measure.
Other Notables: Infrastructure, Security, and New Frontiers
- MCP servers (Model Context Protocol): A new standard, something like HTTP for AI, that lets models interact with structured data and tools without custom integration code. dbt Labs is already using this with LLMs. A minimal sketch of the message format follows this list.
- Security vs. Freedom: If users don’t have a safe place to use AI, they’ll turn to risky tools. See The Sensemakers episode with Daniel Whitenack of Prediction Guard for more on secure internal AI platforms.
- Sign language translation: Automated tools powered by AI are emerging to bridge gaps where speech-to-text isn’t effective.
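For a feel of what “without custom code” means in practice: MCP standardizes tool use over JSON-RPC 2.0, so a client can invoke any server’s tools with the same message shape. The sketch below shows an illustrative tools/call request; the tool name and arguments are made up for the example.

```python
import json

# Illustrative MCP-style tool invocation. MCP is built on JSON-RPC 2.0,
# and "tools/call" is the method the spec defines for invoking a tool a
# server exposes. The tool name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_warehouse",  # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

print(json.dumps(request, indent=2))
```

Because every server speaks this same shape, adding a new data source means standing up a server, not writing bespoke glue code in the client.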
The Opportunity Before Us
At the heart of this conference was a simple yet vital reminder:
AI is not just about new capabilities; it’s about new choices.
How we build, deploy, and govern AI systems reflects what we value. And whether those values are shared through faith, philosophy, or policy, they need to be clearly named and made visible.
This moment requires intentionality, transparency, and a renewed focus on what it means to flourish in a world quickly being reshaped by intelligent machines.