
Should Leaders Use AI at All?

Ethical and practical things to think about before you get started.

In The Good Place, Chidi Anagonye, the anxious professor of moral philosophy, agonizes over almond milk. He loves the taste but knows the environmental cost of almond groves, so every sip feels morally compromised.

The joke lands because we all feel it.

Every modern choice has hidden costs. Gasoline cars warm the planet. Electric cars rely on rare minerals and a carbon-heavy grid. Even “good” tools carry shadow consequences.

AI is no different. It’s full of promise and entangled with harm. So when thoughtful leaders ask me whether they should use AI, I don’t have an easy answer. But I’ve sat with the question, and I can offer a path into it.

The Harms Are Real

AI risk isn’t abstract. The harms are tangible, and they fall into three categories leaders should weigh:

  1. Direct harms: high energy inputs, use of copyrighted work without consent, exploited labor (often in the Global South).

  2. Indirect harms: plagiarism, disinformation, misinformation, job market disruption.

  3. Systemic harms: societal cognitive decline, erosion of trust in institutions, supply chains increasingly run by opaque automated trading or logistics models. And yes, the Skynet problem—the specter of runaway superintelligence.

There are legitimate reasons to opt out. But as Donella Meadows reminds us in her classic essay Leverage Points, systems rarely change because of individual divestment. Real change comes when we shift the rules of the system and the mindsets that drive them.

AI isn’t safe or neutral. But it will be transformative. It’s the most powerful technological advance I’ve seen in my 60 years. So I’ve chosen to engage. Not blindly or without critique. But to try to harness its power and shape its trajectory.

Machine, Mind, or Magnifying Glass?

AI’s direct harms can be mitigated primarily through energy, trade, and labor policy. And it’s essential to put pressure on governments and corporations. But indirect and systemic harms depend less on how the core technology is developed, and more on how leaders choose to implement it. 

Leaders tend to approach AI in three ways: 

  • AI as Machine: “Give it a task, get an output.” This is the dominant posture today (faster search, automated workflows, and pseudo-employees). It’s useful for efficiency but risky when overextended: hallucinations or brittle agents can cause costly errors, like a hallucinated legal precedent or a supply chain model collapsing under novel conditions.

  • AI as Mind: “Use it to improve myself.” Some treat AI as a companion, coach, or therapist. This posture risks false authority, emotional dependency, and even AI-induced psychosis. 

  • AI as Magnifying Glass: “Use it to see more clearly.” This posture is the most promising one for executives leading in complexity. Used this way, AI isn’t an oracle—it’s an instrument that illuminates patterns, tensions, and blind spots, sharpening questions so leaders can see more completely, think more clearly, and act more wisely.

AI-as-Machine holds real potential, but it must be implemented thoughtfully; the magnifying glass posture supports exactly that kind of care. AI-as-Mind has limited utility and real risk, which is why I don’t use AI as a coach or therapist. That way lies madness.

With AI as Magnifying Glass, we remain engaged and accountable for our insights and outputs. As Richard Feynman said, “The first principle is that you must not fool yourself, and you are the easiest person to fool.” This posture helps leaders slow down, stay connected to complexity, engage with nuance, and lead with clarity instead of outsourcing judgment.

My Shift

I ignored AI for a long time. Then, over lunch, a thoughtful rabbi I know nudged me to use it in a way I hadn’t thought of. Not just “give me an answer” but “help me look at this in a new way.”

To my surprise, I found it gave me meaningful insights that sharpened my thinking. LLMs draw associations rather than reason or discern truth, so this style of use plays to their strengths. 

Used this way, AI made my writing sharper, my logic sounder, and my leadership clearer. It helped me cut through the noise to find the signal.

This eventually led me to build rippleIQ: first for myself, then for clients, who embraced it so readily that I decided to offer it publicly. It’s not for speeding through a problem but for seeing it more clearly by asking better questions. I think of it as a reflective space, a kind of library you can talk to. And that can be a game changer.

The Real Alignment Problem

In AI circles, “alignment,” a concept borrowed from economics, means building systems that act in accordance with human values.

But the hard truth is most organizations aren’t aligned with human values either. They reward control over clarity, performance over purpose, and efficiency over effectiveness—often at the expense of the very people doing the work.

Alignment has been my work for two decades. My book Radical Alignment (2020) focused on how teams build trust through shared intent. Operationally, I’ve spent years helping leaders design and build organizations that actually work for people.

Now, through rippleIQ, I’m exploring how AI might help surface where an organization’s values, processes, tools, and structures aren’t pulling in the same direction—and where sustainable shifts are possible and likely to make the biggest impact.

AI alone can’t fix misalignment, but it can illuminate it. And that’s the first step toward meaningful change.

So: Should You Use AI?

That’s not my call to make. But I can tell you that I’ve answered yes, with discernment, limits, reflection, and accountability.

We need principled leaders using AI, not retreating from it. We need thoughtful experiments, not blind acceleration. Because engagement doesn’t have to mean extraction. It can mean stewardship. It can mean shaping something better.

Every sip, every click, every query carries consequence and possibility.

You don’t have to be all-in. You don’t have to be all-out. But you do have to choose how you’ll engage.

Because the question isn’t whether to use AI. It’s whether we’ll use it wisely enough to matter.

If you’re curious to join me on this journey, please follow me on Substack and join the rippleIQ waitlist.

Coming Next 

The next essay in this series is called “If We Use AI, How Do We Use It Well?” and it covers a set of practical, human-centered commitments for engaging with AI responsibly.

Work With Me

Want more ease with your team?

I help build organizations that work for people. That can look like:

  • Fractional COO Support

  • AI for Operations

  • Team Diagnostics & Coaching

If you want more clarity, capability, or calm as a leader, we should talk.

Until next time,

Bob Gower
(bobgower.com)

Organizations That Work