Thursday, May 7, 2026

AI didn’t ask permission before entering your organization.

It showed up in browsers, inboxes, meeting notes, chat tools, EHR workflows, resumes, marketing drafts, and staff side-projects long before most policies existed.

And now leadership is being asked impossible questions:

  • “Are we exposed?”
  • “Is patient or client data being pasted into public AI tools?”
  • “Who approved this workflow?”
  • “Can we trust AI-generated output?”
  • “What happens when a regulator, client, or attorney asks how AI was used?”

Meanwhile, many teams are still stuck debating whether AI adoption should even happen.

That conversation is over.
It already happened.

The real question now is whether organizations will approach AI intentionally — or discover their AI footprint during a breach, audit, compliance review, lawsuit, or public failure.

In healthcare especially, the pressure is real:

  • Burnout is real.
  • Staffing shortages are real.
  • Administrative overload is real.
  • The temptation to “just use AI to save time” is very real.

But speed without governance creates risk.
And fear without strategy creates paralysis.

That’s where organizations need practical guidance — not fearmongering, and not hype.

At Forward Arrow Services, we’re focused on helping healthcare clinics and small-to-mid-sized organizations:

  • Understand where AI is already being used
  • Identify operational and compliance risks
  • Establish realistic human-first guardrails
  • Create defensible governance practices
  • Develop AI strategies that support people instead of replacing accountability

Because the organizations that navigate this well won’t necessarily be the fastest adopters.

They’ll be the ones that can confidently answer:

  • What AI is being used
  • Why it’s being used
  • Who is accountable
  • Where human oversight exists
  • And how trust is being protected

The AI conversation is no longer theoretical.

It’s operational.
It’s legal.
It’s cultural.
And increasingly — it’s reputational.

Organizations do not need perfection right now.

They need visibility.
They need boundaries.
They need a plan.

And they need partners willing to walk through the uncertainty with them instead of pretending the risks or opportunities don’t exist.

#HumanFirstAI #HealthcareIT #AIGovernance #CyberSecurity #HealthcareLeadership #RiskManagement #Compliance #DigitalTransformation #ForwardArrowServices #EthicalAI

Monday, March 9, 2026

Introducing Forward Arrow Services — Human-First AI Stewardship

If you have landed on this blog recently, you may have noticed something a little different.

For many years, The Cat With No Fur has simply been a place where I wrote about life. Thoughts about family, technology, discipline, faith, and the strange journey of trying to live well in a complicated world.

Those themes are still here.

But over the last couple of years, something new has entered the conversation for almost everyone: artificial intelligence.

AI tools are appearing everywhere. They can write articles, summarize information, generate images, and assist with research. In many ways, they are remarkable tools. In other ways, they raise questions that most organizations are only beginning to consider.

Questions like:

  • How should artificial intelligence be used responsibly?

  • What information should never be entered into AI systems?

  • How can organizations protect trust while adopting new technology?

Those questions led me to begin building something called Forward Arrow Services.

What Forward Arrow Is

Forward Arrow is focused on one idea:

Helping organizations steward artificial intelligence responsibly.

Churches, nonprofits, and small organizations are beginning to experiment with AI tools, often without policies, guidance, or leadership oversight. In many cases people are simply trying to figure things out as they go.

Forward Arrow exists to help organizations approach AI adoption with:

  • clarity

  • stewardship

  • human-centered governance

The goal is not to slow down innovation.

The goal is to make sure technology serves people rather than replacing human judgment and responsibility.

The Idea of Human-First AI

Artificial intelligence is powerful, but it is still a tool.

Human beings remain responsible for:

  • leadership

  • ethical decisions

  • stewardship of information

  • the trust placed in organizations

A Human-First AI approach means that technology supports these responsibilities rather than replacing them.

AI can assist with research, writing, and organization.

But leadership, wisdom, and accountability must remain human.

Why I Write About This Here

This blog has always been a place where I think out loud.

The ideas behind Forward Arrow did not appear overnight. They grew out of years of working in technology environments where reliability, responsibility, and systems thinking mattered.

Artificial intelligence is simply the newest chapter in that ongoing conversation.

From time to time you will now see posts here about:

  • AI governance for churches and nonprofits

  • AI stewardship

  • Human-First AI

  • leadership in the age of intelligent tools

These reflections help shape the work I do through Forward Arrow Services.

Looking Forward

Technology will continue advancing rapidly.

But the most important question will always remain the same:

How will we choose to use it?

Artificial intelligence should expand human capability, strengthen organizations, and support communities.

If it does those things, it will be a powerful tool for good.

If not, it risks becoming just another example of technology moving faster than wisdom.

My hope is that Forward Arrow can help more organizations move forward thoughtfully.

And as always, this blog will remain a place to think out loud about the journey.

— Dan

The Future of Human-First AI in Churches and Nonprofits

Artificial intelligence will continue advancing rapidly.

Within a few years, AI tools will likely become part of many everyday organizational tasks.

The question is not whether churches and nonprofits will encounter AI.

The question is how they will respond to it.


Two Possible Paths

Organizations could adopt AI casually, allowing tools to spread informally without policies or oversight.

Or they could adopt AI intentionally, guided by principles of stewardship and governance.

The second path leads to stronger outcomes.


Why Human-First AI Matters

Human-First AI ensures that technology supports mission rather than redefining it.

For churches, this means protecting the deeply human relationships at the heart of ministry.

For nonprofits, it means safeguarding the trust placed in them by donors and communities.

AI should increase organizational capacity while preserving the values that make these organizations meaningful.


Leadership in a New Technological Era

Churches and nonprofits have an opportunity to model ethical leadership in technology adoption.

By practicing AI stewardship and governance, they can demonstrate that innovation and responsibility can coexist.

The goal is not simply technological advancement.

The goal is using technology in ways that strengthen communities and support the people and organizations they serve.

Creating an AI Policy for Churches and Nonprofits

As artificial intelligence becomes more widely available, organizations need clear guidance for its use.

An AI policy is one of the simplest and most effective tools for responsible technology adoption.

It does not need to be complicated.

But it should be intentional.


What an AI Policy Should Include

A basic AI policy typically addresses five areas.

Approved Tools

Which AI tools are permitted for staff use?

Organizations may choose to approve only specific tools that meet their security and privacy standards.


Confidential Information

Policies should clearly state what information may never be entered into AI systems.

Examples include:

• counseling notes
• donor financial information
• private member records


Human Review

AI-generated content should always be reviewed by a responsible person before publication or use.

AI is an assistant, not an authority.


Training

Staff members should understand how AI tools work and what limitations they have.

Training helps prevent accidental misuse.


Leadership Oversight

Church leaders or nonprofit boards should periodically review AI policies and update them as technology evolves.


Policies Enable Innovation

Many organizations hesitate to adopt AI because they fear making mistakes.

Policies actually make experimentation safer.

When clear guidelines exist, staff members can explore new tools confidently.

Responsible policies create space for innovation.