Monday, March 9, 2026


Introducing Forward Arrow — Human-First AI Stewardship

If you have landed on this blog recently, you may notice something a little different.

For many years, The Cat With No Fur has simply been a place where I write about life. Thoughts about family, technology, discipline, faith, and the strange journey of trying to live well in a complicated world.

Those themes are still here.

But over the last couple of years something new has entered the conversation for almost everyone: artificial intelligence.

AI tools are appearing everywhere. They can write articles, summarize information, generate images, and assist with research. In many ways they are remarkable tools. In other ways, they raise questions that most organizations are only beginning to consider.

Questions like:

  • How should artificial intelligence be used responsibly?

  • What information should never be entered into AI systems?

  • How can organizations protect trust while adopting new technology?

Those questions led me to begin building something called Forward Arrow Services.

What Forward Arrow Is

Forward Arrow is focused on one idea:

Helping organizations steward artificial intelligence responsibly.

Churches, nonprofits, and small organizations are beginning to experiment with AI tools, often without policies, guidance, or leadership oversight. In many cases people are simply trying to figure things out as they go.

Forward Arrow exists to help organizations approach AI adoption with:

  • clarity

  • stewardship

  • human-centered governance

The goal is not to slow down innovation.

The goal is to make sure technology serves people rather than replacing human judgment and responsibility.

The Idea of Human-First AI

Artificial intelligence is powerful, but it is still a tool.

Human beings remain responsible for:

  • leadership

  • ethical decisions

  • stewardship of information

  • the trust placed in organizations

A Human-First AI approach means that technology supports these responsibilities rather than replacing them.

AI can assist with research, writing, and organization.

But leadership, wisdom, and accountability must remain human.

Why I Write About This Here

This blog has always been a place where I think out loud.

The ideas behind Forward Arrow did not appear overnight. They grew out of years working in technology environments where reliability, responsibility, and systems thinking mattered.

Artificial intelligence is simply the newest chapter in that ongoing conversation.

From time to time you will now see posts here about:

  • AI governance for churches and nonprofits

  • AI stewardship

  • Human-First AI

  • leadership in the age of intelligent tools

These reflections help shape the work I do through Forward Arrow Services.

Looking Forward

Technology will continue advancing rapidly.

But the most important question will always remain the same:

How will we choose to use it?

Artificial intelligence should expand human capability, strengthen organizations, and support communities.

If it does those things, it will be a powerful tool for good.

If not, it risks becoming just another example of technology moving faster than wisdom.

My hope is that Forward Arrow can help more organizations move forward thoughtfully.

And as always, this blog will remain a place to think out loud about the journey.

— Dan


The Future of Human-First AI in Churches and Nonprofits

Artificial intelligence will continue advancing rapidly.

Within a few years, AI tools will likely become part of many everyday organizational tasks.

The question is not whether churches and nonprofits will encounter AI.

The question is how they will respond to it.


Two Possible Paths

Organizations could adopt AI casually, allowing tools to spread informally without policies or oversight.

Or they could adopt AI intentionally, guided by principles of stewardship and governance.

The second path leads to stronger outcomes.


Why Human-First AI Matters

Human-First AI ensures that technology supports mission rather than redefining it.

For churches, this means protecting the deeply human relationships at the heart of ministry.

For nonprofits, it means safeguarding the trust placed in them by donors and communities.

AI should increase organizational capacity while preserving the values that make these organizations meaningful.


Leadership in a New Technological Era

Churches and nonprofits have an opportunity to model ethical leadership in technology adoption.

By practicing AI stewardship and governance, they can demonstrate that innovation and responsibility can coexist.

The goal is not simply technological advancement.

The goal is using technology in ways that strengthen communities and support the people these organizations serve.


Creating an AI Policy for Churches and Nonprofits

As artificial intelligence becomes more widely available, organizations need clear guidance for its use.

An AI policy is one of the simplest and most effective tools for responsible technology adoption.

It does not need to be complicated.

But it should be intentional.


What an AI Policy Should Include

A basic AI policy typically addresses five areas.

Approved Tools

Which AI tools are permitted for staff use?

Organizations may choose to approve only specific tools that meet their security and privacy standards.


Confidential Information

Policies should clearly state what information may never be entered into AI systems.

Examples include:

• counseling notes
• donor financial information
• private member records


Human Review

AI-generated content should always be reviewed by a responsible person before publication or use.

AI is an assistant, not an authority.


Training

Staff members should understand how AI tools work and what limitations they have.

Training helps prevent accidental misuse.


Leadership Oversight

Church leaders or nonprofit boards should periodically review AI policies and update them as technology evolves.


Policies Enable Innovation

Many organizations hesitate to adopt AI because they fear making mistakes.

Policies actually make experimentation safer.

When clear guidelines exist, staff members can explore new tools confidently.

Responsible policies create space for innovation.

AI Stewardship & Human-First AI

What Is AI Stewardship?

The term AI stewardship is beginning to appear more frequently in discussions about responsible technology.

It is a simple but powerful idea.

AI stewardship means recognizing that artificial intelligence is a powerful tool that must be used with responsibility, care, and ethical leadership.

Just as organizations steward finances, facilities, and relationships, they must also steward technology.


Why Stewardship Matters

Artificial intelligence systems are capable of producing impressive outputs.

They can generate text, analyze data, summarize research, and assist with decision-making processes.

But these systems also have limitations.

AI can produce inaccurate information.
It can misinterpret context.
It can generate convincing but incorrect answers.

Without human oversight, these errors can spread quickly.

AI stewardship ensures that technology remains accountable to human leadership.


Stewardship Is Not Resistance

Some people assume that responsible AI discussions are simply attempts to slow down innovation.

That is not the case.

Stewardship does not reject technology.

Instead, it recognizes that powerful tools require thoughtful use.

Responsible organizations adopt AI intentionally rather than impulsively.


The Role of Leadership

AI stewardship requires leadership involvement.

Boards, executive directors, and pastors should understand:

• how AI tools work
• where they are being used
• what policies guide their use

Leadership provides the ethical framework that technology alone cannot.


Human-First AI

At its core, AI stewardship supports a Human-First approach to artificial intelligence.

Technology should expand human capability while preserving human responsibility.

When organizations adopt AI through stewardship rather than impulse, they create systems that are both innovative and trustworthy.


AI Governance for Nonprofits: Protecting Trust in the Age of Artificial Intelligence

Nonprofit organizations operate on trust.

Donors trust nonprofits to steward resources responsibly.
Volunteers trust leadership to operate with integrity.
Communities trust nonprofits to serve their needs faithfully.

Artificial intelligence has the potential to support nonprofit work in powerful ways.

AI tools can help with:

• grant research
• donor communication
• data summarization
• administrative tasks

But like all powerful tools, AI requires responsible oversight.

That is where AI governance for nonprofits becomes essential.


The Governance Gap

Many nonprofits are already experimenting with AI tools.

Staff members use them informally for writing assistance or brainstorming program ideas.

The problem is that most organizations have no formal guidance for how these tools should be used.

This creates several risks.

Sensitive donor data might be entered into AI tools.
Incorrect AI outputs may be mistaken for factual information.
Staff may rely on tools they do not fully understand.

These are governance issues, not technical ones.


What Responsible AI Governance Looks Like

Nonprofits can adopt AI responsibly without becoming technology experts.

A strong starting point includes:

  1. Identifying approved AI tools

  2. Defining unacceptable uses of AI

  3. Protecting donor and financial information

  4. Ensuring human review of AI outputs

  5. Training staff on responsible AI use

These practices help organizations benefit from AI without compromising trust.


Stewardship and Innovation Can Coexist

Responsible governance does not slow innovation.

In fact, it enables organizations to adopt technology more confidently.

When leaders understand how AI is used within their organization, they can explore new tools while maintaining strong ethical standards.

AI governance simply ensures that technology remains aligned with mission.

AI Governance for Churches - A Call to Stewardship

AI Governance for Churches: Why Every Church Should Think About AI Policy Now

Artificial intelligence is arriving quietly in many churches.

Staff members use AI tools to help write newsletters. Volunteers experiment with AI image generators for event materials. Sermon research tools are beginning to incorporate AI summarization.

In many cases, these tools appear helpful and harmless.

But something important is missing in most churches today:

AI governance.

AI governance simply means establishing thoughtful leadership oversight for how artificial intelligence is used within an organization.

For churches, this matters more than many people realize.


Why Churches Are Particularly Sensitive to AI Risk

Churches often handle deeply personal information.

Pastoral counseling conversations.
Prayer requests.
Member records.
Donor contributions.

If AI tools are used casually, sensitive information could be entered into systems that were never designed to protect that kind of data.

Many AI systems store prompts, use them to train future models, or process information through external servers.

Without clear guidance, a well-meaning staff member could accidentally share confidential information.

This is not a technology problem.

It is a leadership responsibility problem.


What AI Governance Looks Like in a Church

Churches do not need complex technical frameworks to begin practicing responsible AI governance.

A few simple steps make a tremendous difference.

Church leadership should:

• identify which AI tools are approved for use
• define what information should never be entered into AI systems
• require human review of AI-generated content
• periodically review how AI tools are being used

These steps are similar to how churches already govern financial systems or membership records.

AI governance is simply the next extension of responsible stewardship.


Technology Should Support Ministry, Not Replace It

Artificial intelligence can help churches operate more efficiently.

It can assist with communication, help summarize research, and reduce administrative workload.

But ministry itself remains human.

AI should support people in their calling rather than replacing the human relationships that sit at the heart of faith communities.

Responsible governance ensures that technology strengthens ministry rather than distracting from it.


The Opportunity for Thoughtful Leadership

Churches have an opportunity to lead responsibly in this new technological moment.

By adopting clear AI governance practices now, they can model ethical leadership for their communities.

The goal is not fear of technology.

The goal is wisdom in its use.

Human-First AI Stewardship and Governance

The Human Side of Artificial Intelligence

Much of the conversation around artificial intelligence focuses on capability.

People ask what AI can do, how fast it works, and how powerful it might become.

Those questions are interesting, but they are not the most important ones.

The more important question is:

How will we choose to use it?

Technology does not determine values.

People do.

Artificial intelligence can help organizations write newsletters faster or analyze information more efficiently.

But AI cannot replace leadership, judgment, or responsibility.

Those things remain deeply human.

When organizations adopt AI without thinking carefully about governance, they risk allowing tools to shape behavior rather than guiding those tools intentionally.

Human-First AI simply reverses that relationship.

People lead.
Technology assists.

Forward Arrow was created around that principle.

The goal is not to slow innovation or resist technology.

The goal is to ensure that artificial intelligence strengthens the work organizations are already doing to serve their communities.

Technology should expand human capability.

It should never replace human responsibility.

AI Governance and Stewardship in 2026 and Beyond

Stewarding Artificial Intelligence

One of the most interesting aspects of artificial intelligence is that it is arriving quietly.

Most technological revolutions were obvious.

The internet changed communication.
Smartphones changed how we interact with information.

AI is different.

It is slipping into everyday tools—writing assistants, research helpers, software integrations—often without organizations fully realizing what they are adopting.

That creates both opportunity and risk.

Artificial intelligence can save time, assist with writing, summarize research, and help organizations operate more efficiently.

But it can also introduce new challenges.

Sensitive information can be shared accidentally.
Incorrect outputs can be mistaken for facts.
Staff members may use tools without understanding how they work.

This is why stewardship matters.

Stewardship is a word often used in faith communities, but it applies equally well to technology.

Organizations are entrusted with resources, information, and relationships.

Artificial intelligence should be used in ways that protect that trust rather than undermine it.

Responsible AI use means:

  • protecting confidential information

  • reviewing AI-generated outputs

  • establishing clear policies for staff

Technology should strengthen the mission of an organization, not compromise it.

That philosophy sits at the heart of Forward Arrow.

Introducing Forward Arrow Services

For most of my career, I have worked in technology environments where systems had to work reliably.

Infrastructure, databases, monitoring systems—these are the quiet parts of technology that most people never see, but they are essential to making modern life function.

Recently the world has been flooded with discussion about artificial intelligence. AI tools are appearing everywhere, promising to write, summarize, analyze, and automate tasks.

Some of these tools are genuinely useful.
Some are overhyped.

But the real challenge organizations face today is not the technology itself.

The challenge is how to use it responsibly.

That is the reason Forward Arrow Services exists.

Forward Arrow focuses on helping organizations adopt artificial intelligence in a thoughtful, human-centered way.

Churches, nonprofits, small organizations, and even healthcare providers are beginning to experiment with AI tools. Many of them are doing so without clear policies, guidance, or governance.

Forward Arrow helps leaders answer important questions:

  • What AI tools are appropriate to use?

  • What information should never be shared with AI systems?

  • How can organizations protect trust while still benefiting from new technology?

Artificial intelligence will not disappear. It will only become more common.

The real opportunity is not simply adopting AI faster than everyone else.

The opportunity is learning how to steward it responsibly.

That is the work Forward Arrow hopes to support.