Insight Magazine

Evolving Accountant | Spring 2026

The Not-for-Profit’s 4 Pillars of Mission-Aligned AI Adoption

Here’s how not-for-profit organizations can ethically and effectively adopt AI to strengthen their mission.
Andrea Wright, CPA, Partner, Johnson Lambert LLP


The unique challenges not-for-profits (NFPs) face—limited resources, high demand for services, and a constant need for compelling communication—make them particularly well-suited to benefit from generative artificial intelligence (AI). However, successful AI integration requires more than just selecting a tool; it demands a comprehensive strategy to ensure the technology serves the NFP’s mission and not the other way around.

NFP leaders looking to ethically and effectively adopt generative AI across their organization should build their AI foundation on these four pillars.

1. EXPLORING AI’S POTENTIAL

Generative AI has the potential to free up limited resources and amplify mission delivery. Here are a few ways NFPs can leverage AI’s benefits:

  • Enhance fundraising and communication efforts: Generative AI can automate and personalize donor outreach at scale, drafting compelling, tailored emails and solicitations. Further, AI excels at summarizing research for grant proposals, drafting powerful narratives, and generating engaging, on-brand social media content and press releases.
  • Streamline administrative and operational efficiencies: AI can significantly reduce administrative burdens by automating report generation, summarizing complex meeting minutes and documents, improving data entry accuracy, and optimizing scheduling across teams.
  • Boost program impact: AI can analyze large, complex data sets to identify emerging demographic trends, refine service delivery models for maximum efficacy, and more accurately predict community needs than traditional methods.


2. LAYING THE FOUNDATIONAL DATA STRATEGY

Many generative AI tools function successfully as “off-the-shelf” creative partners without accessing your confidential database. Employees can immediately use these tools for drafting content, brainstorming, and summarizing public information without waiting for a massive data cleanup. But to eventually use AI for analyzing donor trends or predicting program outcomes, the quality of your internal data becomes paramount. For advanced applications where AI interacts with your records, organizations must prioritize data hygiene, which involves standardizing formats and purging obsolete records. As part of this data cleanup, NFPs should consider:

  • Ethical data sourcing: Because NFPs handle sensitive constituent data, it’s crucial to have a clear discussion on the legal and ethical considerations of using the data for AI training. This includes ensuring all data is anonymized where appropriate, consent is explicitly secured for its use, and data usage strictly adheres to the organization’s privacy principles.
  • Assessing technological readiness: Successful AI solutions often require significant computational resources. Therefore, NFPs must evaluate their current IT infrastructure, including cloud storage capabilities, network bandwidth, and existing security measures to identify the steps needed to support scalable and secure AI solutions.
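As a concrete illustration of the data-hygiene step, the minimal Python sketch below standardizes inconsistent date formats and masks donor emails before records ever reach an AI tool. The field names and formats are hypothetical, not drawn from any particular system:

```python
import re
from datetime import datetime

def standardize_date(raw: str) -> str:
    """Normalize common date formats to ISO 8601 (YYYY-MM-DD)."""
    for fmt in ("%m/%d/%Y", "%B %d, %Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

def mask_email(email: str) -> str:
    """Anonymize a donor email, keeping only the domain for trend analysis."""
    return re.sub(r"^[^@]+", "donor", email)

# Hypothetical donor record being cleaned before AI analysis
record = {"gift_date": "March 5, 2025", "email": "jane.doe@example.org"}
clean = {
    "gift_date": standardize_date(record["gift_date"]),
    "email": mask_email(record["email"]),
}
```

The same pattern scales: run every record through a small set of normalization and masking functions, and reject anything the functions cannot recognize rather than passing it through unverified.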


3. BUILDING AN AI-READY CULTURE

AI’s success depends on the people using it. Cultivating an AI-ready operational culture through comprehensive training and organizational buy-in is vital to ensuring enthusiastic and effective adoption across all departments.

Before diving into training, it’s important to define the goal. For most NFPs, the goal is to move employees beyond basic AI literacy to AI fluency.

Employees with high AI fluency understand these core concepts:

  • Prompting as delegation: They know that to get a good result, they must brief the AI agent just as they would a human intern—providing context, role, constraints, and examples.
  • Capability discernment: They instinctively know which prompts are high value for AI (e.g., “Summarize these meeting notes”) and which are high risk (e.g., “Fact-check this news event”), saving time by avoiding dead ends.
  • Iterative collaboration: They understand the first output is rarely the final product. They know how to “reply” to the AI agent to refine, edit, and polish the work, treating the tool as a thought partner rather than a search engine.
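The “prompting as delegation” idea above can be sketched as a simple briefing template. The section labels (Role, Context, Constraints, Example) and the fundraising scenario are illustrative, not a prescribed format:

```python
def build_prompt(role: str, context: str, constraints: str,
                 example: str, task: str) -> str:
    """Assemble a briefing-style prompt, mirroring how you'd brief an intern."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Example of desired tone: {example}\n"
        f"Task: {task}"
    )

# Hypothetical donor thank-you briefing
prompt = build_prompt(
    role="You are a development officer at a community food bank.",
    context="The donor gave $500 last year and attended our fall gala.",
    constraints="Under 150 words; warm but not effusive; no jargon.",
    example="'Your gift kept our shelves stocked through the holidays.'",
    task="Draft a personalized thank-you email.",
)
```

The point is not the helper function itself but the habit it encodes: every request carries the same four kinds of briefing information an intern would need.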

Importantly, a one-size-fits-all approach to training is insufficient. Organizations need to develop specialized training pathways tailored to different departments to ensure effective and relevant adoption. For example, development teams might focus on prompt engineering for fundraising letters, program delivery teams on data analysis tools, and finance teams on automated reporting.

While training is a start, true AI fluency comes from continued usage. Leadership should encourage a culture of experimentation where employees feel safe testing these tools on small, low-risk tasks every day. To do so, consider these tips:

  • Measure usage to improve: Adoption should be tracked not just by who has a license but by daily active usage. There’s a direct correlation between frequency of use and fluency.
  • Foster a continuous learning environment: The more employees interact with large language models, the faster they learn to distinguish between what the models excel at (summarization, ideation, and drafting) and where they struggle (nuanced judgment and factual recall of obscure events). High-frequency users quickly learn how to “guide” the AI by customizing the context they provide to get the best responses.
  • Leadership sponsorship: The sustained success of any major organizational change requires executive support. The role of executive leadership is to champion AI initiatives, allocate necessary financial and human resources for implementation and ongoing maintenance, and visibly model the responsible use of AI tools.


4. IMPLEMENTING ESSENTIAL AI SAFEGUARDS AND GOVERNANCE POLICIES

As AI tools become deeply integrated into an NFP’s operations, robust governance is nonnegotiable for protecting the organization and its constituents. A good start is establishing essential AI safeguards and policy frameworks:

  • Identify and address bias: AI algorithms can perpetuate and even amplify existing societal biases if not carefully monitored. NFPs need to adopt strategies for identifying and addressing algorithmic bias to ensure equitable outcomes for all populations they serve. This includes regular auditing of AI outputs and ensuring diverse voices are involved in the development and review of AI-driven processes.
  • Ensure data privacy compliance: NFPs should establish clear, stringent policies that adhere to relevant state and federal privacy regulations. This is particularly critical when using AI tools for constituent data analysis or direct communication.
  • Maintain transparency and accountability: Defining clear lines of responsibility for AI-driven decisions is paramount. Employees must understand when a decision was influenced or made by an AI action and who’s ultimately accountable for the outcome. Further, organizations must ensure audit trails are maintained for all critical AI applications, allowing for review and correction.
  • Follow risk management best practices: Identifying potential security vulnerabilities is a continuous process. This involves establishing protocols for responsible AI deployment, monitoring for unauthorized data leakages through generative tools, and continuously training employees on secure AI practices to protect sensitive information from both internal and external threats.
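One way to make the audit-trail requirement concrete is a thin logging wrapper around every AI call. In this sketch, `call_model` is a hypothetical stand-in for whatever API the organization actually uses; each request and response is appended to a reviewable log:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # append-only log, one JSON record per line

def audited_ai_call(user: str, prompt: str, call_model) -> str:
    """Run an AI request and record who asked what, when, and what came back."""
    response = call_model(prompt)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return response

# Example with a stubbed model so the sketch is self-contained
reply = audited_ai_call("awright", "Summarize these meeting notes: ...",
                        call_model=lambda p: "Draft summary...")
```

Routing all AI usage through one wrapper like this gives reviewers a single place to answer “who asked what, and what did the tool say” when a decision is later questioned.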


It’s vital for leadership to recognize that generative AI models are probabilistic engines, not deterministic databases. They’re designed for creativity and pattern matching rather than factual retrieval. That’s why these safeguards are necessary to ensure they’re used for generating drafts and ideas, not for making final, unverified decisions (i.e., policies should dictate that AI is the drafter, but the human is always the editor and publisher).

Overall, generative AI is a rapidly expanding technology that’s creating new opportunities for NFPs to work smarter and further amplify their missions. However, as with any emerging tool, constant vigilance is necessary to mitigate potential risks. By building on these four pillars, NFPs can harness AI’s benefits while safeguarding the integrity of their work.


This column was co-authored with Johnson Lambert LLP’s David Fuge, chief innovation officer, and Paul Preziotti, CPA, partner.


