The AI Imperative: Why Strategic Action on Generative AI Can’t Wait

Brendan Morgan
Project Manager

Every organization faces a defining question: Will you shape how artificial intelligence transforms your operations, or react after peer agencies have already moved? With 79% of organizations now regularly using generative AI[1] and 64% reporting that AI enables innovation[1], the window for intentional AI strategy is narrowing. The question is no longer whether to adopt AI but how: with appropriate governance, clear pilots, and a realistic roadmap.

Understanding Generative AI: Beyond Traditional Automation

Traditional software follows rigid, preprogrammed rules. Generative AI operates differently: it analyzes your request, draws on patterns learned from vast data, and produces original responses. Whether drafting a policy memo, summarizing a report, or answering customer questions, it creates new content rather than retrieving stored information.

This distinction matters because generative AI affects different work than previous automation waves. McKinsey research indicates that advances in natural language understanding make an additional 25% of total work time technically automatable, work previously considered protected from disruption.[2] The 60 to 70% of employee time spent on language, reasoning, and content creation is suddenly within AI's reach.[2] This technology augments knowledge work itself: analysis, writing, and problem solving that previously required exclusively human judgment.

The Productivity Evidence Is Compelling

Adoption has reached an inflection point. McKinsey’s 2024 Global Survey found 65% of organizations regularly use generative AI, nearly double the 33% reported one year earlier.[3] By 2025, that figure climbed to 79%.[1] Federal agencies demonstrate how quickly implementation scales once organizations move past initial experimentation: a July 2025 GAO review found generative AI use cases across eleven major agencies increased ninefold in a single year, from 32 to 282.[4] The technology moved from experimental curiosity to operational necessity faster than any enterprise technology in recent memory.

The productivity gains are substantial. Federal Reserve Bank of St. Louis research documents a 33% average productivity boost during AI-assisted tasks.[5] A study by researchers from MIT, Princeton, and the University of Pennsylvania measuring nearly 5,000 developers found 26% higher productivity with AI coding assistants.[6] These are not marginal improvements; they represent a fundamental shift in how quickly knowledge work can be completed.

Real-world implementations confirm that these gains translate into results. Swedish fintech Klarna deployed an AI assistant handling 2.3 million customer conversations in its first month, equivalent to 700 full-time agents, while reducing resolution time from 11 minutes to under 2 minutes.[7] Transit agencies are beginning to follow proven private sector models. Chicago Transit Authority's "Chat with CTA" chatbot expanded customer service reach by 63% while enabling staff to intercept urgent rider issues within five minutes, and the agency projects customer service capacity will double within two years of deployment.[8]

The Risks Are Real and Require Active Management

Implementing generative AI without understanding its limitations invites costly failures. The most significant limitation is hallucination: confident but fabricated output. A January 2024 Stanford study found large language models hallucinate between 69% and 88% of the time when asked verifiable questions about federal court cases,[9] though models have improved significantly since then. A database tracking AI errors in legal filings has documented more than 600 cases worldwide where judges identified fabricated citations.[10]

These limitations underscore why governance is not optional. Transit agencies operate under heightened public scrutiny, federal reporting requirements, and board oversight that make unchecked AI errors particularly costly. A fabricated statistic in a board presentation or an inaccurate response to a rider complaint can erode public trust far faster than the efficiency gains AI provides. Clear policies defining where AI can be used, what human review is required, and how outputs are verified are prerequisites for responsible adoption.

The Challenge for Leaders

As you prepare for the year ahead, ask: Does your organization have a deliberate AI strategy, or are you hoping the technology will figure itself out? Organizations achieving the greatest results treat AI not as a technology project but as a business transformation initiative, starting with three foundational elements.

First, establish clear guidance for all staff on acceptable AI use. Employees are already experimenting with generative AI whether organizations have policies or not. A governance framework that defines approved tools, prohibited uses, data handling requirements, and human review protocols reduces risk and gives staff confidence to use these tools productively rather than secretly.

Second, target pilot programs toward low-risk, high-volume, repetitive tasks where productivity gains are measurable. For transit agencies, promising candidates include drafting responses to customer inquiries, generating first drafts of grant applications, accelerating annual National Transit Database (NTD) reporting, and benchmarking performance data against peer agencies. The practical challenge often lies not in the technology itself but in accessing data trapped across disparate vendor systems; agencies that begin mapping their data architecture now and holding conversations with vendors will be better positioned when pilot opportunities arise.

Third, conduct retrospectives on pilot outcomes and use those findings to develop a longer-term strategic plan for AI integration. Initial pilots reveal what works, what fails, and where organizational capabilities need strengthening. A phased roadmap built on actual results positions agencies to scale AI thoughtfully across operations. As a rule of thumb, McKinsey suggests budgeting $3 for change management for every $1 spent on AI development.[11]

Your riders, board, and community deserve an organization that knows exactly why it is pursuing AI and precisely how it plans to get there. We are navigating these same questions with transit agencies across the country and would welcome the opportunity to share what we are learning. If your agency is evaluating where generative AI fits into your operations, reach out to start a conversation.

Sources

[1] McKinsey & Company, The State of AI in 2025

[2] McKinsey & Company, The Economic Potential of Generative AI

[3] McKinsey & Company, The State of AI in Early 2024

[4] U.S. Government Accountability Office, Generative AI Use and Management at Federal Agencies (GAO-25-107653, July 2025)

[5] Federal Reserve Bank of St. Louis, The Impact of Generative AI on Work Productivity (February 2025)

[6] MIT Sloan School of Management, How Generative AI Affects Highly Skilled Workers

[7] Klarna, AI Assistant Handles Two-Thirds of Customer Service Chats

[8] Google Public Sector, Chicago Transit Authority Connects with City: AI Chatbot Bridges Language Barriers and Empowers Riders

[9] Stanford HAI, Hallucinating Law: Legal Mistakes in Large Language Models Are Pervasive (Dahl et al., January 2024)

[10] Damien Charlotin, AI Hallucination Cases Database

[11] McKinsey & Company, Moving Past Gen AI’s Honeymoon Phase: Seven Hard Truths for CIOs (May 2024)

