In the past eighteen months, AI has moved from a line item in the technology budget to a governance responsibility. Boards that are still treating it as the former are accumulating exposure they have not yet been asked to account for.
The question is no longer "should we use AI?" That decision was made when someone in your organization signed the first AI vendor contract. The questions that matter now are about accountability — who owns the outcomes when AI makes a wrong call, what oversight exists, and whether the board has enough information to fulfill its fiduciary duty when AI is embedded in consequential decisions.
If you cannot answer the five questions below with specifics, the next board meeting is a reasonable time to start asking them.
1. What decisions are being made by AI systems right now, and who is accountable for each one?
Most boards have approved AI investments at a high level — a budget allocation, a strategic initiative, a vendor relationship. Far fewer have a clear map of where AI is actually making or influencing decisions across the organization.
This matters because accountability diffuses quickly in AI deployment. When an AI system influences a hiring screen, a credit decision, a supply chain order, or a customer interaction, the question of who is responsible for a wrong outcome is rarely answered in advance. The vendor disclaimed liability in the contract. The business unit that deployed it assumed the AI was validated. The executive sponsor has moved on.
Before the next board meeting, ask management to produce an inventory: every AI system in production, the decisions it influences, and the named executive accountable for its outputs. This is not a technology question. It is a governance question — exactly the kind boards exist to ask.
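For the management team asked to produce that inventory, the structure can be simple. Here is a minimal sketch of what one entry might capture; the field names are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI system inventory a board should be able to request."""
    system_name: str                  # e.g. "resume screening assistant"
    vendor_or_internal: str           # who builds and maintains the model
    decisions_influenced: list[str]   # e.g. ["candidate shortlisting"]
    accountable_executive: str        # the named owner of the outputs
    regulated_process: bool           # touches employment, credit, housing, etc.?
    last_validation_date: str         # when outputs were last independently checked

# Illustrative entry; every value here is hypothetical.
inventory = [
    AISystemRecord(
        system_name="demand forecasting model",
        vendor_or_internal="external vendor",
        decisions_influenced=["weekly supply chain orders"],
        accountable_executive="VP, Supply Chain",
        regulated_process=False,
        last_validation_date="2023-06-01",
    ),
]
```

The point is not the format. It is that each system has a named owner and a documented scope, so accountability is assigned before an incident rather than litigated after one.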
2. How would we know if an AI system was making systematically wrong decisions?
AI systems fail in ways that are structurally different from conventional software failures. Traditional software tends to fail visibly: an error, a crash, an exception someone has to handle. AI systems produce outputs that look reasonable while being systematically biased, degraded, or misaligned with current conditions.
The monitoring question is one most organizations have not resolved. A demand forecasting model trained on 2021 data is still producing forecasts today — forecasts that look like data, not guesses. A customer scoring model can underperform for months before the pattern surfaces in outcomes. An AI-assisted compliance tool can miss an entire category of risk if the training data had gaps that nobody documented.
What the board should ask management: what is the monitoring infrastructure for each AI system in production? What triggers a review? What is the escalation path when performance degrades? "We rely on the vendor" is not an answer — it is a delegation of oversight that the board has not authorized.
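What a concrete answer to the monitoring question might look like, in miniature: a scheduled check that compares recent performance against the level measured at validation and escalates when it degrades. This is a hedged sketch; the metric, thresholds, and escalation actions are placeholders for whatever management actually defines.

```python
def check_model_health(recent_accuracy: float,
                       baseline_accuracy: float,
                       tolerance: float = 0.05) -> str:
    """Return the action to take for one AI system, based on observed drift.

    recent_accuracy / baseline_accuracy: fraction of recent decisions that
    matched the eventual outcome, versus the level measured at validation.
    tolerance: how much degradation is acceptable before review is triggered.
    All names and thresholds here are illustrative, not a standard.
    """
    drift = baseline_accuracy - recent_accuracy
    if drift > 2 * tolerance:
        return "escalate: pause automated decisions, notify accountable executive"
    if drift > tolerance:
        return "review: trigger revalidation of the model"
    return "ok: continue routine monitoring"

# Example: a scoring model validated at 92% agreement, now observed at 80%.
print(check_model_health(recent_accuracy=0.80, baseline_accuracy=0.92))
# -> "escalate: pause automated decisions, notify accountable executive"
```

The board does not need to see the code. It needs to know that something like this exists for every production system, that the thresholds were set deliberately, and that the escalation path ends with a named person.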
3. What is our liability exposure if an AI system produces a discriminatory or harmful outcome?
Regulatory frameworks for AI are hardening. The EU AI Act is in force. US federal agencies are issuing enforcement guidance on AI-driven decisions in employment, credit, and housing. State-level regulation is accelerating in California, Colorado, and a growing number of jurisdictions.
The legal exposure question is not theoretical. It is the question your D&O insurer will ask if an AI-related incident triggers a claim. It is the question regulators will ask when reviewing a compliance failure that involved an automated system. It is the question plaintiffs' counsel will ask when AI-assisted decisions produced disparate outcomes.
Boards should understand, at a minimum: which AI systems touch regulated processes, what third-party audits or bias assessments have been conducted, and whether the company's legal team has reviewed vendor contracts specifically for AI liability allocation. If the last time AI liability was reviewed was at contract signing, that review is out of date.
4. Does management have the information it needs to govern AI, or is it relying on vendor assurances?
This is the explainability question, and it is the one that most directly reveals whether governance is real or performative.
Vendors sell outcomes. They rarely sell the level of transparency that genuine governance requires. When an AI system influences a consequential business decision, the internal team responsible for that decision should be able to explain: what inputs drove the output, what the confidence level was, what the system would have recommended under different conditions. If the answer is "we trust the vendor's model," that is not governance — that is delegation without oversight.
The board's role is to ensure that management has the tools, the contractual rights, and the organizational capacity to actually govern the AI systems the company depends on. Ask management specifically: for your three highest-impact AI systems, can you audit an individual output and explain it to a regulator? If not, what would it take to get there?
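One way to make the "can you audit an individual output" test concrete is a decision log that records, for each consequential output, the information a regulator would ask for. A minimal sketch; the fields are assumptions about what such a log might hold, not a compliance standard.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system_name: str, inputs: dict, output: str,
                    confidence: float, model_version: str,
                    human_reviewer: str | None = None) -> str:
    """Serialize one AI-influenced decision so it can be audited later.

    The fields mirror what the internal team should be able to explain:
    what inputs drove the output, how confident the system was, which
    model version produced it, and whether a person reviewed it.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
        "human_reviewer": human_reviewer,
    }
    return json.dumps(record)

# Example: an AI-assisted credit pre-screen (all values are hypothetical).
print(log_ai_decision(
    system_name="credit_prescreen",
    inputs={"income_band": "B", "tenure_months": 18},
    output="refer_to_manual_review",
    confidence=0.63,
    model_version="v2024.1",
    human_reviewer="analyst_jdoe",
))
```

If the vendor contract does not give the company the data or the rights needed to produce a record like this, that is itself a finding worth reporting to the board.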
5. What is our plan if a major AI vendor fails, gets acquired, or changes pricing?
Concentration risk in AI is real and underappreciated. Organizations that have built operational dependencies on a small number of AI vendors — or, in some cases, a single foundation model provider — have created exposure that looks like a technology problem but is actually a business continuity risk.
Vendor acquisitions change roadmaps. Pricing changes affect build-versus-buy decisions. A vendor's model update can change system behavior without notice. Any of these can force an expensive, time-pressured transition in a system that your operations depend on.
The board should understand: what are the single points of vendor dependency in our AI stack, what is the estimated switching cost and timeline for each, and does management have a documented contingency plan? Boards routinely ask these questions about key suppliers, banking relationships, and technology infrastructure. AI vendors deserve the same scrutiny.
Governance Is Already Your Responsibility
None of these questions require technical expertise. They require the same judgment boards apply to financial controls, legal exposure, and operational risk — applied to a new class of decision-making infrastructure that most governance frameworks have not yet caught up to.
The organizations that navigate the next phase of AI well will not necessarily be the ones with the most sophisticated AI. They will be the ones where the board asked the right questions early enough to build governance before an incident forced the issue.
If you want to understand where your organization's AI governance posture stands before the next board conversation, the AI Readiness Assessment gives you an honest baseline in under ten minutes. If you want a deeper framework — one designed for executive and board-level conversations — the Executive's Guide to AI Readiness covers the governance fundamentals in detail.
The questions above are a starting point. The answers are your organization's responsibility — and ultimately, the board's.