AI Governance: A Practical Primer for Getting Started

The Questions That Turn Abstract Principles into Clear Decisions

The Governance Conversation

This week, I was teaching a workshop on building chatbots when one of the participants asked me a question I should have been more prepared to answer: How does an organization actually begin thinking about governance?

I realized I'd been assuming people knew where to start, but the process really is abstract for most folks. So for this issue of Core Concepts, I decided to put together a basic framework.

This isn't a one-size-fits-all policy, because good governance has to grow from how your organization actually works. It is, hopefully, a solid starting point for people who want to develop their own approach.

Because here’s the thing: your employees are already using AI. They're asking ChatGPT to help draft emails, running reports through analysis tools, using Grammarly's AI features—what some people call "shadow AI." This isn't because they're trying to circumvent rules or be sneaky.

Most of the time, it's simply because nobody has told them what's okay and what isn't.

In other words, a lack of governance.

Starting the Governance Conversation

I've noticed that most organizations tackle AI governance completely backwards. They start by trying to define "responsible AI" or writing elaborate policies before they even know what specific decisions they need to make. You end up with impressive-sounding documents that don't actually help someone figure out if they can use AI to summarize their meeting notes.

What I've found works better is starting with eight straightforward questions. They focus on the bulk of the practical decisions your people are trying to make right now.

If you can get clear answers to these questions, you'll have something that actually helps guide day-to-day behavior.

The Eight Questions That Drive Everything

At a high level, starting with the following eight questions can help most organizations home in on solid governance:

1. What AI tools can our people use? This sounds simple, but it's where many organizations get stuck. Are you approving specific tools by name? Creating criteria that tools must meet? Leaving it up to individual judgment? Your answer shapes everything else.

2. What information can go into AI tools? Every AI tool processes data differently. Some store everything you input, others delete it immediately, and most fall somewhere in between. You need clear rules about what types of information are okay to share and what should never leave your organization through an AI tool.

3. What tasks can AI help with? AI can assist with writing, analysis, research, coding, design, and dozens of other activities. But not every task is appropriate for AI assistance in every context. Which activities get the green light, and which require human-only approaches?

4. How do we ensure AI output meets our standards? AI tools make mistakes, have biases, and sometimes produce confident-sounding nonsense. What level of human review and verification do you require? Who's ultimately responsible when AI tools are involved in creating work products?

5. Who decides what's allowed? Someone needs authority to approve new tools, interpret policies when situations aren't clear, and make judgment calls about novel AI applications. Who has this authority, and how do decisions get made?

6. What could go wrong and how do we prevent it? Every organization faces different AI-related risks based on their industry, size, and regulatory environment. What specific problems are you trying to avoid, and what safeguards will you put in place?

7. When do we tell others about AI use? Your clients, customers, partners, and regulators may care whether you used AI assistance. When is disclosure required or expected? How do you communicate about AI use transparently?

8. How do we track what's actually happening? Policies only work if people follow them and you can tell whether they're working. How will you monitor AI use, gather feedback, and know when your approach needs adjustment?
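One lightweight way to make the answers to these eight questions concrete is to record them in a structure anyone can consult. The sketch below is purely illustrative; the field names, sample tools, and the `is_permitted` helper are hypothetical, not a recommended policy or a real product.

```python
# A hypothetical policy record mapping each of the eight questions to a
# concrete, checkable answer. All names and values are illustrative.
ai_policy = {
    "approved_tools": ["ChatGPT Team", "Grammarly"],        # Q1: which tools
    "allowed_data": ["public", "internal"],                 # Q2: what information
    "allowed_tasks": ["drafting", "research", "analysis"],  # Q3: what tasks
    "review_required": True,                                # Q4: quality control
    "decision_owner": "AI governance lead",                 # Q5: who decides
    "known_risks": ["data leakage", "hallucinated facts"],  # Q6: risks to prevent
    "disclosure_rule": "disclose in client deliverables",   # Q7: when to tell others
    "review_cadence_days": 90,                              # Q8: how often to revisit
}

def is_permitted(tool: str, data_class: str, task: str) -> bool:
    """First-pass check of a proposed AI use against the recorded policy."""
    return (
        tool in ai_policy["approved_tools"]
        and data_class in ai_policy["allowed_data"]
        and task in ai_policy["allowed_tasks"]
    )
```

Even a simple record like this forces the organization to write down one answer per question, which is the whole point of the exercise.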

Why These Questions Work

It’s important to emphasize that these eight questions focus on decisions, not definitions. You don’t need to master the mechanics of large language models or track every new paper.

You need to make clear, practical choices for your organization.

The questions build on one another. Deciding “What tools can people use?” shapes “What information can go into AI tools?” Your approach to quality control influences how you handle disclosure and communication.

Most importantly, they move you from reactive to proactive. Instead of scrambling when someone asks if they can use a new AI tool, you’ll have a framework for consistent, defensible decisions.

Before You Write Any Policies

Start by finding out what's actually happening. This isn't about catching people breaking rules that don't exist yet. It's about understanding your starting point so you can make realistic decisions.

Ask your team members:

  • What AI tools have you used in the last month for work?

  • What work tasks did you use these tools for?

  • What types of information did you input into these tools?

  • Did you review or verify the AI output before using it?

  • Have you disclosed AI use to clients or in any work products?

You'll probably discover that AI use is more widespread and varied than you expected. Marketing might be using AI for social media content while finance runs spreadsheet data through analysis tools.

Pay attention to the gray areas—situations where people weren't sure what to do:

  • "I wanted to use AI to analyze customer feedback data but wasn't sure if I should"

  • "I used AI to draft a client proposal but then rewrote most of it"

  • "I'm not sure if using Grammarly's new AI features counts as using AI"

These gray areas show you exactly where your governance framework needs to provide clarity.

Connect to What You Already Do

You're not starting from scratch. You already handle similar challenges:

Information handling: You classify information as public, internal, or confidential. You have rules about what goes in client contracts and what stays inside the organization. AI tools are just another category of external service that processes information.

Quality standards: You have review processes for client deliverables and approval workflows for marketing materials. AI-assisted work fits into these existing frameworks.

Vendor management: You evaluate software purchases and review terms of service for business tools. AI tools need the same evaluation.

Disclosure: You know when to tell clients about subcontractors and how to label different types of content. AI use follows similar principles.

Making Decisions People Can Follow

The most effective approach gives people simple ways to categorize any AI use scenario: green light, red light, or proceed with caution.

Green Light scenarios are automatically okay within defined boundaries:

  • Using approved writing assistants for internal emails with standard review

  • Analyzing publicly available market data with pre-approved tools

  • Getting help brainstorming ideas for internal projects (no confidential information)

Red Light scenarios are firm boundaries:

  • Uploading customer data to unapproved AI tools

  • Using AI for final decisions on hiring without human review

  • Sharing proprietary processes with AI tools

Yellow Light scenarios need additional approval:

  • New AI tools that haven't been evaluated yet

  • Higher-stakes applications that need extra safeguards

  • Client work where AI use requires disclosure

When someone wants to try something new, they should be able to quickly figure out which category it falls into and what to do next.
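That quick categorization can even be sketched as a few lines of code. This is a minimal illustration of the green/yellow/red check, assuming an organization keeps a list of approved tools and sensitive data types; the rules, names, and thresholds here are stand-in examples, not a real policy engine.

```python
# Illustrative sketch of the three-tier decision check described above.
# APPROVED_TOOLS, SENSITIVE_DATA, and the rules are assumed examples.
APPROVED_TOOLS = {"approved-writing-assistant", "approved-analytics"}
SENSITIVE_DATA = {"customer_data", "proprietary_process"}

def classify_scenario(tool: str, data_type: str, is_client_work: bool) -> str:
    """Return 'red', 'yellow', or 'green' for a proposed AI use."""
    # Red light: firm boundary, no exceptions.
    if data_type in SENSITIVE_DATA and tool not in APPROVED_TOOLS:
        return "red"
    # Yellow light: needs additional approval first.
    if tool not in APPROVED_TOOLS:
        return "yellow"  # tool hasn't been evaluated yet
    if is_client_work:
        return "yellow"  # disclosure to the client may be required
    # Green light: approved tool, non-sensitive data, internal use.
    return "green"
```

The value isn't the code itself; it's that writing the rules down this plainly exposes exactly where your policy is still ambiguous.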

Putting It All Together

Your specific answers to the eight questions will depend on your industry, risk tolerance, and regulatory environment. A healthcare organization will have different boundaries than a marketing agency.

Start with your strongest convictions. What uses of AI would definitely worry you? What applications seem clearly beneficial with minimal risk? Build out from there.

Test your framework with real scenarios from your audit. If most things end up needing special approval, your green light criteria might be too restrictive. If everything looks fine, you might need clearer boundaries.

You don't need to solve AI governance perfectly on day one. Your answers today will be different from your answers six months from now, and that's exactly how it should be. Technology evolves, your team gains experience, and you learn what works in your specific context.

What matters most is moving from reactive to proactive. Effective governance feels routine when it's working well because it becomes part of how you normally evaluate tools, manage information, and communicate with stakeholders.

The goal isn't (and shouldn’t be) to create an elaborate AI bureaucracy. It's to extend your existing good business judgment to cover new types of tools.

Ready to get started? At North Light AI, we help organizations move past the hype and put AI to work in ways that are practical, defensible, and human-centered. Whether you’re building governance frameworks, training your team, or exploring new products, our focus is on clarity and impact—not complexity for its own sake.

If you’re ready to move from experimenting with AI to using it with confidence, we’d love to talk. Reach out to start a conversation about how North Light AI can support your goals.