Governing Microsoft 365 Data: the Lynchpin to Your Enterprise AI Revolution

Why the dual ideas of AI productivity and AI governance are hard to execute simultaneously, and what you can do about it

The idea disease

“[There is] the disease of thinking that a really great idea is 90% of the work. And if you just tell all these other people ‘here’s this great idea,’ then of course they can go off and make it happen. And the problem with that is that there’s just a tremendous amount of craftsmanship in between a great idea and a great product.”

This insight from Steve Jobs, the visionary CEO behind Apple’s iPhone, underscores how crucial execution is in transforming groundbreaking ideas into revolutionary products.

Today, the rapid emergence of generative AI (gen AI) in the workplace mirrors the transformative impact of the iPhone. After its debut in November 2022, ChatGPT reached 100 million users in just two months, a rate of adoption that eclipses that of mobile phones (which took sixteen years) and, before that, the internet (which took seven years). As I write this in June 2024, 75% of knowledge workers are using AI tools to enhance productivity, creativity, and decision-making at work.

However, this surge into the mainstream presents massive new challenges. The primary concern for enterprise leaders today isn't only harnessing AI for innovation and productivity; it's also securing sensitive data against misuse and potential breaches. This tension creates a dynamic in many companies where employees, despite frequently using gen AI for critical tasks, often hesitate to disclose this, fearing the potential risks to data privacy, security, and corporate policy.

This is where true craftsmanship, the ability to execute, comes into play: balancing the potential and risks of AI requires not just adopting new technologies but also mastering them.

The big gen AI ideas: maximise productivity, minimise security and compliance incidents

Let's dig a little deeper into the two major ideas I mentioned above, which every organisation navigating the shift to AI shares: (a) enhancing productivity to maintain a competitive business and (b) keeping the process incident-free to maintain a trusted business.

This is how I have seen the two ideas playing out at the individual level when enterprise-grade AI assistants are sanctioned for use...

  • (a) Enhancing Productivity:

Gen AI empowers employees to produce their best work swiftly and effectively. For example, marketers can use AI to quickly develop comprehensive marketing reports, which allows them to concentrate on delivering compelling narratives during board meetings.

Similarly, financial analysts can utilise AI in Excel to halve the time it takes to calculate returns on marketing campaigns, setting a standard for efficiency within their teams.

However, the potential benefits don't exist in isolation from potential risks.

  • (b) Minimising Incidents:

Picture being a wheat salesman in the agricultural age, deciding whether to take a traditional two-day route or a two-hour shortcut across a massive suspension bridge. This bridge could drastically increase your productivity by allowing ten times as many deliveries per year. But if the bridge is poorly constructed, this could mean losing your cargo or worse.

This scenario mirrors the dilemma faced by businesses integrating gen AI. The potential for increased productivity is significant, yet so is the risk. To approve the use of these tools on a team, department, or organisational level, leaders seek not just promises on responsible AI from providers like Microsoft but also tangible evidence and tools to make their own decisions about how to manage these data and AI risks.

To address this, we've introduced the Microsoft Purview AI Hub, designed as a comprehensive management tool to monitor and mitigate AI risks effectively.

The neat aspect of the Purview AI Hub is that it gives management and operations an access-controlled bird's-eye view of where to focus your efforts when balancing risk with the rewards of using AI assistants.

From discussions I've had with a number of my multinational customers, it's clear that the pursuit of productivity gains while minimising incidents is a universally accepted goal. While an AI hub can bring it all together in a one-stop-shop view, there's a lot going on under the hood that companies need to validate to get the most out of such a hub.

This is where execution comes in...

Before you walk your gen AI journey, have you made your bed and brushed your teeth?

Let's focus on Microsoft 365's enterprise productivity suite, used by millions of companies globally and the suite I know best. The complexities of today's IT and business processes can make preparing for an enterprise-wide AI copilot deployment daunting.

Yet, distilling these tasks reveals three essential 'data hygiene' practices that my colleagues at Microsoft and I often discuss:

  • Data access: knowing who can reach which sites, files, and conversations, and trimming oversharing.

  • Data protection: classifying and labelling sensitive content so safeguards travel with it.

  • Data lifecycle: retaining what the business needs and defensibly deleting what it doesn't.

Establishing these practices not only prepares you for effective AI integration but also ensures your operations are secure and compliant.

Immediate actions vs. long-term strategies

Balancing immediate, achievable quick wins with the preparation for long-term objectives is essential, especially if you're rolling out gen AI across a company with thousands, or perhaps tens of thousands of employees. Don't let the size of your gen AI project paralyze you.

For example, if your team already manages permissions on Microsoft 365 solutions like SharePoint Online and Teams, or provides access to corporate resources through Entra ID (formerly Azure AD) user accounts, you have quick wins at your fingertips thanks to your team's familiarity and the immediate applicability of these tools.
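One such quick win is a periodic review of who can see what. The sketch below is purely illustrative: the site names, group names, and permission records are hypothetical stand-ins, not real SharePoint or Microsoft Graph output, and a real audit would pull this data from the Microsoft Graph API or SharePoint admin reports. It simply shows the review logic of flagging sites shared with overly broad groups:

```python
# Groups whose presence usually signals oversharing in Microsoft 365.
# (Hypothetical list for illustration; tune it to your own tenant.)
BROAD_GROUPS = {"Everyone", "Everyone except external users", "All Users"}

def find_overshared(sites):
    """Return (site, group) pairs where a broad group has access."""
    flagged = []
    for site in sites:
        for group in site["groups"]:
            if group in BROAD_GROUPS:
                flagged.append((site["name"], group))
    return flagged

# Hypothetical sample data standing in for a real permissions export.
sample_sites = [
    {"name": "Finance", "groups": ["Finance Team", "Everyone except external users"]},
    {"name": "Marketing", "groups": ["Marketing Team"]},
]

for name, group in find_overshared(sample_sites):
    print(f"Review access on '{name}': shared with '{group}'")
```

Running a check like this before a copilot rollout matters because AI assistants surface whatever content a user can already access, so oversharing that was previously invisible becomes very visible.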

Conversely, you might find yourself less acquainted with Microsoft's data security, privacy, and compliance solutions within the Microsoft Purview enterprise administration suite. These tools, scalable and comprehensive for data governance, enable enterprises to apply the same process and policy logic across multiple Microsoft and non-Microsoft apps simultaneously, without needing (much) code or command line scripting.

Although our productivity tools have been around for decades, Microsoft only introduced the data security, governance, and compliance tools in the Purview suite as a consolidated offering to the mass market in the last two or three years. With gen AI assistants set to interact with enterprise data at machine speeds, mastering Microsoft Purview has become more important than ever.

Each company faces unique challenges depending on where their data resides, leading to different levels of maturity and situations. Instead of following hard and fast rules, we at Microsoft recommend adopting a philosophy to rigorously test the strength of governance approaches. It's called 'Zero Trust,' which essentially means 'Never Trust [a computer trying to access your IT network], Always Verify [the computer and person using it is authorised], and Assume Breach [to contain the spread of damage to your IT, as if it has already happened].'
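The core of that philosophy can be reduced to a deny-by-default rule: grant access only when every signal checks out. The minimal sketch below is my own illustration, not a Microsoft implementation; the two boolean checks are stand-ins for real signals such as Entra ID authentication and device compliance state:

```python
# Minimal Zero Trust sketch: 'never trust, always verify' means access
# is denied unless BOTH the user and the device are positively verified.
def allow_access(user_verified: bool, device_compliant: bool) -> bool:
    """Deny by default; grant only when every signal checks out."""
    return user_verified and device_compliant

# A verified user on an unmanaged laptop is still refused ('assume breach').
assert allow_access(user_verified=True, device_compliant=False) is False
assert allow_access(user_verified=True, device_compliant=True) is True
```

Real deployments layer many more signals (location, sign-in risk, session policies), but the design choice is the same: the default answer is no, and each signal must argue otherwise.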

You can visualise the relevant components of a Zero Trust architecture for your enterprise productivity stack with the diagram below, which outlines a deployment plan starting with foundational elements and extending to file or document-level security across Microsoft 365 apps like Word, Teams, and Outlook, as well as other SaaS apps such as Google Drive and ChatGPT.

[Diagram: the Microsoft 365 Zero Trust deployment stack]

For a detailed guide on implementing this within your organisation, refer to the Zero Trust deployment plan with Microsoft 365.

Taking your gen AI deployment to infinity, and beyond

It’s hard to predict when the gen AI adoption curve will plateau. But riding the wave as it happens beats paddling to shore after it’s gone.

Running secure experiments with Copilot for Microsoft 365 and other gen AI tools strengthens your Zero Trust capabilities, making it quicker and easier over time to integrate data hygiene practices—like data access, protection, and lifecycle—into AI-driven business priorities.

[Diagram: applying protections and deploying Copilot in parallel]

Addressing these fundamentals allows you to tackle more complex questions about building your AI assistants, such as:

  • What kinds of prompts do I want to flag because they risk non-compliance, contravene ethics, or threaten privacy?

  • How do I want to escalate certain incidents for investigation where such prompts were used?

  • How do I balance these needs for corporate control with employee privacy, giving employees the transparency, freedom, and autonomy to experiment responsibly with their AI assistants?
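To make the first of those questions concrete, here is a toy sketch of what prompt-flagging rules could look like. The categories and keyword patterns are entirely made up for illustration; Microsoft Purview has its own policy engine and trainable classifiers for this, so treat the snippet as a thought experiment about rule design rather than a real detection mechanism:

```python
import re

# Hypothetical flagging rules: each category maps to a pattern of terms
# that might warrant review. Real policies would use classifiers and
# sensitive-information types, not simple keyword regexes.
FLAG_RULES = {
    "privacy": re.compile(r"\b(passport|social security|patient)\b", re.I),
    "compliance": re.compile(r"\b(insider|pre-release earnings)\b", re.I),
}

def flag_prompt(prompt: str):
    """Return the list of rule categories a prompt trips, if any."""
    return [name for name, pattern in FLAG_RULES.items() if pattern.search(prompt)]

print(flag_prompt("Summarise the patient records from last week"))
```

Even a toy like this surfaces the real policy questions: who reviews flagged prompts, how false positives are handled, and how much of the prompt text an investigator should ever see.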


As we navigate the rapidly evolving landscape of gen AI, the journey from concept to execution reveals both immense opportunities and significant challenges.

Embracing these advancements requires more than just enthusiasm for new technology; it demands a rigorous approach to data security, ethical considerations, and ongoing management practices.

By integrating robust data hygiene and Zero Trust strategies, organizations can not only harness the power of AI to boost productivity but also ensure that innovation is sustainable and secure. The future of AI in the workplace is not just about the tools we use but how thoughtfully we deploy them.

As we continue to push the boundaries of what AI can achieve, let us also commit to safeguarding the integrity and privacy of the digital ecosystems we build. This balanced approach will be key to realizing the full potential of AI across industries, ensuring that our technological advancements enhance, rather than compromise, our collective well-being.

The insights shared in this blog were sparked by a wealth of best practices put together by several folks in the Microsoft EMEA technical community, all about rolling out gen AI the smart way. A huge shoutout to my France-based colleagues Samuel Gaston-Raoul, Cloud Solution Architect, and Thierry Matusiak, Information Security Architect, for their killer internal material that I leaned on a lot for this blog. And, of course, a big thanks to my personal ChatGPT subscription and my enterprise Copilot for Microsoft 365 subscription – I couldn't have polished this up without their tireless help!
