Use Microsoft Purview to manage data security & compliance for Microsoft 365 Copilot & Microsoft 365 Copilot Chat

Microsoft 365 licensing guidance for security & compliance

Use the following sections to identify the Microsoft Purview capabilities that are supported for AI interactions with Microsoft 365 Copilot & Microsoft 365 Copilot Chat, and some get started recommendations for you to manage these AI interactions for security and compliance.

Capabilities supported

The following Microsoft Purview capabilities and solutions are supported for AI interactions with Microsoft 365 Copilot & Microsoft 365 Copilot Chat:

  • DSPM for AI
  • Auditing
  • Data classification
  • Sensitivity labels
  • Encryption without sensitivity labels
  • Data loss prevention
  • Insider Risk Management
  • Communication compliance
  • eDiscovery
  • Data Lifecycle Management
  • Compliance Manager

Data Security Posture Management for AI

Use Microsoft Purview Data Security Posture Management (DSPM) for AI as your front door to discover, secure, and apply compliance controls for AI usage across your enterprise. This solution builds on existing controls from Microsoft Purview information protection and compliance management, with easy-to-use graphical tools and reports that quickly give you insights into AI use within your organization. Personalized recommendations and one-click policies help you protect your data and comply with regulatory requirements.

For more information, see Learn about Data Security Posture Management (DSPM) for AI.

AI app-specific information:
  • Data risk assessments help you identify and fix issues that could result in the oversharing of data. From the recommendations:

    • Protect your data from potential oversharing risks for the default weekly data risk assessment.
    • Protect sensitive data references in Copilot and agent responses for a custom data risk assessment.
  • Recommendation: Get guided assistance to AI regulations, which uses control-mapping regulatory templates from Compliance Manager.

  • Additional guidance is available from the Overview > Microsoft 365 Copilot view.

  • One-click policies available:

    • Sensitivity labels and policies from the recommendation Protect your data with sensitivity labels.
    • DSPM for AI - Detect risky AI usage from the recommendation Detect risky interactions in AI apps.
    • DSPM for AI - Unethical behavior in AI apps from the recommendation Detect unethical behavior in AI.
    • DSPM for AI - Protect sensitive data from Copilot processing from the recommendation Protect items with sensitivity labels from Microsoft 365 Copilot and agent processing.
    • DSPM for AI - Detect sensitive info shared with AI via network from the recommendation Extend insights into sensitive data in AI app interactions.

Auditing and AI interactions

Microsoft Purview Audit solutions provide comprehensive tools for searching and managing audit records of activities performed across various Microsoft services by users and admins, and help organizations to effectively respond to security events, forensic investigations, internal investigations, and compliance obligations.

Like other activities, prompts and responses are captured in the unified audit log. Events include how and when users interact with the AI app, and can include in which Microsoft 365 service the activity took place, and references to the files stored in Microsoft 365 that were accessed during the interaction. If these files have a sensitivity label applied, that's also captured.

These events flow into activity explorer in Data Security Posture Management for AI, where the data from prompts and responses can be displayed. You can also use the Audit solution from the Microsoft Purview portal to search and find these auditing events.

For more information, see Audit logs for Copilot and AI activities.
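The flow described above can be sketched in code. The following Python snippet filters a set of unified audit log records for Copilot interaction events; the `CopilotInteraction` operation name matches the audit schema, but the sample field names (`AppHost`, `AccessedResources`, `SensitivityLabelId`) and values are illustrative, not a definitive schema reference:

```python
import json

# Sample records shaped like unified audit log entries (field names illustrative).
raw_records = """
[
  {"Operation": "CopilotInteraction", "UserId": "adele@contoso.com",
   "AppHost": "Word",
   "AccessedResources": [{"SiteUrl": "https://contoso.sharepoint.com",
                          "SensitivityLabelId": "lbl-1"}]},
  {"Operation": "FileAccessed", "UserId": "adele@contoso.com",
   "AppHost": "SharePoint", "AccessedResources": []}
]
"""

def copilot_events(records):
    """Keep only the audit records for Copilot interactions."""
    return [r for r in records if r["Operation"] == "CopilotInteraction"]

events = copilot_events(json.loads(raw_records))
for e in events:
    # Referenced files, and their sensitivity labels if one was applied.
    labels = [res.get("SensitivityLabelId") for res in e["AccessedResources"]]
    print(e["UserId"], e["AppHost"], labels)
```

In practice you would retrieve these records through an audit search or export rather than a literal JSON string; the filtering step is the same.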

Data classification and AI interactions

Microsoft Purview data classification provides a comprehensive framework for identifying and tagging sensitive data across various Microsoft services, including Office 365, Dynamics 365, and Azure. Classifying data is often the first step to ensure compliance with data protection regulations and safeguard against unauthorized access, alteration, or destruction. You can use built-in system classifications or create your own.

Sensitive information types and trainable classifiers can be used to find sensitive data in user prompts and responses when they use AI apps. The resulting information then surfaces in the data classification dashboard and activity explorer in Data Security Posture Management for AI.
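To illustrate how a sensitive information type works, the sketch below pairs a pattern match with a checksum validator, the way the built-in credit card number type combines a regex with a Luhn check to reduce false positives. This is a simplified model, not the actual detection engine:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to weed out digit runs that merely look like card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Pattern match plus checksum, mimicking how a sensitive information
    type pairs a regex with a validator function."""
    candidates = re.findall(r"\b(?:\d[ -]?){13,16}\b", text)
    hits = []
    for c in candidates:
        digits = re.sub(r"[ -]", "", c)
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

prompt = "Please reconcile card 4111 1111 1111 1111 against invoice 12345."
print(find_card_numbers(prompt))  # the test number passes the Luhn check
```

The same two-stage idea (cheap pattern, then validation) is why a random 16-digit string in a prompt usually doesn't trigger a match.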

Sensitivity labels and AI interactions

AI apps supported by Microsoft Purview use existing controls to ensure that data stored in your tenant is never returned to the user or used by a large language model (LLM) if the user doesn't have access to that data. When content has sensitivity labels from your organization applied, there's an extra layer of protection:

  • When a file is open in Word, Excel, PowerPoint, or similarly an email or calendar event is open in Outlook, the sensitivity of the data is displayed to users in the app with the label name and content markings (such as header or footer text) that have been configured for the label. Loop components and pages also support the same sensitivity labels.

  • When the sensitivity label applies encryption, users must have the EXTRACT usage right, as well as VIEW, for the AI apps to return the data.

  • This protection extends to data stored outside your Microsoft 365 tenant when it's open in an Office app (data in use). For example, local storage, network shares, and cloud storage.
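The usage-right requirement in the list above amounts to a simple gate: encrypted content is eligible for an AI response only when the user holds both rights. A minimal sketch (function name and rights representation are hypothetical):

```python
def can_return_to_copilot(user_rights):
    """Encrypted content is only eligible for an AI response when the
    user holds both the VIEW and EXTRACT usage rights."""
    return {"VIEW", "EXTRACT"} <= set(user_rights)

print(can_return_to_copilot({"VIEW", "EXTRACT", "EDIT"}))  # True
print(can_return_to_copilot({"VIEW"}))  # False: VIEW alone isn't enough
```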

Tip

If you haven't already, we recommend you enable sensitivity labels for SharePoint and OneDrive and also familiarize yourself with the file types and label configurations that these services can process. When sensitivity labels aren't enabled for these services, the encrypted files that Copilot and agents can access are limited to data in use from Office apps on Windows.

For instructions, see Enable sensitivity labels for Office files in SharePoint and OneDrive.

If you're not already using sensitivity labels, see Get started with sensitivity labels.

AI app-specific information:
  • Microsoft 365 Copilot Chat displays the sensitivity label for items listed in the response and citations. Using the sensitivity labels' priority number that's defined in the Microsoft Purview portal, the latest response in Copilot displays the highest priority sensitivity label from the data used for that Copilot chat.

    Although compliance admins define a sensitivity label's priority, a higher priority number usually denotes higher sensitivity of the content, with more restrictive permissions. As a result, Copilot responses are labeled with the most restrictive sensitivity label.

  • Copilot in Word and Copilot in PowerPoint support sensitivity label inheritance for newly created content. See the following section for more information.
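The label selection described above can be sketched as picking the highest-priority label among the sources used for the chat. The label names and priority numbers below are examples, not your tenant's actual configuration:

```python
# Priority numbers as defined in the Microsoft Purview portal (example values):
# a higher number usually denotes more sensitive, more restrictive content.
label_priority = {"General": 1, "Confidential": 5, "Highly Confidential": 9}

def response_label(source_labels):
    """Pick the highest-priority label among the items used in the chat."""
    labeled = [l for l in source_labels if l in label_priority]
    return max(labeled, key=label_priority.get, default=None)

print(response_label(["General", "Confidential"]))  # Confidential
```

The same highest-priority rule applies when multiple labeled files feed label inheritance for newly created content, as described in the next section.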

Sensitivity label inheritance

If you use Copilot in Word or Copilot in PowerPoint to create new content based on an item that has a sensitivity label applied, the sensitivity label from the source file is automatically inherited, with the label's protection settings.

For example, a user selects Draft with Copilot in Word and then Reference a file. Or a user selects Create presentation from file in PowerPoint, or Edit in Pages from Microsoft 365 Copilot Chat. The source content has the sensitivity label Confidential\Anyone (unrestricted) applied and that label is configured to apply a footer that displays "Confidential". The new content is automatically labeled Confidential\Anyone (unrestricted) with the same footer.

To see an example of this in action, watch the demo from the Ignite 2023 session, "Getting your enterprise ready for Microsoft 365 Copilot". The demo shows how the default sensitivity label of General is replaced with a Confidential label when a user drafts with Copilot and references a labeled file. The information bar under the ribbon informs the user that content created by Copilot resulted in the new label being automatically applied.

If multiple files are used to create new content, the sensitivity label with the highest priority is used for label inheritance.

As with all automatic labeling scenarios, the user can always override and replace an inherited label (or remove, if you're not using mandatory labeling).

Encryption without sensitivity labels and AI interactions

Even if a sensitivity label isn't applied to content, services and products might use the encryption capabilities from the Azure Rights Management service. As a result, AI apps can still check for the VIEW and EXTRACT usage rights before returning data and links to a user, but there's no automatic inheritance of protection for new items.

Tip

You get the best user experience when you always use sensitivity labels to protect your data and apply encryption with a label.

Examples of products and services that can use the encryption capabilities from the Azure Rights Management service without sensitivity labels:

  • Microsoft Purview Message Encryption
  • Microsoft Information Rights Management (IRM)
  • Microsoft Rights Management connector
  • Microsoft Rights Management SDK

For other encryption methods that don't use the Azure Rights Management service:

  • S/MIME protected emails won't be returned by Copilot, and Copilot isn't available in Outlook when an S/MIME protected email is open.

  • Password-protected documents can't be accessed by AI apps unless they're already opened by the user in the same app (data in use). Passwords aren't inherited by a destination item.

As with other Microsoft 365 services, such as eDiscovery and search, items encrypted with Microsoft Purview Customer Key or your own root key (BYOK) are supported and eligible to be returned by Copilot.

Data loss prevention and AI interactions

Microsoft Purview Data Loss Prevention (DLP) helps you identify sensitive items across Microsoft 365 services and endpoints, monitor them, and helps protect against leakage of those items. It uses deep content inspection and contextual analysis to identify sensitive items and it enforces policies to protect sensitive data such as financial records, health information, or intellectual property.

Windows computers that are onboarded to Microsoft Purview can be configured for Endpoint data loss prevention (DLP) policies that warn or block users from sharing sensitive information with third-party generative AI sites that are accessed via a browser. For example, a user is prevented from pasting credit card numbers into ChatGPT, or they see a warning that they can override. For more information about the supported DLP actions and which platforms support them, see the first two rows in the table from Endpoint activities you can monitor and take action on.

Additionally, a DLP policy scoped to an AI location can restrict AI apps from processing sensitive content. For example, a DLP policy can restrict Microsoft 365 Copilot from summarizing files based on sensitivity labels such as "Highly Confidential". After you turn on this policy, Microsoft 365 Copilot and agents won't summarize files labeled "Highly Confidential" but can reference them with a link so the user can open and view the content in Word. For more information, including which AI apps support this DLP configuration, see Learn about the Microsoft 365 Copilot policy location.
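The effect of such a policy can be sketched as a decision over the item's sensitivity label. The label name and the two behavior values below are illustrative of the policy outcome, not actual API values:

```python
# Labels scoped in the hypothetical DLP policy for the AI location.
RESTRICTED_LABELS = {"Highly Confidential"}

def copilot_action(item_label):
    """Decide whether Copilot may summarize an item or must only link to it."""
    if item_label in RESTRICTED_LABELS:
        return "reference-with-link"  # user can still open the file in Word
    return "summarize"

print(copilot_action("Highly Confidential"))  # reference-with-link
print(copilot_action("General"))              # summarize
```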

AI app-specific information:
  • The following Endpoint DLP capabilities are supported for Microsoft 365 Copilot Chat only:
    • Block paste of sensitive content
    • Block files based on a specified sensitivity label

Insider Risk Management and AI interactions

Microsoft Purview Insider Risk Management helps you detect, investigate, and mitigate internal risks such as IP theft, data leakage, and security violations. It leverages machine learning models and various signals from Microsoft 365 and third-party indicators to identify potential malicious or inadvertent insider activities. The solution includes privacy controls like pseudonymization and role-based access, ensuring user-level privacy while enabling risk analysts to take appropriate actions.

Use the Risky AI usage policy template to detect risky usage that includes prompt injection attacks and accessing protected materials. Insights from these signals are integrated into Microsoft Defender XDR to provide a comprehensive view of AI-related risks.

Communication compliance and AI interactions

Microsoft Purview Communication Compliance provides tools to help you detect and manage regulatory compliance and business conduct violations across various communication channels, which include user prompts and responses for AI apps. It's designed with privacy by default, pseudonymizing usernames and incorporating role-based access controls. The solution helps identify and remediate inappropriate communications, such as sharing sensitive information, harassment, threats, and adult content.

To learn more about using communication compliance policies for AI apps, see Configure a communication compliance policy to detect for generative AI interactions.

eDiscovery and AI interactions

Microsoft Purview eDiscovery lets you identify and deliver electronic information that can be used as evidence in legal cases. The eDiscovery tools in Microsoft Purview support searching for content in Exchange Online, OneDrive for Business, SharePoint Online, Microsoft Teams, Microsoft 365 Groups, and Viva Engage teams. You can then place the information on hold to prevent deletion, and export it.

Because user prompts and responses for AI apps are stored in a user's mailbox, you can create a case and use search when a user's mailbox is selected as the source for a search query. For example, select and retrieve this data from the source mailbox by selecting from the query builder Add condition > Type > Contains any of > Edit > Copilot activity. This query condition includes all Copilot and other AI application activity.

After the search is refined, you can export the results or add to a review set. You can review and export information directly from the review set.

To learn more about identifying and deleting user AI interaction data, see Search for and delete Copilot data in eDiscovery.

Data Lifecycle Management and AI interactions

Microsoft Purview Data Lifecycle Management provides tools and capabilities to manage the lifecycle of organizational data by retaining necessary content and deleting unnecessary content. These tools ensure compliance with business, legal, and regulatory requirements.

Use retention policies to automatically retain or delete user prompts and responses for AI apps. For detailed information about how this retention works, see Learn about retention for Copilot & AI apps.

As with all retention policies and holds, if more than one policy for the same location applies to a user, the principles of retention resolve any conflicts. For example, the data is retained for the longest duration of all the applied retention policies or eDiscovery holds.
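The conflict-resolution rule above reduces to taking the longest applied duration. A minimal sketch with example policy names and durations:

```python
from datetime import timedelta

# Durations from the retention policies and eDiscovery holds that apply
# to the same location (example values).
applied_policies = {
    "Copilot interactions - 1 year": timedelta(days=365),
    "Legal hold - 3 years": timedelta(days=3 * 365),
    "Org-wide - 90 days": timedelta(days=90),
}

def effective_retention(policies):
    """Principles of retention: data is kept for the longest applied duration."""
    return max(policies.values())

print(effective_retention(applied_policies).days)  # 1095
```

The real principles of retention also order retention against deletion and explicit holds; this sketch covers only the longest-duration rule quoted above.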

AI app-specific information:
  • For retention policies, select the option for Microsoft Copilot Experiences.

  • Retention labels can automatically retain files referenced in Microsoft 365 Copilot when you select the option for cloud attachments with an auto-apply retention label policy: Apply label to cloud attachments and links shared in Exchange, Teams, Viva Engage, and Copilot. As with all retained cloud attachments, the file version at the time it's referenced is retained.

    For detailed information about how this retention works, see How retention works with cloud attachments.

Compliance Manager and AI interactions

Microsoft Purview Compliance Manager is a solution that helps you automatically assess and manage compliance across your multicloud environment. Compliance Manager can help you throughout your compliance journey, from taking inventory of your data protection risks to managing the complexities of implementing controls, staying current with regulations and certifications, and reporting to auditors.

To help you keep compliant with AI regulations, Compliance Manager provides regulatory templates to help you assess, implement, and strengthen your compliance requirements for all generative AI apps. For example, monitoring AI interactions and preventing data loss in AI applications. For more information, see Assessments for AI regulations.

Get started recommendations

Use the following steps to get started with managing data security & compliance for AI interactions from Microsoft 365 Copilot & Microsoft 365 Copilot Chat.

Because Data Security Posture Management for AI is your front door for securing and managing AI interactions, most of the following instructions use that solution:

Confirm that auditing is turned on

From DSPM for AI > Overview > All AI apps view > Get Started section, look to see if auditing is on for your tenant. If not, select Activate Microsoft Purview Audit.

Use the Microsoft 365 Copilot view to discover, protect, and apply compliance controls

Change the view from All AI apps to Microsoft 365 Copilot and work your way through the sections:

  • Assess and prevent oversharing of sensitive data
  • Secure your data in Microsoft 365 Copilot
  • Discover Microsoft 365 Copilot activity

These sections include recommendations that are specific to Microsoft 365 Copilot. Wait at least a day for data to display for the reports on this page.

Tip

Secure your data in Microsoft 365 Copilot includes the following recommendations with manual configuration instructions for their respective Microsoft Purview solutions: Create sensitivity labels for your organization and Protect items with sensitivity labels from Copilot processing.

If you prefer, you can use DSPM for AI one-click policies for these recommendations, as described in the next section.

Use one-click policies to increase coverage

Select Recommendations from the navigation and use one-click policies to automatically create policies that help you discover, protect, and apply compliance controls. Specific to Microsoft 365 Copilot:

  • Protect your data with sensitivity labels
  • Detect risky interactions in AI apps
  • Detect unethical behavior in AI
  • Protect items with sensitivity labels from Microsoft 365 Copilot and agent processing

View data from your policies

  1. Wait at least a day for data, and then navigate to the Reports page to view the results of your policies. Select Copilot experiences & agents and view information such as:

    • Total interactions over time (Microsoft Copilot and agents)
    • Sensitive interactions per AI app
    • Top unethical AI interactions
    • Top sensitivity labels referenced in Microsoft 365 Copilot and agents
    • Insider risk severity
    • Insider risk severity per AI app
    • Potential risky AI usage
  2. Select View details for each of the report graphs to view detailed activities in the activity explorer.

    From the filters, select the AI app category of Copilot experiences & agents, and then use the other filters if you need to further refine the displayed data. Then drill down to each activity to view details that include the prompts and responses, when you're a member of the Microsoft Purview Content Explorer Content Viewer role group. For more information about this requirement, see Permissions for Data Security Posture Management for AI.

Apply additional compliance controls to Microsoft 365 Copilot interactions

  1. If you need to retain the exact version of files referenced in Microsoft 365 Copilot interactions:

    In the Microsoft Purview portal, navigate to Data Lifecycle Management > Retention labels > and either locate or create a retention label with the required retention period. Then, navigate to Label policies to auto-apply that label, and select the option Apply label to cloud attachments and links shared in Exchange, Teams, Viva Engage, and Copilot. For more information, see Automatically apply a retention label to retain or delete content.

  2. If you need to preserve, collect, analyze, review, or export Microsoft 365 Copilot interactions:

    In the Microsoft Purview portal, navigate to eDiscovery > Cases > Create case. In the case, create a search and use the ItemClass property and the IPM.SkypeTeams.Message.Copilot.* value to search for these interactions in your organization.
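The ItemClass value above is a wildcard over the mailbox item class. The sketch below shows how such a pattern selects items; the `IPM.SkypeTeams.Message.Copilot.*` family comes from the step above, while the other item class values and subjects are illustrative sample data:

```python
from fnmatch import fnmatch

# Example mailbox items with their ItemClass property (sample data).
items = [
    {"Subject": "Copilot chat", "ItemClass": "IPM.SkypeTeams.Message.Copilot.Word"},
    {"Subject": "Team thread", "ItemClass": "IPM.SkypeTeams.Message"},
    {"Subject": "Plain email", "ItemClass": "IPM.Note"},
]

def match_item_class(items, pattern):
    """Keep only the items whose ItemClass matches the wildcard pattern."""
    return [i for i in items if fnmatch(i["ItemClass"], pattern)]

hits = match_item_class(items, "IPM.SkypeTeams.Message.Copilot.*")
print([i["Subject"] for i in hits])  # only the Copilot interaction matches
```

In eDiscovery itself the pattern is evaluated by the search service against the ItemClass property; this snippet only models the wildcard semantics.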

Routinely review the reports and data risk assessments in DSPM for AI to determine if you need to make changes, and use activity explorer and events for deeper analysis of how users are interacting with Microsoft 365 Copilot & Microsoft 365 Copilot Chat.

Other documentation to help you secure and manage generative AI apps

For more detailed information, see Considerations to manage Microsoft 365 Copilot for security and compliance.

Microsoft 365 Copilot documentation: