Learn "Manage incident response" (SC-200) with Interactive Flashcards

Master key concepts in Manage incident response through our interactive flashcard system. Click on each card to reveal detailed explanations and enhance your understanding.

Investigate and remediate threats with Defender for Office 365

Microsoft Defender for Office 365 provides comprehensive threat investigation and remediation capabilities for security operations analysts. This solution protects organizations against malicious threats posed by email messages, links, and collaboration tools.

When investigating threats, analysts use Threat Explorer (also known as Explorer) to analyze threats in real-time. This powerful tool allows you to view malware detected in email, identify phishing attempts, and see the email timeline for suspicious messages. You can filter data by sender, recipient, subject, attachment, and delivery action to narrow down potential threats.

The investigation process typically involves several steps. First, analysts review alerts generated by Defender for Office 365 in the Microsoft 365 Defender portal. These alerts highlight suspicious activities such as malicious attachments, compromised users, or suspicious sending patterns. Next, analysts can use Automated Investigation and Response (AIR) capabilities, which automatically investigate alerts and provide remediation actions.

Remediation actions in Defender for Office 365 include soft delete of email messages, blocking URLs, turning off external mail forwarding, and removing delegates. The Action Center displays all pending and completed remediation actions, allowing analysts to approve, reject, or undo actions as needed.
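The approve/reject/undo lifecycle in the Action Center can be sketched as a tiny state machine. This is an illustrative model of the workflow, not the Defender API:

```python
# Toy model of Action Center flow: a remediation action stays pending
# until an analyst approves or rejects it, and some approved actions
# (such as soft delete of email) can later be undone.
class RemediationAction:
    def __init__(self, name):
        self.name, self.state = name, "Pending"

    def approve(self):
        assert self.state == "Pending"
        self.state = "Completed"

    def reject(self):
        assert self.state == "Pending"
        self.state = "Rejected"

    def undo(self):
        assert self.state == "Completed"
        self.state = "Undone"

act = RemediationAction("Soft delete email")
act.approve()
act.undo()
print(act.state)  # -> Undone
```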

Threat Trackers provide another valuable investigation tool, offering widgets and views about cybersecurity issues that might affect your organization. Campaign Views help identify coordinated phishing and malware campaigns targeting your organization.

For email-specific threats, analysts can perform message traces to follow the path of messages and determine why specific emails were delivered, quarantined, or blocked. Real-time detections provide visibility into attacks as they happen, enabling faster response times.

Integration with Microsoft Sentinel enhances these capabilities by correlating Defender for Office 365 data with other security signals across the enterprise, providing a unified view of threats and enabling more effective incident response across the entire organization.

Investigate ransomware and BEC incidents from attack disruption

Attack disruption is a powerful Microsoft Defender XDR capability that automatically contains ongoing attacks, limiting their impact while security analysts investigate. When ransomware or Business Email Compromise (BEC) incidents trigger attack disruption, specific investigation procedures should be followed.

For ransomware incidents, analysts should first review the attack disruption actions taken, such as device isolation or user account suspension. Examine the attack story timeline to understand the initial access vector, whether through phishing, exploited vulnerabilities, or compromised credentials. Identify all affected assets by reviewing the incident graph showing lateral movement patterns. Analyze which files were encrypted and check for data exfiltration indicators. Review the automatic evidence collection including process trees, file hashes, and network connections. Validate that containment actions are sufficient and determine if additional manual remediation is required.

For BEC incidents, attack disruption typically suspends compromised user accounts to prevent further malicious email activity. Investigators should examine the email timeline to identify phishing messages that led to credential theft. Review sign-in logs for suspicious authentication patterns, including unusual locations or devices. Check for inbox rules created by attackers to hide their activities. Analyze any financial fraud attempts, such as wire transfer requests or invoice manipulation. Examine OAuth application consents that may have been granted during the compromise.
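The inbox-rule check above can be sketched in a few lines. This is an illustrative model, not Defender's implementation; the rule field names and the `contoso.com` tenant domain are hypothetical:

```python
# Illustrative sketch: flag inbox rules showing tradecraft commonly seen
# after a BEC compromise -- forwarding mail externally, or hiding
# messages by deleting them or filing them into an obscure folder.
INTERNAL_DOMAIN = "contoso.com"  # hypothetical tenant domain

def suspicious_inbox_rules(rules):
    """Return the names of inbox rules that warrant analyst review."""
    flagged = []
    for rule in rules:
        forwards_external = any(
            not addr.lower().endswith("@" + INTERNAL_DOMAIN)
            for addr in rule.get("forward_to", [])
        )
        hides_mail = rule.get("delete_message") or rule.get(
            "move_to_folder", ""
        ).lower() in {"rss subscriptions", "conversation history", "archive"}
        if forwards_external or hides_mail:
            flagged.append(rule["name"])
    return flagged

rules = [
    {"name": "Newsletter filing", "move_to_folder": "Newsletters"},
    {"name": "Invoice sweep", "forward_to": ["drop@attacker.example"],
     "delete_message": True},
]
print(suspicious_inbox_rules(rules))  # -> ['Invoice sweep']
```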

In both scenarios, analysts should document the attack chain thoroughly, correlate alerts across endpoints, identities, and cloud apps, and assess the full scope of compromise. After investigation, review whether automatic containment actions should be maintained or modified. Coordinate with stakeholders for business continuity decisions. Finally, generate detailed incident reports documenting findings, timeline, affected assets, and remediation steps taken. Understanding the attack disruption feature allows analysts to respond more effectively while automated protections buy valuable investigation time.

Investigate compromised entities from Purview DLP policies

When investigating compromised entities from Microsoft Purview Data Loss Prevention (DLP) policies within Microsoft Sentinel, security analysts must follow a systematic approach to identify and remediate potential data breaches. Purview DLP policies help organizations protect sensitive information by detecting and preventing unauthorized sharing or access to confidential data. When a DLP policy triggers an alert, it indicates that sensitive data may have been exposed or mishandled, requiring thorough investigation.

The investigation process begins with accessing the incident queue in Microsoft Sentinel, where DLP-related alerts are aggregated. Analysts should examine the entity details, including user accounts, devices, IP addresses, and files involved in the policy violation. Microsoft Sentinel provides entity pages that consolidate all relevant information about a potentially compromised entity, showing timeline activities, related alerts, and behavioral patterns.

Key investigation steps include reviewing the specific DLP policy that was triggered, understanding what type of sensitive information was detected (such as credit card numbers, social security numbers, or confidential documents), and determining the scope of the potential breach. Analysts should examine user activities before and after the incident, looking for unusual patterns like bulk downloads, external sharing attempts, or access from unfamiliar locations.

Correlation with other security signals is essential. Analysts should check if the same user or device has triggered other security alerts, which might indicate a broader compromise. Using KQL queries in Microsoft Sentinel allows for deeper analysis of related events and helps establish a complete picture of the incident.

Remediation actions may include revoking user access, blocking external sharing, initiating password resets, or escalating to the legal and compliance teams.
Documentation of findings and actions taken is crucial for compliance purposes and future reference. Integration between Purview DLP and Microsoft Sentinel enables streamlined workflows for comprehensive incident response.
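To make the "type of sensitive information detected" step concrete, here is a toy sketch of the kind of classifier behind a DLP credit-card detector: a pattern match plus a Luhn checksum to cut false positives. Purview's real sensitive information types also use supporting keywords, confidence levels, and proximity rules:

```python
import re

# Toy DLP classifier sketch: find candidate card numbers by pattern,
# then validate with the Luhn checksum.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("Order ref 1234, card 4111 1111 1111 1111"))
# -> ['4111111111111111']
```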

Investigate threats from Purview insider risk policies

Investigating threats from Microsoft Purview insider risk policies is a critical component of security operations that helps organizations detect and respond to potential internal threats. Purview insider risk management provides visibility into risky activities performed by users within the organization, such as data theft, policy violations, or suspicious behavior patterns.

When working with insider risk policies in Microsoft Purview, security analysts receive alerts generated based on predefined risk indicators. These indicators can include unusual file downloads, attempts to exfiltrate sensitive data, accessing confidential information outside normal patterns, or communication anomalies that suggest malicious intent.

To investigate these threats effectively, analysts should follow a structured approach. First, review the alert details in the Microsoft Purview compliance portal, which provides context about the triggering activity, the user involved, and the risk severity level. The platform aggregates multiple signals to provide a comprehensive view of user behavior over time.

Second, examine the activity timeline to understand the sequence of events leading to the alert. This helps determine whether the behavior represents a genuine threat or a false positive. Analysts can correlate activities across different data sources, including email, file sharing, and endpoint actions.

Third, leverage the case management capabilities within Purview to escalate confirmed incidents for further investigation. Cases allow collaboration between security teams, HR, and legal departments when appropriate action is required.
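The timeline-correlation step above can be illustrated with a sequence-style risk indicator: the same user downloading a file, copying it to removable media, and then deleting it, in that order, is riskier than any one event alone. A hedged sketch with hypothetical event names:

```python
# Illustrative sequence detector: flag users whose activity stream
# contains the download -> USB copy -> delete pattern in order.
SEQUENCE = ["FileDownloaded", "FileCopiedToRemovableMedia", "FileDeleted"]

def completed_sequences(events):
    """Return the users whose events contain SEQUENCE in order."""
    progress = {}
    risky = set()
    for e in sorted(events, key=lambda e: e["time"]):
        user = e["user"]
        step = progress.get(user, 0)
        if step < len(SEQUENCE) and e["action"] == SEQUENCE[step]:
            progress[user] = step + 1
            if progress[user] == len(SEQUENCE):
                risky.add(user)
    return risky

events = [
    {"user": "alice", "action": "FileDownloaded", "time": 1},
    {"user": "bob", "action": "FileDeleted", "time": 2},
    {"user": "alice", "action": "FileCopiedToRemovableMedia", "time": 3},
    {"user": "alice", "action": "FileDeleted", "time": 4},
]
print(completed_sequences(events))  # -> {'alice'}
```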

Integration with Microsoft Sentinel enhances investigation capabilities by allowing analysts to correlate insider risk signals with other security data sources. This unified approach enables comprehensive threat hunting and incident response across both internal and external threat vectors.

Key best practices include maintaining user privacy by limiting access to sensitive investigation data, documenting all investigative steps for compliance purposes, and establishing clear escalation procedures based on risk severity levels. Regular review of policy configurations ensures detection mechanisms remain aligned with evolving organizational risks.

Investigate alerts from Defender for Cloud workload protections

Microsoft Defender for Cloud provides workload protection capabilities that generate security alerts when potential threats are detected across your cloud resources. As a Security Operations Analyst, investigating these alerts is crucial for maintaining a robust security posture.

When an alert is triggered, it appears in the Defender for Cloud dashboard under the Security Alerts section. Each alert contains essential information including severity level (High, Medium, Low, or Informational), the affected resource, attack tactics mapped to the MITRE ATT&CK framework, and a detailed description of the detected activity.

To investigate alerts effectively, start by reviewing the alert details page. This page shows the full attack story, including related entities such as user accounts, IP addresses, processes, and files involved in the suspicious activity. The timeline view helps you understand the sequence of events leading to the alert.

Defender for Cloud correlates multiple alerts into security incidents when they appear to be part of the same attack campaign. This correlation reduces alert fatigue and provides a comprehensive view of the threat. You can examine the incident graph to visualize relationships between affected resources and understand the attack scope.

For deeper investigation, you can take several actions. First, examine the raw logs and events associated with the alert. Second, check the recommendations provided by Defender for Cloud for remediation steps. Third, use the Take Action tab to suppress similar alerts, trigger automated responses through Logic Apps, or create firewall rules.

Integration with Microsoft Sentinel enhances investigation capabilities by allowing you to create incidents, run playbooks, and leverage threat intelligence. You can also export alerts to your SIEM solution for centralized monitoring.

After investigation, mark alerts as resolved with appropriate classification (True Positive, Benign True Positive, or False Positive) to improve future detection accuracy and maintain accurate security metrics for your organization.

Investigate security risks from Defender for Cloud Apps

Microsoft Defender for Cloud Apps is a Cloud Access Security Broker (CASB) that provides comprehensive visibility, control over data travel, and sophisticated analytics to identify and combat cyber threats across all your cloud services. As a Security Operations Analyst, investigating security risks from Defender for Cloud Apps is crucial for effective incident response.

When investigating security risks, you begin by accessing the Defender for Cloud Apps portal through the Microsoft 365 Defender console. The dashboard presents alerts, discovered apps, and activity logs that require attention. Alerts are categorized by severity levels including high, medium, and low priority, helping analysts prioritize their investigation efforts.

The investigation process involves reviewing the activity log, which captures user activities, file operations, and administrative actions across connected cloud applications. You can filter activities by user, IP address, location, device type, and specific applications to identify anomalous behavior patterns.

Defender for Cloud Apps uses built-in anomaly detection policies that leverage User and Entity Behavior Analytics (UEBA) to identify unusual patterns. These include impossible travel detections, activity from infrequent countries, suspicious inbox forwarding rules, and mass file downloads.
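The impossible-travel idea reduces to a speed calculation between consecutive sign-ins. Below is a minimal sketch of that reasoning only; real UEBA models also weigh VPN egress points, familiar devices, and per-user history, and the coordinates here are hypothetical sample data:

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(signin_a, signin_b, max_kmh=1000):
    """True if the implied speed between two sign-ins exceeds max_kmh."""
    dist = haversine_km(signin_a["lat"], signin_a["lon"],
                        signin_b["lat"], signin_b["lon"])
    hours = abs((signin_b["time"] - signin_a["time"]).total_seconds()) / 3600
    return hours > 0 and dist / hours > max_kmh

a = {"lat": 47.61, "lon": -122.33,  # Seattle
     "time": datetime(2024, 5, 1, 9, 0)}
b = {"lat": 51.51, "lon": -0.13,    # London
     "time": datetime(2024, 5, 1, 11, 0)}
print(impossible_travel(a, b))  # -> True (~7,700 km in 2 hours)
```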

When a potential risk is identified, analysts can drill down into specific alerts to view contextual information including the affected users, associated activities, and related alerts. The investigation graph helps visualize connections between entities involved in the incident.

For remediation, Defender for Cloud Apps offers governance actions such as suspending users, requiring password resets, revoking OAuth app permissions, or quarantining files. These actions can be automated through policies or executed manually during investigation.

Integration with Microsoft Sentinel enhances investigation capabilities by correlating Defender for Cloud Apps data with other security signals, enabling comprehensive threat hunting and incident response across your entire environment.

Investigate compromised identities from Microsoft Entra ID

Investigating compromised identities from Microsoft Entra ID is a critical skill for Security Operations Analysts. When an identity compromise is suspected, analysts must follow a systematic approach to assess the scope and impact of the breach.

First, access the Microsoft Entra admin center and navigate to the Security section. Review sign-in logs to identify anomalous authentication patterns, such as logins from unusual locations, unfamiliar devices, or impossible travel scenarios. The Identity Protection dashboard provides risk detections that flag suspicious activities including password spray attacks, credential stuffing, and leaked credentials.

Examine the user's recent activity by reviewing audit logs, which track changes to user accounts, group memberships, and application assignments. Look for unauthorized modifications that could indicate an attacker has escalated privileges or established persistence. Check if new authentication methods were added, such as additional MFA devices or app passwords.

Correlate findings with Microsoft Sentinel by querying the IdentityInfo and SigninLogs tables. Use KQL queries to search for patterns across multiple users that might reveal a broader attack campaign. The UEBA (User and Entity Behavior Analytics) feature helps identify deviations from normal user behavior.
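The cross-user pattern such a hunt looks for, password spray for example, boils down to grouping failed sign-ins by source IP and counting distinct targeted accounts. A toy sketch of that logic with hypothetical field names (a real hunt would run as a KQL query over SigninLogs):

```python
from collections import defaultdict

# Illustrative password-spray signature: one source IP generating
# failed sign-ins against many distinct accounts.
def spray_candidates(events, min_users=3):
    """Map source IPs to the distinct users they failed against,
    keeping only IPs that touched at least min_users accounts."""
    users_by_ip = defaultdict(set)
    for e in events:
        if e["result"] == "failure":
            users_by_ip[e["ip"]].add(e["user"])
    return {ip: sorted(users) for ip, users in users_by_ip.items()
            if len(users) >= min_users}

events = [
    {"ip": "203.0.113.7", "user": u, "result": "failure"}
    for u in ("alice", "bob", "carol", "dave")
] + [{"ip": "198.51.100.2", "user": "alice", "result": "failure"}]
print(spray_candidates(events))
# -> {'203.0.113.7': ['alice', 'bob', 'carol', 'dave']}
```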

Investigate connected applications and OAuth consent grants, as attackers often abuse these to maintain access even after password resets. Review the enterprise applications section for suspicious third-party app permissions.

Document all findings in the incident timeline and determine the initial compromise vector. Common entry points include phishing attacks, token theft, or legacy authentication exploitation.

Remediation steps typically include resetting credentials, revoking active sessions, reviewing and removing suspicious MFA methods, blocking identified malicious IPs, and enforcing conditional access policies. Consider enabling enhanced security features like continuous access evaluation and requiring phishing-resistant authentication methods for sensitive accounts to prevent future compromises.

Investigate security alerts from Defender for Identity

Microsoft Defender for Identity is a cloud-based security solution that leverages on-premises Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions. As a Security Operations Analyst, investigating security alerts from Defender for Identity is crucial for protecting organizational resources.

When investigating alerts, analysts should first access the Microsoft 365 Defender portal where Defender for Identity alerts are consolidated. Each alert contains essential information including the alert title, severity level, affected entities, timeline of events, and evidence collected during detection.

The investigation process begins by reviewing the alert details page, which provides context about the suspicious activity. Analysts should examine the affected user accounts, source computers, and target resources involved in the potential threat. The timeline view shows the sequence of events, helping analysts understand the attack progression.

Key alert categories include reconnaissance activities (such as account enumeration and network mapping), compromised credential attacks (like brute force attempts and pass-the-hash attacks), lateral movement detection, and domain dominance activities. Each category requires specific investigation approaches.

Analysts should correlate Defender for Identity alerts with other security signals from Microsoft Defender for Endpoint, Microsoft Defender for Cloud Apps, and Azure Active Directory Identity Protection. This correlation provides a comprehensive view of potential attack chains.

During investigation, analysts can use the entity profile pages to view historical behavior patterns and determine if activities are anomalous. The learning period data helps distinguish between normal user behavior and genuinely suspicious actions.

After completing the investigation, analysts should classify the alert appropriately as true positive, benign true positive, or false positive. For confirmed threats, immediate remediation actions include resetting compromised passwords, disabling affected accounts, and isolating compromised devices. Documenting findings and updating detection rules based on investigation outcomes improves future security posture.

Investigate device timelines in Defender for Endpoint

Device timelines in Microsoft Defender for Endpoint provide a comprehensive chronological view of events and activities occurring on a specific endpoint. This powerful investigative feature enables security analysts to reconstruct attack chains and understand the sequence of malicious activities during incident response.

To access the device timeline, navigate to the Devices page in the Microsoft 365 Defender portal, select the relevant device, and click on the Timeline tab. The timeline displays events spanning up to six months, presenting data in reverse chronological order with the most recent events appearing first.

The timeline captures various event types including process executions, network connections, file modifications, registry changes, login events, and security alerts. Each entry contains detailed information such as timestamps, process trees, command-line arguments, file hashes, and network destinations. Analysts can filter events by time range, event categories, or specific search terms to focus their investigation.

When investigating an incident, analysts should correlate timeline events with triggered alerts to establish causality. The process tree visualization helps identify parent-child relationships between processes, revealing how malicious code propagated through the system. Network events show communication attempts to external servers, potentially indicating command-and-control activity or data exfiltration.

Key investigation techniques include identifying the initial compromise point by tracing events backward from known malicious activity, examining lateral movement indicators through authentication events, and reviewing persistence mechanisms through registry or scheduled task modifications.
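Tracing backward through parent-child process relationships can be sketched as a simple walk up the tree. The events below are hypothetical illustrations; a real timeline carries far richer detail (hashes, command lines, signers):

```python
# Walk a suspicious process back through its ancestors to locate the
# likely initial compromise point.
def trace_to_root(events, pid):
    """Return the chain of process names from the root down to pid."""
    by_pid = {e["pid"]: e for e in events}
    chain = []
    while pid in by_pid:
        e = by_pid[pid]
        chain.append(e["name"])
        pid = e.get("ppid")
    return list(reversed(chain))

events = [
    {"pid": 100, "name": "outlook.exe"},
    {"pid": 200, "ppid": 100, "name": "winword.exe"},     # opened attachment
    {"pid": 300, "ppid": 200, "name": "powershell.exe"},  # macro payload
]
print(trace_to_root(events, 300))
# -> ['outlook.exe', 'winword.exe', 'powershell.exe']
```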

Analysts can export timeline data for offline analysis or integration with other security tools. The timeline also supports flagging suspicious events for further review and adding notes to document findings during the investigation.

Effective use of device timelines accelerates incident response by providing contextual evidence needed to scope the breach, identify affected assets, and develop appropriate remediation strategies. This capability is essential for thorough forensic analysis within the Defender for Endpoint ecosystem.

Perform live response and collect investigation packages

Live response is a powerful feature in Microsoft Defender for Endpoint that gives security analysts immediate access to devices through a remote shell connection, enabling real-time investigative work during incident response scenarios. When performing live response, analysts can execute commands on potentially compromised endpoints to gather forensic evidence, remediate threats, and collect investigation packages.

To initiate live response, navigate to the device page in the Microsoft 365 Defender portal and select 'Initiate Live Response Session.' Once connected, you gain access to a command-line interface where you can run various commands. Basic commands include 'dir' for directory listings, 'cd' for navigation, and 'findfile' to locate specific files. Advanced commands allow file uploads, script execution, and running forensic tools.

Investigation packages are comprehensive collections of forensic data from endpoints. To collect one, use the 'Collect investigation package' action on the device page, or trigger the equivalent machine action through the Defender for Endpoint API. The package typically contains autoruns data, installed programs, network configurations, prefetch files, scheduled tasks, security event logs, services information, SMB sessions, system information, temp directories, users and groups, and Windows Firewall data.

Once collected, the investigation package becomes available for download from the Action Center. Analysts can then analyze this data offline using forensic tools to identify indicators of compromise, understand attack patterns, and determine the scope of an incident.

Best practices include documenting all actions taken during live response sessions, following your organization's chain-of-custody procedures, and ensuring proper authorization before accessing endpoints. Live response sessions are logged for audit purposes, maintaining accountability throughout the investigation process. This capability significantly reduces response time by eliminating the need for physical access to affected devices.

Perform evidence and entity investigation

Evidence and entity investigation is a critical component of incident response in Microsoft security operations. This process involves systematically analyzing artifacts, indicators of compromise (IOCs), and related entities to understand the full scope of a security incident.

In Microsoft Sentinel and Microsoft 365 Defender, security analysts perform evidence investigation by examining alerts, reviewing associated data, and correlating information across multiple sources. The investigation typically begins with an initial alert or incident that triggers further analysis.

Key aspects of evidence investigation include:

1. **Entity Analysis**: Entities are objects like user accounts, IP addresses, hosts, files, and URLs that appear in security data. Microsoft Sentinel provides entity pages that consolidate all relevant information about a specific entity, showing its activity timeline, related alerts, and behavioral patterns.

2. **Investigation Graph**: Microsoft Sentinel offers an investigation graph that visually maps relationships between entities and alerts. Analysts can expand nodes to discover connected entities and trace attack paths through the environment.

3. **Timeline Analysis**: Reviewing the chronological sequence of events helps analysts understand how an attack progressed and identify the initial entry point.

4. **Bookmarks**: During investigation, analysts can bookmark important findings for later reference and include them in incident documentation.

5. **Entity Behavior Analytics**: UEBA capabilities help identify anomalous behavior by comparing current entity activities against established baselines.

6. **Threat Intelligence Integration**: Analysts cross-reference indicators with threat intelligence feeds to identify known malicious actors or campaigns.

7. **Data Enrichment**: Additional context from external sources enhances understanding of entities and evidence.

The goal is to determine the attack vector, affected systems, compromised accounts, and data exposure. This thorough investigation enables appropriate containment, eradication, and recovery actions while supporting post-incident reporting and lessons learned documentation.
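The investigation-graph expansion in point 2 amounts to a reachability search over entity relationships. A minimal sketch, using hypothetical entity labels, of how expanding nodes discovers everything transitively connected to an incident:

```python
from collections import deque

# Breadth-first expansion from one alerted entity across undirected
# entity-to-entity relationships.
def expand_graph(edges, start):
    """Return all entities reachable from `start`."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

edges = [
    ("alert:phish-123", "user:alice"),
    ("user:alice", "host:WS-042"),
    ("host:WS-042", "ip:203.0.113.7"),
    ("user:bob", "host:WS-099"),  # unrelated cluster, stays out of scope
]
print(sorted(expand_graph(edges, "alert:phish-123")))
# -> ['alert:phish-123', 'host:WS-042', 'ip:203.0.113.7', 'user:alice']
```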

Investigate threats using the unified audit log

The unified audit log in Microsoft 365 is a powerful tool for security operations analysts investigating potential threats and security incidents. This centralized logging system captures user and administrator activities across various Microsoft 365 services, including Exchange Online, SharePoint, OneDrive, Azure AD, and Teams.

To investigate threats using the unified audit log, analysts begin by accessing the Microsoft Purview compliance portal or using PowerShell cmdlets like Search-UnifiedAuditLog. The search functionality allows filtering by date range, users, activities, and specific services, enabling targeted investigations.

When conducting threat investigations, analysts should focus on several key activity types. Failed login attempts and suspicious sign-in patterns may indicate brute force attacks or credential compromise. File access and sharing activities help identify data exfiltration attempts. Administrative actions such as permission changes or mailbox delegations could reveal privilege escalation. Email forwarding rules and transport rules modifications might expose email-based attacks.

The investigation process typically follows these steps: First, define the scope by identifying the timeframe and affected users or resources. Second, execute searches using relevant filters to narrow down suspicious activities. Third, analyze the results by correlating events across different services to build a complete picture of the incident. Fourth, export and preserve evidence for further analysis or legal requirements.
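The scoping and filtered-search steps can be sketched against exported audit records. The field names mirror what Search-UnifiedAuditLog returns (CreationDate, UserId, Operation), but this is an offline toy filter over sample data, not the cmdlet itself:

```python
from datetime import datetime

def filter_audit(records, start, end, users=None, operations=None):
    """Keep records inside the incident window that match the
    optional user and operation filters."""
    out = []
    for r in records:
        when = datetime.fromisoformat(r["CreationDate"])
        if not (start <= when <= end):
            continue
        if users and r["UserId"] not in users:
            continue
        if operations and r["Operation"] not in operations:
            continue
        out.append(r)
    return out

records = [
    {"CreationDate": "2024-05-01T08:15:00", "UserId": "alice@contoso.com",
     "Operation": "New-InboxRule"},
    {"CreationDate": "2024-05-01T09:40:00", "UserId": "bob@contoso.com",
     "Operation": "FileDownloaded"},
    {"CreationDate": "2024-04-20T10:00:00", "UserId": "alice@contoso.com",
     "Operation": "FileDownloaded"},
]
hits = filter_audit(records,
                    datetime(2024, 5, 1), datetime(2024, 5, 2),
                    users={"alice@contoso.com"},
                    operations={"New-InboxRule"})
print([r["Operation"] for r in hits])  # -> ['New-InboxRule']
```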

Analysts should understand that audit log data is retained for 180 days by default (raised from the earlier 90-day default), while Audit (Premium) licensing, included with E5, extends retention to one year. For effective threat hunting, creating custom alert policies helps automate detection of suspicious patterns.

Best practices include establishing baseline normal behavior to identify anomalies, correlating audit log data with other security tools like Microsoft Sentinel, and maintaining documented procedures for common investigation scenarios. The unified audit log serves as a critical data source for incident response, providing the evidence trail necessary to understand attack scope, identify compromised accounts, and implement appropriate remediation measures.

Investigate threats using Content Search

Content Search is a powerful eDiscovery tool in Microsoft 365 that Security Operations Analysts use to investigate potential threats and security incidents across various Microsoft 365 services. This feature allows analysts to search through Exchange Online mailboxes, SharePoint Online sites, OneDrive for Business accounts, and Microsoft Teams conversations to locate suspicious content or evidence of malicious activity.

When investigating threats, analysts begin by accessing the Microsoft Purview compliance portal and creating a new content search. They define specific search criteria using keywords, date ranges, sender/recipient information, and other metadata to narrow down relevant content. For example, if investigating a phishing campaign, an analyst might search for specific malicious URLs or attachment names across all mailboxes.

The search query language supports various operators including AND, OR, and NOT, enabling precise filtering of results. Analysts can also use property conditions to target specific content types, such as emails with attachments or messages sent from external domains. This granular control helps identify compromised accounts, data exfiltration attempts, or policy violations.

Once the search completes, analysts can preview results to validate their findings before taking action. The preview feature displays matching items and highlights search terms, helping analysts quickly assess the scope of an incident. If the search returns relevant evidence, analysts can export the results for further analysis or legal purposes.

Content Search integrates with the broader incident response workflow by providing crucial evidence that helps determine attack vectors, affected users, and the timeline of malicious activities. Analysts often combine Content Search findings with data from Microsoft Sentinel and Microsoft Defender to build a comprehensive picture of security incidents.

Best practices include documenting all searches performed, maintaining chain of custody for exported content, and regularly reviewing search permissions to ensure only authorized personnel can access sensitive investigation data.
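The AND/OR/NOT semantics can be illustrated with a small evaluator. Queries are modeled here as nested tuples rather than query strings so the operator logic, not parsing, is the point; the sample mailbox content is hypothetical:

```python
# Evaluate a boolean keyword query against a piece of text.
def matches(text, query):
    if isinstance(query, str):
        return query.lower() in text.lower()
    op, *terms = query
    if op == "AND":
        return all(matches(text, t) for t in terms)
    if op == "OR":
        return any(matches(text, t) for t in terms)
    if op == "NOT":
        return not matches(text, terms[0])
    raise ValueError(f"unknown operator: {op}")

mailbox = [
    "Invoice attached, please pay via the new account",
    "Team lunch on Friday",
    "Your invoice is ready - no action needed",
]
query = ("AND", "invoice", ("NOT", "no action"))
print([m for m in mailbox if matches(m, query)])
# -> ['Invoice attached, please pay via the new account']
```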

Investigate threats using Microsoft Graph activity logs

Microsoft Graph activity logs are a powerful tool for security operations analysts to investigate threats and suspicious activities within an organization's Microsoft 365 environment. These logs capture API calls made to Microsoft Graph, providing visibility into how applications and users interact with organizational data.

When investigating threats, analysts can leverage Microsoft Graph activity logs to track several key elements. First, they can identify which applications are accessing sensitive data by examining the app IDs and permissions used in API calls. This helps detect potentially malicious or compromised applications that may be exfiltrating data.

The logs contain valuable information including timestamps, client request IDs, user agent strings, IP addresses, and the specific Graph API endpoints accessed. Analysts can correlate these details with known indicators of compromise or unusual patterns that suggest malicious activity.

To access these logs, security teams can use Azure Monitor or stream them to a SIEM solution like Microsoft Sentinel. Once ingested, analysts can write KQL queries to search for anomalies such as unusual access times, excessive API calls, or requests from suspicious geographic locations.

Key investigation scenarios include detecting token theft where attackers use stolen authentication tokens to access Graph APIs, identifying data exfiltration attempts through bulk download patterns, and discovering unauthorized application registrations that could indicate persistence mechanisms.

Analysts should establish baselines for normal Graph API activity within their environment to better identify deviations. Cross-referencing Graph activity logs with Azure AD sign-in logs, audit logs, and other telemetry sources provides comprehensive threat context.
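Baselining can be sketched as a z-score check over daily call volumes. This is a toy model with made-up numbers; real detections would baseline per app, per operation, and per user with far richer features:

```python
import statistics

def volume_outliers(daily_counts, threshold=3.0):
    """Return (day, count) pairs more than `threshold` standard
    deviations above the mean of the series."""
    counts = list(daily_counts.values())
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [(day, n) for day, n in daily_counts.items()
            if (n - mean) / stdev > threshold]

# A month of steady activity, then one bulk-download burst.
history = {f"2024-04-{d:02d}": 100 + (d % 5) for d in range(1, 30)}
history["2024-04-30"] = 2400
print(volume_outliers(history))  # -> [('2024-04-30', 2400)]
```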

Best practices include configuring appropriate log retention periods, setting up alerts for high-risk API operations, and regularly reviewing application permissions. By systematically analyzing Microsoft Graph activity logs, security operations teams can uncover sophisticated attacks that leverage legitimate APIs to avoid traditional detection methods.

Investigate and remediate incidents in Microsoft Sentinel

Microsoft Sentinel provides a comprehensive platform for investigating and remediating security incidents through its unified Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) capabilities. When an incident is created in Microsoft Sentinel, analysts begin their investigation on the Incidents page, which displays all active incidents with severity levels, status, and assigned owners.

The investigation process starts with reviewing the incident details, including related alerts, entities such as users, IP addresses, hosts, and files, and the timeline of events. Analysts can use the Investigation Graph to visualize connections between entities and understand the attack scope; this graphical representation helps identify lateral movement, compromised accounts, and affected resources. During investigation, analysts can run queries using Kusto Query Language (KQL) to search through logs and gather additional evidence. The Entity Behavior feature provides insights into user and entity activities, highlighting anomalous patterns that may indicate compromise, and bookmarks allow analysts to save important findings for later reference and to build a case.

For remediation, Microsoft Sentinel integrates with playbooks built on Azure Logic Apps. These automated workflows can perform actions such as isolating compromised hosts, blocking malicious IP addresses, disabling user accounts, or sending notifications to stakeholders. Analysts can trigger playbooks manually or configure them to run automatically when specific conditions are met.

After completing the investigation and remediation steps, analysts update the incident status, add comments documenting their findings, and close the incident with an appropriate classification such as True Positive, Benign Positive, or False Positive. This documentation helps improve future detection rules and response procedures, and the resulting audit trail supports compliance and continuous improvement of the organization's security posture.
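The closing step above can be sketched in code. The snippet below builds the kind of `properties` body that the Microsoft.SecurityInsights incidents REST API accepts when closing an incident; field and classification names follow that API, but verify them against the api-version you target, and treat the helper itself as an illustrative sketch rather than an official client.

```python
# Sketch: build the payload used to close a Microsoft Sentinel incident.
# Field names follow the Microsoft.SecurityInsights incidents REST API;
# verify against your target api-version before relying on them.

VALID_CLASSIFICATIONS = {"TruePositive", "BenignPositive", "FalsePositive", "Undetermined"}

def build_close_payload(title: str, severity: str, classification: str, comment: str) -> dict:
    """Return the 'properties' body for a PUT that closes the incident."""
    if classification not in VALID_CLASSIFICATIONS:
        raise ValueError(f"Unknown classification: {classification}")
    return {
        "properties": {
            "title": title,
            "severity": severity,
            "status": "Closed",
            "classification": classification,
            "classificationComment": comment,
        }
    }

payload = build_close_payload("Suspicious sign-in", "Medium", "TruePositive",
                              "Confirmed credential phishing; user password reset.")
print(payload["properties"]["status"])  # Closed
```

In practice the payload would be sent with an authenticated PUT against the incident's Azure resource ID; the validation step mirrors the portal's requirement that a closed incident carry a classification.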

Create and configure automation rules in Sentinel

Microsoft Sentinel automation rules provide a powerful way to streamline incident response and manage security operations efficiently. These rules allow analysts to automate repetitive tasks, assign incidents, and trigger playbooks based on specific conditions.

To create an automation rule in Sentinel, navigate to the Azure portal and access your Sentinel workspace. Select 'Automation' from the left menu, then click 'Create' and choose 'Automation rule.' You will be presented with a configuration interface where you define the rule parameters.

First, provide a meaningful name for your rule that describes its purpose. Set the trigger condition, which determines when the rule executes. Common triggers include when an incident is created or updated. You can specify conditions based on incident properties such as severity level, status, tactics, analytic rule names, or custom tags.

The conditions section allows granular filtering. For example, you might create a rule that only applies to high-severity incidents from specific analytics rules or those containing particular entities like IP addresses or user accounts.

Next, define the actions the rule should perform. Available actions include changing incident status, assigning an owner, modifying severity, adding tags, or running a playbook. You can chain multiple actions within a single rule for comprehensive automation.

Set the rule order to determine execution priority when multiple rules apply to the same incident. Lower numbers execute first. Additionally, configure an expiration date if the rule should only be active temporarily.
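The condition matching and ordering described above can be modeled in a few lines. This is a hypothetical sketch of the evaluation logic, not Sentinel's internal implementation: rule names, the condition shape, and the property names are illustrative.

```python
# Hypothetical model of automation rule evaluation: a rule fires when every
# condition matches the incident's properties, and matching rules run in
# ascending order (lower order value executes first).

def matches(rule: dict, incident: dict) -> bool:
    """True when every rule condition matches the incident's properties."""
    return all(incident.get(prop) in allowed
               for prop, allowed in rule["conditions"].items())

def applicable_rules(rules: list, incident: dict) -> list:
    """Return names of matching rules, lowest order first."""
    hits = [r for r in rules if matches(r, incident)]
    return [r["name"] for r in sorted(hits, key=lambda r: r["order"])]

rules = [
    {"name": "Tag phishing", "order": 2, "conditions": {"tactics": {"InitialAccess"}}},
    {"name": "Escalate high sev", "order": 1, "conditions": {"severity": {"High"}}},
]
incident = {"severity": "High", "tactics": "InitialAccess"}
print(applicable_rules(rules, incident))  # ['Escalate high sev', 'Tag phishing']
```

Note how "Escalate high sev" runs first despite being listed second, because its order value is lower; this is the priority behavior the portal exposes.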

Automation rules support both incident-triggered and alert-triggered scenarios. For complex workflows, integrate Logic Apps playbooks that can perform advanced operations like sending notifications, enriching data, or interacting with external systems.

Best practices include starting with simple rules and gradually adding complexity, testing rules in a non-production environment, and documenting rule purposes for team awareness. Regular review and optimization ensure automation remains effective as your security environment evolves.

Create and configure Microsoft Sentinel playbooks

Microsoft Sentinel playbooks are automated workflows built on Azure Logic Apps that help security teams respond to incidents efficiently. These playbooks enable Security Operations Analysts to automate repetitive tasks and orchestrate responses across multiple systems.

To create a playbook, navigate to Microsoft Sentinel in the Azure portal, select Automation from the left menu, and click Create. Choose 'Playbook with incident trigger' or 'Playbook with alert trigger' based on your requirements. The incident trigger provides access to incident details, entities, and allows updating incident properties, while the alert trigger focuses on individual alerts.

When configuring playbooks, you must establish connections to various services like Microsoft Teams, email providers, or third-party security tools. These connectors authenticate and enable communication between systems. Common actions include sending notifications to Teams channels, creating tickets in ServiceNow, enriching alerts with threat intelligence, or isolating compromised devices.

Playbook configuration involves designing the Logic App workflow using the visual designer. You can add conditions, loops, and multiple actions to create sophisticated response procedures. Variables store data between steps, and expressions help manipulate information dynamically.
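The workflow pattern above (sequential actions, shared variables, error handling) can be sketched conceptually. This is not the Logic Apps runtime; step names and the context dictionary are stand-ins that mirror how a designer-built playbook passes data between actions and routes failures to a handler instead of silently aborting.

```python
# Illustrative playbook-style workflow: steps run in sequence, a shared
# context carries variables between steps, and a failed step is recorded
# rather than crashing the whole run (akin to a "run after failed" branch).

def run_playbook(steps, context):
    for name, action in steps:
        try:
            context = action(context)
        except Exception as exc:
            context.setdefault("errors", []).append(f"{name}: {exc}")
    return context

def enrich_ip(ctx):
    # Stand-in enrichment: a real playbook would call a threat intel connector.
    ctx["ip_reputation"] = "malicious" if ctx["ip"].startswith("203.0.113.") else "unknown"
    return ctx

def notify(ctx):
    # Stand-in notification: a real playbook would post to a Teams channel.
    ctx["notified"] = ctx["ip_reputation"] == "malicious"
    return ctx

result = run_playbook([("enrich_ip", enrich_ip), ("notify", notify)],
                      {"ip": "203.0.113.7"})
print(result["notified"])  # True
```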

To attach playbooks to analytics rules, go to the Analytics section, edit your rule, and configure the Automated response tab. You can also run playbooks manually from the Incidents page by selecting an incident and choosing Run playbook.

Permissions are crucial for playbook functionality. The Microsoft Sentinel Automation Contributor role must be granted to Microsoft Sentinel itself on the resource group containing the playbooks so that automation rules can run them, while the Logic App Contributor role enables users to create and modify playbooks.

Best practices include testing playbooks in a development environment before production deployment, implementing error handling for failed actions, and documenting playbook purposes and dependencies. Regular reviews ensure playbooks remain effective as your security environment evolves.

Run playbooks on on-premises resources

Running playbooks on on-premises resources is a critical capability in Microsoft Sentinel that extends automated incident response beyond cloud environments to hybrid infrastructure. This functionality allows security analysts to execute Security Orchestration, Automation, and Response (SOAR) playbooks against resources located in on-premises data centers or private networks.

To achieve this, Microsoft Sentinel leverages Azure Logic Apps with on-premises data gateways or hybrid connections. The on-premises data gateway acts as a bridge, enabling secure communication between cloud-based Logic Apps and on-premises systems. This setup requires installing the gateway software on a server within your local network that can access the target resources.

When configuring playbooks for on-premises execution, security teams can integrate with various systems including Active Directory, on-premises SIEM solutions, network devices, firewalls, and custom applications. Common use cases include isolating compromised endpoints, disabling user accounts in local Active Directory, blocking IP addresses on on-premises firewalls, or collecting forensic data from local servers.

The architecture typically involves creating Logic App connectors that communicate through the gateway. Analysts must ensure proper network connectivity, firewall rules, and authentication credentials are configured. Service accounts with appropriate permissions are often required to perform remediation actions on local systems.

Key considerations include latency between cloud and on-premises resources, ensuring high availability of the gateway, and implementing proper security measures for the connection. Organizations should also establish governance policies around which automated actions are permitted on critical on-premises infrastructure.
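A governance policy like the one described above can be enforced with a simple allow-list check before a playbook acts through the gateway. The tier names, action names, and policy structure here are entirely hypothetical; they only illustrate the "which automated actions are permitted on which assets" decision.

```python
# Hypothetical governance check: before a playbook acts on an on-premises
# asset, verify the action is on the approved list for that asset's tier.
# Tier and action names are illustrative placeholders.

POLICY = {
    "critical": {"collect_forensics"},  # read-only actions on critical systems
    "standard": {"collect_forensics", "disable_account", "block_ip"},
}

def action_permitted(asset_tier: str, action: str) -> bool:
    """True when the governance policy allows this automated action."""
    return action in POLICY.get(asset_tier, set())

print(action_permitted("critical", "disable_account"))  # False
print(action_permitted("standard", "block_ip"))         # True
```

Unknown tiers deliberately permit nothing, so an unclassified asset defaults to the safest behavior.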

By enabling playbook execution on on-premises resources, security operations centers can maintain consistent incident response procedures across their entire environment, reducing mean time to respond (MTTR) and ensuring comprehensive protection regardless of where assets are located. This hybrid approach is essential for organizations transitioning to cloud while maintaining legacy systems.

Create and use Security Copilot promptbooks

Security Copilot promptbooks are pre-built collections of prompts designed to streamline incident response workflows within Microsoft's security ecosystem. These promptbooks enable Security Operations Analysts to automate repetitive investigative tasks and maintain consistency across incident investigations.

To create a promptbook, navigate to the Security Copilot interface and select the promptbook creation option. You can build custom promptbooks by chaining together multiple prompts that address specific investigation scenarios. Each prompt within the promptbook executes sequentially, with outputs from one prompt potentially feeding into subsequent queries. When designing promptbooks, consider the logical flow of your investigation process, starting with initial triage questions and progressing through deeper analysis steps.
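The sequential chaining described above, where one prompt's output can feed the next, can be sketched abstractly. The prompt functions below are stand-ins, not the Security Copilot API; they only demonstrate the shared-transcript pattern.

```python
# Sketch of the sequential-prompt idea behind promptbooks: each prompt is a
# callable that receives the input parameters plus the transcript of prior
# outputs, so later prompts can build on earlier results.

def run_promptbook(prompts, params):
    transcript = []
    for prompt in prompts:
        transcript.append(prompt(params, transcript))
    return transcript

def summarize(params, prior):
    # Stand-in for an incident-summarization prompt.
    return f"Summary of incident {params['incident_id']}"

def extract_iocs(params, prior):
    # Stand-in for a follow-up prompt consuming the previous output.
    return f"IOCs derived from: {prior[-1]}"

out = run_promptbook([summarize, extract_iocs], {"incident_id": "INC-42"})
print(out[1])  # IOCs derived from: Summary of incident INC-42
```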

Microsoft provides several built-in promptbooks that cover common scenarios such as incident summarization, threat actor profiling, vulnerability assessment, and script analysis. These pre-configured promptbooks serve as excellent starting points that analysts can modify to suit their organizational requirements.

To use a promptbook, select it from your library and provide the required input parameters, such as an incident ID, IP address, or file hash. The promptbook then executes each prompt in sequence, gathering and correlating information from connected security tools including Microsoft Sentinel, Defender XDR, and Intune.

Best practices for promptbook management include documenting the purpose of each custom promptbook, regularly reviewing and updating prompts to reflect current threat landscapes, and sharing effective promptbooks across your security team. You can also export and import promptbooks to facilitate collaboration between different security teams or organizations.

Promptbooks significantly reduce investigation time by eliminating the need to manually craft individual queries for each incident. They ensure that junior analysts follow established investigation procedures while enabling senior analysts to focus on complex threat analysis. By standardizing response procedures through promptbooks, organizations maintain consistent security operations quality across all team members.

Manage Security Copilot sources, plugins, and files

Microsoft Security Copilot is an AI-powered security tool that enhances incident response capabilities for Security Operations Analysts. Managing its sources, plugins, and files is essential for maximizing its effectiveness in threat detection and response.

**Sources Management:**
Security Copilot integrates with multiple Microsoft security products including Microsoft Defender XDR, Microsoft Sentinel, Microsoft Intune, and Microsoft Entra ID. Analysts must configure these data sources to ensure Copilot has access to relevant security telemetry. This involves enabling appropriate connectors and ensuring proper permissions are established for data flow between services.

**Plugins Configuration:**
Plugins extend Security Copilot's capabilities by connecting to additional services and data sources. Microsoft provides built-in plugins for its security ecosystem, while third-party plugins enable integration with external security tools. Analysts can enable or disable plugins based on organizational needs through the Copilot settings interface. Each plugin requires specific permissions and may need API keys or authentication credentials for proper functionality.

**File Management:**
Security Copilot allows analysts to upload files for analysis during investigations. This includes malware samples, log files, and threat intelligence reports. Uploaded files are processed using AI capabilities to extract indicators of compromise and provide contextual insights. Analysts should understand file size limitations and supported formats when uploading content for analysis.

**Best Practices:**
Regularly review enabled sources and plugins to ensure they align with current security requirements. Maintain proper access controls to limit who can modify Copilot configurations. Document all custom integrations and plugin configurations for team reference. Monitor usage patterns to optimize which sources provide the most valuable insights during incident investigations.

Effective management of these components ensures Security Copilot delivers accurate, contextual responses that accelerate incident investigation and response workflows for security teams.

Integrate Security Copilot with connectors

Security Copilot integration with connectors enables security analysts to enhance their incident response capabilities by connecting to various data sources and security tools within their environment. Connectors serve as bridges between Security Copilot and external systems, allowing the AI assistant to access, analyze, and correlate data from multiple platforms.

To integrate Security Copilot with connectors, analysts first navigate to the Security Copilot portal and access the Sources section. Here, they can enable built-in Microsoft connectors such as Microsoft Defender XDR, Microsoft Sentinel, Microsoft Intune, and Microsoft Entra ID. These native integrations provide seamless access to security data across the Microsoft ecosystem.

The configuration process involves authenticating each connector with appropriate credentials and permissions. Analysts must ensure they have the necessary role-based access control (RBAC) permissions to enable data sharing between Security Copilot and connected services. This typically requires Security Administrator or Global Administrator privileges for initial setup.

Once connectors are established, Security Copilot can pull relevant incident data, threat intelligence, and contextual information from connected sources. During incident investigations, analysts can prompt Security Copilot to query specific data sources, correlate alerts across platforms, and generate comprehensive incident summaries.

Third-party connectors extend functionality beyond Microsoft products, allowing integration with SIEM solutions, threat intelligence platforms, and other security tools. Custom plugins can also be developed using the Security Copilot plugin architecture to connect proprietary or specialized systems.

The integration benefits incident response by providing unified visibility across security tools, reducing context switching between consoles, and accelerating threat investigation through AI-powered analysis. Analysts can leverage natural language queries to extract insights from connected data sources, making complex investigations more efficient. Proper connector management ensures Security Copilot has access to the most relevant and current security data for effective incident handling.

Manage permissions and roles in Security Copilot

Managing permissions and roles in Security Copilot is essential for organizations implementing Microsoft security solutions effectively. Security Copilot operates on a role-based access control (RBAC) model that determines what actions users can perform within the platform.

There are two primary roles in Security Copilot: Owner and Contributor. Owners have full administrative capabilities, including the ability to manage user access, configure settings, assign roles to other users, and modify organizational configurations. Contributors can use Security Copilot features to investigate threats and generate security insights but cannot modify access settings or administrative configurations.

Permission management involves several key aspects. First, administrators must assign appropriate roles based on job responsibilities and the principle of least privilege. This ensures users have only the access necessary to perform their duties. Second, integration with Microsoft Entra ID allows organizations to leverage existing identity management infrastructure for authentication and authorization.
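The least-privilege split between the two roles can be expressed as a minimal capability check. The capability names below are illustrative, not Security Copilot's internal permission model.

```python
# Minimal sketch of the two-role model: Owners administer, Contributors
# investigate. Capability names are illustrative placeholders.

ROLE_CAPABILITIES = {
    "Owner": {"manage_access", "configure_settings", "run_investigations"},
    "Contributor": {"run_investigations"},
}

def can(role: str, capability: str) -> bool:
    """True when the role grants the capability; unknown roles grant nothing."""
    return capability in ROLE_CAPABILITIES.get(role, set())

print(can("Contributor", "manage_access"))       # False
print(can("Contributor", "run_investigations"))  # True
```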

To configure permissions, administrators access the Security Copilot settings through the admin portal. From there, they can add users, assign roles, and review current access configurations. The audit log functionality enables tracking of who made changes and when, supporting compliance requirements and security monitoring.

Security Copilot also integrates with other Microsoft security products like Microsoft Defender XDR and Microsoft Sentinel. Permissions in these connected services affect what data Security Copilot can access and analyze. Users must have appropriate permissions in underlying data sources for Security Copilot to retrieve and process that information effectively.

Best practices include regularly reviewing user access, removing permissions for departing employees promptly, and implementing conditional access policies through Microsoft Entra ID. Organizations should document their permission structure and establish clear procedures for requesting and approving access changes. This comprehensive approach to permission management helps maintain security posture while enabling analysts to leverage Security Copilot capabilities for incident response and threat investigation.

Monitor Security Copilot capacity and cost

Security Copilot capacity and cost monitoring is essential for organizations utilizing Microsoft's AI-powered security assistant. Security Copilot operates on a consumption-based pricing model using Security Compute Units (SCUs), which measure the computational resources consumed during security operations tasks.

Capacity management involves provisioning SCUs based on organizational needs. Administrators must assess workload requirements, considering factors like the number of security analysts, frequency of investigations, and complexity of queries. SCUs can be provisioned in increments, allowing flexible scaling based on demand patterns.

To monitor capacity utilization, security teams should access the Microsoft Defender portal or Azure portal where usage metrics are displayed. Key metrics include SCU consumption rates, peak usage periods, and trending data over time. Understanding these patterns helps organizations optimize their provisioning strategy and avoid service disruptions during critical security incidents.

Cost management requires establishing budgets and monitoring actual spend against allocated resources. Organizations should implement alerts when consumption approaches threshold limits. Azure Cost Management provides detailed breakdowns of Security Copilot expenses, enabling finance and security teams to track return on investment.
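The threshold alerting described above reduces to a utilization check. The 80% default and the function itself are placeholders for whatever budget alerting an organization wires up in Azure Cost Management; they are not Microsoft's published thresholds or rates.

```python
# Hedged sketch of a consumption alert: flag when SCU utilization meets or
# exceeds a configurable threshold. Threshold values are placeholders.

def scu_alert(used_scus: float, provisioned_scus: float, threshold: float = 0.8) -> bool:
    """Return True when utilization crosses the alert threshold."""
    if provisioned_scus <= 0:
        raise ValueError("provisioned_scus must be positive")
    return used_scus / provisioned_scus >= threshold

print(scu_alert(9, 10))  # True  (90% >= 80%)
print(scu_alert(5, 10))  # False (50% < 80%)
```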

Best practices for managing capacity and cost include:

1. Establishing baseline usage patterns during initial deployment phases
2. Creating governance policies defining approved use cases and user groups
3. Implementing regular reviews of consumption reports to identify optimization opportunities
4. Training analysts on efficient query formulation to maximize value per SCU consumed
5. Setting up automated alerts for unusual consumption spikes that might indicate misuse or inefficient workflows

Organizations should balance cost efficiency with operational readiness, ensuring sufficient capacity exists during security incidents when rapid investigation capabilities become critical. Regular capacity planning sessions help align Security Copilot resources with evolving security operations requirements while maintaining budget compliance across the organization.

Identify threats and risks using Security Copilot

Security Copilot is an AI-powered security tool that helps Security Operations Analysts identify threats and risks more efficiently within their organization's environment. This capability transforms how analysts approach incident response by leveraging natural language processing and machine learning to accelerate threat detection and analysis.

When using Security Copilot to identify threats and risks, analysts can submit queries in natural language to investigate suspicious activities, analyze security alerts, and correlate data across multiple security solutions. The tool integrates with Microsoft Sentinel, Microsoft Defender XDR, and other security platforms to provide comprehensive threat intelligence.

Key capabilities include analyzing incident summaries where Security Copilot can process complex security incidents and provide clear explanations of what occurred, which systems were affected, and potential attack vectors. Analysts can ask questions like "What are the indicators of compromise in this incident?" or "Show me related threats across my environment."

Threat intelligence enrichment allows Security Copilot to pull relevant threat intelligence data, helping analysts understand the context of detected threats, including known attacker techniques, malware families, and recommended remediation steps. This contextual information enables faster and more informed decision-making during incident triage.

Risk assessment becomes more streamlined as Security Copilot can evaluate the potential impact of identified threats on business operations. It helps prioritize which incidents require urgent attention based on severity, affected assets, and organizational risk tolerance.

The tool also assists in hunting for threats by generating KQL queries based on analyst descriptions, enabling proactive searches for suspicious patterns or behaviors that might indicate undetected compromises. This capability reduces the expertise barrier for threat hunting activities.

By combining these features, Security Copilot enables analysts to reduce mean time to detect and respond to threats while maintaining thorough documentation of their investigation processes for compliance and future reference purposes.

Investigate incidents using Security Copilot

Security Copilot is an AI-powered tool integrated into Microsoft Defender XDR that enhances incident investigation capabilities for security analysts. When investigating incidents, Security Copilot provides natural language interaction to help analysts understand and respond to threats more effectively.

To investigate incidents using Security Copilot, analysts can access the tool through the Microsoft Defender portal. The Copilot sidebar appears when viewing incident details, allowing analysts to ask questions about the incident in plain language. For example, analysts can ask 'What happened in this incident?' or 'Which devices are affected?' and receive comprehensive summaries.

Key capabilities include incident summarization, where Copilot analyzes alerts, entities, and evidence to provide a consolidated view of what occurred. This helps analysts quickly understand the scope and severity of an incident. The tool can also perform script analysis, examining PowerShell, batch files, or other scripts found during investigation to determine if they contain malicious code.

Guided response is another valuable feature where Copilot suggests remediation actions based on the incident context. These recommendations help analysts take appropriate steps to contain and resolve threats. Additionally, Copilot can generate reports summarizing investigation findings, which is useful for documentation and stakeholder communication.

Analysts can use natural language queries to explore entity relationships, understand attack timelines, and identify indicators of compromise. The tool correlates data across multiple Microsoft security products, providing a unified investigation experience.

Security Copilot also assists with threat intelligence by providing context about known threat actors, malware families, and attack techniques relevant to the incident. This contextual information helps analysts assess the sophistication and potential impact of attacks.

By leveraging these capabilities, security operations teams can reduce investigation time, improve accuracy in threat assessment, and respond to incidents more efficiently while maintaining comprehensive documentation of their analysis.
