
Getting Started with Microsoft Sentinel

What is Microsoft Sentinel?

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solution. It provides intelligent security analytics and threat intelligence across your enterprise.

Core Capabilities:

  • Collect data at cloud scale across all users, devices, applications, and infrastructure
  • Detect previously undetected threats using Microsoft's analytics and threat intelligence
  • Investigate threats with AI, hunting suspicious activities at scale
  • Respond to incidents rapidly with built-in orchestration and automation

Key Components:

  1. Data Connectors: Ingest security data from various sources
  2. Analytics Rules: Detect threats and anomalies
  3. Incidents: Aggregate related alerts for investigation
  4. Workbooks: Visualize and monitor security data
  5. Playbooks: Automate response actions using Azure Logic Apps
  6. Hunting: Proactively search for threats using KQL queries (a sample hunt follows this list)
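
For example, here is a minimal hunting query of the kind component 6 refers to, run from Sentinel > Hunting or the Logs blade. It is only a sketch and assumes the Azure Activity connector described later is already sending data:

// Hunt: unusually high volume of delete operations by a single caller
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue endswith "delete"
| summarize Deletes = count(), TouchedGroups = make_set(ResourceGroup, 10) by Caller, bin(TimeGenerated, 1h)
| where Deletes > 10
| order by Deletes desc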

How Microsoft Sentinel Works

┌──────────────┐
│ Data Sources │
└──────┬───────┘
       │  Data Connectors
       ▼
┌─────────────────────────┐
│ Log Analytics Workspace │  (Storage & Query Engine)
└───────────┬─────────────┘
            │
      ┌─────┴───────┐
      ▼             ▼
┌───────────┐ ┌───────────┐
│ Analytics │ │  Hunting  │
│   Rules   │ │  Queries  │
└─────┬─────┘ └─────┬─────┘
      │             │
      ▼             ▼
┌───────────────────────────┐
│         Incidents         │  (SOC Investigation)
└─────────────┬─────────────┘
              │
              ▼
       ┌────────────┐
       │ Playbooks  │  (Automated Response)
       └────────────┘
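
The SecurityAlert and SecurityIncident tables sit at the center of this flow: analytics rules write alerts, and related alerts are rolled up into incidents for the SOC. As a sketch of how to see that aggregation once Sentinel is enabled and rules have fired (incident rows are revised on every update, so the latest revision is taken first):

// Recent incidents and how many alerts each one aggregates
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber // keep only the latest revision of each incident
| extend AlertCount = array_length(AlertIds)
| project IncidentNumber, Title, Severity, Status, AlertCount, CreatedTime
| order by CreatedTime desc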

Prerequisites

  • Azure Subscription: Active subscription required
  • Permissions:
    • Microsoft Sentinel Contributor (to enable Sentinel)
    • Log Analytics Contributor (to create workspace)
    • Reader role on subscription (to connect data sources)
  • Log Analytics Workspace: Required foundation for Sentinel

Setting Up Microsoft Sentinel

Step 1: Create Log Analytics Workspace

Using Azure Portal

  1. Navigate to Log Analytics workspaces
  2. Click + Create
  3. Configure:
    • Subscription: Select subscription
    • Resource Group: rg-security-prod
    • Name: law-sentinel-prod
    • Region: East US (choose region close to data sources)
  4. Click Review + Create

Using Azure CLI

# Create resource group
az group create \
--name rg-security-prod \
--location eastus

# Create Log Analytics workspace
az monitor log-analytics workspace create \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--location eastus \
--retention-time 90 # Days to retain data (30-730)

# Get workspace ID (needed later)
WORKSPACE_ID=$(az monitor log-analytics workspace show \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--query id -o tsv)

Step 2: Enable Microsoft Sentinel

Using Azure Portal

  1. Search for Microsoft Sentinel
  2. Click + Create
  3. Select workspace: law-sentinel-prod
  4. Click Add

Using Azure CLI

# Enable Sentinel on the workspace
# (requires the sentinel CLI extension: az extension add --name sentinel)
az sentinel onboarding-state create \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--name default

Step 3: Connect Data Sources

Common data connectors:

# Enable Azure Activity connector (management plane logs)
az monitor diagnostic-settings create \
--name send-to-sentinel \
--resource /subscriptions/{subscription-id} \
--workspace $WORKSPACE_ID \
--logs '[{"category": "Administrative", "enabled": true}]'

# Enable Microsoft Defender for Cloud
# (Must be enabled in Defender for Cloud first)
# Portal: Sentinel > Data connectors > Microsoft Defender for Cloud > Open connector page > Connect

# Enable Microsoft Entra ID Sign-in Logs
# Portal: Entra ID > Diagnostic settings > Add diagnostic setting
# Send to: law-sentinel-prod
# Logs: SignInLogs, AuditLogs, NonInteractiveUserSignInLogs
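
Connected sources can take anywhere from a few minutes to an hour to start flowing. As a quick sanity check, the following query (run in Sentinel > Logs) confirms that the tables fed by the connectors above are receiving events; isfuzzy avoids errors for tables that do not exist yet:

// Confirm the connectors above are delivering data
union isfuzzy=true AzureActivity, SigninLogs, AuditLogs
| where TimeGenerated > ago(1h)
| summarize Events = count(), Latest = max(TimeGenerated) by Type
| order by Events desc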

Terraform Example

# Resource Group for Security
resource \"azurerm_resource_group\" \"security\" {\n name = \"rg-security-prod\"
location = \"East US\"
}

# Log Analytics Workspace
resource \"azurerm_log_analytics_workspace\" \"sentinel\" {\n name = \"law-sentinel-prod\"
location = azurerm_resource_group.security.location
resource_group_name = azurerm_resource_group.security.name
sku = \"PerGB2018\"
retention_in_days = 90 # 30-730 days

tags = {
Environment = \"Production\"
Service = \"Security\"
}
}

# Enable Microsoft Sentinel
resource \"azurerm_sentinel_log_analytics_workspace_onboarding\" \"sentinel\" {\n workspace_id = azurerm_log_analytics_workspace.sentinel.id
}

# Data Connector: Microsoft Entra ID (Azure Active Directory)
resource "azurerm_sentinel_data_connector_azure_active_directory" "aad" {
  name                       = "AzureActiveDirectory"
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
}

# Data Connector: Microsoft Defender for Cloud
resource "azurerm_sentinel_data_connector_azure_security_center" "defender" {
  name                       = "MicrosoftDefender"
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
}

# Analytics Rule: Brute Force Attack Detection
resource \"azurerm_sentinel_alert_rule_scheduled\" \"brute_force\" {\n name = \"Multiple-Failed-Logins-Brute-Force\"
log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
display_name = \"Multiple Failed Login Attempts\"
severity = \"High\"
enabled = true

query = <<QUERY
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != \"0\" // Failed sign-in
| summarize FailedAttempts = count() by UserPrincipalName, IPAddress, bin(TimeGenerated, 5m)
| where FailedAttempts >= 5
QUERY

query_frequency = \"PT1H\" # Run every hour
query_period = \"PT1H\" # Look back 1 hour
trigger_operator = \"GreaterThan\"
trigger_threshold = 0

tactics = [\"CredentialAccess\"]
techniques = [\"T1110\"] # MITRE ATT&CK: Brute Force

incident_configuration {
create_incident = true
grouping {
enabled = true
reopen_closed_incidents = false
lookback_duration = \"PT5H\"
entity_matching_method = \"AllEntities\"
}
}
}

# Watchlist: VIP Users (protect high-value accounts)
resource \"azurerm_sentinel_watchlist\" \"vip_users\" {\n name = \"VIPUsers\"
log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
display_name = \"VIP Users - Executives\"
item_search_key = \"UserPrincipalName\"
}

# Automation Rule: Auto-assign incidents to Tier 1
resource \"azurerm_sentinel_automation_rule\" \"auto_assign\" {\n name = \"AutoAssignLowSeverity\"
log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
display_name = \"Auto-assign Low/Medium severity incidents\"
order = 1
enabled = true

triggers_on = \"Incidents\"
triggers_when = \"Created\"

condition {
property = \"IncidentSeverity\"
operator = \"In\"
values = [\"Low\", \"Medium\"]
}

actions {
order = 1
action_type = \"ModifyProperties\"

modify_properties {
owner_id = \"user-object-id-here\" # Assign to specific user/group
status = \"Active\"
}
}
}

Creating Analytics Rules

Built-in Templates

Microsoft Sentinel includes 200+ built-in detection templates:

In Portal:

  1. Sentinel > Analytics > Rule templates
  2. Filter by data sources you have connected
  3. Select a template (e.g., "Brute force attack against Azure Portal")
  4. Click Create rule
  5. Customize query/scheduling if needed
  6. Click Review + create

Custom Analytics Rule (KQL Example)

// Detect privilege escalation via role assignment
AzureActivity
| where TimeGenerated > ago(1h)
| where OperationNameValue == "Microsoft.Authorization/roleAssignments/write"
| where ActivityStatusValue == "Success"
| where Properties contains "Owner" or Properties contains "Contributor" or Properties contains "User Access Administrator"
| project TimeGenerated, Caller, ResourceGroup, SubscriptionId, Properties
| extend RoleAssigned = tostring(parse_json(Properties).roleDefinitionName)
| where RoleAssigned in ("Owner", "Contributor", "User Access Administrator")

Save as scheduled analytics rule:

  • Frequency: Every 15 minutes
  • Lookback: 1 hour
  • Threshold: Alert on any result
  • Tactics: Privilege Escalation (MITRE ATT&CK)

Using Workbooks

Pre-built workbooks for visualization:

# Popular workbooks
# 1. Azure AD Sign-ins and Audit
# 2. Azure Activity
# 3. Office 365
# 4. Threat Intelligence
# 5. Identity & Access

# In Portal:
# Sentinel > Workbooks > Templates > Select workbook > Save
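
Workbook tiles are backed by KQL, so custom visualizations can be added alongside the templates. As a sketch (assuming SigninLogs is populated), a query that could drive a sign-in trend chart in a custom workbook tile:

// Sign-in volume by result over the last 7 days (suitable for a workbook time chart)
SigninLogs
| where TimeGenerated > ago(7d)
| extend Result = iff(ResultType == "0", "Success", "Failure")
| summarize SignIns = count() by Result, bin(TimeGenerated, 1h)
| render timechart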

Creating Playbooks (Automation)

Playbooks use Azure Logic Apps for automation:

Example: Auto-block malicious IP in NSG

Using Portal

  1. Sentinel > Automation > Create > Playbook with incident trigger
  2. Name: Block-Malicious-IP-in-NSG
  3. Add steps:
    • Get incident entities (IP addresses)
    • For each IP:
      • Parse JSON (extract IP)
      • Azure NSG - Create security rule (deny inbound from IP)
      • Add comment to incident ("Blocked IP: X.X.X.X")
  4. Save playbook
  5. Grant playbook identity Network Contributor role on NSG

Terraform Example for Playbook

# Create Logic App for playbook
resource \"azurerm_logic_app_workflow\" \"block_ip\" {\n name = \"playbook-block-malicious-ip\"
location = azurerm_resource_group.security.location
resource_group_name = azurerm_resource_group.security.name

identity {
type = \"SystemAssigned\"
}
}

# Grant playbook permission to modify NSG
resource \"azurerm_role_assignment\" \"playbook_nsg\" {\n scope = azurerm_network_security_group.app.id
role_definition_name = \"Network Contributor\"
principal_id = azurerm_logic_app_workflow.block_ip.identity[0].principal_id
}

# Attach playbook to analytics rule (automation rule)
resource \"azurerm_sentinel_automation_rule\" \"block_on_high_severity\" {\n name = \"BlockIPOnHighSeverity\"
log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
display_name = \"Block IP on High Severity Incidents\"
order = 2
enabled = true

triggers_on = \"Incidents\"
triggers_when = \"Created\"

condition {
property = \"IncidentSeverity\"
operator = \"Equals\"
values = [\"High\"]
}

actions {
order = 1
action_type = \"RunPlaybook\"

run_playbook {
logic_app_id = azurerm_logic_app_workflow.block_ip.id
tenant_id = data.azurerm_client_config.current.tenant_id
}
}
}

CI/CD Integration

Deploy Analytics Rules via GitHub Actions

name: Deploy Sentinel Analytics Rules

on:
  push:
    paths:
      - 'sentinel-rules/**'

jobs:
  deploy-rules:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Deploy Analytics Rules
        run: |
          for rule in sentinel-rules/*.json; do
            RULE_NAME=$(basename "$rule" .json)

            az sentinel alert-rule create \
              --resource-group rg-security-prod \
              --workspace-name law-sentinel-prod \
              --rule-name "$RULE_NAME" \
              --rule-file "$rule"
          done

Validate Rules Before Deployment

- name: Validate KQL Syntax
  run: |
    # Use Sentinel APIs to validate query syntax
    az rest --method post \
      --url "https://management.azure.com/subscriptions/{sub-id}/resourceGroups/rg-security-prod/providers/Microsoft.OperationalInsights/workspaces/law-sentinel-prod/api/query?api-version=2021-03-01-privatepreview" \
      --body '{"query": "SigninLogs | where TimeGenerated > ago(1h) | take 1"}'

Best Practices

1. Data Retention Strategy

Configure appropriate retention:

  • Interactive retention: 90 days (hot, queryable)
  • Archive: Up to 7 years (cold storage, lower cost)
# Set retention
az monitor log-analytics workspace update \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--retention-time 90

# Enable archival for specific table
az monitor log-analytics workspace table update \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--name SigninLogs \
--retention-time 90 \
--total-retention-time 365 # Archive for 1 year

2. Cost Optimization

Reduce ingestion costs:

  • Use Basic Logs for high-volume, low-value data
  • Filter unnecessary data at source
  • Use Commitment Tiers (100GB/day, 200GB/day, etc.)
// Example: Filter non-essential sign-ins before ingestion
SigninLogs
| where ResultType != "0" or UserPrincipalName contains "admin" // Only failed logins or admin accounts

3. Analytics Rule Tuning

Reduce false positives:

  • Start with high-confidence detections
  • Use watchlists for allow-lists (known IPs, approved apps)
  • Tune thresholds based on environment
// Use watchlist to exclude known service accounts
let ServiceAccounts = _GetWatchlist('ServiceAccounts') | project UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| where UserPrincipalName !in (ServiceAccounts) // Exclude service accounts
| summarize FailedAttempts = count() by UserPrincipalName

4. MITRE ATT&CK Mapping

Map detections to tactics and techniques:

  • Provides context on attacker behavior
  • Helps prioritize incidents
  • Enables coverage gap analysis (see the query after the mappings below)

Common mappings:

  • Brute Force → CredentialAccess (T1110)
  • Privilege escalation via valid accounts → PrivilegeEscalation (T1078 Valid Accounts)
  • Data Exfiltration → Exfiltration (T1537 Transfer Data to Cloud Account)
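
A rough way to check coverage is to count which tactics fired alerts actually map to; tactics missing from the output are candidates for new detections. This is only a sketch: the Tactics column in SecurityAlert is a comma-separated string, so it is split first:

// Count alerts per MITRE ATT&CK tactic over the last 30 days
SecurityAlert
| where TimeGenerated > ago(30d)
| mv-expand Tactic = split(Tactics, ",")
| extend Tactic = trim(" ", tostring(Tactic))
| where isnotempty(Tactic)
| summarize Alerts = count() by Tactic
| order by Alerts desc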

5. Incident Management Workflow

Standardize investigation process:

  1. Triage: Review alert context, entities involved
  2. Investigate: Use entity timelines, related alerts
  3. Contain: Run playbook to isolate affected resources
  4. Remediate: Remove threat, patch vulnerabilities
  5. Document: Add comments, close incident with classification
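
Because every state change is written to the SecurityIncident table, the same workflow can be tracked in KQL. Here is a sketch of a triage view of currently open incidents (this assumes the Owner column's assignedTo field carries the assignee's display name):

// Open incidents awaiting triage or investigation, newest first
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status in ("New", "Active")
| project IncidentNumber, Title, Severity, Status, Owner = tostring(Owner.assignedTo), CreatedTime
| order by CreatedTime desc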

Common Use Cases

1. Detect Suspicious Sign-ins

SigninLogs
| where TimeGenerated > ago(24h)
| where RiskLevelDuringSignIn == "high" or RiskLevelAggregated == "high"
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, Location, RiskDetail

2. Monitor Privileged Role Assignments

AzureActivity
| where OperationNameValue == "Microsoft.Authorization/roleAssignments/write"
| where Properties contains "Owner" or Properties contains "Contributor"
| extend RoleAssigned = tostring(parse_json(Properties).roleDefinitionName)
| extend Assignee = tostring(parse_json(Properties).principalName)
| project TimeGenerated, Caller, Assignee, RoleAssigned, ResourceGroup

3. Detect Anomalous Resource Creation

AzureActivity
| where OperationNameValue endswith "write"
| where TimeGenerated > ago(1h)
| summarize ResourcesCreated = count() by Caller, bin(TimeGenerated, 15m)
| where ResourcesCreated > 20 // Unusual volume

Things to Avoid

❌ Don't ingest all logs without filtering (cost explosion)
❌ Don't create too many low-confidence analytics rules (alert fatigue)
❌ Don't ignore built-in templates (reinventing the wheel)
❌ Don't run playbooks without proper RBAC (security risk)
❌ Don't forget to test analytics rules before enabling in production
❌ Don't use Sentinel as general-purpose log storage
❌ Don't overlook data connector health (missing data = blind spots)
❌ Don't assign Sentinel permissions to non-security personnel

✅ Do start with critical data sources (Azure AD, Defender, Azure Activity)
✅ Do enable built-in detections first
✅ Do use watchlists for known-good entities
✅ Do automate common response actions with playbooks
✅ Do tune rules to reduce false positives
✅ Do integrate with ITSM (ServiceNow, Jira) for ticketing
✅ Do regularly review coverage (MITRE ATT&CK navigator)
✅ Do archive old data for compliance

Troubleshooting

Data Not Appearing

# Check data connector status
az sentinel data-connector list \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod

# Check Log Analytics ingestion
az monitor log-analytics query \
--workspace $WORKSPACE_ID \
--analytics-query "Heartbeat | summarize count() by bin(TimeGenerated, 1h)" \
--timespan P1D

Analytics Rule Not Triggering

  1. Test the query manually in the Logs blade (see the example below)
  2. Check scheduling: Frequency vs. lookback period
  3. Verify data connector is sending data
  4. Review threshold: Is it too high?
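
For step 1, paste the rule query into the Logs blade constrained to the rule's lookback window; if it returns no rows over that period, the rule has nothing to alert on. For the brute-force rule defined earlier:

// Run the rule query over exactly the configured lookback window (PT1H)
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| summarize FailedAttempts = count() by UserPrincipalName, IPAddress, bin(TimeGenerated, 5m)
| where FailedAttempts >= 5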

Playbook Failing

# Check Logic App run history
az logic workflow show \
--resource-group rg-security-prod \
--name playbook-block-malicious-ip

# View run details (in Portal)
# Logic Apps > playbook-block-malicious-ip > Run history

Cost Estimation

Typical monthly costs (East US region):

  • Log Analytics ingestion: $2.30/GB (after 5GB free)
  • Sentinel ingestion: Additional $2.46/GB
  • Total: ~$4.76/GB ingested

Example scenario (500GB/month):

  • LA ingestion: 500GB × $2.30 = $1,150
  • Sentinel: 500GB × $2.46 = $1,230
  • Total: ~$2,380/month

Optimization: Use Commitment Tiers (100GB/day = $1.70/GB, 30% savings)
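
Per-GB prices vary by region and change over time, so treat the figures above as illustrative. To base an estimate on actual ingestion, the billable volume per table can be pulled from the Usage table:

// Billable ingestion per table over the last 30 days (Quantity is reported in MB)
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| order by IngestedGB desc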