Domain Controller Baseline Monitoring Deployment with PowerShell
A PowerShell-based deployment workflow for creating standardized Performance Monitor baselines across Domain Controllers, helping AD teams track CPU, memory, disk, network, and NTDS activity consistently.
Automatically discovers Domain Controllers in the Active Directory environment.
Creates a Performance Monitor Data Collector Set with standardized counters.
Starts continuous circular logging for long-term operational visibility.
Overview
Maintaining performance visibility across all Domain Controllers is critical in any Active Directory environment.
Domain Controllers handle authentication, Kerberos tickets, LDAP queries, DNS lookups, Group Policy processing, replication, and directory reads. When a DC is slow, overloaded, or resource constrained, the impact can appear as login delays, authentication failures, replication issues, or application timeouts.
This script provides a standardized, automated method for collecting Domain Controller performance baselines without manually configuring Performance Monitor on each server.
Why Baseline Domain Controllers?
Without baseline metrics, it is difficult to know whether a performance issue is new, normal, or slowly getting worse.
Baselining helps answer questions such as:
- Is high CPU usage abnormal for this Domain Controller?
- Has LDAP traffic increased unexpectedly?
- Is disk latency affecting replication?
- Is memory pressure hurting authentication performance?
- Is network throughput becoming a bottleneck?
What This Script Does
This script automates the deployment of a standardized performance baseline configuration across all Domain Controllers in the Active Directory environment.
It dynamically discovers every Domain Controller, ensures the required logging directory exists, remotely creates a Performance Monitor Data Collector Set using logman, and starts continuous circular logging.
The collector captures critical system and Active Directory counters, including CPU usage, memory pressure, disk I/O, network throughput, and NTDS activity at regular intervals. The data is stored locally on each Domain Controller in a controlled, size-limited format.
Logical Flow
1. Discover all Domain Controllers
2. Connect to each Domain Controller remotely
3. Ensure the baseline logging folder exists
4. Create the Performance Monitor Data Collector Set
5. Add standardized performance counters
6. Configure circular logging
7. Start the collector
8. Repeat safely across all discovered DCs
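The flow above can be sketched in PowerShell. This is a minimal illustration, not the full script: the collector name, log folder, counter list, sample interval, and size cap shown here are assumptions and should be taken from the actual script configuration.

```powershell
# Sketch of the deployment flow. Collector name, log path, counters,
# interval, and size cap are illustrative assumptions.
Import-Module ActiveDirectory

$collectorName = 'DC-Baseline'           # assumed Data Collector Set name
$logFolder     = 'C:\PerfLogs\Baseline'  # assumed local log folder on each DC

# 1. Discover all Domain Controllers in the domain
$dcs = (Get-ADDomainController -Filter *).HostName

foreach ($dc in $dcs) {
    # 2. Connect to each Domain Controller remotely
    Invoke-Command -ComputerName $dc -ScriptBlock {
        param($name, $folder)

        # 3. Ensure the baseline logging folder exists
        if (-not (Test-Path $folder)) {
            New-Item -Path $folder -ItemType Directory | Out-Null
        }

        # 4-6. Create the collector: standardized counters, 15-second
        # sample interval, binary circular logging, 512 MB size cap
        logman create counter $name `
            -c '\Processor(_Total)\% Processor Time' '\Memory\Available MBytes' `
            -si 15 -f bincirc -max 512 -o (Join-Path $folder $name)

        # 7. Start the collector
        logman start $name
    } -ArgumentList $collectorName, $logFolder
}
```

Because `logman` runs on the remote host inside `Invoke-Command`, the collector and its log files live locally on each DC, which matches the size-limited, per-server storage model described above.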
What Gets Collected?
CPU utilization, memory usage, disk I/O, disk latency, and available resources.
Network throughput and traffic patterns that may affect authentication or replication.
Active Directory-related counters that help identify directory service pressure.
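As an illustration of those categories, a standardized counter set along these lines could be passed to logman. The exact list the script uses may differ; these are well-known Windows and NTDS counter paths chosen here as an example.

```powershell
# Example counter set covering the categories above (illustrative;
# the script's actual list may differ)
$counters = @(
    '\Processor(_Total)\% Processor Time'               # CPU utilization
    '\Memory\Available MBytes'                          # memory pressure
    '\LogicalDisk(_Total)\Avg. Disk sec/Read'           # disk read latency
    '\LogicalDisk(_Total)\Avg. Disk sec/Write'          # disk write latency
    '\Network Interface(*)\Bytes Total/sec'             # network throughput
    '\NTDS\LDAP Searches/sec'                           # directory query load
    '\NTDS\LDAP Client Sessions'                        # bound LDAP clients
    '\NTDS\DRA Pending Replication Synchronizations'    # replication backlog
)
```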
Operational Use Cases
This baseline is useful before and after patching, before domain controller migrations, during performance investigations, after network changes, and when comparing behavior across sites.
It is also useful for capacity planning. If CPU, memory, disk, or NTDS activity trends upward over time, the baseline data helps justify scaling decisions before users are affected.
PowerShell Script
The full PowerShell script is maintained on GitHub so it can be updated, versioned, and downloaded directly.
# Quick usage
# 1. Download or clone the script from GitHub.
# 2. Review the script configuration:
# - Output/log folder path
# - Data Collector Set name
# - Counter list
# - Sampling interval
# - Maximum log size / circular logging behavior
# 3. Run from a secured admin workstation or management server
# with permissions to create Performance Monitor collectors on DCs.
# 4. Recommended execution:
powershell.exe -ExecutionPolicy Bypass -File .\Enable-DCPerformanceBaseline.ps1
Requirements
Run this script from a trusted administrative host with the Active Directory PowerShell module available. The execution account must be able to discover Domain Controllers and remotely create Performance Monitor Data Collector Sets on those servers.
Because the script performs remote configuration, firewall rules, WMI/RPC availability, administrative permissions, and endpoint security controls can affect whether deployment succeeds on every Domain Controller.
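A quick pre-flight check can surface most of those blockers before deployment. The sketch below only verifies that the Active Directory module is available and that WinRM is reachable on each DC; it does not validate permissions or local security policy.

```powershell
# Pre-flight sketch: verify module availability and WinRM reachability.
# Does not check permissions or endpoint security controls.
if (-not (Get-Module -ListAvailable ActiveDirectory)) {
    Write-Warning 'Active Directory PowerShell module is not installed.'
}

foreach ($dc in (Get-ADDomainController -Filter *).HostName) {
    try {
        Test-WSMan -ComputerName $dc -ErrorAction Stop | Out-Null
        Write-Host "$dc : WinRM reachable"
    }
    catch {
        Write-Warning "$dc : WinRM not reachable - deployment may fail here"
    }
}
```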
How to Use the Baseline Data
The collected data becomes most valuable when compared over time. A single capture can show current performance, but repeated captures reveal what normal looks like for each Domain Controller.
Use the baseline during incidents to compare current behavior against known-good performance. For example, if authentication is slow after a change, you can compare CPU, disk latency, memory pressure, network throughput, and NTDS activity against earlier baseline data.
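One way to do that comparison is to summarize a collected log with `Import-Counter` and hold the numbers next to an earlier capture. The file path and collector name below are illustrative assumptions.

```powershell
# Sketch: summarize CPU from a collected .blg log so it can be
# compared against an earlier baseline capture. Path is assumed.
$samples = Import-Counter -Path 'C:\PerfLogs\Baseline\DC-Baseline.blg' `
    -Counter '\Processor(_Total)\% Processor Time'

$values = $samples.CounterSamples.CookedValue
$stats  = $values | Measure-Object -Average -Maximum

"Average CPU: {0:N1}%  Peak CPU: {1:N1}%" -f $stats.Average, $stats.Maximum
```

Repeating the same summary for disk latency, memory, network, and NTDS counters gives a per-DC profile that can be diffed against the known-good baseline during an incident.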
Limitations
This script deploys performance baseline collection. It is not a full monitoring platform and does not replace alerting, event log monitoring, replication monitoring, or centralized observability tools.
The results also depend on whether collectors are successfully deployed and allowed to continue running. If local disk space is low, permissions are restricted, or remote management paths are blocked, collection may fail on some Domain Controllers.
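A periodic status sweep helps catch collectors that have stopped. This sketch queries each DC remotely with `logman query`; the collector name `DC-Baseline` is an assumption.

```powershell
# Sketch: check that the collector is still running on every DC.
# 'DC-Baseline' is an assumed Data Collector Set name.
foreach ($dc in (Get-ADDomainController -Filter *).HostName) {
    $status = logman query 'DC-Baseline' -s $dc |
        Select-String -Pattern 'Status:'
    Write-Host "$dc -> $status"
}
```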
Final Thoughts
Domain Controller performance problems are easier to investigate when baseline data already exists. Without historical data, every spike looks suspicious and every slowdown is harder to prove.
By deploying the same collector configuration across all Domain Controllers, this script gives AD teams a consistent way to measure and compare health across the environment.
Next, we can cover how to review the collected performance data, identify unhealthy patterns, and decide which counters matter most during AD incidents.