Building a maintainable PowerShell reporting framework for application support teams
This is a practical, repeatable pattern for producing operational reports from scheduled PowerShell scripts, with consistent logging, error handling, and optional Slack alerting. The goal is boring reliability: a framework you can reuse across many jobs without creating “hero scripts”. It covers:
- Script structure that scales across multiple jobs
- Config-driven execution (no hardcoded secrets)
- Logging you can grep and alert on
- A standard folder layout
- A simple logging module
- A job wrapper pattern
- Report output (HTML/CSV)
- Slack notifications (optional)
Designed for admins, not for show.
1) Folder structure
Keep a consistent layout so every automation looks the same. This makes support, handover, and scaling much easier. Below is a simple structure that works well for scheduled jobs.
C:\OpsAutomation\
    config\
        environment.json
    modules\
        Logging.psm1
        Slack.psm1
        MySql.psm1
        HtmlReport.psm1
    jobs\
        WeeklyReferenceSummary.ps1
        ImportHealthCheck.ps1
    output\
        reports\
        exports\
    logs\
        jobs\
        framework\
On GitHub Pages you’re documenting the approach; in a real environment you’d keep this under something like C:\OpsAutomation (or your existing ops path) with permissions restricted appropriately.
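The layout above can be created in one pass; a minimal setup sketch (the root path is an example, adjust it to your environment):

```powershell
# One-off setup; $root is an example path, not a fixed requirement
$root = 'C:\OpsAutomation'
$folders = @(
    'config', 'modules', 'jobs',
    'output\reports', 'output\exports',
    'logs\jobs', 'logs\framework'
)
foreach ($f in $folders) {
    # -Force makes this idempotent: existing folders are left untouched
    New-Item -ItemType Directory -Path (Join-Path $root $f) -Force | Out-Null
}
```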
2) Configuration (no hardcoding)
Store environment settings in JSON. Do not hardcode secrets inside scripts. In production, prefer a secrets store (Windows Credential Manager, Key Vault, 1Password CLI, etc.), but JSON is fine for non-sensitive configuration.
config\environment.json (example)
{
  "Environment": "Production",
  "OutputRoot": "C:\\OpsAutomation\\output",
  "LogRoot": "C:\\OpsAutomation\\logs",
  "Slack": {
    "Enabled": true,
    "WebhookUrl": "https://hooks.slack.com/services/REDACTED/REDACTED/REDACTED",
    "Channel": "#ops-alerts"
  },
  "MySql": {
    "Server": "mysql.example.internal",
    "Port": 3306,
    "Database": "operations",
    "User": "report_reader"
  }
}
For credentials, use a separate method. A common pattern is to read a stored credential and build the connection string at runtime.
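One widely used option on Windows is Export-Clixml / Import-Clixml, which protects the secret with DPAPI (readable only by the same user on the same machine, so create the file as the service account on the host that runs the jobs). The file path here is an example:

```powershell
# One-off, run as the service account on the machine that runs the jobs:
Get-Credential -Message 'MySQL report_reader password' |
    Export-Clixml -Path 'C:\OpsAutomation\config\mysql.cred.xml'

# At runtime, inside a job:
$cred  = Import-Clixml -Path 'C:\OpsAutomation\config\mysql.cred.xml'
$plain = $cred.GetNetworkCredential().Password   # decode only at the point of use
```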
3) Logging module (consistent and grep-friendly)
Logs are the difference between a script that “ran” and a script you can support. Keep it consistent: timestamp, level, job name, message, and optionally a correlation id.
modules\Logging.psm1 (snippet)
function New-LogContext {
    param(
        [Parameter(Mandatory)] [string] $JobName,
        [Parameter(Mandatory)] [string] $LogRoot
    )

    # Note: PowerShell strings do not escape backslashes, so use single ones
    $date = (Get-Date).ToString("yyyy-MM-dd")
    $path = Join-Path $LogRoot ("jobs\{0}\{1}.log" -f $JobName, $date)
    New-Item -ItemType Directory -Path (Split-Path $path) -Force | Out-Null

    return [pscustomobject]@{
        JobName       = $JobName
        LogPath       = $path
        CorrelationId = [guid]::NewGuid().ToString()
    }
}
function Write-Log {
    param(
        [Parameter(Mandatory)] $Context,
        [Parameter(Mandatory)] [ValidateSet("INFO","WARN","ERROR")] [string] $Level,
        [Parameter(Mandatory)] [string] $Message
    )

    $ts   = (Get-Date).ToString("yyyy-MM-dd HH:mm:ss")
    $line = "{0} [{1}] [{2}] [{3}] {4}" -f $ts, $Level, $Context.JobName, $Context.CorrelationId, $Message
    Add-Content -Path $Context.LogPath -Value $line -Encoding UTF8
}
This format makes it easy to search across logs and route alerts (for example, “ERROR” lines in the last hour).
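For example, recent errors can be surfaced with Select-String (the log path here is an example):

```powershell
# ERROR lines from logs written in the last hour
Get-ChildItem 'C:\OpsAutomation\logs\jobs' -Recurse -Filter '*.log' |
    Where-Object LastWriteTime -gt (Get-Date).AddHours(-1) |
    Select-String -Pattern '\[ERROR\]'
```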
4) Job wrapper pattern (one way in, one way out)
A wrapper standardises how jobs run: load config, init logging, catch exceptions, write outputs, and optionally notify Slack. Every job then follows the same shape.
jobs\WeeklyReferenceSummary.ps1 (structure)
param(
    [string] $ConfigPath = "C:\OpsAutomation\config\environment.json"
)

Import-Module "C:\OpsAutomation\modules\Logging.psm1" -Force
Import-Module "C:\OpsAutomation\modules\Slack.psm1" -Force
Import-Module "C:\OpsAutomation\modules\MySql.psm1" -Force
Import-Module "C:\OpsAutomation\modules\HtmlReport.psm1" -Force

$config  = Get-Content $ConfigPath -Raw | ConvertFrom-Json
$jobName = "WeeklyReferenceSummary"
$ctx     = New-LogContext -JobName $jobName -LogRoot $config.LogRoot

try {
    Write-Log -Context $ctx -Level INFO -Message "Job start"

    # 1) Query data
    $rows = Invoke-MySqlQuery -Config $config -Query @"
SELECT
    DATE(created_at) AS day,
    COUNT(*)         AS failures
FROM reporting.failures
WHERE created_at >= DATE_SUB(NOW(), INTERVAL 7 DAY)
GROUP BY DATE(created_at)
ORDER BY day;
"@
    # @(...) so Count is reliable even for zero or one row
    Write-Log -Context $ctx -Level INFO -Message ("Rows returned: {0}" -f @($rows).Count)

    # 2) Build report
    $outDir = Join-Path $config.OutputRoot "reports"
    New-Item -ItemType Directory -Path $outDir -Force | Out-Null
    $reportPath = Join-Path $outDir ("{0}-{1}.html" -f $jobName, (Get-Date).ToString("yyyyMMdd-HHmmss"))
    Export-HtmlReport -Title "Weekly Reference Failures" -Rows $rows -OutPath $reportPath
    Write-Log -Context $ctx -Level INFO -Message ("Report written: {0}" -f $reportPath)

    # 3) Notify (optional)
    if ($config.Slack.Enabled) {
        Send-SlackMessage -WebhookUrl $config.Slack.WebhookUrl -Text ("{0}: report generated: {1}" -f $jobName, $reportPath)
    }

    Write-Log -Context $ctx -Level INFO -Message "Job complete"
    exit 0
}
catch {
    Write-Log -Context $ctx -Level ERROR -Message ("Unhandled error: {0}" -f $_.Exception.Message)
    if ($config.Slack.Enabled) {
        Send-SlackMessage -WebhookUrl $config.Slack.WebhookUrl -Text ("{0}: FAILED - {1}" -f $jobName, $_.Exception.Message)
    }
    exit 1
}
Note the important bits: one config source, one log context, a predictable report location, and a single exception handler that controls exit codes (critical for scheduled tasks).
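The Send-SlackMessage helper isn’t shown elsewhere; a minimal sketch over Slack’s incoming-webhook format (function name and parameters match the calls above):

```powershell
function Send-SlackMessage {
    param(
        [Parameter(Mandatory)] [string] $WebhookUrl,
        [Parameter(Mandatory)] [string] $Text
    )
    # Slack incoming webhooks accept a JSON body with a "text" field
    $body = @{ text = $Text } | ConvertTo-Json -Compress
    Invoke-RestMethod -Uri $WebhookUrl -Method Post -ContentType 'application/json' -Body $body | Out-Null
}
```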
5) MySQL query execution (pattern, not a dependency list)
The exact driver choice varies (ODBC, .NET connector, vendor tooling). The key is that your job calls a single function and receives rows back consistently.
modules\MySql.psm1 (pseudo-implementation)
function Invoke-MySqlQuery {
    param(
        [Parameter(Mandatory)] $Config,
        [Parameter(Mandatory)] [string] $Query
    )

    # In production:
    # - load credential from a secure store
    # - build connection string
    # - open connection
    # - execute query
    # - return rows as objects

    # Return example object array for demonstration:
    return @()
}
Keep the calling code the same regardless of driver. Swapping a connector later should not require rewriting jobs.
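As one concrete option, assuming the MySQL ODBC driver is installed (the driver name below is an assumption; check Get-OdbcDriver) and the credential is loaded separately as in section 2, the pseudo-implementation could look like:

```powershell
function Invoke-MySqlQuery {
    param(
        [Parameter(Mandatory)] $Config,
        [Parameter(Mandatory)] [string] $Query
    )

    # Credential retrieval is environment-specific; Import-Clixml shown as one option
    $cred = Import-Clixml 'C:\OpsAutomation\config\mysql.cred.xml'
    $connStr = "Driver={MySQL ODBC 8.0 Unicode Driver};" +
        "Server=$($Config.MySql.Server);Port=$($Config.MySql.Port);" +
        "Database=$($Config.MySql.Database);Uid=$($Config.MySql.User);" +
        "Pwd=$($cred.GetNetworkCredential().Password);"

    $conn = New-Object System.Data.Odbc.OdbcConnection($connStr)
    try {
        $conn.Open()
        $adapter = New-Object System.Data.Odbc.OdbcDataAdapter($Query, $conn)
        $table   = New-Object System.Data.DataTable
        [void]$adapter.Fill($table)
        return $table.Rows   # DataRow objects; callers treat them like any other objects
    }
    finally {
        $conn.Close()
    }
}
```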
6) Report output (HTML + optional CSV)
HTML is useful for quick viewing and email sharing. CSV is useful for downstream tools. Generate both if needed, but keep output paths consistent.
modules\HtmlReport.psm1 (snippet)
function Export-HtmlReport {
    param(
        [Parameter(Mandatory)] [string] $Title,
        [Parameter(Mandatory)] $Rows,
        [Parameter(Mandatory)] [string] $OutPath
    )

    # Encode the title so special characters can't break the markup
    $safeTitle = [System.Net.WebUtility]::HtmlEncode($Title)

    $html = @()
    $html += "<!doctype html><html><head><meta charset='utf-8'>"
    $html += "<title>$safeTitle</title></head><body>"
    $html += "<h1>$safeTitle</h1>"
    $html += ($Rows | ConvertTo-Html -Fragment)
    $html += "</body></html>"
    $html -join "`n" | Set-Content -Path $OutPath -Encoding UTF8
}
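Where a CSV is also wanted, a sibling helper can reuse the same path conventions (Export-CsvReport is a hypothetical name, not part of the modules above):

```powershell
function Export-CsvReport {
    param(
        [Parameter(Mandatory)] $Rows,
        [Parameter(Mandatory)] [string] $OutPath
    )
    # -NoTypeInformation keeps the first line as headers rather than a #TYPE comment
    $Rows | Export-Csv -Path $OutPath -NoTypeInformation -Encoding UTF8
}
```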
7) Scheduled Task execution
Use a dedicated service account, run with least privilege, and capture exit codes. Keep the scheduled task action predictable and avoid running interactive profiles.
Example task action
Program/script: powershell.exe
Arguments: -NoProfile -ExecutionPolicy Bypass -File "C:\OpsAutomation\jobs\WeeklyReferenceSummary.ps1"
Start in: C:\OpsAutomation
The wrapper pattern above ensures success/failure is meaningful and easy to alert on.
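The same task can be created from PowerShell with the ScheduledTasks module; a sketch where the trigger time, task name, and service account are examples:

```powershell
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\OpsAutomation\jobs\WeeklyReferenceSummary.ps1"' `
    -WorkingDirectory 'C:\OpsAutomation'
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 06:00

# Register under a dedicated service account (prompted for the password here)
Register-ScheduledTask -TaskName 'OpsAutomation-WeeklyReferenceSummary' `
    -Action $action -Trigger $trigger `
    -User 'DOMAIN\svc-opsreports' -Password (Read-Host 'Service account password')
```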
Common mistakes to avoid
Hardcoding environment details
Use config. Keep secrets out of source. Your future self will thank you.
Unstructured “one-off” scripts
Start with the wrapper and modules. Copy/paste structure is fine; copy/paste logic is not.
Logs that don’t help
Log job start/end, counts, paths written, and error messages. Make it easy to diagnose quickly.
Want this implemented in your environment?
If you’re running application support alongside day-to-day admin work, a consistent automation framework reduces manual effort and improves operational visibility without adding complexity.