I wanted to run a full security monitoring stack on my MacBook Pro. Something with file integrity monitoring, threat detection, log correlation. The whole SIEM experience. I wanted Wazuh.
Reality had other plans.
The Original Goal: Wazuh SIEM
Wazuh looked perfect. Open-source SIEM with:
- File integrity monitoring
- Intrusion detection
- Log analysis and correlation
- Compliance reporting (PCI-DSS, HIPAA, etc.)
- Threat intelligence integration
- Security event visualization
I spent hours setting it up. Docker compose files, certificate generation, configuration files. Got the manager running, the indexer configured, the dashboard accessible.
Then I checked Activity Monitor.
6.5 GB of Docker images. Services wanting 4-6 GB of RAM.
The Problem: This Laptop’s Resource Budget
Here’s the constraint for this experiment: I’m running this on a 16GB MacBook Pro that I actively use for:
- Development work
- Running VMs for testing
- Daily computing tasks
- Other experiments
For this tinkering project, I don’t want to sacrifice 25-40% of RAM to monitoring services. The goal is to see what monitoring is possible within a minimal resource footprint.
What I Tried (And Why It Failed)
Attempt 1: Docker on macOS ARM
- Result: Java security policy conflicts with OpenSearch
- Error: java.security.AccessControlException: access denied
- Outcome: Indexer crashed on startup
Attempt 2: Run it in a VM
- Requirement: 4-6 GB RAM minimum for the full stack
- Reality: Not allocating that much for this experiment
- Decision: Different approach needed
The Pivot: What Can Actually Run?
I needed to be realistic. What monitoring could I run that:
- Uses minimal resources
- Runs 24/7 without impacting daily use
- Provides actual system visibility
- Costs $0
Here’s what I landed on:
The Final Stack
1. Prometheus (30 MB RAM)
- Time-series metrics database
- Collects system metrics every 15 seconds
- 15-day data retention
- ~100-200 MB disk growth per week
2. node_exporter (20 MB RAM)
- Exports system metrics for Prometheus
- CPU, memory, disk I/O, network stats
- No configuration needed
3. Grafana (47 MB RAM)
- Visualization dashboards
- Real-time and historical trends
- Alert management (future expansion)
Total: ~97 MB RAM
That’s it. Roughly 97 MB for the three core services. Measured, not estimated.
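If you want to reproduce the measurement rather than take my word for it, summing resident set sizes from ps gets close enough (the process-name pattern is an assumption — adjust it to match what `ps` shows on your machine):

```shell
# Sum resident memory (RSS, reported in KB) of the monitoring processes
ps -axo rss,comm | awk '/prometheus|node_exporter|grafana/ {sum += $1} END {printf "%.1f MB\n", sum/1024}'
```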
What About Security Monitoring?
I installed osquery (17 MB) thinking I’d use it for security queries (file integrity monitoring, process inspection, network connections). But honestly? I haven’t actually configured it properly yet. It’s installed, but I’m not running scheduled security queries or doing anything with the data.
So for now, this is pure system resource monitoring, not security monitoring. Might add security features later, but being honest about what’s actually running today.
What This Stack Actually Does
This is system resource monitoring. Prometheus scrapes metrics from node_exporter every 15 seconds. Grafana graphs them. That’s it.
I can see CPU usage trends, memory pressure, disk I/O, network bandwidth. The data retention is 15 days, so I can look back and see what happened when my laptop was running slow last Tuesday. It’s useful for performance troubleshooting and understanding baseline resource usage.
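The "last Tuesday" lookup is just a range query. A couple of PromQL sketches you could paste into Grafana's Explore view or the Prometheus UI (metric names assume node_exporter's macOS collectors):

```promql
# CPU usage %, averaged across all cores
100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Free memory as a share of total
100 * node_memory_free_bytes / node_memory_total_bytes
```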
What it doesn’t do: security monitoring. At all.
There’s no file integrity monitoring. No intrusion detection. No threat intelligence feeds. No log correlation. No automated alerting (I have to manually check Grafana). osquery is installed but I haven’t configured it with any security queries, so it’s not actually doing anything yet.
This setup will tell me if my laptop is running hot or out of RAM. It won’t tell me if someone’s attacking my laptop or if a process is doing something suspicious. Important difference.
The Setup
Create the directory structure:
mkdir -p ~/security-monitoring/prometheus/data
mkdir -p ~/security-monitoring/grafana
Install the components:
brew install prometheus node_exporter grafana
Create the Prometheus config at ~/security-monitoring/prometheus/prometheus.yml:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
Note that retention isn’t set in this file: a storage: block isn’t a valid prometheus.yml key and the server will refuse to start if it’s present. The 15-day retention mentioned earlier is Prometheus’s default; to change it, pass the --storage.tsdb.retention.time flag at startup instead.
Start the services manually to test (or skip to the LaunchAgent section for auto-start):
prometheus --config.file=$HOME/security-monitoring/prometheus/prometheus.yml \
--storage.tsdb.path=$HOME/security-monitoring/prometheus/data &
node_exporter &
brew services start grafana
# Wait for Grafana to start (takes ~5-10 seconds)
sleep 10
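To confirm all three endpoints actually came up, a small helper like this works (my own sketch; the ports are the defaults used throughout this post):

```shell
# Report whether each service answers on its default port
check_endpoint() {
  local name="$1" port="$2"
  if curl -sf --max-time 2 "http://localhost:${port}/" > /dev/null 2>&1; then
    echo "${name}: UP"
  else
    echo "${name}: DOWN"
  fi
}
check_endpoint "Prometheus"    9090
check_endpoint "node_exporter" 9100
check_endpoint "Grafana"       3000
```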
Create the Grafana dashboard script at ~/security-monitoring/grafana/setup-dashboard.sh. This creates the Prometheus datasource and 11 panels with macOS-specific metrics:
#!/bin/bash
# Setup Grafana dashboard with macOS-compatible metrics
echo "Creating Prometheus datasource..."
# Create datasource and capture the UID
DATASOURCE_RESPONSE=$(curl -s -X POST http://localhost:3000/api/datasources \
-u admin:admin \
-H "Content-Type: application/json" \
-d '{
"name":"Prometheus",
"type":"prometheus",
"url":"http://localhost:9090",
"access":"proxy",
"isDefault":true
}')
# Extract the UID from the response: prefer jq when available, fall back to grep
if command -v jq &> /dev/null; then
  DATASOURCE_UID=$(echo "$DATASOURCE_RESPONSE" | jq -r '.datasource.uid')
else
  DATASOURCE_UID=$(echo "$DATASOURCE_RESPONSE" | grep -o '"uid":"[^"]*"' | cut -d'"' -f4)
fi
# Fall back to a default reference if extraction failed
if [ -z "$DATASOURCE_UID" ] || [ "$DATASOURCE_UID" = "null" ]; then
  echo "Warning: Could not extract datasource UID, using default reference"
  DATASOURCE_UID="prometheus"
fi
echo "Datasource UID: $DATASOURCE_UID"
echo ""
echo "Creating dashboard..."
curl -X POST http://localhost:3000/api/dashboards/db \
-u admin:admin \
-H "Content-Type: application/json" \
-d '{
"dashboard": {
"title": "Security Monitoring - macOS",
"tags": ["security", "monitoring", "macos"],
"timezone": "browser",
"schemaVersion": 38,
"panels": [
{
"title": "CPU Usage",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 0},
"id": 1,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "100 - (avg by (instance) (irate(node_cpu_seconds_total{mode=\"idle\"}[5m])) * 100)",
"legendFormat": "CPU Usage %",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"min": 0,
"max": 100,
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10
}
}
}
},
{
"title": "Memory Usage",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 0},
"id": 2,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "node_memory_active_bytes",
"legendFormat": "Active",
"refId": "A"
},
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "node_memory_wired_bytes",
"legendFormat": "Wired",
"refId": "B"
},
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "node_memory_compressed_bytes",
"legendFormat": "Compressed",
"refId": "C"
}
],
"fieldConfig": {
"defaults": {
"unit": "bytes",
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10,
"stacking": {"mode": "normal"}
}
}
}
},
{
"title": "Network Traffic (Bytes)",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 8},
"id": 3,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "irate(node_network_receive_bytes_total{device!=\"lo\"}[5m])",
"legendFormat": "RX - {{device}}",
"refId": "A"
},
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "irate(node_network_transmit_bytes_total{device!=\"lo\"}[5m])",
"legendFormat": "TX - {{device}}",
"refId": "B"
}
],
"fieldConfig": {
"defaults": {
"unit": "Bps",
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10
}
}
}
},
{
"title": "Network Traffic (Packets)",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 8},
"id": 4,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "irate(node_network_receive_packets_total{device!=\"lo\"}[5m])",
"legendFormat": "RX Packets - {{device}}",
"refId": "A"
},
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "irate(node_network_transmit_packets_total{device!=\"lo\"}[5m])",
"legendFormat": "TX Packets - {{device}}",
"refId": "B"
}
],
"fieldConfig": {
"defaults": {
"unit": "pps",
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10
}
}
}
},
{
"title": "Load Average (1m)",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 0, "y": 16},
"id": 5,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "node_load1",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"decimals": 2,
"thresholds": {
"mode": "absolute",
"steps": [
{"value": null, "color": "green"},
{"value": 2, "color": "yellow"},
{"value": 4, "color": "red"}
]
}
}
}
},
{
"title": "Memory Used %",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 6, "y": 16},
"id": 6,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "100 * (1 - (node_memory_free_bytes / node_memory_total_bytes))",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"decimals": 1,
"thresholds": {
"mode": "absolute",
"steps": [
{"value": null, "color": "green"},
{"value": 70, "color": "yellow"},
{"value": 85, "color": "red"}
]
}
}
}
},
{
"title": "Disk Used %",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 12, "y": 16},
"id": 7,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "100 - ((node_filesystem_avail_bytes{mountpoint=\"/\"} * 100) / node_filesystem_size_bytes{mountpoint=\"/\"})",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"decimals": 1,
"thresholds": {
"mode": "absolute",
"steps": [
{"value": null, "color": "green"},
{"value": 75, "color": "yellow"},
{"value": 90, "color": "red"}
]
}
}
}
},
{
"title": "System Uptime",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 18, "y": 16},
"id": 8,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "time() - node_boot_time_seconds",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"decimals": 0
}
}
},
{
"title": "Network Errors",
"type": "stat",
"gridPos": {"h": 4, "w": 8, "x": 0, "y": 20},
"id": 9,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "sum(irate(node_network_receive_errs_total[5m]))",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"decimals": 2,
"thresholds": {
"mode": "absolute",
"steps": [
{"value": null, "color": "green"},
{"value": 1, "color": "yellow"},
{"value": 10, "color": "red"}
]
}
}
},
"options": {
"colorMode": "background",
"graphMode": "area"
}
},
{
"title": "Packet Drops (RX)",
"type": "stat",
"gridPos": {"h": 4, "w": 8, "x": 8, "y": 20},
"id": 10,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "sum(irate(node_network_receive_drop_total[5m]))",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"decimals": 2,
"thresholds": {
"mode": "absolute",
"steps": [
{"value": null, "color": "green"},
{"value": 1, "color": "yellow"},
{"value": 10, "color": "red"}
]
}
}
},
"options": {
"colorMode": "background",
"graphMode": "area"
}
},
{
"title": "Total Interfaces",
"type": "stat",
"gridPos": {"h": 4, "w": 8, "x": 16, "y": 20},
"id": 11,
"targets": [
{
"datasource": {"type": "prometheus", "uid": "'$DATASOURCE_UID'"},
"expr": "count(node_network_receive_bytes_total)",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"decimals": 0
}
},
"options": {
"colorMode": "value"
}
}
],
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h"]
},
"refresh": "30s"
},
"overwrite": false,
"message": "macOS-compatible metrics"
}'
echo ""
echo "✅ Dashboard updated with macOS-compatible metrics!"
echo ""
echo "Access it at: http://localhost:3000/dashboards"
Make it executable and run it:
chmod +x ~/security-monitoring/grafana/setup-dashboard.sh
bash ~/security-monitoring/grafana/setup-dashboard.sh
The script works for a fresh Grafana install:
- Creates the Prometheus datasource (Grafana generates a UID for it)
- Captures that datasource UID from the API response
- Uses that UID in all 11 panel definitions so they know where to query data from
Your datasource and dashboard UIDs will be different from mine - Grafana generates them on creation.
To make services auto-start on boot, create LaunchAgent files in ~/Library/LaunchAgents/.
Important: Replace YOUR_USERNAME with your actual username (run whoami to find it).
com.prometheus.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.prometheus</string>
<key>ProgramArguments</key>
<array>
<string>/opt/homebrew/bin/prometheus</string>
<string>--config.file=/Users/YOUR_USERNAME/security-monitoring/prometheus/prometheus.yml</string>
<string>--storage.tsdb.path=/Users/YOUR_USERNAME/security-monitoring/prometheus/data</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
</dict>
</plist>
com.node-exporter.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.node-exporter</string>
<key>ProgramArguments</key>
<array>
<string>/opt/homebrew/bin/node_exporter</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
</dict>
</plist>
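Rather than hand-editing, you can substitute the username when copying the files into place. A sketch, assuming you saved the XML above as com.prometheus.plist and com.node-exporter.plist in the current directory:

```shell
# Fill in the current username and install both agents
mkdir -p ~/Library/LaunchAgents
for f in com.prometheus.plist com.node-exporter.plist; do
  if [ -f "$f" ]; then
    sed "s/YOUR_USERNAME/$(whoami)/g" "$f" > ~/Library/LaunchAgents/"$f"
  else
    echo "skipping $f (not found)"
  fi
done
```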
Load them:
launchctl load ~/Library/LaunchAgents/com.prometheus.plist
launchctl load ~/Library/LaunchAgents/com.node-exporter.plist
brew services start grafana
Grafana uses Homebrew services, Prometheus and node_exporter use LaunchAgents.
Resource Reality Check
What I measured after 24 hours of running:
| Service | RAM Usage | CPU (Idle) | CPU (Active) |
|---|---|---|---|
| osquery | 17.2 MB | 0.1% | 0.3% |
| prometheus | 30.1 MB | 0.2% | 0.8% |
| node_exporter | 19.8 MB | 0.1% | 0.2% |
| grafana | 47.3 MB | 0.2% | 0.5% |
| Total (incl. osquery) | 114.4 MB | <1% | <2% |
Additional metrics:
- Disk growth: ~100-200 MB per week (Prometheus data)
- Battery impact: Not measurable
- Network: <1 KB/s average (metrics scraping)
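The disk-growth number is easy to track yourself — snapshot the TSDB directory size and compare week to week:

```shell
# Snapshot the Prometheus data directory size; run weekly to watch the growth rate
du -sh "$HOME/security-monitoring/prometheus/data" 2>/dev/null || echo "no data directory yet"
```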
This actually runs. All day, every day, without me noticing it’s there.
Where This Fits
This is useful for understanding what’s happening with system resources. If your laptop randomly slows down, you can check Grafana and see if it was disk I/O, memory pressure, or CPU spikes. If you want to learn Prometheus and Grafana without committing serious resources, this works.
It’s also a baseline for adding security monitoring later. The infrastructure runs 24/7 using roughly 100 MB. Now I need to figure out what security features I can add without blowing past 200-300 MB total.
When You Need Actual Security Monitoring
This setup is fine for understanding system resource usage. But if you need real security monitoring (threat detection, intrusion alerts, compliance reporting), you need different tools.
Wazuh on dedicated hardware gives you file integrity monitoring, log correlation, and automated alerting. CrowdStrike or SentinelOne handle threat hunting and endpoint detection. Splunk or Elastic Security give you log correlation across multiple systems.
A sub-100 MB Prometheus stack doesn’t replace any of those. It just tells you if your CPU is spiking.
Cost Comparison
| Solution | Monthly Cost | RAM Usage | Notes |
|---|---|---|---|
| This setup | $0 | ~97 MB | ✅ Works |
| Datadog | $15/host | 100-200 MB | System monitoring |
| New Relic | $25/host | 150-300 MB | APM + monitoring |
| Elastic Stack | $0 (self-hosted) | 2-4 GB | Heavy resource usage |
| Wazuh (in VM) | $0 (self-hosted) | 4-6 GB | SIEM, too heavy for experiment |
Cost: $0 vs. $180-300/year for commercial solutions. RAM: ~97 MB vs. 2-6 GB for self-hosted alternatives.
What I Learned
Setting a tight resource budget forces you to pick what actually matters. I wanted full SIEM capabilities, but that requires 4-6 GB of RAM. So I backed off to basic system monitoring at under 100 MB and figured I’d build up from there.
The big takeaway: system monitoring and security monitoring are different things. This setup tracks CPU, RAM, and disk usage. It doesn’t detect threats, monitor file integrity, or alert on suspicious processes. Calling this “security monitoring” would be misleading.
But roughly 100 MB proves the infrastructure can run 24/7 without impacting the laptop. Now the question is what security features I can add while staying under 200-300 MB total. That’s the next experiment.
Accessing the Stack
Once everything’s running:
- Grafana is at http://localhost:3000 (admin/admin). This is where the actual graphs are.
- Prometheus is at http://localhost:9090 if you want to write custom PromQL queries.
- osquery runs via osqueryi in the terminal for manual SQL queries.
- node_exporter metrics are at http://localhost:9100/metrics (raw Prometheus format).
What’s Next
This handles system resource monitoring. Next step is adding actual security monitoring.
That means configuring osquery with scheduled security queries, setting up file integrity checks, building baseline detection for suspicious processes and network connections, and adding some kind of alerting (even if it’s just desktop notifications).
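For a taste of what that osquery configuration might look like, here's a hypothetical schedule entry (the query name and interval are mine; the tables are standard osquery):

```json
{
  "schedule": {
    "listening_ports": {
      "query": "SELECT p.name, lp.port, lp.address FROM listening_ports lp JOIN processes p USING (pid);",
      "interval": 300,
      "description": "Processes with open listening sockets, every 5 minutes"
    }
  }
}
```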
The goal: stay under 200-300 MB total while adding real security visibility. Can’t run Wazuh’s 4-6 GB stack, so I need to build something lighter. That’s the next experiment.
Running into issues? The most common problem is port conflicts. Check if services are already running on ports 3000 (Grafana), 9090 (Prometheus), or 9100 (node_exporter).
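A quick sketch to spot the conflict (curl here just probes for any listener on each port):

```shell
# See which of the stack's default ports already have something listening
for port in 3000 9090 9100; do
  if curl -s --max-time 2 -o /dev/null "http://localhost:${port}/"; then
    echo "port ${port}: already in use"
  else
    echo "port ${port}: free"
  fi
done
```

When a port is taken, `lsof -nP -iTCP:3000 -sTCP:LISTEN` will name the process holding it.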
Next in this series: Configuring osquery for actual security monitoring without exceeding the resource budget.