Personal Website Monitoring

My personal website has been running for several years, but due to low revenue and low traffic, it hasn't been integrated with monitoring and has been largely unattended.

Because it shares a main domain with a WeChat mini-program backend API, whenever I want to see who's using my mini-program I have to fight with that damn "We Analytics" mini-program, which is incredibly hard to make sense of. Its "real-time data" feature is exclusive to the paid professional tier and is billed pay-as-you-go.

This is typical Tencent behavior; I understand and respect it. My user numbers are already dismal; I can't even recoup the ¥30/year verification fee, let alone pay for real-time data.

If you don't want to be leeched off by Pony Ma, you have to be self-sufficient. By the way, I've been to Nanniwan in northern Shaanxi; the conditions there are really tough!

This way the initiative stays in my own hands, not Tencent's. The requirements are as follows:

  • How many people use it daily?
  • What is the access latency?
  • Status code distribution, are there any 4xx or 5xx errors?
  • Which regions are the users accessing the site?

Understanding these basics is sufficient to identify problems promptly and perform targeted optimizations.

My personal website runs on Caddy, so monitoring Caddy itself and its logs is enough. Other web servers like Nginx operate on a similar principle, and similar tools exist.

1. Caddy Monitoring

Caddy can be configured to directly send metrics data to Prometheus. The relevant implementation logic is found in the source code files modules/metrics/metrics.go and modules/caddyhttp/metrics.go.

Enable Monitoring

Enable monitoring in your Caddyfile:

{
	metrics {
		per_host
	}
}

The per_host option adds a host label to the HTTP metrics, so traffic can be broken down per domain.

It's also better to expose the metrics on a dedicated domain rather than opening up the admin port 2019, and to view the monitoring data directly through that domain.

monitor.xxx.com {
	metrics
	# Optionally restrict access by IP:
	# @blocked not remote_ip your_public_ip
	# respond @blocked "Forbidden: Access denied👻" 403

	basic_auth {
		your_username your_hashed_password  # generate the hash with: caddy hash-password
	}
}

Caddy monitoring metrics are divided into four main categories:

  • Runtime metrics (metric names begin with go_* and process_*, monitoring the Caddy process itself)
  • Admin API metrics (monitoring the admin API, metric names begin with caddy_admin_*)
  • HTTP Middleware metrics (critical part, HTTP-related, such as request latency, TTFB, errors, request and response body size, etc., metric names begin with caddy_http_*)
  • Reverse proxy metrics (monitoring backend proxies; for example, if I have a Rust API used by a WeChat mini-program, Caddy can easily monitor whether the backend service is alive; currently, there is only one metric: caddy_reverse_proxy_upstreams_healthy)

Searching the metrics collected by Prometheus also reveals caddy_config_last_reload_successful and caddy_config_last_reload_success_timestamp_seconds. Looking at the source code, these two belong to globalMetrics. The official documentation doesn't mention them, but their meaning is self-evident, so I won't explain further.
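To check that the metrics are actually flowing, here are a few example PromQL queries. Treat them as a sketch: the metric names are the ones recent Caddy versions expose, but you should verify them against your own /metrics output.

```promql
# Requests per second over the last 5 minutes, per host (requires per_host)
sum by (host) (rate(caddy_http_requests_total[5m]))

# Approximate p95 request latency per host
histogram_quantile(0.95,
  sum by (host, le) (rate(caddy_http_request_duration_seconds_bucket[5m])))

# Is the reverse-proxied backend alive? (1 = healthy)
caddy_reverse_proxy_upstreams_healthy
```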

2. Prometheus Configuration

Prometheus runs on a Raspberry Pi at home, putting idle hardware to use at very low power cost. Readers can deploy it however suits them, prioritizing convenient access while keeping it secure.

Here is the configuration:

global:
  scrape_interval: 15s # Scrape every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. Default is every 1 minute.
  external_labels:
    monitor: "example"

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["localhost:9093"]

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: caddy
    scrape_interval: 10s
    scrape_timeout: 8s
    scheme: https
    basic_auth:
      username: "your_username"
      password: "your_password"
    static_configs:
      - targets: ["monitor.xxx.com"]
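The latency panels in the Grafana section rely on Prometheus's histogram_quantile() over the duration buckets scraped above. It helps to know what that function actually computes. Here is a minimal Python sketch of the same idea (my own reimplementation for illustration, with made-up bucket values): find the cumulative bucket containing the target rank, then interpolate linearly inside it.

```python
def histogram_quantile(q, buckets):
    """Approximate a quantile from cumulative histogram buckets, the way
    Prometheus's histogram_quantile() does. `buckets` is a sorted list of
    (upper_bound, cumulative_count) pairs ending with (float("inf"), total)."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                # Open-ended last bucket: no upper bound to interpolate toward
                return prev_bound
            span = count - prev_count
            frac = (rank - prev_count) / span if span else 0.0
            return prev_bound + (bound - prev_bound) * frac
        prev_bound, prev_count = bound, count
    return prev_bound

# Made-up caddy_http_request_duration_seconds buckets for one host:
buckets = [(0.05, 60), (0.1, 90), (0.25, 98), (float("inf"), 100)]
p95 = histogram_quantile(0.95, buckets)
print(round(p95, 5))  # 0.19375: the 95th request falls in the (0.1, 0.25] bucket
```

Because the result is interpolated within a bucket, the accuracy of your latency panels depends directly on how fine the bucket boundaries are.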

3. Grafana Monitoring Panel

If you want the panel broken down by domain, choose dashboard id 24146; otherwise, choose id 22870.

If you're still not satisfied, you can modify the above or create your own, provided you have a basic understanding of Grafana logic and Prometheus query language.

I added domain information to id 22870, plus extra filtering so that only a few specific domains are monitored. Filtering domains requires configuring regular expressions in Grafana. If you don't know how, let AI do it for you; times have changed! 😅
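The regex itself is nothing Grafana-specific; it just has to match the host label values exactly. A sketch in Python's re module (the domain names are hypothetical; Grafana/Prometheus use RE2, which behaves the same for a simple anchored alternation like this):

```python
import re

# Anchored alternation: match only these two hosts, nothing else.
domain_filter = re.compile(r"^(?:xxx\.com|yyy\.com)$")

hosts = ["xxx.com", "yyy.com", "cdn.xxx.com", "xxxxcom"]
kept = [h for h in hosts if domain_filter.match(h)]
print(kept)  # ['xxx.com', 'yyy.com']
```

The ^ and $ anchors matter: without them, the filter would also match subdomains like cdn.xxx.com.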

Chinese readers who need it can also leave a message on our WeChat official account to request it.

Screenshot examples are shown below:

caddy_monitor_1

caddy_monitor_2

The result is quite good: it has everything it needs, with status codes and latency distribution clearly displayed. However, the Raspberry Pi's hardware isn't great. I only store 15 days of data, and selecting a 24-hour window lags slightly. The older model's performance is poor, but it's definitely sufficient for personal website monitoring.

I deploy Grafana on my own mini host and check it occasionally at home.

As for alert notifications, I don't want to integrate them at the moment. Work-related alerts are already troublesome enough, and there are plenty of articles online about them.

4. Log Monitoring

Attentive readers may already be wondering: why can't we see which regions are visiting which sites? That's right, log monitoring solves that problem.

There are many log-monitoring tools available, such as the lightweight GoAccess, the mid-range OpenObserve, and the heavyweight ELK stack. I chose nginxpulse, developed by a young, energetic developer who is very diligent and updates frequently; it meets my personal website's needs.
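nginxpulse works by parsing access logs, so each Caddy site has to write one first. A minimal sketch of per-site logging, with paths chosen to match the nginxpulse config in this section (Caddy's file logs are JSON-encoded by default, which is what the "caddy" logType expects):

```caddyfile
xxx.com {
	# Write JSON access logs that nginxpulse can parse
	log {
		output file /var/log/caddy/xxx.com.log
	}
	# ... rest of the site block (file_server, reverse_proxy, etc.)
}
```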

Please see the configuration; I've anonymized it, so you should be able to understand it. If not, refer to the documentation at https://github.com/likaia/nginxpulse.

root@tokyo:~# systemctl cat nginxpulse.service
# /etc/systemd/system/nginxpulse.service
[Unit]
Description=Nginxpulse Service
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/caddylog
ExecStart=/opt/caddylog/nginxpulse
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

root@tokyo:~# ls /opt/caddylog/
configs/    nginxpulse  var/
root@tokyo:~# cat /opt/caddylog/configs/nginxpulse_config.json
{
  "websites": [
    {
      "name": "yyy",
      "logType": "caddy",
      "logPath": "/var/log/caddy/yyy.log",
      "domains": ["yyy.com"]
    },
    {
      "name": "xxx",
      "logType": "caddy",
      "logPath": "/var/log/caddy/xxx.com.log",
      "domains": ["xxx.com"]
    }
  ],
  "system": {
    "logDestination": "file",
    "taskInterval": "1m",
    "logRetentionDays": 30,
    "demoMode": false,
    "accessKeys": ["yourkey"],
    "language": "zh-CN"
  },
  "server": {
    "Port": "localhost:8088"
  },
  "pvFilter": {
    "statusCodeInclude": [
      200
    ],
    "excludePatterns": [
      "favicon.ico$",
      "robots.txt$",
      "sitemap.xml$",
      "^/health$",
      "^/_(?:nuxt|next)/",
      "rss.xml$",
      "feed.xml$",
      "atom.xml$"
    ],
    "excludeIPs": ["127.0.0.1", "::1", "10.10.0.1", "192.168.30.21"]
  }
}

It's a single-binary deployment: compile the nginxpulse binary on your own machine, upload it to the server, add a simple configuration, and it runs. Quite good.
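The pvFilter semantics are easy to reason about. Here is my own Python reading of the rules (an assumption about the intent, not nginxpulse's actual code): a hit counts as a page view only if its status code is in statusCodeInclude, its path matches none of excludePatterns, and its client IP is not in excludeIPs.

```python
import re

# A subset of the patterns from the config above
EXCLUDE_PATTERNS = [re.compile(p) for p in [
    r"favicon\.ico$", r"robots\.txt$", r"^/health$", r"^/_(?:nuxt|next)/",
]]
INCLUDE_STATUS = {200}
EXCLUDE_IPS = {"127.0.0.1", "::1"}

def counts_as_pv(path, status, ip):
    """Apply the pvFilter rules to a single access-log entry."""
    if status not in INCLUDE_STATUS:
        return False
    if ip in EXCLUDE_IPS:
        return False
    return not any(p.search(path) for p in EXCLUDE_PATTERNS)

hits = [
    ("/posts/caddy-monitoring/", 200, "1.2.3.4"),   # real page view
    ("/favicon.ico", 200, "1.2.3.4"),               # excluded by pattern
    ("/posts/missing/", 404, "1.2.3.4"),            # excluded by status
    ("/posts/caddy-monitoring/", 200, "127.0.0.1"), # excluded by IP
]
print(sum(counts_as_pv(*h) for h in hits))  # 1
```

Counting only 200s is a deliberately strict definition of a page view; it is exactly why 404s like the Guatemala incident below show up in the error panels rather than inflating the PV count.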

You can see the results on the author's example website: https://nginx-pulse.kaisir.cn/

5. Summary

Monitoring at the web-server and log layers basically covers the needs of a lightweight personal website. I won't go into things like distributed tracing; that's for enterprise-scale systems.

This setup is practical and simple; it not only meets the requirements listed above but also surfaces problems and enables targeted optimization.

For example, log monitoring revealed that a user from Guatemala actually visited my website and triggered a 404 error. Amazing! It was fixed in a flash.

caddy_404_fix

The cause: when the article was translated into Spanish, an image link was mistranslated, turning "wofi-menu" into "menu-wofi". Although the site currently has only three Spanish pages, they do have readers.

I used to be skeptical of Google Analytics data; the site's user-distribution maps always looked so bizarre. Now I believe them. That article ranked in the top 10 of Google search results; it really works! 🤷‍♂️

I probably won't have the chance to go to Guatemala in my lifetime, so being able to serve unfamiliar Linux users is very gratifying.

You know, typing out an article by hand takes about two hours from start to finish, without any compensation. You can't get paid through Google AdSense in China 💰. It's purely a one-way service, done out of love.

Actually, you can also find various network attack logs; I'll write an article about that later.

Lastmod: Wednesday, January 21, 2026
