Parts 1 and 2 of this series focused on a single station. This final part covers all of them at once: fetch every tracked station, pull predictions for each, rank by confidence, and generate a daily summary.
💡 Tip: All the code from this series is on GitHub: dailyhigh/weather-bot.
## Get the station list
The /api/v1/stations endpoint returns metadata for every station DailyHigh tracks. Call it once, cache it locally. It rarely changes.
```python
import requests

API_KEY = "dh_live_xxxxx"
BASE = "https://dailyhigh.app"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def get_stations() -> list[dict]:
    resp = requests.get(
        f"{BASE}/api/v1/stations",
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]
```
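Since the list rarely changes, a simple file cache avoids refetching it on every run. Here's one way to sketch that — the cache path and the one-day max age are arbitrary choices, not anything the API prescribes:

```python
import json
import os
import time

def load_stations(fetch, cache_path="stations.json", max_age=86400):
    """Return cached stations if the file is newer than max_age seconds;
    otherwise call fetch() and refresh the cache."""
    if os.path.exists(cache_path) and time.time() - os.path.getmtime(cache_path) < max_age:
        with open(cache_path) as f:
            return json.load(f)
    stations = fetch()
    with open(cache_path, "w") as f:
        json.dump(stations, f)
    return stations
```

Then `stations = load_stations(get_stations)` hits the network at most once a day.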
The response is an array of objects, one per station:
```json
[
  {
    "icao": "EGLC",
    "name": "London",
    "country": "gb",
    "timezone": "Europe/London",
    "region": "maritime",
    "latitude": 51.505,
    "longitude": 0.055,
    "elevation": 5.8,
    "peakHour": 15
  },
  {
    "icao": "KLGA",
    "name": "New York",
    "country": "us",
    "timezone": "America/New_York",
    "region": "continental",
    "latitude": 40.777,
    "longitude": -73.874,
    "elevation": 6.1,
    "peakHour": 15
  }
]
```
Each station includes a peakHour field: the typical hour (local time, 0 to 23) when the daily max occurs. This tells you which stations are still heating up and which have already peaked.
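Combining peakHour with the station's timezone gives a rough local-time check — a sketch using only the metadata above (the prediction endpoint's isPastPeak field, covered below, is the authoritative signal):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def likely_past_peak(station, now_utc=None):
    """Rough check: is the station's local hour at or past its typical peak hour?"""
    tz = ZoneInfo(station["timezone"])
    now_utc = now_utc or datetime.now(tz)
    local = now_utc.astimezone(tz)
    return local.hour >= station["peakHour"]
```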
## Fetch predictions for all stations
Loop through the station list and call /api/v1/prediction/:icao for each one. Handle 202 responses gracefully: that station's prediction isn't cached yet.
```python
import time

def get_prediction(icao: str) -> dict | None:
    resp = requests.get(
        f"{BASE}/api/v1/prediction/{icao}",
        headers=HEADERS,
        timeout=10,
    )
    if resp.status_code == 202:
        return None
    resp.raise_for_status()
    return resp.json()["data"]

def fetch_all_predictions(stations: list[dict]) -> list[dict]:
    results = []
    for station in stations:
        pred = get_prediction(station["icao"])
        if pred is None:
            continue
        results.append({
            "icao": station["icao"],
            "name": station["name"],
            "timezone": station["timezone"],
            "peakHour": station["peakHour"],
            **pred,
        })
        time.sleep(0.5)  # stay well within rate limits
    return results
```
ℹ️ Info: The prediction endpoint has a rate limit of 60 requests per minute. With 12 stations and a 0.5 s delay, one full loop takes about 6 seconds and uses 12 of your 60 requests. Plenty of headroom.
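If you'd rather wait out a cold cache than skip the station, a small retry wrapper does the job. A sketch — it takes the fetch function as a parameter, and the attempt count and delay are arbitrary:

```python
import time

def get_prediction_with_retry(fetch, icao, attempts=3, delay=2.0):
    """Call fetch(icao) up to `attempts` times, sleeping between tries
    while the prediction is still warming up (fetch returns None on 202)."""
    for attempt in range(attempts):
        result = fetch(icao)
        if result is not None:
            return result
        if attempt < attempts - 1:
            time.sleep(delay)
    return None
```

Used as `pred = get_prediction_with_retry(get_prediction, "EGLC")`. Keep the retries modest — each attempt still counts against the rate limit.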
## Rank by confidence
Sort stations by confidence descending. High-confidence stations have enough data to trust the prediction. Low-confidence ones are still early in their day.
```python
def rank_by_confidence(predictions: list[dict]) -> list[dict]:
    return sorted(predictions, key=lambda p: p["confidence"], reverse=True)
```
You can also split into two groups: settled (past peak or confidence >= 8) and still in play.
```python
def split_by_status(predictions: list[dict]):
    settled = [p for p in predictions if p["isPastPeak"] or p["confidence"] >= 8]
    active = [p for p in predictions if not p["isPastPeak"] and p["confidence"] < 8]
    return settled, active
```
## Build a daily digest
Format everything into a table. This works for a terminal printout, a Discord message, or an email body.
````python
from datetime import datetime, timezone

def format_digest(predictions: list[dict]) -> str:
    settled, active = split_by_status(predictions)
    lines = []
    lines.append(f"📊 DailyHigh Digest - {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M UTC')}")
    lines.append("")
    if settled:
        lines.append("**Settled (past peak or high confidence):**")
        lines.append("```")
        lines.append(f"{'Station':<12} {'Observed':>9} {'Predicted':>10} {'Conf':>5}")
        lines.append("-" * 40)
        for p in settled:
            lines.append(
                f"{p['icao']:<12} {p['observedMax']:>8.1f}° {p['predictedMax']:>9.1f}° {p['confidence']:>4}/10"
            )
        lines.append("```")
        lines.append("")
    if active:
        lines.append("**Still in play:**")
        lines.append("```")
        lines.append(f"{'Station':<12} {'Observed':>9} {'Predicted':>10} {'Peak in':>8}")
        lines.append("-" * 43)
        for p in active:
            hrs = p["hoursUntilPeak"]
            lines.append(
                f"{p['icao']:<12} {p['observedMax']:>8.1f}° {p['predictedMax']:>9.1f}° {hrs:>6.1f}h"
            )
        lines.append("```")
    return "\n".join(lines)
````
Example output:
```
📊 DailyHigh Digest - 2026-02-13 18:00 UTC

**Settled (past peak or high confidence):**
Station       Observed  Predicted  Conf
EGLC              9.1°       9.3°  9/10
KLGA             30.5°      30.5°  9/10
KATL             18.2°      18.4°  8/10

**Still in play:**
Station       Observed  Predicted  Peak in
RKSI             -2.1°       1.4°     3.0h
SAEZ             28.7°      32.1°     2.5h
NZWN             14.3°      16.0°     4.0h
```
Stations in the "settled" group have essentially reached their final high. Stations in "still in play" are still warming, and the hoursUntilPeak tells you roughly how long until they're decided too.
## Add threshold filtering
If you're only interested in stations where the predicted max is near a specific value, add a filter. This is useful when you're watching multiple stations and only care about close calls.
```python
def near_threshold(predictions: list[dict], target: float, margin: float = 1.0):
    """Return stations where predictedMax is within `margin` °C of target."""
    return [
        p for p in predictions
        if abs(p["predictedMax"] - target) <= margin
    ]
```
For example, find all stations where the predicted max is within 1 °C of 30 °C:
```python
close_calls = near_threshold(predictions, target=30.0, margin=1.0)
for p in close_calls:
    print(f"{p['icao']}: predicted {p['predictedMax']} °C, confidence {p['confidence']}")
```
## Schedule the digest
Run the script 2 or 3 times per day. A good schedule:
- Morning (10:00 UTC): Most European and American stations are pre-peak. Shows the day's outlook.
- Afternoon (18:00 UTC): European stations are settled, US stations are approaching or past peak.
- Evening (23:00 UTC): Everything is settled. Final numbers.
```
# Cron: 10 AM, 6 PM, 11 PM UTC
0 10,18,23 * * * cd /path/to/bot && python digest.py >> digest.log 2>&1
```
## Send it somewhere
The digest string works with any webhook. Here's Discord:
```python
WEBHOOK_URL = "https://discord.com/api/webhooks/..."

def send_digest(message: str):
    requests.post(
        WEBHOOK_URL,
        json={"content": message},
        timeout=10,
    )
```
For Slack, swap `content` for `text`. For email, use `smtplib` with the digest as the body.
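For the email route, the digest string drops straight into an `email.message.EmailMessage`. A sketch — the SMTP host, port, and addresses below are placeholders, and your server may additionally require `starttls()` and `login()`:

```python
import smtplib
from email.message import EmailMessage

def build_digest_email(digest, sender, recipient, subject="DailyHigh Digest"):
    """Wrap the digest string in a plain-text email message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(digest)
    return msg

def send_digest_email(msg, host="smtp.example.com", port=587):
    # placeholder host; add starttls()/login() as your server requires
    with smtplib.SMTP(host, port) as server:
        server.send_message(msg)
```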
## Full script
Putting all the pieces together:
```python
import requests
import time
from datetime import datetime, timezone

API_KEY = "dh_live_xxxxx"
BASE = "https://dailyhigh.app"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
WEBHOOK_URL = "https://discord.com/api/webhooks/..."

def get_stations():
    resp = requests.get(f"{BASE}/api/v1/stations", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]

def get_prediction(icao):
    resp = requests.get(f"{BASE}/api/v1/prediction/{icao}", headers=HEADERS, timeout=10)
    if resp.status_code == 202:
        return None
    resp.raise_for_status()
    return resp.json()["data"]

# split_by_status and format_digest as defined earlier in this post

def main():
    stations = get_stations()
    predictions = []
    for station in stations:
        pred = get_prediction(station["icao"])
        if pred:
            predictions.append({"icao": station["icao"], "name": station["name"], **pred})
        time.sleep(0.5)
    predictions.sort(key=lambda p: p["confidence"], reverse=True)
    digest = format_digest(predictions)
    print(digest)
    requests.post(WEBHOOK_URL, json={"content": digest}, timeout=10)

if __name__ == "__main__":
    main()
```
## The full series
This was Part 3 of 3. Here's the full series:
- Part 1: Build a Temperature Alert Bot: threshold monitoring for a single station
- Part 2: Track a Daily High from Prediction to Result: the full-day lifecycle of one prediction across three endpoints
- Part 3: Monitor Multiple Stations at Once (this post): all stations, ranked, in a daily digest
The full API reference documents every endpoint and field. Browse the stations index to see all tracked stations and their current conditions.
Originally published on DailyHigh. DailyHigh tracks daily high temperatures at major airport weather stations worldwide using real-time METAR observations.