Building an Automated Meteor Detection System
Every clear night in Oulu in northern Finland, a small camera stares straight up at the northern sky looking for big falling rocks.
It records everything — the stars, aircraft blinking their way across the frame, satellites tracing silent arcs, and, occasionally, a meteor burning for a fraction of a second before it is gone. The challenge is finding that fraction of a second inside hours of otherwise uneventful footage. This is the story of how I automated that search.
Intro
Web cameras are cheap enough now that amateur astronomers use them routinely. The Opticam i5 I have on my balcony shoots the northern sky at 1920 × 1080 and a nominal 25 fps, and stores recordings as half-hour AVI files on the camera’s SD card. Left to my own devices I would have to sit through every one of those files manually — which, across a full night in February, means five or six hours of video. That is obviously not going to happen.
The Motivation
The motivation to finally build this came from two fireballs I caught manually in Oulu in February 2026. I wrote about those sightings at the time — the first one on 15 February and the second one a week later on 22 February. Both times I found myself digging through recordings by hand after my friend from Lapland hinted that something might be visible in them. It was clear that a systematic, automated approach was needed if I was going to catch anything consistently rather than by luck.
The idea behind the Tulipallo project (tulipallo is Finnish for fireball) is simple: a fully automated pipeline that fetches every new video as the camera produces it, discards the ones where a meteor is physically impossible, runs AI-based detection on the rest, and surfaces anything interesting through a web interface where I can confirm, dismiss, or delete results.
No manual work. No missed nights. Just a list of candidate events waiting for me in the morning.
The Pipeline at a Glance
The system is built in Python and Bash and runs under WSL Ubuntu on my Windows machine whenever it is on. The machine is equipped with an NVIDIA RTX 5060 GPU. The pipeline has five main stages:
- Fetch – download new video files from the camera’s FTP server
- Pre-filter – discard any video that cannot contain a meteor
- Detect – run the M3Det algorithm on every surviving video
- Post-process – reject noise with calibrated statistical filters
- Review – browse results in a web interface
A cron job triggers the fetch script every thirty minutes. Everything downstream is automatic.
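The schedule itself is a single crontab entry; the script and log paths below are placeholders, not the project’s actual paths:

```cron
# Run the fetch script every 30 minutes, appending output to a log
*/30 * * * * /home/user/tulipallo/fetch.sh >> /home/user/tulipallo/fetch.log 2>&1
```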
Stage 1 – Fetching Videos
Videos arrive on the camera’s FTP server named schedule_YYYYMMDD_HHMMSS.avi, where the timestamp is in UTC. The fetch script connects, lists remote files, and skips anything already recorded in a local SQLite database. Downloaded files land in a date-organised folder under transfer/downloads/YYYYMMDD/.
The database records every file that has ever been seen, including ones that were intentionally skipped, together with the reason they were skipped. A re-run of the fetch script therefore never downloads the same file twice and never has to guess why a file is absent.
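The duplicate-skipping logic can be sketched as below. This is an illustration of the idea, not the project’s actual code: the table and column names are assumptions, and the real script wraps this around an ftplib listing of the camera’s FTP server.

```python
import sqlite3

def ensure_schema(conn):
    # Every file ever seen gets a row; skipped_reason stays NULL
    # for files that were actually downloaded.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fetched_videos ("
        "filename TEXT PRIMARY KEY, "
        "skipped_reason TEXT)"
    )

def files_to_download(conn, remote_listing):
    """Return only remote files the database has never seen."""
    seen = {row[0] for row in conn.execute("SELECT filename FROM fetched_videos")}
    return [name for name in remote_listing if name not in seen]

def record(conn, filename, skipped_reason=None):
    conn.execute("INSERT OR IGNORE INTO fetched_videos VALUES (?, ?)",
                 (filename, skipped_reason))

conn = sqlite3.connect(":memory:")
ensure_schema(conn)
record(conn, "schedule_20260215_010000.avi")                       # downloaded
record(conn, "schedule_20260215_120000.avi", "sun above horizon")  # skipped
todo = files_to_download(conn, [
    "schedule_20260215_010000.avi",
    "schedule_20260215_120000.avi",
    "schedule_20260216_003000.avi",
])
print(todo)  # only the one unseen file remains
```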
Stage 2 – Pre-filtering: Sun Position and Cloud Cover
Before touching the video content, two cheap checks eliminate the majority of files.
Sun Position
Meteors are only visible when the sky is dark enough. Using the Astral library I compute the Sun’s altitude at the camera’s coordinates for the exact UTC timestamp in the filename. Three thresholds partition the day:
- Sun > +5° — broad daylight; skip the file entirely
- −12° to +5° — twilight (dawn/dusk); process with standard sensitivity
- Sun < −12° — night, darker than nautical twilight; standard processing with slight adjustments
During winter at 65 °N latitude this cuts very little, because nights are long. In summer it would eliminate almost everything. Either way, the decision is instant and requires no GPU at all.
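The three-band decision above reduces to a tiny pure function. The Astral call in the comment is how the real pipeline obtains the altitude; the camera coordinates shown are approximate placeholders for Oulu.

```python
def sky_class(sun_altitude_deg: float) -> str:
    """Map the Sun's altitude in degrees to the three processing bands."""
    if sun_altitude_deg > 5:
        return "daylight"   # skip the file entirely
    if sun_altitude_deg >= -12:
        return "twilight"   # dawn/dusk, standard sensitivity
    return "night"          # standard processing with slight adjustments

# In the pipeline the altitude comes from the Astral library, roughly:
#   from astral import Observer
#   from astral.sun import elevation
#   alt = elevation(Observer(latitude=65.01, longitude=25.47), utc_timestamp)

print(sky_class(-38.0), sky_class(2.0), sky_class(10.0))
```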
Cloud Cover
The second pre-filter queries the Open-Meteo Archive API, a completely free historical weather reanalysis service. Given the camera’s coordinates and the video’s timestamp, it returns hourly cloud cover as a percentage. Any video recorded under ≥ 75 % cloud cover is skipped: a completely overcast sky produces nothing but blank grey, so running the GPU detector on it is pointless.
The API requires no key, has a generous fair-use policy, and covers historical data back to 1940. If it is unreachable for any reason, the system defaults to downloading the video rather than risking a missed meteor.
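A minimal sketch of the cloud-cover check, assuming Open-Meteo’s archive endpoint and the `cloud_cover` hourly variable name (verify both against the current API docs). The decision function fails open, as described: an unreachable API never causes a skip.

```python
import json
import urllib.parse
import urllib.request

ARCHIVE_URL = "https://archive-api.open-meteo.com/v1/archive"

def hourly_cloud_cover(lat, lon, date_iso):
    """Fetch one day's hourly cloud-cover percentages (UTC)."""
    query = urllib.parse.urlencode({
        "latitude": lat, "longitude": lon,
        "start_date": date_iso, "end_date": date_iso,
        "hourly": "cloud_cover", "timezone": "UTC",
    })
    with urllib.request.urlopen(f"{ARCHIVE_URL}?{query}", timeout=10) as resp:
        data = json.load(resp)
    return data["hourly"]["cloud_cover"]  # 24 values, percent

def should_skip(cloud_cover_pct, threshold=75, api_failed=False):
    """Skip only when we positively know the sky was overcast."""
    if api_failed or cloud_cover_pct is None:
        return False  # fail open: never risk missing a meteor
    return cloud_cover_pct >= threshold

print(should_skip(80), should_skip(40), should_skip(None, api_failed=True))
```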
Together these two filters typically eliminate 60–80 % of all recorded video before a single frame is decoded.
Stage 3 – Detection with MetDetPy M3Det
The surviving files go to MetDetPy, an open-source meteor detection library developed by the Lilac Meteor Observatory project. Its M3Det algorithm works on a difference-frame stack: it subtracts consecutive frames, accumulates the absolute differences over a sliding window, and then looks for pixel sequences that move in a straight line at meteor-like speed.
Detection runs on the GPU using ONNX Runtime’s CUDAExecutionProvider. That reduces a thirty-minute video to a few minutes of processing time on a modest NVIDIA card.
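The difference-frame stack idea can be shown with a toy example. This is an illustration of the principle only, not MetDetPy’s actual implementation — real M3Det adds line detection and speed checks on top of the stack.

```python
import numpy as np

def diff_stack(frames, window=5):
    """Accumulate absolute frame-to-frame differences over a sliding
    window. Static background cancels out; moving lights leave a track."""
    diffs = [np.abs(frames[i].astype(np.int16) - frames[i - 1].astype(np.int16))
             for i in range(1, len(frames))]
    return [np.sum(diffs[i:i + window], axis=0)
            for i in range(len(diffs) - window + 1)]

# Synthetic 8x8 "sky": a single bright pixel moving one column per frame,
# like a fast meteor streak.
frames = []
for t in range(8):
    f = np.zeros((8, 8), dtype=np.uint8)
    f[3, t] = 255
    frames.append(f)

stack = diff_stack(frames, window=5)[0]
print(np.count_nonzero(stack))  # the moving pixel leaves a short bright track
```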
Custom Configuration
The default MetDetPy configuration is optimised for Chinese all-sky cameras operated by the Meteor Master network. The Opticam i5 in Finland is a different beast, so I maintain a custom config called m3det_low_norecheck.json. The most important settings are:
- YOLO recheck disabled — the bundled YOLO model was trained on Chinese camera footage and falsely rejects real meteors from the Finnish sky. All custom configs set `"switch": false` in the `recheck_cfg` block
- Fixed exposure time, `exp_time` = 0.067 s — MetDetPy can auto-detect exposure from video brightness, but on a long dawn/dusk video it may choose rFPS = 10 and completely miss a fast meteor. Fixing it to 0.067 s (two frames at the container’s declared 30 fps rate) forces rFPS ≈ 15 on every run
- `continue_on_err: true` — the Opticam AVI files occasionally contain corrupt frames; without this flag MetDetPy stops cold at roughly 79 % of the file
- Minimum length 50 px — rejects cloud-edge artefacts and single-pixel noise that produce very short tracks
- Detection threshold 0.65, sensitivity ”low” — requires a brighter pixel response for a detection to register
- Dynamic mask window 5 s — suppresses persistent bright regions such as stationary stars, which would otherwise generate a flood of false detections
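Putting the first three of those settings together, the config looks roughly like this. Only the keys named above are shown, and the nesting is my assumption about MetDetPy’s schema rather than a verbatim excerpt of m3det_low_norecheck.json:

```json
{
  "exp_time": 0.067,
  "continue_on_err": true,
  "recheck_cfg": {
    "switch": false
  }
}
```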
The 30 fps Container Problem
The Opticam i5 shoots at approximately 23.7 fps natively but wraps the stream in a 30 fps AVI container, filling the remaining DTS slots with empty packets. MetDetPy counts frames sequentially from zero, so its reported timestamps are based on the container’s 30 fps rate. The actual seek position in the file is:
corrected_seek = reported_time × (30 / effective_fps) ≈ reported_time × 1.264
The effective fps is measured once per file using ffprobe -count_packets, which counts only real packets and completes in about 0.2 seconds. Without this correction every extracted screenshot and clip arrives roughly 26 % too early in the video — the difference between capturing a meteor and capturing empty sky.
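A sketch of the measurement and correction, using ffprobe’s packet-counting flags; the exact output parsing is an assumption about the CSV field order, so treat it as illustrative.

```python
import subprocess

def effective_fps(path):
    """Count real video packets with ffprobe (no decoding, so it is fast)
    and divide by the stream duration to get the true frame rate."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-count_packets", "-show_entries",
         "stream=duration,nb_read_packets", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True).stdout
    duration, packets = out.strip().split(",")
    return int(packets) / float(duration)

def corrected_seek(reported_time_s, container_fps=30.0, real_fps=23.73):
    """Convert MetDetPy's frame-counted timestamp to a real seek position."""
    return reported_time_s * container_fps / real_fps

# The dusk example from later in this post: reported 18m21s
# maps to roughly 23m12s in the actual file.
print(round(corrected_seek(18 * 60 + 21)))
```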
Stage 4 – Post-processing and False-Positive Rejection
MetDetPy, even at low sensitivity, occasionally flags events that are not meteors. A custom post-processing script applies three statistical filters to every detection before it is accepted:
- Minimum fix duration ≥ 1.5 s — single-frame glitches and noise flickers produce an apparent ”meteor” lasting less than a second, while real meteors last at least 2.5 seconds in the detection window
- Minimum point density ≥ 32 pts/s — computed as `num_pts / fix_duration`. Slow satellite trails and sparse flickers score very low here; genuine meteors consistently produce ≥ 40 pts/s
- Maximum distance + speed gate — large global exposure changes (caused by a passing cloud suddenly clearing, for example) register as extremely fast ”meteors” spanning almost the entire frame. The filter rejects any detection where `dist > 300 px AND fix_speed > 12 %/s`; a real fast meteor can cover a large distance, but not at that speed
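The three gates amount to a few comparisons. A minimal sketch — the dictionary keys mirror the statistics quoted in this post, not necessarily the project’s exact field names:

```python
def passes_filters(det):
    """Apply the three Stage 4 gates to one detection's statistics."""
    if det["fix_duration"] < 1.5:
        return False                                  # glitch or flicker
    if det["num_pts"] / det["fix_duration"] < 32:
        return False                                  # too sparse: satellite/noise
    if det["dist"] > 300 and det["fix_speed"] > 12:
        return False                                  # frame-wide exposure jump
    return True

# The validated dusk meteor described below passes every gate...
meteor = dict(fix_duration=2.6, num_pts=108, dist=83, fix_speed=3.4)
# ...while a global exposure change trips the distance+speed gate.
flash = dict(fix_duration=2.0, num_pts=90, dist=500, fix_speed=20.0)
print(passes_filters(meteor), passes_filters(flash))
```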
After filtering, the surviving detections are each given a precise screenshot extracted via two-pass ffmpeg seek (coarse seek to near the timestamp, then frame-accurate decode), and a five-second video clip starting two seconds before the event. Results land in results/YYYYMMDD/HHMMSS/.
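The two-pass seek pattern — a fast keyframe seek before `-i`, then a frame-accurate seek after it — can be sketched as a command builder. Output filename and the two-second coarse margin are illustrative choices, not the project’s actual values.

```python
def screenshot_cmd(video, event_s, coarse_margin=2.0):
    """Build an ffmpeg command for a two-pass seek: -ss before -i jumps
    quickly to a nearby keyframe, -ss after -i decodes frame-accurately
    the rest of the way."""
    coarse = max(event_s - coarse_margin, 0.0)
    fine = event_s - coarse
    return ["ffmpeg", "-ss", f"{coarse:.3f}", "-i", video,
            "-ss", f"{fine:.3f}", "-frames:v", "1", "-y", "screenshot.png"]

cmd = screenshot_cmd("schedule_20260215_190000.avi", 1392.0)
print(" ".join(cmd))
```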
Stage 5 – The Web Interface
A small Flask application serves the review interface at localhost:5000. Its main features:
- Detection list — all analysed videos, sortable by date, filterable by minimum meteor count, date range, and whether false positives are shown
- Detail view — for each detected meteor: thumbnail screenshot, video clip, and a statistics table (time, duration, distance, speed, point density, score)
- Screenshot overlay — the screenshot can be superimposed transparently over the video clip with an adjustable opacity slider; `mix-blend-mode: lighten` makes the bright meteor trajectory float over the dark video, so it is immediately obvious whether the detected path matches the actual fireball in the clip
- Per-detection false-positive flagging — individual detections within a video can be marked as false positives independently, in addition to marking the whole video
- Bulk actions — delete result files or source videos for all false-positive videos with one click, with a preview list showing exactly what would be removed before confirmation
The Database
One SQLite file (transfer/fetched_videos.db) is the long-term record of everything that has happened. Key columns in the main table include the video timestamp (parsed from the filename and stored separately for fast date-range queries), cloud cover at recording time, whether the video was processed, how many meteors survived the post-process filters, and whether a human reviewer marked it as a false positive.
A second table, detection_fps, stores per-detection false-positive flags keyed by filename and zero-based meteor index. This allows marking individual detections inside a multi-meteor video without invalidating the others. When a whole video is marked false positive in the UI, all its individual detections are cascade-marked automatically.
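The two-table layout and the cascade behaviour can be sketched as below. Column and table names are assumptions reconstructed from the description above, not the real schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fetched_videos (
    filename       TEXT PRIMARY KEY,
    video_ts       TEXT,     -- parsed from filename, for date-range queries
    cloud_cover    INTEGER,  -- percent at recording time
    processed      INTEGER,
    meteor_count   INTEGER,  -- detections surviving the post-process filters
    false_positive INTEGER DEFAULT 0
);
CREATE TABLE detection_fps (
    filename     TEXT,
    meteor_index INTEGER,     -- zero-based within the video
    PRIMARY KEY (filename, meteor_index)
);
""")

def mark_video_false_positive(conn, filename, meteor_count):
    """Marking a whole video cascades to all its individual detections."""
    conn.execute("UPDATE fetched_videos SET false_positive = 1 "
                 "WHERE filename = ?", (filename,))
    conn.executemany("INSERT OR IGNORE INTO detection_fps VALUES (?, ?)",
                     [(filename, i) for i in range(meteor_count)])

conn.execute("INSERT INTO fetched_videos VALUES (?, ?, ?, 1, 3, 0)",
             ("schedule_20260215_190000.avi", "2026-02-15 19:00:00", 20))
mark_video_false_positive(conn, "schedule_20260215_190000.avi", 3)
flags = conn.execute("SELECT COUNT(*) FROM detection_fps").fetchone()[0]
print(flags)  # one flag per detection in the video
```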
A Validated Example: Dusk Meteor, February 2026
The trickiest detection to get right was a meteor recorded during dusk twilight in February 2026. The Sun was only 2° below the horizon — bright enough to wash out faint stars but dark enough for a fireball. The meteor crossed the frame over five seconds at roughly timestamp 23m07s into the thirty-minute video.

With the original ”auto” exposure setting, MetDetPy chose rFPS = 10 for this long bright video, missed the real meteor entirely, and instead flagged a noise event at 00:09. Fixed exposure forced rFPS = 15, the real meteor appeared at reported time 18m21s, and the frame-rate correction placed the screenshot seek at 23m12s — exactly where the fireball was visible.
Parameters for that detection after post-filtering: fix_duration = 2.6 s, num_pts = 108, pts_density = 42/s, dist = 83 px, fix_speed = 3.4 %/s. Every filter passed with margin.
As an aside, the video was later featured in SVT Västerbotten’s news coverage of the bolide.
What Could Be Achieved Next
Triangulation and Orbit Calculation
The most scientifically valuable extension would be a second camera at a known baseline distance. Two simultaneous all-sky views of the same meteor allow triangulation of the entry trajectory, atmospheric ablation height, and — if the meteor is bright enough — a preliminary heliocentric orbit. The Finnish Fireball Network and the European Fireball Network already operate this way; a single additional site 50–100 km from Oulu could connect this system to that infrastructure.
Custom YOLO Recheck Model
The bundled YOLO model is currently disabled because it was trained on different hardware. With one full season of confirmed detections, the ClipToolkit export pipeline included with MetDetPy can generate Labelme-format annotations. A fine-tuned YOLOv5s exported to ONNX and dropped into the MetDetPy/weights/ folder would re-enable the recheck stage — adding a learned second opinion that understands this specific camera’s point-spread function, noise profile, and latitude-specific sky background.
Multi-Station Coincidence Alerting
Even without triangulation, cross-correlating detection timestamps between two or more independently operated cameras reduces the false-positive rate to near zero. A brief HTTP notification (or a shared MQTT topic) would allow a network of hobbyist cameras to alert each other within seconds of a mutual detection.
Shower Association
Every detection already records a start time, an approximate apparent trajectory (from the detection bounding box), and the camera’s exact coordinates and orientation. Mapping these back to celestial coordinates and comparing against the IAU Meteor Data Center shower list would automatically tag detections as probable Perseids, Leonids, sporadic, and so on.
Spectroscopy
A low-cost diffraction grating placed over the lens would disperse meteor light into a spectrum. Combined with the existing frame-accurate screenshot extraction, even a few hundred pixels of spectrum per meteor can reveal composition — magnesium and sodium abundances differ noticeably between carbonaceous and rocky meteoroids.
Public Live Feed
The web interface today is local. Wrapping it in a lightweight authentication layer and exposing it as a public read-only site would let anyone check last night’s detections in real time — a small contribution to citizen science without requiring any additional hardware.
Conclusion
Tulipallo turned a pile of unwatched video into a systematic nightly survey of the sky above Oulu. The core insight was not a clever algorithm but a pipeline discipline: eliminate the impossible early and cheaply, then apply computation only where it matters. Free weather data, a 0.2-second frame count, and a handful of statistical thresholds do most of the work before the GPU ever runs.
The result is a system that is genuinely operational — it runs unattended every night, and the confirmation rate on events that reach the review interface is high. The next steps are more about science than engineering: a second camera, a trained model, and eventually a connection to the wider European fireball network.
And YES, I used vibe coding with VS Code GitHub Copilot’s Claude Sonnet 4.6 to implement this project.
All code will be open source on GitHub after I am convinced it can be released: github.com/emehtata/meteor-watch
Last modified 2 March 2026 at 23:27