Case Study — Monitoring Military Installations

This case study applies the full GEOINT toolkit to a realistic scenario: monitoring a military facility with open-source satellite imagery over six months. The methodology draws on real-world open-source intelligence work by organizations such as Bellingcat, the CSIS Satellite Analysis Project, and the James Martin Center for Nonproliferation Studies at the Middlebury Institute.

This is where all five patterns from The GEOINT Mind Map converge. You must choose the right data (Pattern 1), avoid single-image conclusions (Pattern 2), detect changes over time (Pattern 3), work within collection constraints (Pattern 4), and match your method to what the resolution can actually show (Pattern 5).


Phase 1: Baseline Establishment

Objective

Establish “what normal looks like” for the facility. Without a baseline, you cannot detect change. Without detecting change, you cannot generate intelligence.

Step 1: Identify the Facility and Collection Plan

import numpy as np
from datetime import datetime, timedelta
 
def create_collection_plan(lat, lon, start_date, duration_months=6):
    """
    Generate a collection plan for monitoring a facility.
    Defines which sensors to use and expected collection frequency.
    """
    plan = {
        "target": {"lat": lat, "lon": lon},
        "period": {
            "start": start_date,
            "end": start_date + timedelta(days=30 * duration_months),
        },
        "sensors": [
            {
                "name": "Sentinel-2",
                "type": "optical",
                "resolution": "10m",
                "revisit": "5 days",
                "expected_cloud_free": "2-3 per month (Northern Europe)",
                "what_it_shows": "construction, vegetation change, large vehicles",
                "limitation": "cannot identify vehicle type at 10m",
            },
            {
                "name": "Sentinel-1",
                "type": "SAR",
                "resolution": "5x20m",
                "revisit": "6 days",
                "expected_usable": "4-5 per month (all-weather)",
                "what_it_shows": "new structures (metal), ship presence, ground disturbance",
                "limitation": "cannot count vehicles, speckle noise",
            },
        ],
        "osint_sources": [
            "AIS data (if near water)",
            "ADS-B data (if airfield)",
            "Social media monitoring",
            "Local news media",
            "Defense ministry statements",
            "Satellite imagery services (Google Earth, Bing Maps for baseline)",
        ],
    }
    return plan
 
plan = create_collection_plan(59.0, 25.0, datetime(2024, 1, 1))
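Before collecting, it is worth sanity-checking how many usable scenes the plan implies. The per-month figures below are the midpoints of the estimates in the plan above (assumptions, not guarantees):

```python
# Midpoints of the plan's per-month estimates
duration_months = 6
optical_per_month = 2.5   # cloud-free Sentinel-2, Northern Europe
sar_per_month = 4.5       # usable Sentinel-1, all-weather

n_optical = duration_months * optical_per_month
n_sar = duration_months * sar_per_month
print(f"Expected usable scenes: ~{n_optical:.0f} optical, ~{n_sar:.0f} SAR")
# SAR delivers nearly twice the temporal coverage: it carries the continuity
# of the time series, while optical provides the interpretable detail.
```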

Step 2: Download and Catalog Time Series

import os
from datetime import datetime
 
def build_imagery_catalog(search_results, output_dir="./monitoring_data"):
    """
    Organize downloaded imagery into a structured catalog.
    """
    os.makedirs(output_dir, exist_ok=True)
 
    catalog = []
    for product in search_results:
        entry = {
            "date": product["date"],
            "sensor": product.get("sensor", "S2"),
            "cloud_pct": product.get("cloud_cover", None),
            "filename": product["name"],
            "path": os.path.join(output_dir, product["name"]),
            "usable": True,  # set False if too cloudy or other issues
            "notes": "",
        }
 
        # Flag as unusable if >50% cloud in AOI
        # (check against None explicitly: 0% cloud is falsy but valid)
        if entry["cloud_pct"] is not None and entry["cloud_pct"] > 50:
            entry["usable"] = False
            entry["notes"] = "Excessive cloud cover"
 
        catalog.append(entry)
 
    # Summary (sort by date so the printed range is correct)
    catalog.sort(key=lambda e: e["date"])
    n_total = len(catalog)
    n_usable = sum(1 for e in catalog if e["usable"])
    print(f"Catalog: {n_total} products, {n_usable} usable")
    if catalog:
        print(f"Date range: {catalog[0]['date']} to {catalog[-1]['date']}")
 
    return catalog
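A standalone sketch of the cataloging step, with hypothetical product dicts (filenames invented) and the cloud filter inlined:

```python
# Hypothetical search results shaped like the entries the function expects
search_results = [
    {"date": "2024-01-05", "cloud_cover": 12, "name": "S2A_MSIL2A_20240105.SAFE"},
    {"date": "2024-01-15", "cloud_cover": 85, "name": "S2B_MSIL2A_20240115.SAFE"},
    {"date": "2024-01-25", "cloud_cover": 30, "name": "S2A_MSIL2A_20240125.SAFE"},
]

# Inline version of the >50% cloud filter from build_imagery_catalog
catalog = [
    {**p, "usable": p["cloud_cover"] is None or p["cloud_cover"] <= 50}
    for p in search_results
]
n_usable = sum(e["usable"] for e in catalog)
print(f"{n_usable} of {len(catalog)} products usable")  # 2 of 3
```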

Step 3: Establish Baseline Metrics

import numpy as np
import matplotlib.pyplot as plt
 
def establish_baseline(ndvi_timeseries, dates):
    """
    Compute baseline statistics for an area of interest.
    ndvi_timeseries: list of 2D NDVI arrays (clouds masked to NaN)
    dates: matching acquisition dates (kept for record-keeping/reporting)
    Returns: baseline mean and standard deviation per pixel.
    """
    stack = np.stack(ndvi_timeseries, axis=0)  # (time, rows, cols)
 
    # Remove NaN (cloudy) observations
    baseline_mean = np.nanmean(stack, axis=0)
    baseline_std = np.nanstd(stack, axis=0)
 
    # Identify stable pixels (low temporal variance = reliable baseline)
    stable_mask = baseline_std < 0.1
 
    return {
        "mean": baseline_mean,
        "std": baseline_std,
        "stable_mask": stable_mask,
        "n_observations": np.sum(~np.isnan(stack), axis=0),
    }
 
 
def identify_observable_indicators(facility_type="military_base"):
    """
    Define observable indicators for the facility type.
    These are features you will SYSTEMATICALLY check each collection.
    """
    indicators = {
        "military_base": {
            "permanent_structures": [
                "Number and size of buildings/hangars",
                "Perimeter fence condition and extent",
                "Hardened shelters/bunkers",
                "Communication towers/antenna arrays",
            ],
            "activity_indicators": [
                "Vehicle count in motor pool (establish daily/weekly norm)",
                "Vehicle presence on training grounds",
                "Track marks on unpaved areas",
                "New earthworks or defensive positions",
                "Tent/temporary shelter deployment",
            ],
            "logistics_indicators": [
                "Truck traffic at gates (count vehicles per day if temporal resolution allows)",
                "Fuel storage activity (tanker trucks near POL point)",
                "Rail siding activity (if applicable)",
                "Supply container/pallet areas",
            ],
            "construction_indicators": [
                "New excavation or grading",
                "Construction materials staged",
                "New foundations/structures",
                "Road construction or improvement",
            ],
            "concealment_indicators": [
                "New camouflage netting deployment",
                "Vegetation planting over disturbed areas",
                "Decoy placement",
            ],
        },
    }
    return indicators.get(facility_type, {})
 
indicators = identify_observable_indicators()
for category, items in indicators.items():
    print(f"\n{category.upper()}")
    for item in items:
        print(f"  [ ] {item}")

Phase 2: Change Detection

Automated Change Alerting

import numpy as np
 
def monitor_for_changes(baseline, new_observation, threshold_sigma=2.5):
    """
    Compare new observation against baseline.
    Flag pixels that deviate more than threshold_sigma standard deviations.
    """
    deviation = (new_observation - baseline["mean"]) / (baseline["std"] + 1e-10)
 
    significant_decrease = deviation < -threshold_sigma  # vegetation loss
    significant_increase = deviation > threshold_sigma   # vegetation gain
 
    # Only flag stable areas (noisy areas produce false alarms)
    change_mask = (significant_decrease | significant_increase) & baseline["stable_mask"]
 
    # Compute change statistics
    total_changed = np.sum(change_mask)
    loss_pixels = np.sum(significant_decrease & baseline["stable_mask"])
    gain_pixels = np.sum(significant_increase & baseline["stable_mask"])
 
    return {
        "change_mask": change_mask,
        "decrease_mask": significant_decrease & baseline["stable_mask"],
        "increase_mask": significant_increase & baseline["stable_mask"],
        "deviation": deviation,
        "total_changed_pixels": total_changed,
        "loss_pixels": loss_pixels,
        "gain_pixels": gain_pixels,
        "alert": total_changed > 100,  # flag if >100 px changed (~1 ha at 10 m)
    }
 
 
def generate_change_report(observation_date, changes, pixel_size_m=10):
    """Generate a text-based change report."""
    report = []
    report.append(f"CHANGE DETECTION REPORT — {observation_date}")
    report.append("=" * 60)
 
    if not changes["alert"]:
        report.append("STATUS: No significant changes detected.")
        report.append(f"Total changed pixels: {changes['total_changed_pixels']}")
        return "\n".join(report)
 
    report.append("STATUS: *** CHANGES DETECTED ***")
    report.append("")
 
    area_m2 = changes["total_changed_pixels"] * pixel_size_m ** 2
    report.append(f"Total changed area: {area_m2:.0f} m^2 ({area_m2/10000:.2f} ha)")
    report.append(f"  Vegetation/surface loss: {changes['loss_pixels']} pixels "
                  f"({changes['loss_pixels'] * pixel_size_m**2:.0f} m^2)")
    report.append(f"  New vegetation/surface: {changes['gain_pixels']} pixels "
                  f"({changes['gain_pixels'] * pixel_size_m**2:.0f} m^2)")
    report.append("")
    report.append("ANALYST ACTION REQUIRED:")
    report.append("  1. Visually inspect change areas in before/after imagery")
    report.append("  2. Cross-reference with OSINT (news, social media)")
    report.append("  3. Check SAR data for same period (cloud-independent confirmation)")
    report.append("  4. Update facility assessment")
 
    return "\n".join(report)
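Exercising the alerting logic on synthetic data; this standalone sketch inlines the core deviation test from monitor_for_changes rather than calling it:

```python
import numpy as np

np.random.seed(0)
# Baseline from 10 clean synthetic NDVI observations of a 100x100 AOI
stack = np.random.normal(0.5, 0.05, (10, 100, 100))
baseline = {
    "mean": stack.mean(axis=0),
    "std": stack.std(axis=0),
    "stable_mask": stack.std(axis=0) < 0.1,
}

# New observation with a 20x30 pixel cleared patch (NDVI collapses to ~0.1)
new_obs = np.random.normal(0.5, 0.05, (100, 100))
new_obs[40:60, 30:60] = np.random.normal(0.1, 0.02, (20, 30))

deviation = (new_obs - baseline["mean"]) / (baseline["std"] + 1e-10)
loss = (deviation < -2.5) & baseline["stable_mask"]
print(f"Loss pixels flagged: {loss.sum()}")
# Expect the 600-pixel patch plus some false alarms: with only 10 baseline
# observations the per-pixel std is a noisy estimate, so a small fraction
# of unchanged pixels also crosses the 2.5-sigma line.
```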

SAR as Complement

import numpy as np

def sar_change_assessment(sar_before, sar_after, threshold_db=3):
    """
    SAR amplitude ratio for change detection.
    Use when optical is cloudy — SAR fills the gap.
    """
    ratio_db = 10 * np.log10(
        np.maximum(sar_after, 1e-10) / np.maximum(sar_before, 1e-10)
    )
 
    new_structures = ratio_db > threshold_db     # new metal/hard structures
    removed = ratio_db < -threshold_db           # structures removed/flooding
 
    return {
        "ratio_db": ratio_db,
        "new_structures": new_structures,
        "removed": removed,
        "note": "SAR detects metallic structures, new construction, "
                "ground disturbance. Does NOT detect vehicle type or "
                "discriminate between types of new construction.",
    }
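A quick numeric check of the 3 dB rule, standalone with synthetic amplitudes (gamma-distributed to mimic speckle; the bright patch is an assumed new scatterer):

```python
import numpy as np

np.random.seed(1)
before = np.random.gamma(4.0, 0.025, (100, 100))  # speckle-like, mean ~0.1
after = before.copy()
after[20:30, 20:40] *= 10                         # new scatterer: exactly +10 dB

ratio_db = 10 * np.log10(np.maximum(after, 1e-10) / np.maximum(before, 1e-10))
new_structures = ratio_db > 3
print(f"New-structure pixels: {new_structures.sum()}")  # 200 (the 10x20 patch)
# In real image pairs, both scenes carry independent speckle, so the ratio
# is noisy everywhere; that noise is why the threshold has a 3 dB margin.
```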

Phase 3: Analysis

Correlate with OSINT

def osint_correlation_checklist(observed_changes):
    """
    Checklist for OSINT correlation of imagery findings.
    Each imagery observation should be checked against open sources.
    """
    checklist = {
        "new_vehicles_detected": {
            "check": [
                "Search social media for soldiers posting about deployment",
                "Check defense ministry press releases for exercise announcements",
                "Monitor local news for reports of military movement",
                "Check ADS-B for increased military air traffic to nearest airfield",
                "Review rail/road traffic imagery for military logistics convoys",
            ]
        },
        "new_construction": {
            "check": [
                "Search government procurement databases for construction contracts",
                "Check planning/zoning records (if democratic country)",
                "Monitor defense budget announcements for infrastructure spending",
                "Look for construction company social media/portfolio updates",
            ]
        },
        "earthworks_defensive": {
            "check": [
                "Search for military doctrine publications (field manual references)",
                "Check if similar earthworks appeared at other facilities simultaneously",
                "Monitor troop exercise announcements that might explain activity",
                "Review satellite historical imagery for seasonal patterns",
            ]
        },
    }
    return checklist
 
 
def apply_ach_to_findings(findings):
    """
    Structure findings into ACH format.
    """
    # Standard hypotheses for military facility monitoring
    hypotheses = [
        "H1: Routine activity within normal operational tempo",
        "H2: Planned military exercise (announced or unannounced)",
        "H3: Increased operational readiness / force buildup",
        "H4: Facility modernization / infrastructure improvement",
    ]
 
    print("\nANALYSIS OF COMPETING HYPOTHESES")
    print("=" * 60)
    print("\nHypotheses:")
    for h in hypotheses:
        print(f"  {h}")
 
    print("\nEvidence to evaluate against each hypothesis:")
    for finding in findings:
        print(f"\n  Finding: {finding['description']}")
        print(f"  Source: {finding['source']} (reliability: {finding['reliability']})")
        print(f"  Date: {finding['date']}")
        for h in hypotheses:
            print(f"    vs {h[:20]}...: [CONSISTENT / INCONSISTENT / NEUTRAL] ?")
 
    return hypotheses
 
# Example
findings = [
    {"description": "Vehicle count increased from 5 to 25 in motor pool",
     "source": "IMINT (Sentinel-2)", "reliability": "High", "date": "2024-03-15"},
    {"description": "New earthworks along eastern perimeter (200m length)",
     "source": "IMINT (Sentinel-2)", "reliability": "High", "date": "2024-03-15"},
    {"description": "Local news: Ministry announces spring readiness inspection",
     "source": "OSINT", "reliability": "Moderate", "date": "2024-03-10"},
    {"description": "SAR shows new metallic structures near vehicle park",
     "source": "IMINT (Sentinel-1)", "reliability": "Moderate", "date": "2024-03-18"},
]
 
hypotheses = apply_ach_to_findings(findings)

Phase 4: Report

Intelligence Report Template

from datetime import datetime

def write_facility_assessment(facility_name, period, findings, ach_results,
                              confidence, save_path=None):
    """
    Generate a structured facility assessment report.
    findings and ach_results are accepted for future template auto-fill.
    """
    report = f"""
FACILITY ASSESSMENT — {facility_name.upper()}
{'=' * 70}
Assessment Period: {period}
Date of Report: {datetime.now().strftime('%Y-%m-%d')}
Overall Confidence: {confidence.upper()}
 
1. BOTTOM LINE UP FRONT (BLUF)
   [1-2 sentence summary of the key finding and its significance]
 
2. KEY FINDINGS
   [Numbered list of significant observations, each with:
    - What was observed
    - When it was first detected
    - Source (sensor/OSINT)
    - Significance]
 
3. IMAGERY ANALYSIS TIMELINE
 
   Month 1 (baseline):
   - Permanent structures: [count, type, condition]
   - Vehicle baseline: [count by type if identifiable]
   - Training area activity: [normal/elevated]
 
   Month 2:
   - Changes from baseline:
   - New construction:
   - Vehicle changes:
 
   [... continue for each month]
 
4. OSINT CORROBORATION
   [For each imagery finding, what OSINT supports or contradicts it]
 
5. ANALYSIS OF COMPETING HYPOTHESES
   [ACH matrix and results]
   Most likely explanation:
   Alternative explanations:
 
6. CONFIDENCE AND LIMITATIONS
   Confidence: {confidence}
   Basis for confidence:
   Key assumptions:
   Information gaps:
   What would change this assessment:
 
7. INDICATORS TO WATCH
   [List of specific observable indicators that would:
    a) Confirm the assessment
    b) Contradict the assessment
    c) Indicate escalation/de-escalation]
 
8. RECOMMENDED COLLECTION
   [What additional collection (sensor, source) would fill gaps]
"""
    if save_path:
        with open(save_path, "w") as f:
            f.write(report)
        print(f"Report saved to {save_path}")
 
    return report
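The template asks for an overall confidence level. One widely used convention for phrasing probability judgments is ICD 203-style estimative language; the helper below is a sketch of that mapping, not part of the template above:

```python
def estimative_language(probability_pct):
    """Map a subjective probability (percent) to ICD 203-style phrasing."""
    bands = [
        (95, "almost certain(ly)"),
        (80, "very likely"),
        (55, "likely"),
        (45, "roughly even chance"),
        (20, "unlikely"),
        (5,  "very unlikely"),
        (0,  "almost no chance"),
    ]
    for floor, phrase in bands:
        if probability_pct >= floor:
            return phrase
    return "almost no chance"

print(estimative_language(70))  # likely
print(estimative_language(10))  # very unlikely
```

Using the standard phrases keeps the "Overall Confidence" line comparable across reports and across analysts.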

Automated Monitoring Pipeline

Bringing it all together: a pipeline that runs periodically to download new imagery, compare against baseline, and flag changes.

import numpy as np
import matplotlib.pyplot as plt
 
class FacilityMonitor:
    """
    Automated monitoring pipeline for a single facility.
    Run periodically (e.g., weekly) to check for changes.
    """
 
    def __init__(self, lat, lon, name, pixel_size_m=10):
        self.lat = lat
        self.lon = lon
        self.name = name
        self.pixel_size_m = pixel_size_m
        self.baseline = None
        self.history = []
 
    def establish_baseline(self, ndvi_stack, dates):
        """Build baseline from initial observations."""
        stack = np.stack(ndvi_stack, axis=0)
        self.baseline = {
            "mean": np.nanmean(stack, axis=0),
            "std": np.nanstd(stack, axis=0),
            "stable_mask": np.nanstd(stack, axis=0) < 0.1,
            "n_obs": np.sum(~np.isnan(stack), axis=0),
        }
        print(f"Baseline established from {len(dates)} observations "
              f"({dates[0]} to {dates[-1]})")
 
    def check_new_observation(self, ndvi, date, scl=None):
        """
        Check a new observation against baseline.
        Returns alert status and change report.
        """
        if self.baseline is None:
            raise ValueError("Baseline not established. Call establish_baseline first.")
 
        # Apply cloud mask if provided. Keep SCL classes 4 (vegetation),
        # 5 (not vegetated), 6 (water), 7 (unclassified), 11 (snow/ice)
        if scl is not None:
            valid = np.isin(scl.astype(int), [4, 5, 6, 7, 11])
            ndvi = np.where(valid, ndvi, np.nan)
 
        # Deviation from baseline
        deviation = (ndvi - self.baseline["mean"]) / (self.baseline["std"] + 1e-10)
 
        # Detect changes
        loss = (deviation < -2.5) & self.baseline["stable_mask"] & ~np.isnan(ndvi)
        gain = (deviation > 2.5) & self.baseline["stable_mask"] & ~np.isnan(ndvi)
 
        result = {
            "date": date,
            "loss_area_m2": np.sum(loss) * self.pixel_size_m ** 2,
            "gain_area_m2": np.sum(gain) * self.pixel_size_m ** 2,
            "loss_mask": loss,
            "gain_mask": gain,
            "alert": np.sum(loss) + np.sum(gain) > 50,  # >50 px (~0.5 ha at 10 m)
        }
 
        self.history.append(result)
 
        if result["alert"]:
            print(f"*** ALERT *** {date}: Significant change detected at {self.name}")
            print(f"  Loss: {result['loss_area_m2']:.0f} m^2")
            print(f"  Gain: {result['gain_area_m2']:.0f} m^2")
        else:
            print(f"{date}: No significant change at {self.name}")
 
        return result
 
    def plot_monitoring_timeline(self, save_path=None):
        """Plot change history over monitoring period."""
        if not self.history:
            print("No observations recorded.")
            return
 
        dates = [h["date"] for h in self.history]
        loss_areas = [h["loss_area_m2"] / 10000 for h in self.history]  # hectares
        gain_areas = [h["gain_area_m2"] / 10000 for h in self.history]
        alerts = [h["alert"] for h in self.history]
 
        fig, ax = plt.subplots(figsize=(14, 5))
        x = range(len(dates))
        ax.bar(x, loss_areas, color="red", alpha=0.7, label="Vegetation/surface loss (ha)")
        ax.bar(x, gain_areas, bottom=loss_areas, color="green", alpha=0.7,
               label="New vegetation/surface (ha)")
 
        for i, alert in enumerate(alerts):
            if alert:
                ax.axvline(i, color="orange", linestyle="--", alpha=0.5)
                ax.text(i, max(loss_areas) * 0.9, "ALERT",
                        rotation=90, fontsize=8, color="orange")
 
        ax.set_xticks(x)
        ax.set_xticklabels([str(d)[:10] for d in dates], rotation=45, ha="right")
        ax.set_ylabel("Changed area (hectares)")
        ax.set_title(f"Monitoring Timeline — {self.name}")
        ax.legend()
        ax.grid(True, alpha=0.3)
        plt.tight_layout()
 
        if save_path:
            plt.savefig(save_path, dpi=150)
        plt.show()
 
 
# Demo
np.random.seed(42)
monitor = FacilityMonitor(59.0, 25.0, "Facility Alpha")
 
# Simulate baseline (3 observations)
size = 200
baseline_ndvis = [
    np.random.normal(0.5, 0.05, (size, size)) for _ in range(3)
]
monitor.establish_baseline(baseline_ndvis,
                            ["2024-01-15", "2024-01-25", "2024-02-04"])
 
# Simulate monthly observations
for month, changes in enumerate([
    None,                     # month 1: no change
    None,                     # month 2: no change
    {"area": (80, 120, 60, 110), "value": 0.15},  # month 3: construction started
    {"area": (80, 120, 60, 110), "value": 0.10},  # month 4: construction continues
    {"area": (80, 130, 55, 115), "value": 0.10},  # month 5: construction expanded
    None,                     # month 6: no new change
]):
    obs = np.random.normal(0.5, 0.05, (size, size))
    if changes:
        r1, r2, c1, c2 = changes["area"]
        obs[r1:r2, c1:c2] = np.random.normal(changes["value"], 0.02, (r2-r1, c2-c1))
 
    date = f"2024-{month+3:02d}-15"
    monitor.check_new_observation(obs, date)
 
monitor.plot_monitoring_timeline(save_path="monitoring_timeline.png")

Real-World Methodology References

Bellingcat

  • Open-source investigation methodology for conflict monitoring
  • Extensive use of Sentinel-2, Planet, Google Earth historical
  • Key technique: systematic timeline construction with before/after pairs

CSIS Satellite Analysis Project

  • Monitors North Korean, Chinese, and Russian military facilities
  • Uses commercial high-res imagery (MAXAR, Planet)
  • Publishes reports with annotated satellite imagery

Nuclear Facility Monitoring (IAEA Style)

  • Regular collection schedules aligned with inspection cycles
  • Defined observable indicators: cooling water discharge, construction, waste storage
  • Statistical baseline with anomaly detection
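
That last bullet is the same z-score test used throughout this chapter, and it applies just as well to scalar indicators as to pixels. A minimal sketch on an invented monthly vehicle count:

```python
import numpy as np

# Invented baseline: monthly vehicle counts in the motor pool
counts = np.array([5, 6, 4, 5, 7, 5])
mean, std = counts.mean(), counts.std()

new_count = 25
z = (new_count - mean) / std
print(f"Vehicle count z-score: {z:.1f}")
# ~20.9: far beyond a 2.5-sigma alert threshold, so flag for analyst review
```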

Exercises

Exercise 1: Establish a Baseline

  1. Pick a military facility visible on Google Earth
  2. Download 6+ Sentinel-2 scenes over 3 months
  3. Compute NDVI baseline (mean and standard deviation)
  4. Identify the most stable areas (low variance) and the most variable (seasonal)

Exercise 2: Detect and Report Changes

  1. Continue monitoring for the next 3 months (download newer imagery)
  2. Apply the change detection pipeline
  3. Investigate any flagged changes: real or false alarm?
  4. Write a 1-page assessment following the report template above

Exercise 3: Multi-Source Assessment

  1. For the same facility, gather OSINT:
    • Search news for the military unit stationed there
    • Check social media for geo-tagged posts nearby
    • If airfield: check ADS-B (flightradar24, adsbexchange)
  2. Fuse imagery findings with OSINT
  3. Apply ACH with 3 hypotheses
  4. Produce a final assessment with confidence level

Self-Test Questions

  1. Why is a baseline essential before starting change detection?
  2. How many cloud-free optical observations per month can you realistically expect in Northern Europe?
  3. When optical imagery is cloudy, what alternative sensor fills the gap?
  4. You detect a change but OSINT shows nothing unusual. What are possible explanations?
  5. How does the CSIS methodology differ from automated pixel-level change detection?

See also: Change Detection | Multi-Source Intelligence Fusion | Case Study - Maritime Domain Awareness

Next: Case Study - Maritime Domain Awareness