Real-Time Material Tracking & Blending System
A real-time material tracking and blending optimization system that maintains a digital twin of ore flow from mine pits through conveyors and stockpiles to processing plant inputs, significantly improving blending compliance.
Business Context
Mining operations extract ore from multiple sources with varying grades and mineralogy. This material flows through conveyors, accumulates in stockpiles, and eventually reaches the processing plant. Without real-time visibility into what is actually in each stockpile and on each belt, blending decisions rely on delayed lab results — hours to days old — and individual operator experience. The information gap between extraction and processing means optimization potential is systematically left on the table.
Strategic Value
The system created real-time visibility into ore flow that simply did not exist before, significantly improving blending compliance and reducing out-of-spec feed to the processing plant. Two synchronized layers work in concert: a physical tracking layer maintains time-resolved stockpile composition models with segregation modeling (because stockpiles are not homogeneous), while an optimization layer formulates blending as a constrained optimization problem with scenario simulation. Hourly tracking and 4-hourly optimization cycles run on Kedro/PySpark/Delta Lake, with multi-division deployment sharing a core engine while allowing per-site parameter configuration.
The Challenge
Blending decisions depended on lab results hours to days old and on individual operator judgment. The resulting information gap between mine extraction and plant processing left optimization potential untapped.
Our Approach
Two synchronized layers:
- Physical tracking layer: conveyor sensors, weightometers, and time-resolved stockpile composition modeling with segregation.
- Optimization layer: constrained optimization minimizing deviation from target grade specifications, with scenario simulation of candidate extraction sequences.

Configurable per mining division with a shared core engine.
Key Performance Indicators
| KPI | Baseline | Result | Impact |
|---|---|---|---|
| Blending Visibility | Delayed lab results (hours to days) | Real-time stockpile state estimation | Decisions based on current ore properties |
| Feed Compliance | Reactive correction after off-spec | Proactive blending optimization | Reduced out-of-specification feed |
| Decision Cadence | Shift-based manual assessment | Hourly tracking, 4-hourly optimization | Continuous improvement loop |
Proprietary — source code not publicly available
Architecture
[Architecture diagram: material tracking]
The Information Gap
Before this system, blending was educated guesswork. Ore extracted from multiple sources, each with its own grades and mineralogy, flowed through conveyors, accumulated in stockpiles, and eventually reached the processing plant, yet the only composition data available was lab results hours to days old, filtered through individual operator experience. Nobody knew what was actually in each stockpile or on each belt at any given moment.
The gap between extraction and processing meant optimization potential was systematically left on the table.
Two Synchronized Layers
The physical tracking layer maintains a time-resolved model of each stockpile’s composition. Conveyor sensors, weightometers, and sampling points feed continuous data. The system tracks not just what went into a stockpile, but how it’s distributed internally — segregation modeling accounts for the fact that stockpiles aren’t homogeneous. The material deposited first isn’t necessarily what comes out first during reclaim.
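The layered deposit/reclaim behavior can be sketched as a toy model. Everything here is an illustrative simplification, not the production model: the `Stockpile` and `Layer` names are invented, a single grade analyte stands in for full mineralogy, and reclaim is modeled as strictly top-down.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    tonnes: float
    grade: float  # single analyte (e.g. % Fe) for simplicity

@dataclass
class Stockpile:
    """Time-resolved composition model (hypothetical, simplified).

    Layers are stacked in deposition order; reclaim draws from the
    top, so first-in is not first-out (segregation).
    """
    layers: list = field(default_factory=list)

    def deposit(self, tonnes: float, grade: float) -> None:
        self.layers.append(Layer(tonnes, grade))

    def reclaim(self, tonnes: float) -> float:
        """Remove `tonnes` from the top; return the tonnage-weighted grade."""
        taken, grade_tonnes = 0.0, 0.0
        while taken < tonnes and self.layers:
            layer = self.layers[-1]
            take = min(layer.tonnes, tonnes - taken)
            taken += take
            grade_tonnes += take * layer.grade
            layer.tonnes -= take
            if layer.tonnes == 0:
                self.layers.pop()
        return grade_tonnes / taken if taken else 0.0

pile = Stockpile()
pile.deposit(1000, 62.0)  # earlier, higher-grade ore at the bottom
pile.deposit(500, 58.0)   # later, lower-grade ore on top
print(round(pile.reclaim(600), 2))  # → 58.67: reclaim crosses the layer boundary
```

Even this minimal version shows why segregation matters: the grade leaving the pile depends on which layers the reclaimer reaches, not on the pile's average.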
The optimization layer formulates blending as a constrained optimization problem: minimize deviation from target grade specifications, subject to tonnage constraints, equipment availability, and extraction sequence feasibility. The solver simulates multiple scenarios to find the best extraction plan before committing to it.
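That formulation can be illustrated as a small linear program, here using SciPy's `linprog` as a stand-in for the actual solver. The three sources, their grades, capacities, and the single analyte are invented for the example; the absolute grade deviation is minimized via the standard epigraph trick.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: three ore sources, one grade analyte (% Fe).
grades = np.array([62.0, 58.0, 55.0])     # source grades
caps = np.array([400.0, 400.0, 400.0])    # max reclaimable tonnes per source
T, target = 1000.0, 59.0                  # required feed tonnage and target grade

# Variables: [x1, x2, x3, t], where t bounds |mean grade - target|.
c = np.array([0.0, 0.0, 0.0, 1.0])        # minimize the deviation t
A_ub = np.array([
    np.append(grades, -T),                #  sum(g_i x_i) - T*t <= target*T
    np.append(-grades, -T),               # -sum(g_i x_i) - T*t <= -target*T
])
b_ub = np.array([target * T, -target * T])
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])   # total tonnage must equal T
b_eq = np.array([T])
bounds = [(0, cap) for cap in caps] + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, dev = res.x[:3], res.x[3]
print(x.round(1), round(dev, 3))          # tonnes per source, grade deviation
```

The real problem adds equipment availability and sequence-feasibility constraints, and the scenario simulation amounts to solving variants of this program under different assumptions before committing to a plan.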
Stockpile state updates hourly. Optimization runs every 4 hours. Each mining division has its own parameter configuration but shares the core engine — a design that enabled multi-division deployment without rebuilding the system for each site.
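The shared-engine, per-site-parameters design might look like the following minimal sketch. The parameter names and division keys are hypothetical; only the cadences (hourly tracking, 4-hourly optimization) come from the description above.

```python
# Shared core defaults, overridden per mining division (names hypothetical).
DEFAULTS = {
    "tracking_interval_h": 1,       # hourly stockpile state updates
    "optimization_interval_h": 4,   # 4-hourly blending optimization
    "grade_tolerance": 0.5,         # allowed deviation from target grade
}

DIVISIONS = {
    "division_a": {"grade_tolerance": 0.3},        # tighter feed spec
    "division_b": {"optimization_interval_h": 2},  # faster re-planning cycle
}

def site_config(division: str) -> dict:
    """Core defaults with site-specific parameters layered on top."""
    return {**DEFAULTS, **DIVISIONS.get(division, {})}

print(site_config("division_a"))
```

Keeping site differences in data rather than code is what lets one engine serve multiple divisions without per-site rebuilds.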
The system runs on Kedro pipelines with PySpark and Delta Lake on Databricks, providing versioned, time-travel-capable data storage that supports both real-time tracking and historical analysis.
Technology Stack
Kedro · PySpark · Delta Lake · Databricks