Technology

The right AI model
for every signal

We don't believe in one-model-fits-all. Our platform intelligently selects, combines, and orchestrates multiple AI architectures — each chosen for its strengths — to deliver the most accurate, explainable forecasts possible.

Model Class 1

Ensemble Trees & Gradient Boosting

For slow-moving, intermittent, and discrete demand patterns, tree-based ensemble methods excel. Algorithms like gradient boosting and random forests capture non-linear relationships between demand drivers without requiring continuous data streams.

Gradient Boosting · Random Forest · XGBoost

Best suited for:

- Intermittent demand items consumed at irregular intervals at the individual location level.
- Products with complex feature interactions where traditional statistical methods struggle.
- Classification tasks that route items to the optimal forecasting pipeline.
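As an illustrative sketch only (not our production pipeline), the core residual-fitting loop of gradient boosting can be written from scratch with depth-1 trees on a synthetic intermittent-demand series driven by a single promotion-like signal. All data and names here are hypothetical.

```python
import numpy as np

def fit_stump(x, residual):
    """Depth-1 regression tree: find the single threshold split that
    minimizes squared error of the residuals."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def boost(x, y, n_rounds=50, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the residual
    left by the ensemble so far, then joins it with a small weight."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return stumps, pred

# Synthetic intermittent demand: mostly zeros, spikes when the driver is high
rng = np.random.default_rng(0)
driver = rng.uniform(0, 1, 200)                   # e.g. promotion intensity
y = np.where(driver > 0.8, rng.poisson(5, 200), 0).astype(float)
stumps, fitted = boost(driver, y)
```

Production libraries such as XGBoost apply the same residual-fitting idea with deeper trees, regularization, and second-order gradients.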

[Diagram: an ensemble of 100+ decision trees, each splitting from a root node, combined into an aggregated prediction.]
Model Class 2

LSTM Recurrent Neural Networks

Long Short-Term Memory networks excel at learning long-range temporal dependencies. Unlike traditional statistical methods, LSTMs learn how today's consumption connects to patterns from weeks, months, or even years ago — capturing seasonality, trends, and regime changes simultaneously.

LSTM-RNN · Sequence Modeling · Temporal Learning

Best suited for:

- High-volume items with strong temporal patterns.
- Products where consumption at time t depends on complex interactions with consumption at times 0 through t-1.
- Multi-step ahead forecasting where maintaining context over long horizons is critical.
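A single LSTM step is compact enough to sketch, and it shows exactly where the long-term memory lives: the cell state c is updated additively through forget and input gates, which is what lets information persist across long horizons. The weights below are random, purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias. The cell state c carries long-term memory."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    f = sigmoid(z[0:H])            # forget gate: what to keep from c
    i = sigmoid(z[H:2*H])          # input gate: what to write to c
    g = np.tanh(z[2*H:3*H])        # candidate cell update
    o = sigmoid(z[3*H:4*H])        # output gate: what to expose as h
    c_new = f * c + i * g          # additive memory update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Feed a toy 10-step consumption sequence through the cell
rng = np.random.default_rng(1)
D, H = 3, 4
W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(10):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

Because c is updated by addition rather than repeated multiplication, gradients can flow across many steps — the property that makes multi-year seasonality learnable.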

[Diagram: chained LSTM cells at t-2, t-1, and t, each receiving an input, sharing a cell state (long-term memory), and producing the output for t+1.]
Model Class 3

Temporal Fusion Transformers

The TFT architecture combines the power of attention mechanisms with temporal processing. It handles both static features (e.g., store size, facility capacity, location type) and dynamic features (e.g., promotions, seasonal events, economic indicators, weather) while providing built-in interpretability through attention weights and variable importance scores.

Multi-Horizon Forecasting · Attention Mechanism · Variable Selection

Key advantage — built-in explainability:

TFT's architecture inherently reveals which input variables drove each prediction and which time steps the model attended to most. Analysts see exactly why the model expects a demand surge — whether it's a seasonal pattern, a policy change, or an emerging crisis signal.
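The mechanism behind those attention maps is small enough to show directly: scaled dot-product attention yields a probability distribution over historical time steps, and that distribution is what an analyst reads as "which periods the model looked at." The embeddings below are random stand-ins for encoded history, not real model states.

```python
import numpy as np

def attention_weights(query, keys):
    """Scaled dot-product attention weights: a softmax over similarity
    scores between the forecast query and each historical time step."""
    scores = keys @ query / np.sqrt(len(query))
    w = np.exp(scores - scores.max())      # numerically stable softmax
    return w / w.sum()

# 12 monthly history embeddings; the query is month 5's embedding,
# so attention should concentrate on the most similar months
rng = np.random.default_rng(2)
history = rng.normal(size=(12, 8))
w = attention_weights(history[5], history)
```

The weights sum to one, so they can be plotted directly as a temporal attention map alongside the forecast.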

[Diagram: TFT architecture. Static features, dynamic features, and known future inputs pass through a variable selection network, then an LSTM encoder and decoder, then multi-head attention, producing quantile forecasts.]
Model Class 4

Zero-Shot Forecasting Foundation Models

Trained on billions of data points, our foundation models develop a deep understanding of demand behavior across all contexts — routine operations, crises, pandemics, and conflicts. Unlike traditional models, they don't need retraining for new scenarios. They "remember" how systems behaved during past disruptions and apply that intelligence to new situations.

Transfer Learning · Zero-Shot Prediction · Crisis-Aware

The vision — a "Logistics Brain":

A foundation model that accumulates 10+ years of operational experience, including macro events like pandemics and military conflicts. When a crisis emerges, it generates context-aware predictions — accounting for facility closures, supply disruptions, demand surges, and behavioral changes — without manual reconfiguration.

[Diagram: foundation model trained on 6 billion+ data points, spanning 10 years of history, external events, 1000+ locations, and crisis patterns.]
Foundational Technology

Intelligent Feature Selection & Elimination

Before any model trains, our proprietary technology determines what to learn from and how deep to look — ensuring every model is fed exactly the right inputs for its segment.

Segment-Based Optimization

Not all time series behave the same. A high-volume seasonal product requires different features and training history than a slow-moving spare part. Our technology automatically groups individual time series — and their hierarchical aggregates — into distinct behavioral segments, then independently optimizes the feature set and training depth for each one.

Hierarchical Grouping · Per-Segment Optimization · Feature Elimination · Training Depth Tuning

How it works:

1. Hierarchical Segmentation — Individual time series are analyzed alongside their aggregated views (by category, region, channel) to identify clusters with similar demand patterns, volatility profiles, and feature sensitivities.

2. Feature Combination Search — For each segment, the system evaluates which combination of available features (promotions, seasonality indicators, economic signals, weather, events) actually improves predictive accuracy — and which introduce noise. Irrelevant features are eliminated.

3. Training Depth Calibration — Each segment's optimal lookback window is determined independently. Newly launched products may need only 6 months of history; mature seasonal items might benefit from 5+ years. The system finds the sweet spot automatically.
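Step 2 can be illustrated with a toy segment: a greedy backward search drops any feature whose removal improves validation error, here using closed-form ridge regression as a stand-in for the segment model. The data, feature names, and stopping rule are simplified for the sketch.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: stand-in for any per-segment model."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def backward_eliminate(X_tr, y_tr, X_va, y_va, names):
    """Greedily drop the feature whose removal improves validation
    error; stop when no single removal helps."""
    keep = list(range(X_tr.shape[1]))

    def err(cols):
        w = fit_ridge(X_tr[:, cols], y_tr)
        return ((X_va[:, cols] @ w - y_va) ** 2).mean()

    best = err(keep)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for c in list(keep):
            trial = [k for k in keep if k != c]
            e = err(trial)
            if e < best:
                best, keep, improved = e, trial, True
                break
    return [names[k] for k in keep], best

# Toy segment: demand driven by seasonality and promotions; weather is noise
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))                    # [season, promo, weather]
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=300)
kept, val_err = backward_eliminate(X[:200], y[:200], X[200:], y[200:],
                                   ["season", "promo", "weather"])
```

Features that genuinely drive the segment survive the search; the noise feature is a candidate for elimination because removing it cannot hurt held-out accuracy.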

[Diagram: pipeline. Raw time series (item-location, category aggregate, region aggregate) flow into behavioral segmentation (Seg A seasonal, Seg B steady, Seg C sporadic, Seg D new), then per-segment feature selection (season, promo, trend, weather, events, lag-12, category history), then training depth calibration (5 yrs, 3 yrs, 2 yrs, 6 mo), and finally optimized model training.]
30-50%

Noise Reduction

Eliminating irrelevant features prevents models from learning spurious correlations, dramatically improving forecast stability.

Per-Segment

Tailored Precision

Each behavioral cluster gets its own optimized configuration — the right features, the right history depth, the right model inputs.

Automatic

Self-Optimizing Pipeline

No manual feature engineering required. The system continuously re-evaluates segments and feature relevance as new data arrives.

Explainability

No black boxes. Only transparent AI.

Every model in our toolkit is designed for explainability. Analysts don't just get a number — they understand the reasoning, the confidence, and the key drivers behind every forecast.

Feature Importance

See which variables — season, demographics, events, trends — contributed most to each prediction. Ranked and quantified.
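One model-agnostic way to produce such a ranking is permutation importance: shuffle one input at a time and measure how much prediction error worsens. The "model" below is a fixed toy formula standing in for a trained forecaster; the feature names are illustrative.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Shuffle one feature column at a time; the rise in mean squared
    error over the baseline is that feature's importance score."""
    rng = np.random.default_rng(seed)
    base = ((predict(X) - y) ** 2).mean()
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # break feature j only
            scores[j] += ((predict(Xp) - y) ** 2).mean() - base
    return scores / n_repeats

# Toy forecaster: demand = 3*season + 0.5*events; demographics unused
rng = np.random.default_rng(5)
X = rng.normal(size=(400, 3))                  # [season, demographics, events]
y = 3.0 * X[:, 0] + 0.5 * X[:, 2]
predict = lambda Z: 3.0 * Z[:, 0] + 0.5 * Z[:, 2]
imp = permutation_importance(predict, X, y)
```

The unused feature scores near zero, while the dominant driver scores highest — the ranked, quantified view described above.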

Temporal Attention Maps

Visualize which historical time periods the model focused on. Understand if a forecast is driven by recent trends, seasonal cycles, or crisis patterns.

Confidence Intervals

Quantile forecasts provide uncertainty ranges. Know when the model is highly confident and when it flags higher risk — enabling smarter safety stock decisions.
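Quantile forecasts come from training against the pinball (quantile) loss instead of squared error. As a minimal illustration on a synthetic skewed demand sample, subgradient descent on the pinball loss recovers the 10th and 90th percentiles, which bound an 80% uncertainty range:

```python
import numpy as np

def pinball_loss(y, pred, q):
    """Pinball (quantile) loss; its minimizer is the q-th quantile of y."""
    d = y - pred
    return np.mean(np.maximum(q * d, (q - 1.0) * d))

def fit_quantile(y, q, steps=3000, lr=2.0):
    """Fit a constant q-th quantile by subgradient descent on the
    pinball loss (the same objective behind quantile forecast heads)."""
    pred = float(y.mean())
    for _ in range(steps):
        grad = np.mean(np.where(y > pred, -q, 1.0 - q))
        pred -= lr * grad
    return pred

# Synthetic right-skewed demand sample
rng = np.random.default_rng(4)
demand = rng.gamma(shape=2.0, scale=50.0, size=5000)
lo, hi = fit_quantile(demand, 0.1), fit_quantile(demand, 0.9)
coverage = np.mean((demand >= lo) & (demand <= hi))   # roughly 80%
```

For skewed demand the interval is asymmetric around the mean, which is exactly the information a safety stock decision needs.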

Natural Language Querying

Ask the model questions in plain language: "Why is demand expected to spike in March?" and receive reasoned, data-backed explanations.

Scenario Simulation

What-if analysis: Define a crisis scenario and see how the model projects demand changes across locations, categories, and time horizons.

Planner Learning Loop

The model learns from planner overrides and adjustments, understanding real-world constraints — then explains how these corrections influenced future predictions.

See it in action

Schedule a technical deep-dive to understand how our AI models can transform your forecasting.

Request a Demo