Inflight Documentation

Simulator

Multi-fidelity prediction engine

The Simulator is the prediction engine at the heart of Inflight. It lets you test configuration changes before deploying them, using models built from your actual production data. No more guessing—know exactly what will happen before you make a change.

Why Simulation Matters

Traditional performance tuning is trial and error. The Simulator transforms it into a predictable, low-risk process.

Zero-Risk Testing (0%)

Test any configuration change without touching production systems.

Multi-Fidelity (4 modes)

Four simulation modes, from fast statistical models to full discrete event simulation.

Continuous Learning (24/7)

Models continuously calibrate against your production data.

The Prediction Gap

Before Inflight, predicting the impact of configuration changes meant either risky production deployments or synthetic tests that don't reflect reality:

Without Simulation

  • Trial and error in production
  • Load tests that don't match reality
  • Unexpected production incidents
  • Conservative changes due to fear

With the Simulator

  • Predict outcomes before deployment
  • Models built from real production data
  • Safe exploration of optimization space
  • Confident, data-driven decisions

Multi-Fidelity Simulation

Not every prediction needs the same level of detail. The Simulator automatically selects the appropriate fidelity based on the complexity of the change and the confidence required.

STATISTICAL

Fastest

Quick predictions using statistical models for simple, well-understood changes.

Best for: Minor parameter adjustments with ample historical data

HYBRID

Fast

Combines statistical models with targeted simulation for balanced accuracy and speed.

Best for: Moderate changes where some simulation adds confidence

FULL

Thorough

Complete discrete event simulation for maximum accuracy on complex scenarios.

Best for: Major changes, new configurations, or high-stakes decisions

DEGRADED

Best-effort

Provides predictions with reduced confidence when data or time is limited.

Best for: New services or when calibration data is incomplete

Automatic Fidelity Escalation

When a lower-fidelity mode produces low-confidence results, the Simulator automatically escalates to a higher-fidelity mode for better accuracy.
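The escalation loop can be sketched in Python. The mode names match the four fidelity levels above, but the confidence threshold and the `simulate` callback signature are illustrative assumptions, not Inflight's actual API:

```python
# Hypothetical sketch of automatic fidelity escalation. The 0.8 confidence
# threshold and the simulate() signature are assumptions for illustration.
FIDELITY_ORDER = ["statistical", "hybrid", "full"]

def simulate_with_escalation(change, simulate, threshold=0.8):
    """Try each fidelity mode in order, escalating while confidence is low.

    `simulate(change, mode)` must return a (prediction, confidence) pair.
    """
    prediction, confidence = None, 0.0
    for mode in FIDELITY_ORDER:
        prediction, confidence = simulate(change, mode)
        if confidence >= threshold:
            return prediction, mode
    # Every mode fell short of the threshold: degraded, best-effort result.
    return prediction, "degraded"

# Toy model where only the full simulation is confident enough:
toy = lambda change, mode: (
    {"p95_ms": 120},
    {"statistical": 0.5, "hybrid": 0.7, "full": 0.95}[mode],
)
prediction, mode = simulate_with_escalation({"heap": "2g"}, toy)
# mode == "full"; the statistical and hybrid passes were too uncertain
```

If no mode reaches the threshold, the result is still returned but flagged as degraded, mirroring the best-effort mode described above.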

Core Capabilities

The Simulator provides sophisticated prediction capabilities while remaining easy to use and understand.

Production-Based Models

Unlike synthetic load testing, the Simulator builds models from your actual production traffic patterns, workload characteristics, and resource usage.

Real traffic patterns · Actual user behavior · Production workloads · True resource usage

Automatic Model Calibration

Models continuously learn from production data, automatically recalibrating when your application behavior changes.

Continuous calibration · Drift detection · Auto-recalibration · Model versioning

Safety Validation

Every simulation runs through platform-aware safety checks that understand Kubernetes limits, cloud provider quotas, and runtime constraints.

Kubernetes awareness · Cloud quota checks · Runtime constraints · Threshold validation
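A minimal sketch of the threshold-validation idea: check a proposed resource configuration against known platform limits before simulating it. The resource names and data shapes here are hypothetical, not Inflight's configuration schema:

```python
# Hypothetical resource names; not Inflight's actual configuration schema.
def within_limits(proposed, limits):
    """Validate a proposed resource config against platform limits.

    Both arguments map resource names (e.g. "memory_mib") to numbers.
    Returns (ok, violations), where violations maps each offending
    resource to its (proposed, limit) pair.
    """
    violations = {
        name: (value, limits[name])
        for name, value in proposed.items()
        if name in limits and value > limits[name]
    }
    return (not violations, violations)

ok, violations = within_limits(
    {"memory_mib": 8192},   # proposed container memory
    {"memory_mib": 4096},   # namespace limit
)
# ok is False; violations == {"memory_mib": (8192, 4096)}
```

In practice the limits side would be populated from Kubernetes LimitRanges, cloud quotas, and runtime constraints rather than hard-coded values.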

Backtesting Validation

Predictions are validated against historical data to ensure accuracy. You can see how well the model would have predicted past scenarios.

Rolling-origin validation · Historical accuracy · Prediction vs. actual · Confidence metrics
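Rolling-origin validation is a standard backtesting technique: at each point in history, the model sees only the data up to that point and is scored on what actually happened next. A minimal sketch, assuming a simple one-step-ahead forecast function:

```python
def rolling_origin_backtest(series, predict, initial=3):
    """Rolling-origin validation of a one-step-ahead forecaster.

    At each origin, the model sees only the history up to that point
    and is scored on the next actual value. Returns the mean absolute
    error across all origins.
    """
    errors = []
    for origin in range(initial, len(series)):
        forecast = predict(series[:origin])
        errors.append(abs(forecast - series[origin]))
    return sum(errors) / len(errors)

# Naive "repeat the last value" model on a steadily rising latency series:
latencies = [100, 102, 104, 106, 108, 110]
mae = rolling_origin_backtest(latencies, predict=lambda history: history[-1])
# mae == 2.0: the series rises 2 ms per step, so each forecast lags by 2 ms
```

The same loop, run against production metrics, shows how well a model would have predicted past scenarios before you trust it with future ones.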

What Gets Predicted

The Simulator predicts comprehensive impact across multiple dimensions:

Performance Impact

  • Response time changes (p50, p95, p99)
  • Throughput predictions
  • Latency distribution shifts
  • Error rate projections

Resource Utilization

  • Memory consumption changes
  • CPU utilization impact
  • GC pause time predictions
  • Container resource usage

Stability Assessment

  • OOM risk evaluation
  • Throttling probability
  • Restart likelihood
  • Contention predictions

Confidence Metrics

  • Model confidence scores
  • Data quality indicators
  • Prediction uncertainty
  • Calibration quality

Safety Verdicts

Every simulation produces a clear verdict to guide your decision:

APPROVED

All safety thresholds met. Win probability exceeds requirements. Safe to deploy.

WARNING

Some risk factors detected. The change may work, but validate it in staging first.

REJECT

Critical issues detected. The configuration change will not achieve its intended outcome and should not be deployed.
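The three verdicts can be pictured as a simple policy over the simulation outputs. The thresholds and flag names below are illustrative assumptions, not Inflight's real decision policy:

```python
def verdict(win_probability, risk_flags, min_win=0.9):
    """Map simulation outputs to a deployment verdict.

    Thresholds and flag names are illustrative, not Inflight's real policy.
    Flags prefixed "critical:" always reject the change.
    """
    if any(f.startswith("critical:") for f in risk_flags) or win_probability < 0.5:
        return "REJECT"
    if risk_flags or win_probability < min_win:
        return "WARNING"
    return "APPROVED"

print(verdict(0.97, []))                           # APPROVED
print(verdict(0.85, ["oom_risk"]))                 # WARNING
print(verdict(0.95, ["critical:quota_exceeded"]))  # REJECT
```

The point of the three-way split is that a single low-confidence signal downgrades to WARNING rather than REJECT, while any critical safety violation rejects regardless of the win probability.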

Model Governance

Trust requires transparency. The Simulator provides complete visibility into how predictions are made:

Calibration Transparency

See which data was used to calibrate models and when

Accuracy Tracking

Historical accuracy metrics for model predictions

Version History

Complete audit trail of model changes over time

Parameter Priors

Hierarchical defaults ensure sensible starting points

Ready to Predict Before You Deploy?

Stop guessing and start knowing. See how the Simulator validates configuration changes for your services.