Performance Metrics
Benchmarking Orbital as the New State-of-the-Art
What is considered state-of-the-art (SOTA) in AI is constantly evolving, and the only meaningful way to evaluate and define it is through direct testing and comparison. In refining, petrochemicals, and other critical industries, real benchmarking remains the clearest signal of technical superiority and practical capability.
While most vendors hide behind vague promises or avoid direct comparisons altogether, we believe credibility is earned through proof. That’s why we openly benchmark Orbital against leading AI research and industry best practices across large language models, time-series forecasting, and physics-based modelling.
Across every benchmarking exercise to date, Orbital has consistently outperformed alternative systems, delivering faster results, higher accuracy, and more robust handling of complex operations.
“We believe real credibility comes from direct comparison. Orbital is consistently benchmarked against the best. It’s a bold approach, but being transparent is essential to give our users confidence in our product.”
Sam Tukra, CAIO, Applied Computing
Time-Series Models
Orbital’s AI for time-series data (e.g., sensor readings, production metrics, energy usage over time) is benchmarked for both predictive accuracy and drift robustness. We use standard error metrics and compare against cutting-edge research models to demonstrate our model’s forecasting strength and reliability under industrial conditions. A sketch of how these metrics can be computed follows the list below.
Key Metrics:
Forecast Accuracy: MAE, MSE, MAPE (short & long horizon)
Anomaly Detection: Precision, Recall, F1-score, AUC
Drift Robustness: Performance under data drift, missing values, noise injection
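To make these metrics concrete, here is a minimal computation sketch in Python, assuming NumPy arrays of ground-truth and predicted values plus binary anomaly labels; the function names and evaluation interface are illustrative stand-ins, not Orbital’s actual harness:

import numpy as np
from sklearn.metrics import (
    mean_absolute_error,
    mean_squared_error,
    precision_recall_fscore_support,
    roc_auc_score,
)

def forecast_metrics(y_true, y_pred):
    """Forecast accuracy: MAE, MSE, and MAPE for one horizon."""
    eps = 1e-8  # guard against division by zero in MAPE
    return {
        "mae": mean_absolute_error(y_true, y_pred),
        "mse": mean_squared_error(y_true, y_pred),
        "mape": float(np.mean(np.abs((y_true - y_pred) / (y_true + eps)))),
    }

def anomaly_metrics(labels, scores, threshold):
    """Anomaly detection: precision/recall/F1 at a threshold, plus AUC."""
    preds = (scores >= threshold).astype(int)
    p, r, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {"precision": p, "recall": r, "f1": f1,
            "auc": roc_auc_score(labels, scores)}

def drift_degradation(y_true, y_pred_clean, y_pred_perturbed):
    """Drift robustness: relative MSE increase when the model's inputs
    are perturbed (noise injection, masked values); lower is more robust."""
    clean = mean_squared_error(y_true, y_pred_clean)
    perturbed = mean_squared_error(y_true, y_pred_perturbed)
    return (perturbed - clean) / max(clean, 1e-12)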
Benchmark: Advanced Time-Series Forecasting Models
Orbital was benchmarked against leading research models, including physics-informed neural networks (PINNs), Fourier neural operators (FNOs), and hybrid CNN-LSTM architectures: advanced approaches designed to learn, represent, and predict complex physical-system dynamics with high fidelity.
These models were evaluated on widely recognised benchmark datasets, including Burgers’ equation, Navier-Stokes, and the Lorenz system, which test a model’s ability to accurately simulate dynamic, time-dependent systems.
Orbital outperformed the competition across every dataset, achieving higher predictive accuracy and significantly lower mean squared error (MSE) than the alternative approaches.
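As an illustration of this evaluation setup, the sketch below generates a ground-truth Lorenz trajectory with SciPy and scores a forecaster’s rollout by MSE. The model shown is a deliberately trivial persistence baseline standing in for the systems under test; the actual models and harness are not part of this page:

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz system: a standard chaotic benchmark for
    # time-dependent dynamics.
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 20.0, 2000)
sol = solve_ivp(lorenz, (0.0, 20.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8)
truth = sol.y.T  # shape (2000, 3)

# Condition on the first half of the trajectory, forecast the second half.
context, target = truth[:1000], truth[1000:]

def persistence_forecast(context, horizon):
    # Placeholder model: repeat the last observed state.
    return np.tile(context[-1], (horizon, 1))

pred = persistence_forecast(context, len(target))
print("Lorenz rollout MSE:", float(np.mean((pred - target) ** 2)))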
Upcoming Benchmarks
Informer, Autoformer, ARIMA, Prophet, Chronos, Moirai (all coming soon)
Large Language Models (LLMs)
Orbital’s domain-trained LLMs are built on a foundation of chemistry and chemical engineering expertise. Beyond general industrial knowledge, they capture reaction kinetics, catalyst behaviour, process control logic, and plant design principles. This ensures outputs are not only linguistically accurate but also technically reliable, enabling deeper reasoning across refinery operations. A sketch of how these metrics can be scored follows the list below.
Key Metrics:
Retrieval Accuracy: % correct fact retrieval from domain docs
QA Performance: Domain-specific accuracy on industrial process questions
Hallucination Rate: % fabricated/inaccurate responses
Domain Expertise Score: Expert evaluations on process reasoning tasks
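A minimal sketch of how these LLM metrics can be scored, assuming an evaluation set in which each record carries the model’s response together with expert judgements; the EvalRecord schema and the example questions are illustrative, not an actual Orbital dataset:

from dataclasses import dataclass

@dataclass
class EvalRecord:
    question: str
    retrieval_correct: bool  # did retrieval surface the right domain passage?
    answer_correct: bool     # domain-expert judgement of the final answer
    hallucinated: bool       # response contains fabricated/inaccurate claims

def score(records):
    n = len(records)
    return {
        "retrieval_accuracy": sum(r.retrieval_correct for r in records) / n,
        "qa_accuracy": sum(r.answer_correct for r in records) / n,
        "hallucination_rate": sum(r.hallucinated for r in records) / n,
    }

# Example: three judged responses from a hypothetical process-QA set.
records = [
    EvalRecord("What is a typical FCC riser outlet temperature?", True, True, False),
    EvalRecord("Which contaminants poison hydrotreating catalysts?", True, True, False),
    EvalRecord("State the reactor's design pressure.", False, False, True),
]
print(score(records))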
Upcoming Benchmarks
GPT-4.1, LLaMA 3 (coming soon)
Physics-Based Models
Orbital’s physics-based models ensure all AI outputs remain grounded in first-principles science. Using mass balances, energy conservation, and reactor dynamics, these models guarantee physically plausible predictions under varying conditions, which is critical for refinery optimisation. A sketch of one such consistency check follows the list below.
Key Metrics:
Coefficient Estimation Accuracy: Error vs. true physical parameters
Physical Law Consistency: Residual error on governing equations
Robustness to Drift/Noise: Stability under changing process conditions
Explainability: Alignment between model outputs and known physics
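As one concrete example of the Physical Law Consistency check, the sketch below scores predicted stream flows by the residual of a steady-state mass balance (total mass in equals total mass out); the unit, stream values, and tolerance are illustrative assumptions:

import numpy as np

def mass_balance_residual(inlet_flows, outlet_flows):
    # Relative residual of the steady-state mass balance
    # sum(m_in) - sum(m_out) = 0, normalised by total inlet flow.
    total_in = float(np.sum(inlet_flows))
    total_out = float(np.sum(outlet_flows))
    return abs(total_in - total_out) / max(total_in, 1e-12)

# Predicted flows (kg/s) for a hypothetical separator: one feed,
# two product streams. A physically consistent model closes the balance.
feed = np.array([12.0])
products = np.array([7.9, 4.0])

residual = mass_balance_residual(feed, products)
print(f"mass-balance residual: {residual:.3%}")  # about 0.8% imbalance
assert residual < 0.02, "prediction violates mass conservation beyond tolerance"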
Upcoming Benchmarks
Fourier Neural Operator, LLaMA 3 (coming soon)