
Performance Metrics

Benchmarking Orbital as the New State-of-the-Art

What is considered state-of-the-art (SOTA) in AI is constantly evolving. The only meaningful way to evaluate and define SOTA is through direct testing and comparison. In refining, petrochemicals, and other critical industries, real benchmarking remains the clearest signal of technical superiority and practical capability.

While most vendors hide behind vague promises or avoid direct comparisons altogether, we believe credibility is earned through proof. That’s why we openly benchmark Orbital against leading AI research and industry best practices, across large language models, time-series forecasting, and physics-based modeling.

Across every benchmarking exercise to date, Orbital consistently outperforms alternative systems, delivering faster results, higher accuracy, and more robust handling of complex operations.

We believe real credibility comes from direct comparison. Orbital is consistently benchmarked against the best. It’s a bold approach, but being transparent is essential to give our users confidence in our product.

Sam Tukra - CAIO, Applied Computing

Benchmark Category & Metrics

Time-Series Models

Orbital’s AI for time-series data (e.g. sensor readings, production metrics, energy usage over time) is benchmarked for both predictive accuracy and drift robustness. We use standard error metrics and compare against cutting-edge research models to highlight our model’s forecasting excellence and reliability in industrial conditions.

Key Metrics:

- Forecast Accuracy: MAE, MSE, MAPE (short & long horizons)
- Anomaly Detection: Precision, Recall, F1-score, AUC
- Drift Robustness: performance under data drift, missing values, and noise injection
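For concreteness, here is a minimal sketch of how metrics like these can be computed. It is illustrative only, not Orbital's internal benchmarking harness; the function names, the fixed anomaly threshold, and the drift-injection settings are assumptions.

```python
# Illustrative metric suite for time-series benchmarking (not Orbital's
# internal harness). All names and default settings are assumptions.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

def forecast_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE, MSE, and MAPE over a forecast horizon."""
    err = y_pred - y_true
    nonzero = y_true != 0  # MAPE is undefined where the target is zero
    return {
        "mae": float(np.mean(np.abs(err))),
        "mse": float(np.mean(err ** 2)),
        "mape": float(np.mean(np.abs(err[nonzero] / y_true[nonzero])) * 100.0),
    }

def anomaly_metrics(labels: np.ndarray, scores: np.ndarray, threshold: float = 0.5) -> dict:
    """Thresholded Precision/Recall/F1 plus threshold-free AUC."""
    flagged = (scores >= threshold).astype(int)
    return {
        "precision": precision_score(labels, flagged),
        "recall": recall_score(labels, flagged),
        "f1": f1_score(labels, flagged),
        "auc": roc_auc_score(labels, scores),
    }

def inject_drift(x: np.ndarray, noise_std: float = 0.1, missing_frac: float = 0.05,
                 seed: int = 0) -> np.ndarray:
    """Perturb a float series with Gaussian noise and NaN gaps to probe robustness."""
    rng = np.random.default_rng(seed)
    out = x + rng.normal(0.0, noise_std, size=x.shape)
    out[rng.random(x.shape) < missing_frac] = np.nan
    return out
```

Reporting thresholded scores alongside AUC gives a fuller picture of an anomaly detector, since AUC is independent of any single operating point.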

Benchmark: Advanced Time-Series Forecasting Models

Orbital was benchmarked against leading research models, including physics-informed neural networks (PINNs), Fourier neural operators (FNOs), and hybrid CNN-LSTM architectures, all advanced approaches designed to learn, represent, and predict the dynamics of complex physical systems with high fidelity.

These models were evaluated on widely recognized benchmark datasets, including Burgers' equation, the Navier-Stokes equations, and the Lorenz system, which test a model's ability to accurately simulate dynamic, time-dependent systems.
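To make the setup concrete, here is a minimal sketch of how a Lorenz benchmark of this kind can be constructed: integrate the governing equations to produce a ground-truth trajectory, roll a candidate model forward autoregressively, and score the rollout with MSE. The `my_model` stand-in and the integration settings are hypothetical, not the configuration behind the results below.

```python
# Minimal Lorenz benchmark sketch: ground truth from the ODE, MSE on a
# model rollout. Settings and the model stub are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

t_eval = np.linspace(0.0, 10.0, 2000)
truth = solve_ivp(lorenz, (0.0, 10.0), [1.0, 1.0, 1.0], t_eval=t_eval).y.T  # (2000, 3)

def rollout(model, initial_state, n_steps):
    """Autoregressive rollout: each prediction is fed back as the next input."""
    states = [initial_state]
    for _ in range(n_steps - 1):
        states.append(model(states[-1]))
    return np.stack(states)

# mse = float(np.mean((rollout(my_model, truth[0], len(truth)) - truth) ** 2))
```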

Orbital outperformed the competition across every dataset, achieving higher predictive accuracy and significantly lower mean squared error (MSE) compared to alternative approaches.

Mean Squared Error (MSE) Across Benchmark Datasets

| Dataset       | Orbital | PINN | FNO  | Hybrid CNN + LSTM |
|---------------|---------|------|------|-------------------|
| Burgers'      | 0.04    | 0.12 | 0.16 | 0.19              |
| Navier-Stokes | 0.05    | 0.23 | 0.20 | 0.28              |
| Lorenz        | 0.07    | 0.14 | 0.18 | 0.15              |


Upcoming Benchmarks

Coming soon: Informer, Autoformer, ARIMA, Prophet, Chronos, and Moirai.

Benchmark Category & Metrics

Large Language Models (LLM)

Orbital’s domain-trained LLMs are built on a foundation of chemistry and chemical engineering expertise. Beyond general industrial knowledge, they capture reaction kinetics, catalyst behaviour, process control logic, and plant design principles. This ensures outputs are not only linguistically accurate but technically reliable, enabling deeper reasoning across refinery operations.

Key Metrics:

- Retrieval Accuracy: % correct fact retrieval from domain docs
- QA Performance: domain-specific accuracy on industrial process questions
- Hallucination Rate: % fabricated/inaccurate responses
- Domain Expertise Score: expert evaluations on process reasoning tasks
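As an illustration, the sketch below scores an evaluation set in which each model response has been labelled by domain experts. The record fields and the 0-5 rating scale are assumptions, not Orbital's evaluation schema.

```python
# Illustrative LLM evaluation scoring over expert-labelled records.
# Field names and the rating scale are assumptions.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    retrieved_correct_fact: bool  # expert check of the retrieval step
    answer_correct: bool          # expert check of the final QA answer
    fabricated: bool              # response contains unsupported claims
    expert_score: float           # 0-5 expert rating on process reasoning

def summarise(records: list[EvalRecord]) -> dict:
    n = len(records)
    return {
        "retrieval_accuracy": sum(r.retrieved_correct_fact for r in records) / n,
        "qa_accuracy": sum(r.answer_correct for r in records) / n,
        "hallucination_rate": sum(r.fabricated for r in records) / n,
        "domain_expertise_score": sum(r.expert_score for r in records) / n,
    }
```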

Upcoming Benchmarks

Coming soon: GPT-4.1 and LLaMA 3.

Benchmark Category & Metrics

Physics-Based Models

Orbital’s physics-based models ensure all AI outputs remain grounded in first-principles science. Using mass balance, energy conservation, and reactor dynamics, these models guarantee physically plausible predictions under varying conditions, a capability critical for refinery optimisation.

Key Metrics:

- Coefficient Estimation Accuracy: error vs. true physical parameters
- Physical Law Consistency: residual error on governing equations
- Robustness to Drift/Noise: stability under changing process conditions
- Explainability: alignment between model outputs and known physics
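As a concrete example of a physical-law consistency check, the sketch below computes a steady-state mass-balance residual for a single process unit: total mass in minus total mass out, normalised by throughput. The splitter example and its flow values are hypothetical.

```python
# Illustrative physical-law consistency check: steady-state mass balance
# on one process unit. The example streams and values are hypothetical.
import numpy as np

def mass_balance_residual(inlet_flows: np.ndarray, outlet_flows: np.ndarray) -> float:
    """Relative residual |sum(in) - sum(out)| / sum(in); ~0 means consistent."""
    total_in = inlet_flows.sum()
    return float(abs(total_in - outlet_flows.sum()) / max(total_in, 1e-9))

# A model-predicted splitter (kg/h): 2.0 kg/h is unaccounted for.
pred_in = np.array([1200.0])
pred_out = np.array([730.0, 468.0])
print(mass_balance_residual(pred_in, pred_out))  # ~0.0017 -> 0.17% imbalance
```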

Upcoming Benchmarks

Coming soon: Fourier Neural Operator and LLaMA 3.

Orbital Challenge

If you’ve built a model that you believe performs better on the same tasks, we’d like to see it. Reach out and we will run the comparison.

See Orbital in action

Interrogate and instruct 100% of your data in real-time

© Applied Computing Technologies 2025

Applied Computing Technologies is a remote-first company headquartered in London, UK