The Enduring Dominance of C++ in Finance
When microseconds matter — and in trading, they often do — C++ remains the dominant choice. Matching engines at major exchanges are typically written in C++. High-frequency trading firms build the latency-critical parts of their stacks in C++. Derivatives pricing libraries that need to evaluate millions of paths in real time use C++.
The reason is straightforward: C++ gives you control over everything. Memory layout, cache utilisation, instruction selection, allocation patterns — you can optimise at a level that managed languages simply do not allow. When you are competing on speed and every nanosecond counts, that control is the competitive advantage.
That said, modern C++ (C++17, C++20, C++23) is a very different language from the C++ of the 1990s. These newer standards make the language significantly more productive and safer while retaining its performance characteristics.
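As a small illustration of that shift, the self-contained sketch below uses a few post-C++17 conveniences (std::optional, an if-statement with an initialiser, structured bindings) in place of raw-pointer returns and iterator boilerplate. The symbols and prices are made up for the example.

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

// Return std::optional instead of a raw pointer or a sentinel value
std::optional<double> find_price(const std::map<std::string, double>& prices,
                                 const std::string& symbol) {
    if (auto it = prices.find(symbol); it != prices.end()) {  // C++17 if-with-initialiser
        return it->second;
    }
    return std::nullopt;
}

int main() {
    const std::map<std::string, double> prices{{"AAPL", 150.25}, {"GOOGL", 2800.50}};

    // Structured bindings (C++17) make iterating over key/value pairs readable
    for (const auto& [symbol, price] : prices) {
        std::cout << symbol << ": " << price << '\n';
    }

    // The caller must handle the "not found" case explicitly
    if (auto px = find_price(prices, "MSFT")) {
        std::cout << "MSFT trades at " << *px << '\n';
    } else {
        std::cout << "MSFT not in the book\n";
    }
}
```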
Modern C++ for Finance
Smart Pointers: Memory Safety
Manual memory management with raw pointers (new/delete) is a leading source of bugs in legacy C++ code. Modern C++ uses smart pointers that manage object lifetimes automatically:
```cpp
#include <memory>
#include <string>
#include <utility>

class Order {
public:
    std::string symbol;
    int quantity;
    double price;

    Order(std::string sym, int qty, double px)
        : symbol(std::move(sym)), quantity(qty), price(px) {}
};

// unique_ptr: single owner, automatically freed
auto order = std::make_unique<Order>("AAPL", 100, 150.25);

// shared_ptr: multiple owners, reference counted
auto shared_order = std::make_shared<Order>("GOOGL", 50, 2800.0);

// No manual delete needed - memory is managed automatically
```
Containers and Algorithms
The Standard Template Library (STL) provides high-performance containers and algorithms:
```cpp
#include <algorithm>
#include <functional>
#include <numeric>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    // Hash map for O(1) average-case lookups
    std::unordered_map<std::string, double> prices;
    prices["AAPL"] = 150.25;
    prices["GOOGL"] = 2800.50;

    // Vectors with pre-allocated memory
    std::vector<double> returns;
    returns.reserve(252);  // Avoid reallocations as elements are added

    // Standard algorithms
    std::vector<double> daily_returns = {0.01, -0.005, 0.02, -0.01, 0.015};
    double mean = std::accumulate(daily_returns.begin(), daily_returns.end(), 0.0)
                  / daily_returns.size();
    auto max_return = *std::max_element(daily_returns.begin(), daily_returns.end());

    // Sort in descending order
    std::sort(daily_returns.begin(), daily_returns.end(), std::greater<>());
}
```
Performance Patterns for Trading
Cache-Friendly Data Structures
Cache behaviour is often the single biggest factor in low-latency C++ performance. Data that is accessed together should be stored together in memory:
```cpp
#include <cstddef>
#include <string>
#include <vector>

// Bad: Array of Structs (AoS) - poor cache utilisation for column-wise operations
struct Trade {
    std::string symbol;   // 32 bytes (typical implementation)
    double price;         // 8 bytes
    int quantity;         // 4 bytes
    char side;            // 1 byte
                          // + 3 bytes of padding
};
std::vector<Trade> trades;   // Mixed fields interleaved in memory

// Better for analytics: Struct of Arrays (SoA) - cache friendly
struct TradeData {
    std::vector<double> prices;      // All prices contiguous
    std::vector<int>    quantities;  // All quantities contiguous
    std::vector<char>   sides;       // All sides contiguous
};

// Summing all prices reads contiguous memory, so the CPU prefetcher is happy
double sum_prices(const TradeData& data) {
    double total = 0;
    for (std::size_t i = 0; i < data.prices.size(); ++i) {
        total += data.prices[i];
    }
    return total;
}
```
This can produce 5-10x speedups for operations that scan a single field across many records.
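The exact factor depends on the record layout and the hardware. A rough way to check on your own machine is a simple std::chrono timing loop; the self-contained sketch below (with an illustrative element count) sums the price field through both layouts and prints the elapsed time for each.

```cpp
#include <chrono>
#include <iostream>
#include <numeric>
#include <random>
#include <string>
#include <vector>

struct Trade {            // AoS layout from the example above
    std::string symbol;
    double price;
    int quantity;
    char side;
};

int main() {
    constexpr std::size_t n = 5'000'000;
    std::mt19937_64 rng{42};
    std::uniform_real_distribution<double> dist{90.0, 110.0};

    std::vector<Trade> aos(n);
    std::vector<double> soa_prices(n);
    for (std::size_t i = 0; i < n; ++i) {
        double px = dist(rng);
        aos[i].price = px;
        soa_prices[i] = px;
    }

    auto time_sum = [](auto&& body) {
        auto start = std::chrono::steady_clock::now();
        double total = body();
        auto end = std::chrono::steady_clock::now();
        std::cout << "sum=" << total << "  "
                  << std::chrono::duration<double, std::milli>(end - start).count()
                  << " ms\n";
    };

    time_sum([&] {   // AoS: every load drags symbol/quantity/side through the cache too
        double t = 0;
        for (const auto& tr : aos) t += tr.price;
        return t;
    });

    time_sum([&] {   // SoA: a contiguous run of doubles only
        return std::accumulate(soa_prices.begin(), soa_prices.end(), 0.0);
    });
}
```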
Lock-Free Programming
For the highest-performance concurrent systems, C++ offers atomic operations that avoid mutex overhead:
```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

class AtomicCounter {
    std::atomic<int64_t> count_{0};
public:
    void increment() { count_.fetch_add(1, std::memory_order_relaxed); }
    int64_t get() const { return count_.load(std::memory_order_relaxed); }
};

// Lock-free SPSC (Single Producer, Single Consumer) queue
// Common pattern in trading systems for passing data between threads
template<typename T, size_t Size>
class SPSCQueue {
    std::array<T, Size> buffer_;
    std::atomic<size_t> head_{0};
    std::atomic<size_t> tail_{0};
public:
    bool push(const T& item) {
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) % Size;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // Queue full
        buffer_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }
    // ... pop() is shown in the sketch below
};
```
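For completeness, the consumer side might look like the sketch below. It sits inside the SPSCQueue class body and mirrors push(), with the roles of head_ and tail_ swapped.

```cpp
// Inside SPSCQueue: consumer side, mirroring push()
bool pop(T& item) {
    size_t tail = tail_.load(std::memory_order_relaxed);
    if (tail == head_.load(std::memory_order_acquire))
        return false;                                   // Queue empty
    item = buffer_[tail];
    tail_.store((tail + 1) % Size, std::memory_order_release);
    return true;
}
```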
Template Metaprogramming
C++ templates let you write generic code that the compiler specialises for specific types — with zero runtime overhead:
```cpp
template<typename PricingModel>
class PricingEngine {
    PricingModel model_;
public:
    double price(const Instrument& inst, const MarketData& data) {
        return model_.calculate(inst, data);
    }
};

// The compiler generates optimised code for each model type
PricingEngine<BlackScholes> bs_engine;
PricingEngine<MonteCarlo>   mc_engine;
PricingEngine<BinomialTree> tree_engine;

// No virtual function overhead - the model is known at compile time
```
This is similar to the Strategy design pattern but resolved entirely at compile time.
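For contrast, the runtime version of the same idea is sketched below, using the same hypothetical Instrument and MarketData types. It dispatches through a virtual interface on every call, which adds an indirect branch and makes inlining harder.

```cpp
#include <memory>
#include <utility>

// Runtime Strategy: the model is chosen at run time behind a virtual interface
class IPricingModel {
public:
    virtual ~IPricingModel() = default;
    virtual double calculate(const Instrument& inst, const MarketData& data) const = 0;
};

class RuntimePricingEngine {
    std::unique_ptr<IPricingModel> model_;
public:
    explicit RuntimePricingEngine(std::unique_ptr<IPricingModel> model)
        : model_(std::move(model)) {}

    double price(const Instrument& inst, const MarketData& data) const {
        return model_->calculate(inst, data);  // vtable lookup on every call
    }
};
```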
C++ vs Rust
Both languages target the same performance tier. The tradeoffs:
| Aspect | C++ | Rust |
|---|---|---|
| Performance | Slightly more optimisation options | Equivalent for most workloads |
| Safety | Manual discipline required | Compiler-enforced |
| Ecosystem | Decades of libraries | Growing rapidly |
| Hiring | Larger talent pool | Smaller but growing |
| Learning curve | Steep (many footguns) | Steep (borrow checker) |
| Legacy code | Massive existing codebases | Mostly greenfield |
For new systems, Rust is increasingly competitive. For maintaining and extending existing systems — which is the majority of work in finance — C++ knowledge remains essential.
Where C++ Fits in the Stack
Most teams use C++ for the hot path only: the matching engine, the market data handler, the signal processor. Everything else — reporting, monitoring, analysis, configuration — uses higher-level languages like Python.
The interop story is important: C++ libraries can be called from Python (via pybind11), from Rust (via FFI), and from virtually any other language. This lets you put C++ where it matters most while keeping development velocity high everywhere else. For understanding when you need even more performance, see our guide on hardware acceleration.
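As a flavour of what that looks like in practice, here is a minimal pybind11 binding sketch. The module name fastquant and the mean_return function are illustrative, not part of any real library.

```cpp
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>   // automatic conversion between Python lists and std::vector
#include <numeric>
#include <vector>

// Hypothetical hot-path function we want to expose to Python
double mean_return(const std::vector<double>& daily_returns) {
    if (daily_returns.empty()) return 0.0;
    return std::accumulate(daily_returns.begin(), daily_returns.end(), 0.0)
           / daily_returns.size();
}

PYBIND11_MODULE(fastquant, m) {
    m.doc() = "C++ hot-path functions exposed to Python";
    m.def("mean_return", &mean_return, "Mean of a vector of daily returns");
}
```

Once compiled as a Python extension (for example via CMake or setuptools), it is imported like any other module: `import fastquant` followed by `fastquant.mean_return([0.01, -0.005, 0.02])`.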