A modern multi-threaded C++ HTTP server implementation with routing, testing, and CI-driven quality checks.
- Features
- Benchmark Results
- Benchmark Comparison
- Usage
- Project Structure
- Using Docker
- Using Nix
- Building the Source Code
- Running the Server
- Running the Tests
- Code Coverage
- CI / GitHub Actions
- Project Management
- Key Skills/Lessons Learned
- Challenges
## Features

- Route compilation and matching with support for path parameters
- Easily map functions to a new route
- Multi-threading workloads to handle multiple requests at once
- Automated test suite using GoogleTest (gtest) with CI/CD pipeline
- Code coverage reporting using lcov
- Automated code coverage with CI/CD pipeline
- GitHub Actions CI pipeline
  - Build verification
  - Automated test execution
  - Coverage report generation
- Docker for reproducible builds and containerization
- Nix flake for reproducible development environments
- Project tracking using GitHub Projects (Kanban board)
## Benchmark Results

This project was benchmarked using k6 to evaluate HTTP request throughput, latency, and stability under sustained load. It sustained ~29,600 RPS locally with p99 < 7 ms (using a Ryzen 5 5600G). Keep in mind that this was a local benchmark that does not reflect real networks, and it does not involve TLS, auth, databases, or disk usage. Check out benchmarks/benchmark-results.txt to see the benchmark results captured with k6.
- Load generator: k6 (local execution)
- Scenario: constant_request_rate
- Target rate: 30,000 requests/second
- Duration: 10 seconds
- Max VUs: 20,000 (only reached a maximum of 945 VUs)
- Client machine: AMD Ryzen 5 5600G (local Linux system)
The load generator and server were executed locally. As a result, the benchmark reflects the combined limits of the server and the client machine.
- The server maintained low and stable latency even at very high request rates.
- No errors or timeouts occurred, indicating the server was not saturated.
- The test experienced dropped iterations, which suggests the benchmark was limited by the local load generator and PC hardware, not the server itself.
As such, these results demonstrate that the server can reliably handle at least ~29.6k requests per second on the tested hardware.
## Benchmark Comparison

To make the comparison with cpp_http_server fairer, I made sure that the Node Express server also implemented multi-threading.
Check out benchmarks/node_server to see the Node Express server implementation.
The cpp_http_server achieves ~1.45x the throughput, about a 44.9% higher request rate. Nice!

Calculation: 29,675.882135 RPS / 20,468.045127 RPS ≈ 1.449
| Metric | C++ Server | Node + Express |
|---|---|---|
| Target RPS | 30,000.00 | 30,000.00 |
| Actual RPS | 29,675.882135 | 20,468.045127 |
| Total requests | 296,824 | 205,983 |
| Dropped iterations | 3,180 | 123,126 |
| Avg latency | 1.12 ms | 38.99 ms |
| Median latency | 404.76 µs | 30.25 ms |
| p90 latency | 3.24 ms | 81.37 ms |
| p95 latency | 4.3 ms | 102.58 ms |
| p99 latency | 6.58 ms | 149.76 ms |
| Max latency | 22.7 ms | 318.66 ms |
| Error rate | 0.00% | 0.00% |
| Peak VUs | 332 | 1,547 |
## Usage

```cpp
#include "HttpServer.h"

#include <cstdlib>
#include <iostream>
#include <sstream>

int main() {
  HttpServer server{};

  server.get_mapping(
      "/test", [](HttpServer::Request &req, HttpServer::Response &res) {
        res.body = "testing new api route";
      });

  server.post_mapping(
      "/test2/{id}/foo/{user}",
      [](HttpServer::Request &req, HttpServer::Response &res) {
        std::stringstream ss;
        try {
          ss << "{\"id\":\"" << req.path_params.get_path_param("id").value() << "\","
             << "\"user\":\"" << req.path_params.get_path_param("user").value() << "\"}";
          res.body = ss.str();
        } catch (const std::bad_optional_access &e) {
          res.body = "could not get path parameter";
        }
      });

  try {
    server.listen(3490); // use any port you like
  } catch (const std::exception &err) {
    std::cerr << err.what() << '\n';
    return EXIT_FAILURE;
  }
}
```
## Project Structure

```
├── src/         # Server implementation
├── include/     # Public headers
├── test/        # gtest-based test suite
├── demo/        # demo of the library implementation
├── benchmarks/  # k6 performance benchmark outputs with a node server comparison
├── .github/     # GitHub Actions workflows
└── README.md
```
## Using Docker

```shell
# build the docker image
docker build -t cpp_http_server .

# run a container in interactive mode
# discards after use
docker run --rm -it cpp_http_server
```

## Using Nix

The flake.nix can only be used if your system has the Nix package manager with flakes enabled.

```shell
nix develop
```

## Building the Source Code

This project uses Conan for dependency management and CMake for builds.

```shell
# release build
conan build . --build=missing -s build_type=Release

# debug build
conan build . --build=missing -s build_type=Debug
```

## Running the Server

Command to run the HTTP server implementation example:

```shell
# in release mode
./build/Release/demo/server_demo_bin

# in debug mode
./build/Debug/demo/server_demo_bin
```

## Running the Tests

Command to execute the automated test suite:

```shell
# run tests
./build/Debug/server_tests_bin
```

## Code Coverage

Coverage reports are generated using lcov.

```shell
make -C build/Debug coverage

# specific to Linux
xdg-open build/Debug/CMakeFiles/server_library.dir/src/out/index.html
```

## CI / GitHub Actions

The project includes a GitHub Actions workflow that automatically:
- Builds the project
- Runs the gtest suite
- Generates an lcov coverage report
## Project Management

Development tasks and progress are tracked using a GitHub Projects Kanban board.
## Key Skills/Lessons Learned

- Designing extensible routing systems in C++ requires careful separation between route compilation and request matching
- Understanding of TCP and port handling on Linux
- Multi-threading and atomic operations on shared memory
- Parsing client request strings into their individual parts (HTTP method, route, request body)
- Implementing test-driven development (TDD) to improve confidence in changes and reduce regressions when refactoring
- Setting up code coverage tools (lcov) to provide insight into which portions of the codebase are covered by tests
- Setting up CI automation with GitHub Actions to automate testing, generate code coverage reports, and enforce consistent quality for PRs
- Better understanding of HTTP REST standards, such as knowing which HTTP methods should have request bodies ignored
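As one illustration of the request-parsing lesson above, an HTTP request line can be split into its method, target, and version without copying. This is a minimal sketch, not the project's actual parser; `RequestLine` and `parse_request_line` are hypothetical names introduced here:

```cpp
#include <cassert>
#include <optional>
#include <string_view>

// Hypothetical sketch: split an HTTP request line ("GET /test HTTP/1.1")
// into method, target, and version. std::string_view means each piece is
// just a window into the original buffer; no std::string copies are made.
struct RequestLine {
    std::string_view method;
    std::string_view target;
    std::string_view version;
};

inline std::optional<RequestLine> parse_request_line(std::string_view line) {
    const auto first = line.find(' ');
    if (first == std::string_view::npos) return std::nullopt;
    const auto second = line.find(' ', first + 1);
    if (second == std::string_view::npos) return std::nullopt;
    return RequestLine{
        line.substr(0, first),                      // e.g. "GET"
        line.substr(first + 1, second - first - 1), // e.g. "/test"
        line.substr(second + 1)                     // e.g. "HTTP/1.1"
    };
}
```

Returning `std::nullopt` on a malformed line lets the caller reject the request instead of working with garbage values.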
C++ specific lessons learned:
- Multi-threading and atomic operations on shared memory
- Creating atomic data structures (e.g. Atomic Queues)
- Using Conan to simplify dependency management across environments
- Learning CMake
- Creating functions in CMake to automatically generate code coverage reports
- Implementing RAII to improve memory safety and ensure automatic cleanup
- Implementing more modern C++ practices:
  - std::thread, lambdas for route handlers, RAII-style object lifetime, ...
  - exceptions instead of C-style exits, thread sleeping (std::this_thread::sleep_for)
  - Using std::string_view to avoid copying an entire std::string object
- Implementing C++ practices while utilizing the Linux socket library written in C
  - I had to encapsulate many parts of the C-specific library functions in RAII-style C++ objects (e.g. closing sockets before destruction)
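The atomic-queue lesson above can be sketched with a classic single-producer/single-consumer ring buffer. This is a hedged illustration, not the project's actual queue; `SpscQueue` is a hypothetical name, and the design only supports one producer thread and one consumer thread:

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <optional>

// Hypothetical sketch: a fixed-capacity SPSC ring buffer. Head and tail are
// std::atomic; acquire/release ordering ensures the consumer only reads a
// slot after the producer has published it (and vice versa for reuse).
template <typename T, std::size_t N>
class SpscQueue {
    std::array<T, N> buf_{};
    std::atomic<std::size_t> head_{0}; // next slot to pop (consumer-owned)
    std::atomic<std::size_t> tail_{0}; // next slot to push (producer-owned)

public:
    bool push(const T &item) {
        const auto tail = tail_.load(std::memory_order_relaxed);
        const auto next = (tail + 1) % N;
        if (next == head_.load(std::memory_order_acquire)) return false; // full
        buf_[tail] = item;
        tail_.store(next, std::memory_order_release); // publish the slot
        return true;
    }

    std::optional<T> pop() {
        const auto head = head_.load(std::memory_order_relaxed);
        if (head == tail_.load(std::memory_order_acquire)) return std::nullopt; // empty
        T item = buf_[head];
        head_.store((head + 1) % N, std::memory_order_release); // free the slot
        return item;
    }
};
```

One slot is kept empty to distinguish "full" from "empty", so a queue of capacity N holds at most N-1 items.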
## Challenges

- Designing a routing system that allows a programmer to easily map a function pointer to a route
- Designing the routing system to pick up path parameter values from client requests
- Refactoring the routing system to allow for path parameters
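The path-parameter challenge above boils down to comparing a compiled route pattern against a request path segment by segment. This is a simplified sketch of the idea, not the project's actual router; `split_segments` and `match_route` are hypothetical names:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: split a path like "/test2/42/foo/alice" into
// ["test2", "42", "foo", "alice"].
static std::vector<std::string> split_segments(const std::string &path) {
    std::vector<std::string> out;
    std::stringstream ss(path);
    std::string seg;
    while (std::getline(ss, seg, '/'))
        if (!seg.empty()) out.push_back(seg);
    return out;
}

// Match a request path against a pattern such as "/test2/{id}/foo/{user}".
// Literal segments must match exactly; {name} segments capture the request's
// value into `params`. Returns true on a full match.
bool match_route(const std::string &pattern, const std::string &path,
                 std::map<std::string, std::string> &params) {
    const auto pat = split_segments(pattern);
    const auto req = split_segments(path);
    if (pat.size() != req.size()) return false;
    for (std::size_t i = 0; i < pat.size(); ++i) {
        if (pat[i].size() > 1 && pat[i].front() == '{' && pat[i].back() == '}')
            params[pat[i].substr(1, pat[i].size() - 2)] = req[i]; // capture
        else if (pat[i] != req[i])
            return false; // literal segment mismatch
    }
    return true;
}
```

In a real router the pattern would be compiled to segments once at registration time rather than on every request, which is the compilation/matching separation mentioned under lessons learned.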


