It is a common belief that Rust APIs are among the most performant because of the language's semantics and its ownership-based memory management. Rust is often compared to C++ in terms of performance. Another aspect that makes me curious is how Rust compares with Golang. Can I gather some numbers to help with a comprehensive evaluation? So I decided to establish a baseline for a fairly simple API written in Rust. This article highlights the performance results.
Disclaimer: Do not rely on just this article for making decisions about Golang and Rust. This is not an A vs. B article. **The intention is to provide a few data points for curiosity when one must evaluate.** These two languages may have different use cases in the real world, and a careful evaluation of those use cases must be done before choosing either one.
I would recommend reading an article I wrote a few days back comparing Golang and Rust; that article is the motivation for this one. Golang, being a highly productive systems language, is often compared with Rust on performance. But what is the real performance difference between Golang and Rust? Can we quantify it? That's why I wanted to see this for myself.
The setup is divided into two phases, each with its own test.
The first setup I created was a new REST API in Rust. It's a simple API that takes a JSON input, deserializes it, validates it, and then returns a response.
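The article doesn't show the server code itself, so here is a minimal sketch of what such an endpoint could look like. It assumes the axum framework with tokio and serde; the struct names, the handler, and the allocation check are my own assumptions, shaped to match the payload used in the load test below.

```rust
// Assumed dependencies: axum = "0.7", tokio = { version = "1", features = ["full"] },
// serde = { version = "1", features = ["derive"] }
use axum::{http::StatusCode, routing::post, Json, Router};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Variant {
    name: String,
    allocation_percent: f64,
}

#[derive(Serialize, Deserialize)]
struct Experiment {
    name: String,
    variants: Vec<Variant>,
}

// Deserialize the JSON body, run a simple validation, and echo the experiment back.
async fn create_experiment(Json(e): Json<Experiment>) -> Result<Json<Experiment>, StatusCode> {
    let total: f64 = e.variants.iter().map(|v| v.allocation_percent).sum();
    if (total - 100.0).abs() > 0.001 {
        return Err(StatusCode::BAD_REQUEST);
    }
    Ok(Json(e))
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/experiments", post(create_experiment));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```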
The second setup adds computing an MD5 hash on every request, to see how that impacts performance. I believe that this slightly CPU-intensive operation will have some impact on API performance. I may be wrong!
I Dockerize this build and then run a performance test by starting up a new VM on GCP. This ensures I can standardize some settings and repeat the exact same test for the same API in Golang.
The Dockerfile for my Rust application is a simple one:
FROM rust:1.63.0 AS build
WORKDIR /src/openab
COPY . .
RUN cd management-server && cargo install --path .
RUN ls -al /usr/local/cargo/bin

FROM debian:stable-slim
COPY --from=build /usr/local/cargo/bin/management-server /bin
CMD ["/bin/management-server"]
The machine we will fire all the requests from is a separate Docker instance with the same machine configuration. We'll use k6 to generate the load. Below is the script and the command to run the test.
import http from 'k6/http';

export default function () {
  const url = 'http://x.x.x.x:3000/experiments';
  const payload = JSON.stringify({
    "name": "new_home_page",
    "variants": [
      {
        "name": "blue_button",
        "allocation_percent": 50.0
      },
      {
        "name": "red_button",
        "allocation_percent": 50.0
      }
    ]
  });

  const params = {
    headers: {
      'Content-Type': 'application/json',
    },
  };

  http.post(url, payload, params);
}
k6 run --vus 3000 --iterations 1000000 script.js
The test uses a 2-core, 4 GB RAM e2-medium machine. We'll run the test for 1 million requests with 3000 virtual users concurrently hitting our server. This should be a significant load to test our performance.
Below are the results for our first test.
Looking at the results, it felt pretty good. With just 2 cores and 4 GB of RAM, we were able to achieve a throughput of 9.5K requests/second with 3000 virtual users. CPU peaked at 83% on the machine.
For the second scenario, I updated the codebase to generate an MD5 hash of the experiment name concatenated with the Unix timestamp. I replace the experiment name with the hash and return that in the response.
let digest = md5::compute(format!("{}-{}", e.name, since_the_epoch.as_secs()));
e.name = format!("{:x}", digest);
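For context, here is a rough sketch of how those two lines might sit in a handler. The article only shows the two hashing lines; the surrounding function, the `Experiment` struct, and the way `since_the_epoch` is computed are my assumptions (the md5 crate and std's `SystemTime`).

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical helper wrapping the two lines above; the function name and
// Experiment struct are assumptions, not from the original codebase.
fn hash_experiment_name(e: &mut Experiment) {
    // Seconds since the Unix epoch, concatenated with the name before hashing.
    let since_the_epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the Unix epoch");

    // MD5 of "<name>-<unix seconds>", rendered as a lowercase hex string.
    let digest = md5::compute(format!("{}-{}", e.name, since_the_epoch.as_secs()));
    e.name = format!("{:x}", digest);
}
```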
I run the same test with the exact same setup.
The results are quite comparable. Even with the added MD5 hash, throughput dropped by only about 30 requests/second. There is a very small impact on latency (response times), but that's attributable to the hash computation.
It was interesting to see the results, and I'm quite happy with the throughput. 9.5K requests/second from 3000 virtual users, with JSON serialization and deserialization, is a particularly good baseline for an e2-medium machine. One of the things I noticed was that CPU usage was consistent and there weren't any spikes. There isn't much variance in request latencies or CPU usage with 3000 virtual users simultaneously hitting our service. The results are very predictable.
That's all for the Rust baseline. In the next article, I'll write the exact same program in Golang and try to get baseline numbers for that API. It will be interesting to see what happens. I'm excited about this one!