A team of 3 developers built a neat platform called Spare Cores that makes cloud instance pricing more transparent. A deepdive on how exactly they did it.
Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover topics related to Big Tech and startups through the lens of engineering managers and senior engineers. In this article, we cover one section from last week’s The Pulse issue. To get full issues twice a week, subscribe here.
I came across an interesting, useful product for backend-heavy applications.
There is an increasing number of cloud providers offering the ability to rent virtual machines, the largest being AWS, GCP, and Azure. Other popular services include Oracle Cloud Infrastructure (OCI), Germany-based Hetzner, France-headquartered OVH, and Scaleway. Virtual machine pricing across these providers can get confusing – and wildly different!
A startup called Spare Cores attempts to help compare prices between AWS, GCP, Azure and Hetzner by monitoring offerings in close to real time. The name comes from the concept of “spare cores”: currently unused machines that can be reclaimed at any time, which cloud providers tend to offer at a steep discount to keep server utilization high.
Spare Cores attempts to make it easier to compare prices across cloud providers. Source: Spare Cores.
Interested in how the site works, and what the business model is for a service like this, I reached out to Spare Cores founder Gergely Daróczi, who shared in-depth details about the company, including lots of specifics about the tech stack.
In this article, we cover:
Funding and team size. A €150K ($165K) grant, three people, and 10 months to build it.
Tech stack. Python, Angular, SSR, SQLite, DuckDB, Cockroach DB, and many others.
Benchmarking tools. Using tools like stress-ng, Lmbench, wrk, binserve and other utilities to get a sense of performance characteristics of virtual machines.
The cost of benchmarking. Benchmarking at this scale cost the team about $10K, plus the occasional thousand-dollar mistake, which they are also open about.
Creating a viable business from cloud benchmarking. Offering a free service alone won’t pay the bills when the grant runs out: the team plans to create a container-as-a-service (CaaS) product, and is also exploring a modest seed round of around $250K.
As always, I have not been paid to write about this company and have no affiliation with it – see more in my ethics statement.
1. Funding and team size
The company got started thanks to a €150K ($165K) EU grant, the NGI Search grant. The company receives €150K in 2024, the maximum amount a company can get within this category. The grant is designed to “support entrepreneurs, tech-geeks, developers, and socially engaged people, who are capable of challenging the way we search and discover information and resources on the internet”. The team is tiny: only three people – one front-end dev and two part-time backend devs.
How the product works: they currently monitor four cloud providers (AWS, GCP, Hetzner Cloud, and Azure). The solution has four parts:
Checking prices: spot prices are scheduled to update every five minutes. Server details, storage, traffic, and IPv4 pricing are updated hourly. The current database includes 2,000 server types in 130 regions and 340 zones. This means about 275,000 up-to-date server prices, and around 240,000 benchmark scores.
Storing data: data collected is stored to allow for historical comparisons. The historical dataset is over 20M records at the time of writing!
Benchmarking: newly identified server types – or ones whose benchmarks need refreshing to avoid data becoming stale – get a benchmark run started on them. Benchmarking tasks are executed sequentially. Results are stored in git and in their database, together with benchmarking metadata.
Visualizing the data: a frontend that allows querying of live and historical data.
4 cloud providers across 100+ regions end up with more than 100,000 different server prices. Source: Spare Cores
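To make the price-checking step above more concrete, here is a minimal sketch of what polling AWS spot prices on a schedule might look like. This is not Spare Cores’ actual code: the instance types, region, and field names are chosen purely for illustration.

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python


def fetch_spot_prices(region: str, instance_types: list[str]) -> list[dict]:
    """Fetch the latest Linux spot prices for the given instance types in one region."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_spot_price_history(
        InstanceTypes=instance_types,
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.now(timezone.utc) - timedelta(minutes=5),
    )
    return [
        {
            "instance_type": item["InstanceType"],
            "zone": item["AvailabilityZone"],
            "price_usd_per_hour": float(item["SpotPrice"]),
            "observed_at": item["Timestamp"],
        }
        for item in response["SpotPriceHistory"]
    ]


# Illustrative usage: a scheduler (cron, GitHub Actions, etc.) would call this
# every five minutes and append the rows to a price history table.
if __name__ == "__main__":
    for row in fetch_spot_prices("us-east-1", ["m7i.large", "c7g.xlarge"]):
        print(row)
```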
2. Tech stack
Backend:
Most tooling is written in Python, covering extract-transform-load (ETL), server management, and benchmarking
APIs utilize packages such as SQLAlchemy (SQL toolkit and Object Relational Mapper), SQLModel (interacting with SQL databases from Python code, as Python objects), Alembic (migration tools for SQLAlchemy), Pydantic (data validation library), Rich (library for writing rich text), FastAPI (high-performance web framework), and Typer (a library for building command line interface applications).
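As a rough illustration of how some of these pieces fit together, here is a minimal, hypothetical sketch of a SQLModel table served via a FastAPI endpoint. The model fields and endpoint path are invented for the example and are not taken from the Spare Cores codebase.

```python
from fastapi import FastAPI
from sqlmodel import Field, Session, SQLModel, create_engine, select


class ServerPrice(SQLModel, table=True):
    """Hypothetical table of current server prices; not the actual Spare Cores schema."""
    id: int | None = Field(default=None, primary_key=True)
    vendor: str
    instance_type: str
    region: str
    price_usd_per_hour: float


engine = create_engine("sqlite:///prices.db")
SQLModel.metadata.create_all(engine)

app = FastAPI()


@app.get("/prices/{vendor}")
def list_prices(vendor: str) -> list[ServerPrice]:
    # Return every stored price for a single cloud vendor.
    with Session(engine) as session:
        prices = session.exec(select(ServerPrice).where(ServerPrice.vendor == vendor)).all()
        return list(prices)
```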
Web frontend:
Angular 17 with server-side rendering support (SSR).
Monitoring the public-facing components and status page: BetterStack.
Other infrastructure:
Primarily AWS (S3 for cloud object storage, Parameter Store for hierarchical configuration and secrets storage, Elastic Container Service (ECS) for container deployment and orchestration)
The team manages AWS via the infrastructure-as-code tool Pulumi (a rough sketch of what Pulumi code looks like follows this list).
Heavy use of GitHub Actions for things like getting warehouse data from vendor APIs, starting cloud servers, running benchmarks, processing results, and cleaning up after runs.
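For readers unfamiliar with Pulumi: it lets you declare cloud resources in a general-purpose language such as Python. A minimal, hypothetical sketch of defining an S3 bucket and an ECS cluster might look like the below; the resource names are illustrative, not the team’s actual stack.

```python
import pulumi
import pulumi_aws as aws

# Bucket for storing benchmark artifacts (name is illustrative).
artifacts = aws.s3.Bucket("benchmark-artifacts")

# ECS cluster where benchmark containers could be scheduled.
cluster = aws.ecs.Cluster("benchmark-cluster")

# Export the resulting identifiers so other tooling can pick them up.
pulumi.export("artifacts_bucket", artifacts.bucket)
pulumi.export("ecs_cluster_arn", cluster.arn)
```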
3. Benchmarking tools
Spare Cores is in the business of benchmarking virtual machines. They use GitHub Actions and Pulumi templates to kick off benchmark tasks: the plumbing code to start benchmarking can be found in the sc-runner repo. Benchmarking results for each instance type are stored in the sc-inspector-data repo, together with the benchmarking task hash and other metadata.
There are several useful open source benchmarking tools the team uses (a rough sketch of invoking a couple of them follows the list):
stress-ng: a stress-testing library with 350+ stress tests – including 80+ CPU stress tests and 20+ virtual memory stress tests.
OpenSSL: the cryptography and SSL/TLS toolkit comes with a built-in performance benchmarking capability.
Lmbench: tools for performance analysis of UNIX/POSIX systems. Includes bandwidth benchmarks (such as cached file read, memory read, memory write) and latency benchmarks (process creation, memory read latency, file system create and delete latency, and others).
binserve: a fast static web server, used to benchmark web serving performance.
Geekbench 6: a cross-platform performance benchmark
Lossless compression algorithms: running popular ones to benchmark performance for compression and decompression. Ones executed include gzip (one of the most popular ones), bzip2 (compresses a bit more than gzip), bzip3 (a successor to bzip2), brotli (developed by Google), zstd (developed by Facebook), zpaq (performs especially well when duplicate files are present) and lz4 (an algorithm aiming for a midway tradeoff between compression and decompression speed).
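Here is the sketch referenced above of how such command-line tools can be driven from Python. This is a guess at the general approach, not Spare Cores’ actual benchmarking harness; the flags used are standard stress-ng and OpenSSL options.

```python
import subprocess


def run_stress_ng_cpu(seconds: int = 60) -> str:
    """Run stress-ng CPU stressors on all cores and return the metrics output."""
    result = subprocess.run(
        # --cpu 0 starts one worker per online CPU; --metrics-brief prints bogo-ops stats.
        ["stress-ng", "--cpu", "0", "--timeout", f"{seconds}s", "--metrics-brief"],
        capture_output=True,
        text=True,
        check=True,
    )
    # The metrics lines may land on stdout or stderr depending on version, so keep both.
    return result.stdout + result.stderr


def run_openssl_speed(algorithm: str = "aes-256-gcm") -> str:
    """Run OpenSSL's built-in speed benchmark for one cipher."""
    result = subprocess.run(
        ["openssl", "speed", "-evp", algorithm],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    print(run_stress_ng_cpu(10))
    print(run_openssl_speed())
```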
4. The cost of benchmarking
Given the team has relatively little funding, how much does infrastructure cost? The team shared:
“Freshly benchmarking a new cloud provider is the most expensive – in the realm of up to a few thousand dollars. This is because we need to start all the server types for a couple of hours to run all our benchmarks. After the initial run, the cost depends on the number of newly released instance types for the vendor. Also, whenever we add or update benchmark implementations, we need to run those again.
We’ve spent about $10K on cloud compute in 2024. The “regular” monthly bill is usually a tiny fraction of the cloud provider onboarding costs. Most of our infrastructure cost is thankfully covered by credits generously provided by the cloud vendors thanks to our startup’s involvement in the NVIDIA Inception program.”
Like most startups, Spare Cores also made their own “expensive mistake” while building the product:
“We accidentally accumulated a $3,000 bill in 1.5 days. Our GitHub Actions script used our standard Pulumi templates to provision a massive server that costs $200 per hour to operate – one we were using to do a benchmark. However, this machine did not boot with the usual Ubuntu image.
The way our scripts work is that once a machine boots up, we wait for it to commit metadata to the sc-inspector-data repository. Then we wait for the actual data and/or final metadata (e.g. task error) to be committed, and we shut down the machine.
You can already see the problem: the initial commit never happened! We discovered the machine still running after a day and a half. We immediately shut it down, but were now faced with a large bill for a machine that had sat idle.
In the end, this was a $1,500 lesson coming out of our pocket to not leave virtual machines running without doing any useful work. The cost was reduced from $3,000 thanks to our vendor being helpful and offering a 50% discount – after acknowledging the technical issue: the machine could not boot an official image. Still, the hit was painful because back then, we had not yet gotten the green light for startup cloud compute credits.”
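One generic guard against this failure mode is a hard deadline: if a machine never commits its startup metadata within some window, terminate it regardless. The sketch below is purely illustrative and not the fix the team actually shipped; the helper functions are hypothetical stand-ins for checking the sc-inspector-data repo and calling the vendor API.

```python
import time


# Hypothetical helpers: in a setup like Spare Cores', these would check the
# sc-inspector-data repository and call the cloud vendor's API respectively.
def metadata_committed(instance_id: str) -> bool:
    ...


def terminate_instance(instance_id: str) -> None:
    ...


def wait_or_kill(instance_id: str, deadline_seconds: int = 1800, poll_seconds: int = 60) -> bool:
    """Wait for the instance to report in; terminate it if the deadline passes."""
    started = time.monotonic()
    while time.monotonic() - started < deadline_seconds:
        if metadata_committed(instance_id):
            return True
        time.sleep(poll_seconds)
    # The machine never reported in: stop paying for it.
    terminate_instance(instance_id)
    return False
```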
5. Creating a viable business from cloud benchmarking
I really liked the idea of Spare Cores because pricing comparison across cloud providers is increasingly useful. However, it also sounds like a business that’s hard to sustain for long, because it could be difficult to monetize and to turn into a profitable small business. I asked Gergely at Spare Cores about this and he shared their plans:
“We are approaching the second major milestone of the NGI Search grant, but there is still plenty to accomplish by the end of the year. We plan to add more features:
Integrate further vendors
Support more application-specific benchmarks, such as:
Database benchmarking for cloud providers, including RDBMS and key-value stores
Elasticsearch benchmarking
Benchmarking common webservice workloads, such as JVM, Node.js, Python, and PHP
ML model training benchmarks like GBM and random forest implementations
LLM inference speed
Benchmarking for more general GPU tasks.
Personalizing the web interface for different needs. For example, free account registration will enable users to create lists of servers and set preferred benchmark workloads for sorting servers.
Our mid-term goals include offering a simple and cheap CaaS (Container-as-a-Service) product. We envision building something comparable to AWS Fargate or Google Cloud Run. Building such a service would help sustain our open-source projects after the EU grant runs out.
This offering will primarily target individual developers not yet committed to any existing cloud providers, allowing them to provide a Docker image reference, a command to run, and a payment method – while we seamlessly handle the rest in the most cost-efficient way.
Also, by the time we build this offering, we need to have enough brand recognition that people come to our site looking for reasonably priced cloud resources – and so discover this CaaS product of ours as well.”
These are sensible mid-term plans, but they do not answer what happens to the startup from 1 January 2025, when its grant funding runs out. The plan, as the team told me:
“We still need to build out our CaaS, and we will try to raise a modest seed round of $250K starting at the end of this year. The open source benchmarking tooling is almost fully built, and we expect the current product to bring organic traffic and traction that will help with the fundraise.
If we are unable to raise investment, the team is still committed to making progress, even if that means working part-time. This would mean we execute somewhat slower. One way or the other, we want to continue this startup and turn it into a moderately profitable, and very useful, product!”
I’m impressed by what a useful tool this tiny team built purely from a modest EU grant – and the optimistic outlook the team has, even as the grant runs out. It’s also interesting to see just how many different SaaS platforms even a small startup like this one is using to operate some backend services, store data, and run a website. Clearly, the team is doing things in a slightly more complex way because they are prepared to process large amounts of data, and build additional features they might be able to charge for later.
I hope you enjoyed a peek inside the tech stack of a young, ambitious startup. Good luck to the Spare Cores team turning this neat idea into a viable business! If you’d like to contact the Spare Cores team, you can do so at daroczig@sparecores.com.
You’re on the free list for The Pragmatic Engineer. For the full experience, become a paying subscriber. Many readers expense this newsletter within their company’s training/learning/development budget.
If you enjoyed this post, you might enjoy my book, The Software Engineer's Guidebook. Here is what Tanya Reilly, senior principal engineer and author of The Staff Engineer's Path said about it:
"From performance reviews to P95 latency, from team dynamics to testing, Gergely demystifies all aspects of a software career. This book is well named: it really does feel like the missing guidebook for the whole industry."