- Pricing looks awesome.
- I'm not currently the target audience because everything I'm doing right now is open source with free GitHub actions.
- I'm left wondering what the catch is / why it's cheaper and faster.
- Visual nit: lacking horizontal padding from 990px to ~1200px, a common window size on my 14" MBP.
> Ubicloud is an open source cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can self-host Ubicloud or use our managed service.
I find this hard to parse, and the first few times I thought you were saying it's a Linux alternative. I just clicked through to the docs, and the "What is Ubicloud?" section is clearer because you say concretely and directly what it is rather than how I should think of it metaphorically: "infrastructure-as-a-service (IaaS) features on providers that lease bare metal instances, such as Hetzner, OVH, and AWS Bare Metal. It's also available as a managed service."
There's some old "counterintuitive" adage I'm too lazy to look up about how the best marketing to engineers is just saying in concrete language what it is rather than the benefit it provides. In this case, I'd do both: tell me what it actually is and why that makes it cheaper/better. Also, a minor note: there's a typo in that paragraph (missing space, "systems.Ubicloud").
> Imagine to do more
I'd get rid of this. I don't understand the phrase, and it sounds like fluff.
> Fast runs even at this price point
I'd get rid of the "point". "Price point" isn't a synonym for "price", which I think is what's being attempted here. I'd be tempted to just have no tagline, and retitle this section "Faster than GitHub Actions". You've already said it's cheaper.
> Ubicloud is an open, free, and portable cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can check out our source code in GitHub or see Ubicloud runners in action for our GitHub Actions. An open and portable cloud gives you the option to manage your own VMs and runners, should you choose to.
This is woolly. How about: Ubicloud is an open and free cloud. You can run it on the hosting provider of your choice, or bring your own hardware. Check out our source code on GitHub!
We have dozens of customers using Ubicloud runners in production today. We're now designing our caching layer (Docker image registry, Docker layer cache, or package cache). We wanted to put this out there for any comments.
Also, if you have any points related to the broader topic of an open and portable cloud, please pass them along!
The main reason is that their platform is hosted on Hetzner dedicated instances.
I assume this is done to cover the time that the VM reboots between jobs?
EDIT: per responses, it looks like this is outdated information and the project now uses AGPL!
"Additionally, regardless of whether an Action is using self-hosted runners, Actions should not be used for: the provision of a stand-alone or integrated application or service offering the Actions product or service, or any elements of the Actions product or service, for commercial purposes"
Does this guarantee that a subsequent job won't be able to recover the data?
Several builds connect to internal resources, so running them on external nodes is suboptimal and expensive when it comes to network egress.
Does it have on-demand workers like those Kubernetes providers, where you use shared master nodes (free or for a very small fee) and scalable node pools on demand (charged only when used)?
Edit: the TOS for OS X says this:
3. Leasing for Permitted Developer Services. A. Leasing. You may lease or sublease a validly licensed version of the Apple Software in its entirety to an individual or organization (each, a “Lessee”) provided that all of the following conditions are met: (i) the leased Apple Software must be used for the sole purpose of providing Permitted Developer Services and each Lessee must review and agree to be bound by the terms of this License; (ii) each lease period must be for a minimum period of twenty-four (24) consecutive hours;
[1] https://docs.warpbuild.com/runners#macos-m2-pro-on-arm64
E.g., don't use GitHub as infrastructure as a service.
But in the meantime I've recently released RunsOn [1] with the same promise (10x cheaper, faster), except the whole thing runs in your AWS account.
We completely shut down the VM and remove the block device (all files associated with the block device).
What about: 'Cheaper doesn't mean slower'? It's pithier, and (in particular for anyone reading before/without looking at the page) better IMO in its place as a subheading. Scans better. Or even 'Cheaper != slower' (again, subhead).
Of course, hosting myself means I also gotta own the uptime of it...
So, while this is the first time I've heard of Ubicloud, I use GHA extensively.
And frankly, I think it's just because GitHub has a crazy markup on their Actions compute over the raw compute. Taking a quick look, it appears that their base rate is like $0.008 per minute ($0.48 per hour)!! That's a per-minute rate that wouldn't look crazy out of line with EC2's hourly rate.
I've worked on projects where we saved significant money, and improved build times just by launching a single EC2 instance and connecting it to Actions.
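For anyone wanting to replicate this: once the EC2 instance is registered as a self-hosted runner, pointing a workflow at it is basically a one-line change. A minimal sketch (the extra labels and the build command are illustrative; labels depend on how you register the runner):

```yaml
# Sketch: send the job to a self-hosted EC2 runner instead of
# GitHub-hosted compute. "self-hosted" is a built-in label; extra
# labels like linux/x64 are assigned when the runner is registered.
name: ci
on: [push]
jobs:
  build:
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: make test   # hypothetical build command
```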
Their pricing page looks amazing. Not because of any funky UI stuff but because of the slight displacement of the decimal point.
I agree with the stuff you've mentioned, obviously, but also empathize a bit with the GitHub folks because of all the licences and limitations Mac runners come with.
The perf for comparable operations is squarely on them though.
We did the same and set up GitHub Actions runners on Hetzner.
Halved the integration test time and made them more reliable.
Also, the main reason for switching was performance for us, not cost.
One counter-intuitive thing we found is that it's slow to save and restore caches, but the machines have good CPU, so for us it's been faster to disable cache entirely and just redo everything on each build.
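For concreteness, "disabling cache" can be as simple as dropping the actions/cache step and rebuilding from scratch on each run. A sketch, assuming a typical npm job (paths, keys, and commands are illustrative):

```yaml
on: [push]
jobs:
  build:
    runs-on: ubicloud
    steps:
      - uses: actions/checkout@v4
      # Previously a cache step restored/saved dependencies here, e.g.:
      # - uses: actions/cache@v4
      #   with:
      #     path: ~/.npm
      #     key: npm-${{ hashFiles('package-lock.json') }}
      # With fast CPUs, skipping the cache round-trip and redoing the
      # work on every build turned out to be quicker overall.
      - run: npm ci && npm test   # hypothetical install/test commands
```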
The link to SPDK was very interesting: https://www.ubicloud.com/blog/building-block-storage-for-clo.... I use filesystems for very high performance applications, and I've found ZFS to often be the limiting factor when compared to simpler solutions of XFS ± mdadm ± encryption.
It's a controversial point, but others have made similar findings: https://klarasystems.com/articles/virtualization-showdown-fr... : "Although I suspect this will surprise many readers, it didn’t surprise me personally—I’ve been testing guest storage performance for OpenZFS and Linux KVM for more than a decade, and zvols have performed poorly by comparison each time I’ve tested them"
OpenZFS seems to be starting to consider optimizations to perform better on modern drives (SSD, NVMe), which have very different performance profiles from what ZFS was built for (spinning rust).
In the SPDK summary they say "To make VM provisioning times go faster, we changed our host OS from ext4 to btrfs" (...) "Also, when we switched the host filesystem to btrfs, our disk performance degraded notably. Our disk throughput dropped to about one-third of what it was with ext4."
Ubicloud: the problem seems to be generic to CoW filesystems, and it's interesting that you came up with a slight variation (CoA), but have you considered the even simpler alternative of a journaling filesystem (XFS, ext4...) with overlays?
Or just UFS2 + snapshots to capture a given state (initialized, ready for each test) and then restore to that state between tests?
I think customers finding that disabling cache works better means the CoA has similar issues to CoW.
Personally, I'd have just tried using SR-IOV with a namespace per customer and called it a day instead of bringing in extra complexity, but there must be good reasons for it. I'd love to know what those reasons are.
In your case, your workflow can run in less than 5 minutes on AWS ephemeral machines, for the same price as Ubicloud: https://github.com/runs-on/arroyo/actions/runs/7723361513/jo...
Saved over $25k in CI costs compared to GH Actions - they're also using super powerful bare metal servers with Hetzner - we got about a 94% reduction in build time!
Absolutely chuffed, good to see more companies on the market though.
I was reading https://www.ubicloud.com/blog/ubicloud-hosted-arm-runners-10... but it seems a bit misleading to compare an ARM workload running over QEMU vs. a native ARM one.
What about native x86 vs. native x86, or at least native x86 vs. native ARM?
Edit: Oh, Hetzner. They’re really ubiquitous in cheap computing these days.
Been using this setup for a year, very happy with it. I don't see the point of GitHub's hosted runners, to be honest.
Sure would be good to see competition for the macOS and Windows runners too; those are the ones that tend to cost us the most.
I wonder what the consequential "carbon footprint" of this is, at scale across all companies and all jobs of a similar nature.
Ubicloud runs on bare metal providers, and they don't lease Mac hardware. Technically, we could run macOS VMs on arm64. However, our interpretation of Apple's End User License Agreement (EULA) tells us that we can't do this.
This repo has some good references on the topic: https://github.com/kholia/OSX-KVM?tab=readme-ov-file#is-this...
They absolutely support custom runners; it's how all of these work. They don't need to stop them via the ToS; they could just not allow it as an option. `runs-on: ubicloud` only works because GitHub implements it right.
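For anyone who hasn't tried it, the switch really is just the label. A minimal sketch (job name and script are made up):

```yaml
on: [push]
jobs:
  test:
    # was: runs-on: ubuntu-latest
    runs-on: ubicloud         # routes the job to Ubicloud's runner pool
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/ci.sh  # hypothetical test script
```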
The VM reboot times add up quickly, and that's likely the rationale. However, it gets hard when users run linting jobs that take ~2s on 16-vCPU instances, so we kept the 1-minute floor.
This would be quite useful for users of other GitHub Actions clones like act [0].
Although we have KEK and DEK (key encryption key and data encryption key) code for regular VMs, it's not operative on GHA... yet. The reason has to do with a technical conflict with copy-on-write that we aim to close, not least because Ubicloud needs to grow its own copy-on-write features for block device snapshots, something we lack today.
I expect that within a few months, all expired GHA VMs will be cryptoshredded upon deletion. This is already true for regular virtual machines and managed Postgres machines.
I don't think it's very expensive to run a build server.
It's really kinda impossible to judge the carbon footprint of these kinds of things, and whether it's justified. Consider the carbon footprint of all the dumb AI features rolled out at Facebook, Google, etc. Consider the carbon footprint of everyone trying to use ChatGPT now. Do you know how much power GPUs are guzzling to give each little answer? It's really huge compared to a traditional Google search! Is it worth it? ... who can judge, eh.
https://github.blog/changelog/2024-01-30-github-actions-intr... ?
Today, GitHub is excited to announce the launch of a new M1 macOS runner! This runner is available for all plans, free in public repositories, and eligible to consume included free plan minutes in private repositories. The new runner executes Actions workflows with a 3 vCPU, 7 GB RAM, and 14 GB of storage VM, which provides the latest Mac hardware Actions has to offer. The new runner operates exclusively on macOS 14 and to use it, simply update the runs-on key in your YAML workflow file to macos-14.
The numbers are obviously very rough; there’s a LOT of factors to consider and which vary from one node to another. But the methodology is quite comprehensive, e.g. they factor in power consumption of the end user’s device while waiting for the data to transfer over the wire
Green software engineering is interesting to me because it seems like a good way to reduce greenhouse gases _without_ requiring consumers to change anything about their lifestyle (which is the hard part of climate change…). There's a cool presentation from Rasmus Lerdorf from around the time PHP7 was released where he estimated a 50% adoption rate of PHP7 would result in a saving of 3.5B kg CO2/year IIRC, purely because of the compute efficiency gains from PHP5 -> PHP7.
I used their numbers to calculate that swapping our CI/CD clones at my day job over to shallow clones saved (maybe) 5.5 kg/week of CO2 emissions [2]. Not quite as impressive as the PHP7 figures, but I still think it’s kinda neat.
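In case anyone wants to replicate the shallow-clone change: with actions/checkout it's the fetch-depth input (1 fetches only the tip commit; 0 fetches full history). Note that 1 is already the default in recent versions of actions/checkout, so this mainly matters if your setup currently uses fetch-depth: 0 or clones manually. A sketch of the checkout step:

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      # Shallow clone: fetch only the tip commit instead of the full
      # history (fetch-depth: 0), cutting data transferred per CI run.
      fetch-depth: 1
```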
The way I got it in a kernel debugger was to constantly run the problematic race condition inside a VM; when it detected that it hadn't hit, it would roll the VM back to a saved state with the code already running in progress... It took an overnight run to hit it on a machine in the office.
All that said, I'd still say it's not very expensive to run a build machine.
The linked page at https://www.ubicloud.com/docs/github-actions-integration/qui... still says "Source open under the Elastic V2 license", and https://www.ubicloud.com/docs/about/pricing still says "it's open and free under the Elastic V2 license". Not sure if those were missed or if the docs just need some time to refresh from their sources.
I put a calculator for RunsOn at https://runs-on.com/calculator/
Yes, that is correct. We are also working on implementing our own caching, which should speed up cache downloads/uploads significantly.
[1]: https://runs-on.com
I should say we also use BuildJet for Docker builds because of their ARM support. Now that Ubicloud has ARM, we may switch those over as well.
Just, why? CoW (or your own CoA) is rife with performance problems. How exactly do you benefit from its use?
As a side note, isn’t it nuts that GHA operates on a principle of “installing the universe” (and apparently the universe is 86GB) and updating about every week, and it’s not total chaos? I was surprised, but it seems to work.
As someone who has authored a GitHub Action to delete like 85% of the hodge-podge stuff blasted all over a default runner image to free up more space for a Nix store: it's "nuts" indeed.