Amazon AWS (c5.xlarge, c5.2xlarge, c5d.2xlarge, z1d.xlarge).
Linode (dedicated 8GB 4vCPU, dedicated 16GB 8vCPU).

Why those? Well, those are the ones I could quickly find and sign up for without too much hassle. Also, those are the ones that at least promise fast CPUs (for example, Google famously doesn't much care about individual CPU speed, so I didn't try their servers).

Setting up and differences between cloud providers

Signing up and trying to run the various virtual machines offered by cloud operators was very telling. In an ideal world, I would sign up on a web site, get an API key, put that into docker-machine and use docker-machine for everything else. Sadly, this is only possible with a select few providers. I think every cloud operator should contribute their driver to docker-machine, and I don't understand why so few do. You can use Digital Ocean, AWS and Azure directly from within docker-machine. The other drivers are non-existent, flaky or limited, so one has to use vendor-specific tools.

This is rather annoying, as one has to learn all the cute names that the particular vendor has invented. What do they call a computer: is it a server, plan, droplet, size, node, horse, beast, or a daemon from the underworld?

One thing I quickly discovered is that what the vendors advertise is often not available. As a new user, you get access to the basic VM types, and have to ask your vendor nicely so that they allow you to spend more money with them. This process can be quick and painless with smaller providers, but can also explode into a major time sink, like it does with Azure. There was a moment when I was spending more time dealing with various tiers of Microsoft support than testing. I find this to be rather silly, and I don't understand why in the age of global cloud computing I still have to ask and specify which instances I'd like to use in which particular regions before Microsoft kindly allows me to.

Assuming you can actually get access to VM instances, there is a big difference in how complex the management is. With Digital Ocean, Vultr or Linode you will be up and running in no time, with simple web UIs that make sense. With AWS or Azure, you will be spending hours dealing with resources, resource groups, regions, availability sets, ACLs, network security groups, VPCs, storage accounts and other miscellanea. Some configurations will be inaccessible due to weird limitations and you will have no idea why.

I used the best benchmark I possibly could: my own use case. A build task that takes about two and a half minutes on my (slightly overclocked) i7-6700K machine at home. I started signing up at various cloud providers and running the task. After several tries, I decided to split the benchmark into two: a sequential build and a parallel build. Technically, both builds are parallel and use multiple cores to a certain extent, but the one called "parallel" uses "make -j2" to really load up every core the machine has, so that all cores are busy nearly all of the time.

The build is dockerized for easy and consistent testing. It mounts a volume with the source code, where output artifacts go, too. It does require a fair bit of I/O to store the resulting files, but I wouldn't call it heavily I/O-intensive.

Methodology

A single test consisted of starting a cloud server, provisioning it with Docker (both were sometimes done automatically by docker-machine), copying my source code to the server, pulling all the necessary docker images, and performing a build. The total wall clock time for the build was measured. I always did one build to prime the caches and discarded the first result. I tried to get six builds done, over the course of multiple days, to check if there is variance in the results. For some cloud providers (Linode and IBM) the build times were so abysmal that I decided to abandon the effort after just two builds.

And yes, there is very significant variance, which was a surprise. I also threw in results for my own local build machine (a PC next to my desk), with no virtualization (but the build was still dockerized), and a dedicated EX62-NVMe server from Hetzner. I first created rankings for average build times, but then realized that with so much variance, these averages make little sense. What I really care about is the worst build time, because with all the overbooking and over-provisioning going on, this is what I really get.
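The dockerized build itself can be sketched roughly like this. The image name "builder" and the make targets are my assumptions for illustration; the article does not show the actual commands, only that the source tree is mounted as a volume and that the parallel variant uses "make -j2":

```shell
# Sequential build: mount the source tree (output artifacts land there too)
# and run a plain make inside a throwaway container.
docker run --rm -v "$PWD/src:/work" -w /work builder make

# "Parallel" build: same container, but make -j2 keeps every core busy
# nearly all of the time.
docker run --rm -v "$PWD/src:/work" -w /work builder make -j2
```

Because the container is removed after each run (--rm) while the volume persists, every build starts from the same container state but can still reuse previously built artifacts in the mounted directory.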
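A single test run, as described in the methodology, might look like the following docker-machine session. The driver, instance size, host name and image name here are placeholders, not the configuration actually used:

```shell
# Start a cloud server and provision it with Docker in one step.
docker-machine create --driver digitalocean --digitalocean-size s-4vcpu-8gb bench

# Point the local docker CLI at the remote daemon.
eval "$(docker-machine env bench)"

# Copy the source code over and pull the build image.
docker-machine scp -r ./src bench:/home/docker/src
docker pull builder

# One build to prime the caches; its result is discarded.
docker run --rm -v /home/docker/src:/work -w /work builder make -j2

# Measured build: total wall clock time, as in the article.
start=$(date +%s)
docker run --rm -v /home/docker/src:/work -w /work builder make -j2
echo "wall clock: $(( $(date +%s) - start ))s"

# Tear the server down when done.
docker-machine rm -y bench
```

With providers that have a docker-machine driver, the "create" step covers both starting the server and installing Docker; with the others, those two steps have to be done with vendor-specific tools first.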
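Reducing the six measured build times to the two statistics discussed above, the average and the worst case, takes only a few lines of shell. The timings below are made-up numbers for illustration, chosen to show how a single slow run drags the worst case far from the average:

```shell
# Six hypothetical wall-clock build times, in seconds.
times="152 149 210 155 161 198"

sum=0; n=0; worst=0
for t in $times; do
  sum=$((sum + t))
  n=$((n + 1))
  if [ "$t" -gt "$worst" ]; then worst=$t; fi
done

echo "average: $((sum / n))s"   # integer average over all runs
echo "worst:   ${worst}s"       # the number the ranking actually uses
# -> average: 170s
# -> worst:   210s
```

On an overbooked host, the two runs near 200s are exactly the builds you end up waiting for, which is why the rankings use the worst time rather than the mean.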