Google Cloud “breaks” the record for pi and reaches 100 trillion digits

The record has fallen for the third time in a row, after the marks set in 2019 and 2021. Google has now surpassed 100 trillion digits of pi, up from its own 31.4 trillion in 2019 and the 62.8 trillion reached by scientists at the Graubünden University of Applied Sciences in Switzerland in 2021. Google Cloud has proved to be the best way to calculate this mathematical constant, whose known expansion has roughly tripled in just three years.

Achieving this calculation depended on Compute Engine, Google Cloud’s secure and customizable compute service, and in particular on its N2 machine family, 100 Gbps of egress bandwidth, the Google Virtual NIC, and balanced persistent disks. And all of it in just 157 days.

To appreciate this milestone, it helps to understand that reaching this number of digits demands significant compute, storage, and network resources. The calculation set out to explore the limits of scientific experimentation, to gauge the reliability of the products used to perform it, and to demonstrate how Google Cloud’s scalable compute, network, and storage infrastructure handles high-performance computing (HPC) workloads.

This is how the number pi was calculated on Google Cloud

The first step was sizing the storage. As an initial estimate, the team calculated that about 554 TB of temporary storage would be needed. The cluster was designed with one compute node and 32 storage nodes, exposing a total of 64 iSCSI block storage targets.

The primary compute node is an n2-highmem-128 machine running Debian Linux 11, with 128 vCPUs and 864 GB of memory, and it supports up to 100 Gbps of outbound bandwidth. Each storage server is an n2-highcpu-16 machine configured with two 10,359 GB zonal balanced persistent disks.
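
The article doesn’t show the provisioning commands; as an illustration, here is a minimal shell sketch of how those storage disks could be created with the gcloud CLI. The node names and zone are assumptions, and the real cluster was defined with Terraform, as described below.

    # Hypothetical sketch: two 10,359 GB balanced persistent disks for each
    # of the 32 storage nodes (64 disks, roughly 663 TB of raw capacity,
    # comfortably above the 554 TB estimate). Names and zone are assumed.
    for i in $(seq -w 1 32); do
      for d in 1 2; do
        gcloud compute disks create "storage-node-${i}-disk-${d}" \
          --type=pd-balanced \
          --size=10359GB \
          --zone=us-central1-a
      done
    done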

The team used Terraform to set up and manage the cluster, and wrote shell scripts to automate critical tasks such as deleting old snapshots and restoring from snapshots. Available memory and network bandwidth were the two most important selection factors, which is why the n2-highmem-128 (Intel Xeon, 128 vCPUs, 864 GB RAM), with its egress throughput of up to 100 Gbps, was chosen.
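
Those scripts are not reproduced in the source; the following is a minimal sketch of what the snapshot-cleanup task could look like, assuming a three-day retention window and a hypothetical “pi-backup-” naming prefix.

    # Hypothetical sketch: delete cluster snapshots older than three days.
    # The retention window and the name prefix are assumptions.
    CUTOFF=$(date -u -d '3 days ago' +%Y-%m-%dT%H:%M:%SZ)
    gcloud compute snapshots list \
      --filter="name~'^pi-backup-' AND creationTimestamp<'${CUTOFF}'" \
      --format="value(name)" |
    while read -r snap; do
      gcloud compute snapshots delete "${snap}" --quiet
    done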

They also replaced the virtio network driver with the newer Google Virtual NIC (gVNIC), a device driver that integrates tightly with Google’s Andromeda virtual network stack for higher performance and lower latency.
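
Enabling gVNIC is a choice made when an instance is created. As an illustration, a compute node like the one described above could be created with gVNIC through the gcloud CLI as follows; the instance name, zone, and boot image are assumptions, while the machine type and Debian 11 come from the article.

    # Hypothetical sketch: create the main compute node with gVNIC enabled.
    gcloud compute instances create pi-compute-node \
      --machine-type=n2-highmem-128 \
      --network-interface=nic-type=GVNIC \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --zone=us-central1-a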

Finally, automatic backups were scheduled every other day using a shell script that checks the time elapsed since the last snapshots, runs the fstrim command to discard all unused blocks, and then runs the gcloud compute disks snapshot command to create snapshots of the persistent disks. To store the final results, two 50 TB disks were attached directly to the compute node.
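
The source doesn’t include the script itself; this sketch shows what the logic it describes could look like, reusing the hypothetical disk and snapshot names from the earlier examples.

    #!/bin/bash
    # Hypothetical sketch of the every-other-day backup described above.
    # Find the creation time of the most recent backup snapshot.
    LAST=$(gcloud compute snapshots list \
      --filter="name~'^pi-backup-'" \
      --sort-by=~creationTimestamp \
      --limit=1 --format="value(creationTimestamp)")
    # Only take new snapshots if the newest one is older than 48 hours.
    if [[ -z "${LAST}" || $(date -d "${LAST}" +%s) -lt $(date -d '48 hours ago' +%s) ]]; then
      fstrim --all    # discard unused filesystem blocks before snapshotting
      for i in $(seq -w 1 32); do
        for d in 1 2; do
          gcloud compute disks snapshot "storage-node-${i}-disk-${d}" \
            --zone=us-central1-a \
            --snapshot-names="pi-backup-${i}-${d}-$(date +%Y%m%d)"
        done
      done
    fi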

The precision of the tuning and the comparative testing made it possible to reach the one hundred trillionth decimal of pi, which happens to be 0. Once the calculation was finished, the final digits were verified with a different algorithm, the Bailey-Borwein-Plouffe (BBP) formula.
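
For reference, the BBP formula expresses pi as the following series, whose structure allows hexadecimal digits of pi at an arbitrary position to be computed without computing all the preceding ones, which is what makes it useful as an independent check:

    \pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)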
