Google Cloud announced the launch of Filestore High Scale, a new storage option. It is aimed at customers migrating demanding workloads to the Google Cloud platform, giving them a distributed, high-performance file storage option.
Though Google Cloud Filestore already supports some of these use cases, the new High Scale tier is built specifically for high-performance computing (HPC) workloads.
Users get shared file systems that deliver hundreds of thousands of IOPS, tens of GB/s of throughput, and capacities of hundreds of terabytes.
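For illustration, a minimal sketch of provisioning such a share with the google-cloud-filestore Python client library might look like the following; the project, zone, instance name, and capacity are placeholder assumptions, not values from the announcement:

```python
# Sketch: provisioning a Filestore High Scale instance with the
# google-cloud-filestore client library. The project, zone, names,
# and sizes below are illustrative placeholders.
from google.cloud import filestore_v1

client = filestore_v1.CloudFilestoreManagerClient()

instance = filestore_v1.Instance(
    tier=filestore_v1.Instance.Tier.HIGH_SCALE_SSD,
    file_shares=[
        filestore_v1.FileShareConfig(
            name="shared",
            capacity_gb=100 * 1024,  # 100 TB expressed in GB
        )
    ],
    networks=[filestore_v1.NetworkConfig(network="default")],
)

# create_instance returns a long-running operation; result() blocks
# until the instance is ready to mount.
operation = client.create_instance(
    parent="projects/my-project/locations/us-central1-c",
    instance_id="hpc-share",
    instance=instance,
)
print(operation.result())
```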
“Virtual screening allows us to computationally screen billions of small molecules against a target protein in order to discover potential treatments and therapies much faster than traditional experimental testing methods,” says Christoph Gorgulla, a postdoctoral research fellow at Harvard Medical School’s Wagner Lab, which has already put the new service through its paces. “As researchers, we hardly have the time to invest in learning how to set up and manage a needlessly complicated file system cluster, or to constantly monitor the health of our storage system. We needed a file system that could handle the load generated concurrently by thousands of clients, which have hundreds of thousands of vCPUs,” he added.
To meet advanced security needs, Google is also adding beta support for NFS IP-based access controls across Filestore tiers.
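In the Filestore API, such controls are expressed as NFS export options on a file share. Below is a hedged sketch of restricting mounts to a client IP range; the share name, capacity, and CIDR range are assumptions for illustration:

```python
# Sketch: limiting a file share to a specific client IP range via
# NfsExportOptions (the beta IP-based access controls). The CIDR
# range below is a placeholder.
from google.cloud import filestore_v1

share = filestore_v1.FileShareConfig(
    name="shared",
    capacity_gb=10 * 1024,  # 10 TB expressed in GB
    nfs_export_options=[
        filestore_v1.NfsExportOptions(
            ip_ranges=["10.0.0.0/24"],  # only these clients may mount
            access_mode=filestore_v1.NfsExportOptions.AccessMode.READ_WRITE,
            squash_mode=filestore_v1.NfsExportOptions.SquashMode.NO_ROOT_SQUASH,
        )
    ],
)
```

This share configuration would then be passed in the `file_shares` list when creating or updating an instance, as in the earlier sketch.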
The announcement gave more detail on where Filestore High Scale fits: workloads that need both high performance and high capacity, including electronic design automation (EDA), genomics, video processing, manufacturing, and financial modeling.
The new tier supports tens of thousands of concurrent clients, although Google is not claiming that level of capability for every use case. Developers who are looking for this kind of power can now get it on Google Cloud.