Panasas’s twin NAS offer aims at multiple analytics workloads

Panasas has broadened its scale-out NAS offering to include high-performance and capacity options with the general availability of ActiveStor Flash and ActiveStor Ultra XL. The two products target a range of workloads in terms of file size and I/O profile that fall across the high-performance computing (HPC) to artificial intelligence/machine learning (AI/ML) continuum.

Speaking to us, the company also revealed the limits of its interest in object storage, as well as its thoughts on cloud storage, where it has no presence at the moment.

The Panasas ActiveStor systems have been tailored to a range of workloads, which can mean file storage profiles that run from many, many very small files to a smaller number of very large ones.

ActiveStor Flash is an all-NVMe flash hardware appliance aimed at smaller file sizes where rapid access is required. Its ASF-100 nodes come in a 4U form factor and take up to 3.84TB of M.2 and 46TB of U.2 NVMe. DRAM and NVDIMM provide faster, cache-level storage for working data.

Meanwhile, ActiveStor Ultra XL is aimed at larger capacities and bigger file sizes. An ASU-100XL node runs to 160TB – though quadruple that for the minimum configuration – mostly composed of spinning-disk HDD plus some faster M.2 NVMe capacity.

The two systems, both running PanFS, have benefited from controller OS and file system upgrades in version 9.2 that allow the customer to deploy storage blades under a single namespace. “But with volumes created to suit workloads of differing I/O characteristics – so, smaller and fast, or cooler and larger – under one single pane of glass,” said Curtis Anderson, software architect at Panasas.

He added: “We were a one-platform company until May. Then we had two new platforms, which are built on the ability to use multiple media types, with metadata going to NVMe, for example, SSDs for small files up to 1.5MB, and HDD for large files.”

The Panasas name for the functionality is Dynamic Disk Acceleration, which automatically routes data to different tiers of storage.
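The tier routing Anderson describes can be pictured as a simple placement policy. The sketch below is a hypothetical illustration only, not Panasas code; the tier names and the 1.5MB small-file cut-off come from the quote above, and the function name is invented.

```python
# Illustrative sketch of media-tier placement as described by Anderson.
# Not Panasas's implementation -- names and logic are assumptions.

SMALL_FILE_LIMIT = 1_500_000  # ~1.5MB small-file cut-off quoted above


def place(kind: str, size_bytes: int = 0) -> str:
    """Return the media tier a stored item would land on."""
    if kind == "metadata":
        return "NVMe"  # metadata goes to the fastest tier
    if size_bytes <= SMALL_FILE_LIMIT:
        return "SSD"   # small files, up to ~1.5MB
    return "HDD"       # large files go to spinning disk


print(place("metadata"))          # NVMe
print(place("file", 100_000))     # SSD
print(place("file", 10_000_000))  # HDD
```

The point of such a policy is that it happens automatically under the single namespace, so applications see one file system while data lands on the media best suited to it.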

The reason for the shift? “The issue was, what if a customer is running HPC and wants to run another workload?” said Anderson.

The improvements to PanFS allow for that, and the engineering behind it was, said Anderson, a “moderately sized lift” that involved refactoring PanFS to handle new hardware types and to select and qualify those products for use with the system.

But what about object storage, given that so much unstructured data – Panasas’s bread and butter – is now in object storage format?

Anderson said: “Panasas is built as a Posix file system but on top of an object store, which was developed by 1999, so before Amazon’s S3. It has the characteristics of scaling and growth, etc, that object storage has, but we don’t offer access. It works differently to S3.”

Marketing and products VP Jeff Whitaker added: “Object storage is of interest, but when it comes to how the vast majority of people access data, it’s file-based. The development side of AI/ML often happens in the cloud, however, so it’s definitely something we’re thinking about as we move forward.”

In a context where the cloud is becoming increasingly important and many suppliers offer the possibility of storing data in the cloud, what is the Panasas strategy here?

The company is still firmly in the on-prem hardware camp but, as with object storage, it is looking at possibilities, said Whitaker. “Right now, we’re an appliance-based datacentre platform, not software-only, and from what we’ve seen in the market, 85-90% of the market is still on-prem.”

He added: “Customers struggle to get performance from cloud-based storage. Cloud providers have to throttle storage so their networks aren’t saturated. Absolutely, customers are moving to the cloud and doing more there, so we’re looking at different scenarios and working with S3, with partners.”
