Reading time: 5 minutes

What is cloud tiering, and why do I need it?

Cloud tiering? What the heck is that?

Everyone wants a hybrid cloud strategy that’s cost-effective and keeps data quickly accessible. One that seamlessly blends flash and cloud to maximize the strengths and benefits of each while mitigating their limitations. One that doesn’t cause months of angst and chaos wrangling millions (or billions!) of enterprise files.

Enter cloud tiering. Yes, you may already be using cloud storage, but you’re probably not using cloud tiering, which means you are wasting money and time, not to mention building out significantly more storage infrastructure than you actually need. 

So, what exactly is cloud tiering?

Cloud tiering is all about archiving data to specific cloud tiers based on usage. Infrequently used data, also known as cold data, is automatically pushed to the cheapest cloud storage tier using rules-based policies. From there, it’s still accessible to users if they need it, but it’s not clogging up the far more expensive flash-based or NAS storage. Hot data, or the more frequently accessed data, is kept in the faster, more accessible storage tier.

Essentially, you offload files to the cloud over time based on use and do it without messing up the end-user experience for your team. Simple, right? 

This strategy is critical for hybrid cloud deployments because of its flexibility, transparency, and scalability. While data tiering has been around for some time, cloud tiering is a newer iteration that leverages inexpensive object storage tiers, such as Amazon S3’s lower-cost storage classes, to address long-term archival in the least expensive manner possible.

It sounds simple enough, but getting the strategy right requires understanding how to prioritize hot and cold data. It also requires a file system infrastructure that moves data seamlessly between tiers, whether that’s a mix of cold object-based storage and hot NAS storage or an all-cloud approach with cost efficiencies baked into the backend.

How does cloud tiering work?

Simply put, hot data that needs to be frequently accessed on demand lives on some kind of expensive disk storage, while rarely accessed cold data gets archived in the cloud. This reduces costs, frees storage space, and makes for a hybrid cloud storage strategy that checks a lot of boxes, especially for large enterprise organizations with vast quantities of data.

Cloud tiering operates on rules that govern the balance of cost vs. performance. These rules typically operate on an age threshold: the number of inactive days after which a file is moved from hot to cold storage. As with any rules-based policy, administrators can mark specific data exempt based on its value and immediacy.

(There may be files that you don’t often need. But when you need them, you need them ASAP!)
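To make the age rule concrete, here’s a minimal sketch of what such a policy can look like in practice. The path, threshold, and exemption list are all hypothetical, and a real tiering engine would upload each candidate to object storage and leave a pointer behind rather than just reporting it:

```python
import os
import time

# Hypothetical policy values -- tune these to your own workload.
HOT_PATH = "/mnt/hot-storage"            # flash/NAS tier (illustrative path)
AGE_THRESHOLD_DAYS = 90                  # inactive days before a file goes cold
EXEMPT_PREFIXES = ("/mnt/hot-storage/critical",)  # data exempt from tiering

def find_cold_files(root: str, max_idle_days: int) -> list[str]:
    """Return files whose last access time exceeds the idle threshold."""
    cutoff = time.time() - max_idle_days * 86400
    cold = []
    for dirpath, _, filenames in os.walk(root):
        if dirpath.startswith(EXEMPT_PREFIXES):
            continue  # exempt data stays hot regardless of age
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                cold.append(path)
    return cold

if __name__ == "__main__":
    for path in find_cold_files(HOT_PATH, AGE_THRESHOLD_DAYS):
        print(f"candidate for the cold tier: {path}")
```

Commercial tiering engines apply this same logic continuously and transparently; the point is simply that the policy reduces to an access-time comparison.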

You can also set rules based on free volume space rather than age. This approach prioritizes a free-space threshold: when available space on the volume drops below it, the oldest or coldest files are offloaded to the cloud to make room for newer, hotter data.
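A free-space rule can be sketched the same way. Again, the path and threshold are illustrative; this dry run only lists eviction candidates in age order, whereas a real engine would offload each file to the cloud tier (freeing space) before removing it locally:

```python
import os
import shutil

HOT_PATH = "/mnt/hot-storage"   # illustrative path
MIN_FREE_FRACTION = 0.20        # keep at least 20% of the volume free

def free_fraction(path: str) -> float:
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def oldest_first(root: str) -> list[str]:
    """All files under root, least recently accessed first."""
    files = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            files.append((os.stat(path).st_atime, path))
    return [path for _, path in sorted(files)]

if free_fraction(HOT_PATH) < MIN_FREE_FRACTION:
    for path in oldest_first(HOT_PATH):
        # In production, offloading would free space here, and the loop
        # would stop once free_fraction() climbs back above the threshold.
        print(f"would offload and evict: {path}")
```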

At its core, cloud tiering is simply a rules-based allocation of storage designed to put data where it belongs in the most cost-efficient and performance-positive manner possible. The winning strategy is found in how you implement cloud tiering and who your partners are.

How can you use cloud tiering to your advantage?

With most cloud providers, you’re charged not only storage fees but also request and egress fees: every “GET” and “PUT” API call carries a per-request charge, and any data read from outside the cloud incurs data transfer costs on top of that. By tiering only cold data that hasn’t been accessed in a predetermined amount of time, you minimize the likelihood of incurring retrieval fees. This may seem like a minor advantage, but for large enterprise organizations with billions of files, every tiny cost-efficiency adds up to significant savings.
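Back-of-the-envelope arithmetic shows how this adds up. The per-GB rates below are purely illustrative (check your provider’s current rate card), but the shape of the saving holds for a hypothetical 500 TB estate with 80% of data gone cold:

```python
# Illustrative rates only -- not any provider's actual pricing.
HOT_USD_PER_GB_MONTH = 0.023    # standard object tier
COLD_USD_PER_GB_MONTH = 0.004   # archive/infrequent-access tier
RETRIEVAL_USD_PER_GB = 0.01     # per-GB retrieval charge on the cold tier

def monthly_cost(hot_gb: float, cold_gb: float, cold_reads_gb: float) -> float:
    storage = hot_gb * HOT_USD_PER_GB_MONTH + cold_gb * COLD_USD_PER_GB_MONTH
    return storage + cold_reads_gb * RETRIEVAL_USD_PER_GB

all_hot = monthly_cost(500_000, 0, 0)            # everything on the hot tier
tiered = monthly_cost(100_000, 400_000, 4_000)   # 80% cold, 1% of it read back
print(f"all hot: ${all_hot:,.0f}/month, tiered: ${tiered:,.0f}/month")
# all hot: $11,500/month, tiered: $3,940/month
```

The exact numbers will differ, but the pattern is consistent: tiering well-chosen cold data cuts the storage bill by a large multiple while retrieval fees stay marginal, provided the cold designation is accurate.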

Some of the files most frequently targeted for cold storage in cloud tiering are logs, backups, system snapshots, and other data you need to keep archived but likely won’t need every day. If at all.

Archiving these files reduces the required flash storage for a deployment, lowering the overall IT infrastructure cost. By allocating the hot, frequently-used data to the fastest, most direct storage, you can match your infrastructure investment to your data use patterns far more efficiently.

Cloud tiering also shrinks the backup footprint and its costs by excluding cold data from active mirroring or replication processes and focusing your active backups on hot data only. It speeds up disaster recovery as well by reducing the amount of data that must be restored in the event of a loss.

The most important caveat in planning a cloud tiering process for your company is to understand the relationship between cloud object storage and your existing storage array. Without a clear understanding of how to designate cold and hot data, you could find yourself paying hefty cloud egress fees because frequently needed data was misallocated to the cloud tier.

What does bat365 do with cloud tiering?

Globally distributed, object-based cloud storage is the future of data infrastructure. bat365’s CloudFS platform is much more than a cloud gateway — it’s a robust file system with global file locking, access control lists, deduplication, and data encryption. And, it seamlessly integrates with intelligent, automated cloud tiering. 

At bat365, we don’t sell storage. We optimize your ability to access critical sources of data. Case in point, our award-winning CloudFS global file system allows you to access any file from anywhere at any time. That’s already faster, more secure, and more intelligent than anything else. When you add a cloud tiering strategy to the solution, you’re also significantly more cost-effective than any alternative on the market. 

For example, bat365 CloudFS pairs seamlessly with Amazon S3 Intelligent-Tiering. This Amazon storage class monitors access patterns and automatically moves objects that haven’t been accessed for 30 consecutive days to a lower-cost tier. Because bat365’s CloudFS stores data as immutable objects in object storage, it’s simple to shift storage from more expensive to less expensive cloud tiers without users noticing a difference.
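At the API level, opting into Intelligent-Tiering is as simple as naming the storage class when an object is written. This is a plain boto3 sketch, not CloudFS’s internal mechanism; the bucket and key names are made up:

```python
import boto3

# Assumes AWS credentials are already configured in your environment.
s3 = boto3.client("s3")

with open("report.pdf", "rb") as body:
    s3.put_object(
        Bucket="example-cloudfs-bucket",     # illustrative bucket name
        Key="projects/archive/report.pdf",   # illustrative object key
        Body=body,
        StorageClass="INTELLIGENT_TIERING",  # S3 then moves the object
    )                                        # between access tiers itself
```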

Or, consider how bat365 and Cloudian HyperStore work together: Cloudian HyperStore is highly secure, exabyte-scalable, S3-compatible on-premises object storage that reduces storage costs by up to 70% and is hybrid multi-cloud ready. The bat365/Cloudian integration is designed for maximum storage efficiency and optimal performance.

bat365’s granular deduplication process identifies and strips out duplicated data at the 128 KB block level, and Cloudian’s storage density further reduces the storage space your enterprise requires. This approach to cloud tiering reduces the workload on the front end by intelligently optimizing file storage for every block in your system.
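The idea behind block-level deduplication is easy to illustrate. This simplified sketch uses fixed-size 128 KB blocks and SHA-256 fingerprints (bat365’s actual process may differ in its blocking and hashing details); identical blocks across any number of files are stored exactly once:

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # 128 KB granularity, as described above

def dedupe_blocks(paths: list[str]) -> dict[str, bytes]:
    """Map each unique block fingerprint to a single stored copy."""
    store: dict[str, bytes] = {}
    for path in paths:
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                digest = hashlib.sha256(block).hexdigest()
                store.setdefault(digest, block)  # duplicates collapse to one copy
    return store

blocks = dedupe_blocks(["a.bin", "b.bin"])  # illustrative file names
print(f"{len(blocks)} unique 128 KB blocks to store")
```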

Lastly, bat365 and Azure Blob also work together by design. With Azure Blob as the back end of a bat365 CloudFS implementation, Azure holds the authoritative data while bat365 handles file caching at each geographic location. For many companies, this shrinks infrastructure by hundreds of terabytes simply by consolidating redundant data and compressing files. It also enables data replication that wouldn’t be possible on traditional on-premises storage. Without bat365, the unstructured data in Azure Blob would be unwieldy; with the integration, the cloud tiering strategy built into the system delivers higher trust, lower cost, and significantly more streamlined infrastructure.

Whatever hybrid cloud storage architecture you use, cloud tiering offers an opportunity to offload cold data to cheaper storage, free up your precious, expensive storage, and increase performance in the process. bat365 illustrates the wide range of opportunities to integrate with public and private cloud platforms and achieve these results. Cloud tiering is a winner, for sure.