Don’t let your workloads get stuck in the mud

As companies try to optimize their costs in the face of economic headwinds, rising cloud spending can cause some headaches. There are many options to mitigate this, from moving workloads to a more cost-effective environment (or even back on-premises) to rearchitecting to save costs, but organizations often lack the technical agility to get the most out of them.

Saddled with huge volumes of data, legacy or homegrown applications that don’t transfer easily, and cloud lock-in, modern businesses face a real challenge. All of this plays out against the backdrop of cyberthreats like ransomware, which makes it necessary to strike the right balance between cost and security for each workload. To avoid getting stuck, IT teams are increasingly designing and tuning their environments with portability in mind, and asking themselves a few questions first:

Why move the data?

Modern business computing environments are enormously complex. They can be both monolithic and widely dispersed, and the growing data gravity of some environments turns many companies into, essentially, “digital hoarders.” This is problematic in itself, as holding on to data they don’t need exposes them to unnecessary cybersecurity and compliance risks. But too much data in the cloud also has serious financial consequences, including the dreaded “bill shock” when the invoice arrives.

So while many companies have turned to the cloud to optimize costs, the flexibility it offers can be a double-edged sword. The appeal of the cloud is that you only pay for what you need, but the other side of the coin is that there is no “spend ceiling,” so costs can easily spiral out of control. Better data hygiene can help address this, but for the data that is needed, it’s important to choose the right platform for the workload, which may mean a new platform or a new architecture to optimize costs. This is where data governance and data hygiene come into play: before moving data or improving processes, you need to know exactly what data you have and where it lives.

What data can we transfer?

Once you have determined what data needs to be moved, be it to another environment, server, or storage tier, the next and more difficult question is what data can be moved. Many organizations struggle here. Data portability is crucial both to moving data when needed and simply to maintaining data hygiene over the long term, yet several factors can make it difficult to move workloads from one place to another.

The first is “technical debt”: the work and maintenance required to update older or built-from-scratch applications so that they are portable and compatible with other environments. This debt can be caused by taking shortcuts, making mistakes, or simply not following standard practices during software development, but left unaddressed it makes environments impossible to optimize and can cause additional problems in areas like backup and recovery.

The other, perhaps more infamous issue that can affect data portability is cloud lock-in, where companies find themselves tied to specific cloud vendors. This can be due to dependencies such as integrations with services and APIs that can’t be replicated elsewhere, the enormous “data gravity” that can build up in a single cloud, and a simple skills gap: teams know how to use their current cloud but lack the experience to work with a different provider.

Of course, lock-in mainly affects moving workloads out of a given cloud, so it is still possible to build better portability in order to open up storage options and promote better data hygiene. Essentially, wherever possible, companies need to create some standardization across their environments, making data more consistent and portable, and mapping and categorizing it so they know what they have and what it’s for.
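
As an illustration of what that mapping and categorizing can look like in practice, here is a minimal sketch, assuming the data in question lives in Amazon S3 and that buckets carry owner and classification tags. The tag keys and the boto3-based approach are assumptions for the example, not a prescribed method:

```python
import boto3
from botocore.exceptions import ClientError

# Sketch: build a simple inventory of S3 buckets and their classification tags,
# so teams can see what data they hold and what it is for.
s3 = boto3.client("s3")

inventory = []
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        tags = {t["Key"]: t["Value"]
                for t in s3.get_bucket_tagging(Bucket=name)["TagSet"]}
    except ClientError:
        tags = {}  # bucket has no tags yet: a candidate for categorization
    inventory.append({
        "bucket": name,
        "owner": tags.get("owner", "unknown"),                       # assumed tag key
        "classification": tags.get("data-classification", "unclassified"),  # assumed tag key
    })

for item in inventory:
    print(f"{item['bucket']}: owner={item['owner']}, "
          f"classification={item['classification']}")
```

Buckets that come back untagged or “unclassified” are the natural starting point for the data hygiene work described above.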

The (constant) question of security

Finally, when creating and taking advantage of data portability, it is essential not to neglect security. Improving security can (and should) be a reason to move workloads in the first place, but if you’re migrating workloads to optimize costs, that goal must be balanced against the necessary security considerations. Security needs to be part of the data hygiene process, so teams need to ask: What do we have? What do we no longer need? And what are the critical workloads we absolutely cannot afford to lose? Beyond this, keep patching servers, move ageing data to cooler storage tiers, and remove internet access where it isn’t needed.
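
To make the “cooler storage” point concrete, here is a minimal sketch of an automated tiering rule, again assuming the data sits in Amazon S3; the bucket name, prefix, and retention periods are placeholders for illustration, not recommendations:

```python
import boto3

# Sketch: a lifecycle rule that moves ageing objects to colder storage
# and eventually expires them, keeping hot (and billable) storage lean.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",            # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-old-logs",
                "Filter": {"Prefix": "logs/"},  # placeholder prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},  # cool storage after 90 days
                ],
                "Expiration": {"Days": 365},    # delete after a year (example only)
            }
        ]
    },
)
```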

Having backup and recovery processes in place is also key when moving workloads. To close the loop, easy data portability matters for disaster recovery too. In a critical incident like ransomware, the original environment, whether a cloud or an on-premises server, is typically not available for restoring damaged workloads from backup: it is often cordoned off as a crime scene, and it may still be compromised. To recover quickly and avoid costly downtime, it is sometimes necessary to restore workloads to a new, temporary environment, such as a different cloud.

As organizations struggle to manage their computing environments and avoid cybersecurity and financial surprises, it is important to constantly assess what data and applications are held and where they are kept. But to manage this and adjust as needed, companies must continue to build with portability in mind. In this way, companies can create a more agile and cost-effective cloud environment, and have an easier time recovering from disasters like ransomware.

By Rick Vanover, Senior Director of Product Strategy at Veeam
