Gartner has estimated that the average cost of downtime is $5,600 per minute – and rising. In spite of this, a large number of businesses still appear to lack a proper high availability (HA) strategy.
Unplanned downtime is a risk often exacerbated by multi-product, multi-vendor approaches, which in turn limit agility and availability within an organisation’s backup and recovery architecture. The best way to counteract the additional risk introduced by an excessive number of different products is to deploy a comprehensive data platform from a single vendor.
A recent IDC worldwide survey revealed that enterprises employing a single-platform approach to backup and recovery experienced up to 55% less downtime than those using multiple products and vendors. Alongside infrastructure simplification, higher IT productivity and reduced annual spending, single-vendor solutions can deliver faster, more reliable backup and restore operations.
These capabilities drive improved business outcomes and give organisations complete coverage for their data needs, removing the associated risk of unplanned downtime.
The Foundations for Effective HA Strategies
An effective HA strategy will account for the increasing need for data protection in fast moving multi-cloud, hybrid IT environments that require flexibility and security to deliver business agility. Data backup and protection are critical for HA, as is using a platform with flexibility as well as versatile features, such as deduplication in cloud environments, and virtual infrastructure-friendly licensing policies.
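Deduplication is worth a moment's illustration. The idea is that backup storage keeps only one copy of each unique block of data, which matters most in cloud environments where capacity is billed per byte. A minimal sketch (the fixed-size chunking and function names here are illustrative, not any vendor's implementation):

```python
import hashlib

def dedupe_chunks(data: bytes, chunk_size: int = 4096) -> dict:
    """Split data into fixed-size chunks and keep one copy per unique chunk.

    Returns a content-addressed store mapping SHA-256 digest -> chunk, so
    identical chunks are stored exactly once regardless of how often they
    appear in the backup stream.
    """
    store = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

# A highly redundant payload: 100 repeats of the same 4 KiB block
payload = b"A" * 4096 * 100
store = dedupe_chunks(payload)
print(len(store))  # 1 unique chunk stored instead of 100
```

Production systems typically use variable-size (content-defined) chunking so that inserting a byte does not shift every subsequent chunk boundary, but the storage-saving principle is the same.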
Enterprises also need flexibility to ensure HA and reduce risk across data infrastructure, necessitating in-built adaptability as well as a wide range of cloud storage options to accommodate these requirements.
Additionally, companies must now meet a growing number of specific regulatory requirements, such as GDPR, which mandates data protection by design, data breach notification within 72 hours, data minimisation, and provisions for data transfers and portability.
How Businesses Can Integrate Cyber Security into an Overall HA Strategy
Cyber attacks are another increasingly significant cause of downtime. When a cyber attack gets through security defences, an organisation must have threat detection techniques applied across its data environment to flag anomalies in standard operations and behavioural patterns.
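One common anomaly signal in a data environment is the size of backup jobs: ransomware that mass-encrypts files tends to balloon the next incremental backup. A minimal sketch of that idea, using a simple standard-deviation test (the thresholds and function names are illustrative assumptions, not a real product's detection logic):

```python
import statistics

def is_anomalous(history_gb, new_size_gb, threshold=3.0):
    """Flag a backup-job size that deviates from the historical mean by
    more than `threshold` standard deviations -- a crude signal that
    something (e.g. mass encryption) has changed the data's behaviour."""
    mean = statistics.mean(history_gb)
    stdev = statistics.pstdev(history_gb) or 1e-9  # avoid divide-by-zero
    return abs(new_size_gb - mean) / stdev > threshold

history = [120, 118, 121, 119, 122, 117]  # nightly incremental sizes in GB
print(is_anomalous(history, 123))  # False: within normal variation
print(is_anomalous(history, 450))  # True: sudden spike worth investigating
```

Real detection systems track many such signals (change rates, entropy, access patterns) and use far more robust statistics, but the principle is the same: establish a baseline, then alert on deviation.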
A business is responsible for not only its own data, but often that of its customers too. This means it must have the correct processes and mechanisms in place to ensure that data is properly backed-up and available for recovery on-demand. Businesses need to practice suitable disaster recovery and business continuity strategies that are fully audited, and can provide encryption of data, both at rest and in transit in order to maximise data security.
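A companion to encrypting backups at rest is making them tamper-evident, so a restore can be trusted and audited. A minimal standard-library sketch of that integrity check using an HMAC tag (the function names and 32-byte key are illustrative assumptions; real platforms combine this with full encryption of the data itself):

```python
import hashlib
import hmac
import os

def seal_backup(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any tampering with the backup
    at rest is detectable before the data is restored."""
    return data + hmac.new(key, data, hashlib.sha256).digest()

def verify_backup(sealed: bytes, key: bytes) -> bytes:
    """Check the tag and return the original data, or fail loudly."""
    data, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("backup integrity check failed")
    return data

key = os.urandom(32)                      # kept separately from the backup
sealed = seal_backup(b"customer records", key)
assert verify_backup(sealed, key) == b"customer records"
```

`hmac.compare_digest` is used rather than `==` to avoid timing side channels; the key must be stored apart from the backup itself, or the check proves nothing.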
With last year’s global WannaCry, Petya and NotPetya ransomware attacks hitting more than 150 countries, a single-platform approach can help organisations back up data easily, automatically and securely, reducing the risk of data loss from breaches and ensuring consistently high levels of data availability.
The Impact of Cloud on HA Strategies
When it comes to the impact of cloud with regard to availability, a carefully considered approach is necessary where integration is concerned. Choosing a platform that can work across different cloud solutions can help to streamline and simplify data availability in the cloud.
Another threat to availability is if a cloud service provider goes down, or if the online connection to the cloud service is interrupted for an extended period of time. Migrating to the cloud may be an appealing option due to its relative ease and accessibility, but there are also potential challenges to face when it comes to restoring data and ensuring its availability.
HyperScale, multi-cloud and service solutions provide clear benefits in the ability to quickly adapt systems to changing conditions and make data available regardless of time or (even virtual) location.
Why a DR Plan is Important Too
Despite the acknowledgement that a company’s data is its core asset, more than 60 percent of companies do not have a fully documented disaster recovery plan covering both man-made and natural disasters.
Best practice disaster recovery plans can include off-site data recovery at a secondary location, working with a trusted partner, or even a dedicated disaster recovery team. However, it is equally important that businesses continuously update their disaster recovery plan and test it by regularly running through different scenarios with their teams.
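The testing step above is the part most often skipped, yet it is straightforward to automate in miniature: periodically restore a backup to a scratch location and verify it matches the source. A minimal sketch, where the copy stands in for a real restore job and the file names are purely illustrative:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def restore_drill(source: Path, backup: Path) -> bool:
    """Simulated DR drill: recover the backup into a scratch area and
    confirm the restored copy is byte-identical to the source."""
    sha = lambda p: hashlib.sha256(p.read_bytes()).hexdigest()
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / source.name
        shutil.copy2(backup, restored)  # stand-in for a real restore job
        return sha(restored) == sha(source)

# Example drill against throwaway files
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "records.db"
    src.write_bytes(b"customer records")
    bak = Path(d) / "records.db.bak"
    shutil.copy2(src, bak)
    print(restore_drill(src, bak))  # True: restore verified
```

A real drill would also measure restore time against the recovery time objective and exercise application failover, not just file integrity, but even this much catches silently corrupted or incomplete backups before they are needed in anger.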
Essentially, achieving an effective HA strategy requires more than just awareness. It also involves comprehensive tools that span a wide range of hybrid IT environments, and a continued recognition of the need to test, retest, evolve and adapt the tools in use, at all levels of the business, not just the IT department.
The post What Should a Proper High Availability Strategy Look Like? appeared first on Computer Business Review.