In the frenzy of getting started with your cloud journey, keep in mind that not every application is a good fit for the cloud. Armed with this knowledge, you can take the right steps in your cloud migration journey.
We have all heard about the numerous benefits small and large enterprises gain by moving to the cloud: scalability, flexibility, security, and economics. What is not as well reported are the applications that are not ideal candidates for the cloud, or that simply cannot be moved there at all.
Until a few years ago, the primary reasons some applications were not considered good candidates for the cloud were security and availability concerns. The initial concerns related to the possibility of data leaking to unauthorised parties. This concern was addressed by the wide choice of security and encryption options for data in transit and at rest, while the shared responsibility model clearly draws the line between the cloud service provider's responsibilities and the customer's, using the tools provided by the platform.
Then came concerns about data residency: regulatory bodies feared losing control of, and access to, data stored outside the country. This concern was addressed by cloud service providers expanding their physical footprint into other geographies, an expansion that will continue, prioritised by market size.
The availability concerns initially centred on whether the cloud service provider had enough redundancy in power and networking, and enough shielding from localised disasters. The market leader AWS addressed this by introducing Availability Zones: sets of data centers located within a radius of a few miles, on different flood plains, powered by electricity feeders on different paths, and interconnected with dark fiber. A Multi-AZ deployment on AWS solves this issue.
Despite all of this, there are still good reasons why certain applications should not be considered for the cloud, at least not currently.
Here are those scenarios, so you can rule them out, or at least reconsider them, before you start your cloud journey:
1. Low Latency Applications:
Many legacy applications were written for traditional, dedicated, and closely connected infrastructure. Such designs resulted in extremely chatty applications that exchange huge amounts of data, either with disk or with other directly connected devices. When such an application is brought onto the cloud, performance can lag severely, because cloud infrastructure by design works on loosely connected systems. In most cases, the block devices (virtual disks) are connected to the server instance over a high-speed fiber backbone, which is fast, but not fast enough compared to directly attached storage. Bottlenecks can quickly arise, leading to high I/O times and eventually to a deterioration or complete unavailability of the application.
You may not even know whether such applications exist in your ecosystem. For this reason, use performance analysis tools to observe the IOPS the applications require, tracking the IOPS consumed over a full cycle that covers the peaks in application demand.
You will also need to observe the throughput between the different devices that the application connects to, such as databases or other I/O devices.
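Any standard performance tool (iostat, sar, or your monitoring platform) will report these numbers. As an illustration of what to capture, here is a minimal sketch, assuming a Linux host and the standard /proc/diskstats layout, that samples one block device to estimate its IOPS and throughput; the device name and interval are placeholders:

```python
#!/usr/bin/env python3
"""Rough IOPS/throughput sampler based on /proc/diskstats (Linux).

A minimal sketch: production monitoring should use proper tools
(iostat, sar, CloudWatch, etc.) over a full demand cycle.
"""
import time

DEVICE = "sda"          # assumption: the block device backing the application
SECTOR_BYTES = 512      # /proc/diskstats counts sectors as 512-byte units
INTERVAL = 5            # seconds between samples

def read_counters(device):
    """Return (reads completed, writes completed, sectors read, sectors written)."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                # Field layout per the kernel's iostats documentation.
                return int(fields[3]), int(fields[7]), int(fields[5]), int(fields[9])
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

r1, w1, sr1, sw1 = read_counters(DEVICE)
time.sleep(INTERVAL)
r2, w2, sr2, sw2 = read_counters(DEVICE)

iops = ((r2 - r1) + (w2 - w1)) / INTERVAL
throughput_mb = ((sr2 - sr1) + (sw2 - sw1)) * SECTOR_BYTES / INTERVAL / 1e6
print(f"{DEVICE}: {iops:.0f} IOPS, {throughput_mb:.1f} MB/s over {INTERVAL}s")
```

Compare the observed peaks against what the target cloud volume type is rated to sustain; a large gap is an early warning that the application belongs in this category.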
2. Proprietary Hardware Platforms:
Cloud service providers support OS instances built for the Intel x86/x64 platform. Enterprises may currently be using proprietary hardware such as IBM pSeries servers running AIX, or HP servers based on Itanium processors running HP-UX. If your data center application runs on custom hardware, it may not be possible to virtualise it; and if it cannot be virtualised, it cannot be re-hosted in the cloud.
There are various classes of applications requiring custom/proprietary hardware:
a. Applications running on mainframes, which cannot be migrated without being modernised or refactored for the cloud.
b. Applications running on engineered systems such as Oracle Exadata, Oracle Exalytics, and Oracle Exalogic, which cannot be re-hosted.
c. Applications dependent on a physical machine (e.g. a scanner requiring a specific MAC address, an app requiring specialised hardware drivers that cannot be virtualised, or an app requiring a USB token to validate its licence).
d. Applications that require the purchase of an appliance (e.g. a load balancer or an intrusion detection/prevention system), which may require further investigation to determine whether the vendor provides a virtual appliance alternative.
While the above classes of applications cannot be re-hosted in the cloud due to hardware dependencies, check whether your appliance vendor provides a software version in the cloud marketplace. For example, F5 BIG-IP products are sold as appliances with hardware optimised to deliver security and performance, yet F5 also makes BIG-IP Virtual Edition for AWS available in the AWS Marketplace, where you can Bring Your Own License (BYOL) to run it in the cloud.
Likewise, some software vendors who traditionally provided a USB key for licensing now provide software tokens as well.
3. Applications Running on Proprietary/Custom Operating Systems:
Applications running on proprietary operating systems that are not supported in the cloud are not good candidates for re-hosting; IBM AIX, HP-UX, and macOS are some examples. Like applications running on custom hardware, these applications may require modernising or refactoring for the cloud. A quick screen of your inventory, as sketched below, can flag both the hardware and the OS blockers early.
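A first pass over your estate can catch both of these blockers (proprietary hardware and unsupported operating systems) before any deeper analysis. Below is a minimal sketch assuming you can export your inventory or CMDB to a CSV with hypothetical hostname, architecture, and os columns; the block lists are illustrative, not exhaustive:

```python
#!/usr/bin/env python3
"""Screen a server inventory for hosts that cannot be re-hosted as-is.

A minimal sketch; the CSV columns (hostname, architecture, os) and the
block lists below are illustrative assumptions, not a definitive rule set.
"""
import csv
import sys

# Architectures the major clouds do not run natively: POWER, Itanium, SPARC, mainframe.
NON_X86_ARCHS = {"ppc64", "ppc64le", "ia64", "sparc64", "s390x"}
UNSUPPORTED_OS = {"aix", "hp-ux", "solaris", "z/os", "macos"}

def screen(inventory_csv):
    """Yield (hostname, reason) for every host that needs modernising or refactoring."""
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            arch = row["architecture"].strip().lower()
            os_name = row["os"].strip().lower()
            if arch in NON_X86_ARCHS:
                yield row["hostname"], f"non-x86 hardware ({arch})"
            elif os_name in UNSUPPORTED_OS:
                yield row["hostname"], f"unsupported OS ({os_name})"

if __name__ == "__main__":
    for host, reason in screen(sys.argv[1]):
        print(f"{host}: not re-hostable as-is: {reason}")
```

Hosts flagged by a screen like this are candidates for modernisation or refactoring rather than a lift-and-shift.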
4. Application Clusters Using Shared-Disk Architecture or IP Multicast for Cluster Communication:
Clustering is typically used to improve availability and scalability. Application clusters that use a shared-disk architecture (as opposed to a shared-nothing architecture), or IP multicast for communicating with other nodes in the cluster, are not suitable for cloud migrations.
For example, you cannot currently attach an Elastic Block Store (EBS) volume in AWS to more than one EC2 instance at a time, which implies that an application cluster using a shared-disk architecture is not suitable for cloud migration as-is. AWS has launched Elastic File System (EFS) in some regions, which behaves like an NFS share; you will, however, need to check its viability, given possible I/O bottlenecks and file-locking issues.
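To see the single-attach constraint first-hand, here is a minimal boto3 sketch (the volume ID, instance ID, and region are placeholders, not real resources) that attempts to attach an already-attached EBS volume to a second instance and catches the resulting error:

```python
"""Demonstrate that a standard EBS volume attaches to only one EC2 instance.

A minimal boto3 sketch; the volume and instance IDs below are placeholders.
"""
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_ID = "vol-0123456789abcdef0"      # assumed already attached to instance A
SECOND_INSTANCE = "i-0fedcba9876543210"  # instance B in the same Availability Zone

try:
    ec2.attach_volume(VolumeId=VOLUME_ID,
                      InstanceId=SECOND_INSTANCE,
                      Device="/dev/sdf")
except ClientError as err:
    # AWS rejects the second attachment with a VolumeInUse error.
    if err.response["Error"]["Code"] == "VolumeInUse":
        print("Volume already attached elsewhere: shared-disk clustering "
              "cannot be built on a plain EBS volume.")
    else:
        raise
```

If your cluster design assumes two nodes mounting one block device, this error is exactly what you will hit.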
In AWS and Azure, the network does not natively support multicast; therefore, an application cluster using multicast for cluster communication is not suitable for cloud migration.
An example of an application cluster using a shared-disk architecture is Oracle Real Application Clusters (RAC). The closest equivalent on AWS would be RDS deployed in a Multi-AZ configuration; however, you will be paying for CPU and storage on both servers, so cost may be an important consideration as well.
WebLogic or JBoss Application Server clusters configured with IP multicast for communicating with other nodes in the cluster are not suitable for cloud migrations. You would first want to switch cluster messaging and communications to an alternative to multicast, such as unicast, before considering migrating these clusters to the cloud.
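Before committing to that rework, it helps to confirm what your target network actually does with multicast traffic. Here is a minimal sketch, assuming Python 3 on two hosts in the same subnet (the group and port below are arbitrary placeholders): run it with recv on one host and send on the other.

```python
#!/usr/bin/env python3
"""Quick check of whether a network segment carries IP multicast.

A minimal sketch: run `multicast_check.py recv` on one host, then
`multicast_check.py send` on another host in the same subnet.
"""
import socket
import struct
import sys

GROUP, PORT = "224.1.1.7", 5007   # assumption: an unused multicast group/port

def send():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(b"multicast-probe", (GROUP, PORT))
    print(f"probe sent to {GROUP}:{PORT}")

def recv():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(30)
    try:
        data, addr = sock.recvfrom(1024)
        print(f"received {data!r} from {addr}: multicast works here")
    except socket.timeout:
        print("no probe received: this network likely does not carry multicast")

if __name__ == "__main__":
    recv() if sys.argv[1:] == ["recv"] else send()
```

On a pair of EC2 or Azure VMs, the receiver will typically time out, confirming that the cluster must be reconfigured for unicast before migration.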
About InterPole
InterPole was established in 1996 and has been engaged in web hosting, email, and the management of IT infrastructure. InterPole pioneered Virtual Private Servers in 2004 and Cloud Hosting in 2008. Over the years, InterPole has worked with over 6,200 mid-sized businesses and startups, and has assisted them in their journey towards the adoption of modern technologies through the Internet. InterPole is a Standard Consulting Partner of Amazon AWS and Microsoft Azure. Through these partnerships, InterPole provides managed AWS services and maintains a team of engineers who are trained and certified on the specific cloud platforms. This benefits companies in defining their cloud strategy and making a well-planned journey, reliably and cost-effectively.