Cloud Migration and Availability Index

Infrastructure Migration

The Cloud Transition

The Cloud may not be in your face, but it is pervasive, and it is gradually taking over many aspects of our traditional IT systems. Companies are not yet making wholesale transitions from existing data-centres and on-premise assets to the Cloud. However, when infrastructure reviews occur, whether to upgrade or to add new resources, the Cloud beckons. Questions about total cost of ownership (TCO), scalability, time-to-market and so on will influence decision makers. For each one of these, the Cloud offers a compelling alternative. It is likely that in the next two decades, only a few companies will still maintain their infrastructure on premise.

The Status Quo - On-premise Deployment Design
Let us assume then that ACME plc has made a decision. Business has been persuaded, either by hype or fundamentals, that the Cloud is the strategic target. Architectural leadership has been mobilised and a decision taken to draw up a roadmap for Cloud adoption. What next? In this article, we look at four primary considerations that architects must carefully examine when migrating to the Cloud. These are: sizing, failover, scaling and access. Everything else builds on the foundation that is synthesised from these four dimensions.

Sizing: What Specification of Infrastructure Should be Provisioned

Statistics are invaluable. Node sizing should be informed by the existing usage profile. It may be acceptable to guess at first, but it saves a great deal of time to know in advance. For each Cloud instance, the node(s) provisioned should be selected to meet the latency and throughput required to support 120% of the anticipated production load. The sizing could be either singular or plural: singular, as in one node with enough capacity to bear the entire load; or plural, i.e. a number of nodes that can, between them, satisfy demand. Either way, the baseline should exceed the present need.

Resizing in the Cloud may be quick and easy, but the decision making might not be so. If in doubt, over-provision. It is easy to downsize later, and the organisation avoids the risk of loss of business due to performance or availability problems. Default sizing is simple, i.e. geographically localised and singular. But there could be exceptional scenarios where geographic distribution must be combined with plural sizing. More about that later.
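
To make the sizing arithmetic concrete, here is a minimal sketch in Python, assuming throughput is the binding constraint and that per-node capacity is already known from existing statistics (the figures are purely illustrative):

    import math

    def baseline_nodes(peak_tps: float, node_tps: float, headroom: float = 1.2) -> int:
        """Nodes needed to carry 120% (the headroom factor) of the anticipated peak load."""
        return max(1, math.ceil(peak_tps * headroom / node_tps))

    # e.g. 900 requests/s anticipated at peak, 400 requests/s per node -> 3 nodes
    print(baseline_nodes(900, 400))

A result of 1 corresponds to singular sizing; anything greater implies plural sizing.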

Failover: How is System Failure Mitigated

Given proper sizing, as above, the next dimension to consider is failure and recovery. If or when a properly sized machine fails, what happens next? Let’s take the simple approach first and revisit it later. Node(s) should be distributed across Cloud locations, so that the failure of one node does not result in service unavailability. Service recovery should occur in a different Cloud location. This reduces the likelihood of contagion from the original failure location while maintaining service continuity. An interesting aspect of failure management is implicit resilience, i.e. what measure of interruption can our infrastructure handle?

The complement of the nodes that provide a service across Cloud location(s) is a resource group. The group’s resilience is the number of simultaneous failures it can absorb while still meeting SLAs. The higher that number, the larger the number of nodes and Cloud locations involved. Resiliency has a price tag though: more (virtual) machines multiply cost and increase the percentage of idle/redundant resources in the Cloud platform.
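
As a rough, illustrative sketch (the numbers and the layout rule are assumptions, not a prescription), the resilience count can be translated into nodes and Cloud locations like this:

    def resilient_group(baseline: int, resilience: int) -> dict:
        """Nodes and Cloud locations for a group that tolerates `resilience` simultaneous failures."""
        nodes = baseline + resilience              # enough survivors remain to carry the baseline load
        locations = min(nodes, resilience + 1)     # spread so that losing one location is survivable
        return {"nodes": nodes, "locations": locations}

    print(resilient_group(baseline=3, resilience=1))   # {'nodes': 4, 'locations': 2}

Each extra unit of resilience adds nodes (and usually locations) that sit idle in normal operation, which is where the price tag comes from.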

Scaling: How are Additional Resources Provisioned

As resource demand grows organically, or due to unexpected spikes, infrastructure should respond, automagically! Traditionally, scaling was a bureaucratic and technical journey. With the Cloud, scaling is merely a change of configuration. Where singular sizing has been used, another node of the same size could be added. This is horizontal scaling. Adding nodes of the same (singular) size multiplies capacity; the growth is linear, but there is no guarantee of a commensurate increase in demand or resource usage. There is an alternative design that is more efficient: programmatic vertical scaling. A simple algorithm can be applied to scale resources automatically, up or down, by a fraction rather than a multiple.

Cloud platforms record a raft of events about the resources deployed. Customers can tap into these events to scale in response to demand. On AWS, a CloudWatch alarm can trigger a Lambda function, which in turn effects a rolling upgrade on EC2 nodes, upscaling node size before autoscaling kicks in. By leveraging statistics for baseline sizing and monitoring demand, we can guarantee day-zero availability and a decent response in infrastructure provisioning, increasing capacity as demand grows and shrinking it if or when spikes even out.
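
As a minimal sketch of the AWS pattern described above, assuming a single EC2 node identified by an environment variable and a target instance type chosen elsewhere (both names are illustrative), the Lambda function triggered by the CloudWatch alarm could look like this:

    import os
    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        """Invoked by a CloudWatch alarm: stop the node, bump its instance type, start it again."""
        instance_id = os.environ["INSTANCE_ID"]               # assumed configuration
        target_type = os.environ.get("TARGET_TYPE", "t3.large")
        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
        ec2.modify_instance_attribute(InstanceId=instance_id,
                                      InstanceType={"Value": target_type})
        ec2.start_instances(InstanceIds=[instance_id])
        return {"resized": instance_id, "to": target_type}

In a real rolling upgrade the function would iterate over the nodes behind a load balancer one at a time; the sketch resizes a single node just to keep the idea visible.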

Access: How do Clients Connect to Cloud Services

The fourth dimension is access. As on-premise, so also with the Cloud: there is no value in having resources that are locked away from everyone and everything. Our clients need access to our Cloud-based services, as do the partners involved in our service chain. Unless we are migrating all at once, it is likely that we will also need access to some on-premise infrastructure. Our design must provide the access paths and levels, as well as the constraints that keep authorised clients within bounds and everyone else out. To achieve this we would use such facilities as the Virtual Private Network (VPN), the load balancer, firewalls and others. Beyond the basics of who’s in and who’s out though, there is a service that we must provide to clients and partners.

The key here is to be simple and unobtrusive; placing minimal burdens on clients, partners and our applications/services.

By default we would use load balancers to decouple clients from service providers. Cloud load balancers spread requests among the available service providers. They are not geography-specific and they simplify access and security for clients and service providers. Our Cloud landscape remains elegant and uncomplicated, with a single entry point for each service. One consideration could, however, force radical change to this default: Geographic Affinity (GA). Geographic affinity is a requirement to pin clients to a specific physical/geographical service provider. It could be zonal or regional. GA is often driven by regulatory, localisation, performance or security concerns.

But some GA drivers can conflict. For example, performance (latency-sensitive applications) might suffer where regional localisation is required. Invariably, GA tilts our architecture towards a plurality of nodes and complicates the management of performance and the synchronisation of state. Architects must balance these, sometimes conflicting, needs to avoid creating virtual silos in the Cloud.

[Figure: Cloud Deployment Design]
[Figure: Cloud Chaos]

The Availability Index

So far we have been working forwards from an existing status quo to a target architecture. We have also adopted an exclusively technical perspective. It would be better to take a business perspective and approach our context top down. We should ask: what infrastructure is needed to support our business vision, now and into the near future? What level of availability is enough to provide a service that exceeds client needs? In asking these questions, we encounter a new concept: “the Granularity of Perception”. This can be described as the smallest disruption to our service(s), whether microseconds, milliseconds, seconds, minutes, or more, that clients can perceive. Simply put: how slowly can we blink before our clients start to notice that our eyes have moved. As this number (the granularity) increases, the required level of availability decreases. The table below provides a rough guide, with descriptions.

Availability Index | Description
1 | Cluster enabled, auto recovery, no fail 24×7, latency intolerant, high frequency, geography affinity
3 | Cluster enabled, auto recovery, no fail 24×7, latency intolerant, medium frequency
5 | Cluster enabled, auto failover, business hours, latency tolerant, low frequency
7 | Non-clustered, manual failover, business hours, latency tolerant, low frequency

The goal of architects should be to design a Cloud platform that delivers a granularity finer than the perception of clients. Using the table above as a guide, architects should play out scenarios from the service portfolio against each index, starting from the lowest and working to the highest. Once the required availability index is determined, it should be relatively easy to identify the dimensions needed to support it.
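
One way to capture the outcome is a simple lookup from availability index to a deployment profile covering the four dimensions. The profile values below are illustrative assumptions, not prescriptions:

    PROFILES = {
        1: {"sizing": "plural", "locations": 3, "failover": "automatic", "scaling": "programmatic", "geo_affinity": True},
        3: {"sizing": "plural", "locations": 2, "failover": "automatic", "scaling": "programmatic", "geo_affinity": False},
        5: {"sizing": "singular", "locations": 2, "failover": "auto-failover", "scaling": "horizontal", "geo_affinity": False},
        7: {"sizing": "singular", "locations": 1, "failover": "manual", "scaling": "manual", "geo_affinity": False},
    }

    def profile_for(required_index: int) -> dict:
        """Pick the defined profile that is at least as stringent as the requirement."""
        eligible = [i for i in PROFILES if i <= required_index]
        return PROFILES[max(eligible)] if eligible else PROFILES[1]

    print(profile_for(4))   # falls back to the index 3 profile

Such a mapping can later be turned into the infrastructure templates discussed in the conclusion.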

Conclusion

As organisations embark on the journey of digital transformation, one early change is often Cloud adoption. This is because the Cloud provides a catalysing medium in which many solutions are easier and quicker to provision. In moving from on-premise/data-centre resources to the Cloud, architects must resist the temptation to simply lift-and-shift. Rather, the digital transformation journey should re-examine the fitness-for-purpose of existing solutions, platforms and infrastructure. There is a famous quote by Jack Welch, former CEO of General Electric: “If the rate of change on the outside exceeds the rate of change on the inside, then the end is near.” In a rapidly evolving, globalised economy, business agility is becoming a precondition for survival.

The availability index is a simple, logical, technology-agnostic technique for conceptual reasoning about a Cloud landscape.  Determination of the availability index helps to reveal shared profiles for similar subsystems.  The profiles are logical and help estimate the resources required to support a genre of subsystem.  Each logical profile can then be mapped to specific Cloud infrastructure and captured as customisable templates.  The logical profiles provide architects with a starting point for solution designs.  The infrastructure templates serve as a baseline for DevOps teams.  Each artefact is likely to go through a number of evolutions.  However, it is vital that both views are kept in sync at all times.

Organisations that leverage this approach will see a marked improvement in the consistency of infrastructure designs.  Other benefits include faster turnaround of solutions, and systems that balance technical capability with business needs and aspirations. Architecture teams that leverage the availability index position their organisations for superior agility and competitiveness in the global economy.


Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com

1000 Cloud Servers: Start With One Click

[Hybrid] Cloud Infrastructure

Cloud and Open Source

The arrival of Cloud providers and Infrastructure-as-a-Service (IaaS) has opened up options and possibilities for solution architects.  Our company is working with a client on a major transformation initiative. Leveraging Cloud IaaS and open-source integration platforms, together we have explored options, built competence, and delivered incremental solutions while keeping costs to a minimum.  Without Cloud IaaS and open source, this freedom of expression in solution architecture would have been impossible.  Just imagine justifying a multi-tier, multi-server solution to the CFO when one of the key drivers has been cost control!


The Basic Idea

In its most primitive expression, our client wanted a public Application Programming Interface (API) layer to abstract access to an Integration layer, which in turn connected with all their internal repositories and partner systems to provide services.  The image that follows provides an illustration.  It appears quite straightforward and simple.
[Figure: Draft Infrastructure]

The API layer provides a simple Representational State Transfer (REST) interface as well as security.  It also maintains logs that can be analysed for insights into client behaviour and the usage/performance of services.  The Integration layer serves as an Enterprise Service Bus (ESB), connecting to databases, FTP and/or file servers, as well as internal and partner web services.  In addition, it manages the interactions between all participating systems in the enterprise and ensures that valuable services are made available to the API layer.


Enter Cloud (AWS/Azure) and Open Source (WSO2)

The traditional route would have been to procure/secure access to servers in a data-centre or in-house server-room and buy licenses from a vendor.  That would have meant a lead time of several weeks or months, to negotiate the price of licenses and consultancy, arrange for servers and networking, and to secure and disburse requisite financing.  With Cloud and open-source software, upfront costs were near-zero.  The greatest resource demand was the effort required to architect the Cloud infrastructure and to create the code to build, populate and operate it.


Building the Foundation

There were many options for building the networking and computing instances.  We chose Kubernetes.  Kubernetes is well established and provides abstractions that make it easy to switch Cloud providers.  Using the analogy of a house for illustration: Kubernetes builds the shell of the house, setting up the rooms, corridors, and spaces for windows and doors.  It keeps a record of each component, and notifies all other components if there has been a change to any one of them.  In our use case, Kubernetes creates a private network in the Cloud (the cluster), adds compute instances, load balancers, firewalls, subnets, DHCP servers, DNS servers, etc. Kubernetes also provides a dynamic registry of all these components that is kept up to date with any changes in real time.
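
As a small illustration of that dynamic registry, here is a hedged sketch using the official Kubernetes Python client (it assumes credentials for the cluster are already configured on the machine running it):

    from kubernetes import client, config

    config.load_kube_config()                      # use the cluster credentials configured locally
    v1 = client.CoreV1Api()

    # The cluster keeps a live record of its own parts; list the nodes and services it knows about.
    for node in v1.list_node().items:
        print("node:", node.metadata.name)
    for svc in v1.list_service_for_all_namespaces().items:
        print("service:", svc.metadata.name, svc.spec.cluster_ip)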


The First Change: Redundancy

In the past, vertical scaling with large singleton servers was standard.  These days, horizontal scaling with smaller machines (compute instances) that adjust to changing needs is the norm.  This new approach also provides fail-safety: if one machine fails, there will be other(s) to take up the load. Fortunately this is a core feature of Kubernetes.  The cluster monitors itself to ensure that all the declared components are kept alive.  If/when a component fails, the management services within the cluster ensure that it is replaced with an identical component.  For this reason, rather than have one instance of each component, two or more are created and maintained.
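
The two-or-more instances declared above correspond to what Kubernetes calls a replica count. A minimal sketch with the Python client (the image name and namespace are assumptions for illustration) looks like this:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="api-layer"),
        spec=client.V1DeploymentSpec(
            replicas=2,                                        # keep two identical instances alive
            selector=client.V1LabelSelector(match_labels={"app": "api"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "api"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="api", image="wso2/wso2am:4.2.0"),  # assumed image
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

If either instance dies, the cluster's management services create a replacement to restore the declared count.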


The Second Change: Dynamic Delivery

We could have chosen to install our entire technology stack (software) on each compute instance at creation.  That would be slow though, and it could mean that the instances would need to be restarted or swapped out more often as memory and/or disk performance degraded.  Instead, we used Docker to build Containers that are delivered to the instances. The Docker Containers borrow memory, CPU and other resources from the compute instance at the point of deployment.  Containers can be swapped in and out, and multiple Containers can run on the same compute instance.  When a Container is stopped or removed, the block of borrowed resources is returned to the compute instance.   A Container can be likened to a prefabricated bathroom; it is built offsite and plumbed in at delivery.  Unlike a technology stack that is built from scratch over minutes or hours, a Container is usually ready for access within a few seconds or minutes of its deployment.
[Figure: VM Infrastructure]
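
For illustration, a hedged sketch using the Docker SDK for Python shows a Container being delivered with a fixed slice of borrowed resources (the image name, port and limits are assumptions):

    import docker

    client = docker.from_env()

    # Deliver a pre-built Container; the memory and CPU slices are borrowed from the
    # compute instance and handed back when the Container is stopped or removed.
    container = client.containers.run(
        "wso2/wso2am:4.2.0",              # assumed image for the API/integration stack
        detach=True,
        ports={"8243/tcp": 8243},         # expose the gateway port on the host
        mem_limit="2g",
        nano_cpus=2_000_000_000,          # two vCPUs
        restart_policy={"Name": "unless-stopped"},
    )
    print(container.short_id, container.status)

The Container is typically serving traffic within seconds, which is what makes swapping Containers in and out so cheap.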


Implicit Change: Clustering

When more than one instance of a given component is running at the same time, the complement of them all is referred to as a cluster.  Components running in a cluster have peculiar needs, one of which is the sharing of state (status).  State is a snapshot of the world from the computer’s perspective.  In a cluster, all component instances must share the same configuration and always operate on the same data.  To facilitate this, we introduced two repositories: a Network File System (NFS) for sharing configuration details, and a database for sharing operational data.  Kubernetes does not create these resources.  We used Terraform, another abstraction technology, to create the NFS and a replicated multi-zone database.  Terraform creates these in two private subnets within the private network created by Kubernetes.  Having created the NFS and database though, there was a need to configure and populate them with the necessary data upfront.  While Terraform could be manhandled to achieve this, it is not really its raison d'être.  Another tool is better suited to operating at a fine level of detail on remote machines: Ansible.  We created Ansible playbooks to configure users, directories and files on the NFS, and to create instances, users and tables in the database.
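
The hand-off between the two tools can be driven from one small wrapper. The sketch below assumes hypothetical directory, inventory and playbook names; only the command-line flags are standard:

    import subprocess

    # Terraform lays down the NFS and the replicated multi-zone database...
    subprocess.run(["terraform", "init"], cwd="infra/shared-state", check=True)
    subprocess.run(["terraform", "apply", "-auto-approve"], cwd="infra/shared-state", check=True)

    # ...then Ansible configures users, directories and files on the NFS
    # and creates instances, users and tables in the database.
    subprocess.run(
        ["ansible-playbook", "-i", "inventory/cloud.ini", "playbooks/nfs-and-db.yml"],
        check=True,
    )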


Implicit Change: Discovery

The next challenge that our architecture threw up was discovery.  Within our network, there was an API layer and an EI layer.  In each of these layers there could be several compute instances, and on each compute instance there could be one or more Docker Containers.  Beyond the API and EI layers, there were also databases and a network file system.  How would clients of our platform gain access to our components, and how would the machines in one layer find those in another layer?  The Kubernetes configuration includes ClusterIP services that provide a single DNS name that resolves to all the compute instances for a given component.  For example, any API Container could be reached using a DNS name such as cnt.api.example.com.  Clients of our platform could therefore use a DNS name to connect to an API Container, and any API Container could likewise use a single DNS name to communicate with a Docker Container in the EI layer.  Both the API and EI layers use a DNS name to communicate with the NFS and the database.  The IP addresses of the underlying components might change, but the DNS name is constant for the life of the platform, giving ease of discovery and stability.
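
To see discovery at work from a client's point of view, a quick sketch resolves the stable service name to whichever instances currently sit behind it (the DNS name follows the example above; the addresses shown are made up):

    import socket

    # One constant DNS name; the set of addresses behind it can change at any time.
    addresses = {
        info[4][0]
        for info in socket.getaddrinfo("cnt.api.example.com", 443, proto=socket.IPPROTO_TCP)
    }
    print(addresses)   # e.g. {'10.0.12.7', '10.0.34.9'}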


Tying it all Up

It is all well and good that the components in the Cloud are in place and can talk to each other.  However, most of our operational systems are still on-premise; how do we join things up?  We created a VPN connection between the network in the Cloud and our on-premise network, and set up firewall rules to allow access to and from the Cloud.  The ClusterIP services were also revised to permanently maintain two static IP addresses.  This makes it easy to integrate them with on-premise DNS servers and thereby open them up to access from clients.  Below is an image that shows what it all looks like.
[Hybrid] Cloud Infrastructure

The Thousand Servers

All of these components, configurations, and customisations have been documented as scripts, configuration files and resources. The creation of a Cloud environment is reduced to running a script with two parameters: the name of the environment and the desired Cloud subnet.  By integrating this script into an on-premise CI/CD server, it is now possible to spin up as many Cloud environments as we like; at the click of a button.
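
As a sketch of what that single entry point might look like (the wrapped provisioning script is hypothetical; only the two parameters are fixed by the design):

    import argparse
    import ipaddress
    import subprocess

    def main() -> None:
        parser = argparse.ArgumentParser(description="Spin up a complete Cloud environment")
        parser.add_argument("environment", help="name of the environment, e.g. dev or uat")
        parser.add_argument("subnet", type=ipaddress.ip_network, help="desired Cloud subnet, e.g. 10.42.0.0/16")
        args = parser.parse_args()

        # Delegates to the documented scripts (Kubernetes, Terraform, Ansible) in sequence.
        subprocess.run(["./provision.sh", args.environment, str(args.subnet)], check=True)

    if __name__ == "__main__":
        main()

Hooked into the CI/CD server, this is the one click behind the thousand servers.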

All this is quite high-level and simplified; in the next instalment (One Thousand Servers: Start with a Script), I intend to drop down to eye level and open up some of the details of how we implemented all of this.  Watch this space for the activation of the link above.


Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 793 920 3120

AWS vs Oracle Cloud – My Experience

Oracle Cloud

Preamble

Having had a good impression from my use of Amazon Web Services (AWS), I decided to take a look at Oracle Cloud. AWS is of course the market leader, but Oracle, Microsoft et al. have joined the fray with competing services. What follows is nothing like a Gartner report; rather, it is my personal experience of the Oracle and AWS services as an end user. AWS is already well known and is perhaps the benchmark by which all others are presently measured. This article maintains that same perspective: the narrative is of my experience of Oracle Cloud and how it compares with AWS.  To keep this brief, I will only mention the cons.

Setting Up

As with every online service, Cloud or otherwise, you need to sign up and configure an account with the provider. The Oracle Cloud account setup involved more steps than the AWS one. The telling difference though was in the time it took for the account to be available for use. Whereas the AWS account was ready to go within a day, it took about 3 working days for my Oracle account.

The second difference was in communication. AWS sent me one email with all I needed to get started; Oracle sent me about 5 emails, each with useful information. I had some difficulty logging on to the Oracle service at first, but this was because I thought, wrongly, that the 3 emails I had received contained all that I needed to log in. The 4th email was the one I needed, and with it in hand, login was easy and I could start using the services – the 5th email was for the VPN details.

Oracle Cloud Services

Having set up the account, I was now able to login and access the services. I describe my experience under four headings: user interface, service granularity, provisioning and pricing.

User Interface

I will describe the interface usability in three aspects: consistency, latency and reliability.  First is consistency.  On logging in to the Oracle Cloud, there is an icon in the top left-hand corner that brings up the dashboard.  This is similar to AWS.  However, clicking that same icon on the default, database, storage and SOA home pages results in a different list of items in the dashboard display.  This can be confusing, as users may think they have done something wrong or lost access to one of their cloud services.  The second, latency, is also a big problem with some in-page menus.  Response times for drop-down lists can be painfully slow and there is no visual indicator (hourglass) of background processing.  In extreme cases, latency becomes an issue of reliability: there are times when in-page menus simply fail to display despite several clicks and page refreshes.

Service Granularity

The area of concern is IaaS, and the two services I had issues with were compute and storage.  The choice of OS, RAM, CPU, etc. available when selecting a compute image is quite limited: mostly Oracle Linux, with RAM and CPU specifications that can only be increased in lock-step.  When creating storage, it appears that there is only one “online” storage class available – standard container.  The usage of the terms “container” and “application container” was a bit confusing in this context.  This is especially so when trying to create storage for a database that will be used by Oracle SOA Cloud.

Provisioning

Provisioning is the fulfilment of the request to create a service package (IaaS, PaaS or SaaS).  The time it took to create a database (PaaS) instance was in excess of 30 minutes.  Perhaps this was due to the time of day and high concurrency.  Nevertheless, given that I could run a script to do the same on my laptop within 10 minutes, one would expect equal or better from the cloud.  The delay is even longer with the creation of an Oracle SOA Cloud instance; this took well over 2 hours to complete.  Scripted creation of instances would be much quicker on a modest PC; for cloud services, images should provide even quicker initialisation of instances from templates.

Pricing

This could be the elephant in the room.  Whereas options were few and insufficient in IaaS, the array of pricing for almost identical PaaS offerings was long and rather confusing.  Unlike AWS, there are only two schedules: hourly or monthly.  There are no options to reserve or bid for capacity.  Finally, even the lowest prices are quite high from the perspective of an SME.  The granularity of billing needs to be reduced or the composition of IaaS and PaaS should give greater flexibility.  Either way, entry prices need to be attractive to a larger segment of the market.

Summary

A great impediment to comparison is the short trial period allowed for Oracle Cloud services.  The 30 day allowance is insufficient, except for those with a prepared plan and a dedicated resource.  Such an exercise would in itself amount to no more than benchmarking, leaving little room for gaining a real feel for the services.

We should set aside the latency issues in setup, user interface and provisioning.  These are easy problems to resolve and it is likely that Oracle will fix them very soon.  The output of provisioning for Oracle-specific PaaS and SaaS services was very good and compares favourably with AWS.  One advantage of the Oracle PaaS is the simple configuration of the requisite security to gain access for the first time.  This meant that the PaaS services were very quickly available for use without further tweaking of settings.  The shortcoming, as previously stated, is that provisioning can take quite a while.  Overall, the use of the Oracle PaaS was seamless and integration with on-premise resources was easy.  The only exception was JDeveloper, which could not integrate directly with Oracle SOA Cloud instances.

Competition

AWS has the benefit of early entry and has a far richer set of services.  But the feature set is not an issue since Oracle is not into Cloud as an end, but as a means to extend the availability of existing products and services.  However, even in the limited subset where there is an overlap, AWS provides finer granularity of services/features, better interface usability, and a much more alluring pricing model.

Oracle has fought many corporate and technology battles over the years.  The move into the Cloud space is yet another frontier, and changes will be needed in the following areas.

  • Open up the options and granularity of IaaS offerings
  • Address significant latencies in provisioning PaaS services
  • Revise the pricing model to accommodate SMEs
  • Totally refresh the flow and performance of web pages

The Cloud has arrived.  Like the Internet, it was underestimated initially, but it likewise promises to revolutionise IT.  This market will certainly benefit from competition, and I surely hope that Oracle will take up the gauntlet and offer us a compelling alternative to AWS – horizontal and vertical.

God bless!

Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 793 920 3120