
Getting to Cloud: What Are the Key Factors?

Lessons learned from migrating a major bank’s entire application portfolio to the cloud.

With the ongoing stampede to public cloud platforms, it is worth taking a closer look at some of the factors behind such rapid growth. Amazon, Azure, Google, IBM, and a host of other public cloud services saw continued strong growth in 2018, up 21% to $175B, extending a long run of robust revenue growth for the industry.

It is worth noting, though, that both more traditional SaaS and private cloud implementations are also expected to grow at nearly 30% rates for the next decade, essentially matching or even exceeding public cloud infrastructure growth rates over the same period. The industry with the highest adoption of both private and public cloud is financial services, where adoption (usage) rates above 50% are common and rates close to 100% are occurring, versus a median rate of 19% across all industries.

In my recent experience leading IT and Operations for Danske Bank (a large Nordic bank), we completed a four-year infrastructure transformation program that migrated the entire application portfolio from proprietary dedicated server farms in five obsolete data centers to a modern private cloud environment in two data centers. Of course, the mainframe complex was migrated and updated as well, and we incorporated some SaaS and public cloud usage too. The migration effort eliminated nearly all of the infrastructure-layer technical debt and reduced production incidents. These are truly remarkable results that now enable Danske Bank to deliver superior service to our customers, while reducing infrastructure cost and improving time to market. But how do you get to private or multi-cloud successfully?


Quality over cost

First, it is critical to view cloud as a quality solution rather than a cost solution. Any major infrastructure re-platforming should be judged primarily on improved capabilities, increased quality, and reduced risk. Infrastructure re-platforming done primarily for cost reasons rarely delivers, especially considering that most corporations can achieve far better cost savings through other efforts.

Second, the effort must be comprehensive. If you migrate only a portion of your server estate and allow myriad legacy systems to remain, then you have not reduced your complexity (in fact, you may have actually increased it). This complexity, coupled with dated systems, is a major contributor to the defects and issues that reduce security, availability, and performance. Your architecture should incorporate a proper migration of all systems to the appropriate “flavor” of cloud platform.
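To make that comprehensive scope concrete, here is a minimal Python sketch of portfolio triage, assuming every application has already been assigned a target platform. The category names and sample portfolio are illustrative assumptions, not the bank’s actual plan.

```python
# Hypothetical portfolio triage: every system gets an explicit target
# "flavor" so nothing is left stranded in the legacy estate.
TARGETS = ("private-cloud", "public-cloud", "saas", "mainframe")

def triage(portfolio):
    """Group applications by designated target platform, flagging any
    system left without a migration destination."""
    plan = {target: [] for target in TARGETS}
    for app, target in portfolio.items():
        if target not in TARGETS:
            raise ValueError(f"{app} has no valid target platform assigned")
        plan[target].append(app)
    return plan

# Example (illustrative application names):
print(triage({"core-banking": "mainframe", "crm": "saas", "payments": "private-cloud"}))
```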

Standardization matters

Today’s legacy environments are often singular, meaning nearly every server is a custom implementation with slightly, or even widely, varying configurations and platforms, each requiring custom maintenance and expert care to function and stay current. By architecting a comprehensive set of templates, perhaps 20 or even fewer, the administration and maintenance complexity is reduced dramatically.
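As a rough illustration of what such a template catalog might look like in code, here is a hedged Python sketch; the template names, sizes, and middleware stacks are invented for the example.

```python
# A minimal sketch of an approved server-template catalog.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerTemplate:
    name: str
    os_build: str       # one standardized, patched OS image
    cpu_cores: int
    memory_gb: int
    middleware: tuple   # approved middleware stack, if any

# A small fixed catalog; the full set might be 20 templates or fewer.
CATALOG = {
    "web-small":  ServerTemplate("web-small",  "rhel8-std", 2, 8,  ("nginx",)),
    "app-medium": ServerTemplate("app-medium", "rhel8-std", 4, 16, ("tomcat",)),
    "db-large":   ServerTemplate("db-large",   "rhel8-std", 8, 64, ("postgresql",)),
}

def provision(template_name):
    """Provisioning is only allowed against an approved template."""
    if template_name not in CATALOG:
        raise ValueError(f"{template_name!r} is not an approved template")
    return CATALOG[template_name]
```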

My experience is that most legacy applications can be easily ported to a suitable server template in your private cloud. The remainder can be tougher, but it is important to work through them and deliver them on the new platform. The definition of the templates, and then the initial deliveries with proper middleware and database stacks, is often the toughest part of the engineering. It should be done jointly by your infrastructure engineers and your application teams.

A third critical principle we followed was to minimize exceptions. This meant we would not simply move old servers to the new centers, but instead set up a modern and secure “enclave” private cloud and then migrate all the old to the new. This enabled a far more secure network and a level of data protection that could not be built into, or overlaid on, a typical legacy environment. Further, all applications would migrate to the new server patterns. The exceptions would be sunset with clear timelines.
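The minimize-exceptions principle can be sketched the same way: map every legacy application to an approved template, and give anything that cannot be mapped an explicit sunset date rather than an open-ended waiver. The function and the date below are illustrative assumptions, not the bank’s actual process.

```python
# Hypothetical migration planning: split the portfolio into migrations
# and time-boxed exceptions, so no legacy system lingers indefinitely.
from datetime import date

def plan_migration(apps, approved_templates, sunset=date(2020, 12, 31)):
    """Return (migrations, exceptions) from app -> template assignments."""
    migrations, exceptions = [], []
    for app, template in apps.items():
        if template in approved_templates:
            migrations.append((app, template))
        else:
            exceptions.append((app, sunset))  # sunset with a clear timeline
    return migrations, exceptions

migrations, exceptions = plan_migration(
    {"crm": "app-medium", "legacy-batch": None},
    approved_templates={"web-small", "app-medium", "db-large"},
)
```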

Last, all new applications had to be built using approved cloud design templates from the start. Of course, this requires proper sponsorship and discipline to execute, but the reward is then greater. With far fewer exceptions, design is standardized, maintenance and administration can be made common and automated, and security gaps and patch administration become a far smaller task and problem. By minimizing exceptions, you greatly increase standardization, which in turn increases quality and reduces effort, as well as enabling automation and speed.
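Enforcing approved templates from the start can be as simple as a policy gate in the provisioning or build pipeline. The sketch below assumes build requests carry a template name; it illustrates the discipline described above, not Danske Bank’s actual tooling.

```python
# Illustrative policy gate: reject any new build that is not on an
# approved template, keeping the exception count near zero.
APPROVED = {"web-small", "app-medium", "db-large"}

def check_build_request(request):
    """Raise if a new application build names an unapproved template."""
    template = request.get("template")
    if template not in APPROVED:
        raise PermissionError(f"build rejected: {template!r} is not approved")

check_build_request({"app": "payments-api", "template": "app-medium"})  # passes
```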

The right stuff

The final factor for success is having the right engineers. Properly designing a private and multi-cloud environment requires a robust, multi-disciplined team of engineers. There is a great deal of excellent material and industry experience that improves the jumping-off point, but the cloud still must be adapted to your environment, and it still must be built.

While it was certainly a lengthy and complex process, we were ultimately successful at Danske Bank, and the organization is now reaping the benefits of a fully modernized cloud environment with rapid server implementation times and lower long-term costs. In fact, we benchmarked our private cloud environment and it proved to be 20% to 70% less expensive than comparable commercial offerings (including AWS and Azure). It is a remarkable achievement indeed, but more important is that the new Danske Cloud2 environment provides the solid platform for the further digitalization of the bank.