
Lift & Shift to Google Cloud Part 2 – Migrating to Google Cloud

This series of blog posts shares real-life experience from an enterprise lift & shift project, moving from on-premises data centers into Google Cloud Platform (from now on referred to as GCP).

Missed part 1? Discover Google’s biggest mistake before reading any further!

The final step: migrating to GCP.

Once you’re convinced the Cloud is something to take a look at, the fun can finally start. Actually doing shit! So, RDP/SSH into your VMs, install CloudEndure (see part 3), let it sync for a few days, cut over, switch DNS, done! Or are we?

 

Taking back control

Seems like that shortcut above is not going all too well. It might work if you’re migrating a single system with only a few dependencies, but remember we’re talking about enterprise migrations, where the landscape is a lot more complex. Active Directory, SAP, Citrix, database and file servers are all closely integrated. In an ideal case, a clean, up-to-date CMDB is available, from which you can retrieve all information about your systems and their interdependencies. But possibly this is not the case, or you would like an extra source of confirmation that takes all migration-related information into consideration.

The same applied in our case: the environment resided in a co-located data center, managed by the hosting partner using their own proprietary tools. That’s great during the many years you’re running with them, since you can just sit back and have both your servers and software taken care of for you, but less so when you want to take back control. So you need a way to get a clear overview of your landscape.

Luckily, there are many tools that offer just that. With the rise of cloud, some of them even integrate with GCP, amongst others, to facilitate the inventory process and, in some cases, even the migration itself.

The solution we chose promised auto-discovery of all system dependencies and a full map of network traffic, with nothing more than the installation of some crawler-like agents on a few servers. It would know what type of applications you are running per server (e.g. based on network ports), detect clusters and show you which other systems depend on them.
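To make the port-based detection idea concrete, here is a minimal, hypothetical sketch of what such a guess could look like in Python. The actual product’s agents are proprietary and far more sophisticated; the port-to-service mapping below is an assumption for illustration only.

```python
import socket

# Hypothetical mapping of well-known ports to the services they usually indicate.
KNOWN_PORTS = {
    22: "SSH",
    389: "LDAP (Active Directory?)",
    445: "SMB (file server?)",
    1433: "Microsoft SQL Server",
    3200: "SAP dialog instance",
    3389: "RDP",
}

def detect_services(host: str, timeout: float = 0.5) -> list[str]:
    """Probe well-known TCP ports on a host and guess which services run there."""
    found = []
    for port, service in KNOWN_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(f"{port}/tcp -> {service}")
    return found

print(detect_services("10.0.0.15"))  # hypothetical on-premises server
```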

That clear overview could then be used to plan migration waves. If you selected the auto-detected “SAP environment” for a wave, it would make an intelligent suggestion based on all the system information it had gathered. The final step would then be to migrate those servers into GCP automagically.
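The underlying idea of such wave planning is a topological grouping of the dependency graph: servers whose dependencies have already been migrated (or have none) go first. A minimal sketch using Python’s standard-library graphlib, with made-up server names and dependencies:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each server lists the servers it depends on.
dependencies = {
    "sap-app-01": {"sap-db-01", "ad-dc-01"},
    "sap-db-01": {"ad-dc-01"},
    "citrix-01": {"ad-dc-01", "file-srv-01"},
    "file-srv-01": {"ad-dc-01"},
    "ad-dc-01": set(),
}

def plan_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group servers into migration waves so that dependencies move first."""
    sorter = TopologicalSorter(deps)
    sorter.prepare()
    waves = []
    while sorter.is_active():
        ready = list(sorter.get_ready())  # servers whose dependencies are already migrated
        waves.append(sorted(ready))
        sorter.done(*ready)
    return waves

for i, wave in enumerate(plan_waves(dependencies), start=1):
    print(f"Wave {i}: {', '.join(wave)}")
```

In practice you would often keep tightly coupled systems in the same wave instead of strictly ordering them, but the grouping principle is the same.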

We very much liked the sound of that. It’s a solid process to follow. Sadly, we made the wrong product choice here: everything we were promised turned out to be manual work.

So that’s what went down. A lot of hours went into inventorying the environment. With some delay, we finally had the complete picture of all systems and were able to start planning our migration waves.

How much is the fish?

For some companies, cost reduction is the first thing they think of when considering a move to the cloud. Apart from all the great things you can do with cloud once you’re there (see part 1), the agility and elasticity of the cloud can certainly be a valid basis for a business case to cut costs. Less load on your web servers during the night? Just scale them down with a managed instance group. Savings!
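As a rough illustration, resizing a managed instance group from a scheduled job could look like the sketch below, assuming the google-cloud-compute client library. Project, zone and group names are made up, and in practice a MIG autoscaler with a schedule is usually the cleaner option:

```python
from google.cloud import compute_v1  # pip install google-cloud-compute

def resize_mig(project: str, zone: str, mig_name: str, size: int) -> None:
    """Resize a managed instance group, e.g. down at night and back up in the morning."""
    client = compute_v1.InstanceGroupManagersClient()
    operation = client.resize(
        project=project,
        zone=zone,
        instance_group_manager=mig_name,
        size=size,
    )
    operation.result()  # block until the resize completes

# Hypothetical names -- run from a scheduler (e.g. Cloud Scheduler plus a small function).
resize_mig("my-project", "europe-west1-b", "web-frontend-mig", size=2)  # night-time size
```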

So having a clear view of the costs, even before the project actually starts, is a great asset. Some tools we encountered have those features built in and can even compare costs between different cloud providers. It’s definitely a quick win.
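The structure of such a comparison is simple enough to sketch yourself. The numbers below are purely illustrative placeholders, not real pricing; plug in your own inventory and rates:

```python
# Illustrative monthly on-premises costs (made-up figures).
onprem_monthly = {"hosting_fee": 12_000, "hardware_amortisation": 4_500, "licenses": 3_000}

# Per-group cloud estimate: (hourly rate for the group) x (hours it actually runs per month).
cloud_groups = [
    {"name": "web tier (10 VMs)", "hourly_rate": 1.00, "hours": 730},   # always on
    {"name": "app tier (10 VMs)", "hourly_rate": 2.00, "hours": 400},   # scaled down at night
    {"name": "batch tier (5 VMs)", "hourly_rate": 1.90, "hours": 160},  # batch windows only
]

onprem_total = sum(onprem_monthly.values())
cloud_total = sum(g["hourly_rate"] * g["hours"] for g in cloud_groups)

print(f"On-premises: EUR {onprem_total:,.0f}/month")
print(f"Cloud (right-sized): EUR {cloud_total:,.0f}/month")
print(f"Estimated saving: EUR {onprem_total - cloud_total:,.0f}/month")
```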

Can granny come too?

As much as we love and dearly need our grandparents, we don’t often take them to a club night. Similarly, in large organisations, some older systems are an indispensable backbone of the company and may have been there for decades. Yes, I’m talking to you, AS/400. As these systems can’t run on GCP (at the time of writing), you want to find them a decent retirement home, and a solid VPN connection to visit granny as often as needed.
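Once the VPN is in place, a simple reachability check against the legacy system makes for a useful smoke test. A minimal sketch (hostnames are hypothetical; port 23 is the classic 5250 telnet session, 446 is DRDA database access):

```python
import socket

# Hypothetical endpoints: the AS/400 stays on-premises, reachable over the VPN.
LEGACY_ENDPOINTS = [
    ("as400.corp.example.com", 23),   # 5250 telnet
    ("as400.corp.example.com", 446),  # DRDA database access
]

def check_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection over the VPN tunnel succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in LEGACY_ENDPOINTS:
    status = "OK" if check_reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```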

Lessons Learned

For each part of this series, I’ll also summarize some lessons learned. While most of them might sound too straightforward to mention at all, in the end there are so many things to consider that it won’t hurt to have a small list of evident things to check.

  1. Have a clear inventory of your systems before you start looking at anything else, or a proven way of getting one.
  2. Thoroughly research and test the tools you are given; don’t just go on assumptions when you hear words like ‘auto-discovery’. Ask for a reference customer who has already used the tool.
  3. Once you have a clear inventory/CMDB in place, create a compatibility overview to make sure your environment is a good candidate for the cloud (a minimal sketch of such a check follows this list).
  4. Once you have a solid idea of which servers are ready for the cloud or can be replatformed in GCP, create a solid cost estimate based on your own assessment and/or use a dedicated tool.
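As referenced in lesson 3, a compatibility overview can start out very simple. A minimal sketch below; the unsupported-OS list is an assumption for illustration, not an official GCP support matrix, so verify against current documentation:

```python
# Operating systems that cannot simply be lifted to GCP's x86 VMs (illustrative list).
UNSUPPORTED_OS = {"OS/400", "AIX", "HP-UX", "Solaris (SPARC)"}

cmdb_export = [  # hypothetical rows from your inventory tool
    {"host": "ad-dc-01", "os": "Windows Server 2016"},
    {"host": "sap-db-01", "os": "SUSE Linux Enterprise 12"},
    {"host": "as400-01", "os": "OS/400"},
]

for server in cmdb_export:
    if server["os"] in UNSUPPORTED_OS:
        verdict = "retain on-premises / find a retirement home"
    else:
        verdict = "candidate for lift & shift"
    print(f"{server['host']}: {server['os']} -> {verdict}")
```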

Ready? Next up, we’ll actually be migrating some VMs!