Cloud Hosting Services And The Future Of Lock-in

For a software product, finding product–market fit, getting to market, and then scaling to meet business demands all need to happen quickly. Time is still money, and we now think in terms of dozens of deploys per day. Optimizing your time and staying focused on your core product is essential for your business and your customers.

With that increased need for moving fast, many teams started using the cloud to get a large infrastructure off the ground quickly. But those cloud services, and especially the latest iteration of services built on top of the cloud, can lead to a level of lock-in that many teams aren’t looking for.

Cloud revolution, service evolution
Over the past few years, the cloud and virtualization have revolutionized how we build applications and the speed with which we can build them. These innovations have leveled the playing field and enabled small development teams to build massive applications; however, they have also pressured those teams to build and ship products more quickly than their competitors.

Using the cloud and deploying applications on immutable servers increased overall performance and drastically reduced the time many teams spend maintaining infrastructure. Because systems scale as needed, we can stop thinking about the individual server and instead focus on the experience that the collection of servers provides to our customers. The individual server is irrelevant: it's short-lived, specialized for one task, and replaced constantly.

We’re now entering the next evolutionary phase of infrastructure: the service. While services to host our applications, like Heroku and App Engine, have been around for a while, they’ve now started to hide more of the complexities of running infrastructure. For example, the recently launched AWS Lambda and EC2 Container Service, as well as lightweight, VM-like containers such as Docker, hide many of the complexities of running background tasks.

These services promise to make building a large infrastructure out of micro-services much simpler. Getting those micro-services up and running without having to think about the underlying infrastructure, or the communication between the services, dramatically decreases the time needed to build your systems. Furthermore, it limits the code you have to write to the absolute minimum, keeping you focused on building your product rather than on building and maintaining infrastructure.

Lock-in

The biggest concern I’ve heard when discussing this topic is potential lock-in to a specific provider. I see three different lock-in scenarios in this kind of infrastructure.

Moving infrastructure takes a base effort

Whenever you change providers, a base amount of migration work is unavoidable, and that effort alone ties you to your current provider. Even with an infrastructure built on standard tools and frameworks, you’ll have to transfer your data, change DNS, and test the new setup extensively. Better tooling can reduce this effort, but it can’t remove it.

Code-level lock-in

Google App Engine is an example of deep code-level lock-in. It requires you to build your application in a very specific way, tailored to its system. This can give you major advantages because your code is tightly integrated with the underlying infrastructure, but for many teams this deep lock-in is too risky.

Architectural lock-in

An example of a service with minimal code-level lock-in but major architectural lock-in is AWS Lambda. In its first iteration, you write Node.js functions that are invoked either through the API or in response to specific events in S3, Kinesis, or DynamoDB.

For any sufficiently complex infrastructure, this can lead to dozens or hundreds of very small functions, none of which are complex by themselves or carry major lock-in at the code level. But you can’t simply take those Node functions and run them on another server or hosting provider; you would need to build your own event and execution system around them, which means high architectural lock-in.
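
To make that concrete, here is a minimal sketch of what one such function might look like, following Lambda’s first-generation Node.js handler interface; the S3 event handling and the logging are purely illustrative. The function body is ordinary Node.js, but the event format it receives and the context callbacks it uses to report completion only exist inside Lambda.

    // Sketch of a first-generation Lambda function reacting to S3 events.
    // The handler signature and the context.succeed/context.fail callbacks
    // are Lambda-specific; the body itself is plain Node.js.
    exports.handler = function (event, context) {
      // S3 delivers one or more records describing the objects that changed.
      event.Records.forEach(function (record) {
        var bucket = record.s3.bucket.name;
        var key = record.s3.object.key;
        console.log('New object uploaded: ' + bucket + '/' + key);
      });

      // Signal successful completion back to Lambda.
      context.succeed();
    };

Nothing in this function is hard to port as code, but the surrounding machinery that delivers the events, runs the function, and scales it is entirely Amazon’s.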

On the plus side, there is now a lot of infrastructure we simply don’t have to deal with anymore. Events are fired somewhere in your infrastructure, and your functions are executed and scaled automatically.

Heroku, AWS, and other cloud providers have seen the writing on the wall and are decreasing code-level lock-in while providing new services that create architectural lock-in.

It’s up to every team to decide which of these lock-in scenarios is an acceptable trade-off. A micro-service-oriented architecture built on technology you can run with a variety of providers (e.g. frameworks like Rails or Node) can offset some of that trade-off, as in the sketch below. You can build on top of these services in the beginning and move parts of your infrastructure elsewhere for more control later. But that does require a different approach to building infrastructure than we have today.
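
As a rough illustration of that portability, the following hypothetical Node service relies only on a generic framework (Express) and an environment-provided port, so the same code can run on Heroku, on a plain VM, or inside a container without changes; the /status endpoint is made up for the example.

    // Sketch of a provider-portable micro-service built with Express.
    // Nothing here depends on a specific hosting provider.
    var express = require('express');
    var app = express();

    // Hypothetical endpoint; replace with your service's real API.
    app.get('/status', function (req, res) {
      res.json({ status: 'ok' });
    });

    // Platforms like Heroku inject the port through the PORT environment
    // variable; on your own servers the fallback port is used instead.
    var port = process.env.PORT || 3000;
    app.listen(port, function () {
      console.log('Service listening on port ' + port);
    });

Keeping the provider-specific pieces, like event wiring and deployment configuration, at the edges of a service like this is what makes it realistic to start on a managed platform and move later.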