
Although the word “Scalability” has probably been overused in the last decade, there is real meaning behind it. Rather than its business sense, this post focuses on what Scalability means from an IT Service and IT Operations Management perspective, where it is the most important metric of Capacity Planning.

Scaling in the Cloud is easy…

Scalability is your ability to adapt your hosting infrastructure capacity to a growing or shrinking user load. It’s the flexibility to meet demand, a flexibility that Cloud technologies have significantly enhanced.

Within minutes, you can now scale your infrastructure vertically (up and down) and tailor your servers’ power to your needs, or scale horizontally (out and in) by adding or removing servers in your computing fleet. You can even build automated strategies around these on-demand scaling capabilities with technologies such as Auto Scaling Groups.
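To make the idea concrete, here is a minimal sketch of the kind of target-tracking policy an Auto Scaling Group automates: given the fleet’s current size and average CPU utilization, it computes how many servers would bring utilization back toward a target. All names and thresholds are illustrative assumptions, not any provider’s actual API.

```python
def desired_capacity(current_servers, avg_cpu_percent,
                     target_cpu=50.0, min_servers=2, max_servers=20):
    """Return the server count needed to bring average CPU
    utilization close to the target (target-tracking style).

    Illustrative sketch only: real autoscalers also apply
    cooldowns, warm-up periods and smoothed metrics.
    """
    if avg_cpu_percent <= 0:
        return min_servers
    # Scale the fleet proportionally to observed load vs. target.
    needed = current_servers * (avg_cpu_percent / target_cpu)
    needed = int(needed + 0.999)  # round up: prefer headroom over saturation
    # Clamp to the configured fleet bounds.
    return max(min_servers, min(max_servers, needed))
```

For example, a 4-server fleet running at 80% average CPU against a 50% target would be scaled out to 7 servers, while the same fleet at 30% would be scaled in to 3.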

The days of filling server bays are over.

Whatever your scaling strategy, and no matter your provider, the technology is there, widely available and seemingly affordable (at least in the beginning).

…But planning for capacity is hard

Does your company have the processes to leverage the ease of scaling up, down, out and in to meet the capacity requirements of its growth? When do you need to schedule a scaling operation? What exactly are you going to change in your infrastructure, and how much will it cost?

Nailing down your Capacity Planning strategy is a long, costly process that involves predicting usage, undergoing repeated cycles of performance tests across multiple hypothetical infrastructure configurations, and ultimately scheduling changes for your servers and microservices based on what you can conclude from test results.
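The sizing step at the end of that process can be sketched in a few lines: take a forecast peak request rate, the per-server throughput measured in your performance tests, and a safety margin, and derive a fleet size. The function name and the headroom figure are assumptions for illustration; real Capacity Planning layers far more variables on top of this.

```python
import math

def servers_needed(peak_rps, rps_per_server, headroom=0.3):
    """Estimate fleet size from a forecast peak request rate and the
    per-server throughput measured in performance tests.

    'headroom' reserves a fraction of each server's capacity for
    unexpected spikes (30% here is an illustrative assumption).
    """
    usable = rps_per_server * (1 - headroom)
    # Round up: a fractional server still requires a whole one.
    return max(1, math.ceil(peak_rps / usable))
```

With a forecast peak of 1,200 requests per second and servers that each sustained 200 requests per second in testing, this estimate calls for 9 servers rather than the naive 6, the difference being the reserved headroom.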

So for many engineering teams around the world, it’s easier to simply throw hardware at their Capacity concerns. To some extent, overscaling (also known as over-provisioning) has always been a wasteful way to work around the shortcomings of Capacity Planning.

But with unused servers costing companies $30 billion a year and counting, such a practice is far from sustainable. In 2020, datacenters around the world will match the carbon emissions of France. So it doesn’t just cost us money; it is costing us our future too.

Management needs Predictable Performance

Even if you somehow manage to meet demand with wasteful scaling practices, Capacity insights are used by all kinds of collaborators in your organization.

It is mission-critical to ensure your ability to collect and share Capacity insights in a language everybody can understand, because they’re an important component of the success of your strategic initiatives.

Nobody wants their political news website to go down on election day, a product launch to fail because of a misconfigured load balancer, or to find themselves unable to handle registrations for the month.

What we’re trying to do about it

Relying on insights that are so hard to obtain in order to make informed, strategic decisions is a recipe for disaster. But instead of dropping Capacity Planning altogether, we decided to reinvent it. Stacktical makes Capacity Planning effortless by reducing its requirements to a minimum.

It streamlines your Capacity Planning efforts by doing most of the heavy lifting in your stead, using predictive technologies so that you get the most out of your servers and microservices, without the hassle of never-ending tests and meetings.

All with very minimal performance testing effort, almost no server provisioning at all and the smallest possible operational footprint. It also provides you with easy ways to infuse Capacity knowledge across your entire company so everybody is ready to rise to the occasion.

But will it scale?

A Blog by Stacktical: the Scalability Testing Automation Platform powered by Predictive Technologies
