A microservice architecture lets you structure an application as a collection of small, loosely coupled services.
Their small size and independence enable development teams to produce code that is focused on doing one job, and doing it right.
That makes designing, improving and maintaining software more predictable, as the company scales its infrastructure and its human resources.
Microservices done right give you extra modularity and better control on your execution as a team, as a company.
While they make software more testable, operable and scalable, microservices also introduce their fair share of challenges when it comes to developing code locally.
An architecture that separates concerns across multiple services makes validating a change considerably more difficult. You either have to replicate the architecture on the developer’s machine, or build and deploy each and every change through your Continuous Integration pipeline.
Neither seems like an entirely viable option.
Should I install Kubernetes locally then?
As long as your service count is low enough, replicating a local microservice architecture on your machine might be worth it.
We’d venture to say that it’s a pretty common onboarding step for newcomers in today’s software development teams.
Depending on your organization’s maturity, especially regarding documentation practices like employee onboarding, this installation process could feel like chaining
git clone commands across a handful of small monolithic projects.
Obviously, while developing, you’d have to constantly ensure that repositories are in the right state (branching), that services are able to properly communicate with each other (resolution) and that you have relevant data to play with (fixtures).
In that case, it would probably make sense to leverage the service orchestration and resolution capabilities of Kubernetes.
Minikube is a tool that makes it easy to run Kubernetes locally.
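For reference, getting a local cluster up with Minikube only takes a few commands. The manifest paths below are hypothetical placeholders for your own service definitions:

```shell
# Start a single-node Kubernetes cluster on your machine
minikube start

# Point kubectl at the Minikube cluster
kubectl config use-context minikube

# Deploy your services (k8s/*.yaml are placeholder manifests)
kubectl apply -f k8s/frontend.yaml
kubectl apply -f k8s/backend.yaml

# Verify that the pods are running
kubectl get pods
```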
Supercharge your local development lifecycle with Telepresence
If you want to be future-proof and avoid spending time on replication and installation altogether, Telepresence could prove an excellent solution.
Telepresence enables you to transparently access other microservices in an OpenShift and Kubernetes cluster (including a Minikube one).
By transparently proxying network connections across your microservice architecture, it gives your local workspace all the benefits that a regular pod in a Kubernetes cluster would have.
No need to run an entire architecture anymore
Let’s see how Telepresence makes it super fast and easy to work amongst existing Kubernetes microservices.
Here, we are using our own Stacktical staging Kubernetes cluster for demonstration purposes.
First, let’s install Telepresence
Nothing fancy, it’s as easy as running the following commands:
brew cask install osxfuse
brew install datawire/blackbird/telepresence
You can find instructions for your own operating system at https://www.telepresence.io/reference/install.
Then let’s list Kubernetes services
Now that Telepresence has been installed, we need information about our Kubernetes services:
kubectl get service frontend backend db
Our objective is to be able to communicate with the
backend service on port 8888 from our local machine.
We don’t want to install or manage our API locally, and we want to use our existing staging Stacktical project data without relying on fixtures.
Telepresence will help us with just that.
Swapping the remote frontend service with our local one
Thanks to the handy
telepresence --swap-deployment command, and for the entire duration of our session, our local machine can transparently communicate with the Stacktical Kubernetes staging cluster with minimal effort:
telepresence --swap-deployment frontend --expose 8080:80 --run-shell
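Inside the shell that Telepresence opens, cluster DNS names resolve as if you were a pod, and traffic hitting the remote frontend Service on port 80 is forwarded to local port 8080. From there you just start your local frontend; the exact command depends on your stack (a hypothetical npm setup is shown here):

```shell
# We are now inside the Telepresence session shell.
# Start the local frontend on the exposed port
# (npm dev server is a hypothetical example)
npm run dev -- --port 8080
```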
Whenever we call the remote
backend service, it’ll answer back to our local frontend application.
This two-way proxying system underpins Telepresence’s ability to let you replace any microservice in your architecture with its local counterpart.
Now let’s actually try talking with the backend service:
curl -k https://backend:8888/swagger
I can now magically display the remote
backend Swagger definition from my local machine.
As a frontend developer, this could be a useful way to access always up-to-date API documentation for your project.
Getting fancy with realtime integration of dependencies
Being able to effortlessly work amongst existing microservices and bypass CI to make changes is great.
But you’re part of a team: your code still needs to be merged and interact with the code of your peers at some point.
So we’ve been wondering: What happens if two developers use Telepresence at the same time, on the same Kubernetes cluster?
Let’s verify just that with a simple Request versus Response scenario.
Developer A works on the frontend service of Stacktical using Telepresence. He tries to sign up from the application.
As expected, Telepresence proxies the signup request to Kubernetes. The
[email protected] email address already exists in the staging database, and Developer A is alerted accordingly.
Developer B works on the backend service of Stacktical.
The signup response sent to Developer A has actually been provided by the backend of Developer B!
Developer A and B can communicate across Kubernetes using Telepresence.
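Concretely, the two sessions are just independent swap-deployment invocations against the same cluster; something along these lines (ports are illustrative):

```shell
# Developer A swaps the frontend deployment,
# routing its traffic to local port 8080
telepresence --swap-deployment frontend --expose 8080:80 --run-shell

# Developer B, on another machine, swaps the backend deployment,
# serving cluster requests on port 8888 from a local process
telepresence --swap-deployment backend --expose 8888 --run-shell
```

While both sessions are active, requests from Developer A’s local frontend traverse the cluster and land on Developer B’s local backend.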
One can imagine how handling frontend and backend dependencies this way could increase the team’s overall velocity for the ongoing sprint.
Installing your project locally as a developer has always been tedious, even more so with a microservice architecture.
But tools like Telepresence make it easy to contribute to existing, Kubernetes-powered projects and re-imagine how we collaborate to ship code faster than ever before.
It’s not just a matter of shipping scalable services. It’s a matter of improving the scalability of your team as well.
For more information about Telepresence, go to https://www.telepresence.io