Local development for Kubernetes done right
2024.02.01 | Daniel Roth
When operating within a company that leverages modern cloud-native technologies, it's not uncommon to encounter a shared "Development" Kubernetes cluster. This cluster serves as a testing ground for numerous developers, each granted full access or extensive privileges. Intended for testing applications, deployment scripts (such as Helm charts), and various integrations, this shared space quickly resembles a digital version of Pandora's box.
The shared development cluster problem
In practice, this cluster becomes a melting pot for pipelines deploying development branches or even individual feature branches. Combined with engineers experimenting with diverse technologies, the result is an expansive Kubernetes cluster teeming with namespaces. However, a staggering 90% of the pods often find themselves stuck in a perpetual state of "CrashLoopBackOff", leading to the demise of the cluster every other day. Unfortunately, this setup tends to be a breeding ground for productivity issues across all teams.
Several factors contribute to this predicament:
- Sandbox Mentality: With broad access, teams often treat the cluster as a sandbox environment, leading to clutter and chaos.
- Testing Challenges: As infrastructure-as-code becomes integral, testing becomes crucial. Tools like Kuttl exist for a reason, emphasizing the importance of testing Kubernetes configurations on an engineer's local machine.
- Resource Limitations: Unchecked resource consumption becomes a significant challenge, and resource limits turn into a constant source of friction.
Engineers unfamiliar with Kubernetes face a steep learning curve and a frustrating experience. This frustration can hinder motivation, discouraging them from properly delving into Kubernetes.
Consider also the challenges faced by operations teams tasked with maintaining this sprawling cluster. Continuous cleanup of broken deployments becomes a routine, with ops personnel constantly seeking permission to delete obsolete resources. Amidst this, they grapple with the cacophony of alerts, gradually becoming immune to the noise within their office.
Teach to fish, feed for a lifetime
In tackling these challenges, we advocate for the use of local development clusters that are not only easy to set up but, most importantly, easy to understand. We firmly believe that providing engineers with the right tools and knowledge ensures a smoother experience, fostering collaboration and productivity for both developers and operations teams.
Rather than handing out ready-made solutions, our philosophy is to teach the fundamentals. Kubernetes is a potent toolset, built on well-thought-out core concepts. Don't hide these core concepts from developers in an attempt to simplify their lives, and avoid the temptation to automate away every potential hiccup, as this tends to produce a complex and unwieldy setup. Instead, invest in your team's know-how and make sure everyone understands how to manage a straightforward, default Kubernetes cluster.
Rather than maintaining a large development cluster, consider setting up a local Kubernetes environment that includes everything your teams need to kickstart their work. Provide them with the opportunity and responsibility to run tests, deploy applications, and conduct end-to-end testing on their own machines.
Encourage engineers to take ownership of their clusters and machines. Foster their curiosity by allowing them to experiment, break their clusters, and learn to fix issues independently. We strongly believe that Kubernetes is a powerful, lovable, and remarkable technology when engineers have the chance to enjoy it.
To create a more positive work environment, minimize the noise of failed pipelines and alerts from broken development clusters. Let teams find joy and have fun while working on clusters and pipelines, aiming for green lights instead of red ones.
Set up a local dev Kubernetes environment
We adhere to the best practice of ensuring that the local development environment closely mirrors the CI/CD, staging, and production environments. Our primary objective is to foster development using real and configurable URLs, with TLS/SSL-enabled data traffic as the default, even on local machines. It's time to bid farewell to the era of relying solely on http://localhost.
Certainly, there are challenges along the way, such as:
- Certificate Handling: Managing certificates poses a challenge, but there are various approaches to tackle this issue effectively.
- Local DNS: Local DNS setup is crucial for ensuring a seamless transition to real and configurable URLs in the development environment. Multiple strategies exist to address this challenge.
- Local Registries: The need for local registries is recognized, and there are diverse solutions available to manage this aspect efficiently.
Addressing these challenges is no small feat, and we are eager to present an effective strategy to handle them. By surmounting these hurdles, we not only establish a local development environment that closely aligns with the production setup but also elevate the development experience with authentic and secure configurations. This strategic approach lays the groundwork for seamless transitions between different development stages, fostering a robust and reliable development workflow.
DenktMit eG local-dev-cluster to the rescue
We prepared the local-dev-cluster as a baseline to provide you with such a local development environment. It includes local certificate management with cert-manager, ingress with Traefik, Keycloak, and Kafka.
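If you want to assemble a comparable baseline yourself, a minimal sketch could look like the following, assuming kind and Helm are installed; the cluster name and chart setup are illustrative and not taken from local-dev-cluster itself.

```bash
# Illustrative bootstrap of a local baseline cluster (kind and Helm assumed installed)
kind create cluster --name local-dev

# cert-manager for local certificate management
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# Traefik as the ingress controller
helm repo add traefik https://traefik.github.io/charts
helm install traefik traefik/traefik --namespace traefik --create-namespace
```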
Certificate handling
There are several ways to serve your applications with HTTPS on your local machine. With mkcert you can create a local CA and install it into your trust store with just a few commands. You can then use cert-manager with this already trusted CA to issue certificates for your ingresses. Alternatively, you can create a valid wildcard certificate using certbot with Let's Encrypt or ZeroSSL and reference it in your ingresses. ZeroSSL even supports one-year wildcard certificates, so you don't have to renew them very often.
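A rough sketch of the mkcert plus cert-manager route could look like this; the secret and issuer names are placeholders we chose for illustration, and the CA secret has to live in cert-manager's namespace.

```bash
# Create a local CA and add it to the system trust store
mkcert -install

# Expose the mkcert CA to cert-manager as a TLS secret (names are illustrative)
kubectl -n cert-manager create secret tls mkcert-ca \
  --cert="$(mkcert -CAROOT)/rootCA.pem" \
  --key="$(mkcert -CAROOT)/rootCA-key.pem"

# A CA ClusterIssuer that signs ingress certificates with the trusted local CA
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-ca
spec:
  ca:
    secretName: mkcert-ca
EOF
```

Ingresses can then request certificates from this issuer via the cert-manager.io/cluster-issuer annotation, and the resulting certificates are trusted locally because the mkcert CA is already in your trust store.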
Local DNS
When testing your deployments locally, such as Helm charts or Argo CD apps, you might also want to verify that your application is reachable through its ingress. There are a few ways to do this. Of course, you can put your ingress hostnames into your /etc/hosts, but this can get very messy once you have lots of ingresses, because you can't configure wildcards there. You can also use an actual DNS zone and create an A record that points to 127.0.0.1 (or whatever your local ingress IP might be). Another option is a local dnsmasq DNS server, which is a little more complex to set up.
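As a small sketch of the dnsmasq option, a single wildcard entry can point a whole domain at your local ingress; the domain, config file path, and service manager below are assumptions you would adapt to your machine.

```bash
# Resolve an example wildcard domain (and all its subdomains) to the local ingress IP
echo 'address=/local.dev.example/127.0.0.1' | sudo tee /etc/dnsmasq.d/local-dev.conf
sudo systemctl restart dnsmasq
```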
Local registries
Local Kubernetes tools like kind or minikube can load your container images directly into the cluster via their CLIs. But there are several use cases where you want an actual local container registry that you can build and push your containers to.
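Both routes might look roughly like this; the image, cluster, and registry names are placeholders, and wiring a local registry into kind's containerd configuration takes an extra step described in the kind documentation.

```bash
# Load a locally built image straight into the cluster
kind load docker-image myapp:dev --name local-dev   # kind
minikube image load myapp:dev                        # minikube

# Or run an actual local registry and push into it
docker run -d --restart=always -p 127.0.0.1:5000:5000 --name local-registry registry:2
docker tag myapp:dev localhost:5000/myapp:dev
docker push localhost:5000/myapp:dev
```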
Wrap-Up
We outlined the pitfalls of shared development Kubernetes clusters and advocated for a shift towards local development environments. The proposed approach encourages alignment with production setups, teaches core Kubernetes concepts, and empowers your engineers. It addresses challenges like certificate handling, local DNS, and local registries, offering practical solutions. The suggested local-dev-cluster serves as a working example and foundational tool for creating an effective and enjoyable local development experience.