Kubernetes Secrets: Your Complete Guide to Getting Started

Kubernetes has changed the way we deploy software. And there’s more to Kubernetes than meets the eye. That’s right; I’m talking about Kubernetes secrets. But let’s scale back for a moment here.

Over recent years, containerization has taken off on the back of the microservices and agile trends. Teams need to deploy new code quickly and reliably, so the industry needed more advanced tooling to handle the growing demands of container management and deployment.

Kubernetes, an open-source tool originally developed by Google, helps you meet these automation needs. That's why it's so popular among startups looking to improve their time to market and solve scalability problems.

In this post, you’ll get an introduction to Kubernetes and the terminology that goes with it. Besides that, you’ll learn five Kubernetes secrets to level up your skills.

What Is Kubernetes?

Before we dive into Kubernetes secrets, let’s review what Kubernetes is.

As mentioned in the introduction, Google originally developed Kubernetes. However, the company has open-sourced it, which has resulted in an active and supportive community moving the project forward.

You can use Kubernetes for almost any aspect of running containers, which makes it ideal for scaling services up and down. Kubernetes distributes load across multiple containers, and when the load increases, it can automatically spin up more containers to handle it.

Besides that, it automates deployments and can automatically restart failed containers. Kubernetes has also grown into a modular platform, which means you can use it for almost any deployment architecture.

Is there anything not to love about Kubernetes? Well, it comes with a pretty steep learning curve. However, when you compare this learning curve to that of other container orchestration tools, it’s still easier to learn than tools such as Docker Swarm or Apache Mesos.

Case Study: Pokemon Go

You may remember the hype surrounding Pokemon Go. It seemed as if every kid on the street played that game.

However, Niantic, the creators of Pokemon Go, didn't expect this massive hype. Say they estimated a baseline traffic level of 1X and planned for a worst case of 5X. The actual traffic reached roughly 50X, creating a massive need for scalability.

To solve the problem, Niantic turned to Google Container Engine (GKE, now Google Kubernetes Engine), which is powered by Kubernetes. This allowed them to scale to the proportions they needed.

Other projects that found their scaling relief in Kubernetes include:

  • Airbnb
  • The New York Times
  • Pinterest
  • Tinder

Next, we need to understand basic Kubernetes terminology to dive into its secrets.

An Introduction to Kubernetes Terminology

The list below includes the most important terminology you’ll need to understand Kubernetes: pods, nodes, clusters, and deployments. Let’s start with understanding the meaning of pods.

Pods

A pod holds one or multiple containers that share resources. This means the containers share their network and storage. This setup is ideal for running tightly coupled services that need to communicate or share data.

To give an example, two services can communicate over a localhost connection as they share the same network. This means a Node.js service that requires data to be cached can be deployed with a Redis container within the same pod.

To summarize, pods allow developers to deploy a complete service at once, with everything it needs. The goal is to let the services inside operate and communicate as if they weren't running in containers at all. This makes it much easier to run any kind of application in a containerized way, even if it wasn't specifically designed for that purpose.
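
To make that concrete, here's a minimal sketch of a pod manifest for the Node.js-plus-Redis example above. The image name my-node-app:1.0 is a placeholder; the point is that the web container can reach Redis at localhost:6379 because both containers share the pod's network.

    apiVersion: v1
    kind: Pod
    metadata:
      name: node-with-cache
    spec:
      containers:
        - name: web                  # hypothetical Node.js service
          image: my-node-app:1.0     # placeholder image name
          ports:
            - containerPort: 3000
        - name: cache                # Redis sidecar in the same pod
          image: redis:7
          ports:
            - containerPort: 6379    # reachable from "web" at localhost:6379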

Nodes

Second, a node is a machine that's ready to run your apps. Every machine, physical or virtual, needs some basic infrastructure before Kubernetes can deploy apps to it. You can easily provision such machines with a cloud compute service, such as Amazon EC2.

Clusters

Next, a cluster is a group of nodes that Kubernetes manages as one unit. Every cluster has a master node that performs administrative tasks for the other nodes and orchestrates the workloads running across them.

Deployments

Last, a deployment acts as a blueprint for pods: it contains a pod template that Kubernetes uses to create new pods with the right specifications. Deployments make it easy to scale, because Kubernetes holds the blueprint and can create (or remove) pod replicas on demand.
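
As a sketch, a minimal deployment could look like the following; the image name is again a placeholder. The template section is the pod blueprint, and replicas tells Kubernetes how many copies of that pod to keep running.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-node-app
    spec:
      replicas: 3                    # keep three identical pods running
      selector:
        matchLabels:
          app: my-node-app
      template:                      # the pod blueprint
        metadata:
          labels:
            app: my-node-app
        spec:
          containers:
            - name: web
              image: my-node-app:1.0 # placeholder image name

Scaling up then becomes a one-line change to replicas, or a single command such as kubectl scale deployment my-node-app --replicas=5.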

Next, let’s learn about Kubernetes secrets.

5 Kubernetes Secrets

Let’s explore five aspects of Kubernetes that you probably aren’t familiar with. The first tip is the most obvious one: to use the Secret object. Let’s have a look.

1. Secret Object

Are you familiar with .env files in Node.js projects? The concept behind the Secret object is very similar. A Kubernetes Secret lets you store sensitive values, such as the usernames and passwords you use to access services like MongoDB or PostgreSQL.

The benefit of using a Secret object is that it keeps those values out of your pod specs and container images. To use a Secret, you either mount it into a container as a volume or expose its values as environment variables. Remember, the values inside your Secret object are only base64-encoded, not encrypted, by default. To encrypt Secrets at rest, you have to create an encryption configuration for the API server.
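
As a sketch, creating and consuming a Secret could look like this. The names (db-credentials, MONGO_PASSWORD) and values are placeholders; stringData lets you write plain values that Kubernetes stores base64-encoded.

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials           # placeholder name
    type: Opaque
    stringData:                      # stored base64-encoded, not encrypted
      username: app-user
      password: s3cr3t

A container spec can then reference it, for example as an environment variable:

    env:
      - name: MONGO_PASSWORD         # placeholder variable name
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password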

2. Need for Role-Based Access Control

First of all, check whether you've enabled role-based access control (RBAC). It should be enabled by default, unless you've upgraded from a version older than Kubernetes 1.6.

Furthermore, avoid setting cluster-wide permissions. It’s much safer to set permissions on a case-by-case basis only for specific namespaces. (You’ll learn more about namespaces in just a moment.)

Some may argue that setting admin privileges for your whole cluster helps with debugging. However, it’s a common mistake to forget to remove these admin privileges after debugging.
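
As an illustration, namespace-scoped permissions look roughly like the sketch below. The namespace (staging), role name, and user are placeholders; the point is that this grants read-only access to pods in a single namespace instead of admin rights across the whole cluster.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: staging                  # placeholder namespace
      name: pod-reader
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]   # read-only access to pods
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: staging
      name: read-pods
    subjects:
      - kind: User
        name: jane                        # placeholder user
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io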

3. Kubernetes Namespaces

From the Kubernetes documentation, you can read the following information about namespaces:

“Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.”

Namespaces are great for isolating environments. They also let you apply security measures, such as network policies, per namespace. In short, namespaces help you maintain a good overview of your pods and apply targeted security measures.
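
Creating a namespace takes only a few lines; here's a minimal sketch (the name team-a is a placeholder):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a                   # placeholder name

You can also create it imperatively with kubectl create namespace team-a, and then deploy into it with kubectl apply -f app.yaml --namespace=team-a.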

4. Add Memory Limits

The Kubernetes documentation suggests setting memory limits, for example per namespace. Say you're running an application that opens a new connection to your MongoDB instance every five seconds. However, a sleepy developer forgot to write the code that closes those connections again. This means your application is at risk of running out of memory because of all the unclosed connection objects it keeps around.

That might not sound like a big risk. But if you deploy this application without a memory limit, it can eventually exhaust a node's memory, crash that node, and affect the operation of the whole cluster.
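
One way to guard against this is a LimitRange, which gives every container in a namespace a default memory limit. Here's a minimal sketch, with placeholder values and namespace name:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: default-memory
      namespace: team-a              # placeholder namespace
    spec:
      limits:
        - type: Container
          default:                   # limit used when a container sets none
            memory: 512Mi
          defaultRequest:            # request used when a container sets none
            memory: 256Mi

With this in place, the leaky application gets killed and restarted once it exceeds 512Mi, instead of dragging the whole node down with it.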

Want to learn more about Kubernetes resource management? Read the documentation.

Last, let’s learn why you should run sensitive workloads on dedicated machines.

5. Use Dedicated Machines for Sensitive Workloads

To minimize the impact of an attack or a faulty configuration, separate sensitive workloads. By “separate,” I mean run sensitive workloads on dedicated machines (nodes). You don't want a less sensitive application sharing memory and runtime with a more sensitive one, so make sure those sensitive workloads end up on their own servers.
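
One way to achieve this in Kubernetes is with node labels, taints, and tolerations. The sketch below assumes a dedicated node named node-1 and a hypothetical billing-service image; the taint keeps ordinary pods off the node, while the node selector and toleration put the sensitive pod on it.

    # Label and taint the dedicated node (node name is a placeholder):
    #   kubectl label nodes node-1 workload=sensitive
    #   kubectl taint nodes node-1 workload=sensitive:NoSchedule

    apiVersion: v1
    kind: Pod
    metadata:
      name: billing-service          # hypothetical sensitive workload
    spec:
      nodeSelector:
        workload: sensitive          # only schedule on the dedicated node
      tolerations:
        - key: "workload"
          operator: "Equal"
          value: "sensitive"
          effect: "NoSchedule"       # allowed onto the tainted node
      containers:
        - name: app
          image: billing-service:1.0 # placeholder image name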

Final Words About Kubernetes

Kubernetes might feel overwhelming at first. Take your time reading up on concepts such as pods, clusters, namespaces, and so on. Once you understand these terms, you can start experimenting with different setups and implementing the tips above. For example, you might create an application that causes a memory leak and observe how Kubernetes behaves. Then add a memory limit to see what difference it makes and how Kubernetes handles the affected pod.

Good luck with your Kubernetes journey!