Tuesday, December 20, 2016
StatefulSet: Run and Scale Stateful Applications Easily in Kubernetes
Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.5
In the latest release, Kubernetes 1.5, we’ve moved the feature formerly known as PetSet into beta as StatefulSet. There were no major changes to the API Object, other than the community-selected name, but we added the semantics of “at most one pod per index” for deployment of the Pods in the set. Along with ordered deployment, ordered termination, unique network names, and persistent stable storage, we think we have the right primitives to support many containerized stateful workloads. We don’t claim that the feature is 100% complete (it is software after all), but we believe that it is useful in its current form, and that we can extend the API in a backwards-compatible way as we progress toward an eventual GA release.
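For readers who haven’t seen the API object yet, a StatefulSet is declared much like a Deployment, plus a reference to a governing Headless Service and, optionally, per-Pod storage. The following is a minimal sketch, not a production manifest; the names (“web”, “nginx”) are hypothetical, and the complete ZooKeeper manifest used later in this post is the better reference.

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None          # headless Service: gives each Pod a stable DNS name
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1beta1   # the beta API group for StatefulSet in Kubernetes 1.5
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx       # Pods are addressable as web-0.nginx, web-1.nginx, ...
  replicas: 3              # Pods are created in order: web-0, then web-1, then web-2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
        ports:
        - containerPort: 80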
When is StatefulSet the Right Choice for my Storage Application?
Deployments and ReplicaSets are a great way to run stateless replicas of an application on Kubernetes, but their semantics aren’t really right for deploying stateful applications. The purpose of StatefulSet is to provide a controller with the correct semantics for deploying a wide range of stateful workloads. However, moving your storage application onto Kubernetes isn’t always the correct choice. Before you go all in on converging your storage tier and your orchestration framework, you should ask yourself a few questions.
Can your application run using remote storage or does it require local storage media?
Currently, we recommend using StatefulSets with remote storage. Therefore, you must be ready to tolerate the performance implications of network attached storage. Even with storage optimized instances, you won’t likely realize the same performance as locally attached, solid state storage media. Does the performance of network attached storage, on your cloud, allow your storage application to meet its SLAs? If so, running your application in a StatefulSet provides compelling benefits from the perspective of automation. If the node on which your storage application is running fails, the Pod containing the application can be rescheduled onto another node, and, as it’s using network attached storage media, its data are still available after it’s rescheduled.
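In a StatefulSet, that network attached storage is requested declaratively through volumeClaimTemplates, so each replica gets its own PersistentVolumeClaim that follows the Pod across reschedules. A sketch, assuming dynamic provisioning with a StorageClass named “standard” (the claim name and size are illustrative):

  volumeClaimTemplates:    # appended to the StatefulSet spec
  - metadata:
      name: datadir        # yields one claim per Pod, e.g. datadir-web-0
      annotations:
        volume.beta.kubernetes.io/storage-class: standard   # assumes a StorageClass named "standard" exists
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

The Pod template then mounts the claim with a volumeMount whose name matches the template (“datadir”).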
Do you need to scale your storage application?
What is the benefit you hope to gain by running your application in a StatefulSet? Do you have a single instance of your storage application for your entire organization? Is scaling your storage application a problem that you actually have? If you have a few instances of your storage application, and they are successfully meeting the demands of your organization, and those demands are not rapidly increasing, you’re already at a local optimum.
If, however, you have an ecosystem of microservices, or if you frequently stamp out new service footprints that include storage applications, then you might benefit from automation and consolidation. If you’re already using Kubernetes to manage the stateless tiers of your ecosystem, you should consider using the same infrastructure to manage your storage applications.
How important is predictable performance?
Kubernetes doesn’t yet support isolation for network or storage I/O across containers. Colocating your storage application with a noisy neighbor can reduce the QPS that your application can handle. You can mitigate this by scheduling the Pod containing your storage application as the only tenant on a node (thus providing it a dedicated machine) or by using Pod anti-affinity rules to segregate Pods that contend for network or disk, but this means that you have to actively identify and mitigate hot spots.
If squeezing the absolute maximum QPS out of your storage application isn’t your primary concern, if you’re willing and able to mitigate hotspots to ensure your storage applications meet their SLAs, and if the ease of turning up new “footprints” (services or collections of services), scaling them, and flexibly re-allocating resources is your primary concern, Kubernetes and StatefulSet might be the right solution to address it.
Does your application require specialized hardware or instance types?
If you run your storage application on high-end hardware or extra-large instance sizes, and your other workloads on commodity hardware or smaller, less expensive instances, you may not want to deploy a heterogeneous cluster. If you can standardize on a single instance size for all types of apps, then you may benefit from the flexible resource reallocation and consolidation that you get from Kubernetes.
A Practical Example - ZooKeeper
ZooKeeper is an interesting use case for StatefulSet for two reasons. First, it demonstrates that StatefulSet can be used to run a distributed, strongly consistent storage application on Kubernetes. Second, it’s a prerequisite for running workloads like Apache Hadoop and Apache Kafka on Kubernetes. An in-depth tutorial on deploying a ZooKeeper ensemble on Kubernetes is available in the Kubernetes documentation, and we’ll outline a few of the key features below.
Creating a ZooKeeper Ensemble
Creating an ensemble is as simple as using kubectl create to generate the objects stored in the manifest.
$ kubectl create -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
service "zk-headless" created
configmap "zk-config" created
poddisruptionbudget "zk-budget" created
statefulset "zk" created
When you create the manifest, the StatefulSet controller creates each Pod in order of its ordinal, and it waits for each Pod to be Running and Ready before creating its successor.
$ kubectl get pods -w -l app=zk
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 7s
zk-0 0/1 ContainerCreating 0 7s
zk-0 0/1 Running 0 38s
zk-0 1/1 Running 0 58s
zk-1 0/1 Pending 0 1s
zk-1 0/1 Pending 0 1s
zk-1 0/1 ContainerCreating 0 1s
zk-1 0/1 Running 0 33s
zk-1 1/1 Running 0 51s
zk-2 0/1 Pending 0 0s
zk-2 0/1 Pending 0 0s
zk-2 0/1 ContainerCreating 0 0s
zk-2 0/1 Running 0 25s
zk-2 1/1 Running 0 40s
Examining the hostnames of the Pods in the StatefulSet, you can see that each Pod’s hostname contains its ordinal.
$ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2
ZooKeeper stores the unique identifier of each server in a file called “myid”. The identifiers used for ZooKeeper servers are just natural numbers. For the servers in the ensemble, the “myid” files are populated by adding one to the ordinal extracted from the Pods’ hostnames.
$ for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3
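The tutorial’s start script derives this value from the Pod’s hostname; a minimal sketch of the equivalent logic (not the actual script) looks like this:

# Sketch: compute the ZooKeeper server id from the Pod's ordinal.
# zk-2 -> ordinal 2 -> myid 3. The data path matches the output above.
HOST="$(hostname -s)"            # e.g. zk-2
ORDINAL="${HOST##*-}"            # strip everything up to the last "-"
echo $(( ORDINAL + 1 )) > /var/lib/zookeeper/data/myid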
Each Pod has a unique network address based on its hostname and the network domain controlled by the zk-headless Headless Service.
$ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-headless.default.svc.cluster.local
zk-1.zk-headless.default.svc.cluster.local
zk-2.zk-headless.default.svc.cluster.local
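zk-headless is an ordinary Service with clusterIP set to None, which is what produces the per-Pod DNS records shown above. A sketch of such a Service (the ports and labels are assumed to match the configuration shown in this post):

apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  labels:
    app: zk-headless
spec:
  clusterIP: None            # headless: DNS resolves to the individual Pods
  selector:
    app: zk                  # assumes the StatefulSet's Pods carry the label app=zk
  ports:
  - port: 2888
    name: server             # follower connections to the leader
  - port: 3888
    name: leader-election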
The combination of a unique Pod ordinal and a unique network address allows you to populate the ZooKeeper servers’ configuration files with a consistent ensemble membership.
$ kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
tickTime=2000
initLimit=10
syncLimit=2000
maxClientCnxns=60
minSessionTimeout= 4000
maxSessionTimeout= 40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=1
server.1=zk-0.zk-headless.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888
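The server.N entries above can be generated mechanically from the replica count and the Service domain; the tutorial’s configuration script does something equivalent to this sketch (REPLICAS and DOMAIN are assumed values matching the ensemble above):

# Sketch: append one server.N entry per replica to zoo.cfg.
REPLICAS=3
DOMAIN="zk-headless.default.svc.cluster.local"
for (( i = 0; i < REPLICAS; i++ )); do
  echo "server.$(( i + 1 ))=zk-$i.${DOMAIN}:2888:3888" >> /opt/zookeeper/conf/zoo.cfg
done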
StatefulSet lets you deploy ZooKeeper in a consistent and reproducible way. You won’t create more than one server with the same id, the servers can find each other via stable network addresses, and they can perform leader election and replicate writes because the ensemble has consistent membership.
The simplest way to verify that the ensemble works is to write a value to one server and read it from another. You can use the “zkCli.sh” script that ships with the ZooKeeper distribution to create a ZNode containing some data.
$ kubectl exec zk-0 zkCli.sh create /hello world
...
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
Created /hello
You can use the same script to read the data from another server in the ensemble.
$ kubectl exec zk-1 zkCli.sh get /hello
...
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
world
...
You can take the ensemble down by deleting the zk StatefulSet.
$ kubectl delete statefulset zk
statefulset "zk" deleted
The cascading delete destroys each Pod in the StatefulSet in reverse ordinal order, and it waits for each Pod to terminate completely before terminating its predecessor.
$ kubectl get pods -w -l app=zk
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 14m
zk-1 1/1 Running 0 13m
zk-2 1/1 Running 0 12m
NAME READY STATUS RESTARTS AGE
zk-2 1/1 Terminating 0 12m
zk-1 1/1 Terminating 0 13m
zk-0 1/1 Terminating 0 14m
zk-2 0/1 Terminating 0 13m
zk-2 0/1 Terminating 0 13m
zk-2 0/1 Terminating 0 13m
zk-1 0/1 Terminating 0 14m
zk-1 0/1 Terminating 0 14m
zk-1 0/1 Terminating 0 14m
zk-0 0/1 Terminating 0 15m
zk-0 0/1 Terminating 0 15m
zk-0 0/1 Terminating 0 15m
You can use kubectl apply to recreate the zk StatefulSet and redeploy the ensemble.
$ kubectl apply -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
service "zk-headless" configured
configmap "zk-config" configured
statefulset "zk" created
If you use the “zkCli.sh” script to get the value entered prior to deleting the StatefulSet, you will find that the ensemble still serves the data.
$ kubectl exec zk-2 zkCli.sh get /hello
...
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
world
...
StatefulSet ensures that, even if all Pods in the StatefulSet are destroyed, when they are rescheduled, the ZooKeeper ensemble can elect a new leader and continue to serve requests.
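The data survives because deleting a StatefulSet does not delete the PersistentVolumeClaims created from its volumeClaimTemplates; the claims, and the volumes bound to them, remain and are reattached when the Pods are recreated. You can verify this by listing the claims:

$ kubectl get pvc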
Tolerating Node Failures
ZooKeeper replicates its state machine to different servers in the ensemble for the explicit purpose of tolerating node failure. By default, the Kubernetes Scheduler could deploy more than one Pod in the zk StatefulSet to the same node. If the zk-0 and zk-1 Pods were deployed on the same node, and that node failed, the ZooKeeper ensemble couldn’t form a quorum to commit writes, and the ZooKeeper service would experience an outage until one of the Pods could be rescheduled.
You should always provision headroom capacity for critical processes in your cluster. If you do, in this instance the Kubernetes Scheduler will reschedule the Pods onto another node, and the outage will be brief.
If the SLAs for your service preclude even brief outages due to a single node failure, you should use a PodAntiAffinity annotation. The manifest used to create the ensemble contains such an annotation, and it tells the Kubernetes Scheduler to not place more than one Pod from the zk StatefulSet on the same node.
{
  "podAntiAffinity": {
    "requiredDuringSchedulingRequiredDuringExecution": [{
      "labelSelector": {
        "matchExpressions": [{
          "key": "app",
          "operator": "In",
          "values": ["zk-headless"]
        }]
      },
      "topologyKey": "kubernetes.io/hostname"
    }]
  }
}
Tolerating Planned Maintenance
The manifest used to create the ZooKeeper ensemble also creates a PodDisruptionBudget, zk-budget. The zk-budget informs Kubernetes about the upper limit of disruptions (unhealthy Pods) that the service can tolerate.
$ kubectl get poddisruptionbudget zk-budget
NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE
zk-budget 2 1 2h
zk-budget indicates that at least two members of the ensemble must be available at all times for the ensemble to be healthy. If you attempt to drain a node prior to taking it offline, and draining it would terminate a Pod that violates the budget, the drain operation will fail. If you use kubectl drain, in conjunction with PodDisruptionBudgets, to cordon your nodes and to evict all Pods prior to maintenance or decommissioning, you can ensure that the procedure won’t be disruptive to your stateful applications.
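For reference, a PodDisruptionBudget equivalent to zk-budget can be expressed in a few lines of YAML. This is a sketch rather than the tutorial’s exact manifest; the selector assumes the ensemble’s Pods are labeled app=zk:

apiVersion: policy/v1beta1     # PodDisruptionBudget is a beta resource as of Kubernetes 1.5
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2              # corresponds to the MIN-AVAILABLE column shown above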
Looking Forward
As Kubernetes development moves toward a GA release of StatefulSet, we are looking at a long list of suggestions from users. If you want to dive into our backlog, check out the GitHub issues with the stateful label. We don’t expect to implement all of these feature requests, because the resulting API would be hard to comprehend. Some feature requests, like support for rolling updates, better integration with node upgrades, and using fast local storage, would benefit most types of stateful applications, and we expect to prioritize these. The intention of StatefulSet is to run a large number of applications well, not to run all applications perfectly. With this in mind, we avoided implementing StatefulSet in a way that relied on hidden mechanisms or inaccessible features. Anyone can write a controller that works similarly to StatefulSet. We call this “making it forkable.”
Over the next year, we expect many popular storage applications to each have their own community-supported, dedicated controllers or “operators”. We’ve already heard of work on custom controllers for etcd, Redis, and ZooKeeper. We expect to write some more ourselves and to support the community in developing others.
The Operators for etcd and Prometheus from CoreOS demonstrate an approach to running stateful applications on Kubernetes that provides a level of automation and integration beyond that which is possible with StatefulSet alone. On the other hand, using a generic controller like StatefulSet or Deployment means that a wide range of applications can be managed by understanding a single config object. We think Kubernetes users will appreciate having the choice of these two approaches.
–Kenneth Owens & Eric Tune, Software Engineers, Google
- Download Kubernetes
- Get involved with the Kubernetes project on GitHub
- Post questions (or answer questions) on Stack Overflow
- Connect with the community on Slack
- Follow us on Twitter @Kubernetesio for latest updates