Monday, November 07, 2016
Modernizing the Skytap Cloud Micro-Service Architecture with Kubernetes
Editor’s note: Today’s guest post is by the Tools and Infrastructure Engineering team at Skytap, a public cloud provider focused on empowering DevOps workflows, sharing their experience on adopting Kubernetes.
Skytap is a global public cloud that provides our customers the ability to save and clone complex virtualized environments in any given state. Our customers include enterprise organizations running applications in a hybrid cloud, educational organizations providing virtual training labs, users who need easy-to-maintain development and test labs, and a variety of organizations with diverse DevOps workflows.
Some time ago, we started growing our business at an accelerated pace — our user base and our engineering organization continue to grow simultaneously. These are exciting, rewarding challenges! However, it’s difficult to scale applications and organizations smoothly, and we’re approaching the task carefully. When we first began looking at improvements to scale our toolset, it was very clear that traditional OS virtualization was not going to be an effective way to achieve our scaling goals. We found that the persistent nature of VMs encouraged engineers to build and maintain bespoke ‘pet’ VMs; this did not align well with our desire to build reusable runtime environments with a stable, predictable state. Fortuitously, growth in the Docker and Kubernetes communities has aligned with our growth, and the concurrent explosion in community engagement has (from our perspective) helped these tools mature.
In this article, we’ll explore how Skytap uses Kubernetes as a key component in the services that handle production workloads for the growing Skytap Cloud.
As we add engineers, we want to maintain our agility and continue enabling ownership of components throughout the software development lifecycle. This requires a lot of modularization and consistency in key aspects of our process. Previously, we drove reuse with systems-level packaging through our VM and environment templates, but as we scale, containers have become increasingly important as a packaging mechanism due to their comparatively lightweight and precise control of the runtime environment.
In addition to this packaging flexibility, containers help us establish more efficient resource utilization, and they head off growing complexity arising from the natural inclination of teams to mix resources into large, highly-specialized VMs. For example, our operations team would install tools for monitoring health and resource utilization, a development team would deploy a service, and the security team might install traffic monitoring; combining all of that into a single VM greatly increases the test burden and often results in surprises—oops, you pulled in a new system-level Ruby gem!
Containerization of individual components in a service is pretty trivial with Docker. Getting started is easy, but as anyone who has built a distributed system with more than a handful of components knows, the real difficulties are deployment, scaling, availability, consistency, and communication between each unit in the cluster.
Let’s containerize!
We’d begun to trade a lot of our heavily-loved pet VMs for, as the saying goes, cattle.
 _____
/ Moo \
\---- /
       \   ^__^
        \  (oo)\_______
           (__)\       )\/\
               ||-----w |
               ||     ||
The challenges of distributed systems aren’t simplified by creating a large herd of free-range containers, though. When we started using containers, we recognized the need for a container management framework. We evaluated Docker Swarm, Mesosphere, and Kubernetes, but we found that the Mesosphere usage model didn’t match our needs — we need the ability to manage discrete VMs; this doesn’t match the Mesosphere ‘distributed operating system’ model — and Docker Swarm was still not mature enough. So, we selected Kubernetes.
Launching Kubernetes and building a new distributed service is relatively easy (inasmuch as this can be said for such a service: you can’t beat the CAP theorem). However, we need to integrate container management with our existing platform and infrastructure. Some components of the platform are better served by VMs, and we need the ability to containerize services iteratively.
We broke this integration problem down into four categories:
1. Service control and deployment
2. Inter-service communication
3. Infrastructure integration
4. Engineering support and education
Service Control and Deployment
We use a custom extension of Capistrano (we call it ‘Skycap’) to deploy services and manage those services at runtime. It is important for us to manage both containerized and classic services through a single, well-established framework. We also need to isolate Skycap from the inevitable breaking changes inherent in an actively-developed tool like Kubernetes.
To handle this, we use wrappers in our service control framework that isolate kubectl behind Skycap and handle issues like ignoring spurious log messages.
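Concretely, such a wrapper can be a thin shell shim that pins the cluster configuration, passes the requested command through, and filters known-noisy output. The sketch below is illustrative only; the paths, names, and filtered pattern are assumptions rather than Skycap’s actual code.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a Skycap-side wrapper around kubectl: pin the
# cluster config, pass the caller's command through, and drop stderr lines
# we have decided to ignore. All paths and patterns are illustrative.
set -euo pipefail

export KUBECONFIG=/etc/skycap/kubeconfig

# Run kubectl with the caller's arguments, filtering known-spurious
# warnings so deploy logs stay readable.
kubectl "$@" 2> >(grep -v 'known spurious warning' >&2 || true)
```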
Deployment adds a layer of complexity for us. Docker images are a great way to package software, but historically, we’ve deployed from source, not packages. Our engineering team expects that making changes to source is sufficient to get their work released; devs don’t expect to handle additional packaging steps. Rather than rebuild our entire deployment and orchestration framework for the sake of containerization, we use a continuous integration pipeline for our containerized services. We automatically build a new Docker image for every commit to a project, and then we tag it with the Mercurial (Hg) changeset number of that commit. On the Skycap side, a deployment from a specific Hg revision will then pull the Docker images that are tagged with that same revision number.
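As a rough illustration of that tagging scheme (the registry, image, and service names below are hypothetical), the CI job tags each image with the current changeset, and the deploy side resolves a revision to the matching tag:

```bash
# CI side: build and push an image tagged with the Mercurial changeset.
REV=$(hg identify --id)                      # short changeset hash of this commit
IMAGE="registry.example.com/myservice:${REV}"
docker build -t "$IMAGE" .
docker push "$IMAGE"

# Deploy side (sketch): a deployment of revision $REV pulls the image
# carrying the same tag, e.g. via a rolling update of the replication controller.
kubectl rolling-update myservice --image="$IMAGE"
```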
We reuse container images across multiple environments. This requires environment-specific configuration to be injected into each container instance. Until recently, we used similar source-based principles to inject these configuration values: each container would copy relevant configuration files from Hg by cURL-ing raw files from the repo at run time. Network availability and variability are a challenge best avoided, though, so we now load the configuration into Kubernetes’ ConfigMap feature. This not only simplifies our Docker images, but it also makes pod startup faster and more predictable (because containers don’t have to download files from Hg).
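For example (names and paths below are illustrative), the environment-specific settings can be loaded once into a ConfigMap and mounted into the pod, rather than fetched from Hg at startup:

```bash
# Load environment-specific configuration into a ConfigMap.
kubectl create configmap myservice-config \
  --from-file=config/production/myservice.yml

# In the pod spec, the same image then mounts that ConfigMap read-only,
# e.g. (YAML fragment shown as a comment):
#
#   volumes:
#   - name: config
#     configMap:
#       name: myservice-config
#   containers:
#   - name: myservice
#     image: registry.example.com/myservice:abc123
#     volumeMounts:
#     - name: config
#       mountPath: /etc/myservice
```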
Inter-service communication
Our services communicate using two primary methods. The first, message brokering, is typical for process-to-process communication within the Skytap platform. The second is through direct point-to-point TCP connections, which are typical for services that communicate with the outside world (such as web services). We’ll discuss the TCP method in the next section, as a component of infrastructure integration.
Managing direct connections between pods in a way that services can understand is complicated. Additionally, our containerized services need to communicate with classic VM-based services. To mitigate this complexity, we primarily use our existing message queueing system. This helped us avoid writing a TCP-based service discovery and load balancing system for handling traffic between pods and non-Kubernetes services.
This reduces our configuration load—services only need to know how to talk to the message queues, rather than to every other service they need to interact with. We have additional flexibility for things like managing the run-state of pods; messages buffer in the queue while nodes are restarting, and we avoid the overhead of re-configuring TCP endpoints each time a pod is added or removed from the cluster. Furthermore, the MQ model allows us to manage load balancing with a more accurate ‘pull’ based approach, in which recipients determine when they are ready to process a new message, instead of using heuristics like ‘least connections’ that simply count the number of open sockets to estimate load.
Migrating MQ-enabled services to Kubernetes is relatively straightforward compared to migrating services that use the complex TCP-based direct or load balanced connections. Additionally, the isolation provided by the message broker means that the switchover from a classic service to a container-based service is essentially transparent to any other MQ-enabled service.
Infrastructure Integration
As an infrastructure provider, we face some unique challenges in configuring Kubernetes for use with our platform. AWS and GCP provide out-of-the-box solutions that simplify Kubernetes provisioning, but they make assumptions about the underlying infrastructure that don’t match our reality. Some organizations run Kubernetes in purpose-built data centers; that option would have required us to abandon our existing load balancing infrastructure, our Puppet-based provisioning system, and the expertise we’d built up around these tools. We weren’t interested in abandoning the tools or our vested experience, so we needed a way to manage Kubernetes that could integrate with our world instead of rebuilding it.
So, we use Puppet to provision and configure VMs that, in turn, run the Skytap Platform. We wrote custom deployment scripts to install Kubernetes on these, and we coordinate with our operations team to do capacity planning for Kube-master and Kube-node hosts.
In the previous section, we mentioned point-to-point TCP-based communication. For customer-facing services, the pods need a way to interface with Skytap’s layer 3 network infrastructure. Examples at Skytap include our web applications and API over HTTPS, Remote Desktop over Web Sockets, FTP, TCP/UDP port forwarding services, full public IPs, etc. We need careful management of network ingress and egress for this external traffic, and have historically used F5 load balancers. The MQ infrastructure for internal services is inadequate for handling this workload because the protocols used by various clients (like web browsers) are very specific and TCP is the lowest common denominator.
To get our load balancers communicating with our Kubernetes pods, we run the kube-proxy on each node. Load balancers route to the node, and kube-proxy handles the final handoff to the appropriate pod.
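One way to picture this hand-off, as a sketch rather than Skytap’s exact configuration (the service name and ports are illustrative): expose the pods behind a Service with a stable node-level port, so the external load balancer can target any node and let kube-proxy complete the hop to a backing pod.

```bash
# Expose the replication controller behind a NodePort-style Service.
kubectl expose rc myservice \
  --name=myservice \
  --port=443 --target-port=8443 \
  --type=NodePort

# The allocated node port is what the external load balancer pool targets
# on every node; kube-proxy forwards <node-ip>:<nodePort> to a backing pod.
kubectl get svc myservice -o jsonpath='{.spec.ports[0].nodePort}'
```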
We mustn’t forget that Kubernetes needs to route traffic between pods (for both TCP-based and MQ-based messaging). We use the Calico plugin for Kubernetes networking, with a specialized service to reconfigure the F5 when Kubernetes launches or reaps pods. Calico handles route advertisement with BGP, which eases integration with the F5.
F5s also need their load balancing pools reconfigured when pods enter or leave the cluster. The F5 appliance maintains a pool of load-balanced back-ends; ingress to a containerized service is directed through this pool to one of the nodes hosting a service pod. This is straightforward for static network configurations, but since we’re using Kubernetes to manage pod replication and availability, our networking situation becomes dynamic. To handle changes, we have a ‘load balancer’ pod that monitors the Kubernetes svc object for changes; if a pod is removed or added, the ‘load balancer’ pod detects the change through the svc object and updates the F5 configuration through the appliance’s web API. This way, Kubernetes transparently handles replication and failover/recovery, and the dynamic load balancer configuration lets this process remain invisible to the service or user who originated the request. Similarly, the combination of the Calico virtual network and the F5 load balancer means that TCP connections behave consistently for services, whether they are running on the traditional VM infrastructure or have been migrated to containers.
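The core loop of such a watcher can be quite small. The sketch below is a heavily simplified illustration that watches the service’s endpoints (one of several ways to observe membership changes) and uses a hypothetical F5 management URL, credentials, and pool layout; it assumes jq is available and is not the actual Skytap service.

```bash
#!/usr/bin/env bash
# Watch a service's endpoints and push the current member list to the F5
# whenever membership changes. Everything F5-specific here is hypothetical;
# F5_USER and F5_PASS are expected to come from the environment.
set -euo pipefail

SVC=myservice                        # illustrative service name
F5_API=https://f5.example.com/mgmt   # hypothetical appliance web API

kubectl get endpoints "$SVC" --watch -o name | while read -r _event; do
  # Re-read the current addresses backing the service.
  ADDRS=$(kubectl get endpoints "$SVC" \
    -o jsonpath='{range .subsets[*].addresses[*]}{.ip}{"\n"}{end}')

  # Rebuild the appliance's pool membership via its web API (illustrative call).
  printf '%s\n' "$ADDRS" | jq -R . | jq -s '{members: .}' |
    curl -sk -u "$F5_USER:$F5_PASS" -X PUT \
         -H 'Content-Type: application/json' \
         --data-binary @- "$F5_API/pools/$SVC/members"
done
```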
With dynamic reconfiguration of the network, the replication mechanics of Kubernetes make horizontal scaling and (most) failover/recovery very straightforward. We haven’t yet reached the reactive scaling milestone, but we’ve laid the groundwork with the Kubernetes and Calico infrastructure, making one avenue to implement it straightforward:
- Configure upper and lower bounds for service replication
- Build a load analysis and scaling service (easy, right?)
- If load patterns match the configured triggers in the scaling service (for example, request rate or volume above certain bounds), issue: kubectl scale --replicas=COUNT rc NAME
This would allow us fine-grained control of autoscaling at the platform level, instead of from the applications themselves, but we’ll also evaluate Horizontal Pod Autoscaling in Kubernetes, which may suit our needs without a custom service.
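A first cut of that scaling service could be little more than a polling loop around the steps above. The sketch below is purely illustrative: the metrics endpoint, thresholds, and names are assumptions, not a description of anything we run.

```bash
#!/usr/bin/env bash
# Naive polling autoscaler for a replication controller, per the steps above.
set -euo pipefail

RC=myservice                # replication controller to scale (illustrative)
MIN=2; MAX=10               # configured replication bounds

while sleep 60; do
  # Hypothetical load metric, e.g. requests/second reported by a metrics service.
  RATE=$(curl -s "http://metrics.example.com/rate/${RC}")
  CURRENT=$(kubectl get rc "$RC" -o jsonpath='{.spec.replicas}')

  if   [ "$RATE" -gt 1000 ] && [ "$CURRENT" -lt "$MAX" ]; then
    kubectl scale --replicas=$((CURRENT + 1)) rc "$RC"
  elif [ "$RATE" -lt 100 ]  && [ "$CURRENT" -gt "$MIN" ]; then
    kubectl scale --replicas=$((CURRENT - 1)) rc "$RC"
  fi
done
```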
Keep an eye on our GitHub account and the Skytap blog; as our solutions to problems like these mature, we hope to share what we’ve built with the open source community.
Engineering Support
A transition like our containerization project requires that the engineers who maintain and contribute to the platform change their workflows and learn new methods for creating and troubleshooting services.
Because a variety of learning styles require a multi-faceted approach, we handle this in three ways: with documentation, with direct outreach to engineers (that is, brownbag sessions or coaching teams), and by offering easy-to-access, ad-hoc support.
We continue to curate a collection of documents that provide guidance on transitioning classic services to Kubernetes, creating new services, and operating containerized services. Documentation isn’t for everyone, and sometimes it’s missing or incomplete despite our best efforts, so we also run an internal #kube-help Slack channel, where anyone can stop in for assistance or arrange a more in-depth face-to-face discussion.
We have one more powerful support tool: we automatically construct and test prod-like environments that include this Kubernetes infrastructure, which gives engineers a lot of freedom to experiment and work with Kubernetes hands-on. We explore automated environment delivery in more detail in this post.
Final Thoughts
We’ve had great success with Kubernetes and containerization in general, but we’ve certainly found that integrating with an existing full-stack environment presents many challenges. While not exactly plug-and-play from an enterprise lifecycle standpoint, the flexibility and configurability of Kubernetes still make it a very powerful tool for building our modularized service ecosystem.
We love application modernization challenges. The Skytap platform is well suited for these sorts of migration efforts – we run Skytap in Skytap, of course, which helped us tremendously in our Kubernetes integration project. If you’re planning modernization efforts of your own, connect with us; we’re happy to help.
–Shawn Falkner-Horine and Joe Burchett, Tools and Infrastructure Engineering, Skytap
- Download Kubernetes
- Get involved with the Kubernetes project on GitHub
- Post questions (or answer questions) on Stack Overflow
- Connect with the community on Slack
- Follow us on Twitter @Kubernetesio for latest updates