Kubernetes 1.14 - Welcoming Windows workloads now
28 Mar 2019

The first Kubernetes release of 2019 is here, and it's a big one: 31 enhancements in total, with a full third of them graduating to stable and another third moving to beta. That is the most enhancements to reach stable in a single Kubernetes release so far. A big theme of this release is supporting more workloads on Kubernetes.

This release also brings security enhancements, including updates to RBAC. Let's look at the major features that have graduated to stable.

Windows Nodes - Production-level support

Windows node support had been in beta until now; with this release, Kubernetes supports adding Windows nodes as worker nodes and scheduling Windows containers. This opens up the Kubernetes ecosystem to companies and people running Windows workloads, and enterprises with mixed Linux and Windows workloads can now use a single orchestrator, Kubernetes, to schedule and manage both sets of workloads.

Some of the key features that enable Windows workloads in Kubernetes are:

  • Windows Server 2019 for worker nodes and containers.
  • Networking support with Azure-CNI, OVN-Kubernetes, and Flannel.
  • Support for metrics/quotas, service types, pods, and workload controllers that closely matches what is available for Linux containers.
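To place a workload on a Windows worker, a pod can select the node by its operating-system label. A minimal sketch, assuming a cluster that sets the kubernetes.io/os label (older clusters may use beta.kubernetes.io/os instead); the pod name and image are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: iis-sample                 # illustrative name
  spec:
    nodeSelector:
      kubernetes.io/os: windows      # schedule onto a Windows worker node
    containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019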

Kubectl Updates

New documentation for kubectl
kubectl can manage resources either declaratively or imperatively. The kubectl documentation has been rewritten with a focus on declarative resource configuration and is now also available as a standalone book online at https://kubectl.docs.kubernetes.io.

Kustomize is now integrated into kubectl via the -k flag to allow declarative Resource Config authoring, so the ability to reuse resource configuration is available natively within kubectl. Admins can apply a directory containing a kustomization.yaml to a cluster with kubectl apply -k dir/, or emit the generated Resource Config to stdout, without applying it, using kubectl kustomize dir/.
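As a minimal sketch, a directory might hold a kustomization.yaml like the one below; the name prefix, label, and file names are hypothetical:

  # dir/kustomization.yaml
  namePrefix: staging-
  commonLabels:
    app: my-app          # hypothetical label applied to every resource
  resources:
  - deployment.yaml      # hypothetical resource files in the same directory
  - service.yaml

Then kubectl apply -k dir/ builds and applies the customized resources, while kubectl kustomize dir/ only prints the generated configuration.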

The kustomize subcommand will be developed in a separate Kubernetes-owned repository, and its latest features will be available from a standalone binary in that repository. kubectl will be updated with the latest version prior to each Kubernetes release.

kubectl Plugin Mechanism is now classified as Stable
The kubectl plugin system allows developers to publish custom subcommands in the form of standalone binaries. These subcommands can be used to extend kubectl with new functionality. Plugins need to have the kubectl- prefix and exist on the user's $PATH.
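As a minimal sketch, a plugin can be nothing more than an executable script whose file name carries the kubectl- prefix; the name kubectl-hello is hypothetical:

  #!/usr/bin/env bash
  # Saved as kubectl-hello, marked executable, and placed on $PATH
  echo "hello from a kubectl plugin"

After chmod +x kubectl-hello and moving the file onto the $PATH, kubectl hello invokes the plugin, and kubectl plugin list shows the plugins kubectl has discovered.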

Local Persistent Volumes are now GA

This enhancement makes locally attached storage available as a persistent volume. Databases and distributed file systems are the primary use cases, since local storage is typically faster and better performing than remote or networked disks, and often cheaper.
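A local PersistentVolume declares a path on a specific node together with node affinity, so that pods claiming it are scheduled onto that node. A minimal sketch; the storage class name, path, and node name are illustrative:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: example-local-pv
  spec:
    capacity:
      storage: 100Gi
    accessModes:
    - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    storageClassName: local-storage     # illustrative StorageClass
    local:
      path: /mnt/disks/ssd1             # illustrative path on the node
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node-1                    # illustrative node name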

PID Limiting is Moving to Beta

One interesting feature moving to beta is PID limiting. Process IDs are a fundamental resource on Linux, and when running a large number of pods it is easy to hit the kernel's task limit without hitting any other resource limit, which can make the host machine unstable. Admins need a mechanism to ensure that pod workloads don't exhaust PIDs and prevent host daemons (the kubelet, kube-proxy, and the container runtime) from running. It is also important to limit PIDs per pod to limit the impact on other workloads on the same node.

Admins can now provide pod-to-pod PID isolation by setting a default limit on the number of PIDs per pod. Node-to-pod isolation is also possible as an alpha feature by reserving a number of PIDs at the node level so that pods cannot exhaust them.
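As a sketch, the per-pod limit is configured through the kubelet (the SupportPodPidsLimit feature gate is beta in 1.14); the value 4096 below is an arbitrary illustrative limit:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  podPidsLimit: 4096            # maximum number of PIDs any single pod may use
  featureGates:
    SupportPodPidsLimit: true   # beta in 1.14; shown explicitly for clarity

The same limit can be passed on the kubelet command line as --pod-max-pids=4096.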

A few more notable features

Pod priority and preemption give the Kubernetes scheduler another parameter to control pod scheduling: more critical pods are scheduled first when the cluster is hitting resource limits, and less important pods can be preempted and removed to make room for higher-priority pods. Relative importance is decided by priority.
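A minimal sketch: define a PriorityClass and reference it from a pod spec; the class name, value, pod name, and image are illustrative:

  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: critical-service        # illustrative name
  value: 1000000                  # higher values are scheduled first and can preempt lower ones
  globalDefault: false
  description: "For business-critical pods."
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: important-app           # illustrative name
  spec:
    priorityClassName: critical-service
    containers:
    - name: app
      image: nginx                # illustrative image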

Pod Readiness Gates provide an extension point where external feedback can be sought to determine the readiness of a pod.
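A sketch of the extension point in the pod spec; the condition type below is a hypothetical custom condition that an external controller would set on the pod's status before the pod is considered Ready:

  apiVersion: v1
  kind: Pod
  metadata:
    name: gated-pod                                          # illustrative name
  spec:
    readinessGates:
    - conditionType: "example.com/load-balancer-attached"    # hypothetical condition type
    containers:
    - name: app
      image: nginx                                           # illustrative image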

Hardening of the default RBAC discovery clusterrolebindings. This change removes discovery from the set of APIs that allow unauthenticated access by default, improving privacy for CRDs and the default security posture of the cluster.
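Cluster admins can inspect the affected bindings after upgrading and, only if unauthenticated discovery is genuinely required, restore the old behaviour. A minimal sketch, assuming the default 1.14 role names:

  # Inspect the discovery-related cluster role bindings
  kubectl get clusterrolebindings system:discovery system:basic-user -o yaml

  # Only if unauthenticated discovery is explicitly required:
  kubectl create clusterrolebinding unauthenticated-discovery \
    --clusterrole=system:discovery --group=system:unauthenticated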

Availability

Kubernetes 1.14 is available for download on GitHub.
