
ICYMI: 6 tips to master Kubernetes performance and management

The more skilled the team, the better a Kubernetes implementation can be. Learn how to get the most out of service discovery, infrastructure management, CI and more.

As more IT teams use microservices and containers in tandem to support a distributed application architecture, they must choose container management and orchestration tools. Many of those enterprise teams land on Kubernetes due to its versatility and popularity. New users should learn the many ways to improve Kubernetes performance.

However, while Kubernetes promises easy service and container orchestration, it comes with its fair share of management challenges.

In case you missed it, these articles dive into some common problems that developers and architects face with Kubernetes management and share ways to maximize your development potential with the popular orchestration system. Gain valuable, Kubernetes-specific insight from three of our experienced IT industry journalists, including advice on combining microservices and Kubernetes, service discovery management, load balancing, monitoring tasks, and infrastructure provisioning.

How to use microservices and Kubernetes in tandem

Kubernetes adopters face a learning curve when it comes to managing certain aspects of the technology stack, such as storage, networking and security. Experienced application developer and tech writer Twain Taylor explains how Kubernetes management requires a shake-up from traditional infrastructure management practices.

In this piece, Taylor recommends that software teams use a Kubernetes and microservices architecture to their advantage with detailed strategies for persistent data storage, multilayered networking, context-specific Kubernetes pods and data security.

Strategies for optimal Kubernetes service discovery

In a dynamic container and microservices architecture, Kubernetes can support and automate network service discovery to enhance traffic flow and efficiency. Taylor returns with this article to walk you through approaches to customizing Kubernetes service discovery features.

This piece examines two popular Kubernetes service discovery approaches: the environment variable method and the domain name system method. Taylor also demonstrates how to master specific elements, like labels and selectors, IP addresses, and replication controllers, to attain peak service discovery performance with Kubernetes.
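To make the environment variable method concrete, here is a minimal sketch in Python. Kubernetes injects `{SERVICE_NAME}_SERVICE_HOST` and `{SERVICE_NAME}_SERVICE_PORT` variables (uppercased, hyphens converted to underscores) into pods started after a service exists; the `redis-master` service name and the localhost fallback below are assumptions for illustration only.

```python
import os

def discover_service(name, default_port=80):
    """Resolve a Kubernetes service via the environment variable method.

    Kubernetes injects {NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT into
    every pod created after the service. Outside a cluster these
    variables are absent, so fall back to localhost for illustration.
    """
    prefix = name.upper().replace("-", "_")
    host = os.environ.get(f"{prefix}_SERVICE_HOST", "127.0.0.1")
    port = int(os.environ.get(f"{prefix}_SERVICE_PORT", default_port))
    return host, port

# Simulate the variables Kubernetes would inject for a hypothetical
# "redis-master" service.
os.environ["REDIS_MASTER_SERVICE_HOST"] = "10.0.0.11"
os.environ["REDIS_MASTER_SERVICE_PORT"] = "6379"

print(discover_service("redis-master"))  # ('10.0.0.11', 6379)
```

The DNS method avoids this ordering constraint: a pod can resolve `<service>.<namespace>.svc.cluster.local` at any time, regardless of when the service was created.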

Enhance Kubernetes load balancing with gRPC

Load balancing in a container-based environment involves routing requests across various nodes and steering traffic away from failed nodes. For load balancing in Kubernetes, software teams should consider using the gRPC protocol, a Google-developed open source framework for remote procedure calls. gRPC handles Kubernetes load balancing requests thanks to its HTTP/2 network protocol foundation, which multiplexes multiple concurrent requests over a single connection.
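One consequence of HTTP/2 multiplexing is that a gRPC channel keeps a single long-lived connection, so spreading load typically means picking a different backend per call rather than per connection. The sketch below shows that idea with a plain round-robin picker; the pod IPs and port are hypothetical, and a real gRPC client would delegate this to its built-in resolver and balancing policy.

```python
import itertools

class RoundRobinPicker:
    """Minimal client-side round-robin endpoint picker.

    Because HTTP/2 multiplexes many calls over one persistent
    connection, per-call backend selection is what distributes load.
    """
    def __init__(self, endpoints):
        if not endpoints:
            raise ValueError("need at least one endpoint")
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        return next(self._cycle)

# Hypothetical pod addresses behind a headless Kubernetes service.
picker = RoundRobinPicker(["10.0.0.5:50051", "10.0.0.6:50051", "10.0.0.7:50051"])
print([picker.pick() for _ in range(6)])
```

Each call cycles through the backends in order, so six picks visit each of the three endpoints twice.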

In this network protocol deep dive, Taylor details gRPC's benefits and explains how it handles load balancing requests, including a briefing on the four types of gRPC service requests. Discover how gRPC enables Kubernetes users to focus on their application logic rather than worrying about handling request calls over the network.

Docker and Kubernetes monitoring tools for microservices

There are many ways that container deployments can fail. To alleviate these problems, organizations need to invest in monitoring tools that are tailored to a Docker and Kubernetes ecosystem.

In this piece, IT industry analyst Kerry Doyle explains how, with the right monitoring tools, developers can see events happening in different levels of the deployment and repair failed deployments automatically. Explore Doyle's analyses of open source and commercial tools for Kubernetes and Docker monitoring such as cAdvisor, Sysdig Monitor, Prometheus and Stackify Retrace.
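Prometheus, one of the tools Doyle covers, scrapes metrics over HTTP in a plain-text exposition format. As a minimal sketch, the helper below renders one sample line in that format; the `container_restarts_total` metric name and its labels are hypothetical examples, not metrics from any particular exporter.

```python
def render_metric(name, labels, value):
    """Render one sample in the Prometheus text exposition format,
    e.g. container_restarts_total{pod="web-1"} 3
    """
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

# A hypothetical counter an exporter might expose for scraping.
line = render_metric("container_restarts_total",
                     {"namespace": "prod", "pod": "web-1"}, 3)
print(line)  # container_restarts_total{namespace="prod",pod="web-1"} 3
```

Tools like cAdvisor expose hundreds of such lines on a `/metrics` endpoint, which Prometheus polls on a schedule and stores as time series.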

How Jenkins adapted to a Kubernetes world

Jenkins is an open source automation tool that enables DevOps and CI/CD processes. It works with the help of Jenkins-specific expansions, build packs and plugins for Java developers. However, Jenkins wasn't built or designed for the integration and automation needs of modern cloud and container architectures.

Enter Jenkins X. This open source project offers an updated version of Jenkins that brings CI/CD capabilities to cloud- and Kubernetes-based deployments. Here, Taylor helps readers get familiar with Jenkins X and uncovers how this Kubernetes-specific CI/CD tool improves upon the original Jenkins.

Alleviate pesky Kubernetes infrastructure work

There are a number of up-and-coming tools and platforms designed to diminish Kubernetes infrastructure work.

In this feature, tech industry writer Paul Korzeniowski chronicles first-person accounts of the tribulations that come with Kubernetes infrastructure work and explores tools that aim to democratize the development of distributed container systems. These tools and platforms include Metaparticle, ZooKeeper, Chef, Puppet and Pulumi, as well as the Ballerina programming language.

This was last published in May 2019

