Managing Kubernetes Traffic with F5 NGINX

Author: Amir Rawdat
Microservices architectures introduce several benefits to the application development and delivery process.
Microservices-based apps are easier to build, test, maintain, and scale. They also reduce downtime through better fault isolation.
While container-based microservices apps have profoundly changed the way DevOps teams deploy applications, they have also introduced challenges. Kubernetes – the de facto container orchestration platform – is designed to simplify management of containerized apps, but it has its own complexities and a steep learning curve. This is because responsibility for many functions that traditionally run inside an app (security, logging, scaling, and so on) is shifted to the Kubernetes networking fabric.
To manage this complexity, DevOps teams need a data plane that gives them control of Kubernetes networking. The data plane is the key component that connects microservices to end users and each other, and managing it effectively is critical to achieving stability and predictability in an environment where modern apps are evolving constantly.
Ingress controller and service mesh are the two Kubernetes-native technologies that provide the control you need over the data plane. This hands-on guide to F5 NGINX Ingress Controller and F5 NGINX Service Mesh includes thorough explanations, diagrams, and code samples to prepare you to deploy and manage production-grade Kubernetes environments.
Chapter 1 introduces NGINX Ingress Controller and NGINX Service Mesh and walks you through installation and deployment, including an integrated solution for managing both north-south and east-west traffic.
Chapter 2 steps through configurations for key use cases:
• TCP/UDP and TLS Passthrough load balancing – Supporting TCP/UDP workloads
• Multi-tenancy and delegation – For safe and effective sharing of resources in a cluster
• Traffic control – Rate limiting and circuit breaking
• Traffic splitting – Blue-green and canary deployments, A/B testing, and debug routing
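To give a flavor of the traffic-control and traffic-splitting use cases above, here is a hedged sketch using NGINX Ingress Controller's `k8s.nginx.org/v1` Policy and VirtualServer custom resources. The hostname and the Service names (`webapp-v1-svc`, `webapp-v2-svc`) are hypothetical; check the field names against your controller version.

```yaml
# Hypothetical sketch: per-client rate limiting plus a 90/10
# canary split, expressed with NGINX Ingress Controller CRDs.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 10r/s                  # allow 10 requests per second
    key: ${binary_remote_addr}   # limit per client IP address
    zoneSize: 10M
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com       # hypothetical hostname
  policies:
    - name: rate-limit-policy    # apply the rate limit above
  upstreams:
    - name: webapp-v1
      service: webapp-v1-svc     # hypothetical Services
      port: 80
    - name: webapp-v2
      service: webapp-v2-svc
      port: 80
  routes:
    - path: /
      splits:                    # canary: 90% to v1, 10% to v2
        - weight: 90
          action:
            pass: webapp-v1
        - weight: 10
          action:
            pass: webapp-v2
```

Adjusting the weights (or keying the split on a header for A/B testing) is how the blue-green and canary patterns in Chapter 2 are typically expressed.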
Chapter 3 covers monitoring, logging, and tracing, which are essential for visibility and insight into your distributed applications.
You’ll learn how to export NGINX metrics to third-party tools including AWS, Elastic Stack, and Prometheus.
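As one hedged sketch of the Prometheus path: the controller is started with its metrics endpoint enabled, and the pods are annotated so Prometheus can discover and scrape it. The port and flag below match the controller's documented defaults at the time of writing, but verify them against your version; the container name is illustrative.

```yaml
# Hypothetical fragment of the Ingress Controller Deployment:
# expose metrics and let Prometheus scrape them.
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"     # default metrics port
    spec:
      containers:
        - name: nginx-ingress
          args:
            - -enable-prometheus-metrics  # turn on the /metrics endpoint
```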
And of course, we can’t forget about security. Chapter 4 addresses several mechanisms for protecting your apps, including centralized authentication on the Ingress controller, integration with third-party SSO solutions, and F5 NGINX App Protect WAF policies for preventing advanced attacks and data exfiltration methods.
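As a taste of the centralized-authentication approach, here is a hedged sketch of a JWT Policy enforced at the Ingress layer (a feature that requires NGINX Plus; the realm and Secret names are hypothetical):

```yaml
# Hypothetical sketch: require a valid JWT before traffic
# reaches the application (NGINX Plus feature).
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: jwt-policy
spec:
  jwt:
    realm: MyApp                # hypothetical realm name
    secret: jwt-secret          # Secret holding the JWK set
    token: $http_authorization  # where to read the token from
```

Attaching such a policy to a VirtualServer moves authentication out of each microservice and into the Ingress controller, the pattern Chapter 4 develops in detail.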
I’d like to thank my collaborators on this eBook: Jenn Gile for project conception and management, Sandra Kennedy for the cover design, Tony Mauro for editing, and Michael Weil for the layout and diagrams.
This is our first edition of this eBook and we welcome your input on important scenarios to include in future editions.
Amir Rawdat
Technical Marketing Engineer, F5 NGINX