At last November’s KubeCon + CloudNativeCon North America in Atlanta, the retirement of ingress-nginx was announced.
As mentioned in the original blog post, existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available. Still, this is a good time to consider the options moving forward, such as the Gateway API or another ingress controller.
We thought this would be a good moment to ask those who have been through the process to share their experiences, their choices and reasoning, and any other lessons learned.
CERN
Over the years we added multiple ingress controller options in our clusters. This covered gaps in functionality across different controllers, which was especially relevant in the early days when dealing with features like SSL passthrough or integration with cert-manager. Because these configurations rely on controller-specific annotations, they also lock deployments in to a particular controller, even when a deployment does not depend on those specific features.
As of the end of 2025 ~60% of deployments rely on ingress-nginx as an ingress controller.
Recommendations and challenges
Internally, we started tracking the remaining deployments relying on ingress-nginx. After a first dissemination campaign we will follow the remaining cases more closely and work with those specific deployments to get them migrated as soon as possible.
In our campaign we reinforce the following recommendations to cluster owners:
Option 1: Move to the Gateway API
As the modern replacement for Ingress, this is the recommended option for our internal users.
During 2025 we moved to Cilium as our default CNI plugin, which brings built-in support for the Gateway API. The difficulty of moving to the new API depends on the deployment:
- For our own services and components, we control the Helm charts or other deployment templates responsible for setting up the Ingress resources. In these cases it is an easy transition, with only a few changes to the values configuration or a reshape of the templates.
- In many other cases, the Ingress resources come from chart dependencies. In some of these the Gateway API is already an option in the upstream charts; where it is not, we try to contribute it as a new feature.
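For templates we control, much of the Ingress-to-HTTPRoute move is mechanical. A minimal sketch of that translation is below, expressed as Python over the manifest dicts; the field names follow the real `networking.k8s.io/v1` and `gateway.networking.k8s.io/v1` schemas, while the default Gateway name `main-gateway` is a placeholder for whatever Gateway the cluster actually exposes.

```python
# Mechanical Ingress -> HTTPRoute translation sketch. Covers host rules and
# Prefix/Exact path matches; ImplementationSpecific paths have no direct
# Gateway API equivalent and are rejected so they get reviewed by hand.
PATH_TYPE_MAP = {"Prefix": "PathPrefix", "Exact": "Exact"}

def ingress_to_httproute(ingress: dict, gateway_name: str = "main-gateway") -> dict:
    hostnames, rules = [], []
    for rule in ingress["spec"].get("rules", []):
        if "host" in rule:
            hostnames.append(rule["host"])
        for p in rule.get("http", {}).get("paths", []):
            path_type = PATH_TYPE_MAP.get(p.get("pathType", "Prefix"))
            if path_type is None:
                raise ValueError(f"no Gateway API equivalent for pathType {p['pathType']!r}")
            svc = p["backend"]["service"]
            rules.append({
                "matches": [{"path": {"type": path_type, "value": p["path"]}}],
                "backendRefs": [{"name": svc["name"], "port": svc["port"]["number"]}],
            })
    return {
        "apiVersion": "gateway.networking.k8s.io/v1",
        "kind": "HTTPRoute",
        "metadata": {"name": ingress["metadata"]["name"]},
        "spec": {
            "parentRefs": [{"name": gateway_name}],
            "hostnames": hostnames,
            "rules": rules,
        },
    }
```

Anything expressed through annotations rather than the Ingress spec (redirects, body sizes, and so on) falls outside this translation and needs a per-controller review.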
Option 2: Move to a different Ingress Controller
This can be a better option for deployments whose dependencies do not yet provide the configuration needed to rely on the Gateway API, since validating those upstream changes can take some time.
While waiting for those developments to be merged and released, users have the option to move to a different Ingress controller. In our deployments the recommendation is to move back to Traefik, and we provide a mapping for the most popular annotations.
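The kind of annotation mapping described above can be sketched as a simple lookup. The annotation keys on the left are real ingress-nginx annotations; the right-hand notes name the Traefik middleware kinds that typically cover the same behaviour, though this table is illustrative and the actual middleware configuration still has to be written by hand.

```python
# Illustrative map from common ingress-nginx annotations to the Traefik
# mechanism that usually replaces them. Extend for the annotations actually
# present in your clusters.
NGINX_TO_TRAEFIK = {
    "nginx.ingress.kubernetes.io/ssl-redirect": "RedirectScheme middleware (https)",
    "nginx.ingress.kubernetes.io/proxy-body-size": "Buffering middleware (maxRequestBodyBytes)",
    "nginx.ingress.kubernetes.io/rewrite-target": "ReplacePathRegex middleware",
    "nginx.ingress.kubernetes.io/whitelist-source-range": "IPAllowList middleware",
}

def report_migration_work(ingress: dict) -> list[str]:
    """List the controller-specific annotations on one Ingress manifest and
    the Traefik equivalent each one would map to."""
    notes = []
    for key in ingress.get("metadata", {}).get("annotations", {}):
        if key.startswith("nginx.ingress.kubernetes.io/"):
            target = NGINX_TO_TRAEFIK.get(key, "no known equivalent - review manually")
            notes.append(f"{key} -> {target}")
    return notes
```

Running a report like this against every Ingress before the switch turns the migration into a checklist instead of a surprise.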
Lessons learned
While less common than in the early days of cloud native, events like this, which force a move away from a particular solution or project, should be expected. In the past we have moved twice to a different CNI provider following new requirements or changes in the maturity of the upstream projects we had originally selected.
Three main points come out of these past experiences that are also valid in the ingress-nginx case:
- When certain configurations add a lock-in to a specific plugin or provider, ensure this is properly documented and understood internally, both in the platform and end user teams. The requirement to use annotations in Ingress configurations for things such as SSL redirection or tuning of proxy body sizes and buffering is a good example. Documenting those will prevent surprises down the road.
- Appropriate review and tracking of each component in the stack is essential, in particular for sub-projects of graduated CNCF projects. As an example, ingress-nginx lives under the Kubernetes GitHub organization but does not have the maturity level expected of the main project, as its recent retirement made obvious.
- Ensure some effort remains available to deal with events like this. Relying on open, community-backed projects such as those in the CNCF brings a huge amount of benefits, but for components with less clear or lower maturity levels, ensure there is enough slack to handle unexpected events.
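The first lesson, documenting annotation-based lock-in, can be made concrete with a small audit over the output of `kubectl get ingress -A -o json`. The sketch below tallies controller-specific annotations per controller; the prefix set shown is illustrative and should be extended for the controllers actually deployed.

```python
from collections import Counter

# Controller-specific annotation prefixes; extend for the controllers in use.
CONTROLLER_PREFIXES = {
    "nginx.ingress.kubernetes.io/": "ingress-nginx",
    "traefik.ingress.kubernetes.io/": "Traefik",
}

def lock_in_report(ingress_list: dict) -> Counter:
    """Tally controller-specific annotations across an Ingress list, e.g.
    the parsed JSON from `kubectl get ingress -A -o json`."""
    counts = Counter()
    for item in ingress_list.get("items", []):
        for key in item.get("metadata", {}).get("annotations", {}):
            for prefix, controller in CONTROLLER_PREFIXES.items():
                if key.startswith(prefix):
                    counts[controller] += 1
    return counts
```

A report like this, run periodically, is what makes a "first dissemination campaign" possible: you know exactly which deployments still carry a lock-in before the upstream project forces the question.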
Boeing
Infrastructure complexity and regulated environments
Boeing’s Kubernetes footprint is defined by its diversity, requiring a strategy that accounts for several distinct operating environments:
- Public cloud: Leveraging the scalability of providers like AWS or Azure for corporate applications.
- On-premise: High-performance clusters residing in Boeing-managed data centers for sensitive IP and engineering workloads.
- Disconnected/air-gapped: Specialized environments—often in defense or manufacturing—that lack persistent internet connectivity and require robust, pre-packaged container images and local registries.
- Customer-deliverable: Clusters built for external clients that must adhere to specific security protocols and infrastructure constraints, often overriding Boeing’s standard internal defaults.
Strategic evolution to Service Mesh
The retirement of the NGINX Ingress controller acted as a catalyst for a broader architectural evolution. Several of Boeing’s flagship Kubernetes instances had already begun integrating the Istio service mesh to manage complex traffic routing, security, and observability. For these teams, Istio provides a more robust framework than a traditional ingress-only solution.
Standardized selection framework
For teams still in the discovery phase, the selection process is grounded in industry-standard due diligence. Boeing teams are using Cloud Native Computing Foundation (CNCF) frameworks, specifically the CNCF Landscape and CNCF maturity levels, to ensure chosen technologies have the community backing and security audits required for long-term aerospace lifecycles.
Large tech company
NGINX has been used for many years as an ingress controller in most enterprise organizations. Over the last decade, as part of the transformation to cloud-native, Kubernetes-based infrastructure, ingress-nginx provided the bridge for networking, supporting low-latency connections at scale and even bi-directional mTLS connections for zero-trust configurations. With the imminent retirement of ingress-nginx in March 2026, an effort has been ongoing to reduce or remove this infrastructure dependency and lower the risk of operating cloud-native systems. Two choices have been considered viable options:
1) using Traefik because of its configuration compatibility, or
2) using Envoy Gateway, which leverages the Kubernetes Gateway API and provides an extensible architecture with role-based support.
The choice for some of the platform teams has been Envoy Gateway, which future-proofs cloud-native infrastructure architectures with support for standardized networking, observability, and authentication.
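The role-based support mentioned above comes from the Gateway API's split between the Gateway, owned by the platform team, and the HTTPRoutes that application teams attach to it. A minimal sketch of the platform-side resource, expressed as a Python dict for consistency with the examples above: the `gatewayClassName` value `eg` follows the Envoy Gateway quickstart default and is an assumption about the local installation.

```python
# Platform-team side of the Gateway API role split: a single Gateway that
# application teams target from their own HTTPRoutes via parentRefs.
def make_gateway(name: str, listener_port: int = 80) -> dict:
    return {
        "apiVersion": "gateway.networking.k8s.io/v1",
        "kind": "Gateway",
        "metadata": {"name": name},
        "spec": {
            # "eg" is the GatewayClass registered by the Envoy Gateway
            # quickstart; your installation may use a different name.
            "gatewayClassName": "eg",
            "listeners": [{"name": "http", "protocol": "HTTP", "port": listener_port}],
        },
    }
```

Because routes reference the Gateway by name rather than by controller-specific annotations, application manifests stay unchanged if the platform team later swaps the Gateway implementation.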