If you've been living under a rock, or maybe just busy with real life, you might've missed the recent news that the fan favourite ingress-nginx is being deprecated.

As someone who uses this software extensively, both personally and professionally, this is a hard pill to swallow, but it's also a perfect opportunity to prioritise learning the Gateway API. After all, if you're going to migrate anyway, you might as well take the leap sooner rather than later.

What is it?

The Gateway API is Kubernetes’ next-generation way to control traffic into your clusters from outside. Think of it as Ingress 2.0, but with way more power and flexibility. It provides a vendor-agnostic, standardized way for traffic to enter your cluster while giving you much more granular control over routing, security, and traffic management.

Unlike the original Ingress resource (which was more of a suggestion than a specification), the Gateway API is a proper standard that multiple implementations can follow. This means you’re not locked into a single vendor’s interpretation of how things should work.

What problem does this solve?

The Ingress resource has a few key limitations that can make your life difficult:

Limited expressiveness: Want to do something slightly complex? Good luck. Ingress resources are pretty basic: they can route traffic based on hostname and path, but that's about it.

Vendor lock-in through annotations: They’re the bane of portability. What works with ingress-nginx might not work with Traefik, and vice versa. You end up with manifests that are tightly coupled to your ingress controller of choice.

Role separation: With Ingress, you typically need a high level of permissions to create routes. There's no separation of concerns, which means you may need to give developers more access than you'd like.

Limited traffic management: Want session affinity? Rate limiting? Advanced load balancing? You’re back to those vendor-specific annotations again.

The Gateway API addresses all of these by providing a richer, more expressive API that's consistent across implementations. Plus, it introduces role-based access control at the API level.

How does it work?

The Gateway API introduces several new resource types that work together:

  • GatewayClass: Defines which controller implementation you want to use (like choosing between ingress-nginx and Traefik, but at the API level)
  • Gateway: The actual entry point into your cluster; think of it as the load balancer or reverse proxy
  • HTTPRoute: The routing rules that attach to a Gateway (this is the replacement for Ingress)
  • BackendTrafficPolicy: Advanced traffic management like load balancing, timeouts, and retries
  • SecurityPolicy: Security controls like IP whitelisting, authentication, and rate limiting

Worth noting: the first three are core Gateway API resources, while BackendTrafficPolicy and SecurityPolicy are Envoy Gateway extensions that follow the Gateway API's policy-attachment pattern. Other implementations expose equivalent capabilities under their own resource names.

The beauty of this design is that it separates concerns. Infrastructure teams can manage the Gateway (the entry point), while application teams can manage their HTTPRoutes (the routing rules) without stepping on each other’s toes.
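As a concrete sketch of that separation, a cluster operator can restrict which namespaces may attach routes to a listener. The label name below is purely illustrative:

```yaml
# Hypothetical listener snippet: only namespaces labelled
# "gateway-access: allowed" may attach HTTPRoutes to this listener
listeners:
- name: https
  protocol: HTTPS
  port: 443
  allowedRoutes:
    namespaces:
      from: Selector
      selector:
        matchLabels:
          gateway-access: allowed
```

With this in place, application teams self-serve their HTTPRoutes from labelled namespaces, while the Gateway itself stays under infrastructure control.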

My implementation: Envoy Gateway

For my home lab setup, I chose Envoy Gateway as the implementation. Envoy is battle-tested, performant, and has excellent observability. Plus, the Envoy Gateway project makes it dead simple to deploy: it's just a Helm chart.

In this example we'll keep it simple and use a single Gateway that handles both internal (LAN) and external (WAN) traffic. In production, you'd likely want two separate Gateways backed by two load balancers to create the required segregation.

Installation

I’m running this on a Talos cluster with ArgoCD managing everything via GitOps, so the installation is handled through an ArgoCD Application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: envoy-gateway
  namespace: argocd
spec:
  project: default
  source:
    chart: gateway-helm
    repoURL: docker.io/envoyproxy
    targetRevision: v1.6.0
    helm:
      releaseName: eg
  
  destination:
    server: https://kubernetes.default.svc
    namespace: envoy-gateway-system
  
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true

This deploys Envoy Gateway v1.6.0 into the envoy-gateway-system namespace. The Helm chart handles all the heavy lifting: deploying the controller, setting up RBAC, and managing the lifecycle. It also installs the Gateway API CRDs, which at the time of writing are not installed in clusters by default.

Setting up the Gateway

Once Envoy Gateway is installed, you need to create three resources to get traffic flowing:

1. GatewayClass - This tells Kubernetes which controller should handle Gateway resources:

---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  

2. EnvoyProxy - This is the Envoy-specific configuration for how the gateway should be deployed. In my case, I’m using MetalLB for load balancing, so I specify a static IP:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: gateway-proxy
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: LoadBalancer
        annotations:
          metallb.universe.tf/loadBalancerIPs: "10.0.0.180"

3. Gateway - The actual entry point. This is where you define your listeners (ports, protocols, TLS settings):

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: envoy-gateway-system
spec:
  gatewayClassName: eg
  infrastructure:
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: gateway-proxy
  listeners:
  - name: https-terminate
    protocol: HTTPS
    port: 443
    hostname: "*.zaldre.com"
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: zaldre-wildcard-tls
    allowedRoutes:
      namespaces:
        from: All

This Gateway listens on port 443 for HTTPS traffic to any *.zaldre.com hostname, terminates TLS using a wildcard certificate, and allows routes from all namespaces.
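If you also want plain HTTP on port 80 redirected to HTTPS, one common pattern (sketched here, not part of my setup) is to add an HTTP listener to the Gateway and attach a catch-all HTTPRoute with a RequestRedirect filter:

```yaml
# Assumes an extra listener on the Gateway above, e.g.:
#   - name: http
#     protocol: HTTP
#     port: 80
#     hostname: "*.zaldre.com"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: https-redirect
  namespace: envoy-gateway-system
spec:
  parentRefs:
  - name: gateway
    sectionName: http  # attach only to the HTTP listener
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
```

Because the route has no backendRefs, every request on the HTTP listener gets a 301 to the HTTPS equivalent.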

Creating routes

Now for the fun part: creating routes for your applications. Each application gets its own HTTPRoute resource. Let's start with a simple example, my stats dashboard, which is accessible from both internal and external networks:

---
# HTTPRoute for external access (with external-dns annotation)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: stats
  namespace: stats
  annotations:
    external-dns.alpha.kubernetes.io/hostname: stats.zaldre.com
    external-dns.alpha.kubernetes.io/ttl: "300"
spec:
  parentRefs:
  - name: gateway
    namespace: envoy-gateway-system
  hostnames:
  - "stats.zaldre.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: stats
      port: 80
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: X-Forwarded-Proto
          value: https

---
# BackendTrafficPolicy for session affinity
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: stats-affinity
  namespace: stats
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: stats
  loadBalancer:
    type: ConsistentHash
    consistentHash:
      type: Cookie
      cookie:
        name: route
        ttl: 48h
        attributes:
          SameSite: Lax

This is a straightforward route:

  • Routes traffic for stats.zaldre.com to the stats service on port 80
  • Sets the X-Forwarded-Proto header so the backend knows it’s receiving HTTPS traffic
  • Uses a BackendTrafficPolicy for session affinity (cookie-based load balancing)
  • Notice there's no SecurityPolicy here: this route is open to both internal and external traffic

The annotations on the HTTPRoute are for external-dns integration, which automatically creates DNS records for the hostname.

Adding security policies

For services that should only be accessible from internal networks, you can add a SecurityPolicy. Here’s my Immich instance with IP whitelisting:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: immich
  namespace: immich
spec:
  parentRefs:
  - name: gateway
    namespace: envoy-gateway-system
    sectionName: https-terminate
  hostnames:
  - "immich.zaldre.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: immich-server
      port: 2283
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: X-Forwarded-Proto
          value: https

---
# SecurityPolicy for IP whitelisting
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: immich-ip-whitelist
  namespace: immich
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: immich
  authorization:
    rules:
    - action: Allow
      principal:
        clientCIDRs:
        - 10.0.0.0/8
        - 192.168.0.0/16
    defaultAction: Deny

---
# BackendTrafficPolicy for session affinity (cookie-based)
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: immich-affinity
  namespace: immich
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: immich
  loadBalancer:
    type: ConsistentHash
    consistentHash:
      type: Cookie
      cookie:
        name: route
        ttl: 48h
        attributes:
          SameSite: Lax


Let me break down what’s happening here:

The HTTPRoute defines the basic routing:

  • It attaches to the gateway in the envoy-gateway-system namespace
  • Routes traffic for immich.zaldre.com to the immich-server service on port 2283
  • Sets the X-Forwarded-Proto header so the backend knows it’s receiving HTTPS traffic

The SecurityPolicy adds IP whitelisting:

  • Only allows traffic from private IP ranges (10.0.0.0/8 and 192.168.0.0/16)
  • Denies everything else by default
  • This is way cleaner than the annotation-based approach you’d use with ingress-nginx

The BackendTrafficPolicy configures session affinity:

  • Uses cookie-based consistent hashing for load balancing
  • Ensures users stick to the same backend pod (important for stateful applications)
  • Sets a 48-hour cookie TTL with SameSite=Lax
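BackendTrafficPolicy also covers the timeouts and retries mentioned earlier. The sketch below uses field names as I remember them from the Envoy Gateway API, so verify them against your version's reference docs; in practice you'd likely fold these fields into the existing affinity policy, since multiple policies of the same kind targeting one route can conflict:

```yaml
# Sketch (assumed field names, check the Envoy Gateway API
# reference): request timeout plus simple retries on 503s
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: immich-resilience
  namespace: immich
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: immich
  timeout:
    http:
      requestTimeout: 30s
  retry:
    numRetries: 3
    retryOn:
      httpStatusCodes:
      - 503
```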

Routing to external services

One cool thing I discovered is that the Gateway API can route to services outside your cluster. I use this for my NAS:

---
# Service definition for external backend (no selector, so
# Kubernetes won't manage its endpoints automatically)
apiVersion: v1
kind: Service
metadata:
  name: nas-backend
  namespace: nas
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: http

---
# EndpointSlice pointing to external service outside Kubernetes
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: nas-backend
  namespace: nas
  labels:
    kubernetes.io/service-name: nas-backend
addressType: IPv4
endpoints:
- addresses:
  - "10.0.0.200"
ports:
- port: 8080
  protocol: TCP
  name: http

---
# HTTPRoute for nas.zaldre.com
# NOTE: This configuration terminates TLS at the gateway and forwards HTTP to the backend.
# The QNAP must be configured to accept HTTP connections (not just HTTPS) for this to work.
# To configure QNAP: Control Panel > System > General Settings > System Administration
# - Ensure HTTP port (typically 80 or 8080) is enabled
# - Disable "Force secure connection (HTTPS) only" if enabled
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nas
  namespace: nas
spec:
  parentRefs:
  - name: gateway
    namespace: envoy-gateway-system
    sectionName: https-terminate
  hostnames:
  - "nas.zaldre.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: nas-backend
      port: 8080
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: X-Forwarded-Proto
          value: https
        - name: X-Forwarded-Host
          value: nas.zaldre.com

This creates a Service without a selector, so Kubernetes doesn't create endpoints for it automatically; the manually managed EndpointSlice then points it at an external IP address (10.0.0.200). The HTTPRoute routes to this Service just like any other. The gateway terminates TLS and forwards plain HTTP to the backend, which is perfect for devices that don't handle TLS termination well.
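As an aside, Envoy Gateway also ships its own Backend extension resource for this use case, which can replace the Service/EndpointSlice pair. The sketch below is an assumption from memory (the field names may differ between versions, and the Backend API may need to be enabled in the Envoy Gateway configuration), so treat it as a pointer to the docs rather than a drop-in replacement:

```yaml
# Assumed sketch of Envoy Gateway's Backend extension resource;
# the HTTPRoute's backendRefs would then reference it with
# group: gateway.envoyproxy.io and kind: Backend
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: nas-backend
  namespace: nas
spec:
  endpoints:
  - ip:
      address: 10.0.0.200
      port: 8080
```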

Migration experience

Migrating from ingress-nginx was surprisingly straightforward. The main steps were:

  1. Install Envoy Gateway - One Helm chart, done

  2. Create the Gateway resources - Three YAML files (GatewayClass, EnvoyProxy, Gateway)

  3. Convert Ingress to HTTPRoute - For each application, create an HTTPRoute. The mapping is pretty direct:

    • spec.rules[].host → spec.hostnames[]
    • spec.rules[].http.paths[] → spec.rules[].matches[]
    • spec.rules[].http.paths[].backend → spec.rules[].backendRefs[]
  4. Add policies - This is where it gets interesting. Things that required annotations in ingress-nginx (like IP whitelisting) are now proper API resources (SecurityPolicy). This makes them more discoverable, testable, and maintainable.

  5. Test and switch over - I ran both ingress-nginx and Envoy Gateway in parallel for a bit, routing different hostnames to each, then gradually migrated everything over.
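To make the mapping in step 3 concrete, here's an illustrative before/after (the Ingress manifest is a hypothetical reconstruction of what the stats route might have looked like under ingress-nginx):

```yaml
# Before: a typical ingress-nginx manifest (illustrative)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stats
  namespace: stats
spec:
  ingressClassName: nginx
  rules:
  - host: stats.zaldre.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stats
            port:
              number: 80
---
# After: the equivalent HTTPRoute attached to the shared Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: stats
  namespace: stats
spec:
  parentRefs:
  - name: gateway
    namespace: envoy-gateway-system
  hostnames:
  - "stats.zaldre.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: stats
      port: 80
```

Note how the host moves up to spec.hostnames[], while the path and backend become matches[] and backendRefs[] within a rule.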

The biggest win? No more annotation soup. Everything is declarative, type-safe, and follows Kubernetes resource patterns. Plus, the separation of concerns means I can delegate route management to different teams without giving them access to the Gateway itself.

What about the downsides?

It’s not all sunshine and rainbows. The Gateway API is still evolving, and some features you might be used to from ingress-nginx aren’t available yet (or require different approaches). Also, if you’re heavily invested in ingress-nginx-specific annotations, you’ll need to rethink some of your configurations.

The spec is also far more complex than Ingress. It is by no means a panacea and may introduce more problems than it solves. If your organisation values simplicity (or you're operating in a smaller team), it may be worth looking at the Traefik ingress controller instead. They've recently published a blog post detailing a simpler migration strategy that maintains support for most of the ingress-nginx annotations.

That said, the deprecation of ingress-nginx means you’re going to have to migrate to SOMETHING eventually anyway. Better to do it now while you have time to plan and test, rather than in a panic when something breaks.


That's all for this post. Hope you found it useful. If there's anything you'd like clarified, or any thoughts or opinions to share, please do let me know.

zaldre@zaldre.com