CRZ-002

Public Kubernetes API server (EKS pre-prod)

Severity: High · CVSS 3.1: 8.2 · Asset: track.pre.corezoid.com

Summary

The public hostname track.pre.corezoid.com serves a TLS certificate whose subject is CN=kube-apiserver and whose Subject Alternative Names disclose the EKS cluster endpoint and internal Kubernetes hostnames (full SAN list in Reproduction, Step 1).

The certificate is issued by CN=kubernetes, i.e. it is signed by the cluster's internal Kubernetes CA rather than a publicly trusted Amazon CA. HTTPS returns 401 Unauthorized, which strongly indicates a live Kubernetes API server is reachable at this endpoint. The presence of kubernetes.default.svc.cluster.local in the SAN is a red flag: that hostname is internal-only and should never appear on a certificate served to the public internet.

Reproduction

Step 1 — confirm cert belongs to Kubernetes API server:

$ openssl s_client -connect track.pre.corezoid.com:443 -servername track.pre.corezoid.com 2>/dev/null </dev/null | openssl x509 -noout -subject -issuer -ext subjectAltName
subject=CN = kube-apiserver
issuer=CN = kubernetes
X509v3 Subject Alternative Name:
    DNS:1f1adac291bdea236dc6f218e5a247ee.sk1.eu-west-1.eks.amazonaws.com,
    DNS:ip-10-0-163-160.eu-west-1.compute.internal,
    DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc,
    DNS:kubernetes.default.svc.cluster.local
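
The SAN list above can also be checked programmatically. A minimal sketch (the `SAN_LINE` constant reproduces the openssl output shown above; the function and variable names are illustrative, not part of any tooling):

```python
# Parse the DNS SANs from an openssl subjectAltName line and flag
# internal-only Kubernetes names plus the EKS cluster endpoint.
import re

# Reproduced verbatim from the openssl output above.
SAN_LINE = (
    "DNS:1f1adac291bdea236dc6f218e5a247ee.sk1.eu-west-1.eks.amazonaws.com, "
    "DNS:ip-10-0-163-160.eu-west-1.compute.internal, "
    "DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, "
    "DNS:kubernetes.default.svc.cluster.local"
)

def parse_sans(san_line: str) -> list[str]:
    """Extract DNS names from an openssl subjectAltName line."""
    return re.findall(r"DNS:([^,\s]+)", san_line)

sans = parse_sans(SAN_LINE)
internal = [h for h in sans
            if h.endswith(".cluster.local") or h.endswith(".compute.internal")]
eks = [h for h in sans if h.endswith(".eks.amazonaws.com")]
print("internal-only names:", internal)
print("EKS endpoint:", eks)
```

Any hostname ending in .cluster.local or .compute.internal on a publicly served certificate is a strong indicator of control-plane exposure.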

Step 2 — confirm unauth endpoints are reachable:

$ curl -sk -o /dev/null -w '%{http_code}\n' https://track.pre.corezoid.com/healthz
200
$ curl -sk -o /dev/null -w '%{http_code}\n' https://track.pre.corezoid.com/readyz
200
$ curl -sk -o /dev/null -w '%{http_code}\n' https://track.pre.corezoid.com/livez
200

Step 3 — confirm authenticated endpoints return k8s-shaped 401:

$ curl -sk https://track.pre.corezoid.com/api
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

This is the canonical response shape of kube-apiserver. Combined with the cert SAN, there is no remaining ambiguity — this is the EKS pre-prod control plane on port 443.
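
That shape can be fingerprinted mechanically. A sketch (the `BODY` constant reproduces the response above; the checker function name is illustrative):

```python
# Fingerprint a kube-apiserver 401 by the shape of its Status object.
import json

# Reproduced verbatim from the /api response above.
BODY = """{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}"""

def looks_like_kube_apiserver_401(body: str) -> bool:
    """True if the body matches the canonical kube-apiserver 401 Status."""
    try:
        obj = json.loads(body)
    except ValueError:
        return False
    return (
        obj.get("kind") == "Status"
        and obj.get("apiVersion") == "v1"
        and obj.get("code") == 401
        and obj.get("reason") == "Unauthorized"
    )

print(looks_like_kube_apiserver_401(BODY))  # True
```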

Confirmed facts

Endpoint   Status                    Auth required
/          401                       yes
/api       401 (k8s Status object)   yes
/apis      401 (k8s Status object)   yes
/version   401                       yes
/healthz   200                       no
/readyz    200                       no
/livez     200                       no
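
The matrix above can be regenerated with a short probe loop. A sketch (the HTTP fetch is injected so the classification logic can be exercised offline; against the live host one would pass a urllib- or curl-based fetcher — all names here are illustrative):

```python
# Classify endpoints by whether the API server demanded auth (HTTP 401).
from typing import Callable

ENDPOINTS = ["/", "/api", "/apis", "/version", "/healthz", "/readyz", "/livez"]

def probe(fetch: Callable[[str], int], endpoints=ENDPOINTS) -> dict[str, bool]:
    """Return {endpoint: auth_required} based on the HTTP status code."""
    return {ep: fetch(ep) == 401 for ep in endpoints}

# Statuses observed in the table above, used as a stand-in fetcher.
observed = {"/": 401, "/api": 401, "/apis": 401, "/version": 401,
            "/healthz": 200, "/readyz": 200, "/livez": 200}
result = probe(observed.get)
print(result)
```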

EKS cluster ID leaked: 1f1adac291bdea236dc6f218e5a247ee

Evidence

Impact

Confirmed: The EKS pre-prod Kubernetes control plane is reachable from the public internet.

Even with authentication enforced, this is a High-severity issue because:

  1. Direct target for credential-stuffing / token-theft attacks. Any leaked kubeconfig, service account token, or IAM credential with EKS access can be used from anywhere in the world — there is no network-layer defense in depth.
  2. Attack surface for 0-day Kubernetes CVEs. Kubernetes has had multiple serious CVEs across its components (CVE-2020-8558, CVE-2021-25741, CVE-2024-10220 in the kubelet's gitRepo volume handling, among others). A public control plane is one Kubernetes CVE away from full compromise.
  3. Kubernetes version discoverable. Even an authenticated /version leaks the server version, which lets an attacker correlate with known CVEs.
  4. Pre-prod typically has weaker secrets. Developer tokens, forgotten service accounts, and test credentials often have admin-like RBAC binds in pre-prod that would never be tolerated in prod.
  5. Lateral movement pivot. If an attacker compromises anything in the cluster (a vulnerable pod, exposed dashboard, leaked Helm chart with creds), they can immediately manage the cluster from outside.
  6. Cluster fingerprinting. The leaked EKS cluster ID + internal hostnames help plan further attacks (e.g., AWS IAM enumeration targeting this cluster).

Remediation

  1. Immediate: Make the EKS API server endpoint private-only (in the cluster's VPC configuration, set endpoint access to private, or private plus an allowlist). Do not expose kube-apiserver via any public DNS record.
  2. If public access is required (CI/CD from outside VPC): restrict to known CIDR ranges via the EKS cluster publicAccessCidrs setting.
  3. Verify that anonymous access is limited: on EKS the API server flags are AWS-managed, so audit RBAC bindings for system:anonymous and system:unauthenticated instead. Run the CIS Kubernetes Benchmark (e.g. with kube-bench) against the cluster.
  4. Audit why the track.pre.corezoid.com DNS record points at an EKS control plane; this is likely a misconfigured Route53 record pointing at the wrong ELB.

References