Public Kubernetes API server (EKS pre-prod)

- Severity: High
- CVSS 3.1: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:L → 8.2
- CWE: CWE-200 (Exposure of Sensitive Information), CWE-284 (Improper Access Control)
- Asset: track.pre.corezoid.com → 63.32.68.104:443
- Discovered: 2026-04-26
- Status: Confirmed — live, internet-reachable Kubernetes API server. Authentication is enforced on `/api` and `/apis` (401), but `/healthz`, `/readyz`, and `/livez` return 200 OK unauthenticated. The API server should not be publicly reachable at all.
- Reporter: Claude (CVP pentest)
Summary
The public hostname `track.pre.corezoid.com` serves a TLS certificate whose subject is `CN=kube-apiserver` and whose Subject Alternative Names disclose the EKS cluster endpoint and internal Kubernetes hostnames:

- `1f1adac291bdea236dc6f218e5a247ee.sk1.eu-west-1.eks.amazonaws.com`
- `ip-10-0-163-160.eu-west-1.compute.internal`
- `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster.local`

The certificate is issued by `CN=kubernetes`, meaning this is a self-signed EKS control-plane cert, not a public Amazon cert. HTTPS returns 401 Unauthorized, which strongly indicates a live Kubernetes API server is reachable at this endpoint. The presence of `kubernetes.default.svc.cluster.local` in the SAN is a red flag: that hostname is internal-only.
Reproduction
Step 1 — confirm the cert belongs to a Kubernetes API server:

```
$ openssl s_client -connect track.pre.corezoid.com:443 -servername track.pre.corezoid.com 2>/dev/null </dev/null | openssl x509 -noout -subject -issuer -ext subjectAltName
subject=CN = kube-apiserver
issuer=CN = kubernetes
X509v3 Subject Alternative Name:
    DNS:1f1adac291bdea236dc6f218e5a247ee.sk1.eu-west-1.eks.amazonaws.com,
    DNS:ip-10-0-163-160.eu-west-1.compute.internal,
    DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc,
    DNS:kubernetes.default.svc.cluster.local
```
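For bulk recon across many hosts, the SAN line from this openssl output can be split mechanically. A minimal sketch — the helper name is mine, not part of the assessment tooling:

```python
import re

def parse_san_dns(openssl_output: str) -> list[str]:
    """Extract DNS: entries from `openssl x509 -ext subjectAltName` text output."""
    return re.findall(r"DNS:([^,\s]+)", openssl_output)
```

Feeding it the output above yields the six SAN hostnames; any result containing `kubernetes.default.svc.cluster.local` is the same internal-only red flag noted in the Summary.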
Step 2 — confirm the unauthenticated endpoints are reachable:

```
$ curl -sk -o /dev/null -w '%{http_code}\n' https://track.pre.corezoid.com/healthz
200
$ curl -sk -o /dev/null -w '%{http_code}\n' https://track.pre.corezoid.com/readyz
200
$ curl -sk -o /dev/null -w '%{http_code}\n' https://track.pre.corezoid.com/livez
200
```
Step 3 — confirm authenticated endpoints return a Kubernetes-shaped 401:

```
$ curl -sk https://track.pre.corezoid.com/api
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
```
This is the canonical response shape of kube-apiserver.
Combined with the cert SAN, there is no remaining
ambiguity — this is the EKS pre-prod control plane on port
443.
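This fingerprint check can be automated by testing a 401 body for the canonical `Status` shape. A sketch, assuming you feed it the raw body from `curl -sk .../api` (the function name is mine):

```python
import json

def is_kube_apiserver_401(body: str) -> bool:
    """True if an HTTP 401 body matches kube-apiserver's canonical Status object."""
    try:
        obj = json.loads(body)
    except ValueError:
        return False
    return (
        obj.get("kind") == "Status"
        and obj.get("apiVersion") == "v1"
        and obj.get("status") == "Failure"
        and obj.get("code") == 401
    )
```

A generic reverse proxy or WAF returning its own 401 page will fail this check, which is what makes the JSON shape useful as a discriminator.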
Confirmed facts

| Endpoint | Status | Auth required |
|---|---|---|
| `/` | 401 | yes |
| `/api` | 401 (k8s Status object) | yes |
| `/apis` | 401 (k8s Status object) | yes |
| `/version` | 401 | yes |
| `/healthz` | 200 | no |
| `/readyz` | 200 | no |
| `/livez` | 200 | no |

EKS cluster ID leaked: `1f1adac291bdea236dc6f218e5a247ee`
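The endpoint table can be regenerated with a short probe script. A sketch, assuming the asset hostname from this finding; the helpers are mine and verification is disabled only because the control-plane cert is self-signed:

```python
import ssl
import urllib.error
import urllib.request

# Paths checked in this finding, in the table's order.
ENDPOINTS = ["/", "/api", "/apis", "/version", "/healthz", "/readyz", "/livez"]

def table_row(path: str, status: int) -> str:
    """Format one markdown row in the report's table layout (hypothetical helper)."""
    auth = "yes" if status == 401 else "no"
    return f"| `{path}` | {status} | {auth} |"

def probe(host: str) -> None:
    # Skip cert verification, mirroring `curl -k` against the self-signed cert.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    for path in ENDPOINTS:
        try:
            with urllib.request.urlopen(f"https://{host}{path}", context=ctx, timeout=5) as r:
                code = r.status
        except urllib.error.HTTPError as e:
            code = e.code  # 401s arrive as HTTPError; keep the status code
        print(table_row(path, code))

# probe("track.pre.corezoid.com")  # run manually; needs network reachability
```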
Evidence
- `recon/raw/httpx-results.jsonl` (entry for `track.pre.corezoid.com`)
- Full TLS cert dump pending: `evidence/CRZ-002-tls-cert.pem`
Impact
Confirmed: the EKS pre-prod Kubernetes control plane is reachable from the public internet. Even with authentication enforced, this is a High-severity issue because:

- Direct target for credential-stuffing / token-theft attacks. Any leaked kubeconfig, service-account token, or IAM credential with EKS access can be used from anywhere in the world; there is no network-layer defense in depth.
- Attack surface for 0-day Kubernetes API server CVEs. `kube-apiserver` has had multiple critical CVEs (CVE-2020-8558, CVE-2021-25741, CVE-2024-10220 path traversal in kubelet, etc.). A public control plane is one Kubernetes CVE away from full compromise.
- Kubernetes version discoverable. `/version` requires authentication here, but any valid credential reveals the exact server version, which lets an attacker correlate it with known CVEs.
- Pre-prod typically has weaker secrets. Developer tokens, forgotten service accounts, and test credentials often carry admin-like RBAC bindings in pre-prod that would never be tolerated in prod.
- Lateral-movement pivot. If an attacker compromises anything in the cluster (a vulnerable pod, an exposed dashboard, a leaked Helm chart with creds), they can immediately manage the cluster from outside.
- Cluster fingerprinting. The leaked EKS cluster ID and internal hostnames help plan further attacks (e.g., AWS IAM enumeration targeting this cluster).
Remediation
- Immediate: put the EKS API server behind a private-only endpoint (EKS endpoint access: private, or private + allowlisted; confirm the effective endpoint with `kubectl cluster-info`). Do not expose `kube-apiserver` via any public DNS record.
- If public access is required (e.g. CI/CD running outside the VPC): restrict it to known CIDR ranges via the EKS `publicAccessCidrs` setting.
- Verify `--anonymous-auth=false` on the API server, and run the CIS Kubernetes Benchmark against the cluster.
- Audit why the `track.pre.corezoid.com` DNS record points at an EKS control plane; this is likely a misconfigured Route 53 record pointing at the wrong ELB.
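The first two remediation items map onto a single EKS API call. A sketch using the AWS CLI; the cluster name `pre-prod` is a placeholder and the allowlist CIDR is illustrative only:

```shell
# Option A (preferred): private-only endpoint, no public access at all
aws eks update-cluster-config \
  --region eu-west-1 \
  --name pre-prod \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

# Option B: keep public access but allowlist known CI/CD egress CIDRs
aws eks update-cluster-config \
  --region eu-west-1 \
  --name pre-prod \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"
```

Note that the endpoint change rolls out asynchronously; `aws eks describe-cluster` shows the cluster status while the update is in progress.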
References
- CWE-200: Exposure of Sensitive Information
- CWE-284: Improper Access Control
- CIS Kubernetes Benchmark v1.23 — §1.2.1 (anonymous-auth), §3.2.1 (audit logging)
- Kubernetes docs: Controlling access to the API
- AWS docs: Amazon EKS cluster endpoint access control
Related
`track.dev.corezoid.com` and `track-ai.dev.corezoid.com` resolve to a different IP (34.246.32.238); check those too for the same pattern.