How to fix AWS EKS access errors that occur when your IAM principal lacks permissions on the cluster, by adding the correct access entry and policies.
When working with Amazon EKS, you might encounter access errors even if you're using the correct IAM user and have successfully created the cluster. These errors can appear across the CLI (kubectl), CloudWatch, and the AWS Console. Common messages include:
- `error: You must be logged in to the server (the server has asked for the client to provide credentials)`
- `Identity is not mapped`
- `Your current IAM principal doesn't have access to Kubernetes objects on this cluster...`
These typically point to a missing access entry for your IAM principal in the EKS cluster's access configuration.
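You can confirm this by listing the cluster's existing access entries; if your principal's ARN isn't in the output, it has no entry. The cluster name and region below are placeholders:

```bash
# List the cluster's access entries; your principal's ARN should appear here.
aws eks list-access-entries --cluster-name my-cluster --region us-east-1
```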
Here’s how to fix it:
🛠️ Step-by-Step: Granting Access to Your IAM Principal
**1. Go to the AWS Console**
Navigate to the EKS > Clusters page and select your cluster.

**2. Open the Access tab**
Click the Access tab in the cluster details.
**3. Create an access entry**
Click Create access entry and select your IAM principal ARN. You can find your ARN with:

```bash
aws sts get-caller-identity
```

Look for the Arn field in the output.
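The output looks something like this (the account ID and user name below are placeholders):

```console
$ aws sts get-caller-identity
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/my-user"
}
```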
**4. Attach an access policy**
When creating the access entry, choose AmazonEKSAdminPolicy as the access policy.
**5. Add the cluster admin policy**
After the access entry is created, go back to the Access tab, click the access entry you just created, then click Add access policy and select AmazonEKSClusterAdminPolicy.
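If you prefer the command line, steps 3–5 can also be done with the EKS access-entry commands. A minimal sketch, assuming your cluster's authentication mode supports access entries; the cluster name and principal ARN are placeholders:

```bash
# Create the access entry for your IAM principal.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:user/my-user

# Associate the cluster-wide admin policy with the entry.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:user/my-user \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```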
**6. Verify with kubectl**
Once the policies are attached, run your kubectl commands again, for example as shown below. You should now have the required access to interact with your EKS cluster.
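For example (placeholder cluster name and region):

```bash
# Refresh your kubeconfig for the cluster, then test access.
aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl get nodes
```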
✅ Summary
EKS clusters require explicit access entries before an IAM principal can manage Kubernetes resources. Even the IAM user that created the cluster may not automatically receive full permissions, depending on the cluster's access configuration. Setting up the right access entries and policies resolves this.
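If you do this often, the steps above can be wrapped in a small reusable script. This is a minimal sketch, not a hardened tool: it assumes AWS CLI v2 and a cluster whose authentication mode supports access entries (API or API_AND_CONFIG_MAP), and it will fail if an access entry already exists for the principal.

```bash
#!/usr/bin/env bash
# Grant an IAM principal cluster-admin access to an EKS cluster via access entries.
set -euo pipefail

CLUSTER_NAME="${1:?usage: $0 <cluster-name> [principal-arn]}"
# Default to the caller's identity. Note: for an assumed role this returns an
# STS assumed-role ARN; pass the underlying IAM role ARN explicitly instead.
PRINCIPAL_ARN="${2:-$(aws sts get-caller-identity --query Arn --output text)}"

# Create the access entry (fails if one already exists for this principal).
aws eks create-access-entry \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$PRINCIPAL_ARN"

# Grant cluster-wide admin via the managed EKS access policy.
aws eks associate-access-policy \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$PRINCIPAL_ARN" \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster

# Show the entry's associated policies to confirm.
aws eks list-associated-access-policies \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$PRINCIPAL_ARN"
```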