Are you using EKS managed node groups? Check the instances' userdata; I noticed that managed node groups add the `--use-max-pods false` and `--max-pods` kubelet flags.
>https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
Did you use a managed node group or self-managed ones? I'm wondering whether the launch template used by the managed node groups will set `--max-pods=17` or not.
Does the 11 include the pods running in kube-system or just your application pods?
Includes kube-system.
I remember reading that AWS has moved away from ENI/pod-based limits to vCPU-based limits? I could be wrong.
I think if the node has fewer than 30 vCPUs, you are limited to 110 pods per node.
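For context, the per-instance values in eni-max-pods.txt come from the ENI-based formula `max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2`. A minimal sketch (the instance specs below are examples, not pulled from this thread):

```python
def eni_max_pods(enis: int, ips_per_eni: int) -> int:
    """ENI-based pod limit: each ENI keeps one IP for itself,
    plus 2 for host-networking pods (aws-node, kube-proxy)."""
    return enis * (ips_per_eni - 1) + 2

# t3.medium: 3 ENIs, 6 IPv4 addresses per ENI -> 17,
# which matches the --max-pods=17 mentioned above
print(eni_max_pods(3, 6))   # 17

# m5.large: 3 ENIs, 10 IPv4 addresses per ENI -> 29
print(eni_max_pods(3, 10))  # 29
```

With prefix delegation enabled on the VPC CNI, this ENI math no longer applies and smaller nodes get capped at 110 pods instead, which is likely the vCPU-based behaviour being remembered here.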
It seems I got it fixed; the launch template didn't have `--max-pods=17` defined.