Kubernetes Deep Dive Room


Deployment won't use all 3 nodes.

Following along but in AWS, where you need to follow this procedure to get nodes: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html

I have my 3 nodes in my cluster. When I run `kubectl apply -f ping-deply.yml` I get three pods. However, instead of 1 pod on each node, I always get 2 pods on one of the nodes. I've shut down a single node, and I've shut down all three nodes and let them come back. I've tried deleting a single pod, all 3 pods, and the whole deployment, then starting it up again.

In every single case, I end up with 2 pods on a single node, which means there's a node not being used, and the two pods on the same node can't ping each other. I have not made any edits to ping-test.yml.

Any ideas? Is there any way to force it to use the empty node first? I thought that's what `replicas: 3` meant.
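(Editor's note: `replicas: 3` only tells the Deployment controller *how many* pods to run; it says nothing about *where* the scheduler places them. If you want to force even spreading across nodes, you can add a `topologySpreadConstraints` stanza to the pod template. A minimal sketch, assuming the pods carry the label `app: ping-test` — adjust the selector to match your manifest:)

```yaml
# Inside the Deployment's pod template (spec.template.spec).
# maxSkew: 1 means the pod count may differ by at most 1 between nodes;
# DoNotSchedule makes this a hard requirement rather than a preference.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname   # spread across individual nodes
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: ping-test                    # assumed label; match your template's labels
```

You can confirm placement afterwards with `kubectl get pods -o wide`, which shows which node each pod landed on.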

1 Answer

I went ahead and shut down the nodes I made, remade the cluster, reset the auth, deleted all the pods, services, etc., and started from scratch. Things are looking fine now. All I changed was the instance type, from t2.micro to t3.medium. Maybe it was a fluke!


I just hit the same thing on GCP. I have x.x.0.0/24, x.x.1.0/24, and x.x.2.0/24 networks, and each node exists on its own private /24 network. BUT, when I look at the pods, one of the nodes is empty. This is pretty odd, given the .yml file is supposed to put a pod on each node, I thought. Anyway, I guess I'll try recreating the cluster and hope I never run into this in production ¯\_(ツ)_/¯…
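(Editor's note: a Deployment never guarantees "one pod per node" — the scheduler just picks the nodes it scores best. If exactly one pod on every node is actually the goal, a DaemonSet is the object built for that. A minimal sketch; the name, label, and `busybox` image are placeholders, not from the course files:)

```yaml
apiVersion: apps/v1
kind: DaemonSet               # runs exactly one pod on every schedulable node
metadata:
  name: ping-test             # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: ping-test
  template:
    metadata:
      labels:
        app: ping-test
    spec:
      containers:
      - name: ping
        image: busybox        # placeholder image with ping available
        command: ["sleep", "infinity"]
```

With a DaemonSet there is no `replicas` field at all: the pod count automatically tracks the number of nodes as they join or leave the cluster.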
