Hi Mattias and fellow gurus,
I got most of this to work. One piece I'm having trouble with is SSHing from the frontend VM to the backend VM. I have defined firewall rules to lock down SSH so that it is allowed only from the frontend VMs, via the frontend-sa service account. The connection fails with:
Permission denied (publickey).
What I observe is that, initially, the authorized_keys file is present in the .ssh directory with keys added by Google. That's when I am able to SSH to the backend. After a few minutes, the authorized_keys file disappears and I am no longer able to SSH to the backend VM, which makes sense, because there is no authorized_keys file anymore. Once I kill the frontend SSH session and start a new one, I can SSH again, because the authorized keys show up again.
So the question is: why does authorized_keys disappear from the .ssh directory a few minutes after the SSH session is launched?
Mattias is currently very focused on finishing off a course.
Can you wait a couple more days?
Hello Yasir! I'm glad you're working through the labs and paying lots of attention to what's going on! 👍
For this particular situation, how are you initiating the SSH connections from the frontend instances to the backend ones? Are you doing straight ssh, or are you using gcloud for each connection? If it's the former (plain ssh), then you're more likely to trip over things, because Google is neither aware of nor enabling the connection you're making. 🙂
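(In case it helps: the keys you see Google adding to authorized_keys are managed from instance/project metadata by the guest agent running inside the VM, and keys pushed by the browser-based SSH client carry an expiry, so the agent cleans them out after a few minutes. One workaround, as a sketch, is to generate your own persistent key on the frontend and register it in metadata yourself. The key path, the username yasir, and the instance name backend-vm below are all illustrative assumptions, not lab values:)

```shell
# On the frontend VM: generate a persistent key pair
# (path and comment are illustrative assumptions).
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/frontend-key" -N "" -C "frontend-to-backend"

# Register the public key in the backend instance's metadata so the
# guest agent keeps it in authorized_keys instead of expiring it.
# Note: this replaces any existing ssh-keys value on that instance.
# (Username "yasir" and instance "backend-vm" are assumptions.)
#   gcloud compute instances add-metadata backend-vm \
#       --metadata ssh-keys="yasir:$(cat "$HOME/.ssh/frontend-key.pub")"

# Then connect with that key (the internal IP is a placeholder):
#   ssh -i "$HOME/.ssh/frontend-key" yasir@BACKEND_INTERNAL_IP
```

Alternatively, running gcloud compute ssh from the frontend VM takes care of the key plumbing for you, which is why that path tends to "just work."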
For the sake of the lab, though, you don't need to SSH from the frontend instances to the backend ones, although that is a valuable "bastion" or "jump box" setup. Instead, for the lab, the key thing is to make sure you can ping across the important data paths and you can SSH from the console or Cloud Shell directly to the backend instances.
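For those lab checks, the commands look something like the following. This is a sketch rather than the exact lab commands: the instance name, zone, and internal IP are placeholders you'd swap for your own.

```shell
# From Cloud Shell (or the console SSH button): SSH directly to the
# backend instance. Instance name and zone are assumptions.
gcloud compute ssh backend-vm --zone=us-central1-a

# From a frontend VM: confirm the data path by pinging the backend's
# internal IP (placeholder address shown).
ping -c 3 10.128.0.3
```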