Issue
I have searched many similar issues but can't seem to figure out the one I'm having. I have a Vagrantfile with which I set up 3 VMs. I add a public key to each VM so I can run Ansible against the boxes after vagrant up (I don't want to use the Ansible provisioner). I forward the SSH port of each box to a different host port.
I can vagrant ssh <server_name> onto each box successfully.
With the following:
ssh vagrant@192.168.56.2 -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@192.168.56.3 -p 2712 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.3 port 2712: Connection refused
ssh vagrant@192.168.56.4 -p 2713 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.4 port 2713: Connection refused
And
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2712 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2713 -i ~/.ssh/ansible <-- successful connection
Ansible can connect to the first one (vagrant@192.168.56.2) but not the other two. I can't figure out why it connects to one and not the others. Any ideas what I could be doing wrong?
The Ansible inventory:
{
  "all": {
    "hosts": {
      "kubemaster": {
        "ansible_host": "192.168.56.2",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2711
      },
      "kubenode01": {
        "ansible_host": "192.168.56.3",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2712
      },
      "kubenode02": {
        "ansible_host": "192.168.56.4",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2713
      }
    },
    "children": {},
    "vars": {}
  }
}
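For reference, a connectivity check against this inventory (assuming it is saved as inventory.json) would be:

ansible all -i inventory.json -m ping --private-key ~/.ssh/ansible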
The Vagrantfile:
# Define the number of master and worker nodes
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2

PRIV_IP_NW = "192.168.56."
MASTER_IP_START = 1
NODE_IP_START = 2

# Vagrant configuration
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # default box
  config.vm.box = "ubuntu/jammy64"

  # automatic box update checking
  config.vm.box_check_update = false

  # Provision master nodes
  (1..NUM_MASTER_NODE).each do |i|
    config.vm.define "kubemaster" do |node|
      # Name shown in the GUI
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubemaster"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubemaster"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{MASTER_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
      # argo and traefik access
      node.vm.network "forwarded_port", guest: 8080, host: "#{8080}"
      node.vm.network "forwarded_port", guest: 9000, host: "#{9000}"
      # synced folder for kubernetes setup yaml
      node.vm.synced_folder "sync_folder", "/vagrant_data", create: true, owner: "root", group: "root"
      node.vm.synced_folder ".", "/vagrant", disabled: true
      # setup the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end

  # Provision worker nodes
  (1..NUM_WORKER_NODE).each do |i|
    config.vm.define "kubenode0#{i}" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubenode0#{i}"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubenode0#{i}"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{NODE_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2711 + i}"
      # synced folder for kubernetes setup yaml
      node.vm.synced_folder ".", "/vagrant", disabled: true
      # setup the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end
end
Solution
Your Vagrantfile confirms what I suspected.

You define the port forwarding as follows:

node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"

That means port 22 of the guest is made reachable on the host under port 2710 + i. For your 3 VMs, from the host's point of view, this means:

192.168.2.1:22 -> localhost:2711
192.168.2.2:22 -> localhost:2712
192.168.2.3:22 -> localhost:2713
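If you want to verify what Vagrant actually forwards, the active mappings can be listed per VM (the VM names below are the ones defined in your Vagrantfile); this is purely a verification step:

vagrant port kubemaster
vagrant port kubenode01
vagrant port kubenode02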
As IP addresses for your VMs you have defined the range 192.168.2.0/24, but you try to access the range 192.168.56.0/24.

If a private IP address is defined (for your 1st node e.g. 192.168.2.2), Vagrant implements this on VirtualBox by giving the VM two network adapters:

- NAT: this gives the VM Internet access
- Host-Only: this gives the host access to the VM via the IP 192.168.2.2

For each /24 network, VirtualBox (and Vagrant) creates a separate VirtualBox Host-Only Ethernet Adapter, and the host is .1 on each of these networks. What this means for you: if you use an IP address from the 192.168.2.0/24 network, an adapter is created on your host that always gets the IP address 192.168.2.1/24, so the addresses 192.168.2.2 - 192.168.2.254 are available for your VMs. This means you have a collision between your master's IP address and your host!
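You can check which host-only adapters exist on your host, and which .1 addresses they own, with the VirtualBox CLI (again, only to verify the above):

VBoxManage list hostonlyifs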
But why does the access to your first VM work?

ssh vagrant@192.168.56.2 -p 2711 -i ~/.ssh/ansible <-- successful connection

That is relatively simple: the network 192.168.56.0/24 is the default Host-Only network under VirtualBox, so you probably have a VirtualBox Host-Only Ethernet Adapter with the address 192.168.56.1/24. Because you have defined a port forwarding in your Vagrantfile, a mapping of the 1st VM to localhost:2711 takes place. If you now access 192.168.56.1:2711, this is your own host, i.e. localhost, and the SSH of the 1st VM is mapped to port 2711
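In other words, assuming the default host-only adapter on your machine really is 192.168.56.1, these two commands end up at the same forwarded port on the host:

ssh vagrant@192.168.56.1 -p 2711 -i ~/.ssh/ansible
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible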
So what do you have to do now? Change the IP addresses of your VMs, e.g. use 192.168.2.11 - 192.168.2.13. The access to the VMs is then possible as follows:

Node         via guest IP       via localhost
kubemaster   192.168.2.11:22    localhost:2711
kubenode01   192.168.2.12:22    localhost:2712
kubenode02   192.168.2.13:22    localhost:2713

Note: if you want to access via the guest IP address, use port 22; if you want to access via localhost, use the port 2710+i that you defined.
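As a quick check after changing the IPs (kubemaster is used as an example; the user and key are the ones from your question), both access paths should then work:

ssh vagrant@192.168.2.11 -i ~/.ssh/ansible
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible

Remember to adjust ansible_host (and ansible_ssh_port, if you connect via the guest IP on port 22) in your inventory to match.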
Answered By - phanaz