Hi Colleagues/Uldis/EVE-NG Team,
I'm about to purchase a server with ESXi and was planning to install 20+ CSR1000v VMs. Afterwards, I will install EVE within ESXi and run IOL switches. I'm running EVE on a laptop now and I know that there is a "bridge" option when you right-click in the EVE workspace.
My question is: when I install the 20+ CSR1000v VMs on the ESXi host (not within EVE) and then install EVE to run IOL switches, can I trunk those IOL switches within EVE into the hypervisor's vSwitch to connect to the CSRs? Does the "bridge" option in the workspace give me that?
I know I can just run CSRs within EVE but I'm just exploring my options.
Thanks in advance.
Trunk IOL Switches to ESXi vSwitch
Moderator: mike
- Posts: 5068
- Joined: Wed Mar 15, 2017 4:44 pm
- Location: London
- Contact:
Re: Trunk IOL Switches to ESXi vSwitch
Of course, yes.
The EVE lab switch needs to connect to Cloud1 --> pnet1 --> VMnet A.
This VMnet A must be assigned to every one of your CSR VMs.
The EVE VM's second interface (Cloud1/pnet1) is bridged with VMnet A.
Then allow promiscuous mode on that VMnet A and you're done.
UD
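On the EVE VM side, the Cloud1-to-pnet1 mapping can be checked from the shell. A minimal sketch, assuming the EVE VM's second interface is named eth1 (interface names vary with your setup) and that bridge-utils is installed:

```shell
# Show the pnet1 bridge; the EVE VM's second NIC (eth1 here) should be a member
brctl show pnet1

# If eth1 is not yet in the bridge, add it and bring it up
brctl addif pnet1 eth1
ip link set eth1 up
```

In the topology, a Cloud1 network object maps to pnet1, so whatever the IOL switch trunks into Cloud1 reaches VMnet A through this bridge.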
- Posts: 5
- Joined: Fri Nov 03, 2017 7:34 pm
Re: Trunk IOL Switches to ESXi vSwitch
Thanks Uldis! I'll try this out once I receive the server and will let you know.
- Posts: 2
- Joined: Fri Jan 31, 2020 11:39 am
Re: Trunk IOL Switches to ESXi vSwitch
I was writing the post below, but figured out my real problem. EVE-NG is allowing tagged traffic into the eth1/cloud1 interface and routing/switching it properly to the correct VM, but it is dropping all return traffic on the same interface and VLAN. It seems like something simple I am missing.
Here is a tcpdump from EVE's eth1 capturing a ping from a vSphere VM to an EVE VM. The ARP request is being replied to, but no echo requests follow and the vSphere VM's ARP table isn't being populated.
07:01:00.910377 ARP, Request who-has 192.168.3.111 tell 192.168.3.1, length 46
07:01:00.913132 ARP, Reply 192.168.3.111 is-at 50:00:00:02:00:01 (oui Unknown), length 46
07:01:00.918236 ARP, Request who-has 192.168.3.111 (Broadcast) tell 192.168.3.1, length 46
07:01:00.920375 ARP, Reply 192.168.3.111 is-at 50:00:00:02:00:01 (oui Unknown), length 46
A ping from the EVE-NG VM shows echo requests that are never answered:
07:23:42.680631 IP 192.168.3.111 > 192.168.3.1: ICMP echo request, id 6, seq 0, length 80
07:23:44.680182 IP 192.168.3.111 > 192.168.3.1: ICMP echo request, id 6, seq 1, length 80
07:23:46.681093 IP 192.168.3.111 > 192.168.3.1: ICMP echo request, id 6, seq 2, length 80
Thanks in advance,
CCF
- Posts: 5068
- Joined: Wed Mar 15, 2017 4:44 pm
- Location: London
- Contact:
Re: Trunk IOL Switches to ESXi vSwitch
Avoid using NIC teaming on ESXi, especially for the VMnets connected to EVE.
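As a sketch, on a standard vSwitch the active teaming and security policies can be inspected with esxcli (the vSwitch and port group names below are placeholders; a distributed switch is managed from vCenter instead):

```shell
# Show the failover/teaming policy for the vSwitch carrying the EVE VMnet
esxcli network vswitch standard policy failover get -v vSwitch1

# Promiscuous mode (and usually forged transmits / MAC changes) should be
# allowed on the port group used by EVE's cloud interface
esxcli network vswitch standard portgroup policy security get -p "VMnet A"
```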
- Posts: 2
- Joined: Fri Jan 31, 2020 11:39 am
Re: Trunk IOL Switches to ESXi vSwitch
Thank you Uldis. I will look into this.
Upon rereading my previous post, I realized I cut too much when trimming irrelevant content. On my ESXi/vSphere host(s), which together form one lab, the entire lab rides on one distributed trunk. My non-trunked VM interfaces sit on single-VLAN port groups. I have one "Trunk" port group that allows all VLANs, and that is where I place all my VM trunk interfaces. For EVE-NG, the management interface (eth0) is on a single-VLAN port group, as is a lab-user access interface (eth2) that allows VM management at the user level. Both of those interfaces work. It is EVE's eth1/cloud1 interface, assigned to the trunk group, that is almost working with tagged traffic. The physical NICs that carry this distributed trunk and distribute its traffic between the hosts are teamed.
Thanks again.
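For reference, on a standard vSwitch an all-VLANs trunk port group like the one described above is simply a port group with VLAN ID 4095. A hedged sketch (names are placeholders; a distributed "Trunk" group is configured in vCenter rather than with esxcli):

```shell
# Create a port group and set VLAN ID 4095 so it passes all tagged traffic
esxcli network vswitch standard portgroup add -p Trunk -v vSwitch1
esxcli network vswitch standard portgroup set -p Trunk --vlan-id 4095
```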