I have tried a few platforms and am currently on Google Cloud, but I still have no management connectivity with a network.
I have the management interfaces connected to a single network. I can ping between the hosted devices' mgmt interfaces, but I can't ping the EVE-NG interface or the Ansible VM, which are all on the same subnet. I can ping between Ansible and EVE-NG, but not between the hosted devices and the actual devices.
I must be missing something, like promiscuous mode.
My current interface mappings:
root@eve01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
pnet0           8000.12fc66a6ee53       no              eth0
                                                        vunl0_1_0
                                                        vunl0_2_0
                                                        vunl0_3_0
                                                        vunl0_4_0
                                                        vunl0_5_0
pnet1           8000.000000000000       no
root@eve01:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 42:01:0a:9a:00:08
          inet6 addr: fe80::4001:aff:fe9a:8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21063 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36589 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1946070 (1.9 MB)  TX bytes:5121464 (5.1 MB)
lo        Link encap:Local Loopback
--removed--
pnet0     Link encap:Ethernet  HWaddr 12:fc:66:a6:ee:53
          inet addr:10.154.0.8  Bcast:10.154.0.8  Mask:255.255.255.255
          inet6 addr: fe80::4001:aff:fe9a:8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21145 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36548 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1657440 (1.6 MB)  TX bytes:5121053 (5.1 MB)
I have also changed this:
# Cloud devices
iface eth1 inet manual
auto pnet0
iface pnet0 inet manual
    bridge_ports eth1
    bridge_stp off
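For testing, the same mapping can also be applied at runtime before committing it to /etc/network/interfaces. A sketch, using the interface names from my setup (bridge-utils assumed installed):

```shell
# Attach eth1 to the pnet0 bridge at runtime
brctl addif pnet0 eth1
ip link set eth1 up

# Enable promiscuous mode, in case that is the missing piece
ip link set eth1 promisc on

# Verify eth1 now appears under pnet0
brctl show pnet0
```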
So why can't I get IP connectivity between EVE and the hosted devices?
mrp (Posts: 12, Joined: Sun Jun 16, 2019 5:47 pm)

Uldis (UD) (Posts: 5190, Joined: Wed Mar 15, 2017 4:44 pm, Location: London)
Re: Give me a clue on the networking between eve and hosted devices
GCP assigns only one IP to your management VM, so connecting other nodes to Cloud0 is useless; they will not reach the internet.
For this purpose EVE Pro has a NAT cloud, which NATs an internal subnet out of the management Cloud0 IP. This feature is available on EVE Pro only.
To internally interconnect nodes, like an extended LAN/VLAN, you can use any cloud number.
Please read section 10 of our cookbook, about clouds:
https://www.eve-ng.net/images/EVE-COOK-BOOK-1.12.pdf
Uldis
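On the Community Edition, a similar effect can be approximated by hand with Linux NAT on the EVE host. A minimal sketch, assuming pnet9 is an otherwise unused internal bridge and the 172.29.129.0/24 subnet is illustrative, not part of any documented EVE feature:

```shell
# Give the internal bridge a gateway address for the lab nodes
# (nodes use 172.29.129.1 as their default gateway)
ip addr add 172.29.129.1/24 dev pnet9
ip link set pnet9 up

# Enable routing and masquerade the lab subnet out of the
# management bridge (pnet0), which holds the VM's only GCP IP
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.29.129.0/24 -o pnet0 -j MASQUERADE
```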