vSRXNG problems
Moderator: mike
-
- Posts: 83
- Joined: Tue Jun 27, 2017 8:51 am
Re: vSRXNG problems
17.3 is like 15.1X49-D70 - the long-term goal is to get everything back into mainline versioning, but this will not happen before 18.4 onwards (at least for the vSRX).
I would stick to the 15.1X49 train until Juniper officially announces that 15.1X49 is no longer needed and that 18.x or 19.x has feature parity.
Currently six vSRX instances on D110 share a lab on my DL360 G7 ESX host with 2x Xeon E5649, each with 4 GB RAM; after booting they are quite responsive, so you can lab with them very well.
-
- Posts: 31
- Joined: Thu Jan 18, 2018 11:43 am
Re: vSRXNG problems
Hello,
I have the same issue. Today I finished a bare-metal installation on my brand-new HP DL360e Gen8 server with two 8-core processors (CPU details below). I am using 2x 1TB HDDs in RAID 1, 64 GB RAM, and the vSRX 15.1X49-D120.3 image. I played with one node: the first boot took ages, and the first commit (just hostname and interface configuration) took 15 seconds; subsequent commits took 15 to 30 seconds. I was expecting at least fluent responses, but this is embarrassing. Any ideas?
Second issue: why does a node without any connections show all its interfaces in up/up state?
Thanks.
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz
Stepping: 7
CPU MHz: 1200.000
CPU max MHz: 1800.0000
CPU min MHz: 1200.0000
BogoMIPS: 3593.45
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts
kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
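The kvm-ok output only proves that /dev/kvm exists. As a quick extra sketch (assuming a Linux host; the kvm_intel path below assumes an Intel box like this one), you can also check that the CPU flags and nested virtualization look sane:

```shell
# Count logical CPUs advertising hardware virtualization extensions
# (vmx = Intel VT-x, svm = AMD-V); 0 means VT-x/AMD-V is missing or
# disabled in the BIOS, and vSRX will crawl without it.
grep -E -c 'vmx|svm' /proc/cpuinfo

# On an Intel host, check whether nested virtualization is enabled
# for the kvm_intel module (prints Y or 1 when on).
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo "kvm_intel not loaded"
```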
-
- Posts: 7
- Joined: Fri Dec 22, 2017 10:08 am
Re: vSRXNG problems
Hello,
I am glad to know that I am not the only one with this kind of issue.
But we still have to find a solution to this problem.
Ludovic
-
- Posts: 31
- Joined: Thu Jan 18, 2018 11:43 am
Re: vSRXNG problems
Hello,
I bought the server intending to run EVE-NG on it and prepare for the JNCIE-SEC, so my only nodes are Junipers. I've played around and it is driving me nuts: as mentioned, each QEMU instance takes around 120-220% CPU. With 9 vSRX nodes and a couple of switches running, the overall utilization shown in the status page is around 50%, but each QEMU process hammers the CPU. The behavior is the same with 17.3R1.10.
Any idea where to look? Is it the same when you run vSRX directly on ESXi or KVM? I have no clue what we can check...
Thanks...
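For reference, a quick way to confirm which QEMU processes are pinning cores (a sketch, assuming a Linux host with procps installed):

```shell
# List processes sorted by CPU usage, keeping the header plus any QEMU
# lines; %CPU is per process, so values above 100% mean that instance
# is consuming more than one host thread.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | awk 'NR==1 || /qemu/'
```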
-
- Posts: 5080
- Joined: Wed Mar 15, 2017 4:44 pm
- Location: London
- Contact:
Re: vSRXNG problems
True, the QEMU processes force the CPU high, but the overall lab will not be impacted.
I saw the same behaviour on VM machines with KVM.
Total lab CPU use should not spike that way - it is per virtual core.
For example, ISE 2.3 can reach 300% CPU on KVM during boot, but after that all is OK.
-
- Posts: 31
- Joined: Thu Jan 18, 2018 11:43 am
Re: vSRXNG problems
Uldis,
The question is whether a boot time of approx. 25 minutes for a single node is OK, and a simple commit taking 30 seconds?
Don't you have experience with how responsive a single node is on ESXi? Is it possible that it's simply because of KVM?
I just searched the Juniper forum and couldn't find anything helpful, and KVM is officially supported by Juniper. If this is the general performance of vSRX on KVM, I would be surprised if anyone bought it.
-
- Posts: 5080
- Joined: Wed Mar 15, 2017 4:44 pm
- Location: London
- Contact:
Re: vSRXNG problems
No mate, it is not normal then.
janostasik wrote: ↑Fri Jan 26, 2018 1:06 pm
The question is whether a boot time of approx. 25 minutes for a single node is OK, and a simple commit taking 30 seconds? Don't you have experience with how responsive a single node is on ESXi? Is it possible that it's simply because of KVM? I just searched the Juniper forum and couldn't find anything helpful, and KVM is officially supported by Juniper. If this is the general performance of vSRX on KVM, I would be surprised if anyone bought it.
Any SRX boot takes 3-5 minutes max, even 17.x nodes.
It can be an issue if some antivirus is scanning the virtual nodes...
Please come to the live chat so we can talk:
http://www.eve-ng.net/index.php/live-helpdesk
Find me as UD-EVE.
For the chat login please create a new account; it is not the same as the forum login.
UD
-
- Posts: 7
- Joined: Fri Dec 22, 2017 10:08 am
Re: vSRXNG problems
Hi,
I found something similar in the tutorial 'Optimizing vSRX Performance in VMware vSphere'. Apparently hardware virtualization must be enabled via the vSphere client for VMware; otherwise vSRX is very slow.
Maybe we should do the same for KVM with QEMU. For example, these two links:
How to enable Nested Virtualization in KVM on CentOS 7 / RHEL 7
Using CPU host-passthrough with virt-manager
How can we switch the CPU mode for the VM to "host-model" or "host-passthrough"?
Regards,
Ludovic
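For anyone trying host-passthrough under plain libvirt/KVM, the relevant domain-XML fragment looks roughly like this (a sketch: the VM name vsrx01 and the topology values are placeholders; note that EVE-NG drives QEMU directly, where the closest equivalent is the -cpu host flag):

```xml
<!-- edit with: virsh edit vsrx01 -->
<!-- expose the host CPU model to the guest instead of an emulated one -->
<cpu mode='host-passthrough'>
  <!-- optional: fix the topology the guest sees -->
  <topology sockets='1' cores='2' threads='1'/>
</cpu>
```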
-
- Posts: 7
- Joined: Fri Dec 22, 2017 10:08 am
Re: vSRXNG problems
I found this same requirement for the KVM host in the Juniper guide 'Understanding vSRX with KVM':
Note: vSRX requires you to enable hardware-based virtualization on a host OS that contains an Intel Virtualization Technology (VT) capable processor.
Regards,
Ludovic