Juniper high CPU usage

Before posting something, READ the changelog, WATCH the how-to videos, and provide the following:
Your install: bare metal or ESXi, CPU model, RAM, HD, which EVE version you have, the output of uname -a, and any other info that might help us help you faster.

Moderator: mike

viets
Posts: 6
Joined: Mon Aug 14, 2017 5:37 pm

Juniper high CPU usage

Post by viets » Mon Aug 14, 2017 5:39 pm

Hi guys,

I'm certain this is not an EVE issue, but I need some data to compare against, and I hope someone here has enough knowledge of QEMU and the Juniper images.

I'm running VMware ESXi 6.5 on an Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz with 6 cores (12 with HT) and 128 GB RAM.

Before I used EVE, I installed all virtual machines (CSR, Juniper, etc.) directly in VMware, which worked pretty well.

But now I'd like to migrate all VMs to EVE, just to gain a little more flexibility.

So I installed the EVE OVA and assigned all CPUs and half of the RAM to the VM. Then I tried to run just a single Juniper VCP and VFP instance. They run fine, but the CPU load of the VFP VM after it connects to the VCP is much higher than under plain VMware. Both have just a basic config, with no traffic load. I tried different configs, but nothing changed.

I know this is nested virtualization and can't be as fast as a single layer of virtualization, but I think the loss here is too high.

Before I get too technical for some people, I have a simple question: how much CPU usage do you see when running Juniper VMs?
With CPU usage this high I'm not able to run all 10 of my Juniper routers, which worked fine under plain VMware.

I also tried GNS3 and have the same issue, so I believe it's not an EVE issue. I'm just not certain whether it's something with the Juniper image or with QEMU on my setup. That's why I'm posting here, hoping someone can share their usage.

For people with some technical background: please look at my test results below, and if you know anything that can help, please let me know.

Test results:

On plain VMware:
VCP = 23 MHz in VMware
VFP = 767 MHz in VMware

On EVE nested in VMware, running only 1 Juniper device (1 VCP / 1 VFP):
EVE uses 3.96 GHz

In top under EVE:

Code: Select all

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                            
12896 root      20   0 5342356 2.246g  23044 S 147.3  7.1 975:14.47 qemu-system-x86 (vfp)                                                    
 6881 root      20   0 2949528 2.059g  22072 S   2.3  6.6  15:12.74 qemu-system-x86 (vcp)
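For a steadier number than one top screenshot, you can sample the QEMU processes over a few intervals. A minimal sketch (pidstat needs the sysstat package, which may not be installed on EVE by default):

Code: Select all

# sample CPU of all qemu-system processes: 5-second interval, 3 samples
pidstat -p $(pgrep -d, -f qemu-system) 5 3
# without sysstat, batch-mode top gives a similar view:
top -b -d 5 -n 3 | grep qemu-system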
After some investigation I found that on the VFP, software interrupts eat CPU in a way they don't under VMware.
top from the VFP:

Code: Select all

top - 17:28:38 up 10:10,  1 user,  load average: 1.09, 1.16, 1.14
Tasks: 116 total,   2 running, 113 sleeping,   0 stopped,   1 zombie
Cpu(s):  2.8%us,  7.8%sy,  0.0%ni, 88.8%id,  0.0%wa,  0.0%hi,  0.1%si,  0.4%st
Mem:   4046340k total,  2605984k used,  1440356k free,    18124k buffers
Swap:        0k total,        0k used,        0k free,   656556k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 1151 root      20   0  709m 299m 161m R   30  7.6 211:57.70 J-UKERN            
 1138 root      20   0 33.8g  74m 4504 S   16  1.9 196:14.42 riot               
   18 root      -2   0     0    0    0 S   14  0.0 137:51.30 ksoftirqd/1        
   24 root      -2   0     0    0    0 S   11  0.0 113:41.79 ksoftirqd/2        
    3 root      -2   0     0    0    0 S    7  0.0  41:19.24 ksoftirqd/0        
   65 root     -51   0     0    0    0 S    7  0.0  21:43.27 irq/44-virtio1-    
  122 root     -51   0     0    0    0 S    3  0.0   0:00.75 irq/4-serial       
  889 root      20   0  9508 1224  920 S    0  0.0   0:03.88 fpc.core.push.s    
23772 root      20   0 19408 1148  860 R    0  0.0   0:00.10 top  


Under VMware the J-UKERN and riot processes behave the same, but the ksoftirqd threads are not visible.
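To see which softirq class those ksoftirqd threads are actually servicing, you can diff /proc/softirqs over a short interval inside the VFP. A minimal sketch:

Code: Select all

# snapshot softirq counters twice, 5 s apart, then diff to see what fires
cat /proc/softirqs > /tmp/si.1; sleep 5; cat /proc/softirqs > /tmp/si.2
diff /tmp/si.1 /tmp/si.2

The interrupt counters on the VFP look like this: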

Code: Select all

cat /proc/interrupts 
           CPU0       CPU1       CPU2       
  0:        138          0          0   IO-APIC-edge      timer
  1:         10          0          0   IO-APIC-edge      i8042
  4:      15095       3341          0   IO-APIC-edge      serial
  6:          1          2          0   IO-APIC-edge      floppy
  7:          0          0          0   IO-APIC-edge      parport0
  8:         34          0          0   IO-APIC-edge      rtc0
  9:          0          0          0   IO-APIC-fasteoi   acpi
 12:        130          0          0   IO-APIC-edge      i8042
 14:       2170          1          0   IO-APIC-edge      ata_piix
 15:         74          8          0   IO-APIC-edge      ata_piix
 18:      37478          0          0   IO-APIC-fasteoi   ext
 19:    2826992     148627          0   IO-APIC-fasteoi   int
 40:          0          0          0   PCI-MSI-edge      igb_uio
 41:          0          0          0   PCI-MSI-edge      igb_uio
NMI:          0          0          0   Non-maskable interrupts
LOC:   76477645  482005833  266118529   Local timer interrupts
SPU:          0          0          0   Spurious interrupts
PMI:          0          0          0   Performance monitoring interrupts
IWI:          0          0          0   IRQ work interrupts
RTR:          0          0          0   APIC ICR read retries
RES:   82013955    2052760   96450382   Rescheduling interrupts
CAL:       1061        144         25   Function call interrupts
TLB:      72509      27116          3   TLB shootdowns
TRM:          0          0          0   Thermal event interrupts
THR:          0          0          0   Threshold APIC interrupts
MCE:          0          0          0   Machine check exceptions
MCP:        236        236        236   Machine check polls
ERR:          0
MIS:          0
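The LOC (local timer interrupts) row above dwarfs everything else - against the ~10 hours of uptime shown in top, CPU1's 482 million works out to roughly 13,000 timer interrupts per second. For a live rate, sample the counter twice:

Code: Select all

# approximate local-timer interrupt rate per CPU over 10 seconds
grep '^LOC:' /proc/interrupts; sleep 10; grep '^LOC:' /proc/interrupts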
Just for reference, the ps -fax output from EVE:

Code: Select all

 /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 1 -t vMX-VCP -F /opt/qemu-2.6.2/bin/qemu-system-x86_64 -d 0 -- -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
 6879 ?        S      0:00  \_ /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 1 -t vMX-VCP -F /opt/qemu-2.6.2/bin/qemu-system-x86_64 -d 0 -- -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
 6880 ?        S      0:00      \_ sh -c /opt/qemu-2.6.2/bin/qemu-system-x86_64 -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
 6881 ?        Sl    15:16          \_ /opt/qemu-2.6.2/bin/qemu-system-x86_64 -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12893 ?        S      0:00 /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 2 -t vMX-VFP -F /opt/qemu-2.9.0/bin/qemu-system-x86_64 -d 0 -- -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12894 ?        S      0:00  \_ /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 2 -t vMX-VFP -F /opt/qemu-2.9.0/bin/qemu-system-x86_64 -d 0 -- -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12895 ?        S      0:01      \_ sh -c /opt/qemu-2.9.0/bin/qemu-system-x86_64 -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12896 ?        Sl   979:44          \_ /opt/qemu-2.9.0/bin/qemu-system-x86_64 -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
20845 ?        S<     0:00 cpulimit -q -p 12896 -l 150 -b
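Note the cpulimit process at the bottom: it caps the VFP's QEMU (PID 12896) at 150% CPU, which matches the ~147% seen in top. If you want to experiment with a tighter cap, a sketch using the PIDs from this run (the limiter may simply be restarted by the wrapper):

Code: Select all

# replace the existing limiter with a 100% cap on the VFP QEMU process
kill 20845
cpulimit -q -p 12896 -l 100 -b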
If you need any further information, please let me know.

Many thanks in advance
Viets

ecze
Posts: 533
Joined: Wed Mar 15, 2017 1:54 pm

Re: Juniper high CPU usage

Post by ecze » Mon Aug 14, 2017 11:14 pm

No miracle here: nested virtualization consumes a lot of CPU cycles compared to a bare-metal install.

For a real comparison, you should use a bare-metal installation.
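Before blaming nesting alone, it is also worth confirming that the EVE VM really received hardware-assisted virtualization from ESXi ("Expose hardware assisted virtualization to the guest OS" in the VM's CPU settings). A quick check inside the EVE VM:

Code: Select all

# VT-x must be visible to the guest and KVM usable, or accel=kvm won't work
egrep -c '(vmx|svm)' /proc/cpuinfo   # should be > 0 (one match per vCPU)
ls -l /dev/kvm                       # device node should exist
lsmod | grep kvm                     # expect kvm and kvm_intel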

E.

Chris929
Posts: 83
Joined: Tue Jun 27, 2017 8:51 am

Re: Juniper high CPU usage

Post by Chris929 » Tue Aug 15, 2017 7:27 am

I had the same with the EVE VM on ESXi 6.0 with an L5630 and 144 GB RAM.
After a while my vSRXes calmed down a bit ;)
But indeed - nested "steals" CPU like crazy.
