We are benchmarking Redis 4.0.12. After investigating with perf and mpstat, we found that softirq accounts for almost 50% of CPU usage. Is this expected behavior?
Perf top
3.11% [kernel] [k] __softirqentry_text_start
2.94% [kernel] [k] _raw_spin_unlock_irqrestore
2.47% redis-server [.] readQueryFromClient
2.42% [kernel] [k] ipt_do_table
2.07% [kernel] [k] __inet_lookup_established
1.60% [kernel] [k] do_syscall_64
1.55% [kernel] [k] skb_release_data
1.42% [kernel] [k] fib_table_lookup
1.37% [kernel] [k] __fget
1.34% [kernel] [k] __nf_conntrack_find_get
mpstat
03:50:09 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
03:50:10 all 13.25 0.00 23.75 0.00 0.00 48.25 0.00 0.00 0.00 14.75
03:50:10 0 14.00 0.00 28.00 0.00 0.00 49.00 0.00 0.00 0.00 9.00
03:50:10 1 11.00 0.00 19.00 0.00 0.00 46.00 0.00 0.00 0.00 24.00
03:50:10 2 13.13 0.00 30.30 0.00 0.00 51.52 0.00 0.00 0.00 5.05
03:50:10 3 12.62 0.00 18.45 0.00 0.00 46.60 0.00 0.00 0.00 22.33
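The ~48% %soft in the mpstat output above only says that softirqs dominate, not which softirq type. On the box itself you would check `cat /proc/softirqs` (for a network-heavy Redis benchmark, NET_RX is the usual suspect). As a sketch, the same check can be run against a sample in that file's format; the counts below are made up for illustration, not taken from this machine:

```shell
# Sample text in /proc/softirqs format (hypothetical counts).
sample='          CPU0       CPU1
      NET_RX:  9000000    8800000
      NET_TX:    12000      11000
       TIMER:   300000     310000'
# Sum each softirq row across CPUs and report the dominant one.
top=$(echo "$sample" | awk 'NR>1 {s=0; for (i=2; i<=NF; i++) s+=$i;
                                  if (s>max) {max=s; top=$1}}
                            END {print top}')
echo "$top"   # prints the row label with the largest total, here NET_RX:
```

On a live system, replace the sample with `cat /proc/softirqs` (or watch it over time with `watch -d cat /proc/softirqs`).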
# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
0: 40 0 0 0 IO-APIC 0-edge timer
1: 0 0 9 0 IO-APIC 1-edge i8042
4: 1658 0 0 0 IO-APIC 4-edge ttyS0
8: 0 0 0 1 IO-APIC 8-edge rtc0
9: 0 0 0 0 IO-APIC 9-fasteoi acpi
12: 0 152 0 0 IO-APIC 12-edge i8042
24: 0 0 0 0 PCI-MSI 49152-edge virtio0-config
25: 0 0 0 0 PCI-MSI 49153-edge virtio0-control
26: 3 0 0 0 PCI-MSI 49154-edge virtio0-event
27: 59777 0 0 0 PCI-MSI 49155-edge virtio0-request
28: 0 0 0 0 PCI-MSI 65536-edge virtio1-config
29: 60256744 0 1 0 PCI-MSI 65537-edge virtio1-input.0
30: 651478 0 0 1 PCI-MSI 65538-edge virtio1-output.0
31: 1 59664318 0 0 PCI-MSI 65539-edge virtio1-input.1
32: 0 629387 0 0 PCI-MSI 65540-edge virtio1-output.1
33: 0 0 62776302 0 PCI-MSI 65541-edge virtio1-input.2
34: 0 0 651281 1 PCI-MSI 65542-edge virtio1-output.2
35: 1 0 0 60463462 PCI-MSI 65543-edge virtio1-input.3
36: 0 1 0 608075 PCI-MSI 65544-edge virtio1-output.3
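The interrupt table above already shows that the four virtio1-input queues are spread one-per-CPU (IRQs 29/31/33/35 land on CPU0-CPU3 respectively). A small awk one-liner can total each queue's counts across CPUs to confirm that distribution; the two sample rows below are copied from the table:

```shell
# Two rows in /proc/interrupts format, taken from the output above.
sample='29:  60256744  0  1  0  PCI-MSI 65537-edge virtio1-input.0
31:  1  59664318  0  0  PCI-MSI 65539-edge virtio1-input.1'
# Sum the four per-CPU columns for each input queue.
summary=$(echo "$sample" | awk '/virtio1-input/ {s=0;
                                for (i=2; i<=5; i++) s+=$i;
                                print $NF, s}')
echo "$summary"
# virtio1-input.0 60256745
# virtio1-input.1 59664319
```

Each queue's interrupts land almost entirely on a single CPU, so the RX load is already balanced across all four cores; the softirq cost is intrinsic to the packet rate rather than a mis-pinned IRQ.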
Comment From: charsyam
If you have a spare CPU, set it to handle IRQs, and pin redis-server to the other CPUs using taskset.
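A sketch of how that separation could look, assuming the IRQ numbers from the table above; the affinity mask math (1 shifted left by the CPU index, printed in hex) is what `/proc/irq/<n>/smp_affinity` expects, and the final two commands need root and a running redis-server, so they are shown commented out:

```shell
# Compute the smp_affinity hex mask for a given CPU: bit N set => CPU N.
cpu=3
mask=$(printf '%x' $((1 << cpu)))
echo "$mask"   # mask for CPU 3 is 8

# Then (as root), steer e.g. virtio1-input.3 (IRQ 35 above) to that CPU:
#   echo "$mask" > /proc/irq/35/smp_affinity
# and keep redis-server on the remaining CPUs so it doesn't compete
# with softirq processing on the IRQ CPU:
#   taskset -cp 0-2 "$(pidof redis-server)"
```

Note that on a 4-queue virtio NIC like this one the input IRQs are already spread across all four CPUs, so the gain comes mainly from moving redis-server off the busiest IRQ CPUs rather than from re-pinning the IRQs themselves.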