Bugzilla – Bug 929339
VUL-0: CVE-2015-3456: qemu kvm xen: VENOM qemu floppy driver host code execution
Last modified: 2015-06-24 11:42:22 UTC
An update workflow for this issue was started. This issue was rated as important. Please submit fixed packages by 2015-05-19. When done, reassign the bug to security-team@suse.de. https://swamp.suse.de/webswamp/wf/61703
http://venom.crowdstrike.com/ is public now.

---------------------

The bug is in QEMU's virtual Floppy Disk Controller (FDC). This vulnerable FDC code is used in numerous virtualization platforms and appliances, notably Xen, KVM, and the native QEMU client. VMware, Microsoft Hyper-V, and Bochs hypervisors are not impacted by this vulnerability.

Since the VENOM vulnerability exists in the hypervisor's codebase, the vulnerability is agnostic of the host operating system (Linux, Windows, Mac OS, etc.). Though the VENOM vulnerability is also agnostic of the guest operating system, an attacker (or an attacker's malware) would need to have administrative or root privileges in the guest operating system in order to exploit VENOM.

------------------
opensuse fixes can now be submitted.
The floppy emulation is enabled by default in qemu/kvm. It can only be disabled by supplying -nodefaults on the qemu command line. Note that when using this option, the other standard devices (console and VGA) must also be added back explicitly if required.
Thanks Marcus, I'll write a TID and will release it today.
XSA-133

Xen Security Advisory CVE-2015-3456 / XSA-133
version 2

Privilege escalation via emulated floppy disk drive

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

The code in qemu which emulates a floppy disk controller did not correctly bounds check accesses to an array and therefore was vulnerable to a buffer overflow attack.

IMPACT
======

A guest which has access to an emulated floppy device can exploit this vulnerability to take over the qemu process, elevating its privilege to that of the qemu process.

VULNERABLE SYSTEMS
==================

All Xen systems running x86 HVM guests without stubdomains are vulnerable to this, depending on the specific guest configuration. The default configuration is vulnerable.

Guests using either the traditional "qemu-xen" or upstream qemu device models are vulnerable. Guests using a qemu-dm stubdomain to run the device model are only vulnerable to takeover of that service domain.

Systems running only x86 PV guests are not vulnerable. ARM systems are not vulnerable.

MITIGATION
==========

Enabling stubdomains will mitigate this issue, by reducing the escalation to only those privileges accorded to the service domain. qemu-dm stubdomains are only available with the traditional "qemu-xen" version.

CREDITS
=======

This issue was discovered by Jason Geffner, Senior Security Researcher at CrowdStrike.

[ ... ]
TID is at https://www.suse.com/support/kb/doc.php?id=7016497
Is there an ETA on when maintenance updates for SLES that fix these vulnerabilities will be released? Thanks.
Created attachment 634202 [details]
CVE-2015-3456.c

Reproducer, I think.

gcc -o CVE-2015-3456 CVE-2015-3456.c
./CVE-2015-3456

will crash qemu.
(worked on my 13.2 qemu-kvm guest, not sure if it will work everywhere)
(In reply to Marcus Meissner from comment #44) > Created attachment 634202 [details] > CVE-2015-3456.c > > reproducer I think > > gcc -o CVE-2015-3456 CVE-2015-3456.c > ./CVE-2015-3456 > > will crash qemu SLES 11 SP3 Xen HVM on SLES 12 host with no fda/fdb defined in /etc/xen/vm/<vmname> did not crash. qemu-dm used 100% cpu while the program was running and acted normal once I exited the program. SLES 11 SP3 Xen HVM kernel ring buffer displayed: [3024124.174445] floppy0: unexpected interrupt repl[0]=40 repl[1]=0 repl[2]=0 repl[3]=42 repl[4]=42 repl[5]=42 repl[6]=2 [3024124.175150] floppy0: unexpected interrupt repl[0]=40 repl[1]=0 repl[2]=0 repl[3]=42 repl[4]=42 repl[5]=42 repl[6]=2 [3024124.175157] irq 6: nobody cared (try booting with the "irqpoll" option) [3024124.175161] Pid: 0, comm: swapper Not tainted 3.0.101-0.29-default #1 [3024124.175163] Call Trace: [3024124.175175] [<ffffffff81004935>] dump_trace+0x75/0x310 [3024124.175183] [<ffffffff8145e083>] dump_stack+0x69/0x6f [3024124.175190] [<ffffffff810c92a3>] __report_bad_irq+0x33/0xe0 [3024124.175196] [<ffffffff810c9552>] note_interrupt+0x202/0x240 [3024124.175201] [<ffffffff810c72ef>] handle_irq_event_percpu+0xef/0x1c0 [3024124.175206] [<ffffffff810c73f4>] handle_irq_event+0x34/0x60 [3024124.175211] [<ffffffff810ca0f5>] handle_edge_irq+0x65/0x130 [3024124.175218] [<ffffffff81004487>] handle_irq+0x17/0x20 [3024124.175223] [<ffffffff81003c16>] do_IRQ+0x56/0xe0 [3024124.175228] [<ffffffff81461313>] common_interrupt+0x13/0x13 [3024124.175235] [<ffffffff810300a2>] native_safe_halt+0x2/0x10 [3024124.175241] [<ffffffff8100adf5>] default_idle+0x145/0x150 [3024124.175246] [<ffffffff81002126>] cpu_idle+0x66/0xb0 [3024124.175253] [<ffffffff81befeff>] start_kernel+0x376/0x447 [3024124.175259] [<ffffffff81bef3c9>] x86_64_start_kernel+0x123/0x13d [3024124.175263] handlers: [3024124.175273] [<ffffffffa0230250>] floppy_hardint [3024124.175276] Disabling IRQ #6
same result we saw for that combination.
(In reply to Jared Hudson from comment #46)
> SLES 11 SP3 Xen HVM on SLES 12 host with no fda/fdb defined in
> /etc/xen/vm/<vmname> did not crash. qemu-dm used 100% cpu while the program
> was running and acted normal once I exited the program. SLES 11 SP3 Xen HVM
> kernel ring buffer displayed:

I see the same on a SLES12 Xen host with a SLES12 HVM guest, which uses /usr/lib/xen/bin/qemu-system-i386 (aka qemu-xen). So at least Marcus' reproducer (as is) won't crash /usr/lib/xen/bin/qemu-{dm,system-i386}.
(In reply to Jared Hudson from comment #46)
> (In reply to Marcus Meissner from comment #44)
> > Created attachment 634202 [details]
> > CVE-2015-3456.c
> >
> > reproducer I think
> >
> > gcc -o CVE-2015-3456 CVE-2015-3456.c
> > ./CVE-2015-3456
> >
> > will crash qemu
>
> SLES 11 SP3 Xen HVM on SLES 12 host with no fda/fdb defined in
> /etc/xen/vm/<vmname> did not crash. qemu-dm used 100% cpu while the program
> was running and acted normal once I exited the program. SLES 11 SP3 Xen HVM
> kernel ring buffer displayed:

On a SLES 12 host with a SLES 12 guest, I got essentially the same results when running qemu-system-x86_64 under gdb. Running normally, it segfaulted.
I get pretty much the identical result as in comment #46 with an SP3 KVM guest on an SP3 host, i.e. the guest does not crash, qemu-kvm is using a lot of CPU, and a very similar stack trace eventually appears in the ring buffer. Guest was started via OpenStack nova without a floppy device attached:

qemu 26222 1 65 22:56 ? 00:10:05 /usr/bin/qemu-kvm -name instance-00000033 -S -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu qemu64,+bmi1,+tbm,+fma4,+xop,+osvw,+3dnowprefetch,+misalignsse,+sse4a,+abm,+cr8legacy,+cmp_legacy,+lahf_lm,+pdpe1gb,+fxsr_opt,+mmxext,+hypervisor,+f16c,+avx,+osxsave,+xsave,+aes,+popcnt,+sse4.2,+sse4.1,+fma,+ssse3,+monitor,+pclmuldq,+vme -m 512 -smp 1,sockets=1,cores=1,threads=1 -uuid cb660c5d-0fd0-47a8-9feb-94975a6c10c0 -smbios type=1,manufacturer=Devel:Cloud:5:Staging / SLE_11_SP3,product=OpenStack Nova,version=2014.2.4-2014.2.4.dev35,serial=dbc769b0-64df-4d5f-8ee7-a38469663076,uuid=cb660c5d-0fd0-47a8-9feb-94975a6c10c0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000033.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/cb660c5d-0fd0-47a8-9feb-94975a6c10c0/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/nova/instances/cb660c5d-0fd0-47a8-9feb-94975a6c10c0/disk.local,if=none,id=drive-virtio-disk1,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:21:f7:cb,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/cb660c5d-0fd0-47a8-9feb-94975a6c10c0/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 0.0.0.0:1 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
Created attachment 634211 [details] Difference between qemu-kvm arguments for crashable and possibly non-crashable guests Here is a diff more clearly showing the difference between the qemu-kvm arguments in comment #50 and those in comment #52. I split the arguments into one per line in order to obtain the diff. Apparently these differences are sufficient to make the VM instantly crashable.
(In reply to Adam Spiers from comment #53)
> Difference between qemu-kvm arguments for crashable and possibly
> non-crashable guests

In the crashable case (#52), the floppy drive is connected to a floppy controller:

-drive file=/var/lib/nova/instances/978410f3-45e0-4208-8372-0bdfa33f09fc/disk.eph0,if=none,id=drive-fdc0-0-0,format=qcow2,cache=none
-global isa-fdc.driveA=drive-fdc0-0-0

In the non-crashable case, the floppy is connected to a virtio-blk controller:

-drive file=/var/lib/nova/instances/cb660c5d-0fd0-47a8-9feb-94975a6c10c0/disk.local,if=none,id=drive-virtio-disk1,format=qcow2,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1

The bug only applies to the emulated floppy controller, not the para-virtualized virtio-blk controller. I'm curious to peek at the libvirt XML nova created for these two instances. Can you provide that?
(In reply to James Fehlig from comment #54)
> In the crashable case (#52), the floppy drive is connected to a floppy
> controller
>
> -drive
> file=/var/lib/nova/instances/978410f3-45e0-4208-8372-0bdfa33f09fc/disk.eph0,
> if=none,id=drive-fdc0-0-0,format=qcow2,cache=none
>
> -global isa-fdc.driveA=drive-fdc0-0-0

This part is correct.

> In the non-crashable case, the floppy is connected to a virtio-blk controller
>
> -drive
> file=/var/lib/nova/instances/cb660c5d-0fd0-47a8-9feb-94975a6c10c0/disk.local,
> if=none,id=drive-virtio-disk1,format=qcow2,cache=none
>
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,
> id=virtio-disk1

Don't know what I was thinking here. You even said this instance was booted without a floppy :-/. But what is this disk?

> I'm curious to peek at the libvirt XML nova created for these two instances.
> Can you provide that?

Might still be useful. Also, does hwinfo show a floppy controller in both instances? E.g. my instances have

08: None 00.0: 0102 Floppy disk controller
  [Created at misc.281]
  Unique ID: rdCR.3wRL2_g4d2B
  Hardware Class: storage
  Model: "Floppy disk controller"
  I/O Port: 0x3f2 (rw)
  I/O Ports: 0x3f4-0x3f5 (rw)
  I/O Port: 0x3f7 (rw)
  DMA: 2
  IRQ: 6 (3 events)
  Config Status: cfg=no, avail=yes, need=no, active=unknown
Created attachment 634274 [details]
a stronger reproducer in C

With this I managed to segfault (on 13.1) a

qemu-kvm -cdrom openSUSE-13.2-GNOME-Live-i686.iso -m 1500 \
  -monitor stdio -nodefaults -vga cirrus -net nic -net user

and also a SUSE Cloud instance booted without a floppy drive (which still showed a Floppy disk controller in hwinfo --all), so it is pretty obvious that using -nodefaults is not enough.
Again: The floppy _controller_ is always present in all pc* machines plus isapc in i386/x86_64 (i.e., all except for "none"). The same is also true for the prep machine in ppc/ppc64 (but not for the more common g3beige, mac99 or pseries machines). What -nodefaults suppresses for the floppy is the creation of a default floppy _drive_ in driveA called "floppy0", as can be observed in "info qtree".

Verification: You can see in the comments above that -global isa-fdc.driveA is setting a property for an existing device despite -nodefaults, not creating a new isa-fdc via -device. In QEMU 2.3 you can run "qemu-system-x86_64 -nodefaults -monitor stdio" and at the prompt execute "info qom-tree" to see that there is some "device[*] (isa-fdc)" node listed. In earlier versions you can run "info qtree" to a similar effect ('dev: isa-fdc, id ""').

Code: The floppy controller is created in hw/i386/pc.c:pc_basic_device_init() -> hw/block/fdc.c:fdctrl_init_isa(), and hw/isa/pc87312.c:pc87312_realize() respectively. There is no dependency on vl.c:default_floppy in either location.
(In reply to Bernhard Wiedemann from comment #58)
> a stronger reproducer C

Huh, peeking at the qemu patch, I thought more commands were affected. But according to #5 only READ ID and DRIVE SPECIFICATION COMMAND cause problems.

For completeness, I'll summarize my test results and then stop spamming the bug.

KVM: I can crash qemu with either reproducer regardless of host/guest combination.

Xen: I'm unable to crash qemu with either reproducer when the host is SLE11 SP3. On a SLE12 host, I can crash qemu with DRIVE SPECIFICATION COMMAND, but not READ ID.

With patches applied, I'm unable to crash qemu at all (only tested SLE11 SP3 and SLE12 KVM/Xen hosts).
any ETA for the opensuse submissions?
(In reply to Marcus Rückert from comment #61)
> any ETA for the opensuse submissions?

13.1: https://build.opensuse.org/request/show/307164
devel: https://build.opensuse.org/request/show/307165

13.2 is still in progress - you will find it in Virtualization:openSUSE13.2 soon.
Factory: https://build.opensuse.org/request/show/307166
13.1: https://build.opensuse.org/request/show/307170
13.2: https://build.opensuse.org/request/show/307172
Factory: https://build.opensuse.org/request/show/307303
Upgrade instructions:

After installing the fixed qemu / kvm package, all running kvm processes need to be shut down for the fix to become effective.

For SUSE OpenStack Cloud this can be accomplished using live migration or with these commands:

. .openrc
nova list --all_tenants --status active |\
  perl -ne "m/^[| ]*([0-9a-f-]+)/ && print \$1.' '" > active
for id in `cat active` ; do
    nova suspend $id
    while nova show $id | grep OS-EXT-STS:task_state.*suspending ; do
        sleep 3
    done
    nova resume $id
done
SUSE-SU-2015:0889-1: An update that fixes one vulnerability is now available.

Category: security (important)
Bug References: 929339
CVE References: CVE-2015-3456
Sources used:

SUSE Linux Enterprise Server 11 SP3 (src): kvm-1.4.2-0.22.27.1
SUSE Linux Enterprise Desktop 11 SP3 (src): kvm-1.4.2-0.22.27.1
openSUSE-SU-2015:0893-1: An update that fixes one vulnerability is now available.

Category: security (important)
Bug References: 929339
CVE References: CVE-2015-3456
Sources used:

openSUSE 13.2 (src): libcacard-2.1.3-4.1, qemu-2.1.3-4.1, qemu-linux-user-2.1.3-4.1
openSUSE-SU-2015:0894-1: An update that fixes one vulnerability is now available.

Category: security (important)
Bug References: 929339
CVE References: CVE-2015-3456
Sources used:

openSUSE 13.1 (src): qemu-1.6.2-4.8.1, qemu-linux-user-1.6.2-4.8.1
(In reply to James Fehlig from comment #60)
> Xen:
> I'm unable to crash qemu with either reproducer when host is SLE11 SP3. On
> SLE12 host, I can crash qemu with DRIVE SPECIFICATION COMMAND, but not READ
> ID.
>
> With patches applied, I'm unable to crash qemu at all (only tested SLE11 SP3
> and SLE12 KVM/Xen hosts).

I notice that there have been updates for SLE11 SP3 KVM but none for Xen. Are there patches for Xen pending, or are we saying that Xen is unaffected on SLES 11 SP3?
(In reply to Ralph Schaffner from comment #73)
> (In reply to James Fehlig from comment #60)
> > Xen:
> > I'm unable to crash qemu with either reproducer when host is SLE11 SP3. On
> > SLE12 host, I can crash qemu with DRIVE SPECIFICATION COMMAND, but not READ
> > ID.
> >
> > With patches applied, I'm unable to crash qemu at all (only tested SLE11 SP3
> > and SLE12 KVM/Xen hosts).
>
> I notice that there have been updates for SLE11 SP3 KVM but none for Xen.
> Are there patches for Xen pending or are we saying that Xen is unaffected on
> SLES 11 SP3?

There are patches for Xen which have been submitted (see comment #28). If they haven't shown up in the update channel, then perhaps QA is still working on getting it released.
Correct, the updates are in QA and will be released over the next days.
SUSE-SU-2015:0896-1: An update that solves two vulnerabilities and has one errata is now available.

Category: security (important)
Bug References: 886378,924018,929339
CVE References: CVE-2015-1779,CVE-2015-3456
Sources used:

SUSE Linux Enterprise Server 12 (src): qemu-2.0.2-46.1
SUSE Linux Enterprise Desktop 12 (src): qemu-2.0.2-46.1
SUSE-SU-2015:0923-1: An update that fixes four vulnerabilities is now available.

Category: security (important)
Bug References: 922705,922709,927967,929339
CVE References: CVE-2015-2751,CVE-2015-2752,CVE-2015-3340,CVE-2015-3456
Sources used:

SUSE Linux Enterprise Software Development Kit 12 (src): xen-4.4.2_04-18.1
SUSE Linux Enterprise Server 12 (src): xen-4.4.2_04-18.1
SUSE Linux Enterprise Desktop 12 (src): xen-4.4.2_04-18.1
SUSE-SU-2015:0927-1: An update that solves one vulnerability and has two fixes is now available.

Category: security (important)
Bug References: 910441,927967,929339
CVE References: CVE-2015-3456
Sources used:

SUSE Linux Enterprise Software Development Kit 11 SP3 (src): xen-4.2.5_06-0.7.1
SUSE Linux Enterprise Server 11 SP3 (src): xen-4.2.5_06-0.7.1
SUSE Linux Enterprise Desktop 11 SP3 (src): xen-4.2.5_06-0.7.1
SUSE-SU-2015:0929-1: An update that fixes three vulnerabilities is now available.

Category: security (important)
Bug References: 877642,877645,929339
CVE References: CVE-2014-0222,CVE-2014-0223,CVE-2015-3456
Sources used:

SUSE Linux Enterprise Server 11 SP1 LTSS (src): kvm-0.12.5-1.26.1
SUSE-SU-2015:0940-1: An update that fixes two vulnerabilities is now available.

Category: security (important)
Bug References: 927967,929339
CVE References: CVE-2015-3340,CVE-2015-3456
Sources used:

SUSE Linux Enterprise Server 11 SP1 LTSS (src): xen-4.0.3_21548_18-0.21.1
SUSE-SU-2015:0889-2: An update that fixes one vulnerability is now available.

Category: security (important)
Bug References: 929339
CVE References: CVE-2015-3456
Sources used:

SUSE Linux Enterprise Server 10 SP4 LTSS (src): xen-3.2.3_17040_46-0.15.1
SUSE-SU-2015:0943-1: An update that solves one vulnerability and has one errata is now available.

Category: security (important)
Bug References: 834196,929339
CVE References: CVE-2015-3456
Sources used:

SUSE Linux Enterprise Server 11 SP2 LTSS (src): kvm-0.15.1-0.29.1
SUSE-SU-2015:0944-1: An update that solves two vulnerabilities and has one errata is now available.

Category: security (important)
Bug References: 910441,927967,929339
CVE References: CVE-2015-3340,CVE-2015-3456
Sources used:

SUSE Linux Enterprise Server 11 SP2 LTSS (src): xen-4.1.6_08-0.11.1
openSUSE-SU-2015:0983-1: An update that fixes two vulnerabilities is now available.

Category: security (important)
Bug References: 927967,929339
CVE References: CVE-2015-3340,CVE-2015-3456
Sources used:

openSUSE 13.1 (src): xen-4.3.4_04-44.1
all out (except 13.2 xen)
openSUSE-SU-2015:1092-1: An update that solves 17 vulnerabilities and has 10 fixes is now available.

Category: security (important)
Bug References: 861318,882089,895528,901488,903680,906689,910254,912011,918995,918998,919098,919464,919663,921842,922705,922706,922709,923758,927967,929339,931625,931626,931627,931628,932770,932790,932996
CVE References: CVE-2014-3615,CVE-2015-2044,CVE-2015-2045,CVE-2015-2151,CVE-2015-2152,CVE-2015-2751,CVE-2015-2752,CVE-2015-2756,CVE-2015-3209,CVE-2015-3340,CVE-2015-3456,CVE-2015-4103,CVE-2015-4104,CVE-2015-4105,CVE-2015-4106,CVE-2015-4163,CVE-2015-4164
Sources used:

openSUSE 13.2 (src): xen-4.4.2_06-23.1