Bug 615929 (CVE-2010-2240) - VUL-0: CVE-2010-2240: kernel: heap/stack overlapping
Summary: VUL-0: CVE-2010-2240: kernel: heap/stack overlapping
Status: RESOLVED FIXED
Alias: CVE-2010-2240
Product: SUSE Security Incidents
Classification: Novell Products
Component: General
Version: unspecified
Hardware: Other Linux
Priority: P2 - High    Severity: Major
Target Milestone: ---
Deadline: 2010-08-27
Assignee: Jeff Mahoney
QA Contact: Security Team bot
URL:
Whiteboard: maint:released:sle10-sp3:35454 maint:...
Keywords:
Depends on:
Blocks:
 
Reported: 2010-06-21 13:53 UTC by Sebastian Krahmer
Modified: 2017-03-20 21:20 UTC (History)
5 users

See Also:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments
Condensed backport (3.41 KB, patch)
2010-08-31 16:26 UTC, Jeff Mahoney
Details | Diff

Description Sebastian Krahmer 2010-06-21 13:53:18 UTC
Subject: [vendor-sec] X.Org server exposes kernel vulnerability
Date: Sun, 20 Jun 2010 22:29:30 +0200
From: Matthieu Herrb <matthieu.herrb@laas.fr>
To: vendor-sec@lst.de

Hi,

xorg-security got the attached vulnerability report from Rafal Wojtczuk
last week.

A short analysis shows that the problem looks like a generic kernel
vulnerability that didn't get much attention 5 years ago, when the
initial paper was published at CanSecWest 2005. But Rafal found out that
the X server has a number of properties that make it a good vector for
exploiting it.

Rafal suggests that setting RLIMIT_AS in the X server to 1.5GB on
x86_32 and to something like 10GB on 64-bit arches should be enough to
fix the issue.

He agrees to keep this under embargo until August 1st.
Please keep him CC'ed.

There's no CVE id allocated for this, AFAIK.

--- Cut ---
Xorg server is prone to large memory management vulnerabilities

1) Summary

        A malicious authenticated client can force the Xorg server to fill all
of its address space. This may result in the process stack ending up in an
unexpected region, and in the execution of arbitrary code with the server's
privileges (root). Both x86_32 and x86_64 platforms are affected, at least
when running on Linux.

        Note that depending on the system configuration, local unprivileged
users may by default be able to start an instance of the Xorg server that
requires no authentication, and exploit it. Also, if an attacker exploits
a vulnerability in a GUI application (e.g. a web browser), they will be
able to attack the X server.

        For a local attacker who can use the MIT-SHM extension, the
exploit is very reliable.

2) Affected and patched software versions
...

3) Attack scenario

        The class of vulnerabilities mentioned in the title is described by
Gael Delalleu in [1]. Applying this paper to the X server's case, an attacker
can instruct the Xorg server to allocate many large pixmaps. This may result
in the stack and mmapped memory regions becoming close together, enabling all
the tricks described in Gael's paper.

        In fact, the X server's case is special, because of the MIT-SHM
extension. A local attacker can almost completely exhaust the X server's
address space, then create a shared memory segment S and force the X server
to attach it at the only available region left, just below the stack. The
attacker then instructs the X server to call a recursive function, which
results in the stack being extended and the stack pointer moving into S for
a brief period of time (during the recursion). The attacker can then write
to S; this overwrites stack locations and allows arbitrary code execution.
So, unlike in Gael's paper, we do not need to trigger data structure
corruption by expanding the stack; we can write to the expanded stack
directly, which makes this attack 100% reliable (and somewhat unique).

        Without MIT-SHM, it is possible for an attacker to allocate a pixmap
and have the X server place the stack top there for a moment, but by the
time the attacker instructs the X server to write to this pixmap, the stack
is already back in a safe location (because the recursing function has
already completed).

Attack scenario (in the case of Linux):
Step 1:
On the x86_64 platform, instruct the X server to allocate as many
32000x32000 pixmaps (the largest allowed by X) as possible. Note that on
x86_64, not all of the 64-bit address space is available - legal addresses
must be canonical (the top 16 bits must be all 0 or all 1), leaving a
48-bit address space, of which Linux gives user space the lower half.
Linux and X impose additional restrictions on the mmap return value; as a
result, circa 36000 pixmaps exhaust all of the address space. Note that
the X server does not initialize pixmap contents (it just reserves a VMA
for each); my tests showed that only about 800MB of RAM is needed for
this step. This step is not necessary in the x86_32 case.
Step 2:

shm_seg_size = 1 << 24
while shm_seg_size >= 4096 do
        shm_seg = shmget(..., shm_seg_size, ...)
        have X attach shm_seg via XShmAttach
        if XShmAttach fails, then shm_seg_size /= 2
done
Similarly, creating and attaching a shared memory segment requires
resident memory only for the control structures, not for the full segment
contents; thus little RAM is required.
On Linux, the maximum number of shm segments is limited by the shmmni
kernel variable (exported in /proc/sys/kernel/shmmni). By default it is
4096, which is why, on 64-bit platforms, we need to shrink the available
address space by allocating pixmaps first.
Step 3:
Allocate windows arranged so that when X processes them, some function F
is called recursively. Trigger F's recursion.
Step 4:
Find a shm segment S that has nonzero content. Nonzero content means the
X server's stack was resident in this segment during F's recursion.
Step 5:
Spawn a process W that continuously overwrites the bottom page of S with
a custom payload.
Step 6:
Trigger F's recursion. When one of the F invocations returns, it will pick
up its return address from our payload. It is a race (W must write to the
stack after F has placed its return address there, but before F returns),
but one that is reliably winnable.

4) The fix
[Probably prefault stack pages and place a guard PROT_NONE page below
them. Not perfect; maybe something else will pop up.]

5) Other notes
        The attack has been reproduced on a Fedora 13 default install, both
32- and 64-bit. Local users (say, logged in via ssh) can run "Xorg :1" and
attack this process. SELinux in enforcing mode neither prevents
exploitation nor limits the capabilities of the executed code - it is
impossible to sandbox a process that requires iopl privileges. OpenBSD's
privilege-separated X server may resist root compromise; this was not
verified.
        On Fedora, the Xorg executable base is not randomized, so we may
happily return into locations in the Xorg executable (they are at constant
addresses): into execl@got in the x86_32 case, or into the middle of
os/utils.c!System in the x86_64 case. If the Xorg executable base were
randomized (a PIE executable), its base would leak in the stack contents,
so that would not help, either.

        If an attacker is not local (or cannot use MIT-SHM), the attack is
still possible - the expanded stack may overwrite other data structures.
This has not been researched, and would probably be much less reliable.

        In the case of the Qubes [2] architecture, the X server maps pages
from untrusted VMs, so its address space can be attacked, too. The
difference from a vanilla Linux+Xorg installation is that untrusted parties
(VMs) do not talk to the X server directly: access is proxied by a
qubes_guid process, and only a very limited set of actions is actually
executed on the real X server (BTW, this prevents all attacks against the X
server known so far). The attack described here is unique, as it can
potentially be conducted merely by creating windows, which obviously must
be allowed. However, one of the proactive safety measures implemented in
qubes_guid since the very beginning was to detect a suspiciously high
number of allocations and ask the user (in dom0) for permission to
continue. This prevents the attack.

6) Credits
Rafal Wojtczuk <rafal@invisiblethingslab.com>

7) References
[1] Gael Delalleu, "Large memory management vulnerabilities",
http://cansecwest.com/core05/memory_vulns_delalleau.pdf
[2] Qubes OS project, http://www.qubes-os.org
Comment 1 Sebastian Krahmer 2010-06-21 13:54:01 UTC
Even though we have ulimits, they appear to be dynamically
generated during OS setup, and are too large to prevent
exploitation.
Comment 2 Ludwig Nussel 2010-06-22 12:18:42 UTC
do_shmat() in the kernel has this code:

/*
 * If shm segment goes below stack, make sure there is some
 * space left for the stack to grow (at least 4 pages).
 */
if (addr < current->mm->start_stack &&
    addr > current->mm->start_stack - size - PAGE_SIZE * 5)
	goto invalid;

Looks like a halfhearted attempt to stop intersection of memory regions.
Comment 3 Ludwig Nussel 2010-06-23 08:46:42 UTC
There has been no comment on the issue from the (upstream) kernel folks at all so far.
Comment 4 Ludwig Nussel 2010-06-25 07:09:32 UTC
CVE-2010-2240
Comment 5 Marcus Meissner 2010-06-28 09:56:57 UTC
Linus is not happy with the current patch.

Subject: Re: [Security] [vendor-sec] X.Org server exposes kernel vulnerability
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Eugene Teo <eugeneteo@kernel.sg>
Cc: Andrea Arcangeli <aarcange@redhat.com>, Nick Piggin <npiggin@suse.de>,
   security@kernel.org, Ludwig Nussel <ludwig.nussel@suse.de>,
   Matthieu Herrb <matthieu.herrb@laas.fr>,
   Andrew Morton <akpm@linux-foundation.org>, vendor-sec@lst.de,
   rafal@invisiblethingslab.com

On Thu, Jun 24, 2010 at 5:05 PM, Eugene Teo <eugeneteo@kernel.sg> wrote:
>
> As discussed, can you please review the attached rewritten
> heap-stack-gap patch from Nick? We should also change the default gap
> size to something larger for 64-bit architectures.

I detest this patch.

I think it would be _much_ better to instead just change the semantics
of grow-down stack segments very subtly, and just reserve the poison
page in the vma itself.  That would automatically mean that nobody
else can grow into the stack segment, _without_ adding these kinds of
random ad-hoc games to core mmap functionality.

In other words, I'd much rather change how grows-down works, so that
we always guarantee that when we fault in a stack page, we make sure
to extend the vma downwards. We could do it in "handle_pte_fault()"
for the !pte_present case or something, and actually move the whole
"grow stack down" logic in there instead. Just make sure that the
poison gap size is MIN(PAGE_SIZE, MAX_STACK_ADJUST).

In fact, I bet that would simplify a lot of things. We'd never need to
care about GROWSDOWN in the actual architecture fault handling
routines, because any valid stack expansion would always hit the
existing stack vma, never hit "below" it. So all the expand_stack()
logic would move from arch code to generic VM code.

And I bet the patch would be smaller than that horror you posted.

                            Linus
Comment 6 Nick Piggin 2010-06-28 15:29:44 UTC
I think for SLES10 and SLES11, the current patch should be fine. Possibly for SP1 we could look at backporting a patch that Linus takes into mainline. But even then it probably isn't needed.

However if an untrusted client can call these arbitrary recursive functions, then it seems like there may be a bigger problem. Manipulating the stack pointer is more or less like a random memory scribble, so I don't know how this is safe.

If there are strict limits on how much the client can manipulate the stack pointer in the server, then we can put in a proper guard. But this really seems like an X issue.
Comment 7 Thomas Biege 2010-08-09 07:54:59 UTC
mass change P5->P3
Comment 9 Marcus Meissner 2010-08-19 09:53:49 UTC
We already had Andrea Arcangeli's patch for heap-stack gap handling...

- SLE11 has Nick's forward port
- SLES9 had Andrea's code
- openSUSE also has Nick's forward ports


Only SLES10 is urgently missing a patch at the moment.

Not sure if we want to replace Andrea's earlier patch with the new one.
Comment 10 Swamp Workflow Management 2010-08-20 11:15:45 UTC
The SWAMPID for this issue is 35321.
This issue was rated as important.
Please submit fixed packages by 2010-08-27.
When done, please reassign the bug to security-team@suse.de.
Patchinfo will be handled by security team.
Comment 11 Ludwig Nussel 2010-08-23 10:52:33 UTC
please submit a sles10 kernel with the patch enabled
Comment 14 Jeff Mahoney 2010-08-31 16:26:24 UTC
Created attachment 386572 [details]
Condensed backport

This is the condensed version of 6 patches that I'll ultimately commit to the SLES10 repo. I'm awaiting review on the kernel@suse.de mailing list to verify that the last chunk is correct.
Comment 19 Jeff Mahoney 2010-08-31 19:17:39 UTC
Huh, turns out that Jiri fixed up Andrea's fix to work on SLES10 already but didn't comment here. Closing as fixed with his patch.
Comment 20 Swamp Workflow Management 2010-09-03 14:44:59 UTC
Update released for: kernel-bigsmp, kernel-bigsmp-debuginfo, kernel-debug, kernel-debug-debuginfo, kernel-default, kernel-default-debuginfo, kernel-kdump, kernel-kdump-debuginfo, kernel-kdumppae, kernel-kdumppae-debuginfo, kernel-smp, kernel-smp-debuginfo, kernel-source, kernel-source-debuginfo, kernel-syms, kernel-syms-debuginfo, kernel-vmi, kernel-vmi-debuginfo, kernel-vmipae, kernel-vmipae-debuginfo, kernel-xen, kernel-xen-debuginfo, kernel-xenpae, kernel-xenpae-debuginfo
Products:
SLE-DEBUGINFO 10-SP3 (i386)
SLE-DESKTOP 10-SP3 (i386)
SLE-SDK 10-SP3 (i386)
SLE-SERVER 10-SP3 (i386)
Comment 21 Swamp Workflow Management 2010-09-03 15:17:12 UTC
Update released for: kernel-debug, kernel-debug-debuginfo, kernel-default, kernel-default-debuginfo, kernel-kdump, kernel-kdump-debuginfo, kernel-smp, kernel-smp-debuginfo, kernel-source, kernel-source-debuginfo, kernel-syms, kernel-xen, kernel-xen-debuginfo
Products:
SLE-DEBUGINFO 10-SP3 (x86_64)
SLE-DESKTOP 10-SP3 (x86_64)
SLE-SAP-APL 10-SP3 (x86_64)
SLE-SDK 10-SP3 (x86_64)
SLE-SERVER 10-SP3 (x86_64)
Comment 22 Swamp Workflow Management 2010-09-03 15:32:44 UTC
Update released for: kernel-debug, kernel-debug-debuginfo, kernel-default, kernel-default-debuginfo, kernel-source, kernel-source-debuginfo, kernel-syms
Products:
SLE-DEBUGINFO 10-SP3 (ia64)
SLE-SDK 10-SP3 (ia64)
SLE-SERVER 10-SP3 (ia64)
Comment 23 Swamp Workflow Management 2010-09-03 15:38:07 UTC
Update released for: kernel-default, kernel-default-debuginfo, kernel-source, kernel-syms
Products:
SLE-DEBUGINFO 10-SP3 (s390x)
SLE-SERVER 10-SP3 (s390x)
Comment 24 Swamp Workflow Management 2010-09-03 15:54:24 UTC
Update released for: kernel-default, kernel-default-debuginfo, kernel-iseries64, kernel-iseries64-debuginfo, kernel-kdump, kernel-kdump-debuginfo, kernel-ppc64, kernel-ppc64-debuginfo, kernel-source, kernel-source-debuginfo, kernel-syms
Products:
SLE-DEBUGINFO 10-SP3 (ppc)
SLE-SDK 10-SP3 (ppc)
SLE-SERVER 10-SP3 (ppc)
Comment 25 Marcus Meissner 2010-09-27 13:13:49 UTC
all done I think.
Comment 26 Swamp Workflow Management 2011-06-14 16:48:47 UTC
Update released for: kernel-default, kernel-default-debuginfo, kernel-source, kernel-syms
Products:
SLE-SERVER 10-SP2-LTSS (s390x)
Comment 27 Swamp Workflow Management 2011-06-14 17:19:25 UTC
Update released for: kernel-bigsmp, kernel-bigsmp-debuginfo, kernel-debug, kernel-debug-debuginfo, kernel-default, kernel-default-debuginfo, kernel-kdump, kernel-kdump-debuginfo, kernel-kdumppae, kernel-kdumppae-debuginfo, kernel-smp, kernel-smp-debuginfo, kernel-source, kernel-source-debuginfo, kernel-syms, kernel-syms-debuginfo, kernel-vmi, kernel-vmi-debuginfo, kernel-vmipae, kernel-vmipae-debuginfo, kernel-xen, kernel-xen-debuginfo, kernel-xenpae, kernel-xenpae-debuginfo
Products:
SLE-SERVER 10-SP2-LTSS (i386)
Comment 28 Swamp Workflow Management 2011-06-14 17:24:50 UTC
Update released for: kernel-debug, kernel-debug-debuginfo, kernel-default, kernel-default-debuginfo, kernel-kdump, kernel-kdump-debuginfo, kernel-smp, kernel-smp-debuginfo, kernel-source, kernel-source-debuginfo, kernel-syms, kernel-xen, kernel-xen-debuginfo
Products:
SLE-SERVER 10-SP2-LTSS (x86_64)