

TECHNOCRAT.NET ®


  Are buffer-overflow security exploits really Intel and OS makers' fault?
Posted by: Bruce Perens on Friday July 28, @06:51PM

Buffer-overflow security exploits are common, but your computer shouldn't really be vulnerable to them. It seems the main problem is with the i386 architecture. Secondary to that, there's the problem of operating systems that could protect against this sort of exploit by using a simple facility of the virtual memory hardware, but don't.

A common way that computer criminals break the security of your system is the buffer-overflow exploit. The system cracker sends a server on your system a too-large input. The server code should check the input length, but doesn't, and it reads enough to overwrite something beyond the end of the input buffer. In the case of a security exploit, the system cracker is aiming to overwrite the return stack or a function table. Contained within the overlarge input is the executable code of an injection vector. When your server code returns or calls through the function table, the injection vector runs and the cracker has control of your system.
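
A minimal sketch of the kind of coding mistake involved; the function and buffer size here are hypothetical, not taken from any particular server:

    /* Hypothetical vulnerable handler: a fixed-size stack buffer and an
       unchecked copy are the whole bug. */
    #include <string.h>

    void handle_request(const char *input)
    {
        char buf[128];       /* the saved return address sits just past this */

        strcpy(buf, input);  /* no length check: a long enough input runs off
                                the end of buf and overwrites the saved return
                                address with the attacker's injection vector */
    }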

Program memory is divided into "data" spaces such as the heap provided by malloc(), the stack, and static variables, and "code" spaces such as the instructions of your program and the shared libraries it uses. In general, code isn't writable, but on the i386 architecture, writable data spaces are also executable. This is because the Intel i386 architecture does not have an execute-protection bit in its virtual memory page tables, a facility that has been available in virtual-memory systems since the 1970s. Most CPUs today, including the various Intel Pentium models and those manufactured by Intel "compatible" competitors, use the i386 architecture, and thus allow buffer overflows to inject executable code.

On processors with an execute-protect bit on their VM pages and an operating system that uses it properly, buffer-overflow security bugs can never introduce new executable code into a process. We can make this facility available in operating systems like Linux as users transition to processors like Intel's new IA-64 architecture (also known as Merced or Itanium) and the Alpha and MIPS chips. I don't think any of these chips have any reason to need the execute bit turned on for stack or data pages. Rare programs that actually run self-modifying code, like Java just-in-time compilers and programs that use executable "trampoline" code on the stack, would have to turn off this protection, but that should be done selectively, on a page-by-page basis. Linux already has a system call, mprotect(), to do that.
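
A sketch of how such a program might opt back in for a single page, assuming the POSIX mmap()/mprotect() interface (the helper function itself is hypothetical):

    /* Hypothetical JIT-style helper: copy generated machine code into a
       fresh page and make only that page executable, leaving stack and
       data pages protected.  Assumes len fits in one page. */
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *make_executable_copy(const void *code, size_t len)
    {
        size_t pagesize = (size_t) sysconf(_SC_PAGESIZE);
        void *page;

        if (len > pagesize)
            return NULL;
        page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return NULL;
        memcpy(page, code, len);
        if (mprotect(page, pagesize, PROT_READ | PROT_EXEC) != 0)
            return NULL;   /* only this one page ever becomes executable */
        return page;
    }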

I'm told that someone named "Solar Designer" actually produced a patch to do this for Linux, but that Linus hasn't accepted the patch into the main kernel source. Apparently, there's even a way to make it work on the i386, for the stack but not data regions, by using segmentation instead of paging. I can see why that would inspire Linus' esthetic revulsion, even though it's an important security fix. Also, someone showed one way to defeat the patch, but a good many exploits would be stopped dead. The people on the Linux kernel list, I'm told, have discussed and rejected this idea twice now. Maybe it's time for the rest of us to take it more seriously. There's also the StackGuard Compiler, which hardens code against stack attacks and can detect them. We need both of these tools in our systems.
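
For a sense of what StackGuard adds, here is a rough hand-written analogue of a compiler-inserted canary check. The real compiler controls the stack layout and emits the check automatically, so treat the placement below as purely illustrative:

    /* Illustrative canary check.  StackGuard emits the equivalent around
       each function; the compiler, not the programmer, guarantees that the
       canary actually sits between the buffer and the saved return address. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static const unsigned long stack_canary = 0x6b8b4567UL;  /* illustrative */

    void guarded_handler(const char *input)
    {
        unsigned long canary = stack_canary;
        char buf[128];

        strcpy(buf, input);           /* the same unchecked copy as before */

        if (canary != stack_canary) { /* smashed on the way to the return
                                         address: abort rather than return
                                         into injected code */
            fprintf(stderr, "stack smashing detected\n");
            abort();
        }
    }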




 


    And how are you crazy?
    by David Terrell on Friday July 28, @08:23PM
    Let's see... Solaris on Ultrasparc ... has buffer overflows. How is this Intel's fault? Architecturally, it's virtually impossible to avoid having some pages that are both writable and executable, which means you're going to end up with security holes. Buffer overflows come from improper code, period, and should be fixed, period.

    Why does everyone in the world think Linux/x86 is the only platform?
    Re: Are buffer-overflow security exploits really Intel's fault?
    by Shang-Shan Chong on Friday July 28, @09:08PM
    Pardon me for asking, but wasn't the Morris sendmail worm a buffer overflow exploit of unices that ran on non-Intel hardware? Was this sort of hardware protection available (but unused) then?


    Anyway, you do have a good point, and newer CPUs and OSes should be designed with these features in mind. What about portables like the Dragonball? I suspect they don't have any such protection at all. Can one imagine a worm that is beamed from one Palm to another? *shudder*

    Re: Are buffer-overflow security exploits really Intel's fault?
    by Luke Scharf on Friday July 28, @10:27PM
    The lack of division between code and data space is pretty basic to the von Neumann architecture. Without lecturing, this lack of division makes computers more flexible, more efficient, and less complicated. Programs just have to be written to handle unreasonable data.

    It's up to the operating system to enforce the distinction. The Windows macro viruses are an excellent example of what happens when everything is considered a program.

    I will elaborate on either of these two points if asked.
    Re: Are buffer-overflow security exploits really Intel's fault?
    by Steve Underwood on Friday July 28, @10:59PM
    Until recently there was very little economic pressure to add this type of feature to processors and OSes. The processors which have such features didn't get them for security reasons, but to make the programmer's life easier when debugging. From the i386, to the i486, to the i586 and i586+MMX, to the i686 and i686+MMX and i686+MMX+SSE there have been extensive compatible additions to the architecture. Why wasn't a protection gate added during one of these updates? Because nobody screamed for it.

    In 2000 security is becoming a hot topic. Perhaps the Willamette will finally get Bruce's heart's desire.
    Re: Are buffer-overflow security exploits really Intel's fault?
    by Steve Underwood on Friday July 28, @11:26PM
    If you protect the stack segment from execution you make life harder for someone trying to plant evil code. However, it's just as easy to crash the software, or alter its execution sequence, since you can still screw the return addresses. If someone is devious enough, they can probably still find somewhere in the code to point to that has a seriously undesirable effect.

    I don't know of a current general purpose architecture which really deals with this. It's certainly possible. Some non-mainstream processors have a return address stack quite separate from main memory, which is incorruptible. These days it's cheap to make such a stack huge, so its size will always be adequate.
    Solar Designer's README
    by Bruce Perens on Saturday July 29, @02:45AM
    Linux kernel patch from the Openwall Project
    ----------------------------------------------

    ==========
    Overview
    ==========

    This patch is a collection of security-related features for the Linux
    kernel, all configurable via the new 'Security options' configuration
    section. In addition to the new features, some versions of the patch
    contain various security fixes. The number of such fixes changes from
    version to version, as some are becoming obsolete (such as because of
    the same problem getting fixed with a new kernel release), while other
    security issues are discovered.

    Non-executable user stack area
    --------------------------------

    Most buffer overflow exploits are based on overwriting a function's return
    address on the stack to point to some arbitrary code, which is also put
    onto the stack. If the stack area is non-executable, buffer overflow
    vulnerabilities become harder to exploit.

    Another way to exploit a buffer overflow is to point the return address to
    a function in libc, usually system(). This patch also changes the default
    address that shared libraries are mmap()'ed at to make it always contain a
    zero byte. This makes it impossible to specify any more data (parameters
    to the function, or more copies of the return address when filling with a
    pattern), -- in many exploits that have to do with ASCIIZ strings.

    However, note that this patch is by no means a complete solution, it just
    adds an extra layer of security. Many buffer overflow vulnerabilities
    will remain exploitable in a more complicated way, and some will even remain
    unaffected by the patch. The reason for using such a patch is to protect
    against some of the buffer overflow vulnerabilities that are yet unknown.

    Also, note that some buffer overflows can be used for denial of service
    attacks (usually in non-respawning daemons and network clients). A patch
    like this cannot do anything against that.

    It is important that you fix vulnerabilities as soon as they become known,
    even if you're using the patch. The same applies to other features of the
    patch (discussed below) and their corresponding vulnerabilities.

    Restricted links in /tmp
    --------------------------

    I've also added a link-in-+t restriction, originally for Linux 2.0 only,
    by Andrew Tridgell. I've updated it to prevent the use of a hard link in
    an attack instead, by not allowing regular users to create hard links to
    files they don't own. This is usually the desired behavior anyway, since
    otherwise users couldn't remove such links they've just created in a +t
    directory, and because of disk quotas.

    Restricted FIFOs in /tmp
    --------------------------

    In addition to restricting links, you might also want to restrict writes
    into untrusted FIFOs (named pipes), to make data spoofing attacks harder.
    Enabling this option disallows writing into FIFOs not owned by the user in
    +t directories, unless the owner is the same as that of the directory or
    the FIFO is opened without the O_CREAT flag.

    Restricted /proc
    ------------------

    This was originally a patch by route that only changed the permissions on
    some directories in /proc, so you had to be root to access them. Then
    there were similar patches by others. I found them all quite unusable for
    my purposes, on a system where I wanted several admins to be able to see
    all the processes, etc, without having to su root (or use sudo) each time.
    So I had to create my own patch that I include here.

    This option restricts the permissions on /proc so that non-root users can
    see their own processes only, and nothing about active network connections,
    unless they're in a special group. This group's id is specified via the
    gid= mount option, and is 0 by default. (Note: if you're using identd, you
    will need to edit the inetd.conf line to run identd as this special group.)
    Also, this disables dmesg(8) for the users. You might want to use this
    on an ISP shell server where privacy is an issue. Note that these extra
    restrictions can be trivially bypassed with physical access (without having
    to reboot).

    When using this part of the patch, most programs (ps, top, who) work as
    desired -- they only show the processes of this user (unless root or in
    the special group, or running with the relevant capabilities on 2.2), and
    don't complain they can't access others. However, there's a known problem
    with w(1) in recent versions of procps, so you should apply the included
    patch to procfs if this applies to you.

    Special handling of fd 0, 1, and 2
    ------------------------------------

    File descriptors 0, 1, and 2 have a special meaning for the C library and
    lots of programs. Thus, they're often referenced by number. Still, it is
    normally possible to execute a program with one or more of these fd's
    closed, and any open(2) calls it might do will happily provide these fd
    numbers. The program (or the libraries it is linked with) will continue
    using the fd's for their usual purposes, in reality accessing files the
    program has just opened. If such a program is installed SUID and/or SGID,
    then we might have a security problem.

    Enable this option to ensure that fd's 0, 1, and 2 are always open on
    startup of a SUID/SGID binary. If any of the fd's is closed, "/dev/null"
    will be opened for it (the device itself; you don't need to have /dev in
    the filesystem for that to work, such as in a chroot). This part of the
    patch is by Pavel Kankovsky, I've only ported it to Linux 2.2 (any errors
    are mine, of course).
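
    A userland sketch of the same check, for illustration only (the patch
    itself does the equivalent inside execve() for SUID/SGID binaries):

    /* Illustration: before anything else, make sure fds 0, 1, and 2 are
       open, pointing any closed one at /dev/null so that later open(2)
       calls can't land on them. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void sanitize_std_fds(void)
    {
        int fd;

        for (fd = 0; fd <= 2; fd++) {
            if (fcntl(fd, F_GETFD) == -1) {            /* this fd is closed */
                int nfd = open("/dev/null", fd ? O_WRONLY : O_RDONLY);

                if (nfd == -1)
                    exit(1);
                if (nfd != fd) {
                    dup2(nfd, fd);
                    close(nfd);
                }
            }
        }
    }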

    Enforce RLIMIT_NPROC on execve(2)
    -----------------------------------

    Linux lets you set a limit on how many processes a user can have, via a
    setrlimit(2) call with RLIMIT_NPROC. Unfortunately, this limit is only
    looked at when a new process is created on fork(2). If a process changes
    its UID, it might exceed the limit for its new UID.

    This is not a security issue by itself, as changing the UID is a privileged
    operation. However, there're privileged programs that want to switch to a
    user's context, including setting up some resource limits. The only fork(2)
    required (if at all) is done before switching the UID, and thus doesn't
    result in a check against RLIMIT_NPROC.

    Enable this option to enforce RLIMIT_NPROC on execve(2) calls. (The Linux
    2.0 version of this patch only checks the limit for processes that have
    their "dumpable" flag reset, such as due to an UID change, to reduce the
    performance impact.)

    Note that there's at least one good reason I am not enforcing the limit
    right after setuid(2) calls: some programs don't expect setuid(2) to fail
    when running as root.
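
    For illustration, the pattern just described might look like this in a
    privileged program (names and limits are made up); no fork(2) follows
    the setuid(2), so only an execve(2)-time check catches the new limit:

    /* Illustration: limits are set up and the UID is switched, but the
       only fork(2) happened earlier, so without this option RLIMIT_NPROC
       is never checked for the new UID.  Error handling omitted. */
    #include <sys/resource.h>
    #include <unistd.h>

    void run_as_user(uid_t uid, const char *path,
                     char *const argv[], char *const envp[])
    {
        struct rlimit rl = { 64, 64 };   /* illustrative process limit */

        setrlimit(RLIMIT_NPROC, &rl);    /* set up the user's limits */
        setuid(uid);                     /* switch to the user's context */
        execve(path, argv, envp);        /* this option re-checks NPROC here */
    }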

    Destroy shared memory segments not in use
    -------------------------------------------

    Linux lets you set resource limits, including on how much memory a process
    can consume, via setrlimit(2). Unfortunately, shared memory segments are
    allowed to exist without association with any process, and thus might not
    be counted against any resource limits.

    This option automatically destroys shared memory segments when their attach
    count becomes zero after a detach or a process termination. It will also
    destroy segments that were created, but never attached to, on exit from the
    process. (In case you're curious, the only use left for IPC_RMID is to
    immediately destroy an unattached segment.)
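
    For illustration, the lifecycle in question (key, size, and flags are
    arbitrary):

    /* Illustration of the System V shm lifecycle: unless someone calls
       shmctl(IPC_RMID), the segment normally survives even after the last
       detach; with this option the kernel destroys it at that point. */
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int shm_demo(void)
    {
        int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        void *p;

        if (id == -1)
            return -1;
        p = shmat(id, NULL, 0);          /* attach count goes to 1 */
        if (p == (void *) -1)
            return -1;
        shmdt(p);                        /* attach count back to 0: this is
                                            where the segment gets destroyed */
        return 0;
    }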

    Of course, this breaks the way things are defined, so some applications
    might stop working. Apache and PostgreSQL are known to work, though. :-)

    Note that this feature will do you no good unless you also configure your
    resource limits (in particular, RLIMIT_AS and RLIMIT_NPROC). Most systems
    don't need this.

    Privileged IP aliases (Linux 2.0 only)
    ----------------------------------------

    It is sometimes desirable not to let regular users put their services on
    some of the IP addresses configured on the system. For example, this is
    the case when providing web hosting services with shell and/or CGI access,
    so that one user can't abuse the other domains hosted on the same system.

    When this option is enabled, only root can bind sockets to addresses of
    privileged aliased interfaces: those with slot numbers of the first half
    of the allowed range. The default limit is also expanded to 2048 aliases,
    so that the familiar slot numbers of 0 to 1023 become privileged.

    ================
    How to install
    ================

    Apply the patch (use 'patch -p0'). In kernel configuration, go to the new
    'Security options' section. Read help for the suboptions, and configure
    them.

    If desired, edit /etc/fstab to specify the group id for accessing /proc.
    Also, make sure you have no extra "mount" commands in the startup scripts,
    as these might override your fstab settings; this is the case for some
    distributions, including Red Hat. (Note that you won't be able to specify
    the GID by remounting /proc on a running system: fs specific options are
    not supported at that stage.)
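
    For example, a hypothetical fstab entry giving group 110 the extra /proc
    access might look like this (the group id is made up):

    none  /proc  proc  gid=110  0  0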

    Build the kernel and reboot.

    You may also want to add the following line to your /etc/syslog.conf to
    log [security] alerts separately:

    kern.alert /var/log/alert

    Additionally, you may do something like this (assuming the log file will
    be empty most of the time):

    > /var/log/alert
    chown root.staff /var/log/alert
    chmod 640 /var/log/alert
    echo "less -XEU /var/log/alert" >> ~non-root/.bash_profile
    chattr +a /var/log/alert

    [ The last command doesn't do much unless you raise your securelevel on
    Linux 2.0, for example, with securelevel.c supplied with this patch.
    Raising the securelevel this way will force someone to reboot your system
    in order to modify the log file, which you should then be watching for.
    For those who have asked: securelevel is not a part of this patch any
    longer, so its use is only mentioned here for those who already know what
    it is and what it isn't. ]

    Ensure that the non-executable stack part of the patch is working, using
    stacktest.c for that purpose -- running './stacktest -e' should segfault,
    and a message should get logged to /var/log/alert (if you've followed the
    syslogd configuration described above). If you've enabled the support for
    GCC trampolines, try running './stacktest -t', it should succeed. If you
    have trampoline call emulation enabled on Linux 2.0, you should also try
    './stacktest -b', the simulated exploit attempt should fail even after a
    trampoline call in the same process has succeeded.

    If you enabled the link-in-+t restriction, you can also try to create a
    symlink in /tmp (as a non-root user) pointing to a file that user has no
    read access to, then switch to some other user that has the read access
    (for example, root) and try to read the file via the link (such as, with
    'cat /tmp/link'). This should fail, and a message should get logged.

    Now, you can try to create a hard link as a non-root user to a file that
    user doesn't own. This should also fail.

    ========
    F.A.Q.
    ========

    Q: Where can I find new versions of the patch?
    Q: I only have the patch for Linux 2.0, where do I get a version for
    Linux 2.2 (or vice versa)?
    A: http://www.openwall.com/linux/

    Q: Will you be updating the patch for the new kernel version 2.0.x?
    A: I will likely support 2.0.39, then we'll see.

    Q: What about 2.2.x?
    A: I will definitely support future 2.2.x versions of the kernel.

    Q: What about 2.3.x?
    A: Currently I have no plans of porting the patch to Linux 2.3.

    Q: What about 2.4.x?
    A: I will likely start supporting these kernels several revisions after
    2.4.0. My advice is to use 2.2 on "production" systems until then.

    Q: Is the patch x86-specific?
    A: Only the non-executable stack feature of the patch is x86-specific.
    The patch has been tested and is used on other architectures as well.
    In fact, I've released some minor updates of the patch after testing
    them on Alpha only in the past.

    Q: Are there any issues with the patch on SMP boxes?
    A: None that I am currently aware of. I've been running all versions
    of the patch since 2.0.33 on SMP.

    Q: Why don't they make it into the standard kernel?
    A: This is not a trivial question to answer. First, some parts of older
    versions of the patch (or equivalent, but different, fixes) are in fact
    in the kernel now. This is the reason the patch for 2.0.36 was smaller
    than it used to be in the 2.0.33 days. Now the patch for 2.2.13 is once
    again smaller than its last 2.2.12 version. :-) So, security problems in
    the kernel itself are typically getting fixed. It is, however, true that
    the security "hardening" features of the patch are not getting in. One
    of the reasons for this, is that those features could result in a false
    sense of security. Someone could then decide against fixing a hole on a
    system they administer or in software they maintain just because of these
    kernel features. If such things happen, the security is in fact relaxed,
    not improved. The rlimit restrictions I have here are temporary hacks,
    to be replaced with a real solution (beancounters), so I'm not trying to
    get them into the kernel. Finally, there are some features here that I
    think could get included (don't know of a good reason against doing so),
    such as the fd 0-2 fix.

    Q: I've applied the patch, and now my kernel doesn't compile?!
    A: Are you sure you haven't forgotten to specify the '-p0' option to patch!?

    Q: Will GCC-compiled programs that use trampolines work with the non-exec
    stack part of the patch?
    A: Yes, if you enable the support.

    Q: When I'm trying to use 'print f()' in gdb on a Linux 2.2 system with
    your patch, my program crashes. What's going on?
    A: The changes made in Linux 2.2 didn't let me port my old workaround for
    this from the Linux 2.0 version of the patch. You'll have to use chstk.c
    on the program you're debugging in order to use this feature of gdb.

    Q: What does GCC use trampolines for?
    A: Trampolines are needed to fully support one of the GNU C extensions,
    nested functions. When a nested function is called from outside of the
    one it was declared in (that is, via a function pointer), something needs
    to provide the stack frame. The bad thing is that GCC puts trampolines
    onto the stack (as they're generated at runtime). You can find an example
    in stacktest.c, included with this patch.
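
    A minimal example of the extension in question (the function names are
    illustrative); passing the nested function's address to another function
    is what makes GCC generate a trampoline on the stack:

    /* GNU C nested function: taking its address forces GCC to build a
       trampoline on the stack at run time. */
    static int apply(int (*fn)(int), int x)
    {
        return fn(x);
    }

    int scale_by(int factor)
    {
        int scale(int v)                 /* nested function (GNU C extension) */
        {
            return v * factor;           /* refers to the enclosing frame */
        }

        return apply(scale, 10);         /* this call needs the trampoline */
    }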

    Q: How do you tell a trampoline call from an exploit attempt?
    A: Since most buffer overflow exploits overwrite the return address, the
    instruction to pass control onto the stack has to be a RET. When calling
    a trampoline, the instruction is a CALL. Note that in some cases such
    autodetection can be fooled by RET'ing to a CALL instruction and making
    this CALL pass control onto the stack (in reality, this also requires a
    register to be set to the address, and only works this way on Linux 2.0).
    Read help for the 'Autodetect GCC trampolines' configuration option.

    Q: What about glibc and non-executable stack?
    A: You have to enable trampoline autodetection when using glibc 2.0.x, or
    the system won't even boot. If you're running Linux 2.0, you will likely
    also want to enable trampoline call emulation to avoid running privileged
    processes with executable stack.

    Q: I've just compiled glibc on my system, but "make check" fails while
    trying to load testobj1.so. What's going on? Will the newly compiled
    glibc work with your patch in the kernel?
    Q: What's the deal with glibc-2.1.3-dl-open.diff?
    A: The non-executable stack part of the patch changes the default address
    shared libraries are mmap()ed at. Unfortunately, some parts of (at least
    some versions of) glibc depend on this address being above ELF sections.
    This is a bug, and glibc maintainers are now aware of it. The good thing
    is that the problem is only likely to show up with the little used ORIGIN
    feature, and only when the dynamic linker is run as a standalone program.
    It is thus highly unlikely that this will cause anything other than "make
    check" to break. You can, however, use the workaround included with this
    patch.

    Q: What does the procps-2.0.6-ow1.diff patch do? Is it required for the
    kernel patch to work?
    A: This procps patch updates the stale utmp entry check, so that w(1) in
    recent versions of procps works correctly on systems with the restricted
    /proc option. If you don't experience any problems with w(1), you don't
    have to install the procps patch.

    Q: What is chstk.c for?
    A: The patch adds an extra flag to ELF and a.out headers, which controls
    whether the program will be allowed to execute code on the stack or not,
    and chstk.c is what you should use to manage the flag. You might find it
    useful if you choose to disable the GCC trampolines autodetection. BTW,
    setting the flag also restores the original address shared libraries are
    mmap()'ed at, just in case some program depends on that.

    Q: What if an attacker uses chstk.c on a buffer overflow exploit?
    A: Nothing changes. It's the vulnerable program being exploited that needs
    executable stack, not the exploit. The attacker would need write access
    to this program's binary to use chstk.c successfully.

    Q: Do I have to reboot with an unpatched kernel to try out a new overflow
    exploit to see if I'm vulnerable?
    A: No, you can use chstk.c on the vulnerable program to temporarily allow
    it to execute code on the stack. Just don't forget to reset the flag back
    when you're done. Also, be careful with relying on such tests: typically,
    they can't prove that you're not vulnerable, they can only sometimes prove
    the opposite. Note that setting the flag on Linux 2.2 systems will change
    the default stack location to be 8 MB lower than where many exploits expect
    it to be.

    Q: Why did you modify signal handler return code?
    A: Originally the kernel put some code onto the stack to return from signal
    handlers. Now signal handler returns are done via the GPF handler instead
    (an invalid magic return address is put onto the stack).

    Q: What to do if a program needs to follow a symlink in a +t directory for
    its normal operation (without introducing a security hole)?
    A: Usually such a link needs to be created only once, so create it as root
    (or the directory owner, if it's not root). Such links are followed even
    when the patch is enabled.

    Q: What will happen if someone does:

    ln -s /etc/passwd ~/link
    ln -s ~/link /tmp/link

    and the vulnerable program runs as root and writes to /tmp/link?
    A: The patch is not looking at the target of the symlink in /tmp, it only
    checks if the symlink itself is owned by the user that vulnerable program
    is running as, and doesn't follow the link if not (like in this example).

    Q: Is there some performance impact of using the patch?
    A: Well, normally the only thing affected is signal handler returns. I
    didn't want to modify the sigreturn syscall, so there is some extra code to
    set up its stack frame. I don't think this has a noticeable effect on the
    performance (and my benchmarks prove that): saved context checks and other
    signal handling stuff are taking much more time. Executing code on the
    stack was not fast anyway. Also, programs using GCC trampolines will run
    slower if trampoline calls are emulated. However, I don't know of any
    program that uses trampolines where the performance is critical (it would
    be a stupid thing to do anyway).

    --
    Solar Designer
    Re: Are buffer-overflow security exploits really Intel and OS makers fault?
    by Anonymous hero on Saturday July 29, @10:39AM
    >I'm told that someone named "Solar Designer" actually produced a patch to do this for Linux, but that Linus hasn't accepted the patch into the main kernel source.

    Where is the problem? If you feel this is a must-have, add it to the kernels *YOU* control. Like the Debian one. Unique kernel patches in "Linux" are not unheard of. Put it in and market it as a new improvement/feature...it gives your product a market differentiation.

    The "I'm told" comment makes it appear that you haven't actually researched this issue.
    Is it me, or are most of you missing something ?
    by Robert Krajewski on Saturday July 29, @01:39PM
    It is as much the fault of the designers of the C language, and especially the monstrosity of the "classic" C library, as it is the fault of OS or hardware vendors.


    There are two ways to avoid buffer overflow in user programs.


    • Use interfaces (libraries, classes, OS calls) that don't allow buffers to be overwritten. For example, all buffers have to be encapsulated in safe interfaces, or buffer-writing functions must have "size" parameters (which, of course, can be fudged, but it's better than, say, gets, which can never be safe).
    • Use a language that doesn't have the concept of pointers to raw storage, such as Lisp, Smalltalk, or Java. Even Pascal is OK.

    Of course, there are two problems with either of these approaches:

    • The language, library, OS, and driver implementations all have to be robust with respect to buffer overflow. On the other hand, buffer safety is relatively easy to check in code reviews.
    • The second problem is cultural. Most competent programmers will stick with C (or C++) because even though they know it's dangerous, hey, it's efficient, and they can handle it. Suggesting to somebody with ten years of experience in C that they ought to write a program in Java would almost be seen as an insult (even if a lot of their experience was painful).

    I disagree
    by John Hoffman on Saturday July 29, @01:44PM
    Admittedly, if binary executable data is kept separate from other operating data in a program, you'll make it impossible to write new code for execution into a program. A buffer overrun can still be used as an exploit, however, by overwriting other data. Any program that runs SUID could be fooled into performing something destructive or even into giving someone root access. The number of exploits would be decreased, but certainly not to zero.

    The correct way of solving this problem would be to restrict all forms of buffered input to routines designed to reject or delay oversized input. An addition to the lint checker that warns about the use of functions without buffer limits would help.
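
    A sketch of the bounded-input style being suggested, using standard
    library calls that do take sizes (the buffer size is up to the caller):

    /* Bounded-input style: every read carries the buffer size, so an
       oversized input is truncated instead of overflowing. */
    #include <stdio.h>
    #include <string.h>

    int read_request(FILE *in, char *buf, size_t size)
    {
        if (fgets(buf, (int) size, in) == NULL)  /* at most size-1 bytes */
            return -1;
        buf[strcspn(buf, "\n")] = '\0';          /* strip trailing newline */
        return 0;
    }
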
    What's up with Linus, urh, I mean you?
    by Anon Coward on Saturday July 29, @01:55PM
    If Linus rejects the patch, do it yourself. That's
    what Open Source is all about after all: the ability
    to modify (and then distribute) the code without being
    dependent on the original provider of the code. You
    seem to be locked into a Microsoft-mindset.
    Re: Are buffer-overflow security exploits really Intel and OS makers fault?
    by Someone on Saturday July 29, @02:07PM
    I highly disagree with blaming the IA/X86 design for buffer overrun issues.


    First, it is the simplistic Unix memory model which causes the problems in the first place, in which the entire linear address space is accessible as code OR data.


    Second, IA processors HAVE the mechanism to make these attacks much more complex, by making the spaces for code and data DISTINCT, such that there is no way for the data segment to overwrite the code segment.


    It seems that O/S programmers are too lazy and uninspired to break some of the "traditional" unix programming model for the dubious gain of making attacks harder -- the simple "eggplant" buffer overflow exploit could have used a bogus return to the exec() syscall if it needed to and we'd still have the same problem.

     