Buffer overflow exploit not working for kernel module [closed] - kernel-module

I have done buffer overflow exploits for user-level processes in the past. However, this does not seem to work well when I try to overflow the buffer of a vulnerable kernel module. Here's what I do:
There is a vulnerable kernel module which I can open as a file and read from/write to. The write operation is done without bounds checking, so I perform a write that overflows the buffer and overwrites the return address with the address of an environment variable holding my shellcode. But something is going wrong: the kernel crashes, and after rebooting I opened /var/log/messages and found that EIP was pointing exactly at the address I wrote. Yet it still crashes, saying "Unable to handle kernel NULL pointer dereference at virtual address".
Any reason why this would happen? Why wouldn't control be redirected to the overwritten return address?
Note: I ran this on Red Hat Enterprise Linux with Exec Shield and ASLR turned off.

The kernel can't jump to user addresses without performing a kernel exit, since it runs in a privileged mode with a different configuration than userspace (e.g. different page tables, CPU permission bits, etc.).
So, in order to get shellcode running in the kernel, you'd have to pack the shellcode into the buffer that is written to the driver (and copied into the kernel), and somehow obtain its address. That is not so hard if you have access to the kernel map, but Linux distributions have recently begun locking down access to such sensitive information to make kernel bugs harder to exploit.
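For illustration, here is a minimal C sketch of looking up an address in the kernel map, assuming /proc/kallsyms is readable on your system (on hardened setups, kptr_restrict zeroes the addresses, which is exactly the lockdown mentioned above). The symbol name commit_creds is only an example:

#include <stdio.h>
#include <string.h>

/* Scan /proc/kallsyms for a symbol and return its address (0 on failure). */
static unsigned long lookup_ksym(const char *name)
{
    FILE *f = fopen("/proc/kallsyms", "r");
    char line[256];
    unsigned long addr = 0;

    if (!f)
        return 0;
    while (fgets(line, sizeof(line), f)) {
        unsigned long a;
        char type, sym[128];
        /* Each line looks like: "ffffffff81000000 T startup_64" */
        if (sscanf(line, "%lx %c %127s", &a, &type, sym) == 3 &&
            strcmp(sym, name) == 0) {
            addr = a;
            break;
        }
    }
    fclose(f);
    return addr;
}

int main(void)
{
    printf("commit_creds at %#lx\n", lookup_ksym("commit_creds"));
    return 0;
}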

Related

How is application memory freed in operating systems? [closed]

I was going through the common issues in memory management. I'm curious how memory is managed in the following case of dynamic memory management in C:
I request a free block of memory from the memory manager through a malloc() call.
I perform some computations and store part of the data in that section of memory.
The allocated memory is never freed.
How does the application's memory usage grow? Does it keep growing (say, for some GUI element whose data container is never cleared once allocated)? Does it grow each time I open the application, until the program terminates (even though it was allocated in the normal fashion using dynamic memory allocation functions)?
Will that segment of memory be freed by the application during runtime, or does the OS not care about memory management in such cases?
The OS keeps track of which physical pages of RAM are referenced by which processes, and which pages are free. The exact data structure differs from OS to OS, so it doesn't really matter here. What matters is that when the OS needs to give your process a physical page of RAM, it can allocate one from the pool of free pages. When the process dies, the pages that are no longer used can be reclaimed and marked as free again, to be handed out in future memory allocations.
This is how it works in a nutshell.
When you start a program, the OS allocates some amount of memory to it. While the program runs, it may request more (with malloc() and the like). When the program terminates normally or is killed, ALL of its memory is released back to the OS. "Memory leaks" are an issue mainly for long-running programs that continually allocate more memory, like a web browser. If such a program, say, requests more memory every time it displays a page, never frees it, and stays open, then it can indeed grow to the point where it causes problems for the OS.
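As a minimal illustration of the point above, here is a C sketch of a program that leaks deliberately; while it runs, its memory usage keeps growing, but the moment it exits, the OS reclaims every page:

#include <stdlib.h>

int main(void)
{
    for (int i = 0; i < 1000; i++) {
        char *p = malloc(1024 * 1024);  /* 1 MiB, never freed: a leak */
        if (p)
            p[0] = 'x';  /* touch it so at least one page is committed */
    }
    return 0;  /* on exit, the OS reclaims all of it anyway */
}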

How can I keep a file in memory between runs of the Rust binary? [closed]

When developing and testing, I have a very large file that needs to be loaded into memory. This takes about 20 seconds each time I run the program.
Is there a way to keep the file in memory so that it doesn't need to be loaded each time?
It depends on what you mean by "loaded".
If you're referring to transferring the data from storage to RAM, that is more or less what your operating system's I/O cache should already be doing, assuming you have enough spare memory and you're not using methods that bypass that cache.
On Linux this is called the page cache, and you can check whether a file is in the cache with fincore. You can also simulate a cold cache with echo 3 > /proc/sys/vm/drop_caches, which drops its contents (requires root).
If you mean moving the bytes from the OS's cache into your application, then that shouldn't take much time as long as you use sufficiently large block sizes for the read calls, or use mmap. The latter is a double-edged sword: used incorrectly, it can actually cause slowdowns (a minimal sketch follows at the end of this answer).
If you mean decoding the bytes into application-specific data structures, then that's not I/O but deserialization.
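To make the mmap point concrete, here is a minimal sketch in C (in Rust the same idea is available through crates such as memmap2); the file name bigfile.dat is a placeholder. On a warm page cache, a second run of such a program is much faster because the pages are already resident:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_RDONLY);
    struct stat st;

    if (fd < 0 || fstat(fd, &st) < 0) {
        perror("open/fstat");
        return 1;
    }

    /* Map the file instead of read()ing it; pages are faulted in on
       demand, backed directly by the page cache. */
    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += data[i];  /* stand-in for real processing */
    printf("checksum: %ld\n", sum);

    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}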

Can we use the exit() function in C when working with embedded systems? [closed]

I am working on a project related to embedded systems and wanted to know whether it is good practice to use the function exit(0) in your program.
if (Req[0] == '0')
{
    puts("Emergency stop button operated\n");
    **exit(0);**
}
exit, as well as returning from main(), only makes sense in hosted systems where there is an OS to return to. Most embedded systems do not have one; they are so-called "freestanding" systems: "bare metal" or RTOS microcontroller applications. A compiler for such a system need not provide stdlib.h, so the function exit might not even be available.
The normal way to handle errors in such systems is to implement a custom error handler, which can log or print the error. From there, in the case of critical errors, you usually provoke a watchdog reset, leading to a full system reboot. This is because errors in embedded systems are often hardware-related, and a watchdog reset doesn't just restore the software to its default state, but also all MCU registers and the peripheral hardware connected to the MCU.
In high-integrity embedded systems, such as the ones that actually have a real emergency stop, it is however common to keep the program running but revert it to a safe state. The MCU keeps running, but all dangerous outputs and the like are disabled, either to allow the system to be shut down in a controlled manner, or to "limp home" and keep running as well as it still can. Writing software for such systems is quite a big topic of its own.
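As a rough illustration of the error-handler pattern described above, here is a hypothetical bare-metal sketch in C; the helper functions are placeholders, not a real vendor API, and the watchdog is assumed to be already enabled:

#include <stdint.h>

/* Placeholder stubs; a real system would write to a UART or flash log
   and drive the actual output pins. */
static void log_error(uint32_t code) { (void)code; }
static void outputs_to_safe_state(void) { }

void fatal_error_handler(uint32_t code)
{
    log_error(code);
    outputs_to_safe_state();

    /* Deliberately stop kicking the watchdog; when it expires it
       performs a full hardware reset of the MCU and its peripherals. */
    for (;;) {
    }
}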
Short answer, as #Felix G said in a comment: if you work with an operating system, exit(0); is relevant; if you work with a bare-metal application, it is not.
Please refer to #Felix G's comment for more details.
By the way, **exit(0);** is not a correct statement in C. You may have meant exit(0); or /*exit(0);*/.

What If There Is A Memory Leak In A Virtual Environment? [closed]

What I understood from reading some web articles is that, just like for any other program, the host OS allocates X amount of memory to the virtual OS, and when I start a program on the virtual OS, the virtual OS fetches the exact amount of memory needed for that program.
When I shut the virtual OS down, it returns the allocated memory to the host OS.
But what happens if there is a memory leak in the virtual OS environment? I am starting to learn C, and my professor says that with dynamic memory allocation operations, a permanent leak can happen in the host OS.
But what if it happens in the virtual environment? I guess the program will give back ALL of the allocated memory to the host OS when I shut it down, right? What happens when I start the virtual host again the next time? Does the memory leak show up there permanently?
I'm just getting afraid before I even start writing my first program in C.
P.S. If I use websites like Repl.it and do memory allocation there, will it still cause damage to my system?
A memory leak occurs when you allocate some memory (with malloc in C) and never free it; this can happen for a number of reasons.
Now, the important thing to understand is that this allocated memory will be released once the process has finished running.
When you set up your VM, you set the maximum amount of memory it can consume. When you shut the VM down, that memory is also released.
You can't cause a "permanent" memory leak if the program you wrote isn't running. If the OS has some always-running service with a memory leak, then it will slow down when it runs out of memory, but when you restart, all the memory is released again.
So don't let this stop you: you can't damage your computer, and you can always recover by exiting the program (or restarting the PC in the worst-case scenario).
EDIT:
As was mentioned in the comments, there is a special scenario when you leak shared memory; in that case exiting the program might not release the memory. But I consider this the worst-case scenario, and a reboot will solve the problem as well (so still not permanent).
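To make the shared-memory case concrete, here is a minimal C sketch, assuming a Linux system with POSIX shared memory; the object name /leak_demo is a placeholder (link with -lrt on older glibc). The object survives the process's exit and lives until shm_unlink() is called or the machine reboots:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Create a named shared memory object; it is NOT tied to this
       process's lifetime. */
    int fd = shm_open("/leak_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        perror("shm_open");
        return 1;
    }
    if (ftruncate(fd, 1024 * 1024) < 0)  /* 1 MiB that persists after exit */
        perror("ftruncate");
    close(fd);

    /* Without this call the object stays in /dev/shm until reboot: */
    /* shm_unlink("/leak_demo"); */
    return 0;
}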
This answer is meant to provide a different viewpoint, in addition to the good answer(s) and comments, which I agree with.
I am trying to construct the worst case, i.e. a way you could actually get what you fear.
You probably have an environment which does not match the following assumptions; in that case my construct does of course not apply:
your virtual OS supports "persistence"
(i.e. you can shut it down in a "hibernate" way, so that it starts up with the same running processes and their restored memory contents)
your virtualisation engine also supports persistence of the virtual OS
shutting down for persistence in the virtual OS is possible even with a process occupying a critical amount of memory (sanity checks could prevent this)
the virtualisation engine likewise does not mind the depleted memory and allows persistence
you choose to use persistent shutdown
(rebooting the virtual OS normally would instead kill the evil process and reclaim the memory; this is discussed in other answers and comments, but thanks to MrBlaise for proposing the clarification here)
Under these circumstances I imagine that you can have:
a process which has taken (and run out of) all available memory
but has not crashed or otherwise triggered emergency measures
then this situation is successfully saved for persistence before shutting down
then you restart the virtual OS
it restores the previous situation, i.e. returns from hibernation
the restored situation contains a still/again running process which has taken all the memory
I think this would still only affect the virtual OS, not the host.
Please note that I intentionally made all the assumptions necessary to arrive at the situation you are afraid of. Some of the assumptions are quite "daring".
I imagine, for example, that anything supporting persistence should have sanity checks which at least detect the memory issue and ask how to handle it.
(By the way, I do not know of virtualisation engines which support persistence, nor of any that do not; I am thinking in a generic, theoretical space. In case I have just invented persistence for virtualisation engines (can't believe it), I claim this as prior art. ;-))

Implementing kernel bypass for a network card [closed]

My situation:
I would like the data received on a network card to reach my application as fast as possible. I have concluded that the best (as in lowest-latency) solution is to implement a network stack in user space.
The network traffic can use a proprietary protocol (if that makes writing the network stack easier), because it simply runs between two local computers.
1) What is the bare minimum list of functions my network stack would need to implement?
2) Would I need to remove/disable the network stack currently in my Linux system, and how would I do this?
3) How exactly would I write the driver? I presume I would need to find exactly where the driver code gets called, and then, instead of the driver/network stack being invoked, send the data to a piece of memory that I can access from my application?
I think the built-in PF_PACKET socket type does exactly what you want to implement.
Drawback: the application must be started with root privileges.
There are some enhancements to the PF_PACKET system that are described on this page:
Linux packet mmap
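For reference, here is a minimal PF_PACKET sketch in C, assuming a Linux host and root privileges; it receives raw Ethernet frames directly, bypassing the kernel's TCP/IP processing (though not the kernel itself):

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* ETH_P_ALL: deliver frames of every protocol seen on the wire. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket(AF_PACKET) - needs root");
        return 1;
    }

    unsigned char frame[2048];
    ssize_t n = recv(fd, frame, sizeof(frame), 0);
    if (n > 0)
        printf("received a %zd-byte frame\n", n);

    close(fd);
    return 0;
}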
The kernel is in control of the NIC. Whenever you pass data between kernel space and user space, there is a context switch between the kernel rings, which is costly. My understanding is that you would use the standard APIs while setting the buffers to a larger size, allowing larger chunks of data to be copied between user and kernel space at a time and thereby reducing the number of context switches for a given amount of data (a sketch of this follows below).
As far as implementing your own stack goes, it is unlikely that a single person can create a faster network stack than the one built into the kernel.
If the Linux kernel is not capable of processing packets at the speed you require, you might want to investigate NICs with more onboard hardware processing power. These sorts of cards are used for network throughput testing and the like.
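As a sketch of the larger-buffers suggestion above (the sizes are purely illustrative, and the kernel may cap the request; see /proc/sys/net/core/rmem_max):

#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int size = 8 * 1024 * 1024;  /* request an 8 MiB receive buffer */

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    /* Read back the effective value; the kernel typically doubles the
       requested size to account for bookkeeping overhead. */
    socklen_t len = sizeof(size);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, &len);
    printf("effective receive buffer: %d bytes\n", size);
    return 0;
}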
