Different memory dumps generated by being internal/external to a process

I have been playing around lately with memory dumping and stumbled upon something that I didn't fully understand.
If I take a process and dump its memory contents by using VirtualQueryEx & ReadProcessMemory to grab the data and write it to a file, everything is fine. I have also tried doing the same thing from inside the process, calling VirtualQuery and just dumping the contents of the regions it returns.
I was able to do this by proxying one of the DLLs of the process I am testing on.
Now, the problem is that these two memory dumps are different (areas are missing from the dump created inside the process).
Could somebody enlighten me as to why this is happening?
Windows XP SP3 + Visual Studio 2008
Thank you very much.

What exactly do you need to dump? If you mean all the memory pages allocated by the process, then you can get different results because the internal state of a process is, in general, unique at any point in time. Also, if you are dumping the process's memory from outside, the dumper's code is not in the target's address space, whereas if you are dumping from inside, the process now includes the dumper's code as well. So it may be more useful to dump only the specific pages belonging to the application or its DLLs.
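For reference, the external approach boils down to walking the target's address space region by region. A minimal sketch, assuming hProcess was already opened with PROCESS_QUERY_INFORMATION | PROCESS_VM_READ and out is an open FILE* (both names are illustrative):

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Walk the target process's address space with VirtualQueryEx and copy
 * every committed, readable region out with ReadProcessMemory. */
static void dump_process(HANDLE hProcess, FILE *out)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = 0;

    while (VirtualQueryEx(hProcess, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        /* Only committed pages that are neither no-access nor guard
         * pages can be read without faulting. */
        if (mbi.State == MEM_COMMIT &&
            !(mbi.Protect & PAGE_NOACCESS) &&
            !(mbi.Protect & PAGE_GUARD)) {
            void  *buf = malloc(mbi.RegionSize);
            SIZE_T got = 0;
            if (buf && ReadProcessMemory(hProcess, mbi.BaseAddress, buf,
                                         mbi.RegionSize, &got))
                fwrite(buf, 1, got, out);
            free(buf);
        }
        /* Advance to the start of the next region. */
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }
}
```

The in-process variant is the same loop with VirtualQuery and a plain memcpy of the region, which is exactly where the dumper's own allocations (and any pages it touches) start showing up in the dump.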

C IPC: Revoke access to shared memory, similar to Microsoft/Apple Clipboard

As I understand it, the way you interact with the clipboard in Windows (and macOS too, I think) is similar to:
Open the clipboard (Requesting access)
Clear the clipboard
Allocate new Global Memory, and get a pointer to that memory
Fill the memory
Release the memory handle
Indicate to the system that the clipboard is ready.
Those final steps are what I am concerned with. Reading up on shared-memory APIs, I see no way for a provider of shared memory to enforce or verify that a process it has shared the memory with has in fact released it. Without such a guarantee, the "copier" could keep manipulating the data even when it was supposed to be "done", without the knowledge of the clipboard owner.
Can someone help me find how one process can create shared memory (Similar to shm_open()), share that memory, and then know when the client they shared to has completely released that memory (Or force revoke it- either works)?
Alternatively, am I having a key misunderstanding of how these clipboards work, or are these OSes taking special OS-level action that a normal program cannot replicate?
I may have found the answer; please let me know if this makes sense. I won't be able to try it for a couple of days:
The man page for fcntl(2) mentions the following behavior: "EBUSY: cmd is F_ADD_SEALS, arg includes F_SEAL_WRITE, and there exists a writable, shared mapping on the file referred to by fd."
So you could create a block of shared memory, pass it to a trusted arbiter, and not trust that block until you successfully apply F_SEAL_WRITE (thus knowing the client has released all open writable mappings and is unable to create any more).
I was looking for this for most of the week but found the answer almost right after posting to Stack Overflow. If I got this right, sorry for the trouble, but hopefully it helps people in the future!

How a C application can update itself in Linux environment at run time

First, I want to ask whether it is possible for an application to update itself at runtime within the same address space.
If yes, what's the best way to implement the logic?
Use case: my application runs on a board connected to a network. If it detects a new version of the application at runtime, how can it update the application at the same memory address where the previous one is stored?
As per my understanding, we should first save the update as a backup, and at boot time the main application should be replaced with the backup and then launched normally. Am I right?
Usually you can replace the file containing the executable while it's running without problems: rename the new file over the old one (writing to the running binary in place would fail with ETXTBSY on Linux).
After you update the file, you can start the application as usual and close the running instance.
If you however want to do it at runtime (i.e. without forking or starting a new process), I don't think it's possible without extremely weird hacks:
if you plan to "rebase" your program's memory with the new executable's code, you'd need to fix up the stack, memory, and instruction pointers for each thread. You'd essentially need to write a disassembler.
if you plan to call a stub in your program after loading the new code into an auxiliary memory segment, that's fine, but you need to figure out where the target function is, and what happens if it's gone in your next update. Plus it's totally platform-specific.
if you plan to standardize the above approach by using shared libraries that are dynamically loaded and unloaded, I see no problem - it's very similar to the approach where you restart the entire process.
I'd go with replacing just the executable, or the third option if I had a very good reason for it. The last option is nice since it lets you update your application's components separately (but at the same time it might cause you maintenance headaches later on).
What you need is something akin to a bootloader. In this case: you will have two programs on the device, hereafter referred to as the Loader and the App.
On your initial install of the system, write the App to the beginning of memory and the Loader somewhere further down, leaving space in case the App grows in the future. (Keep note of the starting memory address of the Loader.)
The App will run normally as if it were the only program, periodically checking for updates to itself. If it finds an update on the network, use a GOTO to jump to the first memory location of your Loader, which then begins running and can overwrite the original App with the new App from the network. At the end of your Loader, GOTO back to the (new) App.
See this Stack Overflow question for ideas on how to jump to a specific memory address: Goto a specific Address in C

cgicc: uploading large files?

I'm writing a CGI application in C++ using cgicc, running on an embedded device. Now I came to the point where it is required to upload a large file to the device (the firmware package for updating). Now I realize 2 problems:
1.) The web server (currently lighttpd) stores the file in a temporary file before starting the CGI application.
2.) cgicc tries to load the complete data into allocated memory before continuing.
Point 1.) is a separate problem; more important for now is point 2.). Is there a way to tell cgicc to read the file piece by piece instead of all at once? Otherwise I run into OOM trouble.
Any other suggestion which can help are welcome!
Thanx, Andi

How to capture List of Processes that have received SIGSEGV

Part of my application (preferably a daemon) is required to log the names of processes that have dumped core. It would be great if someone could point out which mechanism I can use.
If the processes are truly dumping core, you could use the following trick:
Set /proc/sys/kernel/core_pattern to |/absolute/path/to/some/program %p %e
This will cause the system to execute your program (with the faulting process's PID and executable name as arguments) and pipe the core dump into its standard input. You may then log and store the core dump file.
Note that the program will run as the user and group root.
See man 5 core for more information and an example core-dump handling program.

How do I copy a locked file directly from the disk and make sure that the file is intact?

The application I am writing needs to be able to copy files that are locked. We attempted to use Volume Shadow Copy, and while it was successful in copying the file, the application that had the lock on the file crashed because it could not acquire a lock while we were copying the file.
I am left to believe that my only option is to bypass the OS and read directly from the disk. The problem is that if I read directly from the disk I cannot be sure of the file's integrity: if it is in the middle of a write, the file will be in a damaged state.
After hours of searching I was able to find one utility that copies the file directly from the disk and uses a file-system driver to cache writes during the copy, so that it can make sure the file is in an intact state. However, that utility is extraordinarily expensive: 100k+ for the license I would likely need.
Does anyone have any ideas on how to accomplish what I am trying to?
We are planning on restricting the system to NTFS volumes only.
I ended up using a C program called DirectCopy written by Napalm. It works rather well.
http://www.rohitab.com/discuss/topic/24252-ntfs-directcopy-method-from-napalm/
Can you grab the process ID of the application that holds the lock and suspend its threads while you perform the copy? Something like this: http://www.codeproject.com/KB/threads/pausep.aspx
This description of "layered drivers" might be useful. I know nothing about it though.
Also, if the file is locked then can you just 'watch' it and wait for it to be unlocked and then quickly copy it?
