How does the monitor store the data displayed on it? Is it stored in memory and if so how can I access it? The reason I am asking is because I am programming a text editor, in which I use an array to store the data being manipulated. I was wondering if I could access the memory containing the data displayed on the screen rather than using my own array. It seems redundant to reserve memory for the same data twice. But I just don't know how the monitor stores the data displayed on it or if it even stores it at all.
You can make very few assumptions about where stdout goes. It might go to a terminal, where it will end up in a buffer somewhere. Or it might be piped to another process. Or it might go to /dev/null, or to a line printer, etc. And even in the cases where it does end up in memory somewhere, that buffer will have a limited size, and hence not necessarily hold the whole file. And you probably won't have permission to access that memory anyway. So while this could in theory work in certain circumstances, it is definitely not the right way to go.
You will probably not want to use stdout for your text editor at all, but something like ncurses, which lets you place text where you want in the terminal and update it at will. The actual contents of the file are probably best managed through your own internal buffers, the way you are already doing it, though you might consider mmap too.
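If you want to see what that looks like, here is a minimal ncurses sketch (not the editor itself, just the place-text-and-refresh idea; build with -lncurses):

/* Minimal ncurses demo: draw text at chosen positions, then wait for a key. */
#include <ncurses.h>

int main(void)
{
    initscr();              /* enter curses mode, take over the terminal */
    noecho();               /* don't echo typed characters */
    keypad(stdscr, TRUE);   /* enable arrow keys, function keys, etc. */

    mvprintw(0, 0, "hello from row 0, column 0");
    mvprintw(5, 10, "press any key to quit");
    refresh();              /* push the changes to the screen */

    getch();                /* wait for a key press */
    endwin();               /* restore the terminal to its normal state */
    return 0;
}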
Stdout is the output stream of a program. The environment you run the program from determines where this stream points to. You are probably running the program from either a console terminal or from some IDE.
Console terminals by default store the output internally themselves, unless instructed to redirect the output to a file or another program's input.
You can't rely on a third party to store output for you to query later without some agreement in place. You'll want to hold enough data inside your program to generate the views you want. And yes, as stated above, ncurses and similar libraries make building console apps a bit easier.
I'm developing a kind-of secure password manager. It won't be for professional use, and I know it won't be as secure as KeePass or anything. This is just for my own understanding of how to allocate secure memory, using crypto-algorithms etc.
For this I work with libgcrypt and allocate my memory with gcry_malloc_secure.
I've now come to a point where I somehow need the user to enter his password for encryption/decryption.
But as I see it, any console input is first buffered in stdin (or argv[..]) and thus not in secure memory. So it could "easily" be read by an attacker.
Any security-related thing that happens inside my program is in securemem and hopefully harder to read/steal.
So my question is like the title states:
What is the most secure way to let a user input data?
If by "secure" memory you mean memory that won't be paged to disk, a POSIX-compliant C environment should provide mlock() at least. So you can create a buffer that won't be paged. But you still have to find a way to read data into it, and you might have to zero it before freeing it, to avoid having the sensitive data lurking about in the process space.
I suppose a rudimentary way to implement what you need would be to create a buffer, apply mlock() to it, and then read input character-by-character into that buffer.
If you use stdio.h calls, however, you're still going to fall foul of the buffering that goes on automatically. On Linux you can turn this buffering off at the terminal level (via tcsetattr(), which wraps the relevant ioctl() calls) and then read characters one by one. Libraries like ncurses have their own ways of doing similar things. I presume similar effects can be achieved on other platforms, but I don't know enough about them to comment.
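To make that concrete, here is a rough sketch of the rudimentary approach: an mlock()ed buffer, the terminal switched to non-canonical mode with echo off, and a byte-by-byte read(). The function name and buffer size are my own choices, not anything libgcrypt prescribes; in the real program the buffer would presumably come from gcry_malloc_secure() instead of a plain mlock()ed array.

/* Sketch: read a password into a buffer that won't be paged to disk. */
#include <unistd.h>
#include <termios.h>
#include <sys/mman.h>

#define PW_MAX 128

int read_password(char *buf, size_t len)
{
    struct termios old, raw;
    size_t i = 0;

    if (mlock(buf, len) != 0)               /* keep the buffer out of swap */
        return -1;

    tcgetattr(STDIN_FILENO, &old);
    raw = old;
    raw.c_lflag &= ~(ICANON | ECHO);        /* no line buffering, no echo */
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);

    char c;
    while (i + 1 < len && read(STDIN_FILENO, &c, 1) == 1 && c != '\n')
        buf[i++] = c;
    buf[i] = '\0';

    tcsetattr(STDIN_FILENO, TCSAFLUSH, &old);  /* restore the terminal */
    return 0;
}

/* The caller should zero the buffer and munlock() it when done, so the
   plaintext doesn't linger in the process space. */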
I have a file, let's call it log. I need to remove some bytes, say n bytes, from the start of the file only. The issue is that this file is referenced by file pointers in other programs, and those pointers may write to log at any time. I can't re-create it as a new file, otherwise the file pointers would malfunction (I'm not sure about that either).
I tried to google it, but every suggestion is only about rewriting to a new file.
Is there any solution for it?
I can suggest two options:
Ring buffer: use a memory-mapped file as your logging medium and treat it as a ring buffer. You will need to manually manage where the last written byte is, and wrap around your ring appropriately as you step over the end of the ring. This way your logging file stays a constant size, but you can't tail it like a regular file. Instead, you will need to write a special program that knows how to walk the ring buffer when you want to display the log (see the sketch after this list).
Multiple small log files: use some number of smaller log files that you log to, and remove the oldest file as the collection of files grows beyond the size of logs you want to maintain. If the most recent log file is always named the same, you can use the standard tail -F utility to follow the log contents perpetually. To avoid issues with multiple programs manipulating the same file, your logging code can send logs as messages to a single logging daemon.
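For the ring-buffer option, a bare-bones sketch might look like the following; the file layout (a size_t write offset followed by the payload), the names and the 1 MiB size are all invented for illustration, and a single writer is assumed.

/* Sketch of a memory-mapped ring-buffer log.  The first sizeof(size_t)
   bytes of the file persist the current write offset; the rest is the ring.
   A freshly created file is zero-filled, so the offset starts at 0. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define RING_BYTES (1024 * 1024)                  /* 1 MiB of log payload */
#define FILE_BYTES (sizeof(size_t) + RING_BYTES)

static char   *ring;                              /* payload area */
static size_t *wpos;                              /* persistent write offset */

int ring_open(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, FILE_BYTES) < 0)
        return -1;

    void *p = mmap(NULL, FILE_BYTES, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                                    /* the mapping stays valid */
    if (p == MAP_FAILED)
        return -1;

    wpos = p;
    ring = (char *)p + sizeof(size_t);
    return 0;
}

void ring_write(const char *msg, size_t len)
{
    for (size_t i = 0; i < len; i++) {            /* byte-wise copy with wrap-around */
        ring[*wpos] = msg[i];
        *wpos = (*wpos + 1) % RING_BYTES;
    }
}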
So... you want to change the file, but you cannot. The reason you cannot is that other programs are using the file. In general terms, you appear to need to:
stop all the other programs messing with the file while you change it -- to chop now unwanted stuff off the front;
inform the other programs that you have changed it -- so they can re-establish their file-pointers.
I guess there must be a mechanism to allow the other programs to change the file without tripping over each other... so perhaps you can extend that? [If all the other programs are children of the main program, and the children all open the file with O_APPEND, you have a fighting chance of doing this, perhaps with the help of a file lock or a semaphore (which may already exist?). But if the programs are this intimately related, then #jxh has other, probably better, suggestions.]
But, if you cannot change the other programs in any way, you appear to be stuck, except...
...perhaps you could try 'sparse' files? On (recent-ish) Linux (at least) you can fallocate() with FALLOC_FL_PUNCH_HOLE to remove the stuff you don't want without affecting the other programs' file pointers. Of course, sooner or later the other programs may overflow the file pointer, but that may be a more theoretical than practical issue.
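A small Linux-only sketch of that idea; the file name log and the default byte count are placeholders:

/* Deallocate the first n bytes of "log" without changing its size or
   anyone else's file offsets.  FALLOC_FL_PUNCH_HOLE must be combined with
   FALLOC_FL_KEEP_SIZE; the punched range reads back as zeros, and only
   whole filesystem blocks are actually released. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    off_t n = (argc > 1) ? atoll(argv[1]) : 4096;   /* bytes to punch */

    int fd = open("log", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, n) < 0)
        perror("fallocate");

    close(fd);
    return 0;
}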
Here's the situation:
I'm analysing a program's interaction with a driver by using an LD_PRELOADed module that hooks the ioctl() system call. The system I'm working with (embedded Linux 2.6.18 kernel) luckily has the length of the data encoded into the 'request' parameter, so I can happily dump the ioctl data with the right length.
However quite a lot of this data has pointers to other structures, and I don't know the length of these (this is what I'm investigating, after all). So I'm scanning the data for pointers, and dumping the data at that position. I'm worried this could leave my code open to segfaults if the pointer is close to a segment boundary (and my early testing seems to show this is the case).
So I was wondering what I can do to pre-emptively check whether the current process owns a particular offset before trying to dereference? Is this even possible?
Edit: Just an update as I forgot to mention something that could be very important, the target system is MIPS based, although I'm also testing my module on my x86 machine.
Open a file descriptor to /dev/null and try write(null_fd, ptr, size). If it returns -1 with errno set to EFAULT, the memory is invalid. If it returns size, the memory is safe to read. There may be a more elegant way to query memory validity/permissions with some POSIX invention, but this is the classic simple way.
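Wrapped up as a helper, that check might look roughly like this (the function name is my own; the technique is just the write()-to-/dev/null probe described above):

/* Returns 1 if [ptr, ptr+size) is readable by this process, 0 if not,
   -1 on an unrelated error such as a short write. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int mem_readable(const void *ptr, size_t size)
{
    int fd = open("/dev/null", O_WRONLY);
    if (fd < 0)
        return -1;

    ssize_t n = write(fd, ptr, size);
    int saved = errno;
    close(fd);

    if (n == (ssize_t)size)
        return 1;                      /* the kernel could read it all */
    if (n < 0 && saved == EFAULT)
        return 0;                      /* bad address */
    return -1;                         /* short write or other error */
}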
If your embedded Linux has the /proc filesystem mounted, you can parse /proc/self/maps and validate the pointer/offsets against that. The maps file contains the memory mappings of the process.
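A rough sketch of such a check, assuming the usual "start-end perms offset dev inode path" line format of the maps file:

/* Returns 1 if addr lies inside a readable mapping, 0 if not, -1 on error. */
#include <stdio.h>
#include <inttypes.h>

int addr_is_mapped_readable(uintptr_t addr)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps)
        return -1;

    char line[512];
    int found = 0;
    while (fgets(line, sizeof line, maps)) {
        uintptr_t start, end;
        char perms[5];
        if (sscanf(line, "%" SCNxPTR "-%" SCNxPTR " %4s",
                   &start, &end, perms) == 3
            && addr >= start && addr < end && perms[0] == 'r') {
            found = 1;
            break;
        }
    }
    fclose(maps);
    return found;
}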
I know of no such possibility. But you may be able to achieve something similar. As man 7 signal mentions, SIGSEGV can be caught. Thus, I think you could
Start with dereferencing a byte sequence known to be a pointer
Access one byte after the other, at some time triggering SIGSEGV
In SIGSEGV's handler, mark a variable that is checked in the loop of step 2
Quit the loop, this page is done.
There are several problems with that.
Since several buffers may live in the same page, you might output what you think is one buffer but is, in reality, several. You may be able to help with that by also LD_PRELOADing Electric Fence, which would, AFAIK, cause the application to allocate a whole page for every dynamically allocated buffer. So you would not output several buffers thinking they are one, but you still don't know where a buffer ends and would output a lot of garbage at the end. Also, stack-based buffers can't be helped by this method.
You don't know where the buffers end.
Untested.
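For completeness, here is a rough sketch of the probe loop described above. Rather than only setting a flag in the handler (returning from a SIGSEGV handler would re-execute the faulting access), it escapes with sigsetjmp()/siglongjmp(), which is the usual way to make this pattern work; treat it as debugging-aid code only.

/* Probe how many bytes starting at p can be read before a fault. */
#include <setjmp.h>
#include <signal.h>
#include <stddef.h>
#include <string.h>

static sigjmp_buf probe_env;

static void segv_handler(int sig)
{
    (void)sig;
    siglongjmp(probe_env, 1);           /* jump back out of the faulting read */
}

size_t probe_readable(const volatile char *p, size_t max)
{
    struct sigaction sa, old;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = segv_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &old);

    volatile size_t n = 0;              /* volatile: survives siglongjmp */
    if (sigsetjmp(probe_env, 1) == 0) {
        for (; n < max; n++) {
            volatile char c = p[n];     /* this access may fault */
            (void)c;
        }
    }
    /* we arrive here either normally or via the handler */

    sigaction(SIGSEGV, &old, NULL);     /* restore the previous handler */
    return n;
}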
Can't you just check for the segment boundaries? (I'm guessing by segment boundaries you mean page boundaries?)
If so, page boundaries are well delimited (either 4K or 8K) so simple masking of the address should deal with it.
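In code, the masking might look like this, using sysconf() instead of hard-coding 4K or 8K:

/* Round an address down to the start of its page. */
#include <stdint.h>
#include <unistd.h>

uintptr_t page_start(uintptr_t addr)
{
    uintptr_t pagesize = (uintptr_t)sysconf(_SC_PAGESIZE);
    return addr & ~(pagesize - 1);
}
/* bytes remaining before the next boundary: pagesize - (addr - page_start(addr)) */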
I want to write a C program that will sample something every second (an extension to screen). I can't do it in a loop since screen waits for the program to terminate every time, and I have to access the previous sample in every execution. Is saving the value in a file really my best bet?
You could use a named pipe (if available), which might allow the data to remain "in flight", i.e. not actually hit disk. Still, the code isn't any simpler, and hitting disk twice a second won't break the bank.
You could also use a named shared memory region (again, if available). That might result in simpler code.
You're losing some portability either way.
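If the named-shared-memory route is available, a sketch could be as small as this; the segment name /sampler_state and the sample struct are made up for illustration (link with -lrt on older glibc):

/* Each run attaches to the same named segment, reads the previous sample
   and stores the new one.  A fresh segment is zero-filled. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

struct sample { double value; };

int main(void)
{
    int fd = shm_open("/sampler_state", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, sizeof(struct sample));

    struct sample *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    close(fd);
    if (s == MAP_FAILED) { perror("mmap"); return 1; }

    printf("previous sample: %f\n", s->value);   /* 0.0 on the first run */
    s->value += 1.0;                             /* pretend this is the new sample */

    munmap(s, sizeof *s);
    return 0;
}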
Is saving the value in a file really my best bet?
Unless you want to write some complicated client/server model communicating with another instance of the program just for the heck of it, reading and writing a file is the preferred method.
What I want are the steps that an application takes in order to open a file and allow the user to read it. A file is nothing more than a sequence of bits on the disk. What steps does it take to show the contents of the file?
I want to do this programmatically in C. I don't want to begin with complex formats like Word/PDF but something simpler. So, which format is best?
If you want to investigate this, start with plain ASCII text. It's just one byte per character, very straightforward, and you can open it in Notepad or any one of its much more capable replacements.
As for what actually happens when a program reads a file... basically it involves making a system call to open the file, which gives you a file handle (just a number that the operating system maps to a record in the filesystem). You then make a system call to read some data from the file, and the OS fetches it from the disk and copies it into some region of RAM that you specify (that would be a character/byte array in your program). Repeat reading as necessary. And when you're done, you issue yet another system call to close the file, which simply tells the OS that you're done with it. So the sequence, in slightly simplified C, is
/* simplified: error handling and #includes (<fcntl.h>, <unistd.h>) omitted */
int fd = open("file.txt", O_RDONLY);          /* ask the OS to open the file */
while (1) {
    char buf[BLOCK_SIZE];
    ssize_t n = read(fd, buf, BLOCK_SIZE);    /* OS copies up to BLOCK_SIZE bytes into buf */
    if (n <= 0)                               /* 0 means end of file, -1 means error */
        break;
    /* do something with the n bytes in buf */
}
close(fd);                                    /* tell the OS we're done with it */
If you're interested in what the OS actually does behind the scenes to get data from the disk to RAM, well... that's a whole other can of worms ;-)
Start with plain text files