I've been working on an app that sends keystrokes to mimic user actions. For this, I want to record my keystrokes. I looked around on the internet for how to go about such a task, and I found a program called Key Catcher. Because I'm worried about getting a malicious keylogger on my device, I'm reading the source code first, and I found this line:
return, dllcall("psapi.dll\EmptyWorkingSet", "UInt", -1)
I didn't know what this command was, and a Google search turned up internet forums warning not to use EmptyWorkingSet yourself, but also examples of specific programs using it without problems. Could anyone explain how this should be used, whether it will cause problems, or suggest a better alternative?
PS: this command is used every time a process finishes, if that helps.
The "EmptyWorkingSet" operation removes as many of the application's pages as possible from memory. It is often used mistakenly by people who think that having lots of free RAM is good.
It generally won't do much harm. The pages can be paged back into RAM fairly quickly when they're needed again. But the only good it does is make the free-memory counter go up, which is actually slightly harmful: every evicted page that gets touched again has to be faulted back in from the pagefile or the standby list.
It's basically neither here nor there. It's very bad to call it from a process that's performance-critical because it will slow the process down. It's possibly useful to call it from a process that has accumulated a lot of cruft in RAM that doesn't need to be there. But the OS will already remove that stuff from RAM itself under memory pressure.
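For reference, the -1 in that DllCall is the pseudo-handle that GetCurrentProcess() returns, so the line is trimming the script's own working set. A minimal C equivalent using the documented Win32 call looks roughly like this (a sketch; link against psapi.lib, or call K32EmptyWorkingSet from kernel32 on newer Windows):

```c
#include <windows.h>
#include <psapi.h>   /* EmptyWorkingSet(); build with -lpsapi / Psapi.lib */
#include <stdio.h>

int main(void)
{
    /* GetCurrentProcess() returns the same pseudo-handle ((HANDLE)-1)
       that the AutoHotkey line passes as -1. */
    if (!EmptyWorkingSet(GetCurrentProcess()))
        fprintf(stderr, "EmptyWorkingSet failed: %lu\n", GetLastError());
    return 0;
}
```

Calling it when a process finishes, as Key Catcher apparently does, mostly just burns a little time; the OS reclaims that process's memory at exit anyway.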
I am learning C, and am concerned about memory leaks. I understand that rebooting will generally flush memory, and assuming I don't run the program again, I will be fine. I am considering using a second, high-power machine. How badly can I screw up my system if:
I do something ridiculously stupid
I use GCC (not sure if the compiler can do anything?)
I have a memory leak and restart
Out of curiosity, if I used a VM (I probably won't, because I simply prefer using real hardware)
Would any of the following things have long-term effects on my system? Thanks.
If your product is pure software, the biggest thing you have to worry about is a memory leak building up until the machine runs out of memory, further allocations fail, and the application crashes. Most leaks won't be happening repeatedly and won't even get that far; the leaked memory simply goes away when the application exits. Your application could also potentially corrupt data if something is being modified when it crashes, but that applies to any kind of crash.
If your product controls hardware in some way, you need to be very careful. If the software fails, you don't know what the hardware may do. As one of the comments said, a memory leak that crashes a spaceship's flight software can make the spaceship itself crash. Robots could move unexpectedly and cause damage to property or injury to people. Other devices could cause electrical discharges.
As far as handling memory leaks goes, you just have to be careful. In C, every call to malloc and similar functions needs to be paired with a call to free on every path of execution. If some kind of error occurs, free still needs to be called if the application is going to keep running. Likewise, fopen should be paired with fclose; there you can also run into the problem of running out of file handles, which is a different but in many ways similar issue.

In C++, manual allocation with new should be paired with delete, although "smart" pointers like std::unique_ptr, std::shared_ptr, and std::weak_ptr ease memory management and prevent leaks. Other libraries provide pointer types that use reference counting to manage their own lifetime. I would recommend using these over raw pointers whenever you can. If you have the option of using C++ instead of C, I would recommend that as well: in most cases (performance or otherwise) you don't really need C over C++, and if you're not sure you need C, you can probably use C++.
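As a small illustration of the "pair every allocation with a release on every path" rule in plain C, here is a sketch with made-up names; the key point is that the early-error path frees the buffer before returning:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical example: read up to n doubles from a file into a freshly
 * allocated buffer. Every early exit releases whatever was acquired so far. */
double *read_doubles(const char *path, size_t n, size_t *count)
{
    double *buf = malloc(n * sizeof *buf);
    if (buf == NULL)
        return NULL;                  /* nothing to clean up yet */

    FILE *fp = fopen(path, "r");
    if (fp == NULL) {
        free(buf);                    /* don't leak the buffer on this path */
        return NULL;
    }

    size_t i = 0;
    while (i < n && fscanf(fp, "%lf", &buf[i]) == 1)
        i++;

    fclose(fp);                       /* every fopen() paired with fclose() */
    *count = i;
    return buf;                       /* caller owns buf and must free() it */
}
```

Running the calling program under valgrind (for example, valgrind --leak-check=full ./a.out) will flag any path where the returned buffer is never freed.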
If you're interested in finding memory leaks, check out valgrind. It has a lot of functionality that will help you find memory leaks and determine their severity.
Memory leaks won't damage your machine. When a program terminates, all of its transient resources are released, including whatever memory was allocated to it.
What will suffer is your programming style. Correctly freeing resources is not difficult, but it takes some practice. After a while, you will need to think much less in order to do it. And that is one of the things that makes you a good programmer.
Why does it matter? Because sooner or later you will start writing programs that run for a long time: perhaps an information server, or a web browser, or a graphics editor. Something that stays active either until the user no longer needs it, or until it crashes after using up all available memory. And you don't want to be responsible for the second outcome.
So right now, when you're starting, is the time to develop some good habits. Learn how to do it right, and you won't have to relearn it later.
According to the answers in the comments:
Memory leaks should go away if the system restarts
Spaceships are hard to reboot
VMs are safe if they are written properly
Thanks for the quick answers!
TL;DR at end in bold if you don't want rationale/context (which I'm providing since it's always good to explain the core issue and not simply ask for help with Method X which might not be the best approach)
I frequently do software performance analysis on older hardware, which shows up race conditions, single-frame graphical glitches and other issues more readily than more modern silicon.
Often, it would be really cool to be able to take screenshots of a misbehaving application that might render garbage for one or two frames or display erroneous values for a few fractions of a second. Unfortunately, problems most frequently arise when the systems in question are swapping heavily to disk, making it consistently unlikely that the screenshots I try to take will contain the bugs I'm trying to capture.
The obvious solution would be a capture device, and I definitely want to explore pixel-perfect image and video recording in the future when I have the resources for that (it sounds like a hugely fun opportunity to explore FPGAs).
I recently realized, however, that the kernel is what performs the swapping, and that if I move screenshotting into kernelspace, I don't have to wait for my screenshot keystroke to make its way through the X input layer into the screenshot program, and then wait for that program to do its XSHM dance and fetch the screenshot data, all while the system is heavily I/O loaded (e.g. a 5-second system load of >10). Instead, I can simply have the kernel memcpy() the displayed area of video memory to a preallocated buffer at the exact fraction of a second I hit PrtSc!
TL;DR: Where should I start looking to figure out how to "portably" (within the sense of Linux having different graphics drivers, each with different architectural designs) access the currently-displayed area of video memory?
I get the impression I should be looking at libdrm, possibly via KMS, but I would really appreciate some pointers on what actually accesses video memory.
I'm also guessing there are probably some caveats and gotchas to reading video memory directly on certain chipsets? I don't expect my code to make it into the Linux kernel (who knows, but I doubt it) but I'd still like whatever I build to be fairly portable across computers for convenience.
NOTE: I am not using compositing with the systems in question, in case this changes anything. I'm interested to know whether I could write a compositing-compatible system; I suspect this would be nontrivial.
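To make the question concrete, here is roughly the userspace sequence I've pieced together so far from the libdrm headers. It only finds the framebuffer each CRTC is currently scanning out; actually mapping and copying its contents seems to be driver-specific, which is exactly the part I'm asking about (untested sketch):

```c
/* build with: gcc shot.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* assumes the first KMS device */
    if (fd < 0) { perror("open"); return 1; }

    drmModeRes *res = drmModeGetResources(fd);
    if (!res) { fprintf(stderr, "not a KMS device?\n"); return 1; }

    for (int i = 0; i < res->count_crtcs; i++) {
        drmModeCrtc *crtc = drmModeGetCrtc(fd, res->crtcs[i]);
        if (crtc && crtc->buffer_id) {
            /* buffer_id is the framebuffer currently being scanned out */
            drmModeFB *fb = drmModeGetFB(fd, crtc->buffer_id);
            if (fb) {
                printf("CRTC %u: %ux%u, pitch %u, bpp %u, handle %u\n",
                       crtc->crtc_id, fb->width, fb->height,
                       fb->pitch, fb->bpp, fb->handle);
                /* fb->handle is a GEM handle; turning it into a CPU mapping
                   is where the driver-specific part begins */
                drmModeFreeFB(fb);
            }
        }
        drmModeFreeCrtc(crtc);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```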
Basically, all I would like to do is make sure no one is able to step through sensitive code.
I read somewhere that this was possible, only I can't find where I read it.
Thanks!
No. Fundamentally, anyone who can run the object code can inspect it to any degree.
If you don't want them to be able to run the object code, you have to run it on a machine of your choice, and only interact with the user over a network.
All techniques that claim to disable debuggers simply exploit bugs, which are usually fixed a few months later when the next version of the debugger is released; and even those are completely useless against debugging through a VM.
"All I would like to do is make sure no one is able to step through sensitive code."
The "no one" part is impossible: a sophisticated attacker will be able to do it no matter what you try.
There are many techniques that will stop a less sophisticated attacker; this book shows some of them.
Generally, these techniques are not worth your time: they make field support of your software harder, they don't stop a sophisticated attacker (and only one needs to succeed to render your efforts useless), and your software usually isn't that interesting to begin with.
If it is useful enough, people will buy it. If it is not, adding protections will make it even less useful.
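For illustration only, here is one of the simpler tricks from that family on Linux: a process can have only one tracer, so if ptrace(PTRACE_TRACEME, ...) fails, something (probably a debugger) is already attached. This is a sketch of the idea, and as the answers above say, it is trivially bypassed (preload a fake ptrace, patch the check out of the binary, or run it under a VM or emulator):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>

int main(void)
{
    /* Only one tracer per process is allowed, so this call fails
       if a debugger such as gdb (or strace) is already attached. */
    if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
        fprintf(stderr, "debugger detected, exiting\n");
        return EXIT_FAILURE;
    }

    puts("no tracer attached (or the check was bypassed)");
    return 0;
}
```

A debugger that attaches after this check has run, or an attacker who simply patches out the branch, defeats it completely, which is exactly the point made above.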
OK, so I am programming this for a homework assignment but could use some help or insight. I know I've read everywhere that you shouldn't open files in kernel modules, but it's our assignment...
Anyway my module code is here:
http://pastebin.com/LU8hWraL
and my user level code is here:
http://pastebin.com/RC0Zk1kQ
OK, my issue is that sometimes it works and other times it doesn't... most of the time when it doesn't work, it gets caught in a loop in the kernel, and I don't understand what is causing the issue or how I can resolve it. Any help on this situation would be incredibly appreciated; I'm just getting frustrated having to constantly shut down and restart my VM.
Even if someone just tells me how to find the error when my VM loops like that, it would help.
First, you might want to use kernel_read() rather than doing these things yourself (there's a sketch below).
There might be two issues here:
You pass &filpRead->f_pos as the position parameter to the read and write calls; that field is for the kernel's internal use. Use a local loff_t instead.
When you encrypt or decrypt the data, you might not end up with the same number of bytes you read, so writing out the same amount of data as you read in could be a problem too.
Take both with a grain of salt, since it's been quite a while since I last looked at kernel programming.
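Here is a minimal sketch of the kernel_read() suggestion, assuming a reasonably recent kernel (4.14+) where it takes (file, buf, count, pos); the helper name is made up for the example, and a local loff_t is used instead of touching f_pos:

```c
#include <linux/fs.h>
#include <linux/fcntl.h>
#include <linux/err.h>

/* Hypothetical helper: read up to len bytes from path into buf,
 * returning the byte count or a negative errno. */
static ssize_t read_file_into_buffer(const char *path, void *buf, size_t len)
{
    struct file *filp;
    loff_t pos = 0;     /* let kernel_read() advance this; don't pass &filp->f_pos */
    ssize_t ret;

    filp = filp_open(path, O_RDONLY, 0);
    if (IS_ERR(filp))
        return PTR_ERR(filp);

    ret = kernel_read(filp, buf, len, &pos);

    filp_close(filp, NULL);
    return ret;
}
```

There is a matching kernel_write() with the same shape for the output side.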
I was wondering if I could make a large number of system calls at the same time, with only one switching overhead. I need this because I have to make many (128) system calls essentially at once. If I could do this without switching between kernel and userland 256+ times, I think it could make my (speed-sensitive) library significantly faster.
You really can't do that from an application program. What you could do is build a loadable kernel module that implements those operations and presents a simple API -- then you can change context once, do all the work, and return.
However, as with most of these sorts of optimization questions, the first thing to ask is "why do you think it's going to be necessary?" Do you have timing information etc? Have you profiled? How much of a performance issue do you really have, and is the additional complexity going to be worth the speedup?
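To make the kernel-module idea concrete, here is the userspace half as a sketch. The device name, the descriptor layout, and the ioctl number are all made up for the example; the point is just that a single ioctl() crosses into the kernel once, and the module would loop over the whole array on the other side:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <unistd.h>

/* Hypothetical interface: nothing here exists in a stock kernel. */
struct batch_op {
    int   opcode;      /* meaning defined by the module, e.g. 0 = read, 1 = write */
    int   fd;
    void *buf;
    long  len;
    long  result;      /* filled in by the module */
};

struct batch_request {
    struct batch_op *ops;
    unsigned long    count;
};

#define BATCH_IOC_RUN _IOWR('b', 1, struct batch_request)

int main(void)
{
    struct batch_op ops[128] = {0};              /* ... fill in 128 operations ... */
    struct batch_request req = { .ops = ops, .count = 128 };

    int fd = open("/dev/batcher", O_RDWR);       /* hypothetical device node */
    if (fd < 0) { perror("open /dev/batcher"); return 1; }

    /* One user/kernel transition for all 128 operations. */
    if (ioctl(fd, BATCH_IOC_RUN, &req) < 0)
        perror("ioctl");

    close(fd);
    return 0;
}
```

Whether this is worth it is exactly the question above: profile first, and weigh the speedup against maintaining an out-of-tree module.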
I don't think Linux will support syscall chaining anytime soon. You might have more luck implementing this on another kernel and porting your application.
That said, it's not difficult to write a proxy to do the job in kernelspace for you, but don't expect it to be merged upstream. I've worked on real-time stuff and we had a solution like that, but it was never used in production because of support issues :/