Using the C WinAPI, how can you capture received data from a COM port that is opened exclusively by another program?
I know there are programs that do this, but I want to write my own monitoring software for a specific purpose and was wondering how it is done.
You can do this using API hooking; see http://www.codeproject.com/KB/system/hooksys.aspx for details. Basically, you load the target process, inject some code into it to hook the APIs you're interested in, and then use an IPC mechanism to transfer the data from your hooks to your analysis program.
This is how my program that controls the values returned by GetTickCount() in another process works (see http://www.lenholgate.com/blog/2006/04/tickshifter-v02.html).
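For the injection step, a minimal sketch of the classic LoadLibrary/CreateRemoteThread approach looks like the following; the DLL name and PID are placeholders, most error handling is omitted, and the injected DLL would contain the actual hooks plus the IPC code:

// Classic DLL injection: allocate the DLL path inside the target and
// start a remote thread that runs LoadLibraryA on it.
#include <windows.h>
#include <string.h>

static int inject_dll(DWORD pid, const char *dllPath)  // e.g. "hook.dll"
{
    HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!proc)
        return -1;

    // Copy the DLL path into the target's address space.
    SIZE_T len = strlen(dllPath) + 1;
    LPVOID remote = VirtualAllocEx(proc, NULL, len, MEM_COMMIT, PAGE_READWRITE);
    WriteProcessMemory(proc, remote, dllPath, len, NULL);

    // kernel32.dll is mapped at the same base address in every process,
    // so the local address of LoadLibraryA is also valid in the target.
    LPTHREAD_START_ROUTINE loadLib = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

    HANDLE thread = CreateRemoteThread(proc, NULL, 0, loadLib, remote, 0, NULL);
    if (thread) {
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
    }

    VirtualFreeEx(proc, remote, 0, MEM_RELEASE);
    CloseHandle(proc);
    return thread ? 0 : -1;
}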
Sorry if this is a noob question, but I'm developing a software "add-on" for a game. I'm doing this through a driver simply because the anti-cheat doesn't support ring-0 detection. I haven't seen much info on how IOCTL can be used, and I was wondering: can you send custom inputs like process IDs and other information that may change, or is it all set in stone, like a switch statement or something? Once again, sorry for the noob question.
You can communicate with a kernel-mode device driver via IOCTL using the DeviceIoControl Win32 API routine. This routine internally calls NtDeviceIoControlFile (NTDLL) which performs a system call to get NtDeviceIoControlFile (NTOSKRNL) executed.
The DeviceIoControl routine is documented at MSDN: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363216(v=vs.85).aspx
The kernel-mode device driver will have a prerequisite to fulfill: https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/named-device-objects
"I haven't seen much info on how IOCTL can be used and I was wondering if you can send custom inputs like process IDs and other information"
The answer is yes, you can send custom buffers via IOCTL. Your kernel-mode device driver can also return an output buffer to the user-mode application that initiated the IOCTL operation - this is optional, of course.
If you need to send multiple pieces of information at the same time, consider using a structure.
I also recommend you read the following:
https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/methods-for-accessing-data-buffers
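To make the user-mode side concrete, here is a rough sketch. The device name "MyDriver", the control code, and the request/reply structs are all invented for illustration; a real driver defines its own:

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

// Hypothetical control code; device type and function number are made up.
#define IOCTL_MYDRV_QUERY \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

// Custom input: bundle several pieces of information in one structure.
typedef struct _MYDRV_REQUEST {
    DWORD ProcessId;   // e.g. the PID the driver should act on
    DWORD Flags;       // any other per-request options
} MYDRV_REQUEST;

typedef struct _MYDRV_REPLY {
    ULONGLONG Result;  // whatever the driver reports back (optional)
} MYDRV_REPLY;

int main(void)
{
    // Assumes the driver created a symbolic link named "MyDriver".
    HANDLE dev = CreateFileA("\\\\.\\MyDriver", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    MYDRV_REQUEST req = { 1234, 0 };
    MYDRV_REPLY reply = { 0 };
    DWORD returned = 0;

    if (DeviceIoControl(dev, IOCTL_MYDRV_QUERY,
                        &req, sizeof(req),      // input buffer
                        &reply, sizeof(reply),  // optional output buffer
                        &returned, NULL))
        printf("driver replied: 0x%llx\n", (unsigned long long)reply.Result);
    else
        printf("DeviceIoControl failed: %lu\n", GetLastError());

    CloseHandle(dev);
    return 0;
}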
I would like to inject a shared library into a process (I'm using ptrace() to do that part) and then be able to get output from the shared library back into the debugger I'm writing using some form of IPC. My instinct is to use a pipe, but the only real requirements are:
I don't want to store anything on the filesystem to facilitate the communication as it will only last as long as the debugger is running.
I want a portable Unix solution (so Unix-standard syscalls would be ideal).
The problem I'm running into is that, as far as I can see, if I call pipe() in the debugger, there is no way to pass the "sending" end of the pipe to the target process, and vice versa for the receiving end. I could set up shared memory, but I think that would require creating a file somewhere so I could reference the memory segment from both processes. How do other debuggers capture output when they attach to a process after it has already begun running?
I assume you need a debugging system for your application code. In my experience, this kind of problem is tackled with the system design explained below. (My experience is in C++, but I think the same holds for C-based systems as well.)
Have a logger system (a separate process). It will contain the logger manager and the logging code, which take responsibility for dumping the log to disk.
Each application instance (a process running on Unix) communicates with this process over sockets. So you can define your own messaging protocol and talk to the logger system via socket-based communication (a sketch of a possible wire format follows below).
Later, give each application a switch that turns logging on or off, so that you can have a tool that sends a signal to the process to toggle message logging.
At a high level, this is the most generic way to develop a logging system. In case you need any more information, do comment and I will try to answer.
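As an illustration of the messaging protocol mentioned above, something as simple as a length-prefixed message works; this layout is just an assumption for the sketch, not a standard:

#include <stdint.h>

// Hypothetical wire format for the application -> logger socket protocol.
// Each message is this header followed by 'length' bytes of log text.
typedef struct {
    uint32_t length;   // number of payload bytes that follow
    uint8_t  level;    // e.g. 0 = debug ... 3 = error
    uint8_t  type;     // log message, or a control message for the on/off switch
} log_msg_header_t;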
Better search terms showed me that this question is a duplicate of these:
Can I share a file descriptor to another process on linux or are they local to the process?
Can I open a socket and pass it to another process in Linux
How to use sendmsg() to send a file-descriptor via sockets between 2 processes?
The top answers were what I was looking for. You can use a Unix-domain socket to hand a file descriptor off to a different process. This could work either from debugger to library or vice versa, but is probably easier to do from debugger to library because the debugger can write the socket's address into the target process while it injects the library.
However, once I pass the socket's address into the target process, I might as well use the socket itself for the communication instead of adding a pipe on top.
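For reference, the descriptor-passing itself boils down to a sendmsg()/recvmsg() pair with SCM_RIGHTS control data; a minimal sketch, with error handling trimmed:

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

// Send the descriptor 'fd' over the connected Unix-domain socket 'sock'.
int send_fd(int sock, int fd)
{
    char dummy = '*';  // at least one byte of real data must accompany the fd
    struct iovec iov = { &dummy, 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    memset(ctrl, 0, sizeof(ctrl));

    struct msghdr msg = { 0 };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;  // we are passing file descriptors
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

// Receive a descriptor from 'sock'; returns the new fd, or -1 on error.
int recv_fd(int sock)
{
    char dummy;
    struct iovec iov = { &dummy, 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];

    struct msghdr msg = { 0 };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;

    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;  // a fresh descriptor referring to the same open file
}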
I'd like to know whether something like this has been done before:
I've recently started work on a networking library in C. The library maintains a set of sockets, each of which is associated with two FIFO byte streams, input and output.
A developer using the library is expected to register some callbacks, consisting of a recognizer function and a handler function. If new data arrives on a socket (i.e. the input stream), every recognizer is called. If one of the recognizers finds a matching portion of data, its associated handler is called, consuming the data and possibly queuing new data on the socket's output stream, scheduled to be transmitted later on.
Here's an example to make clear how the library is used:
// create client socket
client = nc_create(NC_CLIENT);
// register some callback functions that you'll have to supply yourself
nc_register_callback(client, &is_login, &on_login);
nc_register_callback(client, &is_password, &on_password);
// connect to server
nc_dial(client, "www.google.com", "23");
// start main loop (we might as well have more than one connection here)
nc_talk();
To me, this is the most obvious way to write a general-purpose networking library in C. I did some research using Google but wasn't able to find anything similar written in C. Still, it's hard to believe that I'm the first one to implement this approach.
Are there other data-driven general purpose C networking libraries like this out there?
Would you use them?
Here are a few libraries that provide similar APIs, at various levels; e.g. libevent provides a general callback-driven API for sockets/file descriptors:
libesmtp (example)
libevent
libcurl
The Sun/ONC RPC APIs have a similar style, in that the library does the heavy lifting for you, dispatching requests to the proper callback handlers.
The Java Netty and Mina libraries work in a similar manner, although they are more object-oriented.
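For comparison, the libevent version of such an event loop looks roughly like this; the recognizer/handler dispatch from the question would live inside the read callback. This is a sketch with minimal error handling, watching stdin so it stays self-contained:

#include <event2/event.h>
#include <unistd.h>

// Called by libevent whenever the watched fd becomes readable. A library
// like the one described above would run its recognizers over the buffered
// input here and dispatch to the matching handler.
static void on_readable(evutil_socket_t fd, short events, void *arg)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n <= 0)
        return;
    // ... feed buf into recognizer/handler callbacks ...
}

int main(void)
{
    struct event_base *base = event_base_new();

    // Watch stdin here for a self-contained demo; a networking library
    // would register its connected sockets instead.
    struct event *ev = event_new(base, STDIN_FILENO, EV_READ | EV_PERSIST,
                                 on_readable, NULL);
    event_add(ev, NULL);

    event_base_dispatch(base);  // the nc_talk() equivalent: run the loop
    event_free(ev);
    event_base_free(base);
    return 0;
}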
I am developing an experimental setup in C.
I am exploring the following scenario and need help understanding it.
I have a System A which runs a lot of applications that use cryptographic algorithms.
These crypto calls (OpenSSL calls) should instead be sent to another System B, which takes care of the cryptography.
Therefore, I have to forward any calls to the cryptographic (OpenSSL) engine via socket to the remote System B, which has OpenSSL support.
My plan is to have a small socket program on System A which forwards these calls to System B.
What I'm still unclear about is how to handle the received commands on System B.
Do I actually take these commands and translate them into the corresponding local OpenSSL calls? That would mean reimplementing on System B whatever is done on System A, right?
Or is there a way to tunnel these raw calls to the OpenSSL libs directly, just receive the result, and send it back to System A?
How do you think I should go about the problem?
PS: By the way, the cryptographic calls (like EngineUpdate, VerifyFinal, or Digest) on System A can be in either Java or C. I already wrote a Java/C program to send these commands to System B via sockets...
The problem is only on System B and how I have to handle the commands there.
You could use sockets on B, but that means you need to define a protocol for the communication. Alternatively, use RPC (remote procedure calls).
Examples for socket programming can be found here.
RPC is explained here.
The easiest (not to say "the easy", but still) way I can imagine would be to:
Write wrapper (proxy) versions of the libraries you want to make remote.
Write a server program that listens to calls, performs them using the real local libraries, and sends the result back.
Preload the proxy library before running any application where you want to do this.
Of course, there are many many problems with this approach:
It's not exactly trivial to define a serializing protocol for generic C function calls.
It's not exactly trivial to write the server, either.
Applications will slow down a lot, since the proxy calls need to be synchronous.
What about the security of the data on the network?
UPDATE:
As requested in a comment, I'll try to expand a bit. By "wrapper" I mean a new library that has the same API as another one, but does not in fact contain the same code. Instead, the wrapper library contains code to serialize the arguments, call the server, wait for a response, de-serialize the result(s), and present them to the calling program as if nothing had happened.
Since this involves a lot of tedious, repetitive and error-prone code, it's probably best to generate that code from a description of the API. Ideally you would derive the serialization from the original library's header file, but that (of course) requires quite heavy C parsing. Failing that, you might start bottom-up and create a small custom language to describe the calls, then use it to generate the serialization, de-serialization, and proxy code.
On Linux systems, you can control the dynamic linker so that it loads your proxy library instead of the "real" library. You could of course also replace (on disk) the real library with the proxy, but that would break every application that uses it whenever the server is down, which seems very risky.
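To give an idea of the mechanics, here is a sketch of an LD_PRELOAD interposer for a single OpenSSL function (EVP_DigestUpdate is just a convenient example); the real proxy would serialize the arguments and talk to System B instead of calling the local implementation:

// Build: gcc -shared -fPIC -o libproxy.so proxy.c -ldl
// Run:   LD_PRELOAD=./libproxy.so ./application
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stddef.h>

// Same name and signature as OpenSSL's EVP_DigestUpdate, so the dynamic
// linker resolves calls to us first. Here we just log and forward to the
// real function; the remote version would ship (ctx, data, len) to the
// server and return the server's result instead.
int EVP_DigestUpdate(void *ctx, const void *data, size_t len)
{
    static int (*real)(void *, const void *, size_t);
    if (!real)
        real = (int (*)(void *, const void *, size_t))
                   dlsym(RTLD_NEXT, "EVP_DigestUpdate");

    fprintf(stderr, "proxy: EVP_DigestUpdate(%zu bytes)\n", len);
    return real(ctx, data, len);
}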
So you basically have two choices, outlined by unwind and ammoQ respectively:
(1) Write a server and do the socket/protocol work etc. yourself. You can minimize some of the pain by using a serialization solution like Google's Protocol Buffers.
(2) Use an existing middleware solution, such as (a) message queues or (b) an RPC mechanism like CORBA and its many alternatives.
Either is probably more work than you anticipated. So really you have to answer this yourself. How serious is your project? How varied is your hardware? How likely is the hardware and software configuration to change in the future?
If this is more than a learning or pet project that you'll be bored with in a month or two, then an existing middleware solution is probably the way to go. The downside is a somewhat intimidating learning curve.
You can go the RPC route with CORBA, ICE, or whatever the Java solutions are these days (RMI? EJB?), among a bunch of others. This is an elegant solution, since your calls to the remote encryption machine appear to System A as simple function calls, with the middleware handling the data issues and sockets. But you aren't going to learn these technologies in a weekend.
Personally I would look to see if a message queue solution like AMQP would work for you first. There is less of a learning curve than RPC.
The user, administrators and support staff need detailed runtime and monitoring information from a daemon developed in C.
In my case, this information includes, for example:
the current system health, like throughput (MB/s), data already written, ...
the current configuration
I would use JMX in the Java world and the procfs (or sysfs) interface for a kernel module. A log file doesn't seem to be the best way.
What is the best way to provide such an information interface for a C daemon?
I thought about opening a socket and implementing a bare-bones HTTP or XML-RPC server, but that seems like overkill. What are the alternatives?
You can use a signal handler in your daemon that reacts to, say, USR1 and dumps information to the screen/log/net. This way, you can just send the process a USR1 signal whenever you need the info.
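A minimal sketch: the handler only sets a flag, and the main loop does the actual dumping, since doing real I/O inside a signal handler is unsafe:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t dump_requested;

// Async-signal-safe: just record that a dump was asked for.
static void on_sigusr1(int sig)
{
    (void)sig;
    dump_requested = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigusr1;
    sigaction(SIGUSR1, &sa, NULL);

    for (;;) {               // the daemon's main loop
        pause();             // or whatever the daemon normally blocks on
        if (dump_requested) {
            dump_requested = 0;
            // Dump throughput, bytes written, current configuration, ...
            fprintf(stderr, "status: throughput=..., written=...\n");
        }
    }
}

Triggering a dump is then just kill -USR1 <pid> from the shell.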
You could listen on a UNIX-domain socket and regularly write the current status (say, once a second) to anyone who connects to it. You don't need to implement a protocol like HTTP or XML-RPC: since the communication is one-way, just regularly write a single line of plain text containing the state.
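A sketch of that approach; the socket path and status values are made up, it serves one client at a time, and a real daemon would drive this from its event loop or a helper thread:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    signal(SIGPIPE, SIG_IGN);  // a disconnecting reader must not kill us

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/mydaemon.status", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);     // remove a stale socket from a previous run
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    for (;;) {
        int client = accept(srv, NULL, NULL);
        for (;;) {
            char line[128];
            int n = snprintf(line, sizeof(line),
                             "throughput=12.3MB/s written=456MB\n");
            if (write(client, line, n) < 0)
                break;         // reader went away; wait for the next one
            sleep(1);
        }
        close(client);
    }
}

Reading the status is then possible with a tool like nc -U /tmp/mydaemon.status.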
If you are using a relational database anyway, create another table and fill it with the current status as frequently as necessary. If you don't have a relational database, write the status to a file and implement some rotation scheme to avoid overwriting a file that somebody is reading at that very moment.
Write to a file. Use a file-locking protocol to force atomic reads and writes. Anything you agree on will work. There's probably a UUCP locking library floating around that you can use; in a previous life I found one for Linux. I've also implemented it from scratch, and it's fairly trivial to do.
Check out the lockdev(3) library on Linux. It's for devices, but it may work for plain files too.
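On Linux/BSD, flock() is enough for a simple reader/writer agreement. Here is a sketch of the writer side, where cooperating readers would take LOCK_SH before reading (flock() is BSD/Linux; POSIX fcntl(F_SETLKW) locks work similarly):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

// Rewrite the status file atomically with respect to cooperating readers:
// truncation happens only after the exclusive lock is held.
int write_status(const char *path, const char *status)
{
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    flock(fd, LOCK_EX);        // blocks while a reader holds LOCK_SH
    ftruncate(fd, 0);
    dprintf(fd, "%s\n", status);
    flock(fd, LOCK_UN);

    close(fd);
    return 0;
}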
I like the socket idea best. There's no need to support HTTP or any RPC protocol. You can create a simple application specific protocol that returns requested information. If the server always returns the same info, then handling incoming requests is trivial, though the trivial approach may cause problems down the line if you ever want to expand on the possible queries. The main reason to use a pre-existing protocol is to leverage existing libraries and tools.
Speaking of leveraging, another option is to use SNMP and access the daemon as a managed component. If you need to query/manage the daemon remotely, this option has its advantages, but otherwise can turn out to be greater overkill than an HTTP server.