Debug tool that captures the process name (not just the process ID) and can save the log to a file

Can anybody suggest a debug tool that captures the process name rather than just the process ID, and that can also save the log to a file?
I have come across TraceSpy, but it does not save the log.

Have a look at DebugView++ at https://github.com/djeedjay/DebugViewPP/
It can do what you asked and much more :)
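For reference, tools in this family capture strings that applications emit through the Win32 OutputDebugString API, and DebugView++ resolves the sending process's name rather than showing only the PID. A minimal C program to generate a test message:

#include <windows.h>

int main(void)
{
    /* Shows up in DebugView-style tools tagged with this process. */
    OutputDebugStringA("hello from my test process\n");
    return 0;
}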

MS Access - Linked Text File Error

I have an Access database that points to a linked text file saved on the network. I've set up the following:
A batch file that opens the DB and runs a macro.
A scheduled Windows task that runs daily to kick-start the batch file.
The process used to run with no issue, but lately I started getting this error message: "'M:\' is not a valid path. Make sure the path name is spelled correctly and you are connected to the server on which the file resides."
Please note the following:
The Windows scheduler runs with no issue.
The database opens.
The macro runs.
The process throws the above error message only at the step related to the linked text file.
The strange thing is that when I run the batch file manually, the process runs like clockwork and completes successfully (the txt file path is recognized).
Any idea how to deal with this issue?
Your assistance is appreciated.
Thanks,
Ben
It is likely that your M: drive is being mapped by a logon or start-up script. When Task Scheduler wakes the machine, your script runs before that drive mapping takes place. Another thing to watch for: the account that runs the scheduled task might not have the same drive mappings as you. Mapping the drive in the batch file itself (e.g. net use M: \\server\share) or linking the table via the full UNC path avoids the dependency on the logon script. Always test by triggering the scheduled task itself and investigate any failures; running the script outside the environment Task Scheduler supplies is only mildly diagnostic.
Thank you for your response.
Please note that I have a preceding VBA step that renames the same file in the same location, and it succeeds.
Furthermore, when I run the batch file manually, the location of the linked txt file is recognized.
Also, after getting the error message, I close the DB, open it again, click on the linked txt file, and it opens successfully.
It is very strange.

How to display information from processes which are running in session 0?

For those who don't know what SCCM is, a little background to better understand what I want to know: SCCM is an application with which you can deploy software packages. It is also possible to create a so-called "task sequence". A task sequence can contain multiple packages, which are installed one after the other.
The task sequence executes in session 0. The packages check whether certain processes are running; if they are, a window pops up asking the user to close the application.
Here comes the problem: if an administrator deploys packages using task sequences (and they do), the users won't see the window and won't close the required process. If the process is not closed, the script execution aborts.
I found this link and created a simple exe according to the description. This exe is able to start a process from session 0 in session 1 (or above), where the user is logged on (I know the security risks). So far so good, but how do I get the packages to display their windows? Obviously I could change the command line so my exe starts the installation of every package, but that is not an option; it would be too much work.
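For context, a minimal sketch of the kind of helper described, combining WTSQueryUserToken with CreateProcessAsUser (assumptions: it runs as LocalSystem, which WTSQueryUserToken requires; the function name is invented; error handling trimmed):

#include <windows.h>
#include <wtsapi32.h>
/* link against wtsapi32.lib and advapi32.lib */

BOOL launch_in_user_session(wchar_t *cmd_line)
{
    DWORD session = WTSGetActiveConsoleSessionId();
    HANDLE token = NULL;
    if (!WTSQueryUserToken(session, &token))   /* requires LocalSystem */
        return FALSE;

    STARTUPINFOW si = { sizeof si };
    si.lpDesktop = L"winsta0\\default";        /* the interactive desktop */
    PROCESS_INFORMATION pi;

    BOOL ok = CreateProcessAsUserW(token, NULL, cmd_line, NULL, NULL,
                                   FALSE, 0, NULL, NULL, &si, &pi);
    if (ok) {
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }
    CloseHandle(token);
    return ok;
}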
The ideal solution would be if my exe would be the first in the task sequence, it would do "something" so the windows could be visible.
And that's where i am stuck.
Does anyone has any idea how i could achieve what i want?
Thanks in advance!

How to read from a dynamic file in Groovy

I've got the log file for a program; the log records every task done by the software. I've modified the software to append a line such as "operations terminated" to the log when it is done executing. I am trying to write a Groovy script that looks for this "operations terminated" line in the log file. I am unsure how to proceed because the log file changes over time, and I want to catch "operations terminated" as soon as it appears; I don't know how to set up a recursive or continuous check. Suggestions? Thanks!
File monitoring may be a solution:
https://stackoverflow.com/a/495962/722948
You could then take the delta of the file and search for the specific line/word.
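A minimal Groovy sketch of that delta approach (the log path and poll interval are placeholders): remember how far you have read, re-read only the newly appended bytes on each pass, and stop once the sentinel line appears.

// Poll the growing log, reading only the bytes appended since the last pass.
def raf = new RandomAccessFile('/path/to/program.log', 'r')
long offset = 0
boolean done = false
while (!done) {
    raf.seek(offset)
    String line
    while ((line = raf.readLine()) != null) {
        if (line.contains('operations terminated')) {
            done = true
            break
        }
    }
    offset = raf.filePointer   // resume here on the next pass
    if (!done) {
        sleep 1000             // poll once per second
    }
}
raf.close()
println 'operations terminated found'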

Building an "odometer" for time spent on a server

I want to build an odometer to keep track of how long I've been on a server since I last reset the counter.
Recently I've been logging quite a bit of time working on one of my school's Unix servers and began wondering just how much time I had racked up in the last couple of days. I started trying to think of how I could go about writing either a Bash script or C program to run when my .bash_profile was loaded (i.e., when I ssh into the server), background itself, and save the elapsed time to a file when I closed the session.
I know how to make a program run when I log in (through .bash_profile) and how to background a C program (by forking?), but I am unsure how to detect that the ssh session has been terminated (perhaps by watching the sshd process?).
I hope this is the right stack exchange to ask how you would go about something like this and appreciate any input.
Depending on your shell, you may be able to just spawn a process in the background when you log in, and then handle the kill signal when the parent process (the shell) exits. It wouldn't consume resources, you wouldn't need root privileges, and it should give a fairly accurate report of your logged in time.
You may need to use POSIX semaphores to handle the case of multiple shells logged in simultaneously.
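A minimal sketch of that idea, assuming the program is started from .bash_profile as "odometer &" and that the dying session delivers SIGHUP (both assumptions worth verifying on your system); the output file name is made up:

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t got_hup = 0;

static void on_hup(int sig)
{
    (void)sig;
    got_hup = 1;                   /* just set a flag; stay signal-safe */
}

int main(void)
{
    time_t start = time(NULL);
    signal(SIGHUP, on_hup);

    while (!got_hup)
        pause();                   /* sleep until a signal arrives */

    FILE *f = fopen(".server_odometer", "a");   /* hypothetical file name */
    if (f) {
        fprintf(f, "%ld\n", (long)(time(NULL) - start));
        fclose(f);
    }
    return 0;
}

Summing the recorded session lengths then gives the odometer reading; multiple simultaneous logins simply append separate lines.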
Have you considered writing a script that can be run by cron every minute, running "who", looking at its output for lines with your uid in them, and bumping a counter if it finds any? (Use "crontab -e" to edit your crontab.)
Even just a line in crontab like this:
* * * * * (date; who | grep $LOGNAME)>>$HOME/.whodata
...would create a log you could process later at your leisure.

Daemon logging in Linux

So I have a daemon running on a Linux system, and I want to have a record of its activities: a log. The question is, what is the "best" way to accomplish this?
My first idea is to simply open a file and write to it.
FILE* log = fopen("logfile.log", "w");
/* daemon works...needs to write to log */
fprintf(log, "foo%s\n", (char*)bar);
/* ...all done, close the file */
fclose(log);
Is there anything inherently wrong with logging this way? Is there a better way, such as some framework built into Linux?
Unix has long had a special logging framework called syslog. Type
man 3 syslog
in your shell and you'll get the help for its C interface.
An example:
#include <stdio.h>
#include <unistd.h>
#include <syslog.h>

int main(void) {
    openlog("slog", LOG_PID | LOG_CONS, LOG_USER);
    syslog(LOG_INFO, "A different kind of Hello world ...");
    closelog();
    return 0;
}
This is probably going to be a one-horse race, but yes, the syslog facility, which exists in most if not all Un*x derivatives, is the preferred way to go. There is nothing wrong with logging to a file, but it does leave a number of tasks on your shoulders:
is there a file system at your logging location with room to save the file?
what about buffering (for performance) vs. flushing (to get logs written before a system crash)?
if your daemon runs for a long time, what do you do about the ever-growing log file?
Syslog takes care of all this, and more, for you. The API is similar to the printf clan, so you should have no problems adapting your code.
One other advantage of syslog in larger (or more security-conscious) installations: The syslog daemon can be configured to send the logs to another server for recording there instead of (or in addition to) the local filesystem.
It's much more convenient to have all the logs for your server farm in one place rather than having to read them separately on each machine, especially when you're trying to correlate events on one server with those on another. And when one gets cracked, you can't trust its logs any more... but if the log server stayed secure, you know nothing will have been deleted from its logs, so any record of the intrusion will be intact.
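In classic syslogd configuration, that forwarding is a one-line change (the host name here is a placeholder):

# /etc/syslog.conf: also send every message to a central log host (UDP port 514)
*.*    @loghost.example.com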
I spit a lot of daemon messages out to daemon.info and daemon.debug when I am unit testing. A line in your syslog.conf can stick those messages in whatever file you want.
http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/040/4036/4036s1.html has a better explanation of the C API than the man page, imo.
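A selector line like the following routes those daemon-facility messages (debug and above) to a file of your choice; the path is a placeholder:

# /etc/syslog.conf: daemon-facility messages at debug and above go to one file
daemon.debug    /var/log/mydaemon.log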
Syslog is a good option, but you may wish to consider looking at log4c. The log4[something] frameworks work well in their Java and Perl implementations, and allow you to choose, from a configuration file, whether to log to syslog, the console, flat files, or user-defined log writers. You can define specific log contexts for each of your modules, have each context log at a different level as defined by your configuration (trace, debug, info, warn, error, critical), and have your daemon re-read that configuration file on the fly by trapping a signal, allowing you to manipulate log levels on a running server.
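A minimal sketch of what using log4c looks like (assuming the library is installed and a category named "mydaemon" is configured in the log4crc file):

#include <log4c.h>
#include <unistd.h>

int main(void)
{
    if (log4c_init())              /* reads the log4crc configuration */
        return 1;

    /* category name is an example; levels and appenders come from config */
    log4c_category_t *cat = log4c_category_get("mydaemon");
    log4c_category_log(cat, LOG4C_PRIORITY_INFO,
                       "daemon started, pid %d", (int)getpid());

    log4c_fini();
    return 0;
}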
As stated above, you should look into syslog. But if you want to write your own logging code, I'd advise you to use the "a" (append) mode of fopen, as in the sketch below.
A few drawbacks of writing your own logging code are: log rotation handling, locking (if you have multiple threads), and synchronization (do you want to wait for the logs to be written to disk?). One of the drawbacks of syslog is that the application doesn't know whether the logs have actually been written to disk (they might have been lost).
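A minimal sketch of the append approach (the helper name and log path are made up):

#include <stdio.h>
#include <time.h>

/* log_line: open in append mode, timestamp the message, close.
   Closing after every message flushes it, at some cost in performance. */
static void log_line(const char *path, const char *msg)
{
    FILE *f = fopen(path, "a");
    if (!f)
        return;
    time_t now = time(NULL);
    char stamp[32];
    strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", localtime(&now));
    fprintf(f, "%s %s\n", stamp, msg);
    fclose(f);
}

int main(void)
{
    log_line("daemon.log", "daemon started");
    return 0;
}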
If you use threading and you use logging as a debugging tool, you will want a logging library that uses some sort of thread-safe but lock-free ring buffers: one buffer per thread, with a global lock taken only when strictly needed.
This keeps logging from seriously slowing down your software, and it avoids creating heisenbugs that change behavior when you add debug logging.
If it has a high-speed compressed binary log format that doesn't waste time with format operations during logging and some nice log parsing and display tools, that is a bonus.
I'd provide a reference to some good code for this but I don't have one myself. I just want one. :)
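For illustration, a very rough sketch of the per-thread ring idea (all names invented; a real library would also register each thread's ring in a global list so a dump routine, locking only then, can walk them):

#include <string.h>

#define RING_SLOTS 256
#define SLOT_LEN   128

/* One fixed-size ring per thread: writers never take a lock; once the
   ring wraps, the oldest entries are simply overwritten. */
typedef struct {
    char     slots[RING_SLOTS][SLOT_LEN];
    unsigned head;                 /* next slot to write */
} ring_t;

static __thread ring_t my_ring;    /* thread-local (GCC extension) */

static void ring_log(const char *msg)
{
    unsigned i = my_ring.head++ % RING_SLOTS;
    strncpy(my_ring.slots[i], msg, SLOT_LEN - 1);
    my_ring.slots[i][SLOT_LEN - 1] = '\0';
}

int main(void)
{
    ring_log("a debug message that costs little more than a memcpy");
    return 0;
}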
Our embedded system doesn't have syslog so the daemons I write do debugging to a file using the "a" open mode similar to how you've described it. I have a function that opens a log file, spits out the message and then closes the file (I only do this when something unexpected happens). However, I also had to write code to handle log rotation as other commenters have mentioned which consists of 'tail -c 65536 logfile > logfiletmp && mv logfiletmp logfile'. It's pretty rough and maybe should be called "log frontal truncations" but it stops our small RAM disk based filesystem from filling up with log file.
There are a lot of potential issues: for example, if the disk is full, do you want your daemon to fail? Also, opening with "w" means you overwrite your file every time the daemon restarts. Often a circular file is used, so that you have a fixed amount of space allocated on the machine for your logs but can still keep enough history to be useful without taking up too much space.
There are tools like log4c that can help you. If your code is C++, then you might consider log4cxx from the Apache project (apt-get install liblog4cxx9-dev on Ubuntu/Debian), but it looks like you are using C.
So far nobody has mentioned the Boost.Log library, which has a nice and easy way to redirect your log messages to files, a syslog sink, or even the Windows event log.
