Licensing and auto-delete of a program - licensing

I have an application which gets copied to and run on client machines. The program is in the form of an Adobe Projector file. I want to write a process that checks, when the program starts running, whether or not the license is still active, and if not, deletes the entire program.
The program itself knows the real date on which it was installed, and since we install the program ourselves for the clients, we can ensure that at install time the date on the client's computer matches. Every time they start the program, it compares the current date with the date they last ran the program. If today's date is after that date, it subtracts the number of elapsed days from the number of days remaining. If today's date is before the date it was last run, it penalizes the client by a constant number of days (this is to discourage the client from resetting the date on the computer so that their license never expires).
If they were to copy the entire directory over to a new machine, the installation date inside the program would not match the created date on any of the files in the directory, and it would treat this case like an expired license.
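In rough C, the countdown-and-penalty logic I described looks something like this (the function name, the penalty value, and how last_run/days_left are persisted are just placeholders, not our actual code):

```c
/* Rough sketch of the countdown logic described above; the penalty value
 * and function name are placeholders, and persisting last_run/days_left
 * is handled elsewhere. */
#include <time.h>

#define CLOCK_ROLLBACK_PENALTY_DAYS 30   /* arbitrary example value */

/* Returns the updated number of days remaining on the license. */
int update_days_remaining(time_t last_run, time_t now, int days_left)
{
    double elapsed_days = difftime(now, last_run) / 86400.0;

    if (elapsed_days >= 1.0) {
        /* Normal case: burn off the whole days that have passed. */
        days_left -= (int)elapsed_days;
    } else if (elapsed_days < 0.0) {
        /* The clock was set backwards: apply the fixed penalty. */
        days_left -= CLOCK_ROLLBACK_PENALTY_DAYS;
    }
    return days_left > 0 ? days_left : 0;
}
```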
My question is this: is there a simple way to script this to run every time they start the application? We currently create a shortcut which could be pointed to a batch job, but what do you put in the batch job? Or is this approach for licensing completely wrong? People who buy this program will only buy time-limited licenses, and the program is run by copying and pasting a directory onto the target machine.

I believe you are overcomplicating this. Why not make it like a trial version that expires n days after installation, first use, or whatever you choose?
About the delete-the-exe approach: be careful, this could be an illegal intervention on somebody else's computer.

Not running software on a system when the trial period has expired is accepted today. I don't think actually deleting the program would be. I know I would never use such an application again.

Related

An unknown process is overwriting the contents of my application's files with 0x00 bytes

Summary
I have developed a Windows application that saves configuration files every time it runs. Rarely, when reading those files back in, I discover that they are filled with zeroes (0x00 bytes). I've determined that they are zeroed out long after my application has exited by an unknown process. What could this process be, and why is it zeroing out my files?
Details
Although rare, I am not the only one having this problem. For a broader perspective, see: Very strange phenomenon: output file filled or truncated with binary zeros. What could cause that?
I'm using Visual Studio 2017 C++, but files are opened with fopen(), written to with a single fwrite() call, and closed using fclose(). All calls are error-checked (other affected people report that they use CFile).
I confirm the validity of each save by immediately reading it back in and checking file contents (via hashes/checksums). I'm certain that the file is indeed written to the disk correctly.
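Stripped of error reporting, and with a plain byte-compare standing in for the hashes, the save-and-verify step looks roughly like this (a simplified sketch, not the exact code):

```c
/* Sketch of the save-and-verify pattern described above: write the file,
 * then immediately read it back and compare against what was written. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int save_and_verify(const char *path, const void *data, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    if (fwrite(data, 1, len, f) != len) { fclose(f); return -1; }
    if (fclose(f) != 0) return -1;

    /* Read the file back immediately and compare against what we wrote. */
    f = fopen(path, "rb");
    if (!f) return -1;
    void *check = malloc(len);
    size_t got = check ? fread(check, 1, len, f) : 0;
    fclose(f);

    int ok = (check && got == len && memcmp(check, data, len) == 0);
    free(check);
    return ok ? 0 : -1;
}
```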
At some point before the next invocation of my application, some unknown process writes to the file and fills it with zeroes (the file size/length stays the same, but all bytes are 0x00).
How I know this: Every time I save a file, I also save in the Registry the file's 3 GetFileTime() times (Creation, Last Access, Last Write). When I detect a corruption (zero-out), I compare the current GetFileTime() times with the ones stored in the Registry. In all cases, the "Last Access" and "Last Write" times have changed.
Something has written to my files a random amount of time later (anywhere between 10 minutes and 24+ hours, although that number is obviously also skewed by how frequently the application is executed).
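For reference, that GetFileTime() comparison is roughly the following (the helper that loads the times recorded at save time is stubbed out here; the real code reads them back from the Registry):

```c
/* Sketch of the GetFileTime() comparison described above. The helper that
 * loads the times recorded at save time is stubbed out; the real code
 * reads them from the Registry. */
#include <windows.h>
#include <stdio.h>

/* Stub: in the real application this returns the FILETIME stored at the
 * last successful save. */
static BOOL load_saved_write_time(const wchar_t *path, FILETIME *savedWrite)
{
    (void)path; (void)savedWrite;
    return FALSE;
}

void check_for_tampering(const wchar_t *path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return;

    FILETIME creation, lastAccess, lastWrite, savedWrite;
    if (GetFileTime(h, &creation, &lastAccess, &lastWrite) &&
        load_saved_write_time(path, &savedWrite) &&
        CompareFileTime(&lastWrite, &savedWrite) != 0) {
        wprintf(L"%ls: last-write time changed since our last save\n", path);
    }
    CloseHandle(h);
}
```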
Some more information:
I can't reproduce the problem on my development machine -- all reports come from clients (application users) via remote logging and crash reports (I've been trying to solve this for months). This makes it impossible to run the application in a controlled test harness (e.g. a debugger, procmon, the security policy's "Audit object access", etc.)
Upon detecting a zero-out, I immediately log the list of currently running processes using CreateToolhelp32Snapshot() -- no services (a sketch of this enumeration follows this list). However, no common process stands out. If I exclude processes that also run on my development machine, then the next most common process (it's a Windows process) is only executing on 50% of the clients. Avast antivirus (a potential culprit) only exists on 35% of clients.
I've tried bracketing each save with "canary" files that are saved immediately before and after the actual file. The canary files contain plain text, random binary data, or actual application data (encrypted or not). They are all zeroed out regardless.
Whenever more than one of my files gets zeroed out, they ALL get zeroed out simultaneously (their "Last Access"/"Last Write" times are within 100 ms of each other across all files).
I've tried compiling the application both with Visual Studio 2017 using CL and with Visual Studio 2019 using LLVM. Files from both builds get zeroed out.
Zero-outs still happen with a frequency of about 1 per day for every 1000 daily application executions (i.e., daily active users).
Zero-outs are random and once-only. It could be months before it occurs for a given client. And after it does, it never happens again.
Affects both Windows 7 and Windows 10, even if no antivirus is installed.
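For completeness, the process-list logging mentioned above is just the standard Toolhelp snapshot enumeration, roughly:

```c
/* Sketch of the process-list logging: enumerate all processes via a
 * Toolhelp snapshot and record their PIDs and executable names. */
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

void log_running_processes(void)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE) return;

    PROCESSENTRY32W pe = { 0 };
    pe.dwSize = sizeof(pe);

    if (Process32FirstW(snap, &pe)) {
        do {
            wprintf(L"%6lu %ls\n", pe.th32ProcessID, pe.szExeFile);
        } while (Process32NextW(snap, &pe));
    }
    CloseHandle(snap);
}
```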

Application calls old source functions

There is an application on a remote machine running Linux (Fedora) that writes to a log file when certain events occur. Some time ago I changed the format of the message being written to the log file. But recently it turned out that, for some reason, in some seldom cases log files with old-format messages appear there. I know for sure that no part of my code can write such strings. Also, there is no instance of the old application running. Does anyone have any idea why this can happen? It's not possible to check which process writes those files because nothing like auditctl is installed there, and there is no package manager or yum to get or install it. The application is written in C.
You can use the fuser command to find all of the processes that are currently using that file:
`fuser file.log`
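If fuser isn't available either, the same information can be scraped by hand from /proc. A rough sketch in C (it only catches a process while it actually holds the file open, so it may need to be run repeatedly or in a loop):

```c
/* Minimal fuser-like scan: walk /proc and report which processes
 * currently have the given file open. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>
#include <ctype.h>
#include <limits.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    char target[PATH_MAX];
    if (!realpath(argv[1], target)) { perror("realpath"); return 1; }

    DIR *proc = opendir("/proc");
    if (!proc) { perror("/proc"); return 1; }

    struct dirent *p;
    while ((p = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)p->d_name[0]))
            continue;                        /* not a PID directory */

        char fddir[PATH_MAX];
        snprintf(fddir, sizeof(fddir), "/proc/%s/fd", p->d_name);
        DIR *fds = opendir(fddir);
        if (!fds)
            continue;                        /* gone, or no permission */

        struct dirent *f;
        while ((f = readdir(fds)) != NULL) {
            char link[PATH_MAX], dest[PATH_MAX];
            snprintf(link, sizeof(link), "%s/%s", fddir, f->d_name);
            ssize_t n = readlink(link, dest, sizeof(dest) - 1);
            if (n > 0) {
                dest[n] = '\0';
                if (strcmp(dest, target) == 0)
                    printf("pid %s has %s open\n", p->d_name, target);
            }
        }
        closedir(fds);
    }
    closedir(proc);
    return 0;
}
```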

How to display information from processes which are running in session 0?

For those who don't know what SCCM is, a little background to better understand what I want to know: SCCM is an application with which you can deploy software packages. It is also possible to create a so-called "Task Sequence". A task sequence can contain multiple packages, which will be installed one after the other.
Task sequence execution occurs in session 0. Of course, the packages check whether certain processes are running. If they are, a window pops up asking the user to close the application.
Here is the problem: if an administrator deploys packages using task sequences (and they do), the users won't see the window and won't close the required process. If the process is not closed, the script execution aborts.
I found this link and created a simple exe according to the description. This exe is able to start a process from session 0 in session 1 (or above), where the user is logged on (I know the security risks). So far so good, but how do I get the packages to display their windows? Obviously I could change the command line so that my exe starts the installation of every package, but this is not an option; it would be too much work.
The ideal solution would be for my exe to be the first step in the task sequence and do "something" so that the windows become visible.
And that's where I am stuck.
Does anyone have any idea how I could achieve what I want?
Thanks in advance!
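For context, the "simple exe" mentioned above does roughly the following (a stripped-down sketch, assuming it runs as LocalSystem; error handling is reduced to the bare minimum):

```c
/* Sketch of launching a process on the logged-on user's desktop from
 * session 0. Assumes the caller runs as LocalSystem. */
#include <windows.h>
#include <wtsapi32.h>
#include <userenv.h>
#pragma comment(lib, "wtsapi32.lib")
#pragma comment(lib, "userenv.lib")
#pragma comment(lib, "advapi32.lib")

BOOL launch_in_user_session(wchar_t *commandLine)
{
    DWORD sessionId = WTSGetActiveConsoleSessionId();
    HANDLE userToken = NULL;

    /* Get the token of the user logged on to the console session. */
    if (!WTSQueryUserToken(sessionId, &userToken))
        return FALSE;

    LPVOID env = NULL;
    CreateEnvironmentBlock(&env, userToken, FALSE);

    STARTUPINFOW si;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    si.lpDesktop = L"winsta0\\default";   /* the interactive desktop */

    PROCESS_INFORMATION pi;
    ZeroMemory(&pi, sizeof(pi));

    BOOL ok = CreateProcessAsUserW(userToken, NULL, commandLine, NULL, NULL,
                                   FALSE, CREATE_UNICODE_ENVIRONMENT, env,
                                   NULL, &si, &pi);
    if (ok) {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    if (env) DestroyEnvironmentBlock(env);
    CloseHandle(userToken);
    return ok;
}
```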

How to automate JCL to run a COBOL program on a mainframe

We have a COBOL batch program that we are able to execute manually from JCL. We want to automate this process so that it can execute every 15 minutes.
Is there a way to automate the execution of a batch program on the mainframe?
I'm a PC guy and I know in windows I can create a .BAT file and set it up in Task Scheduler to run every 15 minutes. I'm essentially trying to do the same thing on the mainframe.
Is there a way to automate the execution of a batch program on the mainframe?
Yes.
Many mainframe shops have job schedulers. Control-M from BMC is one, ASG has Zeke, there are others.
Having said that, it sounds like the application in question is written to periodically poll for some event. Mainframes typically have better ways of accomplishing tasks people normally solve via polling. Event monitoring, for example.
Mainframe scheduling software such as Control-M from BMC, Zeke from ASG, CA7 from CA, or IBM TWS for z/OS (formerly OPC/A) can be used to schedule a job every 15 minutes.
You could add a job for every 15-minute period, or have the first step of each job add the one that will run in the next 15 minutes.
Pros
Operators will be notified if the job fails
Cons
You will end up with a lot of copies of the same job in the schedule
In TWS for z/OS (the one I know), you would need to add nearly 96 jobs and set the corresponding times for them
The option I would recommend is using an automation product such as System Automation from IBM, Control-O from BMC, or OPS from CA.
With any of the above automation products you could set up a started task and have them start it every 15 minutes. It is much easier; for example, in System Automation it takes a single panel to set up a started task to run every 15 minutes.
If you wanted to know when it fails, you could also use the automation product to schedule it in any of the above schedulers.
There are so many solutions to this; it really depends on what you are monitoring. That is besides the standard "use a job scheduler like CA7" approach (which has the disadvantage of so many of the same jobs running during the day; it is just kind of messy).
You could define an address space (started task) that invokes your COBOL code, and within your COBOL code have it sleep (i.e. wait on a timer) for 15 minutes, wake up, check whatever it needs to, and go back to sleep. Alternatively, run the job under JES2, but you might have to do a little extra work so that JES keeps the job active all day!
If this code finds a problem, it can also issue a console message (you might have to write a little bit of assembler code to issue a WTO or WTOR), so that the operator either knows (WTO) or knows and has to reply (WTOR, write to operator with reply).
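Just to show the shape of that loop, here is a sketch in C rather than COBOL (check_whatever() is a placeholder for whatever you actually monitor):

```c
/* Shape of the long-running "wake every 15 minutes" started task described
 * above, sketched in C rather than COBOL. check_whatever() is a placeholder. */
#include <unistd.h>

static int check_whatever(void)
{
    return 0;   /* placeholder: return nonzero when something is wrong */
}

int main(void)
{
    for (;;) {
        if (check_whatever()) {
            /* On z/OS the real code would issue a WTO/WTOR here so the
             * operator sees (and possibly has to answer) a console message. */
        }
        sleep(15 * 60);   /* wait a quarter of an hour and check again */
    }
}
```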

Building an "odometer" for time spent on a server

I want to build an odometer to keep track of how long I've been on a server since I last reset the counter.
Recently I've been logging quite a bit of time working on one of my school's Unix servers and began wondering just how much time I had racked up in the last couple of days. I started trying to think of how I could go about writing either a Bash script or a C program that would run when my .bash_profile was loaded (i.e. when I ssh into the server), background itself, and save the time to a file when I closed the session.
I know how to make a program run when I log in (through .bash_profile) and how to background a C program (by way of forking?), but I am unsure how to detect that the ssh session has been terminated (perhaps by watching the sshd process?).
I hope this is the right stack exchange to ask how you would go about something like this and appreciate any input.
Depending on your shell, you may be able to just spawn a process in the background when you log in, and then handle the kill signal when the parent process (the shell) exits. It wouldn't consume resources, you wouldn't need root privileges, and it should give a fairly accurate report of your logged in time.
You may need to use POSIX semaphores to handle the case of multiple shells logged in simultaneously.
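Here is a minimal sketch of that idea in C. It is launched from .bash_profile as something like ./odometer &; instead of catching a signal it simply polls getppid() to notice that the login shell has exited, then appends the session length to a file (the file name and polling interval are arbitrary choices):

```c
/* odometer.c -- rough sketch: remember when we were started, wait for the
 * login shell (our parent) to exit, then append the session duration to
 * ~/.odometer. Launch from .bash_profile as "./odometer &". */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <signal.h>

int main(void)
{
    time_t start = time(NULL);
    pid_t shell = getppid();      /* the login shell that spawned us */

    signal(SIGHUP, SIG_IGN);      /* survive the hangup when the tty closes */

    /* When the shell exits we get re-parented (to init/systemd), so just
     * poll until our parent changes. */
    while (getppid() == shell)
        sleep(5);

    long elapsed = (long)difftime(time(NULL), start);

    const char *home = getenv("HOME");
    if (!home)
        return 1;

    char path[4096];
    snprintf(path, sizeof(path), "%s/.odometer", home);

    FILE *f = fopen(path, "a");
    if (f) {
        fprintf(f, "%ld seconds\n", elapsed);
        fclose(f);
    }
    return 0;
}
```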
Have you considered writing a script that can be run by cron every minute, running "who", looking at its output for lines with your uid in them, and bumping a counter if it finds any? (Use "crontab -e" to edit your crontab.)
Even just a line in crontab like this:
* * * * * (date; who | grep $LOGNAME)>>$HOME/.whodata
...would create a log you could process later at your leisure.
