I have a batch file FolderWatcher.bat.
I created a task for it with Task Scheduler in Windows 7. I need it to start at a specific time (once) and then not stop until the user wants it to stop, i.e. start once and keep running for days on end.
UAC is off and I am Admin.
Any idea how this can be done?
http://technet.microsoft.com/en-us/library/cc722178.aspx
A task's settings are displayed on the Settings tab of the Task Properties or Create Task dialog box. The following list contains the descriptions of task settings.
Stop the task if it runs longer than: time period
This setting allows you to limit the amount of time a task is allowed to run. Use this setting to limit tasks that might take a long period of time to execute, causing an inconvenience to the user.
Just uncheck this setting and you should be good.
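If you deploy the task by script rather than through the GUI, the same setting lives in the task's XML definition (exportable with schtasks /query /xml and re-importable with schtasks /create /xml). A duration of PT0S disables the limit; a sketch of the relevant fragment (the surrounding task definition is omitted):

```xml
<!-- Fragment of a Task Scheduler 2.0 task definition.            -->
<!-- PT0S disables "Stop the task if it runs longer than".        -->
<Settings>
  <ExecutionTimeLimit>PT0S</ExecutionTimeLimit>
  <!-- Optional: also prevent the service from force-killing it. -->
  <AllowHardTerminate>false</AllowHardTerminate>
</Settings>
```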
I'm writing a program to "break" one of my company's devices. The issue is that too much TCP traffic shuts the device's port down, and we need to replicate that. Originally my thought was to use a for loop in C, but it's probably more representative of the failures we're seeing if there are many simultaneous connections. So my thought now is to create a bash script that executes the program multiple times.
The real issue is that we are seeing the failures occur because TCP connections are being opened and closed for every command that comes in.
So I have a simple C program that opens a connection, sends a command to the device, gets the response, and then closes the connection. My original thought was a for loop inside the C code that performs all the connections, but those would not run simultaneously. So I wonder how much, if anything, I lose with a bash script that executes the compiled C program in a for loop.
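One way to get real overlap from the shell is to background each invocation with & and then wait for all of them. A minimal sketch (./tcp_client, the host, and the port are placeholders for your compiled program and device):

```shell
#!/bin/sh
# run_parallel N CMD... : start N copies of CMD at (roughly) the same
# time and block until all of them have exited.
run_parallel() {
    n=$1; shift
    i=0
    while [ "$i" -lt "$n" ]; do
        "$@" &              # backgrounded, so the copies overlap
        i=$((i + 1))
    done
    wait                    # collect every background job
}

# e.g. 50 simultaneous client runs against the device (placeholders):
# run_parallel 50 ./tcp_client 192.168.1.10 5000
```

Each instance is a separate process, so the connections really are open at the same time, unlike a serial loop inside the C program. The shell imposes no hard cap on N, but your process ulimit (and the device itself) might.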
Suppose I have written some C programs, and the OS has an SJF (shortest job first) scheduling policy. How will the CPU decide the execution time of all the processes before actually executing them? For example, in SJF, whichever process is shortest gets executed first every time.
As commented below, apparently Linux does not have a job control scripting language, so you should probably remove that tag, as well as the C tag.
On systems with job scheduling, there's some type of job control scripting language where the estimated run time is included in the information needed to run the job.
Example Wiki articles:
http://en.wikipedia.org/wiki/Job_Control_Language
In this case, the estimated time is specified as a job parameter:
/*JOBPARM TIME=10
for a time estimate of 10 minutes. On this web page, scroll down to the TIME parameter description:
http://www-01.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieab600/iea3b6_Parameter_definition5.htm
Based on the description, if the time is exceeded, the operator is notified. I'm not sure what happens on unattended systems.
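In other words, the scheduler does not compute run times itself; it just dispatches jobs in ascending order of the estimate each job declared. A toy illustration (the job names and times are made up):

```shell
# Three hypothetical jobs with declared time estimates, in minutes.
# SJF dispatch order is simply ascending order of the estimate.
printf '%s\n' 'backup 30' 'report 5' 'index 12' | sort -k2 -n
```

This prints report, then index, then backup. Interactive OSes like Linux don't work this way: classic textbook approximations of SJF predict a process's next CPU burst from its previous ones, rather than relying on a declared estimate.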
I am using a Windows 8.1 machine to remote-access a Windows 7 machine through Remote Desktop Connection. Currently, the only way I have found to exit the RDP session is to hover the mouse cursor at the top of the screen, wait for the dropdown bar to appear, and click the "close" button.
Is there another way to exit an RDP session from within the session, say through the command line or a keyboard shortcut? On my local machine, I notice that I can also kill the mstsc.exe process to exit it.
Start -> Windows Security -> Disconnect
Also, tsdiscon from command prompt or run dialog.
Apologies if this question seems rather trivial, but it's causing me some frustration.
I have a Red Hat 5.3 installation with two filesystems mounted, using nautilus-2.16.2-7.el5. Under user1, when sending a file to the trash (pressing Del) on filesystem A, we receive a confirmation dialogue box ("do you want to" etc.). The behaviour is the same on filesystem B.
However, under user2, we receive the confirmation on Del on filesystem A, but NOT on filesystem B.
I've tried renaming the ~/.gconf/apps/nautilus folder and logging out/in to reset the Nautilus settings, but it's still behaving the same.
This is basically leading to users accidentally deleting data, which isn't great!
Any advice would be appreciated folks!
D
Turns out that creating a new user profile and copying the settings corrected the problem (even though there was nothing filesystem specific in the gconf file).
cp /home/new_username/.gconf/apps/nautilus/preferences/%gconf.xml /home/username/.gconf/apps/nautilus/preferences/
I am running a website on Django, Postgres and Apache.
Recently, due to a sudden surge in traffic, the site went down. On checking the server logs I found that the maximum connection limit had been exceeded. Looking into it further, I found that the max_connections parameter in postgresql.conf controls how many simultaneous connections can be made to the DB at any point in time.
The current value in my postgresql.conf is 100.
The event that brought the site down is not a commonly occurring one, but I want to be prepared for the next time it happens.
So I am seeking advice on how to monitor the active connections on a regular day, how much I should increase max_connections by, and which other parameters in postgresql.conf need to change in step with it, since it seems I would have to increase other values (like shared_buffers) accordingly.
Please take a look at the relevant wiki article: http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
In general it's best not to bump max_connections up too much. Use a connection pool like PgBouncer, or a pool inside your server, instead.
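For the monitoring part of the question: current connection usage is exposed in the pg_stat_activity view, so a sketch like the following can be run by hand or from cron on the DB host (the database name mydb is a placeholder, and column details vary a little between PostgreSQL versions):

```shell
# How many backends are connected right now, vs. the configured ceiling.
psql -At -c "SELECT count(*) FROM pg_stat_activity;" mydb
psql -At -c "SHOW max_connections;" mydb

# Breakdown by user, to see who is holding the connections.
psql -c "SELECT usename, count(*) FROM pg_stat_activity
         GROUP BY usename ORDER BY 2 DESC;" mydb
```

Logging those counts over a normal day gives you a baseline to size max_connections against. Note that each allowed connection costs server memory, which is why shared_buffers and kernel settings tend to move together with it, and why the wiki article's advice to pool connections rather than raise the ceiling is usually the better fix.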