How to have console output for Intel ManyCore Lab batch jobs? - batch-file

I'm currently testing an OpenMP parallel program on Intel's ManyCore Testing Lab computers, and have been using
qsub -l select=1:ncpus=30 $HOME/myjob
to add the job and run it. It puts the program's output into a file called myjob.o123456 (where the numbers depend on the job ID), but I'd like it to write to the console while the job is running so that I can follow the progress my program is making. Does anybody know how to do this?

Take a look at Interactive Jobs in TORQUE:
http://docs.adaptivecomputing.com/torque/help.htm#topics/commands/qsub.htm#-I
Basically, just add -I to get an interactive shell on the node:
qsub -I -l select=1:ncpus=30 $HOME/myjob
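With -I the scheduler gives you a shell on the allocated node instead of running the job in the background; the script name, if you pass one, is typically only read for its resource directives. A session might look roughly like this (a sketch, not verified on the ManyCore Lab itself):
qsub -I -l select=1:ncpus=30
# ...wait for the scheduler to place you on a compute node, then at that node's prompt:
cd $HOME
./myjob            # output now streams directly to your terminal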
Note: if you are using TORQUE 4.x, make sure you are on 4.2.2 or 4.1.5.1 (or later), because there was recently a bug with interactive MPI jobs.

Related

Running an exe on multiple PCs in sync

I'm trying to run an exe on multiple PCs in sync.
I'm using psexec; this is what I have so far:
I have a batch file with this:
start psexec \\pc01 -i -s -d c:\videos360\video360.exe
start psexec \\pc02 -i -s -d c:\videos360\video360.exe
With this I can start the exe on the two PCs, but never completely in sync.
Does anyone have an idea how I can make them run more closely in sync?
Thanks in advance.
Sorry for my bad English...
First sync the clocks on both machines. You can run a script on one of them to sync to the other or have them both sync to a central time source. Then add a task to Task Scheduler on each machine to start the application at the same time. That's about as close as you're going to get without resorting to some sort of IPC mechanism between the processes (requires source code access to video360.exe).
See
schtasks.exe
Windows time service tools
You won't need psexec, because schtasks can be used to manage tasks on the remote machines. It would be up to your script to change the next time the task fires, or you could set up a repetitive task that fires every minute or two and just enable/disable it. I believe there's a one-shot option as well.
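As a rough sketch of that approach (machine names, task name and times are placeholders, and the exact w32tm/schtasks options may vary by Windows version):
rem resync the clock against the configured time source (run on each machine)
w32tm /resync
rem create a one-shot task on each machine that fires at the same wall-clock time
schtasks /Create /S pc01 /TN Video360 /TR "c:\videos360\video360.exe" /SC ONCE /ST 14:30
schtasks /Create /S pc02 /TN Video360 /TR "c:\videos360\video360.exe" /SC ONCE /ST 14:30
rem later, move the task to a new start time instead of recreating it
schtasks /Change /S pc01 /TN Video360 /ST 15:00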

Debugging Postgresql 9.3 with Eclipse CDT and GDB

I come from a Java background and have used the debugger in Eclipse (Java).
I have installed PostgreSQL 9.3 as described in this link: https://wiki.postgresql.org/wiki/Working_with_Eclipse
The debugger works fine for the server (which waits for and accepts incoming client connections).
When I connect a client with $ psql test, does the server create a new thread for the client?
Is it possible to attach the debugger and set breakpoints in parser.c or executor.c in the PostgreSQL source files so that I can analyse how PostgreSQL queries are executed?
I have tried attaching the debugger and setting breakpoints in parser.c, then executed some queries from the client, but it doesn't stop at the breakpoints.
Thanks in advance.
When I connect a client with $ psql test, does the server create a new thread for the client?
No. The server creates a new postgres (or, on Windows, postgres.exe) process that communicates with the postmaster and other processes via shared memory and signals. PostgreSQL uses a shared-nothing-by-default multiprocessing architecture rather than a shared-everything-by-default multithreading architecture.
Is it possible to attach the debugger and set breakpoints in parser.c or executor.c in the PostgreSQL source files so that I can analyse how PostgreSQL queries are executed?
Yes, if your debugger can follow backend forks from the postmaster, or if you directly attach your debugger to the backend you wish to debug. The latter is more common unless you're debugging backend startup.
A typical workflow is:
Connect with psql
SELECT pg_backend_pid()
Connect the debugger to that process ID
Set breakpoints and watches as desired and resume execution
In the same psql session, run the query you want to debug backend execution of
Switch to the debugger when it traps and start debugging
This works on Linux with gdb and Windows with Visual Studio. Presumably it works with Eclipse too.
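For example, on Linux the gdb side of it might look roughly like this (the PID is whatever pg_backend_pid() returned; exec_simple_query is just one convenient place to break for query execution):
# in the psql session: SELECT pg_backend_pid();   -- suppose it returns 12345
gdb -p 12345
(gdb) break exec_simple_query
(gdb) continue
# now run your query in that same psql session; gdb stops inside exec_simple_query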
More at the developer FAQ.
It is possible to instead debug the postmaster and use gdb's multi-process debugging features with follow-fork-mode, detach-on-fork, schedule-multiple and non-stop options, but it's complicated to get right, noisy, and will be confusing if you're used to gdb's normal break behaviour. It's also a bit awkward because PostgreSQL uses signals that gdb also uses, so some hacks are required to work around that. See a blog post I wrote on the topic earlier.
I recommend keeping it simple and attaching using pg_backend_pid.

Serial Port Program crashes (no core dump)

I'm making a C project for university on Linux; it's basically a protocol for file transfer between two computers. The program works fine and sends many files without any problem, but there are one or two files I have tested where the program just crashes without any report, and I don't know how to debug the problem. Any help would be appreciated.
I also don't know whether I should post the code, because the two files (application and protocol) have over 1.5k lines of code.
In most Linux distributions core dumping is disabled by default (you can check with the resource limit "ulimit -c", which will be zero if it is disabled). To enable it, use "ulimit -c unlimited".
In addition, modern distributions such as Ubuntu install a custom crash-reporting program, named in /proc/sys/kernel/core_pattern, that intercepts the report/core file and forwards it to the distribution's developers. Make sure to change that setting during development so you can debug further.
You can also try valgrind or gdb live debugging to get more clarity about the problem.
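A minimal sequence for getting a usable core dump and a backtrace might look like this (the program and file names are just examples; the core_pattern step needs root):
ulimit -c unlimited                                        # allow core files in this shell
sudo sh -c 'echo core > /proc/sys/kernel/core_pattern'     # write plain "core" files in the cwd
./transfer badfile.bin                                     # reproduce the crash
gdb ./transfer core                                        # load the program together with its core dump
# inside gdb, "bt" prints the stack at the point of the crash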

Issue with program executed by crontab

I've written a program in C for an embedded system (Devkit8000, which is a clone of the well-known BeagleBoard) running Angstrom Linux.
The program creates a couple of threads; one of them is responsible for taking pictures with a camera connected to the board, and right now the second thread only moves those images to another path. The program should run during the whole day, and the only way to stop it is to send it a signal.
I edited the crontab to launch the program at a specific hour and to send a signal when it has to stop. The issue is that launching the program this way causes the process to be killed after some time running, but if I launch the program manually (from the command line), it works perfectly and is never stopped.
I have no idea why the behaviour differs between crontab and the command line. I've checked the system logs but didn't find anything useful. I've also read that the OS can kill a process if it is using too many resources, but it doesn't make sense that this would happen in only one of the two scenarios (crontab vs. manually)...
Any clue about what is happening?
Thank you in advance!
The main difference is that running a job through cron invokes a non-interactive non-login shell. The effect of that depends on the default shell for your user. For example, if you are using Korn shell or Bash then your .profile will not be executed, as it would on an interactive login shell. Korn shell 88 will execute .kshrc (the $ENV file) but ksh93 will not.
So, a good start might be to call your program from a script, after first "sourcing" your .profile file:
. $HOME/.profile
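For example, a small wrapper plus matching crontab entries could look like this (the script path, program name, and times are assumptions; capturing stdout/stderr to a log also helps show why the process dies):
#!/bin/sh
# /home/user/run_camera.sh - run the program with the same environment as a login shell
. $HOME/.profile
exec $HOME/bin/camera_program      # program name/path is an assumption

# matching crontab entries: start at 08:00, capture output, send SIGTERM at 20:00
0 8 * * * /home/user/run_camera.sh >> /home/user/camera.log 2>&1
0 20 * * * pkill -TERM -f camera_program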
Failing that... When you say that the process is "killed", do you get such a message? If so, then that sounds like someone sending SIGKILL, i.e. kill -9. If not, then maybe you could run strace or ltrace to find out at what point it dies.
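To see how far the process gets before dying, you could wrap it in strace from the cron job itself, for example (the log path is arbitrary):
strace -f -tt -o /tmp/camera.strace $HOME/bin/camera_program
# -f follows child threads/processes, -tt adds timestamps, -o writes the trace to a file;
# the tail of the trace shows the last system calls and any fatal signal before the process died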

Is it possible to reduce the startup time of MacRuby scripts which use the ScriptingBridge?

I would like to use MacRuby with ScriptingBridge instead of AppleScript to control Mac applications that support AppleScript. I used to do this using appscript, which is effectively deprecated, hence the move to MacRuby and ScriptingBridge.
The only problem I have is that the ScriptingBridge framework takes about a second to load, even on a fast machine with a fast SSD. For example, this simple script takes about 0.9 seconds to run, with almost all of the time spent loading the ScriptingBridge framework:
#!/usr/bin/env macruby
framework "ScriptingBridge"
textedit = SBApplication.applicationWithBundleIdentifier("com.apple.TextEdit")
textedit.activate
The equivalent osascript takes about 70 milliseconds to run, and py-appscript used to give similar times:
osascript -e 'tell application "TextEdit" to activate'
Is there any straightforward way to bundle/compile/shrink a MacRuby/ScriptingBridge script into something that starts more quickly?
I've tried using macrubyc to bundle the script into a standalone executable, but the resulting executable doesn't run much faster than the script when run normally, still taking about a second to run.
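For reference, the kind of invocation meant here would be something like the following (the exact macrubyc flags are from memory, so treat them as an assumption):
macruby textedit.rb                  # ~0.9 s per run: interpreter plus framework load
macrubyc textedit.rb -o textedit     # compile the script into a standalone executable
./textedit                           # still ~1 s: ScriptingBridge is loaded at runtime either way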
(My hunch is no, since a compilation step like macrubyc can't easily see which parts of the framework will be accessed by the script, making it hard to optimize.)
