Application crashes only when launched from inittab on BusyBox (C)

I'm writing an application for an embedded BusyBox system that accepts TCP connections, then sends messages out to all connected clients. It works perfectly when I telnet to the box and run the application from a shell prompt, but I have problems when it is launched from inittab. It launches, and I can connect to it with one client. It successfully sends one message out to that client, then crashes. It will also crash if I connect a second client before any messages are sent. Again, everything works perfectly if I launch it from a shell prompt instead.
These are the errors that come up in the log:
<11>Jan 1 00:02:49 tmmpd.bin: ERROR: recvMessage failed, recv IO error
<11>Jan 1 00:02:49 tmmpd.bin: Some other LTK TCP error 103. Closing connection 10
<11>Jan 1 00:02:49 tmmpd.bin: ERROR: recvMessage failed, recv IO error
<11>Jan 1 00:02:49 tmmpd.bin: Some other LTK TCP error 103. Closing connection 10
Any suggestions would be greatly appreciated!

I tested this a bit with BusyBox under qemu-arm, and I was able to start a script as user test running in the background.
I have created a new user "test":
buildroot-dir> cat etc/passwd
test:x:1000:1000:Linux User,,,:/home/test:/bin/sh
Created a simple testscript.sh:
target_system> cat /home/test/testscript.sh
#!/bin/sh
while :
do
    echo "still executing in bg"
    sleep 10
done
To my /etc/init.d/rcS I added a startup command for it:
#!/bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
/sbin/mdev -s
/bin/su test -c /home/test/testscript.sh& # < Added this
Now when I start the system, the script runs in the background, and when I grep for the process it shows as started by user test (the default root user shows as 0):
target_system> ps aux | grep testscript
496 test 0:00 sh -c home/test/testscript.sh
507 test 0:00 {testscript.sh} /bin/sh home/test/testscript.sh


How can I get the sender pid while receiving SIGSTOP?

As we know, SIGSTOP can't be caught, but my app keeps being stopped by it.
I get the report (that my app was stopped by SIGSTOP) from the WIFSTOPPED/WSTOPSIG macros in my monitor process.
So, how can I find out who sent it?
Use audit.
On CentOS 6:
Add a line to /etc/audit/audit.rules (19 is the signal number I need):
-a entry,always -F arch=b64 -S kill -k teste_kill -F a1=19
then restart the audit daemon:
service auditd restart
and finally show the log with:
ausearch -k teste_kill

Erlang process hanging when it tries to kill itself

I am running my Erlang program with this script:
#!/bin/sh
stty -f /dev/tty icanon raw
erl -pa ./ -run thing start -run init -noshell
stty echo echok icanon -raw
My Erlang module:
-module(thing).
-compile(export_all).

process(<<27>>) ->
    io:fwrite("Ch: ~w", [<<27>>]),
    exit(normal);
process(Ch) ->
    io:fwrite("Ch: ~w", [Ch]),
    get_char().

get_char() ->
    Ch = io:get_chars("p: ", 1),
    process(Ch).

start() ->
    io:setopts([{binary, true}]),
    get_char().
When I run ./invoke.sh, I press keys and see the characters print as expected. When I hit escape, the shell window stops responding (I have to close the window from the terminal). Why does this happen?
When you call exit/1, that only terminates the Erlang process; it doesn't stop the Erlang runtime system (BEAM). Since you're running without a shell, you get the behaviour of the window not responding. If you kill the beam process from your task manager or with pkill, you'll get your command line back.
An easy fix would be to replace
exit(normal)
with
halt() (see the documentation).

C: How to kill a process on another machine through SSH?

I am looking to run a program written in C on my machine and have it SSH into another machine to kill a program running on it.
Inside my program, I have attempted:
system("ssh username@machine.com && pkill sleep && exit");
which will cause my terminal to SSH into the remote machine, but it ends there. I have also tried:
execl("ssh","ssh","username@machine.com",NULL);
execl("pkill","pkill","sleep",NULL);
execl("exit","exit",NULL);
but it will not kill my dummy sleep process I have running.
I can't seem to figure out what is wrong with my process.
Your second example won't do what you want, as it will execute each execl on the local machine. I.e. it will:
Execute ssh username@machine.com
Execute pkill
Execute exit
But actually, unless you surround these with fork, the first execl (if it succeeds at all) will replace the current process, meaning the second and third ones never get called.
So, let's work out why the first (more hopeful) example doesn't work.
This should do the equivalent of:
/bin/sh -c 'ssh username@machine.com && pkill sleep && exit'
The && exit is superfluous. And you want the pkill to run on the remote machine. Therefore you want something that does:
/bin/sh -c 'ssh username@machine.com pkill sleep'
(note the absence of && so the pkill is run on the remote machine).
That means you want:
system("ssh username@machine.com pkill sleep");
If that doesn't work, check that the /bin/sh -c command above works on its own. Does ssh ask for a password, for instance? If so, it won't work from system(); you will need to use public-key authentication.
One can always run, over ssh, the command:
kill $(pgrep -o -f 'command optional other stuff')
Get your remote process to handle SIGTERM, where it can do its cleanup (including killing any processes it has started).
See man pgrep for what -o and -f do; -f is important to correctly target the signal.
pgrep returns the pid with a trailing \n, but this does not need to be stripped off before passing it to kill.

When I run a GDB/MI session with a command file, can the commands in the command file be parsed as MI commands?

I use ngdbmi (a Node.js package that spawns a GDB/MI child process) to control GDB, but sometimes GDB throws a timeout error, and the RSP log is completely worthless.
I suspected that ngdbmi or GDB/MI had bugs, so I first wrote a command file to test GDB/MI directly, like:
$ sparc-rtems-gdb -i mi -x command_file
The test passes, but I have a question:
The command file consists of commands like target remote :65535, not commands like -target-select remote localhost:65535 (I tried the latter, but gdb -i mi -x command_file doesn't recognize it). Therefore I can't tell whether, when running sparc-rtems-gdb -i mi -x command_file, GDB parses the commands as MI commands or just executes them as console (CLI) commands. (I suspected GDB/MI had bugs while the CLI was OK, but now I'm not sure.)

Stream a continuously growing file over tcp/ip

I'm working on a project where a piece of hardware produces output that is continuously written to a text file.
What I need to do is stream that file, as it's being written, over a simple TCP/IP connection.
I'm currently trying to do that with plain netcat, but netcat only sends the part of the file that had been written at the time of execution. It doesn't continue to send the rest.
Right now I have a server listening to netcat on port 9000 (simply for test-purposes):
netcat -l 9000
And the send command is:
netcat localhost 9000 < c:\OUTPUTFILE
So in my understanding netcat should actually be streaming the file, but it simply stops once everything that existed at the start of execution has been sent. It doesn't kill the connection, but it stops sending new data.
How do I get it to stream the data continuously?
Try:
tail -F /path/to/file | netcat localhost 9000
try:
tail /var/log/mail.log -f | nc -C xxx.xxx.xxx.xxx 9000
try nc:
# tail follows the file, grep keeps only the lines containing TEXT, then nc streams them
# see the nc documentation: -l creates a server, -k keeps it listening after a client disconnects, waiting for further clients
tail -f /output.log | grep "TEXT" | nc -l -k 2000
