Using a MATLAB Script to Run a Remote Linux Program via Telnet - C

I have a C program on Linux Fedora 14, and I am trying to run it remotely from a different PC using MATLAB via telnet. Right now all I can do is call PuTTY from MATLAB to get a terminal on the Linux machine and run the program through that remote terminal. That is useless for my purposes, because I can't automate the MATLAB script to call the program repeatedly and read values back.
To illustrate my situation, say I have a program Hello as follows:
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc > 1) {
        printf("Hello %s\n", argv[1]);
        printf("result is %d\n", argc);
    }
    return 0;
}
I want a MATLAB script that can run this program from a remote PC, pass it a name, and read the result back, multiple times. But all I have now is calling system('C:\Putty\putty.exe <ip_address> -username -password') from MATLAB to get a remote terminal on the Linux machine and then manually running ./hello <name>. How can I run the whole thing from MATLAB directly over telnet (with or without PuTTY, it doesn't matter) and read the response back?
Thanks.

Related

X11 DefaultRootWindow Segfault Only When Program is Run by Systemctl

I'm new to both Linux and C development, and I'm trying to take a screenshot in C with the X11 libs.
If I compile and run my program normally, the screenshot is taken with no issues. But if I run my program as a service, like
sudo systemctl start screenshot
The program fails. Both the logs and analyzing the coredump with GDB only say
Program terminated with signal SIGSEGV, Segmentation fault.
I have set up manual logging in my code:
#include <stdio.h>
#include <X11/Xlib.h>

int main(int argc, char **argv)
{
    FILE *fp = fopen("log.txt", "w");
    setvbuf(fp, NULL, _IONBF, 1024);
    fputs("2", fp);
    Display *display = XOpenDisplay(NULL);
    fputs("5", fp);
    Window root = DefaultRootWindow(display);
    fputs("6", fp);
When run as a service, log.txt contains the sequence 25, so the crash happens in the DefaultRootWindow call. If run from a terminal like ./screenshot, the program terminates normally.
Any hints on finding the cause of the issue would be appreciated.
David's suggestion to check whether display is NULL, plus some searching, revealed that the issue is that the program can't open the display when running as a service: XOpenDisplay returns NULL, and DefaultRootWindow then dereferences that null pointer.
Based on this Question: https://unix.stackexchange.com/questions/537628/error-cannot-open-display-on-systemd-service-which-needs-graphical-interface
Setting Environment in the systemd service file as
Environment=DISPLAY=:0.0
Environment=XAUTHORITY=/home/<username>/.Xauthority
resolved the problem and the service runs without issues.
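Putting those two Environment lines into the unit, a minimal sketch of the whole service file might look like the following (the binary path /usr/local/bin/screenshot and the unit name are assumptions, not from the original):

```ini
# /etc/systemd/system/screenshot.service  (hypothetical path and name)
[Unit]
Description=Take a screenshot via X11
After=graphical.target

[Service]
# point the service at the user's running X session
Environment=DISPLAY=:0.0
Environment=XAUTHORITY=/home/<username>/.Xauthority
ExecStart=/usr/local/bin/screenshot

[Install]
WantedBy=graphical.target
```

After editing the unit, run systemctl daemon-reload before restarting the service so systemd picks up the new environment.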

Can lldb inspect what has been written to a file, or the data in an IPC mechanism it has generated/used, at a breakpoint?

Say with this simple code:
#include <stdio.h>

int main(int argc, char **argv)
{
    printf("Hello World!\n");
    return 0;
}
After stepping over printf("Hello World!\n");, perhaps there's a command to print that "Hello World!\n" has been written to STDOUT.
And after return 0, perhaps there's a command to see the exit code generated, and it would show 0.
Are there such commands or similar in lldb?
LLDB prints the exit status when a process exits:
(lldb) run
Process 76186 launched: '/tmp/a.out' (x86_64)
Process 76186 exited with status = 10 (0x0000000a)
and you can also access it with the SB APIs:
(lldb) script lldb.process.GetExitStatus()
10
lldb doesn't have any special knowledge about all the ways a program might read or write data to a pipe, file handle, pty, etc. It also doesn't know how to interpose on file handles and tee off the output. There's no particular reason it couldn't; nobody has added that to date.
So you would have to build this yourself. If you know the API your code uses to read and write, you could use breakpoints to observe that, though it might get slow if the program reads and writes a lot.
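As a rough sketch of the breakpoint approach, assuming the program funnels output through libc's write on x86_64 (where, per the SysV ABI, the arguments land in rdi, rsi, and rdx), something like this could show each buffer as it goes out:

```
(lldb) breakpoint set -n write
(lldb) run
# when the breakpoint hits, the arguments are still in the
# registers: rdi = fd, rsi = buf, rdx = count
(lldb) register read rdi rsi rdx
(lldb) memory read $rsi
(lldb) continue
```

This observes calls into write, not the kernel's view of the file descriptor, so output produced any other way (mmap'd files, vectored I/O via writev, etc.) would need its own breakpoints.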

How to make a C program run as a daemon in Ubuntu?

Hi, I am new to the Linux environment, and I am trying to create a daemon process.
#include <stdio.h>

int sum(int a, int b)
{
    return a + b;
}

int main(void)
{
    int a = 10, b = 10, c;
    c = sum(a, b);
    printf("%d\n", c);
    return 0;
}
I want to run it as a daemon process. How can I do this? Any help would be appreciated. Thank you.
A daemon generally doesn't use its standard input and output streams, so it is unclear how your program could be run as a daemon. And a daemon program usually doesn't have any terminal, so it cannot use functions like clrscr. Read also the tty demystified page, and daemon(7).
I recommend reading some good introduction to Linux programming, like the old freely downloadable ALP (or something newer). We can't explain all of it here, and you need to read an entire book. See also intro(2) and syscalls(2).
I also recommend reading more about OSes, e.g. the freely available Operating Systems: Three Easy Pieces textbook.
You could use the daemon(3) function in your C program to run it as a daemon (but then, you are likely to not have any input and output). You may want to log messages using syslog(3).
You might consider job control facilities of your shell. You could run your program in the background (e.g. type myprog myarg & in your interactive shell). You could use the batch command. However neither background processes nor batch jobs are technically daemons.
Perhaps you want to code some ONC-RPC, JSONRPC, or Web API server and client. You'll find libraries for that. See also pipe(7) and socket(7).
(take several days or several weeks to read much more)
First find out what the properties of a daemon process are. To my knowledge, a daemon process:
has no controlling parent (after the fork, it is reparented to init);
is itself a session leader;
has its working directory changed to the root directory, so it doesn't keep any filesystem busy that might need to be unmounted;
has a file-mode creation mask of zero;
has no controlling terminal, with the inherited terminal descriptors closed.
Implement the code with the above properties in mind:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    pid_t pid = fork();
    if (pid != 0) {
        exit(0);        /* parent exits, so the child is reparented to init */
    }
    /* child: becomes the daemon */
    setsid();           /* start a new session, detaching from the terminal */
    chdir("/");         /* root never gets unmounted */
    umask(0);           /* clear the file-mode creation mask */
    close(0);           /* close the inherited stdin/stdout/stderr */
    close(1);
    close(2);
    /* do the daemon's work here; the standard streams are closed now,
       so log via syslog(3) or a file instead of printf */
    while (1) {
        sleep(1);
    }
    return 0;
}
Or you can go through the man page of daemon(3):
int daemon(int nochdir, int noclose);
I hope it helps.
Instead of writing the code to make the C program a daemon, I would go with an already mature tool like supervisor:
http://supervisord.org/
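With supervisor, daemonizing is handled for you; you only declare the program in a config file. A minimal sketch (the file path and binary path /usr/local/bin/sum are assumptions for illustration):

```ini
; /etc/supervisor/conf.d/sum.conf  (hypothetical path)
[program:sum]
command=/usr/local/bin/sum
autostart=true
autorestart=true
stdout_logfile=/var/log/sum.log
```

Then supervisorctl reread and supervisorctl update pick up the new program, and supervisor keeps it running and captures its output.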
I think the below will work:
screen cmd arg1 arg2
You can also try
nohup cmd arg1

GDB freezes when shell spawns

I have a simple C program:
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc > 1)
        execve(argv[1], &argv[1], NULL);
    return 0;
}
If I run gdb --tui myprogram and spawn a shell with the command run "/bin/sh", then gdb freezes, and I can only terminate it with Ctrl-C.
My purpose is to execute shell commands from within gdb (I have a buffer-overflow homework assignment).
Is there a way to use the shell from within gdb?
EDIT
I solved the problem by removing the --tui option.
Look into using gdbserver. It helpfully disassociates the gdb session from the binary's terminal. I use it to debug ncurses text UIs, for instance.
On term1:
$ gdbserver :2345 /path/to/my/program
On term2:
$ gdb -q /path/to/my/program
> target remote localhost:2345
> break ......
> continue
When you run gdb in term2, do it from the directory where the source lives.
Also, once you know how to do this, you can debug machines over the network. You can even debug, from an x86 box, a process running on a remote ARM board if you have the right tools in place. So this is another tool to add to the toolbox.
Good luck.

Dump core if SIGSEGV (in C)?

How can I dump core when my program receives the SIGSEGV signal? (The server that runs my program has very limited permissions, so core dumps are disabled by default.)
I have written the following using gcore, but I would like to use C functions instead. Can I somehow catch the core and write it to a folder somewhere?
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

void segfaulthandler(int parameter)
{
    char gcore[50];
    sprintf(gcore, "gcore -s -c \"core\" %u", (unsigned)getpid());
    system(gcore);
    exit(1);
}

int main(void)
{
    signal(SIGSEGV, segfaulthandler);
}
Unless there's a hard limit preventing you, you could use setrlimit(RLIMIT_CORE, ...) to increase the soft limit and enable core dumps; this corresponds to running ulimit -c in the shell.
On Linux, you typically can do:
$ ulimit -c unlimited
The resulting core file will be written in the current working directory of the process when the signal is received.
