Format disk and create partition in C on LynxOS

Can you explain to me how I can write a small program in C that makes a system call to format the disk and create a new partition?
The O/S is LynxOS.

Which commands would you execute at the shell?
Superficially, you could use some variant on this:
#include <stdlib.h>

static const char *cmds[] =
{
    "command 1 with options",
    "command 2 with different options",
    0,
};

int main(void)
{
    int i;
    for (i = 0; cmds[i] != 0; i++)
        if (system(cmds[i]) != 0)
            exit(EXIT_FAILURE);
    return EXIT_SUCCESS;
}
I assume that the commands will provide appropriate diagnosis of any problems.
If you need to control the arguments, then you have more work to do.
The main caveat is "is this the disk that the O/S is running on?", because if so, the chances are that formatting that disk will stop the programs from running successfully.
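If you do need to control the arguments at run time, a minimal sketch might build each command with snprintf() before handing it to system(). The command name and device path below are placeholders, not real LynxOS utilities:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char cmd[256];
    const char *device = "/dev/sd0";   /* placeholder device path */

    /* Build the command string with run-time arguments, then run it. */
    snprintf(cmd, sizeof(cmd), "format_tool %s", device);
    if (system(cmd) != 0)
        exit(EXIT_FAILURE);
    return EXIT_SUCCESS;
}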

Related

Make a program to run another program in C

I have a program that works like this:
program1.exe program2.exe
I need to make it run like this:
%USERPROFILE%\program1.exe program2.exe
How can that be done in C?
From what I can see, you're using Microsoft Windows.
There are (at least) two answers to your question: a simple one, and one tied to the Windows operating system interface, usually called the Win32 API.
Let's use the simple one. If you prefer more control over the execution of the second program, please comment.
#include <stdio.h>  /* printf() */
#include <stdlib.h> /* system() */

int main(int argc, char* const* argv)
{
    int rv;
    if (argc < 2) {
        printf("Please inform the name of the program to execute.\n");
        return 1;
    }
    rv = system(argv[1]);
    printf("Program execution returned %d\n", rv);
    return 0;
}
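If you do want the Win32 route instead, a minimal sketch using CreateProcess might look like this (error handling kept to a minimum; see the CreateProcess documentation for the full story):
#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    DWORD exit_code = 0;

    if (argc < 2) {
        printf("Please inform the name of the program to execute.\n");
        return 1;
    }

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* argv[1] is passed as the command line; CreateProcess may modify it. */
    if (!CreateProcess(NULL, argv[1], NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        printf("CreateProcess failed (%lu)\n", GetLastError());
        return 1;
    }

    /* Wait for the child and report its exit code, like system() does. */
    WaitForSingleObject(pi.hProcess, INFINITE);
    GetExitCodeProcess(pi.hProcess, &exit_code);
    printf("Program execution returned %lu\n", exit_code);

    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}
Unlike system(), this does not involve the command interpreter, and it gives you the child's process handle for finer-grained control.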

How to properly call an executable from a C program at runtime?

I have a C application, one of whose jobs is to call an executable file. That file has performance measurement routines inserted during compilation, at the level of intermediate code. It can measure time or L1/L2/L3 cache misses. In other words, I have modified the LLVM compiler to insert a call to that function, printing the result to stdout, for any compiled program.
Now, as I mentioned at the beginning, I would like to execute the program (with this result returned to stdout) from a separate C application and save that result. The way I'm doing it right now is:
#include <stdio.h>   /* printf(), popen(), fgets(), pclose() */
#include <stdlib.h>  /* exit() */
#include <string.h>  /* strcat() */

void executeProgram(const char* filename, char* time) {
    printf("Executing selected program %s...\n", filename);
    char filePath[100] = "/home/michal/thesis/Drafts/output/";
    strcat(filePath, filename);
    FILE *fp;
    fp = popen(filePath, "r");
    char str[30];
    if (fp == NULL) {
        printf("Failed to run command\n");
        exit(1);
    }
    while (fgets(str, sizeof(str) - 1, fp) != NULL) {
        strcat(time, str);
    }
    pclose(fp);
}
where filename is the name of the compiled executable to run. The result is saved to the time string.
The problem is that the results I'm getting are quite different and unstable compared to those returned by simply running the executable 'by hand' from the command line (./test16). They look like:
231425
229958
230450
228534
230033
230566
231059
232016
230733
236017
213179
90515
229775
213351
229316
231642
230875
So they're mostly around 230000 us, with some occasional drops. The same executable, run from within the other application, produces:
97097
88706
91418
97970
97972
94597
95846
95139
91070
95918
107006
89988
90882
91986
90997
88824
129136
94976
102191
94400
95215
95061
92115
96319
114091
95230
114500
95533
102294
108473
105730
Note that it is the same executable that's being called. Yet the measured time it returns is different. The program that is being measured consists of a function call to a simple nested loop, accessing array elements. Here is the code:
#include "test.h"
#include <stdio.h>
float data[1000][1000] = {0};
void test(void)
{
int i0, i1;
int N = 80;
float mean[1000];
for (i0 = 0; i0 < N; i0++)
{
mean[i0] = 0.0;
for (i1 = 0; i1 < N; i1++) {
mean[i0] += data[i0][i1];
}
mean[i0] /= 1000;
}
}
I suspect that there is something wrong with the way the program is invoked in the code; maybe the process should be forked or something? Any ideas?
You didn't specify where exactly your time-measuring subroutines are inserted, so all I can really offer is guesswork.
The results seem to hint at the exact opposite: running the application from the shell is slower, so I wouldn't worry about the way you're starting the process from the C code. My guess would be that when you run your program from the shell, it's the terminal that's slowing you down. When you run the process from your C code, you pipe the output back to your 'starter' application, which is already waiting for input on the pipe.
As a side note, consider switching from strcat to something safer, like strncat.
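For instance, a bounded variant of the concatenation loop above might look like this (the dest_size parameter is a hypothetical addition; the caller would pass the capacity of the time buffer):
#include <string.h>

/* Append src to dest without overflowing dest, which holds at most
   dest_size bytes in total (including the terminating NUL). */
static void append_bounded(char *dest, size_t dest_size, const char *src)
{
    size_t used = strlen(dest);
    if (used + 1 < dest_size)
        strncat(dest, src, dest_size - used - 1);
}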

Linux application remote debugging without gdb

I wrote a Linux program as below:
#include <stdio.h>   /* printf() */
#include <stdlib.h>  /* exit() */
#include <unistd.h>  /* fork(), setsid(), sleep() */
#include <signal.h>  /* signal() */

int g_para1 = 10;
int g_para2 = 11;

void SetPara(int para1, int para2)
{
    g_para1 = para1;
    g_para2 = para2;
}

void DumpPara(void)
{
    printf("g_para1 = %d, g_para2 = %d\n", g_para1, g_para2);
}

/* Double-fork so the process detaches and keeps running in the background. */
void init(void)
{
    int pid;
    if ((pid = fork()) < 0)
        exit(1);
    else if (pid > 0)
        exit(0);
    setsid();
    if ((pid = fork()) < 0)
        exit(1);
    else if (pid > 0)
        exit(0);
}

int main(void)
{
    signal(SIGCHLD, SIG_IGN);
    init();
    while (1)
    {
        DumpPara();
        sleep(10);
    }
    return 0;
}
And then compile and run it in the shell:
gcc -o test test.c
./test
It will show the print "g_para1 = 10, g_para2 = 11" every 10 seconds.
My question is:
If I execute "SetPara 20, 30" in the shell, I want the print to show "g_para1 = 20, g_para2 = 30".
What should I do to make it work?
If I do nothing, it shows
SetPara 20,30
SetPara: command not found
First, an application (in C, compiled with debug info, e.g. with gcc -Wall -g) can be remotely debugged. If you have ssh available, you simply debug using gdb through a terminal connection via ssh. Otherwise, read the documentation of GDB; it has a chapter on remote debugging.
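For instance, gdbserver-based remote debugging typically looks like this (host name and port are illustrative):
# on the target machine
gdbserver :2345 ./test
# on the development machine
gdb ./test
(gdb) target remote targethost:2345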
Then, it looks like you wish to embed your application into a scripting language interpreter. Try lua, or GNU guile, or perhaps even python or ocaml. Notice that embedding your application inside an interpreter (or, symmetrically, putting an interpreter inside your application) is a heavy architectural decision and has major impacts on its design.
BTW, some Unix shells can be extended through plugins, e.g. zsh (with modules...). Plugins use dynamic loading facilities like dlopen(3); you might consider using them in your application, as sketched below.
But an ordinary shell can only start external programs, using fork(2) & execve(2).
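A minimal dlopen(3) sketch (the shared object name "./plugin.so" and the symbol "plugin_init" are hypothetical; link with -ldl):
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void (*init)(void);
    void *handle = dlopen("./plugin.so", RTLD_NOW);

    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* Look up a function exported by the plugin and call it. */
    init = (void (*)(void)) dlsym(handle, "plugin_init");
    if (init)
        init();
    dlclose(handle);
    return 0;
}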
BTW, you might consider making your application behave as a Web server by using some HTTP server library inside it, e.g. libonion. You could also consider remote procedure call techniques, e.g. JSON RPC.
Read also Advanced Linux Programming.
As for your improved question, just define a convention for the arguments of main: you could decide that invoking your program with the arguments --setpara 23 45 will call SetPara(23,45), e.g. with code like:
int main(int argc, char **argv)
{
    if (argc > 3 && !strcmp(argv[1], "--setpara")) {
        int x = atoi(argv[2]);
        int y = atoi(argv[3]);
        SetPara(x, y);
    }
    /* ... */
}
but you should do serious argument parsing, e.g. with getopt_long (and accept --help & --version). See this answer and that one.
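A hedged sketch of such getopt_long parsing follows; packing the two values into one "X,Y" option argument is an illustrative choice, and the SetPara stub stands in for the function from the question:
#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>

/* Stub standing in for the SetPara from the question. */
static void SetPara(int a, int b) { printf("SetPara(%d, %d)\n", a, b); }

int main(int argc, char **argv)
{
    static const struct option opts[] = {
        { "setpara", required_argument, NULL, 's' },
        { "help",    no_argument,       NULL, 'h' },
        { NULL, 0, NULL, 0 }
    };
    int c;

    while ((c = getopt_long(argc, argv, "s:h", opts, NULL)) != -1) {
        switch (c) {
        case 's': {
            int x, y;
            if (sscanf(optarg, "%d,%d", &x, &y) == 2)
                SetPara(x, y);
            break;
        }
        case 'h':
            printf("usage: %s [--setpara X,Y] [--help]\n", argv[0]);
            exit(EXIT_SUCCESS);
        }
    }
    return 0;
}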
If you want SetPara to be called elsewhere in your code, the arguments to main should trigger other behavior (e.g. initializing global data, initializing an embedded interpreter, etc.).
Notice that each process has its own address space in virtual memory. See also this answer.
PS: your question is very unclear; I tried to give various ways of approaching your issues, mostly by guessing. You should study, for inspiration at least, the source code of some existing free software related to your goals.

How to get the window size from Linux

Hi everyone. I am still new to programming, and I really need some help with an issue I'm facing. I'm trying to display a warning when the terminal size is below 80x24. For the record, my OS is Windows, but I'm using a virtual machine to run Linux because all the files are in Linux. When I run the file from a terminal, the warning displays correctly. The problem is when I try to run the file from Windows using PuTTY: the warning does not appear. I'm sure it's because the function I'm using can only read the Linux environment and not Windows. Can anyone help me, or point me in a direction on how to get the dimensions of the window? The files should all remain in Linux. I am using C.
Here are just some parts of the code, showing how the warning is displayed and how the dimensions are obtained.
// This is to display the warning
#include <cdk/cdk.h>  /* or <cdk.h>, depending on where CDK is installed */

int display_warning()
{
    CDKSCREEN *cdkscreen = 0;
    WINDOW *cursesWin = 0;
    char *mesg[5];
    char *buttons[] = {"[ OK ]"};
    CDKDIALOG *confirm;

    cursesWin = initscr();
    cdkscreen = initCDKScreen(cursesWin);
    initCDKColor();

    mesg[0] = "</2>The size of the window must be at least 80x24.";
    confirm = newCDKDialog(cdkscreen, CENTER, CENTER, mesg, 1, buttons,
                           A_REVERSE, TRUE, TRUE, FALSE);
    setCDKDialogBackgroundColor(confirm, "</2>");
    injectCDKDialog(confirm, TAB);
    activateCDKDialog(confirm, 0);
    if (confirm->exitType == vNORMAL) {
        destroyCDKDialog(confirm);
        destroyCDKScreen(cdkscreen);
        endCDK();
    }
    return 0;
}
// This is to get the dimensions
#include <sys/ioctl.h>  /* ioctl(), TIOCGSIZE/TIOCGWINSZ */

int get_terminal_size()
{
    int cols;
    int lines;
#ifdef TIOCGSIZE
    struct ttysize ts;
    ioctl(0, TIOCGSIZE, &ts);
    lines = ts.ts_lines;
    cols = ts.ts_cols;
#elif defined(TIOCGWINSZ)
    struct winsize ts;
    ioctl(0, TIOCGWINSZ, &ts);
    lines = ts.ws_row;
    cols = ts.ws_col;
#endif
    if ((lines <= 23) || (cols <= 79)) {
        display_warning();
    }
    return 0;
}
// then there is the main function, which I think is not necessary to include here
All comments and help are very much appreciated. I am a beginner in programming, so please excuse me if there are some basic things that I don't know.
Fikrie
The issue has nothing to do with PuTTY per se, and everything to do with SSH clients and pseudoterminals in general.
To avoid this issue, configure your PuTTY to use a pseudoterminal. (In the TTY panel, there is a "Don't allocate a pseudoterminal" checkbox. Make sure it is not checked.)
With ssh, you need to use the -t option to tell ssh to use a pseudoterminal.
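For example (the user name and host are illustrative):
ssh -t user@examplehost ./yourprogram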
Here is a simple example program you can use in Linux to obtain the terminal size. It does not require curses:
#include <unistd.h>
#include <sys/ioctl.h>
#include <errno.h>
#include <stdio.h>
static int get_size(const int fd, int *const rows, int *const cols)
{
struct winsize sz;
int result;
do {
result = ioctl(fd, TIOCGWINSZ, &sz);
} while (result == -1 && errno == EINTR);
if (result == -1)
return errno;
if (rows)
*rows = sz.ws_row;
if (cols)
*cols = sz.ws_col;
return 0;
}
int main(void)
{
int rows, cols;
if (!get_size(STDIN_FILENO, &rows, &cols) ||
!get_size(STDOUT_FILENO, &rows, &cols) ||
!get_size(STDERR_FILENO, &rows, &cols))
printf("%d rows, %d columns\n", rows, cols);
else
fprintf(stderr, "Terminal size is unknown.\n");
return 0;
}
The actual information is obtained using the TIOCGWINSZ TTY ioctl.
The pseudoterminal size is actually maintained by the kernel. If there is no pseudoterminal, just standard streams, there are no rows and columns; it's just a stream in that case. In particular, even tput lines and tput cols will fail then.
Many interactive command-line programs will refuse to work if there is no pseudoterminal. For example, top will report something like "TERM environment variable not set" or "top: failed tty get". Others will work, just not interactively; they'll output once only, but as if the terminal was infinitely tall and infinitely wide.
In summary, your application should recognize whether it is running in a pseudoterminal (with the terminal size known, curses support possible, and so on) or in stream mode (via SSH or PuTTY deliberately without a pseudoterminal, or perhaps just because inputs and outputs are all directed to/from files or some such).
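A minimal way to make that distinction is isatty(); this sketch only illustrates the check:
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* A (pseudo)terminal on both ends means size queries and curses are
       possible; otherwise we are in stream mode. */
    if (isatty(STDIN_FILENO) && isatty(STDOUT_FILENO))
        printf("Running on a terminal; TIOCGWINSZ should work.\n");
    else
        fprintf(stderr, "Stream mode; no rows or columns available.\n");
    return 0;
}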

Two different C programs are accessing a file

I have an application on the Linux platform which requires a server program to write data to a bin file continuously. At the same time, another program needs to read the written values. Should I be concerned if I am not locking the file during the read and write process?
You should be concerned. I assume you are sure that no other programs (than the two executables mentioned in your question) are accessing that file. You should indeed lock to serialize that access: use flock(2), or lockf(3), which uses fcntl(2).
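A minimal flock(2) sketch for the writer side (the file name and record are illustrative; the reader would take LOCK_SH around its reads in the same way):
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>

int main(void)
{
    int fd = open("data.bin", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd == -1) { perror("open"); return 1; }

    /* Take an exclusive lock for the duration of the write. */
    if (flock(fd, LOCK_EX) == -1) { perror("flock"); return 1; }
    if (write(fd, "record\n", 7) == -1)
        perror("write");
    flock(fd, LOCK_UN);

    close(fd);
    return 0;
}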
BTW, is the file read and written sequentially? Did you consider using some higher-level facility, e.g. GDBM, or some database like mariadb or postgresql or mongodb, etc.?
Everything depends on what your requirements are. Can you modify the server process? If so, you have endless possibilities. This is a well-studied problem: Interprocess Communication (see the wikipedia IPC article).
Otherwise, in my own test program, it seemed that no locking was necessary to have a producer and consumer operating on the same file. This is anecdotal evidence only; I make no guarantees.
Producer:
int main() {
    int fd = open("file", O_WRONLY | O_APPEND);
    const char *str = "str";
    const int str_len = strlen(str);
    int sum = 0;
    while (1) {
        sum += write(fd, str, str_len);
        printf("%d\n", sum);
    }
    close(fd);
}
Consumer:
int main() {
    int fd = open("file", O_RDONLY);
    char buf[10];
    const int buf_size = sizeof(buf);
    int sum = 0;
    while (1) {
        sum += read(fd, buf, buf_size);
        printf("%d\n", sum);
    }
    close(fd);
}
(Includes:)
#include <fcntl.h>   /* open() */
#include <stdio.h>   /* printf() */
#include <string.h>  /* strlen() */
#include <unistd.h>  /* read(), write(), close() */
This program assumes the "file" exists already.
Just to add to what has already been said here: check your OS documentation. In principle there should not be a problem when reading; if each read is atomic (i.e. no task switch during the operation), it should be OK. Also, the OS could have its own restrictions and locks, so be careful.
