I'm trying to implement the ls command in C with as many flags as possible, but I'm having trouble getting the correct Major and Minor numbers for device files. Here's an example of what I did.
> ls -l ~/../../dev/tty
crw-rw-rw- 1 root tty 5, 0 Nov 25 13:30
This is the normal ls command; as you can see, the Major is 5 and the Minor is 0.
My program shows the following:
Minor: 6
Major: 0
I'm still a beginner, so I don't really understand the issue here. This is what I have so far (the program is not identical to the ls command yet; it only shows information about a file).
#include <stdio.h>
#include <stdint.h>
#include <sys/stat.h>
#include <sys/sysmacros.h> /* major()/minor() with glibc */

int disp_file_info(char **argv)
{
    struct stat sb;

    if (stat(argv[1], &sb) == -1) {
        perror("stat");
        return -1;
    }
    printf("Inode: %ju\n", (uintmax_t)sb.st_ino);
    printf("Hard Links: %ju\n", (uintmax_t)sb.st_nlink);
    printf("Size: %jd\n", (intmax_t)sb.st_size);
    printf("Allocated space: %jd\n", (intmax_t)sb.st_blocks);
    printf("Minor: %u\n", minor(sb.st_dev));
    printf("Major: %u\n", major(sb.st_dev));
    printf("UID: %u\n", (unsigned)sb.st_uid);
    printf("GID: %u\n", (unsigned)sb.st_gid);
    return 0;
}
For now this only obtains certain information about a file. Everything matches the output of ls except for Minor and Major.
You are using st_dev, which is the device on which the file resides. You want st_rdev, which is the device the special file "is"/represents. (You should first check whether the file is a device node, though.)
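For illustration, here is a minimal sketch of that check and the corrected field (the helper name print_dev_numbers is mine, not part of the original program):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h> /* major()/minor() with glibc */

static void print_dev_numbers(const struct stat *sb)
{
    /* st_rdev describes the device node itself; st_dev is merely
       the device holding the filesystem the node lives on */
    if (S_ISCHR(sb->st_mode) || S_ISBLK(sb->st_mode)) {
        printf("Major: %u\n", major(sb->st_rdev));
        printf("Minor: %u\n", minor(sb->st_rdev));
    }
}

For /dev/tty this would print Major: 5 and Minor: 0, matching ls -l.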
I'm tasked with programming the linux cd command in C. I thought this would be fairly trivial by using the chdir() method, but my directories aren't changing. What's interesting is that the return status of chdir() is 0, not -1, meaning chdir() has not failed. Here are the two cases where I'm using chdir():
1.
char *dir = getenv("HOME"); // Here dir equals the home environment.
int ret = chdir(dir);
printf("chdir returned %d.\n", ret);
This prints chdir returned 0.
2.
int ret = chdir(dir); // Here dir equals the user's input.
printf("chdir returned %d.\n", ret);
This prints chdir returned 0 if the directory exists in my path.
Am I using chdir() wrong? I can't seem to find an answer for this anywhere. Any help would be much appreciated.
chdir() changes the working directory of the calling process only.
So when you have code like ...
#include <unistd.h>

int main(void) {
    // 1
    chdir("/"); // error handling omitted for clarity
    // 2
}
... and compile that into a program named example, and then run it in a shell:
$ pwd # 3
/home/sweet
$ ./example # 4
$ pwd # 5
/home/sweet
Then you have two processes in play:
the shell, which is where you entered pwd and ./example, and
./example, the process launched (by the shell) running your compiled program.
chdir() is called by your compiled program, not the shell, so it affects only the process running your program, not the shell.
So, at // 1 the working directory of your program (in the example run above) is /home/sweet, but at // 2 it is /, as set by the chdir() call. This doesn't affect the shell or the output of pwd # 5, though!
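To make that observable from inside the process itself, here is a minimal self-contained sketch using getcwd() (error handling omitted, as above):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[4096];

    printf("before: %s\n", getcwd(buf, sizeof buf)); /* e.g. /home/sweet */
    chdir("/");                                      /* the call from the example */
    printf("after:  %s\n", getcwd(buf, sizeof buf)); /* now / */
    return 0; /* the shell that launched us keeps its own cwd */
}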
I want to save all of the running processes' status from the /proc folder in a file. After reading some questions and answers here I think I should use the pstatus struct to determine which fields I want to save (correct me if I'm wrong?), but I don't know how I can efficiently loop through all of the running processes.
On Linux, process status is kept in the /proc/PID/status pseudo-file and represented in textual form (other OSes structure their procfs completely differently):
$ grep State /proc/self/status
State: R (running)
So you need a "parser" for that file:
#include <ctype.h>
#include <stdio.h>
#include <string.h>

void print_status(long tgid) {
    char path[40], line[100], *p;
    FILE *statusf;

    snprintf(path, sizeof path, "/proc/%ld/status", tgid);
    statusf = fopen(path, "r");
    if (!statusf)
        return;
    while (fgets(line, sizeof line, statusf)) {
        if (strncmp(line, "State:", 6) != 0)
            continue;
        /* Skip "State:" and the whitespace after it */
        p = line + 6;
        while (isspace((unsigned char)*p)) ++p;
        printf("%6ld %s", tgid, p);
        break;
    }
    fclose(statusf);
}
To read all processes you have to use opendir()/readdir()/closedir() and descend only into directories whose names are numeric (the others are sysctl variables, etc.):
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    DIR *proc = opendir("/proc");
    struct dirent *ent;
    long tgid;

    if (proc == NULL) {
        perror("opendir(/proc)");
        return 1;
    }
    while ((ent = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)*ent->d_name))
            continue; /* not a PID directory */
        tgid = strtol(ent->d_name, NULL, 10);
        print_status(tgid);
    }
    closedir(proc);
    return 0;
}
Alternatively, you may use the procps tools, which already implement this.
The shell snippet below invokes two C programs (find and cat) that do just that:
find /proc -maxdepth 2 -wholename '/proc/[0-9]*/status' | xargs cat
It's an old question, but still definitely a relevant one.
If you don't want to mess with parsing procfs yourself, you should definitely check out pfs. It's a library for parsing most of procfs, written in C++. (Disclaimer: I'm the author of the library.)
On a Linux system, every task (process or thread) has an entry under the /proc directory.
When enumerating the directories under /proc, you'll see a directory for each running process.
Under each directory you can find two files, stat and status, that include the current process status.
The possible statuses are described (almost correctly) under man proc(5):
(3) state %c
One of the following characters, indicating process state:
R Running
S Sleeping in an interruptible wait
D Waiting in uninterruptible disk sleep
Z Zombie
T Stopped (on a signal) or (before Linux 2.6.33) trace stopped
t Tracing stop (Linux 2.6.33 onward)
W Paging (only before Linux 2.6.0)
X Dead (from Linux 2.6.0 onward)
x Dead (Linux 2.6.33 to 3.13 only)
K Wakekill (Linux 2.6.33 to 3.13 only)
W Waking (Linux 2.6.33 to 3.13 only)
P Parked (Linux 3.9 to 3.13 only)
This is almost correct, because there's an additional possible value that is missing from this list:
I Idle
If you want to get the value from stat, the simplest way to go is probably fopen() and fscanf(). The format is described in detail under man proc(5) as well. Note: just be careful with the comm value format. The man page says it's %s, but it's actually much more complicated: it might include spaces or any other character, which could break your parser (one more reason to use a mature parsing library).
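For illustration, here is a minimal sketch that sidesteps the comm problem by scanning from the last ')' instead of trusting %s (it reads /proc/self/stat and extracts the state field, which is the first token after comm):

#include <stdio.h>
#include <string.h>

int main(void) {
    char line[512], state, *p;
    FILE *f = fopen("/proc/self/stat", "r");

    if (!f)
        return 1;
    if (fgets(line, sizeof line, f)) {
        /* comm may itself contain spaces and ')', so find the LAST ')' */
        p = strrchr(line, ')');
        if (p && sscanf(p + 1, " %c", &state) == 1)
            printf("state: %c\n", state); /* prints "state: R" */
    }
    fclose(f);
    return 0;
}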
If you want to get the value from status, you should probably open the file using std::ifstream or something like that, call std::getline until the line starts with State:, and then extract the value you want.
This is what I found when I was working on one of my course projects. Below is a C code block that prints information about an empty pipe, one not yet connected to any other process.
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

void print_info (struct stat *pipe_info);

int main (void)
{
    int pfd[2];
    struct stat pipe_info;

    if (pipe(pfd) == -1)
    {
        perror ("pipe");
        return -1;
    }
    /* stat both ends of the freshly created pipe */
    if (fstat (pfd[0], &pipe_info) < 0)
        perror ("fstat");
    print_info (&pipe_info);
    if (fstat (pfd[1], &pipe_info) < 0)
        perror ("fstat");
    print_info (&pipe_info);
    return 0;
}
void print_info (struct stat *pipe_info)
{
    /* casts fix the mismatched %d specifiers for the wider stat types */
    printf ("mode %o\n", (unsigned)pipe_info->st_mode);
    printf ("inode %ld\n", (long)pipe_info->st_ino);
    printf ("device %ld\n", (long)pipe_info->st_dev);
    printf ("minor device %ld\n", (long)pipe_info->st_rdev);
    printf ("num links %ld\n", (long)pipe_info->st_nlink);
    printf ("uid %ld\n", (long)pipe_info->st_uid);
    printf ("gid %ld\n", (long)pipe_info->st_gid);
    printf ("size %ld\n", (long)pipe_info->st_size);
    printf ("atime %ld\n", (long)pipe_info->st_atime);
    printf ("mtime %ld\n", (long)pipe_info->st_mtime);
    printf ("ctime %ld\n", (long)pipe_info->st_ctime);
    printf ("block size %ld\n", (long)pipe_info->st_blksize);
    printf ("blocks %ld\n", (long)pipe_info->st_blocks);
}
I compiled the source code on both a Linux machine and a Solaris machine. What I found was that on the Linux machine the number of links is 1, while on the Solaris machine the number of links for the pipe is 0. I am fairly new to the kernels of both systems and would like to know why the number of links is different on the two systems.
The SunOS 5.10 / Solaris 2.x manual says this about the st_nlink field:
st_nlink This field should be used only by administrative commands.
which I read as "this field has a nonsensical value".
Contrariwise, the value for Linux makes sense: the pipe has a link to the process that created it. I expect that st_nlink would equal 2 once the other side was connected to a (forked) process. The Linux fstat claims POSIX compliance, which is good. The Solaris man page I have makes no such claim.
If your underlying question is "how can I tell whether the far side of a pipe is connected?", there are two answers:
Your program should know whether it attached the far side.
You can try to write to the pipe and check for some combination of EAGAIN, EWOULDBLOCK, EPIPE, or the SIGPIPE signal.
Option 2 is problematic if the other side of the pipe is connected, because the probe data actually gets delivered. You can work around that if you can define a message that the real writer would never send and that the reader agrees to reject; a sketch follows below.
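A minimal sketch of option 2 under that workaround, assuming the reader discards a designated probe byte (0xFF here is illustrative); ignoring SIGPIPE makes a closed read end show up as EPIPE instead of killing the process:

#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Returns 1 if the read end is still open, 0 once a write gets EPIPE. */
static int farside_connected(int wfd)
{
    unsigned char probe = 0xFF; /* a byte the reader agrees to discard */

    signal(SIGPIPE, SIG_IGN);   /* turn SIGPIPE into an EPIPE error */
    if (write(wfd, &probe, 1) == -1 && errno == EPIPE)
        return 0;
    return 1;
}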
The default limit for the maximum number of open files on Mac OS X is 256 (ulimit -n) and my application needs about 400 file handles.
I tried to change the limit with setrlimit(), but even though the function executes correctly, I'm still limited to 256.
Here is the test program I use:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rlp;
    FILE *fp[10000];
    int i;

    getrlimit(RLIMIT_NOFILE, &rlp);
    /* rlim_t is wider than int; RLIM_INFINITY shows up as -1 here */
    printf("before %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);
    rlp.rlim_cur = 10000;
    setrlimit(RLIMIT_NOFILE, &rlp);
    getrlimit(RLIMIT_NOFILE, &rlp);
    printf("after %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);
    for (i = 0; i < 10000; i++) {
        fp[i] = fopen("a.out", "r");
        if (fp[i] == 0) {
            printf("failed after %d\n", i);
            break;
        }
    }
    return 0;
}
and the output is:
before 256 -1
after 10000 -1
failed after 253
I cannot ask the people who use my application to poke inside an /etc file or something. I need the application to do it by itself.
rlp.rlim_cur = 10000;
Two things.
1st. LOL. Apparently you have found a bug in Mac OS X's stdio. If I fix your program up (add error handling, etc.) and also replace fopen() with the open() syscall, I can easily reach the limit of 10000 (which is 240 fds below the OPEN_MAX limit of 10240 on my 10.6.3 system).
2nd. RTFM: man setrlimit. The case of max open files has to be treated specially with regard to OPEN_MAX.
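In practice that means clamping the requested limit to OPEN_MAX instead of asking for RLIM_INFINITY, which setrlimit() rejects for RLIMIT_NOFILE. A minimal sketch (the helper name is mine):

#include <limits.h>        /* OPEN_MAX */
#include <sys/resource.h>

/* Returns 0 on success; clamps rather than asking for the impossible. */
int raise_nofile_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
        return -1;
    /* RLIM_INFINITY is not accepted for RLIMIT_NOFILE on Mac OS X */
    rl.rlim_cur = rl.rlim_max > OPEN_MAX ? OPEN_MAX : rl.rlim_max;
    return setrlimit(RLIMIT_NOFILE, &rl);
}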
etresoft found the answer on the Apple discussion board:
The whole problem here is your printf() function. When you call printf(), you are initializing internal data structures to a certain size. Then, you call setrlimit() to try to adjust those sizes. That function fails because you have already been using those internal structures with your printf(). If you use two rlimit structures (one for before and one for after), and don't print them until after calling setrlimit, you will find that you can change the limits of the current process even in a command line program. The maximum value is 10240.
For some reason (perhaps binary compatibility), you have to define _DARWIN_UNLIMITED_STREAMS before including <stdio.h>:
#define _DARWIN_UNLIMITED_STREAMS
#include <stdio.h>
#include <sys/resource.h>
int main(void)
{
    struct rlimit rlp;
    FILE *fp[10000];
    int i;

    getrlimit(RLIMIT_NOFILE, &rlp);
    printf("before %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);
    rlp.rlim_cur = 10000;
    setrlimit(RLIMIT_NOFILE, &rlp);
    getrlimit(RLIMIT_NOFILE, &rlp);
    printf("after %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);
    for (i = 0; i < 10000; i++) {
        fp[i] = fopen("a.out", "r");
        if (fp[i] == 0) {
            printf("failed after %d\n", i);
            break;
        }
    }
    return 0;
}
prints
before 256 -1
after 10000 -1
failed after 9997
This feature appears to have been introduced in Mac OS X 10.6.
This may be a hard limitation of your libc. Some versions of Solaris have a similar limitation because they store the fd as an unsigned char in the FILE struct. If this is the case for your libc as well, you may not be able to do what you want.
As far as I know, things like setrlimit only affect how many files you can open with open() (fopen() is almost certainly implemented in terms of open()). So if this limitation is at the libc level, you will need an alternate solution.
Of course you could always not use fopen and instead use the open system call, available on just about every variant of Unix.
The downside is that you have to use read and write instead of fread and fwrite, and those don't do things like buffering (that's all done in your libc, not by the OS itself). So it could end up being a performance bottleneck.
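A minimal sketch of that route, assuming the descriptor limit has already been raised with setrlimit() and reusing a.out as the test file from the question:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[400];
    int i;

    for (i = 0; i < 400; i++) {
        fds[i] = open("a.out", O_RDONLY); /* no FILE struct involved */
        if (fds[i] == -1) {
            printf("failed after %d\n", i);
            return 1;
        }
    }
    /* raw, unbuffered I/O happens via read()/write() on the fds */
    for (i = 0; i < 400; i++)
        close(fds[i]);
    return 0;
}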
Can you describe the scenario that requires 400 files open simultaneously? I am not saying that there is no case where that is needed. But if you describe your use case more clearly, perhaps we can recommend a better solution.
I know it sounds like a silly question, but do you really need 400 files open at the same time?
By the way, are you running this code as root?
macOS doesn't allow us to change the limit as easily as many other Unix-based operating systems do. We have to create two files
/Library/LaunchDaemons/limit.maxfiles.plist
/Library/LaunchDaemons/limit.maxproc.plist
describing the max proc and max files limits. The ownership of the files needs to be changed to root:wheel. A sketch of one of them follows.
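For illustration, limit.maxfiles.plist would look roughly like this (the 65536 values are placeholders, not recommendations):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>65536</string>
      <string>65536</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>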
This alone doesn't solve the problem: recent versions of macOS ship with System Integrity Protection enabled, and we need to disable it. To do that, reboot the Mac into recovery mode and disable it from the terminal there using csrutil.
Now we can change the max open file handle limit from the terminal itself (even in normal boot mode).
This method is explained in detail at http://blog.dekstroza.io/ulimit-shenanigans-on-osx-el-capitan/ and works for OS X El Capitan and macOS Sierra.
I'm writing a program to implement Dinic's max-flow algorithm over a network. The networks can be written either by hand or loaded from a file using stdin redirection.
I've been able to use gdb to debug the program with small files (around 30 lines), but I'm having trouble when I try to debug the program with bigger files (>1000 lines). The code itself is this:
uint32_t read_lines = 0;
uint32_t n1, n2, c; /* edge endpoints and capacity */
int err;

while(!feof(stdin))
{
    err = fscanf(stdin, "%u %u %u\n", &n1, &n2, &c);
    if (err != 3)
    {
        printf("read_lines=%u\n", read_lines); /* for debugging purposes */
    }
    read_lines += 1;
    /* write to debug file (a FILE* opened earlier) */
    fprintf(debug, "line %u: %u %u %u\n", read_lines, n1, n2, c);
}
If I run the program without gdb, it runs; not OK, since it generates a segmentation fault (which is the reason I'm trying to use gdb in the first place), but it does get through this "parsing" part of the input file (and writes it to the output debugging file).
However, if I type:
gdb --args ./dinic --mode=NUM --verbose=LOW
(gdb) b 61
(gdb) run < tests/numterc.in
I get:
(gdb) Program exited with 01 code.
and when I open the debugging file it's about 2000 lines long, when it should be at most 1000, the length of the input file.
I repeat: this happens with "big" files; it works correctly with small ones.
The question would be, am I missing something when using gdb, or is this a gdb bug?
OK, I finally found a workaround. It seems that the --args option isn't working well, at least in my case. I have gdb 6.8-debian on Debian 5.0.4.
What I had to do was run gdb without the --args option:
$ gdb ./dinic
(gdb) b 61
(gdb) run --mode=NUM --verbose=LOW < tests/numterc.in
and it worked well. Maybe someone can find this useful.
I had the same problem and came up with the same solution of specifying the arguments in run. The --args option can only pass arguments; it cannot do redirection of stdin, which is usually (in a non-debug context) set up for you by the shell invoking the command. In a debug session your command is invoked by gdb, where both the argument list and redirections are specified by the value of the args variable. By using the --args option you initialize this variable (and the program file to debug as well). Just do
(gdb) show args
and this should show --mode=NUM --verbose=LOW in your case, but no redirection. So you specify the redirection with run, which overrides args! That leaves you two options:
Specify also the redirection in args:
(gdb) set args --mode=NUM --verbose=LOW < tests/numterc.in
Specify the redirection when invoking run, as in the workaround above:
(gdb) run --mode=NUM --verbose=LOW < tests/numterc.in