This is a fairly simple application that creates a lightweight process (thread) with a clone() call.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <stdlib.h>
#include <time.h>
#define STACK_SIZE 1024*1024
int func(void* param) {
    printf("I am func, pid %d\n", getpid());
    return 0;
}

int main(int argc, char const *argv[]) {
    printf("I am main, pid %d\n", getpid());
    void* ptr = malloc(STACK_SIZE);

    printf("I am calling clone\n");
    int res = clone(func, ptr + STACK_SIZE, CLONE_VM, NULL);
    // works fine with sleep() call
    // sleep(1);
    if (res == -1) {
        printf("clone error: %d", errno);
    } else {
        printf("I created child with pid: %d\n", res);
    }

    printf("Main done, pid %d\n", getpid());
    return 0;
}
Here are the results:
Run 1:
➜ LFD401 ./clone
I am main, pid 10974
I am calling clone
I created child with pid: 10975
Main done, pid 10974
I am func, pid 10975
Run 2:
➜ LFD401 ./clone
I am main, pid 10995
I am calling clone
I created child with pid: 10996
I created child with pid: 10996
I am func, pid 10996
Main done, pid 10995
Run 3:
➜ LFD401 ./clone
I am main, pid 11037
I am calling clone
I created child with pid: 11038
I created child with pid: 11038
I am func, pid 11038
I created child with pid: 11038
I am func, pid 11038
Main done, pid 11037
Run 4:
➜ LFD401 ./clone
I am main, pid 11062
I am calling clone
I created child with pid: 11063
Main done, pid 11062
Main done, pid 11062
I am func, pid 11063
What is going on here? Why is the "I created child" message sometimes printed several times?
Also, I noticed that adding a delay after the clone call "fixes" the problem.
You have a race condition; that is, you don't have the implied thread safety of stdio.
The problem is even more severe: you can also get duplicate "func" messages.
The problem is that using clone does not come with the same guarantees as pthread_create; that is, you do not get the thread-safe variants of printf.
I don't know for sure, but IMO the language about stdio streams and thread safety, in practice, only applies when using pthreads.
So, you'll have to handle your own interthread locking.
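To make that concrete, here is a minimal sketch of my own (not tested against the original race, and the helper names lock_output/say are made up for the example): because CLONE_VM shares the address space, parent and child can share a hand-rolled lock, and both sides can bypass stdio entirely by calling write(2).

#include <string.h>
#include <unistd.h>

static char output_lock;                    /* 0 = free, nonzero = held */

static void lock_output(void)
{
    /* GCC/Clang atomic builtin; spins until the flag is acquired */
    while (__atomic_test_and_set(&output_lock, __ATOMIC_ACQUIRE))
        ;                                   /* busy-wait */
}

static void unlock_output(void)
{
    __atomic_clear(&output_lock, __ATOMIC_RELEASE);
}

static void say(const char *msg)            /* callable from parent and clone child */
{
    lock_output();
    write(STDOUT_FILENO, msg, strlen(msg));
    unlock_output();
}

The point is only that the child sticks to plain computation and direct syscalls; it never touches the FILE machinery that it shares with the parent.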
Here is a version of your program recoded to use pthread_create. It seems to work without incident:
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <stdlib.h>
#include <time.h>
#include <pthread.h>
#define STACK_SIZE 1024*1024
void *func(void* param) {
    printf("I am func, pid %d\n", getpid());
    return (void *) 0;
}

int main(int argc, char const *argv[]) {
    printf("I am main, pid %d\n", getpid());
    void* ptr = malloc(STACK_SIZE);

    printf("I am calling clone\n");

    pthread_t tid;
    pthread_create(&tid, NULL, func, NULL);
    //int res = clone(func, ptr + STACK_SIZE, CLONE_VM, NULL);
    int res = 0;

    // works fine with sleep() call
    // sleep(1);
    if (res == -1) {
        printf("clone error: %d", errno);
    } else {
        printf("I created child with pid: %d\n", res);
    }

    pthread_join(tid, NULL);

    printf("Main done, pid %d\n", getpid());
    return 0;
}
Here is a test script I've been using to check for errors [it's a little rough, but should be okay]. Run it against your version and it will abort quickly. The pthread_create version seems to pass just fine.
#!/usr/bin/perl
# clonetest -- clone test
#
# arguments:
# "-p0" -- suppress check for duplicate parent messages
# "-c0" -- suppress check for duplicate child messages
# 1 -- base name for program to test (e.g. for xyz.c, use xyz)
# 2 -- [optional] number of test iterations (DEFAULT: 100000)
master(@ARGV);
exit(0);

# master -- master control
sub master
{
    my(@argv) = @_;
    my($arg,$sym);

    while (1) {
        $arg = $argv[0];
        last unless (defined($arg));
        last unless ($arg =~ s/^-(.)//);
        $sym = $1;

        shift(@argv);

        $arg = 1
            if ($arg eq "");
        $arg += 0;

        ${"opt_$sym"} = $arg;
    }

    $opt_p //= 1;
    $opt_c //= 1;
    printf("clonetest: p=%d c=%d\n",$opt_p,$opt_c);

    $xfile = shift(@argv);
    $xfile //= "clone1";
    printf("clonetest: xfile='%s'\n",$xfile);

    $itermax = shift(@argv);
    $itermax //= 100000;
    $itermax += 0;
    printf("clonetest: itermax=%d\n",$itermax);

    system("cc -o $xfile -O2 $xfile.c -lpthread");
    $code = $? >> 8;
    die("master: compile error\n")
        if ($code);

    $logf = "/tmp/log";

    for ($iter = 1; $iter <= $itermax; ++$iter) {
        printf("iter: %d\n",$iter)
            if ($opt_v);
        dotest($iter);
    }
}

# dotest -- perform single test
sub dotest
{
    my($iter) = @_;
    my($parcnt,$cldcnt);
    my($xfsrc,$bf);

    system("./$xfile > $logf");

    open($xfsrc,"<$logf") or
        die("dotest: unable to open '$logf' -- $!\n");

    while ($bf = <$xfsrc>) {
        chomp($bf);

        if ($opt_p) {
            while ($bf =~ /created/g) {
                ++$parcnt;
            }
        }

        if ($opt_c) {
            while ($bf =~ /func/g) {
                ++$cldcnt;
            }
        }
    }

    close($xfsrc);

    if (($parcnt > 1) or ($cldcnt > 1)) {
        printf("dotest: fail on %d -- parcnt=%d cldcnt=%d\n",
            $iter,$parcnt,$cldcnt);
        system("cat $logf");
        exit(1);
    }
}
UPDATE:
Were you able to recreate OP's problem with clone?
Absolutely. Before I created the pthreads version, in addition to testing OP's original version, I also created versions that:
(1) added setlinebuf to the start of main
(2) added fflush just before the clone and __fpurge as the first statement of func
(3) added an fflush in func before the return 0
Version (2) eliminated the duplicate parent messages, but the duplicate child messages remained.
If you'd like to see this for yourself, download OP's version from the question, my version, and the test script. Then, run the test script on OP's version.
I posted enough information and files so that anyone can recreate the problem.
Note that due to differences between my system and OP's, I couldn't at first reproduce the problem on just 3-4 tries. So, that's why I created the script.
The script does 100,000 test runs and usually the problem will manifest itself within 5000-15000.
I can't recreate OP's issue, but I don't think the printfs are actually a problem.
glibc docs:
The POSIX standard requires that by default the stream operations are
atomic. I.e., issuing two stream operations for the same stream in two
threads at the same time will cause the operations to be executed as
if they were issued sequentially. The buffer operations performed
while reading or writing are protected from other uses of the same
stream. To do this each stream has an internal lock object which has
to be (implicitly) acquired before any work can be done.
Edit:
Even though the above is true for threads, as rici points out, there is a comment on sourceware:
Basically, there's nothing you can safely do with CLONE_VM unless the
child restricts itself to pure computation and direct syscalls (via
sys/syscall.h). If you use any of the standard library, you risk the
parent and child clobbering each other's internal states. You also
have issues like the fact that glibc caches the pid/tid in userspace,
and the fact that glibc expects to always have a valid thread pointer
which your call to clone is unable to initialize correctly because it
does not know (and should not know) the internal implementation of
threads.
Apparently, glibc isn't designed to work with clone if CLONE_VM is set but CLONE_THREAD|CLONE_SIGHAND are not.
Your processes both use the same stdout (that is, the C standard library FILE struct), which includes an accidentally shared buffer. That's undoubtedly causing problems.
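A heavily hedged illustration of that point (my addition, not part of the original answer): making stdout unbuffered before the clone() means each printf pushes its bytes straight to the kernel instead of parking them in the FILE buffer that parent and child share under CLONE_VM. The FILE's internal state is still shared, so this only narrows the race window; it does not make stdio safe here.

#include <stdio.h>

int main(void)
{
    setvbuf(stdout, NULL, _IONBF, 0);   /* make stdout unbuffered, before clone() */
    /* ... rest of the original program unchanged ... */
    return 0;
}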
As everyone suggests: it really seems to be a problem with (how shall I put it in the case of clone()?) process-safety. With a rough sketch of a locking version of printf (using write(2)), the output is as expected.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <stdlib.h>
#include <time.h>
#define STACK_SIZE 1024*1024
// VERY rough attempt at a thread-safe printf
#include <stdarg.h>
#define SYNC_REALLOC_GROW 64
int sync_printf(const char *format, ...)
{
    int n, all = 0;
    int size = 256;
    char *p, *np;
    va_list args;

    if ((p = malloc(size)) == NULL)
        return -1;

    for (;;) {
        va_start(args, format);
        n = vsnprintf(p, size, format, args);
        va_end(args);

        if (n < 0) {
            free(p);
            return -1;
        }
        all = n;                /* length of the formatted string */
        if (n < size)
            break;              /* it fit, we're done */

        size = n + SYNC_REALLOC_GROW;
        if ((np = realloc(p, size)) == NULL) {
            free(p);
            return -1;
        } else {
            p = np;
        }
    }

    // write(2) should be thread-safe, so the lock is just in case
    flockfile(stdout);
    fflush(stdout);             /* push out anything stdio already buffered */
    n = (int) write(fileno(stdout), p, all);
    funlockfile(stdout);

    free(p);
    return n;
}
int func(void *param)
{
    sync_printf("I am func, pid %d\n", getpid());
    return 0;
}

int main()
{
    sync_printf("I am main, pid %d\n", getpid());
    void *ptr = malloc(STACK_SIZE);

    sync_printf("I am calling clone\n");
    int res = clone(func, ptr + STACK_SIZE, CLONE_VM, NULL);
    // works fine with sleep() call
    // sleep(1);
    if (res == -1) {
        sync_printf("clone error: %d", errno);
    } else {
        sync_printf("I created child with pid: %d\n", res);
    }

    sync_printf("Main done, pid %d\n\n", getpid());
    return 0;
}
For the third time: it's only a sketch, and I have no time for a robust version, but that shouldn't hinder you from writing one.
As evaitl points out, printf is documented to be thread-safe in glibc's documentation. BUT, this typically assumes that you are using the designated glibc function to create threads (that is, pthread_create()). If you do not, then you are on your own.
The lock taken by printf() is recursive (see flockfile). This means that if the lock is already taken, the implementation checks the owner of the lock against the thread attempting to take it. If they are the same, the locking attempt succeeds.
To distinguish between different threads, you need to set up TLS properly, which you do not do, but pthread_create() does. What I'm guessing happens is that in your case the TLS variable that identifies the thread is the same for both threads, so you end up taking the lock.
TL;DR: please use pthread_create()
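To illustrate the recursive stream lock mentioned above, here is a small sketch of my own (using only POSIX flockfile/funlockfile): the same thread can take the stdout lock twice, and printf still works, because the owner check sees that the lock is already held by the calling thread.

#include <stdio.h>

int main(void)
{
    flockfile(stdout);
    flockfile(stdout);                 /* recursive acquisition by the owner */
    printf("still printing fine\n");   /* printf's own lock attempt also succeeds */
    funlockfile(stdout);
    funlockfile(stdout);
    return 0;
}

With clone() and no proper TLS, that owner check is exactly what goes wrong.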
Related
I use this very simple C program to execute a system() call to php every second, in order to run a php script that sends pending push notifications from my database to APNS (Apple's notification service).
Anyway, this program causes a memory overflow after about 10 hours, so I reduced the sleep time between thread creations from 1s to 10000us, and I could see in real time with htop that memory kept increasing and never went down. Here is the program:
#include <stdlib.h>
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
typedef struct {
    char* script_path ;
} arg_for_script ;

static void *start_instance(void *_args)
{
    int id = abs(pthread_self());
    arg_for_script* args = _args ;
    printf("[SERVICE] start php script on thread %d\n", id);
    fflush(stdout);

    char cmd[200] ;
    sprintf(cmd, "php -f %s %d", args->script_path, id );
    system(cmd);

    printf("[SERVICE] end of script on thread %d\n", id);
    fflush(stdout);
    pthread_exit(NULL);
}

int main(int argc, char* argv[])
{
    if(argc < 2)
    {
        fprintf(stderr, "[SERVICE] Path of php notification script must be filled\n");
        fflush(stderr);
        return EXIT_FAILURE;
    }

    arg_for_script args ;
    args.script_path = argv[1];

    pthread_attr_t tattr ;
    struct sched_param param;
    param.sched_priority = 1 ;
    pthread_attr_init(&tattr);
    pthread_attr_setinheritsched(&tattr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&tattr, SCHED_FIFO);
    pthread_attr_setschedparam(&tattr, &param);

    while(1) {
        pthread_t thrd;
        // if(pthread_create(&thrd, &tattr, start_instance, (void *)&args) == -1) {
        if(pthread_create(&thrd, NULL, start_instance, (void *)&args) == -1)
        {
            fprintf(stderr, "[SERVICE] Unable to create thread\n");
            fflush(stderr);
            return EXIT_FAILURE;
        }
        usleep( 10000);
    }
    // pthread_attr_destroy(&tattr);
    return EXIT_SUCCESS ;
}
Here, I don't dynamically allocate any RAM with malloc. Why would this program increase memory usage? What pointer should I free here?
You aren't calling pthread_join() nor using pthread_detach(), so the resources allocated for the thread aren't freed. Namely, each thread has its own stack, which is probably what causes the rising memory consumption.
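One minimal way to apply that fix (a sketch against the loop body from the question, reusing its start_instance and args; note that pthread_create() reports failure with a nonzero error code, not -1):

pthread_t thrd;
int err = pthread_create(&thrd, NULL, start_instance, (void *)&args);
if (err != 0) {
    fprintf(stderr, "[SERVICE] Unable to create thread: %d\n", err);
    return EXIT_FAILURE;
}
pthread_detach(thrd);   /* stack and bookkeeping are released when the thread exits */
usleep(10000);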
Some remarks about your implementation: since you plan on executing a PHP script with system() and don't actually need to work on shared variables or file descriptors, it's better to use fork() and one of the variants of exec(). This will spawn a new process without the intermediate step of creating a thread. It's also not recommended to use system(), because it often makes the program exploitable when the input isn't properly sanitized. In this case it might be fine, if you only call it manually.
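A rough sketch of that fork()/exec() approach (my code, untested in your setup; it assumes the script path still comes from argv[1] as in your program). execlp() replaces system(), so no shell is involved, and ignoring SIGCHLD lets the kernel reap the children so no zombies accumulate:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s script.php\n", argv[0]);
        return EXIT_FAILURE;
    }
    signal(SIGCHLD, SIG_IGN);          /* children are reaped automatically */
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {                /* child: run the PHP script */
            execlp("php", "php", "-f", argv[1], (char *)NULL);
            _exit(127);                /* only reached if exec fails */
        } else if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        usleep(10000);                 /* same pacing as the original loop */
    }
}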
I'm trying to create a seccomp filter that would blacklist the use of fork(). This is my code:
#include <seccomp.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>
int main(void) {
    int rc = -1;
    int pid_t;
    scmp_filter_ctx ctx;

    ctx = seccomp_init(SCMP_ACT_ALLOW);
    // possible issue for torsocks: needs arg count
    rc = seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(fork), 0);
    printf("seccomp rule add return value: %d\n", rc);
    rc = seccomp_load(ctx);
    printf("seccomd_load return value: %d\n", rc);

    pid_t = fork();
    printf("%d\n", pid_t);
    seccomp_release(ctx);
    return 0;
}
I compile like so:
hc01@HC01:~/torsocks$ gcc test_seccomp.c -lseccomp
Then run to get the following output:
hc01@HC01:~/torsocks$ ./a.out
seccomp rule add return value: 0
seccomd_load return value: 0
15384
0
This implies that I was able to fork successfully, even though seccomp_rule_add and seccomp_load ran successfully. Can someone help me understand what I'm doing wrong? Thanks!
fork() from glibc is likely actually using the system call clone instead of fork, see man fork.
You can verify this by having a look at strace ./a.out.
Therefore try:
rc = seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(clone), 0);
instead.
You should in any case default-deny rather than default-allow syscalls. Otherwise you would need to consider every existing syscall and whether it can have the undesired effect (e.g., for creating a child process there is at least vfork in addition to fork and clone), and new kernel versions may add syscalls which could again produce the undesired effect.
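A rough sketch of that default-deny approach (mine, not from the original answer; the allow list is only illustrative, and the exact set of syscalls a real program plus libc needs will be larger and system-dependent):

#include <seccomp.h>
#include <unistd.h>

int main(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);   /* default: kill */
    if (ctx == NULL)
        return 1;

    /* allow just enough for this demo to print and exit cleanly */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(munmap), 0);

    if (seccomp_load(ctx) != 0) {
        seccomp_release(ctx);
        return 1;
    }

    write(STDOUT_FILENO, "still alive\n", 12);
    /* fork(), vfork() or clone() would now kill the process */
    seccomp_release(ctx);
    return 0;
}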
I am trying to use clone() to create a child process to exec() some programs. I know that exec() replaces the original process and the process that calls it ends with it, so I use a child process to call exec(). However, for some reason, after exec() my parent process also crashes. Could someone tell me why this is happening? (If I replace clone with fork or vfork, it works.)
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sched.h>
void delNL(char * arry){   //A function to delete the newline at the end of the array
    char * position;
    position = strchr(arry,'\n');
    *position = '\0';
}

int mySysC(char * command){   //clone function
    char *cmd[10]={" "};
    int nb=0;
    int cnb=0;
    while(command[nb]!='\0'){
        char coa[10];
        int cici=0;
        while(command[nb]!=' ' && command[nb]!='\0'){
            coa[cici]=command[nb];
            nb++;
            cici++;
        }
        coa[cici]='\0';
        char *nad=(char *)malloc(10);
        strcpy(nad,coa);
        cmd[cnb]=nad;
        cnb++;
        if(command[nb]==' '){
            nb++;
        }
    }
    cmd[cnb]=NULL;
    execvp(cmd[0],cmd);
    exit(0);
}

void my_system_c(char * command){   //clone version
    void * stack = (void *)malloc(10000);
    void * stackTop = stack + 100000;
    pid_t pid = clone((void *)mySysC(command),stackTop,CLONE_THREAD,NULL);   //clone
    waitpid(pid,NULL,0);
}

int main(){
    char commdd[100];
    char ex[10]="os_exit";
    while(1){
        printf("Please enter your command or enter \"os_exit\" to exit:\n");
        fgets(commdd,100,stdin);
        delNL(commdd);
        if(strlen(commdd)>0 && strcmp(commdd,ex)!=0){
            my_system_c(commdd);   //select version
        }
        else if(strcmp(commdd,ex)==0) break;
        else printf("Empty command\n");
    }
    return 0;
}
execvp(cmd[0],cmd); is what crashes the whole program. I added two prints, one before and one after; the latter one never runs. I don't understand, because I thought clone works just like fork, which creates a new process, and the end of the child process won't affect the parent?
Thanks!!!
clone(…CLONE_THREAD…) is not the clone() you're looking for. It creates a new process in the same thread group as the parent process, and:
If any of the threads in a thread group performs an execve(2), then all threads other than the thread group leader are terminated, and the new program is executed in the thread group leader.
If you are looking for a way to start a process without using fork(), consider using posix_spawn() instead.
Additionally, the stack pointer you are passing to clone() is invalid. The stack you allocate is 10,000 bytes large, but the stack pointer is 100,000 bytes beyond the start of the stack -- and 90,000 bytes beyond its end.
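For completeness, here is a rough sketch (my code, not tested in the question's setting) of what the posix_spawn() route mentioned above could look like. It runs the command through sh -c for brevity; the question's own argument parser could instead be reused to build argv and call posix_spawnp on the program directly.

#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

static int my_system_spawn(const char *command)
{
    pid_t pid;
    char *argv[] = { "sh", "-c", (char *)command, NULL };

    int err = posix_spawnp(&pid, "sh", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawnp: %s\n", strerror(err));
        return -1;
    }

    int status;
    waitpid(pid, &status, 0);   /* waitpid works, since this is an ordinary child */
    return status;
}

A call such as my_system_spawn("ls -l") then behaves roughly like the question's my_system_c, without touching clone() at all.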
This happens because you never actually call clone. This part:
clone((void *)mySysC(command), ...);
is equivalent to:
int result = mySysC(command);
void* first = (void*) result;
clone(first, ...);
so it calls your function before it ever calls clone. You need to pass it as a function pointer instead.
In addition to that, you should remove one zero from your stackTop to match the malloc, and avoid passing CLONE_THREAD since you want a new process; passing SIGCHLD as the termination signal lets the parent waitpid() for the child:
void my_system_c(char * command){   //clone version
    void * stack = (void *)malloc(10000);
    void * stackTop = stack + 10000;
    // cast because mySysC takes char* rather than void*; SIGCHLD may need <signal.h>
    pid_t pid = clone((int (*)(void *))mySysC, stackTop, SIGCHLD, command);   //clone
    waitpid(pid, NULL, 0);
}
I'm fairly new to threads in C. For this program I need to declare a thread which I create in a for loop that's meant to print out the printfs from the thread.
I can't seem to get it to print in the correct order. Here's my code:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#define NUM_THREADS 16
void *thread(void *thread_id) {
    int id = *((int *) thread_id);
    printf("Hello from thread %d\n", id);
    return NULL;
}

int main() {
    pthread_t threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++) {
        int code = pthread_create(&threads[i], NULL, thread, &i);
        if (code != 0) {
            fprintf(stderr, "pthread_create failed!\n");
            return EXIT_FAILURE;
        }
    }
    return EXIT_SUCCESS;
}
//gcc -o main main.c -lpthread
That's a classic example for understanding multi-threading.
The threads run concurrently, scheduled by the OS scheduler.
There is no such thing as a "correct order" when we are talking about running in parallel.
Also, there is such a thing as buffer flushing for stdout output. That means that when you printf something, it is not guaranteed to appear immediately, but only after some buffer limit or timeout is reached.
Also, if you want to do the work in the "correct order", meaning you wait until the first thread finishes its work before starting the next one, consider using "join":
http://man7.org/linux/man-pages/man3/pthread_join.3.html
UPD:
passing a pointer to the loop variable as thread_id is also incorrect in this case, as a thread may print an id that doesn't belong to it (thanks Kevin)
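A sketch of one common fix (my code, not part of the original answer): pass the id by value inside the pointer, and join the threads so main doesn't return before they have printed. The scheduling order of the output is still not guaranteed.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 16

static void *thread(void *arg)
{
    int id = (int)(intptr_t)arg;        /* the value was smuggled in the pointer */
    printf("Hello from thread %d\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, thread, (void *)(intptr_t)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL); /* wait for all threads before exiting */
    return EXIT_SUCCESS;
}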
So I'm trying to execute this code given to me by my professor. It's dead simple. It forks, checks to see if the forking works properly, then executes another bit of code in a separate file.
For some reason, on my OS X 10.9.5 machine, it's failing to execute the second bit of code. Here are both of the programs:
exercise.c
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
int main() {
    pid_t child = fork();
    if ((int)child < 0) {
        fprintf(stderr, "fork error!\n");
        exit(0);
    } else if ((int)child > 0) {
        int status;
        (void) waitpid(child, &status, 0);
        if (WIFEXITED(status)) {
            printf("child %d exited normally and returned %d\n",
                   child, WEXITSTATUS(status));
        } else if (WIFSIGNALED(status)) {
            printf("\nchild %d was killed by signal number %d and %s leave a core dump\n",
                   child, WTERMSIG(status), (WCOREDUMP(status) ? "did" : "didn't"));
        } else {
            printf("child %d is dead and I don't know why!\n", child);
        }
    } else {
        char *argv[] = { "./getcode" };
        execve(argv[0], argv, NULL);
    }
    return 0;
}
And getcode.c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
int main() {
    int rc = 256;
    printf("I am child process %d\n", getpid());
    while ((rc > 255) || (rc < 0)) {
        printf("please type an integer from 0 to 255: ");
        scanf("%d", &rc);
    }
    exit(rc);
}
I compile both with the commands:
gcc -Wall -pedantic-errors exercise.c -o exercise
and
gcc -Wall -pedantic-errors getcode.c -o getcode
Unfortunately, the only thing I get back from the child process is a return code of 0:
./exercise
child 903 exited normally and returned 0
I'm baffled. Can anyone help?
EDIT: Okay, so I included perror("execve") as requested, and it returns execve: Bad address. So how can I fix that?
EDIT2: All right. I fixed it. I've changed the bit of the above code to include this:
char *argv[] = { "./getcode",NULL };
execve(argv[0], argv, NULL);
Null termination fixes the argv issues.
You need to terminate argv with a NULL element. From the execve man page:
Both argv and envp must be terminated by a NULL pointer.
Also it is not clear that NULL is valid for the envp argument. The Linux man page says
On Linux, argv can be specified as NULL, which has the same effect as specifying this argument as a pointer to a list containing a single NULL pointer. Do not take advantage of this misfeature! It is nonstandard and nonportable: on most other UNIX systems doing this will result in an error (EFAULT).
Possibly specifying envp as NULL is similarly nonstandard. Use execv not execve if you don't need to specify an environment.
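Applied to the child branch of the question's program, that advice could look like the following sketch (the perror/_exit lines are my addition, so a failed exec is visible instead of silently falling through to return 0):

char *argv[] = { "./getcode", NULL };   /* the NULL terminator is required */
execv(argv[0], argv);                   /* no envp argument needed */
perror("execv");                        /* only reached if the exec failed */
_exit(1);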
You should check the return value of execve, and use errno to determine the cause, e.g. with perror("execve"). It may be complaining.
You're not checking the result of the execve call, so I suspect it's failing, and the child process is reaching the return 0 at the end of main.