For some reason, pthread_create isn't letting me pass a struct as an argument. The issue doesn't seem to be system related, although I haven't had a chance to test it on anyone else's box. It simply won't let me pass the struct; it returns error #12.
The issue is not with memory. I know 12 is ENOMEM, and "that should be that", but it's not: it simply won't accept my struct as a pointer.
struct mystruct info;
info.website = website;
info.file = file;
info.type = type;
info.timez = timez;

for (threadid = 0; threadid < thread_c; threadid++)
{
    // printf("Creating #%ld..\n", threadid);
    retcode = pthread_create(&threads[threadid], NULL, getstuff, (void *) &info);
    // void * getstuff(void *threadid);
}
When I ran this code under GDB, for some reason it didn't return code 12, but when I run it from the command line, it returns 12.
Any ideas?
Error code 12 on Linux:
#define ENOMEM 12 /* Out of memory */
You are likely running out of memory. Make sure you're not creating too many threads, and be sure to pthread_join threads when they're done (or use pthread_detach). Make sure you're not exhausting your memory through other means as well.
Passing a stack object as a parameter to pthread_create is a pretty bad idea; I'd allocate it on the heap. Error 12 is ENOMEM.
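A minimal sketch of that idea, giving each thread its own heap-allocated copy of the argument (the field types and values here are guesses, since the original struct definition isn't shown):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Field types are assumptions; the original definition isn't shown. */
struct mystruct {
    const char *website;
    const char *file;
    int type;
    long timez;
};

void *getstuff(void *arg)
{
    struct mystruct *info = arg;
    /* ... use info->website, info->file, ... */
    free(info);                     /* each thread owns and frees its copy */
    return NULL;
}

int main(void)
{
    enum { thread_c = 4 };
    pthread_t threads[thread_c];

    for (int i = 0; i < thread_c; i++) {
        struct mystruct *info = malloc(sizeof *info);
        if (!info) { perror("malloc"); exit(1); }
        info->website = "example.com";
        info->file = "index.html";
        info->type = 0;
        info->timez = 0;

        int rc = pthread_create(&threads[i], NULL, getstuff, info);
        if (rc) { fprintf(stderr, "pthread_create: %d\n", rc); exit(1); }
    }
    for (int i = 0; i < thread_c; i++)
        pthread_join(threads[i], NULL);  /* also releases the threads' resources */
    return 0;
}

A heap copy per thread also avoids having every thread share (and possibly race on) the single stack-allocated info.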
Try adding some proper error handling.
#include <string.h>
#include <stdio.h>
#include <stdlib.h>

static void fail(const char *what, int code)
{
    fprintf(stderr, "%s: %s\n", what, strerror(code));
    abort();
}

...

if (retcode)
    fail("pthread_create", retcode);
On my system, 12 is ENOMEM (out of memory).
As the title suggests, I would like to ask whether there is any way to map the data segment of my executable to another region of memory, so that any changes to the second region are instantly reflected in the first. One initial thought was to use mmap, but unfortunately mmap requires a file descriptor and I don't know of a way to open a file descriptor on my running process's memory. I tried to use shmget/shmat to create a shared memory object over the process data segment (&__data_start), but again I failed (though that might have been a mistake on my end, as I am unfamiliar with the shm API). A similar question I found is this: Linux mapping virtual memory range to existing virtual memory range?, but the replies are not helpful. Any thoughts are welcome.
Thank you in advance.
Some pseudocode would look like this:
extern char __data_start, _end;

char test = 'A';

int main(int argc, char *argv[]){
    size_t size = &_end - &__data_start;
    char *mirror = malloc(size);
    magic_map(&__data_start, mirror, size); // this is the part I need
    printf("%c\n", test);  // prints A
    int offset = &test - &__data_start;
    *(mirror + offset) = 'B';
    printf("%c\n", test);  // prints B
    free(mirror);
    return 0;
}
It appears I managed to solve this. To be honest, I don't know whether it will cause problems down the line or what side effects it might have, but this is it. (If any issues arise I will try to log them here for future reference.)
Solution:
Basically, what I did was use mmap with the MAP_ANONYMOUS, MAP_FIXED and MAP_SHARED flags.
MAP_ANONYMOUS: With this flag a file descriptor is no longer required (hence the -1 in the call).
MAP_FIXED: With this flag the addr argument is no longer a hint; the mapping is placed at exactly the address you specify.
MAP_SHARED: With this you get a shared mapping, so that any changes are visible through the original mapping.
I have left the munmap call commented out. If it executes, it unmaps the data segment (pointed to by &__data_start), and as a result the global and static variables are lost. When the exit handling runs after main returns, the program crashes with a segmentation fault, because the data segment it still needs has been pulled out from under it.
Code:
#define _GNU_SOURCE 1

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>

extern char __data_start;
extern char _end;

int test = 10;

int main(int argc, char *argv[])
{
    size_t size = 4096;

    /* Replace the first page of the data segment with an anonymous shared mapping. */
    char *shared = mmap(&__data_start, size, PROT_READ | PROT_WRITE,
                        MAP_FIXED | MAP_ANONYMOUS | MAP_SHARED, -1, 0);
    if (shared == MAP_FAILED){
        perror("mmap");
        exit(-1);
    }
    printf("original: %p, shared: %p\n", (void *)&__data_start, (void *)shared);

    size_t offset = (size_t)((char *)&test - &__data_start);
    *(int *)(shared + offset) = 50;                 /* write through the new mapping */
    msync(shared, size, MS_SYNC);
    printf("test: %d :: %d\n", test, *(int *)(shared + offset));

    test = 25;                                      /* write through the original name */
    printf("test: %d :: %d\n", test, *(int *)(shared + offset));

    //munmap(shared, size);
}
Output:
original: 0x55c4066eb000, shared: 0x55c4066eb000
test: 50 :: 50
test: 25 :: 25
For a UNIX/C project, I'm supposed to allocate two shared memory segments (which child processes will eventually access with read-only and write permissions, respectively) of two integers each. But any time I try to call shmat(3), it ends up returning -1, setting errno to EACCES, apparently indicating insufficient permissions. I've produced what seems to be the minimum required code for the error below, with possibly a couple of extra includes:
#define _SVID_SOURCE

#include <errno.h>
#include <stdio.h>
#include <sys/shm.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char *argv[]){
    int i, j, shmid, tshmid;
    int * clock;
    int * shmMsg;

    // 0777 permissions are more liberal than I need, but I've tried other
    // various literals and numbers as well as just CREAT/EXCL.
    tshmid = shmget(IPC_PRIVATE, sizeof(int)*2, IPC_CREAT | IPC_EXCL | 0777);
    if (tshmid < 1){
        printf("Error: In parent process (%d), shmid came up %d \n", getpid(), shmid);
        exit(-1);
    }

    // I've also tried this with the second argument as (void *) 0, and with
    // the third argument as "(SHM_R | SHM_W)" and "0777".
    clock = (int *) shmat(shmid, NULL, 0);
    if (clock == (void *) -1){
        printf("Error: First shmat couldn't shmattach to shmid #%d. ERRNO %d\n", shmid, errno);
        shmdt(clock);
        exit(-1);
    } // it never even gets this far

    shmdt(clock);
}
Each time, this produces an error message like:
Error: First shmat couldn't shmattach to shmid #1033469981. ERRNO 13
The longer version of my program initially returned an identical error, but errno was set to 43 (EIDRM: Segment identified by shm_id was removed). I've recursively chmodded the whole directory to full access, so that's not the issue, and every time my program crashes I have to manually deallocate the shared memory using ipcrm, so the shmids apply to actual segments. Why won't it attach?
This question already has answers here:
Getting a stack overflow exception when declaring a large array
I am creating a thread that captures packets and stores some information in a structure "flow" for each packet. I am using an array of "flow" type structures, but when I run the program it returns a SIGSEGV error. Here's the structure "flow":
typedef struct flow
{
    unsigned int s_port;
    unsigned int d_port;
    char s_addr[20];
    char d_addr[20];
    int spi;
    short total;
    short data[10000];
    struct timeval prev_t;
    double ipt[10000];
    flowParam info;
    char status[100];
} flow;
Note that flowParam is another structure whose member info is included in "flow". I also ran the program with it commented out, but got the same result.
And here's the main program:
int main()
{
    pthread_t tid;

    int err = pthread_create(&tid, NULL, Capture, NULL);
    if (err != 0){
        perror("\ncan't create capturing thread");
        exit(-1);
    }
    else
        printf("\nCapturing thread created!\n");

    pthread_join(tid, NULL);
    printf("Finished!!");
    return 0;
}

void* Capture()
{
    flow Register[5000]; /* flow Register */
    //counter Counter[5000];
    pthread_exit(NULL);
}
Interestingly, when I use another structure, "counter", and make an array of it in the thread, it does not give such an error.
typedef struct counter
{
    char s_addr[20];
    char d_addr[20];
} counter;
I tried my best to solve this issue but could not find any clue. Any help?
Your struct flow is very large, over 100KB. An array of 5000 of those is roughly 500MB. By making Register a local variable, which most likely lives on the stack, it is way too large and overflows the stack. This causes the segfault.
You should instead allocate memory for it dynamically. It's still a big structure, but there's a better chance of having memory for it on the heap.
void* Capture()
{
    flow *Register = malloc(5000 * sizeof(flow));
    if (Register == NULL)          /* ~500MB can still fail; check the result */
        pthread_exit(NULL);
    ...
    free(Register);
    pthread_exit(NULL);
}
Your struct flow is a bit over 100kB. Then you drop 5000 of those on the stack; that's 500MB of stack needed. Your system limits how much you can put on the stack, and 500MB is definitely too much. Threads impose additional limits on stack size, so they are not helping here, but I'm pretty sure this would go wrong even without threads.
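If the array really must stay a local variable, one alternative (not mentioned in the answers above, just a sketch) is to give the capture thread a bigger stack with pthread_attr_setstacksize; heap allocation is still the simpler fix.

#include <pthread.h>
#include <stdio.h>

static void *Capture(void *arg)
{
    (void)arg;
    /* In the real program the large flow Register[5000] array would live
       here; with the enlarged stack it no longer overflows. */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    /* Ask for ~1 GiB of stack so a ~500 MB local array fits; the system
       still has to be able to reserve that much address space. */
    if (pthread_attr_setstacksize(&attr, (size_t)1 << 30) != 0) {
        fprintf(stderr, "could not set stack size\n");
        return 1;
    }

    if (pthread_create(&tid, &attr, Capture, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_attr_destroy(&attr);
    pthread_join(tid, NULL);
    return 0;
}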
I apologize in advance for my ignorance; this is giving me a lot more trouble than it should, and I've been banging my head against my desk for hours trying to figure out what I'm doing wrong. I want to write an application that stores a struct in shared memory. For some reason I can't even get off the ground: I keep getting a seg fault from accessing the members of my struct.
#include <stdio.h>
#include <stdlib.h>
#include <sys/shm.h>
#include <sys/stat.h>

#define MAX_SEQUENCE 10

struct shared_data
{
    long sequence[10];
    int sequence_size;
};
typedef struct shared_data shared_data;

int main(int argc, char * argv[])
{
    int segment_id;
    shared_data * shared_memory;

    segment_id = shmget(IPC_PRIVATE, sizeof(shared_data), S_IRUSR | S_IWUSR);
    shared_memory = (shared_data *) shmat(segment_id, NULL, 0);

    shared_memory->sequence_size = atoi(argv[1]);
    printf("\n\nSequence Size: %d\n\n", shared_memory->sequence_size);

    shmdt(shared_memory);
}
UPDATE: Thanks everyone, my system administrator was running diagnostics and somehow disabled shared memory.
Your code doesn't look too bad to me. The only obvious thing missing is some kind of check for the number of arguments passed, like:
if (argc != 2)
    return 1;
Is it possible you simply forgot to call your program with an argument? In that case it would be
atoi(argv[1])
that leads to your segfault.
By the way, checking the return values of shmget and shmat would be a good idea too.
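A minimal sketch of what that might look like (not the poster's exact program, just an illustration using the same struct):

#include <stdio.h>
#include <stdlib.h>
#include <sys/shm.h>
#include <sys/stat.h>

struct shared_data {
    long sequence[10];
    int sequence_size;
};

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <sequence_size>\n", argv[0]);
        return 1;
    }

    int segment_id = shmget(IPC_PRIVATE, sizeof(struct shared_data), S_IRUSR | S_IWUSR);
    if (segment_id == -1) {
        perror("shmget");
        return 1;
    }

    struct shared_data *shared_memory = shmat(segment_id, NULL, 0);
    if (shared_memory == (void *) -1) {
        perror("shmat");
        return 1;
    }

    shared_memory->sequence_size = atoi(argv[1]);
    printf("Sequence Size: %d\n", shared_memory->sequence_size);

    shmdt(shared_memory);
    shmctl(segment_id, IPC_RMID, NULL);   /* release the segment when done */
    return 0;
}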
In C on FreeBSD, how does one access the CPU utilization?
I am writing some code to handle HTTP redirects. If the CPU load goes above a threshold on a FreeBSD system, I want to redirect client requests. Looking over the man pages, kvm_getpcpu() seems to be the right answer, but the man pages (that I read) don't document the usage.
Any tips or pointers would be welcome - thanks!
After reading the answers here, I was able to come up with the below. Due to the poor documentation, I'm not 100% sure it is correct, but top seems to agree. Thanks to everyone who answered.
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/sysctl.h>
#include <unistd.h>

#define CP_USER   0
#define CP_NICE   1
#define CP_SYS    2
#define CP_INTR   3
#define CP_IDLE   4
#define CPUSTATES 5

int main()
{
    long cur[CPUSTATES], last[CPUSTATES];
    size_t cur_sz = sizeof cur;
    int state, i;
    long sum;
    double util;

    memset(last, 0, sizeof last);

    for (i = 0; i < 6; i++)
    {
        if (sysctlbyname("kern.cp_time", &cur, &cur_sz, NULL, 0) < 0)
        {
            printf("Error reading kern.cp_time sysctl\n");
            return -1;
        }

        /* Turn the absolute tick counters into deltas since the last sample. */
        sum = 0;
        for (state = 0; state < CPUSTATES; state++)
        {
            long tmp = cur[state];
            cur[state] -= last[state];
            last[state] = tmp;
            sum += cur[state];
        }

        /* Utilization is everything that isn't idle time. */
        util = 100.0 - (100.0 * cur[CP_IDLE] / (sum ? (double) sum : 1.0));
        printf("cpu utilization: %7.3f\n", util);
        sleep(1);
    }
    return 0;
}
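If per-CPU rather than aggregate figures are needed, FreeBSD also exposes kern.cp_times, which returns one CPUSTATES-sized block of tick counters per CPU. The sketch below sizes the buffer by asking the sysctl first; I haven't verified it across FreeBSD releases, so treat it as a starting point rather than the definitive method.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/sysctl.h>

#define CPUSTATES 5   /* user, nice, sys, intr, idle */

int main(void)
{
    size_t len = 0;

    /* First call with a NULL buffer just reports the required size. */
    if (sysctlbyname("kern.cp_times", NULL, &len, NULL, 0) < 0) {
        perror("sysctlbyname(kern.cp_times)");
        return 1;
    }

    long *times = malloc(len);
    if (times == NULL || sysctlbyname("kern.cp_times", times, &len, NULL, 0) < 0) {
        perror("kern.cp_times");
        free(times);
        return 1;
    }

    /* One block of CPUSTATES tick counters per CPU. */
    size_t ncpu = len / (CPUSTATES * sizeof(long));
    for (size_t cpu = 0; cpu < ncpu; cpu++) {
        const long *c = times + cpu * CPUSTATES;
        printf("cpu%zu: user=%ld nice=%ld sys=%ld intr=%ld idle=%ld\n",
               cpu, c[0], c[1], c[2], c[3], c[4]);
    }

    free(times);
    return 0;
}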
From the MAN pages
NAME
kvm_getmaxcpu, kvm_getpcpu -- access per-CPU data
LIBRARY
Kernel Data Access Library (libkvm, -lkvm)
SYNOPSIS
#include <sys/param.h>
#include <sys/pcpu.h>
#include <sys/sysctl.h>
#include <kvm.h>
int
kvm_getmaxcpu(kvm_t *kd);
void *
kvm_getpcpu(kvm_t *kd, int cpu);
DESCRIPTION
The kvm_getmaxcpu() and kvm_getpcpu() functions are used to access the
per-CPU data of active processors in the kernel indicated by kd. The
kvm_getmaxcpu() function returns the maximum number of CPUs supported by
the kernel. The kvm_getpcpu() function returns a buffer holding the per-
CPU data for a single CPU. This buffer is described by the struct pcpu
type. The caller is responsible for releasing the buffer via a call to
free(3) when it is no longer needed. If cpu is not active, then NULL is
returned instead.
CACHING
These functions cache the nlist values for various kernel variables which
are reused in successive calls. You may call either function with kd set
to NULL to clear this cache.
RETURN VALUES
On success, the kvm_getmaxcpu() function returns the maximum number of
CPUs supported by the kernel. If an error occurs, it returns -1 instead.
On success, the kvm_getpcpu() function returns a pointer to an allocated
buffer or NULL. If an error occurs, it returns -1 instead.
If either function encounters an error, then an error message may be
retrieved via kvm_geterr(3).
EDIT
Here's the kvm_t struct:
struct __kvm {
    /*
     * a string to be prepended to error messages
     * provided for compatibility with sun's interface
     * if this value is null, errors are saved in errbuf[]
     */
    const char *program;
    char   *errp;            /* XXX this can probably go away */
    char    errbuf[_POSIX2_LINE_MAX];
#define ISALIVE(kd) ((kd)->vmfd >= 0)
    int     pmfd;            /* physical memory file (or crashdump) */
    int     vmfd;            /* virtual memory file (-1 if crashdump) */
    int     unused;          /* was: swap file (e.g., /dev/drum) */
    int     nlfd;            /* namelist file (e.g., /kernel) */
    struct kinfo_proc *procbase;
    char   *argspc;          /* (dynamic) storage for argv strings */
    int     arglen;          /* length of the above */
    char  **argv;            /* (dynamic) storage for argv pointers */
    int     argc;            /* length of above (not actual # present) */
    char   *argbuf;          /* (dynamic) temporary storage */
    /*
     * Kernel virtual address translation state. This only gets filled
     * in for dead kernels; otherwise, the running kernel (i.e. kmem)
     * will do the translations for us. It could be big, so we
     * only allocate it if necessary.
     */
    struct vmstate *vmst;
};
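Based on the man page above, here is a rough sketch of how kvm_getmaxcpu() and kvm_getpcpu() might be wired together (link with -lkvm). The struct pcpu field access is left out because its layout is kernel-version dependent, so consult <sys/pcpu.h> on your system for the members you need.

#include <sys/param.h>
#include <sys/pcpu.h>
#include <sys/sysctl.h>
#include <fcntl.h>
#include <kvm.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char errbuf[_POSIX2_LINE_MAX];

    /* NULL file names mean "the running kernel"; errors land in errbuf. */
    kvm_t *kd = kvm_openfiles(NULL, NULL, NULL, O_RDONLY, errbuf);
    if (kd == NULL) {
        fprintf(stderr, "kvm_openfiles: %s\n", errbuf);
        return 1;
    }

    int maxcpu = kvm_getmaxcpu(kd);
    if (maxcpu < 0) {
        fprintf(stderr, "kvm_getmaxcpu: %s\n", kvm_geterr(kd));
        kvm_close(kd);
        return 1;
    }

    for (int cpu = 0; cpu < maxcpu; cpu++) {
        void *buf = kvm_getpcpu(kd, cpu);
        if (buf == NULL || buf == (void *)-1)   /* inactive CPU, or an error */
            continue;
        struct pcpu *pc = buf;
        printf("cpu %d: fetched %zu bytes of per-CPU data\n", cpu, sizeof(*pc));
        /* ... inspect the struct pcpu fields you need here ... */
        free(pc);   /* the caller releases the buffer, per the man page */
    }

    kvm_close(kd);
    return 0;
}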
I believe you want to look into 'man sysctl'.
I don't know the exact library, command, or system call; however, if you really get stuck, download the source code to top. It displays per-cpu stats when you use the "-P" flag, and it has to get that information from somewhere.