I have a working Apache-like HTTP web server implemented in C. My problem is that I don't know how to initialize the queue (and therefore how to enqueue threads into it), mostly because I'm not sure how to check whether there is a previous thread to join before proceeding with the current one.
The server can exploit pipelined requests to increase its response speed, using threads in a
more sophisticated way: the web server can generate a new thread for each request for a new
resource and prepare the responses simultaneously; however, since the resources must be returned
to the client in the same order in which the requests were received by the server (FIFO),
a coordination phase between the various response threads is required.
This coordination phase is achieved by implementing a sort of "doctor's waiting room",
in which each patient, upon entering, asks who was the last to arrive, keeps track of them, and
enters the doctor's office only when the person in front of them leaves. In this way, everyone has
a partial view of the queue (each cares about only one person), but this partial view is enough
for a correct implementation of a FIFO queue.
Here is the description of what I have to do:
Likewise, each new thread will have to store the identifier of the thread that handles the previous
request and wait for its termination using the system call pthread_join(). The first thread,
obviously, will not have to wait for anyone, and the last thread will have to be waited for by the
main thread that handles the requests on that connection, before it closes the connection itself
and returns to waiting for new connection requests.
I am having trouble initializing the to_join data structure properly, mostly because I don't understand how to compute the index i of the thread to join. How can I differentiate the first and last thread in an array of pointers?
Here is the code (I may only modify the parts between the TO BE DONE START and TO BE DONE END comments):
#include "incApache.h"
pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mime_mutex = PTHREAD_MUTEX_INITIALIZER;
int client_sockets[MAX_CONNECTIONS]; /* for each connection, its socket FD */
int no_response_threads[MAX_CONNECTIONS]; /* for each connection, how many response threads */
pthread_t thread_ids[MAX_THREADS];
int connection_no[MAX_THREADS]; /* connection_no[i] >= 0 means that i-th thread belongs to connection connection_no[i] */
pthread_t *to_join[MAX_THREADS]; /* for each thread, the pointer to the previous (response) thread, if any */
int no_free_threads = MAX_THREADS - 2 * MAX_CONNECTIONS; /* each connection has one thread listening and one reserved for replies */
struct response_params thread_params[MAX_THREADS - MAX_CONNECTIONS]; /* params for the response threads (the first MAX_CONNECTIONS threads are waiting/parsing requests) */
pthread_mutex_t threads_mutex = PTHREAD_MUTEX_INITIALIZER; /* protects the access to thread-related data structures */
pthread_t thread_ids[MAX_CONNECTIONS];
int connection_no[MAX_CONNECTIONS];
void *client_connection_thread(void *vp) {
int client_fd;
struct sockaddr_storage client_addr;
socklen_t addr_size;
pthread_mutex_lock(&threads_mutex);
int connection_no = *((int *) vp);
/*** properly initialize the thread queue to_join ***/
/*** TO BE DONE 3.1 START ***/
//to_join[0] = thread_ids[new_thread_idx];
//pthread_t *first; Am I perhaps supposed to initialize the to_join data structure as a queue with two pointers
//pthread_t *last; indicating the first and last element? How can I do it on an array of pointers?
/*** TO BE DONE 3.1 END ***/
pthread_mutex_unlock(&threads_mutex);
#endif
for (;;) {
addr_size = sizeof(client_addr);
pthread_mutex_lock(&accept_mutex);
if ((client_fd = accept(listen_fd, (struct sockaddr *) &client_addr, &addr_size)) == -1)
fail_errno("Cannot accept client connection");
pthread_mutex_unlock(&accept_mutex);
client_sockets[connection_no] = client_fd;
char str[INET_ADDRSTRLEN];
struct sockaddr_in *ipv4 = (struct sockaddr_in *) &client_addr;
printf("Accepted connection from %s\n", inet_ntop(AF_INET, &(ipv4->sin_addr), str, INET_ADDRSTRLEN));
manage_http_requests(client_fd, connection_no);
}
}
#pragma clang diagnostic pop
void send_resp_thread(int out_socket, int response_code, int cookie,
int is_http1_0, int connection_idx, int new_thread_idx,
char *filename, struct stat *stat_p)
{
struct response_params *params = thread_params + (new_thread_idx - MAX_CONNECTIONS);
debug(" ... send_resp_thread(): idx=%lu\n", (unsigned long)(params - thread_params));
params->code = response_code;
params->cookie = cookie;
params->is_http1_0 = is_http1_0;
params->filename = filename ? my_strdup(filename) : NULL;
params->p_stat = stat_p;
pthread_mutex_lock(&threads_mutex);
connection_no[new_thread_idx] = connection_idx;
debug(" ... send_resp_thread(): parameters set, conn_no=%d\n", connection_idx);
/*** enqueue the current thread in the "to_join" data structure ***/
/*** TO BE DONE 3.1 START ***/
//Again, should I use a standard enqueue implementation? But then how would I keep track of the last node to arrive?
/*** TO BE DONE 3.1 END ***/
if (pthread_create(thread_ids + new_thread_idx, NULL, response_thread, connection_no + new_thread_idx))
fail_errno("Could not create response thread");
pthread_mutex_unlock(&threads_mutex);
debug(" ... send_resp_thread(): new thread created\n");
}
void *response_thread(void *vp)
{
size_t thread_no = ((int *) vp) - connection_no;
int connection_idx = *((int *) vp);
debug(" ... response_thread() thread_no=%lu, conn_no=%d\n", (unsigned long) thread_no, connection_idx);
const size_t i = thread_no - MAX_CONNECTIONS;
send_response(client_sockets[connection_idx],
thread_params[i].code,
thread_params[i].cookie,
thread_params[i].is_http1_0,
(int)thread_no,
thread_params[i].filename,
thread_params[i].p_stat);
debug(" ... response_thread() freeing filename and stat\n");
free(thread_params[i].filename);
free(thread_params[i].p_stat);
return NULL;
}
I am having trouble initializing the to_join data structure properly,
mostly because I don't understand how to compute the index i of the
thread to join. How can I differentiate the first and last thread in
an array of pointers?
Assignment is different from initialization, and operating on one element is different from operating on the whole array. As far as I can determine, you're not actually supposed to initialize to_join in that function (so the comment is misleading). Instead, you're only supposed to assign an appropriate value to a single element.
That analysis follows from my interpretation of the names, scope, and documentation comments of the various global variables and from the name, signature, and initial lines of the function in question:
it appears that the various arrays hold data pertaining to multiple threads of multiple connections, as the role of one of the file-scope connection_no arrays is to associate threads with connections.
it appears that the function is meant to be the thread-start function for connection-associated threads.
no thread started at a time when any other connection-associated threads are running should do anything other than set data pertaining to itself, lest it clobber data on which other threads and connections rely.
Now, as for the actual question -- how do you determine which thread the new one should join? You can't. At least, not relying only on the template code presented in the question, unmodified.*
Hypothetically, if you could access the version of the connection_no array that associates threads with connections, then you could use it to find the indexes of all threads associated with the current connection. You could then get their thread IDs from the corresponding thread_ids array (noting that there is another name collision here), and their join targets from the to_join array. The first thread for the connection is the one that does not join another, and the last is the one that is not joined by any other. That analysis is not altogether straightforward, but there are no real tricks to it. Details are left as the exercise they are meant to be.
But even if the file-scope name collisions were resolved, you could not perform the above analysis because the file-scope connection_no array is shadowed by a local variable of the same name inside the whole area where you are permitted to insert code.*
Note also that you appear to need to choose a thread index for the new thread, which in general will not be 0. It looks like you need to scan the thread_ids or connection_no array to find an available index.
*Unless you cheat. I take the intent to be for you to insert code (only) into the body of the client_connection_thread function, but you could, in fact, split that function into two or more by inserting code into the designated area. If the second file-scope declarations of connection_no and thread_ids were assumed to be ignored or missing in practice, then splitting up the function could provide a workaround for the shadowing issue. For example:
/*** properly initialize the thread queue to_join ***/
/*** TO BE DONE 3.1 START ***/
return client_connection_thread_helper1(connection_no);
} // end of function
// parameter 'con' is the number of this thread's connection
void *client_connection_thread_helper1(int con) {
int my_index;
// ... Find an available thread index (TODO: what if there isn't one?) ...
thread_ids[my_index] = pthread_self();
connection_no[my_index] = con; // connection_no is not shadowed in this scope
pthread_t *last = NULL;
// ... Find the last (other) thread associated with connection 'con', if any ...
// You can determine the first, too, but that does not appear to be required.
to_join[my_index] = last;
return client_connection_thread_helper2(con);
}
// A second additional function is required for the remaining bits of
// client_connection_thread(), because they need the local connection_no
void *client_connection_thread_helper2(int connection_no) {
int client_fd;
struct sockaddr_storage client_addr;
socklen_t addr_size;
/*** TO BE DONE 3.1 END ***/
pthread_mutex_unlock(&threads_mutex);
I suppose it is possible that figuring out the need and implementation for such function-splitting was intended to be part of the exercise, but that would be a dirty trick, and overall it seems more likely that the exercise is just poorly formed.
I am really new to the pthread and time APIs, and I am currently doing a homework assignment in which I have to send packets of strings at specific times using pthread_cond_timedwait(). The call is made in a thread started on the sendPackets() function, which sends all packets to the target IP. The thread initializes just fine, but after I store the time at which I would like the thread to unblock and use it as an argument to timedwait(), the function returns the ETIMEDOUT error. Now I am aware that my condition variable could be (and probably is) the reason why it is timing out. I have tried to research this function, but no matter how much searching I did, I haven't found any solution to my problem (probably because of something simple I overlooked).
Established as global variables are the mutex object and the pthread_cond_t object. They have global scope so that all threads can access them. I have also established a struct to hold information about the set of packets that I'm sending:
struct info{
int socket;
int size;
int count;
float interval;
struct sockaddr_in echoServAddr;
};
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t thread = PTHREAD_COND_INITIALIZER;
After the command-line arguments are read in (these determine things such as packet count, interval, size in bytes, and server port), I check whether the program was called as a server or as a client (the program is supposed to be interchangeable, depending on the presence of the -S flag). If it is a client, it goes into the following if statement and starts the sendPackets() thread. An info pointer is created, initialized, and cast to a void pointer in order to pass arguments to the sendPackets() function.
if(isClient){
/*Create a datagram UDP socket*/
if((sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0){
DieWithError("socket() failed\n");
}
/* Construct the server address structure */
memset(&echoServAddr, 0, sizeof(echoServAddr));
echoServAddr.sin_family = AF_INET;
echoServAddr.sin_addr.s_addr = inet_addr(servIP);
echoServAddr.sin_port = htons(echoServPort);
enum threads {sender=0,receiver};
struct info *packets = (struct info*)malloc(sizeof(struct info));
packets->size = size;
packets->count = ping_packet_count;
packets->socket = sock;
packets->echoServAddr = echoServAddr;
packets->interval = ping_interval;
pthread_t tid[2];
int a,b; //Thread creation return variables
a = pthread_create(&(tid[sender]),NULL,&sendPackets,(void*)packets);
pthread_join(tid[sender], NULL);
//pthread_join(tid[receiver], NULL);
pthread_mutex_destroy(&lock);
}
Once the thread begins, it acquires the lock and proceeds to carry out its code. Start time is the time at which the program began processing packets. Current time represents the time the program is at when calculating when to send the next packet, and send time is the start time plus the delay for each packet (sendTime = start_time + [id#] * packet_interval). After testing the code a bit, I've noticed the program doesn't time out until the time specified by sendTime, which further suggests that I am just doing something wrong with my condition variable, since I'm so unfamiliar with them. Last little note: clk_id is a macro I have set to CLOCK_REALTIME.
void* sendPackets(void *p){
printf("Starting sendPackets function...\n");
pthread_mutex_lock(&lock);
printf("Sender has acquired lock\n\n");
struct info *packet = (struct info*)p;
printf("Packet Details: Socket: %d Size %d Count %d Interval:%f\n\n",packet->socket,packet->size,packet->count,packet->interval);
struct timespec startTime = {0};
for(int i = 0; i < packet->count; i++){
struct timespec sendTime = {0};
struct timespec currentTime = {0};
float delay = packet->interval * i;
int delayInt = (int) delay;
unsigned char echoString[packet->size];
char strbffr[200] = "";
inet_ntop(AF_INET,&(packet->echoServAddr.sin_addr.s_addr),strbffr,200*sizeof(char));
sendTime.tv_sec = 0;
sendTime.tv_nsec = 0;
printf("PacketID:%d Delay:%f DelayInt:%d\n",i,delay,delayInt);
if(i == 0){
clock_gettime(clk_id,&startTime);
startTime.tv_sec+=1;
}
clock_gettime(clk_id,&currentTime);
sendTime.tv_sec = startTime.tv_sec + delayInt;
sendTime.tv_nsec = startTime.tv_nsec + (int)((delay - delayInt) * 1000000000);
printf("startTime: tv_sec = %d tv_nsec = %d\n",(int)startTime.tv_sec,(int)startTime.tv_nsec);
printf("sendTime: tv_sec = %d tv_nsec = %d\n",(int)sendTime.tv_sec,(int)sendTime.tv_nsec);
printf("currentTime: tv_sec = %d tv_nsec = %d\n\n",(int)currentTime.tv_sec,(int)currentTime.tv_nsec);
int r_wait;
if((r_wait = pthread_cond_timedwait(&thread,&lock,&sendTime)) != 0){
clock_gettime(clk_id,&currentTime);
printf("currentTime: tv_sec = %d tv_nsec = %d\n\n",(int)currentTime.tv_sec,(int)currentTime.tv_nsec);
printf("Received error for timedwait:%s\n",strerror(r_wait));
exit(1);
}
if (sendto(packet->socket, echoString, packet->size, 0, (struct sockaddr *) &packet->echoServAddr, sizeof(packet->echoServAddr)) != packet->size){
DieWithError("sendto() sent a different number of bytes than expected\n");
}
printf("Sent %d to IP:%s\n",i,strbffr);
}
for(int i = 0; i < packet->count; i++){
unsigned char echoString[packet->size];
char strbffr[200] = "";
inet_ntop(AF_INET,&(packet->echoServAddr.sin_addr.s_addr),strbffr,200*sizeof(char));
if (sendto(packet->socket, echoString, packet->size, 0, (struct sockaddr *) &packet->echoServAddr, sizeof(packet->echoServAddr)) != packet->size){
DieWithError("sendto() sent a different number of bytes than expected\n");
}
printf("Sent %d to IP:%s\n",i,strbffr);
}
pthread_mutex_unlock(&lock);
printf("Sender has released lock\n");
printf("Yielding Sender\n\n");
sched_yield();
return NULL;
}
I am aware that this is a lot to take in. If there is any other part of my code that you would like to look at that I haven't mentioned, please feel free to post a comment stating what you would like to see. I'm pretty confident this covers every data structure in my code that is relevant to the issue; however, I could always be wrong.
Here is an image of the output of my program from the print statements I have listed.
You appear to be using pthread_cond_timedwait() as a timer: you don't expect the CV to be signaled (which would terminate the wait early), but rather for the calling thread to be suspended for the full specified timeout.
In that case, ETIMEDOUT is exactly what you should expect when everything works as intended. You should check for that and accept it, and you should perform appropriate handling if you see anything else. In particular, pthread CVs can exhibit spurious wakeups, so if your pthread_cond_timedwait() ever returns normally, you need to loop back and wait again to ensure that the full timeout elapses before you proceed.
In short, you should not view an ETIMEDOUT return code as indicating that something went wrong, but rather that (for your particular purposes) everything went right.
I would like to read (asynchronously) BLOCK_SIZE bytes of one file and BLOCK_SIZE bytes of a second file, printing what has been read as soon as the respective buffer has been filled. Let me illustrate what I mean:
// in main()
int infile_fd = open(infile_name, O_RDONLY); // add error checking
int maskfile_fd = open(maskfile_name, O_RDONLY); // add error checking
char* buffer_infile = malloc(BLOCK_SIZE); // add error checking
char* buffer_maskfile = malloc(BLOCK_SIZE); // add error checking
struct aiocb cb_infile;
struct aiocb cb_maskfile;
// set AIO control blocks
memset(&cb_infile, 0, sizeof(struct aiocb));
cb_infile.aio_fildes = infile_fd;
cb_infile.aio_buf = buffer_infile;
cb_infile.aio_nbytes = BLOCK_SIZE;
cb_infile.aio_sigevent.sigev_notify = SIGEV_THREAD;
cb_infile.aio_sigevent.sigev_notify_function = print_buffer;
cb_infile.aio_sigevent.sigev_value.sival_ptr = buffer_infile;
memset(&cb_maskfile, 0, sizeof(struct aiocb));
cb_maskfile.aio_fildes = maskfile_fd;
cb_maskfile.aio_buf = buffer_maskfile;
cb_maskfile.aio_nbytes = BLOCK_SIZE;
cb_maskfile.aio_sigevent.sigev_notify = SIGEV_THREAD;
cb_maskfile.aio_sigevent.sigev_notify_function = print_buffer;
cb_maskfile.aio_sigevent.sigev_value.sival_ptr = buffer_maskfile;
and the print_buffer() function is defined as follows:
void print_buffer(union sigval sv)
{
printf("%s\n", __func__);
printf("buffer address: %p\n", sv.sival_ptr);
printf("buffer: %.128s\n", (char*)sv.sival_ptr);
}
At the end of the program I do the usual clean up, i.e.
// clean up
close(infile_fd); // add error checking
close(maskfile_fd); // add error checking
free(buffer_infile);
printf("buffer_infile freed\n");
free(buffer_maskfile);
printf("buffer_maskfile freed\n");
The problem is, every once in a while buffer_infile gets freed before print_buffer manages to print its contents to the console. Normally I would employ some kind of pthread_join(), but as far as I know that is impossible, since POSIX does not specify that sigev_notify_function must be implemented using threads; and besides, how would I get the TID of such a thread to call pthread_join() on?
Don't do it this way, if you can avoid it. If you can, just let process termination take care of it all.
Otherwise, the answer indicated in Andrew Henle's comment above is right on. You need to be sure that no more sigev_notify_functions will improperly reference the buffers.
The easiest way to do this is simply to countdown the number of expected notifications before freeing the buffers.
Note: your SIGEV_THREAD function is executed in a separate thread, though not necessarily a new thread each time (POSIX.1-2017 System Interfaces §2.4.2). Importantly, you are not meant to manage this thread's lifecycle: it is detached by default, with PTHREAD_CREATE_JOINABLE explicitly noted as undefined behavior.
As an aside, I'd suggest never using SIGEV_THREAD in robust code. Per spec, the signal mask of the sigev_notify_function thread is implementation-defined. Yikes. For me, that makes it per se unreliable. In my view, SIGEV_SIGNAL and a dedicated signal-handling thread are much safer.
I am right now trying to create a program where multiple threads are querying for data that needs to be processed and then written to disk. Currently I am using pragma and pragma critical in order to ensure that the data is being written to as intended.
This is quite costly though as threads are having to wait for one another. I read that it should be possible to have a single thread handle all write to disks for you while the others can focus on getting the incoming data and parsing it. How would I go about doing this?
The program is an XDP-based packet parser that only stores particular information regarding each packet. The code is based upon this project code here: https://github.com/xdp-project/xdp-tutorial/blob/master/tracing04-xdp-tcpdump/xdp_sample_pkts_user.c
static int print_bpf_output(void *data, int size)
{
struct {
__u16 cookie;
__u16 pkt_len;
__u8 pkt_data[SAMPLE_SIZE];
} __packed *e = data;
struct pcap_pkthdr h = {
.caplen = SAMPLE_SIZE,
.len = e->pkt_len,
};
struct timespec ts;
int i, err;
if (e->cookie != 0xdead) {
printf("BUG cookie %x sized %d\n",
e->cookie, size);
return LIBBPF_PERF_EVENT_ERROR;
}
err = clock_gettime(CLOCK_MONOTONIC, &ts);
if (err < 0) {
printf("Error with gettimeofday! (%i)\n", err);
return LIBBPF_PERF_EVENT_ERROR;
}
h.ts.tv_sec = ts.tv_sec;
h.ts.tv_usec = ts.tv_nsec / NANOSECS_PER_USEC;
if (verbose) {
printf("pkt len: %-5d bytes. hdr: ", e->pkt_len);
for (i = 0; i < e->pkt_len; i++)
printf("%02x ", e->pkt_data[i]);
printf("\n");
}
pcap_dump((u_char *) pdumper, &h, e->pkt_data);
pcap_pkts++;
return LIBBPF_PERF_EVENT_CONT;
}
This function would be called by numerous threads, and I want the pcap_dump calls to be executed by a single, different thread.
Yes, that is a common way to avoid delays when the disk is fast enough to handle the average data rate, but where occasional data peaks, disk cache writes, directory updates and other such events would otherwise cause intermittent data loss.
You need a producer-consumer queue. Such a class or code/struct, using condvars or semaphores, is easily found on SO or elsewhere on the net. The queue only needs to queue up pointers.
Don't use a wide queue to queue up the bulk data. As soon as the data is read from [wherever], read it into a malloc'ed buffer/struct that holds the data, path, command, and anything else the write thread might need to perform the write. Queue the struct pointer to the write thread. In the write thread, loop around the P-C queue pop: get the pointers, do the write (or whatever is commanded by the struct's command field), and, if there is no error, free the struct. If there is some problem, you could load an error message into a field of the struct and queue it off to some error-logging thread, store it in a queue to try again later, whatever you want, really.
This way, you insulate the rest of your app from those unavoidable, occasional disk delays. That is very important with high-latency disks, e.g. those on a network. It also makes housekeeping operations much easier; for instance, an hourly timer could queue up a struct whose command field instructs the thread to open a new file with a date-time stamp in the filename, making it easier to track the data later without wading through one massive file :) Such operations, without the queue and write thread, would surely inflict a massive delay on your app :(
I am learning to do socket programming and multithreaded programming in C on Windows.
I have designed a project where there will be three types of nodes for backup (server, client and storage node).
I have created the following to have one server and multiple clients and storage nodes.
The server needs to create two kinds of threads based on the type of client requesting the service (to be explicit: a normal client or a storage node).
I am using blocking I/O mode.
The structure of the code is like this:
Server:
int main()
{
//initialization and other things
while ((new_socket = accept(srv_sock, (struct sockaddr *)&client, &c)) != INVALID_SOCKET)
{
_beginthreadex(0, 0, handle_client, &new_socket, 0, 0);
}
}
uint32_t __stdcall handle_client(void *data)
{
SOCKET* sock = (SOCKET*)data;
SOCKET client_sock = *sock;
//other
recv_size = recv(client_sock, header_buf, HDR_LEN, 0);
//fixed length header
if (!strncmp(connect_with, "storageNode", strlen(connect_with)))
//check if client is a normal client or a storage node
{
_beginthreadex(0, 0, handle_storage_node, sock, 0, 0);
return 0;
}
else
{
//continue with request from normal client
}
}
uint32_t __stdcall handle_storage_node(void *data)
{
SOCKET* sock_SN = (SOCKET*)data;
SOCKET str_node_sock = *sock_SN;
//continue with request from storage node
}
The main reason, among other things, that I want to change this to overlapped I/O is that sometimes (probably once in a thousand times) a message from a normal client ends up as a message from a storage node and vice versa.
I think the reason for that is that Winsock is not strictly thread-safe. Plus, as a beginner, I want to learn to do it another way.
So, what should the equivalent structure for an overlapped I/O implementation be? And how do I stop messages from being delivered to the wrong thread?
PS:- I am a beginner take it easy on me!
Your problem is not overlapped mode versus blocking mode. It's that your program acts on invalidated data.
In lines like this
_beginthreadex(0, 0, handle_client, &new_socket, 0, 0);
you are passing the address of a variable to the new thread. That variable is reused across iterations of the while loop, so it will most likely be overwritten with the next socket handle the next time accept() succeeds, while the earlier thread may still be reading it.
To fix this you could heap-allocate each socket instance and pass that pointer to your worker thread.
Overlapped mode will just complicate everything. If you don't know exactly why you need it, there is no reason to use it.
I am trying to implement some RTOS threads on Arm MBED OS on a K64F board. I am starting from the RTOS examples, and I have successfully run and communicated between different threads using Queues. I run into problems when copying char* values from one struct to another to get a message from one queue to another. I believe I am misunderstanding something and that my problem is related to pointers and memory handling, but I am not able to get through it.
I have defined different queues to send data to various threads. I have also created a basic data structure containing everything I need to pass among these threads. In this struct I have a char* variable (rHostAddr) containing the address of the remote host that requested a service.
typedef struct{
...
char* rHostAddr;
...
} cMsg;
MemoryPool<cMsg, 16> AMPool;
Queue<cMsg, 16> AMQueue;
MemoryPool<cMsg, 16> ioLedPool;
Queue<cMsg, 16> ioLedQueue;
In the Main Thread I am creating this data structure and putting it in the first queue (AMQueue).
--- Main Thread ---
cMsg *message = AMPool.alloc();
char* rcvaddrs = "111.111.111.111";
message->rHostAddr = "111.111.111.111";
rcvaddrs = (char*)addr.get_ip_address();
message->rHostAddr = rcvaddrs;
AMQueue.put(message);
On Thread 1 I wait for a message to arrive and, under certain conditions, copy the whole structure to a new one created from the corresponding pool and insert it into a new queue (ioLedQueue).
--- Thread 1 ---
cMsg *msg;
cMsg *ledm = ioLedPool.alloc();
osEvent evt = AMQueue.get();
msg = (cMsg*)evt.value.p;
ledm->rHostAddr = msg->rHostAddr;
printf("\t -- Host 1 -- %s\n\r", ledm->rHostAddr);
ioLedQueue.put(ledm);
On Thread 2 I get the message structure and the data.
--- Thread 2 ---
cMsg *msg;
osEvent evt = ioLedQueue.get();
msg = (cMsg*)evt.value.p;
printf("\t -- Host 2 -- %s\n\r", msg->rHostAddr);
At this stage rHostAddr is empty. I can see the value in the printf "Host 1" but not in "Host 2".
I believe (if I am not wrong) that the problem comes from assigning with the = operator, as I am copying the address, not the value, and the value is lost when the first pool's memory is freed. I have tried copying the value with memcpy, strcpy and even my own char-by-char copy, but the system hangs when calling these methods.
How can I copy the value through this queues?
I am moving it here, as the correct answer was written in a comment. Converting the value to an array of chars was the way to go, so that the string data is part of the struct itself:
char rHostAddr[40];
Now the assignment can be done with strcpy, and the value is passed through the whole process correctly:
char* rcvaddrs = (char*)addr.get_ip_address();
strcpy(message->rHostAddr,rcvaddrs);
Take a look at this solution from ARM mbed:
https://github.com/ARMmbed/mbed-events