I have a piece of MPI C code which looks something like the following:
for(i=0;i<NTask;i++)
{
    got_initial_bit_of_data[i]=0;
    if(need_to_communicate_with(i))
        MPI_Isend(&bit_of_pre_data_for_i,1,MPI_INT,partner,0,MPI_COMM_WORLD,&pre_requests[i]);
}
while(1)
{
    MPI_Testsome(NTask,pre_requests,&ndone,idxs,MPI_STATUSES_IGNORE);
    if(ndone)
    {
        for(i=0;i<ndone;i++)
        {
            MPI_Isend(&the_main_block_of_data_for_i,size_of_block,MPI_BYTE,idxs[i],1,MPI_COMM_WORLD,&main_requests[idxs[i]]);
        }
    }
    //Other stuff that doesn't matter
    MPI_Iprobe(MPI_ANY_SOURCE,0,MPI_COMM_WORLD,&flag,&status);
    if(!flag)
    {
        MPI_Iprobe(MPI_ANY_SOURCE,1,MPI_COMM_WORLD,&flag,&status);
    }
    if(flag)
    {
        //Receiving the initial little bit of data
        if(status.MPI_TAG==0)
        {
            //Location 1
            got_initial_bit_of_data[status.MPI_SOURCE]=1;
            MPI_Recv(&useful_location,1,MPI_INT,status.MPI_SOURCE,0,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
        }
        //Receiving the main bit of data
        else if(status.MPI_TAG==1)
        {
            //Location 2
            if(got_initial_bit_of_data[status.MPI_SOURCE]!=1)
            {
                //Something has gone horribly wrong...
            }
            //Receive the main bit of data here...
        }
    }
}
Obviously I've omitted lots of details because the full code is several hundred lines long. If something I've done looks a bit odd, it's probably because of something in the omitted code.
The idea is that at the start each processor sends an "announcement" message to those processors it wants to talk to. When it detects that those processors have received this message (that is when MPI_Testsome indicates the "announcement" MPI_Isend is complete), it should send a big chunk of data.
From the point of view of a processor receiving data, it should first receive the announcement message at location 1, which will cause MPI_Testsome to indicate that the Isend is complete and send the big chunk of data. The receiving processor should then receive the main block of data at location 2. Following this logic, it should be impossible to reach location 2 with got_initial_bit_of_data[status.MPI_SOURCE] being 0, but this is precisely what does happen very occasionally and I'd like to work out why.
Either I've got the logic of the code wrong, or there's some subtlety of IProbe and Testsome that I'm missing.
I'm also exiting and re-entering this entire block of code, with different processors moving in and out at different points in time, but only when all their ISends have been processed (as determined by Testsome saying that they're completed).
If the above explanation doesn't make any sense, what I want to know is: are there any circumstances under which Testsome can claim that an Isend has completed without the matching receive completing (or even starting)? Is a processor making a call to Iprobe enough to cause Testsome to consider a request completed, for instance?
All that MPI_Testsome guarantees is that the buffer you were using for the Isend is no longer needed by MPI. If you want to guarantee that the recipient has at least started the matching receive, use the synchronous form, MPI_Issend.
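For illustration, a minimal sketch of how the announcement phase might look with MPI_Issend. The loop mirrors the question's pseudocode; need_to_communicate_with() and the buffer/request names are stand-ins, not the real code:

#include <mpi.h>

/* Hypothetical predicate standing in for the question's omitted logic. */
extern int need_to_communicate_with(int rank);

void announce_to_partners(int NTask, int *got_initial_bit_of_data,
                          int *bit_of_pre_data, MPI_Request *pre_requests)
{
    for (int i = 0; i < NTask; i++)
    {
        pre_requests[i] = MPI_REQUEST_NULL;   /* MPI_Testsome ignores unused slots */
        got_initial_bit_of_data[i] = 0;
        if (need_to_communicate_with(i))
            MPI_Issend(&bit_of_pre_data[i], 1, MPI_INT, i, 0,
                       MPI_COMM_WORLD, &pre_requests[i]);
    }
}

With MPI_Issend, MPI_Testsome only reports the request complete once the matching receive has been posted on the other side, which is exactly the ordering the main-data send relies on.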
I'm learning C and messing around with the xcb library (instead of X11) on a Raspberry Pi 4.
The problem is that when I implement the event loop with xcb_poll_for_event instead of xcb_wait_for_event, one of the four cores sits at 100%. What am I doing wrong? And is there any benefit to using xcb_wait_for_event (blocking) instead of xcb_poll_for_event (non-blocking)?
The goal is to create a window where the user interacts with objects via keyboard/mouse/gamepad, like a game. Can anyone give a hand?
The relevant code is:
int window_loop_test(xcb_connection_t *connection, Display *display){
/* window loop non blocked waiting for events */
int running = 1;
while (running) {
xcb_generic_event_t *event = xcb_poll_for_event(connection);
if (event) {
switch (event->response_type & ~0x80) {
case XCB_EXPOSE: {
// TODO
break;
}
case XCB_KEY_PRESS: {
/* Quit on 'q' key press */
/* write key pressed on console */
const xcb_key_press_event_t *press =
(xcb_key_press_event_t *)event;
XKeyEvent keyev;
keyev.display = display;
keyev.keycode = press->detail;
keyev.state = press->state;
char key[32];
XLookupString(&keyev, key, sizeof(key) - 1, NULL, NULL);
// key[len] = 0;
printf("Key pressed: %s\n", key);
printf("Mod state: %d\n", keyev.state);
if (*key == 'q')
running = 0;
break;
}
}
free(event);
}
}
return 0;
}
Polling and waiting each have their advantages and are good for different situations. Neither is "wrong" per se, but you need to use the correct one for your specific use case.
xcb_wait_for_event(connection) is a blocking call. The call will not return until an event is available, and the return value is that event (unless an error occurs). It is good for situations where you only want the thread to respond to events, but otherwise not do anything. In that case, there is no need to spend CPU resources when no events are coming in.
xcb_poll_for_event(connection) is a non-blocking call. The call always returns immediately, but the result will be NULL if no event is available. It is good for situations where you want the thread to be able to do useful work even if no events are coming in. As you found out, it's not good if the thread only needs to respond to events, as it can consume CPU resources unnecessarily.
You mention that your goal is to create a game or something similar. Given that there are many ways to architect a game, either function can be suitable. But there are a couple of basic things to keep in mind that will determine which function you want to use. There may be other considerations as well, but this will give you an idea of what to look out for.
First of all, is your input system running on the same thread as other systems (simulation, rendering, etc)? If so, it's probably important to keep that thread available for work other than waiting for input events. In this case, xcb_poll_for_event() is almost required, otherwise your thread will be blocked until an event comes in. However, if your input system is on its own thread that doesn't block your other threads, it may be acceptable to use xcb_wait_for_event() and let that thread sleep when no events are coming in.
The second consideration is how quickly you need to respond to input events. There's often a delay in waking up a thread, so if fast response times are important you'll want to avoid letting the thread sleep in the first place. Again, xcb_poll_for_event() will be your friend in this case. If response times are not critical, xcb_wait_for_event() is an option.
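If the thread really only needs to react to input, a blocking loop is the simplest fix for the pegged core. A minimal sketch, assuming the same connection as in the question; handle_event() here is just a placeholder for the switch in your code:

#include <stdlib.h>
#include <xcb/xcb.h>
#include <xcb/xproto.h>   /* XCB_EXPOSE, XCB_KEY_PRESS */

/* Placeholder for the question's switch; returns 0 when the loop should end. */
static int handle_event(xcb_generic_event_t *event)
{
    switch (event->response_type & ~0x80) {
    case XCB_EXPOSE:
        /* redraw here */
        return 1;
    case XCB_KEY_PRESS:
        /* key handling as in the question; return 0 to quit (sketch only) */
        return 0;
    default:
        return 1;
    }
}

int window_loop_blocking(xcb_connection_t *connection)
{
    int running = 1;
    while (running) {
        /* Sleeps inside the X connection until an event arrives,
           so no core is pegged while the program is idle. */
        xcb_generic_event_t *event = xcb_wait_for_event(connection);
        if (!event)
            break;                     /* I/O error or connection closed */
        running = handle_event(event);
        free(event);
    }
    return 0;
}

If you keep the polling version because the same thread also runs the game loop, the CPU use is expected: the loop should then be doing frame work (and pacing itself, e.g. against vsync) between polls rather than spinning on xcb_poll_for_event alone.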
Is there any way, using static analysis tools (I'm using CodeSonar now), to detect unreleased-lock problems (something like unreleased semaphores) in the following program? (See the comment marked by arrows.)
The project is a multi-task system using round-robin scheduling, where new_request() is an interrupt task that arrives randomly and send_buffer() is a periodic task.
In the real code, get_buffer() and send_buffer() are various kinds of wrappers containing many call layers above the actual lock/unlock operations, so I can't simply specify get_buffer() as the lock function in the static analysis tool's settings.
int bufferSize = 0; // say max size is 5

// random task
void new_request()
{
    int bufferNo = get_buffer(); // wrapper
    if (bufferNo == -1)
    {
        return; // buffer is full
    }
    if (check_something() == OK)
    {
        add_to_sendlist(bufferNo); // for asynchronous processing by send_buffer()
    }
    else // bad request
    {
        // ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
        // There should be a clear_buffer placed here
        // but it was forgotten. Eventually the buffer will be
        // full and won't be cleared once the 5th bad request comes.
        // ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
        do_nothing();
        // clear_buffer(bufferNo);
    }
}

int get_buffer()
{
    if(bufferSize < 5)
    {
        bufferSize++;
        return bufferSize;
    }
    else
    {
        wait_until_empty(); // wait until something is sent by send_buffer()
        return -1;
    }
}

// clear the specified one in the buffer
void clear_buffer(int bufferNo)
{
    delete(bufferNo);
    bufferSize--;
}

// periodic task
void send_buffer()
{
    int sent = send_1st_stuff_in_list();
    clear_buffer(sent);
}
yoyozi - Fair disclosure: I'm an engineer at GrammaTech who works on CodeSonar.
First some general things. The relevant parts of the manual for this are on the page: codesonar/doc/html/C_Module/LibraryModels/ConcurrencyModelsLocks.html. Especially the bottom of the page on Resolving Lock Operation Identification Problems.
Based on your comments, I think you have already read this, since you address setting the names in the configuration settings.
So then the question is how many different wrappers do you have? If it is only a few, then the settings in the configuration file are the way to go. If there are many, that gets tedious. And if there are very many it becomes practically impossible.
So knowing some estimate for how many wrapper sets you have would help.
Even with the wrappers accounted for, it may be that the deadlock and race detectors aren't quite what you need for your problem.
If I understand your issue correctly, you have a queue with limited space, and by accident malformed items don't get cleaned out of the queue, and so the queue gets full and that stalls all processing. While you may have multiple threads involved in this implementation, the issue itself would still be a problem in a basically serial setting.
The best way to work with an issue like this is to try and make a simpler example that displays the same core problem. If you can do this in a way that can be shared with GrammaTech, we can work with you on ways to adjust settings or maybe provide hints to the analysis so it can find this issue.
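Purely as an illustration of the kind of reduced example meant here (all names invented), the shape to aim for is a resource acquired through one wrapper layer with a release missing on one branch, small enough to share and to point an analysis configuration at:

/* Hypothetical reduced reproducer: acquire/release go through a single
   wrapper layer, and the error branch forgets the release. */
#define POOL_SIZE 5

static int pool_used = 0;

static int acquire_slot(void)          /* wrapper around the real acquire */
{
    if (pool_used < POOL_SIZE)
        return pool_used++;
    return -1;                         /* pool exhausted */
}

static void release_slot(int slot)     /* wrapper around the real release */
{
    (void)slot;
    pool_used--;
}

static int validate(int slot) { return slot % 2 == 0; }  /* stand-in check */

void handle_request(void)
{
    int slot = acquire_slot();
    if (slot < 0)
        return;
    if (validate(slot)) {
        /* queued for a consumer task, which calls release_slot() later */
    } else {
        /* BUG under test: release_slot(slot) is missing on this path */
    }
}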
If you would like to talk about this in more detail, and with protection against public disclosure of your code, please contact us at support_at_grammatech_dot_com, where the at and dot should be replaced as needed to make a well-formed email address.
I am coding communication between two DSPs over SPI. The starting code is quite simple: DSP-1 sends and DSP-2 receives. (Of course, DSP-1 also receives, but I don't care about that so far; vice versa for DSP-2.)
That works fine. One thousand 16-bit values were sent and received correctly.
However, when I add a random delay on the DSP-1 (master) side, I find that DSP-2 begins to lose some data. It confuses me because I didn't change anything on the DSP-2 receiving side, and I am polling quite often.
So, any idea why a delay on the sender's side might affect the receiver? (I double-checked that DSP-1 sends the correct sequence.)
I am also thinking of converting to an interrupt mechanism; will that solve this kind of issue altogether?
My DSP-2 polling code is:
for(;;) // my main loop for receiving
{
    spi_xmit(data); // for sending, not of interest so far
    while(SpiaRegs.SPIFFRX.bit.RXFFST == 0) {} // polling
    while(SpiaRegs.SPIFFRX.bit.RXFFST != 0)
    {
        rdata[seq] = SpiaRegs.SPIRXBUF;
        seq++;
    }
    if(seq>1000) break;
}
I am looking to write a socket program based on libev. I noticed that several examples, such as https://github.com/coolaj86/libev-examples/blob/master/src/unix-echo-server.c, set up their callbacks with ev_io_init. For example:
main() {
......
ev_io_init(&client.io, client_cb, client.fd, EV_READ|EV_WRITE);
ev_io_start(EV_A_ &server.io);
}
static void client_cb (EV_P_ ev_io *w, int revents)
{
if (revents & EV_READ)
{
....
} else if (revents & EV_WRITE) {
......
}
}
My question is about the expected behaviour. Say, for example, that everything I read under EV_READ is stored in a linked list, and I have to send everything I receive on to another socket. If I keep getting a free flow of packets to read, will I ever get a chance to get into EV_WRITE? Will it be EV_READ one time and EV_WRITE the next? In other words, when does EV_WRITE get a turn? Or do I need to stop EV_READ for EV_WRITE to be called? Can someone help me understand this?
I think you should keep the write callback separate from the read callback:
main() {
    ev_io_init(&read.io, read_cb, client.fd, EV_READ);
    ev_io_init(&write.io, write_cb, client.fd, EV_WRITE);
    ev_io_start(EV_A_ &read.io);
    ev_io_start(EV_A_ &write.io);
}
This is my solution.
To answer shortly: if you always check for one type of event first and then have an else-if for the other, you risk starvation. In general I would check for both, unless the specified protocol made it impossible for both to be activated at the same time.
Here is a more iffy answer:
The link in your question does not contain code structured like your snippet. The client https://github.com/coolaj86/libev-examples/blob/master/src/unix-echo-client.c does have a similar callback. You will notice it disables write events when it has written once:
// once the data is sent, stop notifications that
// data can be sent until there is actually more
// data to send
ev_io_stop(EV_A_ &send_w);
ev_io_set(&send_w, remote_fd, EV_READ);
ev_io_start(EV_A_ &send_w);
That looks like an attempt to avoid starvation of the pipe's READ event branch. Even though I'm not very familiar with libev, the GitHub examples you linked to do not seem very robust. E.g. static void stdin_cb (EV_P_ ev_io *w, int revents) does not use the return value of getline() to detect EOF. Also, the send() and recv() return values are not inspected for how much was actually read or written (though on local named pipe streams the amounts will most likely match what was requested). If this were later changed to a TCP-based connection, checking the amounts would be vital.
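To make the "only ask for write notifications when there is something to send" idea concrete, here is a sketch with hypothetical names (not taken from the linked examples): a single watcher whose interest set is rebuilt after each callback, so EV_WRITE is only requested while outgoing data is queued and neither branch can starve the other:

#include <ev.h>
#include <stddef.h>

static ev_io client_watcher;
static int client_fd;
static size_t pending_bytes;           /* bytes waiting in the outgoing queue */

static void update_interest(struct ev_loop *loop)
{
    int events = EV_READ | (pending_bytes > 0 ? EV_WRITE : 0);
    /* ev_io_set() must not be called on an active watcher, so stop it first. */
    ev_io_stop(loop, &client_watcher);
    ev_io_set(&client_watcher, client_fd, events);
    ev_io_start(loop, &client_watcher);
}

static void client_cb(struct ev_loop *loop, ev_io *w, int revents)
{
    (void)w;
    if (revents & EV_READ) {
        /* recv() here, append the data to the outgoing queue, then: */
        pending_bytes += 1;            /* stand-in for the real byte count */
    }
    if (revents & EV_WRITE) {
        /* send() as much of the queue as the socket accepts, then: */
        pending_bytes = 0;             /* stand-in: assume everything went out */
    }
    update_interest(loop);             /* no else-if, so neither side is starved */
}

void client_setup(struct ev_loop *loop, int fd)
{
    client_fd = fd;
    pending_bytes = 0;
    ev_io_init(&client_watcher, client_cb, fd, EV_READ);   /* start read-only */
    ev_io_start(loop, &client_watcher);
}

A socket is almost always writable, so leaving EV_WRITE enabled permanently just makes the callback spin; tying it to pending data avoids that.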
I'm trying to acquire data from an MCU, save it to a file and plot it. The code functions properly for some time, then just hangs randomly (sometimes after 1 second, sometimes after 1 minute...!). Also, the serial port timeouts are not respected, i.e. I'm not receiving any timeout exceptions. I'm using an FTDI232RL chip. The only time I get a timeout exception is when I unplug it while the program is running.
Code:
private: System::Void START_Click(System::Object^ sender, System::EventArgs^ e) {
seconds=0;
minutes=0;
hours=0;
days=0;
t=0;
if((this->comboBox4->Text == String::Empty)||(this->textBox2->Text == String::Empty)||(this->textBox3->Text == String::Empty)){
this->textBox1->Text="please select port, save file directory and logging interval";
timer1->Enabled=false;
}
else{ // start assigning
w=Convert::ToDouble(this->textBox3->Text);
double q=fmod(w*1000,10);
if(q!=0){
MessageBox::Show("The logging interval must be a multiple of 0.01s");
}
else{
period=static_cast<int>(w*1000);
this->interval->Interval = period;
try{ // first make sure port isn't busy/open
if(!this->serialPort1->IsOpen){
// select the port whose name is in comboBox4 (select port)
this->serialPort1->PortName=this->comboBox4->Text;
//open the port
this->serialPort1->Open();
this->serialPort1->ReadTimeout = period+1;
this->serialPort1->WriteTimeout = period+1;
String^ name_ = this->serialPort1->PortName;
START=gcnew String("S");
this->textBox1->Text="Logging started";
timer1->Enabled=true;
interval->Enabled=true;
myStream=new ofstream(directory,ios::out);
*myStream<<"time(ms);ADC1;ADC2;ADC3;ADC4;ADC5;ADC6;ADC7;ADC8;";
*myStream<<endl;
chart1->Series["ADC1"]->Points->Clear();
chart1->Series["ADC2"]->Points->Clear();
chart1->Series["ADC3"]->Points->Clear();
chart1->Series["ADC4"]->Points->Clear();
chart1->Series["ADC5"]->Points->Clear();
chart1->Series["ADC6"]->Points->Clear();
chart1->Series["ADC7"]->Points->Clear();
chart1->Series["ADC8"]->Points->Clear();
backgroundWorker1->RunWorkerAsync();
}
else
{
this->textBox1->Text="Warning: port is busy or isn't open";
timer1->Enabled=false;
interval->Enabled=false;
}
}
catch(UnauthorizedAccessException^)
{
this->textBox1->Text="Unauthorized access";
timer1->Enabled=false;
interval->Enabled=false;
}
}
}
}
private: System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e) {
while(!backgroundWorker1->CancellationPending){
if(backgroundWorker1->CancellationPending){
e->Cancel=true;
return;
}
t+=period;
if(t<10*period){
this->chart1->ChartAreas["ChartArea1"]->AxisX->Minimum=0;
this->chart1->ChartAreas["ChartArea1"]->AxisX->Maximum=t+10*period;
}
else {
this->chart1->ChartAreas["ChartArea1"]->AxisX->Minimum=t-10*period;
this->chart1->ChartAreas["ChartArea1"]->AxisX->Maximum=t+10*period;
}
*myStream<<t<<";";
for (int n=0;n<8;n++){
adc_array[n]= this->serialPort1->ReadByte();
}
Array::Copy(adc_array,ADC,8);
for(int f=0; f<8; f++){
*myStream<<ADC[f]<<";";
}
*myStream<<endl;
backgroundWorker1->ReportProgress(t);
}
}
private: System::Void backgroundWorker1_ProgressChanged(System::Object^ sender, System::ComponentModel::ProgressChangedEventArgs^ e) {
chart1->Series["ADC1"]->Points->AddXY(t,ADC[0]);
chart1->Series["ADC2"]->Points->AddXY(t,ADC[1]);
chart1->Series["ADC3"]->Points->AddXY(t,ADC[2]);
chart1->Series["ADC4"]->Points->AddXY(t,ADC[3]);
chart1->Series["ADC5"]->Points->AddXY(t,ADC[4]);
chart1->Series["ADC6"]->Points->AddXY(t,ADC[5]);
chart1->Series["ADC7"]->Points->AddXY(t,ADC[6]);
chart1->Series["ADC8"]->Points->AddXY(t,ADC[7]);
}
The user is allowed to define the data-acquisition interval in seconds (in the code this interval is w after conversion to double). At each interval, the program sends a pulse to the MCU requesting a new data transmission. So far I have been testing this with 1-second intervals (note that during each interval the MCU sends 8 frames, each representing an ADC). However, I need to get this running at 10 ms intervals at some point. Will this be possible? Any idea how to solve the problems I mentioned at the beginning?
Thanks in advance
UPDATE
Just to give you an idea of what's happening:
I commented the charting part and ran the program for about 5 minutes, with a reading interval of 1s. So I expected to get around 5x60=300 values in the output file, but I only got 39 (i.e. starting from 1s till 39s). The program was still running, but the data were not getting stored anymore.
Testing was done in release mode, not debug mode. In debug mode, setting a breakpoint below serialPort1->ReadByte() does not reproduce the problem. My guess is that it's a timing issue between the program and the MCU.
You are making several standard mistakes. First off, do NOT unplug the cable while the port is open. Many USB emulators don't know how to deal with that, and the FTDI driver is particularly notorious about it. They just make the port disappear while it is in use, which invariably gives code that uses the port a severe heart attack. An uncatchable exception is common.
Secondly, you are accessing properties of a class that is not thread-safe from a worker thread. The Chart control was made to be used only on the UI thread; accessing the ChartAreas property in a worker is going to buy you a lot of misery. Getting an InvalidOperationException is pretty typical when you violate threading requirements, though it is not consistently enforced. The nastiness includes random AccessViolationExceptions, corrupted data and deadlock.
Third, you are setting completely unrealistic goals. Pursuing an update every 10 milliseconds is pointless, the human eye cannot perceive that. Anything past 50 milliseconds just turns into a blur. Something that is taken advantage of when you watch a movie in the cinema: it displays at 24 frames per second. The failure mode is unpleasant as well: you'll eventually reach a point where you are pummeling the UI thread (or the Chart control) with more updates than it can process. The side effect is that the UI stops painting itself; it is too busy trying to keep up with the deluge of invoke requests. And the amount of memory your program consumes keeps building, since the update queue grows without bounds. That eventually ends with an OOM exception, though it takes a while to consume 2 gigabytes. You will need to prevent this from happening: throttle the rate at which you invoke. A simple thread-safe counter can take care of that.
Fourth, you are accessing the data you gather from more than one thread without taking care of thread-safety. The ADC array content is being changed by the worker while the UI thread is reading it. Various amounts of misery follow from that, bad data at a minimum. A simple workaround is to pass a copy of the data to the ReportProgress method. In general, address these kinds of threading problems by using pull instead of push: get rid of the fire-hose problem by having the UI thread pace the requests instead of trying to have the UI thread keep up.