What is an appropriate way to detect errors in CAN bus? (C)

I am communicating with a battery that sends data using the CAN bus protocol (J1939). I use a PIC18F26K83. My goal is to display the remaining state of charge on a display. My current approach is to collect many samples and take the majority value (for example, if 60 of the array entries hold 99% and 40 hold 1%, I display 99%). However, this does not look like a reliable solution, because I do not know how much garbage data I receive. Please note that I cannot use an error-detection algorithm such as a checksum, because I have no access to the microcontroller in the battery; I can only work on the receiver (display) side.
Edit: I am aware of the CRC in CAN bus, but it sometimes seems not to work, since I still occasionally receive garbage.

Yes, you can rely on the CRC, because the CRC is also calculated at the receiver side by the communication controller. That's how a CRC error is detected, for instance. To elaborate:
the battery sends a complete message; the message picks up interference on the physical layer; the receiver (your PIC) receives the message and calculates the CRC over it; the result does not match the CRC field included in the message;
the PIC's CAN controller increments its receive error counter (REC) by 1 and does not ACK the message to the battery.
You will detect every type of CAN message error on the receiver side except a bit error, which is also irrelevant here since it results in an incomplete message.
Basically, you should not be relying on corrupted (garbage) CAN message content when deducing the battery level at all: a corrupted message is simply discarded before it arrives at the application layer of your PIC.
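For illustration, this is roughly what the controller computes in hardware: a bit-wise CRC-15 over the frame, with the polynomial 0x4599 that CAN uses. This is only a sketch of the checking principle; the function name is mine, and a real controller runs the CRC over the exact (stuffed) bit stream of the frame, not over payload bytes in RAM:

```c
#include <stdint.h>
#include <stddef.h>

/* CAN CRC-15: polynomial x^15+x^14+x^10+x^8+x^7+x^4+x^3+1 (0x4599), init 0.
   Processes the input MSB-first, one bit at a time, like the shift register
   in a CAN controller. */
uint16_t can_crc15(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        for (int bit = 7; bit >= 0; bit--) {
            uint16_t in  = (data[i] >> bit) & 1u;   /* next input bit     */
            uint16_t msb = (crc >> 14) & 1u;        /* top bit of CRC reg */
            crc = (crc << 1) & 0x7FFFu;             /* keep 15 bits       */
            if (in ^ msb)
                crc ^= 0x4599u;
        }
    }
    return crc;
}
```

Flipping a single bit of the message changes the CRC, and that mismatch between the recomputed and the transmitted CRC is exactly what makes the receiving controller discard the frame.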


How can I get current microphone input level with C WinAPI?

Using the Windows API, I want to implement something like the level meter in the Windows sound settings, i.e. getting the current microphone input level.
I am not allowed to use external audio libraries, but I can use Windows libraries. So I tried using waveIn functions, but I do not know how to process audio input data in real time.
This is the method I am currently using:
Record for 100 milliseconds
Select highest value from the recorded data buffer
Repeat forever
But I think this is way too hacky, and not a recommended way. How can I do this properly?
Having built a tuning wizard for a very dated, but well known, A/V conferencing application, I can say that what you describe is nearly identical to what I did.
A few considerations:
Enqueue 5 to 10 of those 100 ms buffers into the audio device via waveInAddBuffer. IIRC, weird things happen when the waveIn queue goes empty. Then, as the waveInProc callbacks occur, search the completed buffer for the sample with the highest absolute value, as you describe. Then plot that onto your visualization. Requeue the completed buffers.
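The per-buffer peak search can be sketched portably like this (the helper name is mine; it assumes 16-bit signed PCM, which is what a typical waveIn format gives you):

```c
#include <stdint.h>
#include <stdlib.h>
#include <stddef.h>

/* Scan a completed buffer of 16-bit PCM samples for the peak magnitude
   (0..32768). */
int peak_magnitude(const int16_t *samples, size_t count)
{
    int peak = 0;
    for (size_t i = 0; i < count; i++) {
        int mag = abs((int)samples[i]);  /* widen first: |-32768| overflows int16_t */
        if (mag > peak)
            peak = mag;
    }
    return peak;
}
```

You would call this from the waveInProc callback on each completed WAVEHDR buffer before requeueing it.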
It might seem obvious to map the sample value onto your visualization linearly, as follows.
For example, to plot a 16-bit sample
// convert sample magnitude from 0..32768 to 0..N
length = (sample * N) / 32768;
DrawLine(length);
But then when you speak into the microphone, that visualization won't seem as "active" or "vibrant".
But a better approach is to give more weight to the lower-energy samples. An easy way to do this is to replot along a μ-law-style curve (or use a table lookup).
// map through a log curve so quiet samples still register visibly
length = (sample * N) / 32768;
length = (N * log(1.0 + length)) / log(1.0 + N);
length = min(length, N);
DrawLine(length);
You can tweak the above approach to whatever looks good.
Instead of computing the values yourself, you can rely on values from Windows. These are actually the values displayed in your screenshot from the Windows Settings.
See the following sample for the IAudioMeterInformation interface:
https://learn.microsoft.com/en-us/windows/win32/coreaudio/peak-meters.
It is made for the playback but you can use it for capture also.
One remark: if you open an IAudioMeterInformation for a microphone while no application has a stream open on that microphone, the reported level will be 0.
This means that to display your microphone peak meter, you still need to open a microphone stream, as you already do.
Also read the documentation for IAudioMeterInformation; it may not be what you need, since it reports the peak value. It depends on what you want to do with it.

Backpropagation with multiple outputs

I am currently writing a neural network module, and I already understand how everything works with just one output. But with multiple outputs, I was told to sum up the error of each output to calculate the loss function, which doesn't make any sense to me, because then we don't really know which synapse/weight is responsible for the error.
For example we have a NN with the shape 2|1|2 (inputs, hidden, outputs)...
So the neuron in the hidden layer is connected to each output neuron by some weight. If we now propagate forward, receive an error for each output neuron, and sum that error up, then each weight connected to the hidden neuron is adjusted by exactly the same amount. Does someone know whether I am mistaken or have misunderstood something?
I think you misunderstood: the loss function is usually calculated individually for each output for backpropagation. If you want to know the total output error to track your progress, then I suppose you could use the sum of the errors for that.
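A small numeric sketch of why summing the losses still gives each weight its own gradient (hypothetical linear output layer with squared-error loss; the function name is mine):

```c
#include <stddef.h>

/* Hypothetical linear output layer y_i = w_i * h with summed squared-error
   loss L = sum_i 0.5 * (y_i - t_i)^2. Even though the per-output losses are
   summed, the gradient dL/dw_i = (w_i * h - t_i) * h only involves output i's
   own error, so each weight gets its own adjustment. */
void layer_gradients(double h, const double *w, const double *t,
                     double *grad, size_t n_out)
{
    for (size_t i = 0; i < n_out; i++)
        grad[i] = (w[i] * h - t[i]) * h;
}
```

With h = 0.8, weights {0.5, -0.3}, and targets {1.0, 0.0}, the two gradients come out around -0.48 and -0.19: clearly different, because differentiating the sum picks out only the term that depends on each particular weight.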

Flow Control Algorithm selection

I need to select an algorithm to implement the following scenario in C.
Scenario: the sender sends same-sized data packets to the receiver at random intervals. The receiver may process data more slowly than the enqueue rate.
1. How can I avoid sender overrun and process all the data on the receiver side without missing anything from the sender, even when it is fast?
2. Which algorithm would be useful for this case (leaky bucket / circular queue / any other)? Please suggest.
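One common building block for this, assuming fixed-size packets and a single producer and consumer (all names and sizes below are illustrative, not a definitive design), is a circular queue that reports back-pressure when full:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Circular queue of fixed-size packets -- a sketch. Single producer, single
   consumer, no locking. When full, pq_push fails: that is the back-pressure
   point where you block, drop, or signal the sender to slow down. */
#define PKT_SIZE  64
#define QUEUE_LEN 128

typedef struct {
    unsigned char data[QUEUE_LEN][PKT_SIZE];
    size_t head;   /* next free slot */
    size_t tail;   /* oldest packet  */
    size_t count;  /* packets queued */
} pkt_queue;

bool pq_push(pkt_queue *q, const unsigned char *pkt)
{
    if (q->count == QUEUE_LEN)
        return false;                       /* full: back-pressure */
    memcpy(q->data[q->head], pkt, PKT_SIZE);
    q->head = (q->head + 1) % QUEUE_LEN;
    q->count++;
    return true;
}

bool pq_pop(pkt_queue *q, unsigned char *out)
{
    if (q->count == 0)
        return false;                       /* empty */
    memcpy(out, q->data[q->tail], PKT_SIZE);
    q->tail = (q->tail + 1) % QUEUE_LEN;
    q->count--;
    return true;
}
```

The receive path (interrupt or socket reader) calls pq_push as packets arrive; the slower processing loop calls pq_pop at its own pace. A leaky bucket would shape the outgoing rate instead, which only helps if you control the sender.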

Size of data sent via NetLink

I'm trying to get the size of a chunk of data sent via NetLink (Linux Kernel).
I have tried with size = nlh->nlmsg_len - NLMSG_HDRLEN, but that isn't returning the correct size.
What's the correct way to get the message's data size?
Why do you believe nlh->nlmsg_len - NLMSG_HDRLEN is not returning the message size without the header for the message you are looking at? If nlmsg_len contains, for example, the value 16 (which is what NLMSG_HDRLEN should be), then the payload of this message is empty. Your buffer may contain more messages that are waiting to be read.
As a side note, I recommend you use libmnl for Netlink message parsing and construction.
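For reference, walking every message in a multi-message receive buffer with the standard kernel macros might look like this (the helper name is mine; recv_len is the byte count that recv() returned):

```c
#include <stddef.h>
#include <linux/netlink.h>

/* Walk every Netlink message in a receive buffer using the standard macros.
   NLMSG_PAYLOAD(nlh, 0) yields the data size without the header -- the same
   quantity as nlh->nlmsg_len - NLMSG_HDRLEN. Returns the message count and
   sums the payload sizes into *total_payload. */
int count_msgs(void *buf, int recv_len, size_t *total_payload)
{
    int n = 0;
    *total_payload = 0;
    for (struct nlmsghdr *nlh = buf; NLMSG_OK(nlh, recv_len);
         nlh = NLMSG_NEXT(nlh, recv_len)) {
        *total_payload += NLMSG_PAYLOAD(nlh, 0);
        n++;
    }
    return n;
}
```

NLMSG_OK also guards against truncated headers, and NLMSG_NEXT handles the alignment padding between messages, which a bare pointer-plus-nlmsg_len walk would get wrong.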

Only one write() call sends data over socket connection

First stackoverflow question! I've searched...I promise. I haven't found any answers to my predicament. I have...a severely aggravating problem to say the least. To make a very long story short, I am developing the infrastructure for a game where mobile applications (an Android app and an iOS app) communicate with a server using sockets to send data to a database. The back end server script (which I call BES, or Back End Server), is several thousand lines of code long.
Essentially, it has a main method that accepts incoming connections to a socket and forks them off, and a method that reads the input from the socket and determines what to do with it. Most of the code lies in the methods that send and receive data from the database and sends it back to the mobile apps. All of them work fine, except for the newest method I have added.
This method grabs a large amount of data from the database, encodes it as a JSON object, and sends it back to the mobile app, which also decodes it from the JSON object and does what it needs to do. My problem is that this data is very large, and most of the time does not make it across the socket in one data write. Thus, I added one additional data write into the socket that informs the app of the size of the JSON object it is about to receive. However, after this write happens, the next write sends empty data to the mobile app.
The odd thing is, when I remove this first write that sends the size of the JSON object, the actual sending of the JSON object works fine. It's just very unreliable and I have to hope that it sends it all in one read. To add more oddity to the situation, when I make the size of the data that the second write sends a huge number, the iOS app will read it properly, but it will have the data in the middle of an otherwise empty array.
What in the world is going on? Any insight is greatly appreciated! Below is just a basic snippet of my two write commands on the server side.
Keep in mind that EVERYWHERE else in this script the read's and write's work fine, but this is the only place where I do 2 write operations back to back.
The server script is on a Ubuntu server in native C using Berkeley sockets, and the iOS is using a wrapper class called AsyncSocket.
int n;
//outputMessage contains a string that tells the mobile app how long the next message
//(returnData) will be
n = write(sock, outputMessage, sizeof(outputMessage));
if(n < 0)
//error handling is here
//returnData is a JSON encoded string (well, char[] to be exact, this is native-C)
n = write(sock, returnData, sizeof(returnData));
if(n < 0)
//error handling is here
The mobile app makes two read calls, and gets outputMessage just fine, but returnData is always just a bunch of empty data, unless I overwrite sizeof(returnData) to some hugely large number, in which case, the iOS will receive the data in the middle of an otherwise empty data object (NSData object, to be exact). It may also be important to note that the method I use on the iOS side in my AsyncSocket class reads data up to the length that it receives from the first write call. So if I tell it to read, say 10000 bytes, it will create an NSData object of that size and use it as the buffer when reading from the socket.
Any help is greatly, GREATLY appreciated. Thanks in advance everyone!
It's just very unreliable and I have to hope that it sends it all in one read.
The key to successful programming with TCP is that there is no concept of a TCP "packet" or "block" of data at the application level. The application only sees a stream of bytes, with no boundaries. When you call write() on the sending end with some data, the TCP layer may choose to slice and dice your data in any way it sees fit, including coalescing multiple blocks together.
You might write 10 bytes two times and read 5 then 15 bytes. Or maybe your receiver will see 20 bytes all at once. What you cannot do is just "hope" that some chunks of bytes you send will arrive at the other end in the same chunks.
What might be happening in your specific situation is that the two back-to-back writes are being coalesced into one, and your reading logic simply can't handle that.
Thanks for all of the feedback! I incorporated everyone's answers into the solution. I created a method that writes an iovec struct to the socket using writev instead of write. The wrapper class I'm using on the iOS side, AsyncSocket (which is fantastic, by the way; check out the AsyncSocket Google Code repo), handles receiving an iovec just fine and, apparently behind the scenes, does not require any additional effort on my part to read all of the data correctly. The AsyncSocket class now does not call my delegate method didReadData until it has received all of the data specified in the iovec struct.
Again, thank you all! This helped greatly. Literally overnight I got responses for an issue I've been up against for a week now. I look forward to becoming more involved in the stackoverflow community!
Sample code for solution:
//returnData is the JSON encoded string I am returning
//sock is my predefined socket descriptor
struct iovec iov[1];
int iovcnt;
ssize_t n;
iov[0].iov_base = returnData;
iov[0].iov_len = strlen(returnData);
iovcnt = sizeof(iov) / sizeof(struct iovec);
n = writev(sock, iov, iovcnt);
if(n < 0)
    //error handling here
while(n < (ssize_t)iov[0].iov_len)
    //rebuild iovec struct with remaining data from returnData (from position n to the end of the string)
You should really define a function write_complete that writes a buffer to a socket completely. Check the return value of write: it might also be a positive number that is smaller than the size of the buffer, in which case you need to write the remaining part of the buffer again.
Oh, and using sizeof is error-prone, too: sizeof(outputMessage) yields the size of the array or pointer, not the length of the string. In the write_complete function you should therefore print the given size and compare it to what you expect.
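A minimal write_complete along those lines (a sketch, assuming a blocking descriptor) could look like this:

```c
#include <unistd.h>
#include <errno.h>
#include <stddef.h>

/* Write all `len` bytes to fd, looping over short writes.
   Returns 0 on success, -1 on error. */
int write_complete(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            return -1;
        }
        p   += n;                  /* advance past what was accepted */
        len -= (size_t)n;
    }
    return 0;
}
```

Used with strlen(returnData) as the length, this avoids both the sizeof pitfall and the short-write problem in one place.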
Ideally, on the server, you want to write the header (the size) and the data atomically; I'd do that using the scatter/gather call writev(). Also, if there is any chance that multiple threads can write to the same socket concurrently, you may want to hold a mutex around the write call.
writev() will also write all the data before returning (if you are using blocking I/O).
On the client you may need a state machine that reads the length of the buffer and then sits in a loop reading until all the data has been received, as large buffers will be fragmented and arrive in various-sized blocks.
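The client-side read loop described here can be sketched as follows (the helper name read_exact is mine; it assumes a blocking socket — you would call it once for the fixed-size length prefix and again for the payload):

```c
#include <unistd.h>
#include <errno.h>
#include <stddef.h>

/* Read exactly `len` bytes from fd, looping over short reads.
   Returns 0 on success, -1 on error or premature EOF. */
int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            return -1;
        }
        if (n == 0)
            return -1;             /* peer closed before sending everything */
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}
```

Because TCP is a byte stream, the payload may arrive split across many reads or glued onto the length prefix; looping until the expected count is reached is what makes the framing reliable.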
