Transferring an int as two bytes on the Arduino - C

I'm sampling at high frequencies and need to transmit the 10-bit ADC value via UART out of my Arduino.
By default, Serial.print sends one byte per character. So if an analogRead yields the value 612, it sends "6" as one byte, "1" as one byte, "2" as one byte, and the line terminator as the last byte.
Given that my sampling rate is limited by this communication, it's important that it be as efficient and uniform as possible, so I'm trying to force it to use exactly two bytes to transmit the data, no matter what the value actually is (by default it would use three bytes to transmit "23", four bytes to transmit "883", and five bytes to transmit "1001").
Currently, I'm doing something like this, which is the best way I've found:
int a = 600; //Just an example
char high = (char)highByte(a);
char low = (char)lowByte(a);
Serial.print(high);
Serial.println(low);
Currently this uses three bytes (including \n) regardless of the value. Is there an even more efficient method?
Just printing it with something like
Serial.print(foo, BIN);
doesn't work at all. It actually sends one byte for every single bit of the binary representation of foo, which is quite silly.

I might be missing something, but why don't you use Serial.write(byte)?
You can use a method like this one:
void writeIntAsBinary(int value){
    Serial.write(lowByte(value));
    Serial.write(highByte(value));
}
What are you planning to do with the data on your computer?

If you're sending binary data over a serial line, you really shouldn't confuse everything by using a text-style linefeed separator.
On the other hand, it's kind of hard (for the other end) to know which byte is which, without some kind of synchronization help.
But since you only have 10 bits of payload while sending 16 bits of data, you can "do a UTF-8" and use a spare bit to signal "start of value". This requires using only 7 bits of each 8-bit byte for your payload, but that's fine, since 7 + 7 = 14, which is more than enough for 10. We can let the 8th bit mean "this is the high byte of a new pair of bytes":
const int a = 600; // example 10-bit ADC value
const unsigned char high = ((a >> 7) & 0x7f) | 0x80; // upper 7 bits, flag bit set
const unsigned char low = (a & 0x7f);                // lower 7 bits, flag bit clear
Serial.write(high);
Serial.write(low);
In the above, the two bytes transmitted will be:
high == ((600 >> 7) & 0x7f) | 0x80 == 4 | 0x80 == 0x84
low == (600 & 0x7f) == 88 == 0x58
The receiver will have to do the above in reverse:
const int value = ((high & 0x7f) << 7) | low;
This should work, and uses the most-significant bit of the high byte, which is sent first, to signify that that is indeed the high byte. The low byte will never have the MSB set.

Related

Bitshifting vs array indexing, which is more appropriate for USART interfaces on 32-bit MCUs

I have an embedded project with a USART HAL. This USART can only transmit or receive 8 or 16 bits at a time (depending on the usart register I chose, i.e. single/double in/out). Since it's a 32-bit MCU, I figured I might as well pass around 32-bit fields, as (from what I have been led to understand) this is a more efficient use of bits for the MPU. The same would apply for a 64-bit MPU, i.e. pass around 64-bit integers. Perhaps that is misguided advice, or advice taken out of context.
With that in mind, I have packed the 8 bits into a 32-bit field via bit-shifting. I do this for both tx and rx on the usart.
The code for the 8-bit only register is as follows (the 16-bit register just has half the amount of rounds for bit-shifting):
int zg_usartTxdataWrite(USART_data* MPI_buffer,
                        USART_frameconf* MPI_config,
                        USART_error* MPI_error)
{
    MPI_error = NULL;

    if(MPI_config != NULL){
        zg_usartFrameConfWrite(MPI_config);
    }

    HPI_usart_data.txdata = MPI_buffer->txdata;

    for (int i = 0; i < USART_TXDATA_LOOP; i++){
        if((USART_STATUS_TXC & usart->STATUS) > 0){
            usart->TXDATAX = (i == 0 ? (HPI_usart_data.txdata & USART_TXDATA_DATABITS)
                                     : (HPI_usart_data.txdata >> SINGLE_BYTE_SHIFT) & USART_TXDATA_DATABITS);
        }
        usart->IFC |= USART_STATUS_TXC;
    }

    return 0;
}
EDIT: RE-ENTERING LOGIC OF ABOVE CODE WITH ADDED DEFINES FOR CLARITY OF TERNARY-OPERATOR IMPLICIT PROMOTION PROBLEM DISCUSSED IN COMMENTS SECTION
(the HPI_usart and USART_data structs are the same just different levels, I have since removed the HPI_usart layer, but for the sake of this example I will leave it in)
#define USART_TXDATA_LOOP 4
#define SINGLE_BYTE_SHIFT 8

typedef struct HPI_USART_DATA{
    ...
    uint32_t txdata;
    ...
} HPI_usart;

HPI_usart HPI_usart_data = {'\0'};
const uint8_t USART_TXDATA_DATABITS = 0xFF;

int zg_usartTxdataWrite(USART_data* MPI_buffer,
                        USART_frameconf* MPI_config,
                        USART_error* MPI_error)
{
    MPI_error = NULL;

    if(MPI_config != NULL){
        zg_usartFrameConfWrite(MPI_config);
    }

    HPI_usart_data.txdata = MPI_buffer->txdata;

    for (int i = 0; i < USART_TXDATA_LOOP; i++){
        if((USART_STATUS_TXC & usart->STATUS) > 0){
            usart->TXDATAX = (i == 0 ? (HPI_usart_data.txdata & USART_TXDATA_DATABITS)
                                     : (HPI_usart_data.txdata >> SINGLE_BYTE_SHIFT) & USART_TXDATA_DATABITS);
        }
        usart->IFC |= USART_STATUS_TXC;
    }

    return 0;
}
However, I now realize that this is potentially causing more issues than it solves, because I am essentially encoding these bits internally only to have to decode them almost immediately when they are passed to/from different data layers. I feel like it's a clever and sexy solution, but I'm now trying to solve a problem that I shouldn't have created in the first place: how to extract variable bit fields when there is an offset, i.e. in GPS NMEA sentences where the first 8 bits might be one relevant field and then the rest are 32-bit fields. So it ends up being like this:
32-bit array member 0:
| bits 24-31  | bits 16-23                 | bits 8-15                  | bits 0-7                  |
| 8-bit value | 32-bit value A, bits 24-31 | 32-bit value A, bits 16-23 | 32-bit value A, bits 8-15 |

32-bit array member 1:
| bits 24-31               | bits 16-23                 | bits 8-15                  | bits 0-7                  |
| 32-bit value A, bits 0-7 | 32-bit value B, bits 24-31 | 32-bit value B, bits 16-23 | 32-bit value B, bits 8-15 |

32-bit array member 2:
| bits 24-31               | bits 16-23 | bits 8-15 | bits 0-7 |
| 32-bit value B, bits 0-7 | etc...     | ....      | ....     |
The above example requires manual decoding, which is fine I guess, but it's different for every NMEA sentence and just feels more manual than programmatic.
My question is this: bitshifting vs array indexing, which is more appropriate?
Should I just have assigned each incoming/outgoing value to a 32-bit array member and then just index that way? I feel like that is the solution since it would not only make it easier to traverse the data on other layers, but I would be able to eliminate all this bit-shifting logic and then the only difference between an rx or tx function would be the direction the data is going.
It does mean a small rewrite of the interface and the resulting GPS module layer, but that feels like less work and also a cheap lesson early on in my project.
Also any thoughts and general experience on this would be great.
Since it's a 32-bit MCU, I figured I might as well pass around 32-bit fields
That's not really the programmer's call to make. Put the 8 or 16 bit variable in a struct. Let the compiler add padding if needed. Alternatively you can use uint_fast8_t and uint_fast16_t.
My question is this: bitshifting vs array indexing, which is more appropriate?
Array indexing is for accessing arrays. If you have an array, use it. If not, then don't.
While it is possible to chew through larger chunks of data byte by byte, such code must be written much more carefully, to prevent running into various subtle type conversion and pointer aliasing bugs.
In general, bit shifting is preferred when accessing data up to the CPU's word size, 32 bits in this case. It is fast and also portable, so that you don't have to take endianness into account. It is the preferred method of serialization/de-serialization of integers.

logic operators & bit separation calculation in C (PIC programming)

I am programming a PIC18F94K20 to work in conjunction with a MCP7941X I2C RTCC chip and a 24AA128 I2C CMOS serial EEPROM device. Currently I have code which successfully initialises the seconds/days/etc. values of the RTCC and starts the timer, toggling a LED upon the turnover of every second.
I am attempting to augment the code to read back the correct data for these values, however I am running into trouble when I try to account for the various 'extra' bits in the values. The device's register memory map (in its datasheet) may help elucidate my problem somewhat.
Taking, for example, the hours column, or the 02h address. Bit 6 is set to 1 to toggle 12-hour time, adding 01000000 to the hours byte. I can read back the entire contents of the byte at this address, but I want to employ an if statement to detect whether 12- or 24-hour time is in place, and adjust accordingly. I'm not worried about the 10-hours bits, as I can calculate those easily enough with a BCD conversion loop (I think).
I earlier used the bitwise OR operator in C to augment the original hours data to 24. I initialised the hours in this particular case to 0x11, and set the 12 hour control bit which is 0x64. When setting the time:
WriteI2C(0x11|0x64);
which as you can see uses the bitwise OR.
When reading back the hours, how can I incorporate operators into my code to separate the superfluous bits from the actual time bits? I tried doing something like this:
current_seconds = ReadI2C();
current_seconds = ST & current_seconds;
but that completely ruins everything. It compiles, but the device gets 'stuck' on this sequence.
How do I separate the ST / AMPM / VBATEN bits from the actual data I need, and what would be a good method of implementing the logic for the various circumstances they present (e.g. reading back 12-hour time if bit 6 = 1 and 24-hour time if bit 6 = 0, and so on)?
I'm a bit of a C novice and this is my first foray into electronics so I really appreciate any help. Thanks.
To remove (zero) a bit, you can AND the value with a mask having all other bits set, i.e., the complement of the bits that you wish to zero, e.g.:
value_without_bit_6 = value & ~(1<<6);
To isolate a bit within an integer, you can AND the value with a mask having only those bits set. For checking flags this is all you need to do, e.g.,
if (value & (1<<6)) {
// bit 6 is set
} else {
// bit 6 is not set
}
To read the value of a small integer offset within a larger one, first isolate the bits, and then shift them right by the index of the lowest bit (to get the least significant bit into correct position), e.g.:
value_in_bits_4_and_5 = (value & ((1<<4)|(1<<5))) >> 4;
For more readable code, you should use constants or #defined macros to represent the various bit masks you need, e.g.:
#define BIT_VBAT_EN (1<<3)
if (value & BIT_VBAT_EN) {
// VBAT is enabled
}
Another way to do this is to use bitfields to define the organisation of bits, e.g.:
typedef union {
    struct {
        unsigned ones:4;
        unsigned tens:3;
        unsigned st:1;
    } seconds;
    uint8_t byte;
} seconds_register_t;

seconds_register_t sr;
sr.byte = READ_ADDRESS(0x00);
unsigned int seconds = sr.seconds.ones + sr.seconds.tens * 10;
A potential problem with bitfields is that the code generated by the compiler may be unpredictably large or inefficient, which is sometimes a concern with microcontrollers, but obviously it's nicer to read and write. (Another problem often cited is that the organisation of bit fields, e.g., endianness, is largely unspecified by the C standard and thus not guaranteed portable across compilers and platforms. However, it is my opinion that low-level development for microcontrollers tends to be inherently non-portable, so if you find the right bit layout I wouldn't consider using bitfields “wrong”, especially for hobbyist projects.)
Yet you can accomplish similarly readable syntax with macros; it's just the macro itself that is less readable:
#define GET_SECONDS(r) ( ((r) & 0x0F) + (((r) & 0x70) >> 4) * 10 )
uint8_t sr = READ_ADDRESS(0x00);
unsigned int seconds = GET_SECONDS(sr);
Regarding the bit masking itself, you are going to want to make a model of that memory map in your microcontroller. The simplest, crudest way to do that is to #define a number of bit masks, like this:
#define REG1_ST 0x80u
#define REG1_10_SECONDS 0x70u
#define REG1_SECONDS 0x0Fu
#define REG2_10_MINUTES 0x70u
...
And then when reading each byte, mask out the data you are interested in. For example:
bool st = (data & REG1_ST) != 0;
uint8_t ten_seconds = (data & REG1_10_SECONDS) >> 4;
uint8_t seconds = (data & REG1_SECONDS);
The important part is to minimize the amount of "magic numbers" in the source code.
Writing data:
reg1 = 0;
reg1 |= st ? REG1_ST : 0;
reg1 |= (ten_seconds << 4) & REG1_10_SECONDS;
reg1 |= seconds & REG1_SECONDS;
Please note that I left out the I2C communication of this.

Formatting sprintf in C

The question I have with the code below is where I have used sprintf: I want it to insert a formatted int, because the client then picks up the data and pulls fields out according to their position in the char array. So the client will pick the delay up from [0] and [1], whereas another variable may be taken from [2] and [3], sent from another bit of code. What is the way to format it like in printf, but saved into a char[]?
int sock = *(int*)data->sock;
int i, startDelay = 0;
char buffer[SEND_MESSAGE_LENGTH];

puts("Run Machine Called");

for(startDelay = 11; startDelay >= 0; startDelay--)
{
    printf("Start Delay:%i\n", startDelay);
    sprintf(buffer, "%2i", startDelay);
    printf("Send Data - %2i - Start Delay\n", *buffer - '0');
    //write_sock(sock, buffer);
    sleep(1);
}
I'm not certain, but I think you're talking about a 2-byte (16-bit) integer value. If so, then sprintf is not the right tool for the job. Instead, you should take the integer and mask and shift to extract the 16 bits:
buffer[0] = startDelay & 0xFF; // low byte
buffer[1] = (startDelay >> 8) & 0xFF; // high byte
Of course, since your values are smaller than 256, the high byte here will always be zero, so it simplifies to:
buffer[0] = startDelay & 0xFF;
buffer[1] = 0;
It's not clear to me what the byte order should be, so you may have to reverse these and put the high byte in buffer[0] and the low byte in buffer[1].

converting little endian hex to big endian decimal in C

I am trying to understand and implement a simple file system based on FAT12. I am currently looking at the following snippet of code and it's driving me crazy:
int getTotalSize(char * mmap)
{
    int *tmp1 = malloc(sizeof(int));
    int *tmp2 = malloc(sizeof(int));
    int retVal;

    *tmp1 = mmap[19];
    *tmp2 = mmap[20];
    printf("%d and %d read\n", *tmp1, *tmp2);
    retVal = *tmp1 + ((*tmp2) << 8);
    free(tmp1);
    free(tmp2);
    return retVal;
};
From what I've read so far, the FAT12 format stores integers in little-endian format, and the code above is getting the size of the file system, which is stored in the 19th and 20th bytes of the boot sector.
However, I don't understand why retVal = *tmp1+((*tmp2)<<8); works. Is the bitwise <<8 converting the second byte to decimal, or to big-endian format?
Why is it only applied to the second byte and not the first one?
the bytes in question are [in little endian format] :
40 0B
and I tried converting them manually by switching the order first to
0B 40
and then converting from hex to decimal, and I got the right output. I just don't understand how adding the first byte to the bit-shifted second byte achieves the same thing?
Thanks
The use of malloc() here is seriously facepalm-inducing. Utterly unnecessary, and a serious "code smell" (makes me doubt the overall quality of the code). Also, mmap clearly should be unsigned char (or, even better, uint8_t).
That said, the code you're asking about is pretty straight-forward.
Given two byte-sized values a and b, there are two ways of combining them into a 16-bit value (which is what the code is doing): you can either consider a to be the least-significant byte, or b.
Using boxes, the 16-bit value can look either like this:
+---+---+
| a | b |
+---+---+
or like this, if you instead consider b to be the most significant byte:
+---+---+
| b | a |
+---+---+
The way to combine the lsb and the msb into a 16-bit value is simply:
result = (msb * 256) + lsb;
UPDATE: The 256 comes from the fact that that's the "worth" of each successively more significant byte in a multibyte number. Compare it to the role of 10 in a decimal number (to combine two single-digit decimal numbers c and d you would use result = 10 * c + d).
Consider msb = 0x01 and lsb = 0x00, then the above would be:
result = 0x1 * 256 + 0 = 256 = 0x0100
You can see that the msb byte ended up in the upper part of the 16-bit value, just as expected.
Your code is using << 8 to do bitwise shifting to the left, which is the same as multiplying by 2^8, i.e. 256.
Note that result above is a value, i.e. not a byte buffer in memory, so its endianness doesn't matter.
I see no problem combining individual digits or bytes into larger integers.
Let's do decimal with 2 digits: 1 (least significant) and 2 (most significant):
1 + 2 * 10 = 21 (10 is the system base)
Let's now do base-256 with 2 digits: 0x40 (least significant) and 0x0B (most significant):
0x40 + 0x0B * 0x100 = 0x0B40 (0x100=256 is the system base)
The problem, however, is likely lying somewhere else, in how 12-bit integers are stored in FAT12.
A 12-bit integer occupies 1.5 8-bit bytes. And in 3 bytes you have 2 12-bit integers.
Suppose, you have 0x12, 0x34, 0x56 as those 3 bytes.
In order to extract the first integer you only need take the first byte (0x12) and the 4 least significant bits of the second (0x04) and combine them like this:
0x12 + ((0x34 & 0x0F) << 8) == 0x412
In order to extract the second integer you need to take the 4 most significant bits of the second byte (0x03) and the third byte (0x56) and combine them like this:
(0x56 << 4) + (0x34 >> 4) == 0x563
If you read the official Microsoft's document on FAT (look up fatgen103 online), you'll find all the FAT relevant formulas/pseudo code.
The << operator is the left shift operator. It takes the value to the left of the operator, and shift it by the number used on the right side of the operator.
So in your case, it shifts the value of *tmp2 eight bits to the left, and combines it with the value of *tmp1 to generate a 16 bit value from two eight bit values.
For example, let's say you have the integer 1. This is, in 16-bit binary, 0000000000000001. If you shift it left by eight bits, you end up with the binary value 0000000100000000, i.e. 256 in decimal.
The presentation (i.e. binary, decimal or hexadecimal) has nothing to do with it. All integers are stored the same way on the computer.

Large bit arrays in C

Our OS professor mentioned that for assigning a process id to a new process, the kernel incrementally searches for the first zero bit in an array of size equal to the maximum number of processes (~32,768 by default), where an allocated process id has a 1 stored in it.
As far as I know, there is no bit data type in C. Obviously, there's something I'm missing here.
Is there any such special construct from which we can build up a bit array? How is this done exactly?
More importantly, what are the operations that can be performed on such an array?
Bit arrays are simply byte arrays where you use bitwise operators to read the individual bits.
Suppose you have a 1-byte char variable. This contains 8 bits. You can test if the lowest bit is true by performing a bitwise AND operation with the value 1, e.g.
char a = /*something*/;
if (a & 1) {
/* lowest bit is true */
}
Notice that this is a single ampersand. It is completely different from the logical AND operator &&. This works because a & 1 will "mask out" all bits except the first, and so a & 1 will be nonzero if and only if the lowest bit of a is 1. Similarly, you can check if the second lowest bit is true by ANDing it with 2, and the third by ANDing with 4, etc, for continuing powers of two.
So a 32,768-element bit array would be represented as a 4096-element byte array, where the first byte holds bits 0-7, the second byte holds bits 8-15, etc. To perform the check, the code would select the byte from the array containing the bit that it wanted to check, and then use a bitwise operation to read the bit value from the byte.
As far as what the operations are, like any other data type, you can read values and write values. I explained how to read values above, and I'll explain how to write values below; if you're really interested in understanding bitwise operations in depth, it's worth working through a full tutorial on them.
How you write a bit depends on if you want to write a 0 or a 1. To write a 1-bit into a byte a, you perform the opposite of an AND operation: an OR operation, e.g.
char a = /*something*/;
a = a | 1; /* or a |= 1 */
After this, the lowest bit of a will be set to 1 whether it was set before or not. Again, you could write this into the second position by replacing 1 with 2, or into the third with 4, and so on for powers of two.
Finally, to write a zero bit, you AND with the inverse of the position you want to write to, e.g.
char a = /*something*/;
a = a & ~1; /* or a &= ~1 */
Now, the lowest bit of a is set to 0, regardless of its previous value. This works because ~1 will have all bits other than the lowest set to 1, and the lowest set to zero. This "masks out" the lowest bit to zero, and leaves the remaining bits of a alone.
A struct can assign members bit-sizes, but that's the extent of a "bit-type" in 'C'.
struct int_sized_struct {
    int foo:4;
    int bar:4;
    int baz:24;
};
The rest of it is done with bitwise operations. For example, searching that PID bitmap can be done with:
extern uint32_t *process_bitmap;
uint32_t *p = process_bitmap;
uint32_t bit_offset = 0;
uint32_t bit_test;

/* Scan the pid bitmap 32 entries per cycle. */
while ((*p & 0xffffffff) == 0xffffffff) {
    p++;
}

/* Scan the 32-bit block that has an open slot for the free PID. */
bit_test = 0x80000000;
while ((*p & bit_test) == bit_test) {
    bit_test >>= 1;
    bit_offset++;
}

pid = (p - process_bitmap)*32 + bit_offset;  /* 32 PIDs per uint32_t word */
This is roughly 32x faster than a simple for loop scanning an array with one byte per PID. (Actually, greater than 32x, since more of the bitmap will stay in the CPU cache.)
see http://graphics.stanford.edu/~seander/bithacks.html
There is no bit type in C, but bit manipulation is fairly straightforward. Some processors have bit-specific instructions which the code below would optimize to nicely; even without them it should be pretty fast. It may or may not be faster to use an array of 32-bit words instead of bytes; inlining instead of functions would also help performance.
If you have the memory to burn, just use a whole byte to store one bit (or a whole 32-bit number, etc.) to greatly improve performance at the cost of memory used.
unsigned char data[SIZE];

unsigned char get_bit ( unsigned int offset )
{
    //TODO: limit check offset
    if(data[offset>>3]&(1<<(offset&7))) return(1);
    else return(0);
}

void set_bit ( unsigned int offset, unsigned char bit )
{
    //TODO: limit check offset
    if(bit) data[offset>>3]|=1<<(offset&7);
    else data[offset>>3]&=~(1<<(offset&7));
}
