I'm new to C and I created some code that doesn't work...
I get a warning in initLetterLib(): integer conversion resulted in truncation.
I try to memcpy my libraryLetters into my outputLED, but it doesn't work.
I just get 0x00 into my outputLED.
I tried copying something else into outputLED - that worked fine.
But I don't get why there is a problem with my libraryLetters...
#define LETTER_WIDTH 6
typedef unsigned char letter[LETTER_WIDTH];
letter libraryLetters[128];
void initLetterLib() {
    *libraryLetters[0x20] = 0x000000000000; // Blank
    *libraryLetters['A'] = 0xFE909090FE00;
    *libraryLetters['H'] = 0xFE101010FE00;
    *libraryLetters['L'] = 0xFE0202020200;
    *libraryLetters['O'] = 0xFE828282FE00;
    *libraryLetters['U'] = 0xFE020202FE00;
    *libraryLetters['R'] = 0xFE9894946200;
    *libraryLetters['Z'] = 0x868A92A2C200;
    *libraryLetters['I'] = 0x0000FE000000;
    *libraryLetters['F'] = 0xFE9090808000;
}
// takes a String and generates the outputsequence for LEDs
unsigned char * stringToLEDText(char* textString)
{
    static unsigned char outputLED[LED_STEPS];
    unsigned char i = 0; // index

    // check length of string text
    unsigned short length = strlen(textString);

    // if more than 10 letters are used return error
    if (length > LETTERS_LED_OUTPUT)
    {
        printf("Error: Too much letters. Just 10 Letters are allowed\n");
        return 0;
    }

    // through complete string
    for (i = 0; i < length; i++)
    {
        memcpy(&outputLED[i * LETTER_WIDTH], &(libraryLetters[textString[i]]),
               LETTER_WIDTH);
    }

    // fills rest with 0
    for (i = length * LETTER_WIDTH; i < LED_STEPS; i++)
    {
        outputLED[i] = 0x00;
    }
    return outputLED;
}
Any ideas?
Thanks
Fabian
Your code doesn't make much sense. First of all, hiding an array behind a typedef is not a good idea. Get rid of that.
Using the default "primitive data types" of C is not a good idea either, since these are non-portable and of varied length. Instead use the stdint.h types. This is pretty much mandatory practice in embedded systems programming.
As for the actual problem, you can't assign an array like this
*libraryLetters[0x20] = 0x000000000000;
This doesn't make any sense. You are telling the compiler to store a 64 bit integer in the first byte of your 6 byte array. What you probably meant to do is this:
const uint8_t letters [128][LETTER_WIDTH] =
{
    [0x20] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
    ['A']  = {0xFE, 0x90, 0x90, 0x90, 0xFE, 0x00},
    ...
};
Assuming this is a symbol table for some display. If so it should be const and allocated in flash.
*libraryLetters[x] is of type unsigned char and you are trying to assign a number to it outside the range of an unsigned char.
It looks like you are trying to assign a sequence of 6 bytes to *libraryLetters[x]. One way to do that is using memcpy, for example:
memcpy(libraryLetters['A'], "\xFE\x90\x90\x90\xFE\x00", 6);
You define your letter type as unsigned char, which holds only a single byte, yet you attempt to store a 6-byte integer into it. The assignment truncates, so you only keep the lowest byte, which is zero in all your letters. This per-byte approach makes sense if you want to support an arbitrary letter width; otherwise it would be much easier to use a 64-bit type, as suggested in the comments.
Instead, you should add the letters as
libraryLetters['H'][0] = 0xFE;
libraryLetters['H'][1] = 0x10;
...
Or you could use memcpy(libraryLetters['A'], letter_number, LETTER_WIDTH) as suggested by Ian Abbott.
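Putting those suggestions together, here is a minimal sketch (my consolidation, not code from any single answer; the name letterTable is mine) of a const letter table with designated initializers plus a small helper that copies one letter pattern into an output buffer:

#include <stdint.h>
#include <string.h>

#define LETTER_WIDTH 6

/* Sketch only: a const lookup table so it can live in flash. Just a few
   letters are filled in; all other entries default to zero (blank). */
static const uint8_t letterTable[128][LETTER_WIDTH] = {
    [' '] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
    ['A'] = {0xFE, 0x90, 0x90, 0x90, 0xFE, 0x00},
    ['H'] = {0xFE, 0x10, 0x10, 0x10, 0xFE, 0x00},
    ['L'] = {0xFE, 0x02, 0x02, 0x02, 0x02, 0x00},
};

/* Copy the 6-byte pattern for one character into a destination buffer. */
static void copyLetter(uint8_t *dst, char c)
{
    memcpy(dst, letterTable[(unsigned char)c], LETTER_WIDTH);
}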
Related
I am trying to make an SSD1306 library for learning STM32, for which I've made an *init[] array. All of the defines and macros return uint8_t. The idea behind this was that I'll detect where NULL is in the memory and stop incrementing (the logical bug would be that this would fail if there were exactly 4, 8, 12, 16... array elements in the rows, which is never the case). In this, I assumed that all the memory addresses allocated to the array would be initialized to 0x00 (obviously not), and I ran into a problem where a single random byte would corrupt everything. So I was looking for a way to make all the memory locations 0x00. Feel free to suggest any new ways I could do the same thing.
I could declare *init[], use calloc, and then initialize every sub-array individually, but that would make adding new init parameters that much more cumbersome. If all else fails, I'll use that.
uint8_t *init[] = {
    (uint8_t[]){MUX_RATIO_ADDR, SET_MUX_RATIO(0X1F)},
    (uint8_t[]){DISPLAY_OFFSET_ADDR, SET_DISPLAY_OFFSET(0X00)},
    (uint8_t[]){DISPLAY_START_LINE(0)},
    (uint8_t[]){SEG_REMAP(0x00)},
    (uint8_t[]){SCAN_DIRECTION(0X00)},
    (uint8_t[]){COM_CONFIG_ADDR, COM_CONFIG(0X01)},
    (uint8_t[]){CONTRAST_CONTROL_ADDR, SET_CONTRAST(0X7F)},
    (uint8_t[]){VER_ADDR_MODE},
    (uint8_t[]){0X21, 0X3F, 0X7F},
    (uint8_t[]){0X22, 0X4, 0X7},
    (uint8_t[]){DISPLAY_ON(0)},
    (uint8_t[]){INVERSE_MODE(0)},
    (uint8_t[]){DISPLAY_CLK_DIVIDE_ADDR, DISPLAY_CLK_DIVIDE(1,8)},
    (uint8_t[]){0xdb, 0x40},
    (uint8_t[]){CHARGE_PUMP_ADDR, CHARG_PUMP_ON},
    (uint8_t[]){NORMAL_MODE(1)}
};

for (int i = 0; i < sizeof(init)/sizeof(init[0]); i++) {
    for (int j = 0; init[i][j] != '\0'; j++) {
        HAL_I2C_Mem_Write(&hi2c1, DEV_ADDR, COMMAND_MODE, 1, &init[i][j], 1, 50);
    }
}
HAL function declaration for reference:
HAL_StatusTypeDef HAL_I2C_Mem_Write(I2C_HandleTypeDef*, uint16_t, uint16_t, uint16_t, uint8_t*, uint16_t, uint32_t);
I'll detect where NULL is in the memory
What NULL? NULL is a macro for null pointer constants. Most likely with the value 0, but there are no null pointers anywhere in your code. No null terminated strings either.
In this, I assumed that all the memory addresses allocated to the array would be initialized to 0x00 (obviously not)
In the code you posted, everything is data and initialized to the values you have provided. You have intentionally made it as compact as possible. For example (uint8_t[]){0xdb, 0x40} means "allocate 2 bytes and fill them with this data". There is no null termination like in strings, because these are 8 bit integers, not strings, plus you didn't allocate any room for one either. You only get implicit null termination when you initialize with string literals: "like this".
Anyway, the whole code is nonsensical for a microcontroller program. This is a constant table; it should be const all over so that it ends up in flash, not in RAM. As written it is a huge waste of RAM. Compound literals (uint8_t[]) { ... } also get allocated in RAM. Don't use them in an embedded system unless you know what they actually do.
The best fix to salvage this code is probably to do this:
static const uint8_t init[][4] =
{
    {MUX_RATIO_ADDR, SET_MUX_RATIO(0X1F)},
    {DISPLAY_OFFSET_ADDR, SET_DISPLAY_OFFSET(0X00)},
    ...
};
Now the code is in flash and there's explicitly at least one zero at the end of each 1D array, because they are at most 3 bytes long. Array initialization is guaranteed to fill up spare bytes that weren't set with zeroes.
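As an illustration only, a send loop over that table could look like the sketch below (hi2c1, DEV_ADDR and COMMAND_MODE are taken from the question; note that a data byte which legitimately expands to 0x00 would end that row early, so check what your macros produce):

for (size_t i = 0; i < sizeof(init) / sizeof(init[0]); i++)
{
    /* each row is 4 bytes; stop at the first zero padding byte */
    for (size_t j = 0; j < 4 && init[i][j] != 0x00; j++)
    {
        /* the cast drops const because the HAL prototype takes uint8_t* */
        HAL_I2C_Mem_Write(&hi2c1, DEV_ADDR, COMMAND_MODE, 1,
                          (uint8_t *)&init[i][j], 1, 50);
    }
}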
I wonder if just a single array would be enough. That way you would also save all the uint8_t pointers; you just need one additional length byte per entry instead.
static const uint8_t ssd1306_init[] = {
    // len, data
    2, MUX_RATIO_ADDR, SET_MUX_RATIO(0X1F),
    2, DISPLAY_OFFSET_ADDR, SET_DISPLAY_OFFSET(0X00),
    1, DISPLAY_START_LINE(0),
    1, SEG_REMAP(0x00),
    1, SCAN_DIRECTION(0X00),
    2, COM_CONFIG_ADDR, COM_CONFIG(0X01),
    2, CONTRAST_CONTROL_ADDR, SET_CONTRAST(0X7F),
    1, VER_ADDR_MODE,
    3, 0X21, 0X3F, 0X7F,
    3, 0X22, 0X4, 0X7,
    1, DISPLAY_ON(0),
    1, INVERSE_MODE(0),
    2, DISPLAY_CLK_DIVIDE_ADDR, DISPLAY_CLK_DIVIDE(1,8),
    2, 0xdb, 0x40,
    2, CHARGE_PUMP_ADDR, CHARG_PUMP_ON,
    1, NORMAL_MODE(1),
};
for (size_t i = 0; i < sizeof(ssd1306_init); )
{
    uint8_t len = ssd1306_init[i++];   // first byte of each entry is its length
    for (uint8_t j = 0; j < len; j++)
    {
        // cast drops const because the HAL prototype takes uint8_t*
        HAL_I2C_Mem_Write(&hi2c1, DEV_ADDR, COMMAND_MODE, 1,
                          (uint8_t *)&ssd1306_init[i + j], 1, 50);
    }
    i += len; // advance i by the number of bytes transmitted
}
On the other hand, when I look at your I2C write function, it looks like you write one byte at a time anyway, no matter whether it is a single command, an address/data pair or a value triple. Why then the split into arrays and the check for '\0' at all? If it works (*), my example would not even need the additional first byte for the length. Just write the commands into one array of bytes and send the array out:
static const uint8_t ssd1306_init[] = {
    MUX_RATIO_ADDR, SET_MUX_RATIO(0X1F),
    DISPLAY_OFFSET_ADDR, SET_DISPLAY_OFFSET(0X00),
    ...
};

for (size_t i = 0; i < sizeof(ssd1306_init); i++)
{
    // cast drops const because the HAL prototype takes uint8_t*
    HAL_I2C_Mem_Write(&hi2c1, DEV_ADDR, COMMAND_MODE, 1,
                      (uint8_t *)&ssd1306_init[i], 1, 50);
}
(*) This "if it works" is about, if the device accepts it this way. I also wonder, if the HAL function would accept a certain number of bytes and does a burst transfer or such.
In that case, you might even use a DMA driven transfer.
But for this, you should consult the device and the HAL documentation more closely.
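If the controller does accept a stream of command bytes, a sketch of a single blocking burst transfer using the HAL prototype quoted in the question could be as simple as this (the 100 ms timeout is an arbitrary choice of mine):

/* Send the whole flat init sequence in one transfer; the cast drops const
   because the HAL prototype takes uint8_t*. */
HAL_I2C_Mem_Write(&hi2c1, DEV_ADDR, COMMAND_MODE, 1,
                  (uint8_t *)ssd1306_init, sizeof(ssd1306_init), 100);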
For an embedded system I am writing code in c to validate a received byte array based on the provided CRC. The system is active in an RTU Modbus.
In my unit test I have the following (correct) byte array:
unsigned char frame[7] = { 0x01, 0x04, 0x02, 0x03, 0xFF, 0x80, 0xF9 };
The last two bytes are the provided CRC code that I want to validate.
My approach is to split the received array into two arrays: the first being n-2 bytes long and the second being 2 bytes long. Then I calculate my own CRC code over the first array, and finally I check whether the second array and my own CRC code are the same.
This is the code I have right now:
bool validateCrc16ModbusFrame(unsigned char frame[])
{
    // A valid response frame consists of at least 6 bytes.
    size_t size = sizeof frame;
    if (size < 6) {
        return false;
    }

    // Split the frame into the 'bytes to check' and the 'provided CRC.'
    int newSize = size - 2;
    unsigned char* bytesToCheck = (unsigned char*)_malloca(newSize + 1); // Not sure about this line.
    char providedCrc[2];
    memcpy(bytesToCheck, frame, newSize * sizeof(int));
    memcpy(providedCrc, &frame[newSize], 2 * sizeof(int));

    // Calculate the CRC with the bytes to check.
    uint16_t calculatedCrc = calculateCrc16Modbus(bytesToCheck, newSize); // This function calculates the correct CRC code.
    _freea(bytesToCheck); // Not sure about this line.

    // The CRC is provided as two uint8_t bytes. Convert the two uint8_t to one uint16_t.
    uint8_t firstByteProvidedCrc = providedCrc[0];
    uint8_t secondByteProvidedCrc = providedCrc[1];
    uint16_t uint16ProvidedCrc = ((uint16_t)firstByteProvidedCrc << 8) | secondByteProvidedCrc;

    // Compare the provided CRC and the calculated CRC.
    bool result = uint16ProvidedCrc == calculatedCrc;
    return result;
}
But when I run the test code it crashes with the message '!! This test has probaly CRASHED !!' When I debug the test code I get an exception with the message 'TestProjectName.exe has triggered a breakpoint.' I think the problem arises from creating and/or freeing the memory for the dynamic byte array.
Anyone know what I'm doing wrong?
Thanks in advance.
Kind regards, Frenk
The problem is that the memcpy calls multiply newSize by sizeof(int), when only newSize + 1 characters were allocated. They should probably be:
memcpy(bytesToCheck, frame, newSize); /* no sizeof(int) */
memcpy(providedCrc, &frame[newSize], 2); /* no sizeof(int) */
Also you don't need to copy or split the array. You can calculate the CRC on the original array including the appended CRC, and the resulting CRC will be zero if the CRC is not post complemented, or some non-zero constant value if the CRC is post complemented.
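As a sketch of that "no split" idea, using calculateCrc16Modbus() from the question (and assuming the caller passes the frame length, since sizeof frame on an array parameter only gives the size of a pointer, that the CRC is not post-complemented, and that the appended CRC is stored in the byte order the algorithm consumes):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch only: running the CRC over the data plus the appended CRC
   yields 0 for a valid frame, so no copying or splitting is needed. */
bool validateCrc16ModbusFrame(unsigned char frame[], size_t size)
{
    if (size < 6) {   /* a valid response frame is at least 6 bytes */
        return false;
    }
    return calculateCrc16Modbus(frame, (int)size) == 0;
}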
I am trying to convert a char array to unsigned short, but it's not working as it should.
char szASCbuf[64] = "123456789123456789123456789";

int StoreToFlash(char szASCbuf[], int StartAddress)
{
    int iCtr;
    int ErrorCode = 0;
    int address = StartAddress;
    unsigned short *us_Buf = (unsigned short*)szASCbuf;

    // Write to flash
    for (iCtr = 0; iCtr < 28; iCtr++)
    {
        ErrorCode = Flash_Write(address++, us_Buf[iCtr]);
        if ((ErrorCode & 0x45) != 0)
        {
            Flash_ClearError();
        }
    }
    return ErrorCode;
}
When I look at the conversion, us_Buf[0] has the value 12594, us_Buf[1] = 13108, and so on, but I only have values up to us_Buf[5]; after that it is 0 for all the remaining addresses.
I have also tried declaring the char array like this:
char szASCbuf[64] = {'1','2','3','4','5','6','7','8','9','1',.....'\0'};
I am passing the parameters to the function like this:
StoreToFlash(szASCbuf, FlashPointer); // FlashPointer = 0
I am using IAR Embedded Workbench for ARM, big endian, 32-bit.
Any suggestions where I am going wrong?
Thanks in advance.
Reinterpreting the char array szASCbuf as an array of short is not safe because of alignment issues. The char type has the least strict alignment requirements and short is usually stricter. This means that szASCbuf might start at address 13, whereas a short should start at either 12 or 14.
This also violates the strict aliasing rule, since szASCbuf and us_Buf are pointing at the same location while having different pointer types. The compiler might perform optimisations which don't take this into account and this could manifest in some very nasty bugs.
The correct way to write this code is to iterate over the original szASCbuf with a step of 2 and then do some bit-twiddling to produce a 2-byte value out of it:
for (size_t i = 0; i < sizeof(szASCbuf); i += 2) {
    uint16_t value = (szASCbuf[i] << 8) | szASCbuf[i + 1];
    ErrorCode = Flash_Write(address++, value);

    if (ErrorCode & 0x45) {
        Flash_ClearError();
    }
}
If you really intended to treat the digit characters as their numeric values, this will do it:
uint16_t value = (szASCbuf[i] - '0') + (szASCbuf[i + 1] - '0');
In case you just want the numeric value of each character in a 2-byte value (1, 2, 3, 4, ...), iterate over the array with a step of 1 and fetch it this way:
uint16_t value = szASCbuf[i] - '0';
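Put back into the OP's function, the loop above might look like the sketch below (the explicit length parameter is my addition, and Flash_Write()/Flash_ClearError() are assumed to behave as in the question):

#include <stdint.h>

/* Sketch only: build each 16-bit word explicitly instead of casting the
   char buffer to unsigned short*. */
int StoreToFlash(const char szASCbuf[], int numBytes, int StartAddress)
{
    int ErrorCode = 0;
    int address = StartAddress;

    for (int i = 0; i + 1 < numBytes; i += 2)
    {
        uint16_t value = ((uint16_t)(unsigned char)szASCbuf[i] << 8)
                       | (uint16_t)(unsigned char)szASCbuf[i + 1];
        ErrorCode = Flash_Write(address++, value);
        if ((ErrorCode & 0x45) != 0)
        {
            Flash_ClearError();
        }
    }
    return ErrorCode;
}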
That's normal!
Your char array is "123456789123456789123456789" or {'1','2','3','4','5','6','7','8','9','1',.....'\0'}
But in ASCII '1' is 0x31, so when you read the array as a short * on a big-endian architecture, it gives:
{ 0x3132, 0x3334, ... }
or, said differently, in decimal:
{ 12594, 13108, ... }
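A tiny self-contained check of that arithmetic (just for illustration, not part of the original answer):

#include <stdio.h>

int main(void)
{
    /* '1' is 0x31 and '2' is 0x32 in ASCII, so the big-endian 16-bit view
       of "12" is 0x3132, i.e. 12594 - exactly what us_Buf[0] shows. */
    unsigned short value = ((unsigned short)'1' << 8) | (unsigned short)'2';
    printf("0x%04X = %u\n", (unsigned)value, (unsigned)value);
    return 0;
}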
I am sending byte arrays between a TCP socket server & client in C. The information that I am sending is a series of integers.
I have it working, but because I am not too conversant with C, I was wondering if anyone could suggest a better solution, or at least to look and tell me that I'm not being too crazy or using outdated code with what I'm doing.
First, I generate a random decimal value, let's say "350". I need to transmit this over the socket connection as a hex byte array. It is decoded back to its decimal value at the other end.
So far, I convert it to hex this way:
unsigned char hexstr[4];
sprintf(hexstr, "%02X", numToConvert); // where numToConvert is a decimal integer value like 350
At this point, I have a string in hexstr that's something like "15E" (again, using the hex value of 350 for an example).
Now, I need to store this in a byte array so that it looks something like: myArray = {0X00, 0X00, 0X01, 0X5E};
Obviously I can't just write: myArray = {0X00, 0X00, 0X01, 0X5E} because the values will be different every time, since a new random number is generated every time.
Currently, I do it like this (pseudocode because the string manipulation part is irrelevant but long):
lastTwoChars = getLastTwoCharsFromString(hexstr); // so lastTwoChars would now contain "5E"
Then (actual code):
sscanf(lastTwoChars, "%0X", &res); // now the variable res contains the byte representation of lastTwoChars, is my understanding
Then finally:
myArray[3] = res;
Then, I take the next two rightmost chars from hexstr (again, using the sample value of "15E", this would be "01" -- if there's only 1 more character, as in this case "1" was the only character left after taking out "5E" from "15E", I add 0s to the left to pad) and convert that the same way using sscanf, then insert into myArray[2]. Repeat for myArray[1] and myArray[0].
Then I send the array using write().
So, after hours of plugging away at it, this all does work... but because I don't use C very much, I have a nagging suspicion that there's something I am missing in all this. Can anyone comment if what I'm doing seems OK, or there's something obvious I'm using improperly or neglecting to use?
#include <stdio.h>
#include <limits.h>

int main(){
    unsigned num = 0x15E; // num = 350
    int i, size = sizeof(unsigned);
    unsigned char myArray[size];

    for (i = size - 1; i >= 0; --i, num >>= CHAR_BIT){
        myArray[i] = num & 0xFF;
    }
    for (i = 0; i < size; ++i){
        printf("0X%02hhX ", myArray[i]); //0X02X
    }
    printf("\n");
    return 0;
}
On the transmit side, convert a 32-bit number to a four byte array with this code
void ConvertValueToArray( uint32_t value, uint8_t array[] )
{
    int i;
    for ( i = 3; i >= 0; i-- )
    {
        array[i] = value & 0xff;
        value >>= 8;
    }
}
On the receive side, convert the byte array back into a number with this code
uint32_t ConvertArrayToValue( uint8_t array[] )
{
    int i;
    uint32_t value = 0;
    for ( i = 0; i < 4; i++ )
    {
        value <<= 8;
        value |= array[i];
    }
    return( value );
}
Note that it's important not to use generic types like int when writing this kind of code, since an int can be different sizes on different systems. The fixed-sized types are defined in <stdint.h>.
Here's a simple test that demonstrates the conversions (without actually sending the byte arrays over the network).
#include <stdio.h>
#include <stdint.h>

int main( void )
{
    uint32_t input, output;
    uint8_t byte_array[4];

    input = 350;
    ConvertValueToArray( input, byte_array );
    output = ConvertArrayToValue( byte_array );
    printf( "%u\n", output );
}
If your array is 4-byte aligned (and even if it isn't on machines that support unaligned access), you can use the htonl function to convert a 32-bit integer from host to network byte order and store the whole thing at once:
#include <arpa/inet.h> // or <netinet/in.h>
...
*(uint32_t*)myArray = htonl(num);
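If you cannot guarantee alignment, a memcpy-based variant avoids the cast entirely; a small sketch (the function name is mine, and myArray is assumed to hold at least 4 bytes):

#include <arpa/inet.h>  /* htonl */
#include <stdint.h>
#include <string.h>

/* Store num into the first four bytes of myArray in network byte order. */
void storeU32BigEndian(unsigned char myArray[4], uint32_t num)
{
    uint32_t be = htonl(num);          /* host to network (big-endian) order */
    memcpy(myArray, &be, sizeof be);   /* no alignment or aliasing concerns */
}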
I need to know how to put bits into a character array.
for example,
I want to put 0001 bits into a character array using C or C++.
Need your help guys. Thanks.
Maybe this more generic code will give you the idea:
void setBitAt( char* buf, int bufByteSize, int bitPosition, bool value )
{
    if (bitPosition < sizeof(char)*8*bufByteSize)
    {
        int byteOffset = bitPosition/8;
        int bitOffset  = bitPosition - byteOffset*8;
        if (value == true)
        {
            buf[byteOffset] |= (1 << bitOffset);
        }
        else
        {
            buf[byteOffset] &= ~(1 << bitOffset);
        }
    }
}

//use it as follows:
char chArray[16] = {0};  // start with all bits cleared
setBitAt(chArray, 16*sizeof(char), 5, true); //to set bit at pos 5 to 1
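For completeness, a matching getter could look like this (my addition, not part of the original answer):

// Read back the bit at bitPosition; returns false for out-of-range positions.
bool getBitAt( const char* buf, int bufByteSize, int bitPosition )
{
    if (bitPosition < bufByteSize * 8)
    {
        int byteOffset = bitPosition / 8;
        int bitOffset  = bitPosition % 8;
        return (buf[byteOffset] >> bitOffset) & 1;
    }
    return false;
}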
Is that really all?
char buf[1];
buf[0] = char(1);
If you want bit masking then it would be something like
enum Enum
{
    MASK_01 = 0x1,
    MASK_02 = 0x2,
    MASK_03 = 0x4,
    MASK_04 = 0x8,
};
char buf[4];
buf[0] = Enum::MASK_01;
buf[1] = Enum::MASK_02;
buf[2] = Enum::MASK_03;
buf[3] = Enum::MASK_04;
If you provide information on what you are actually trying to do, we may be able to help you more.
EDIT: Thanks for the extra information. Does this help:
#include <string.h> // for memset used below

enum Enum
{
    BIT_0000000000000001 = 0x0001,
    BIT_0000000000000010 = 0x0002,
    BIT_0000000000000100 = 0x0004,
    BIT_0000000000001000 = 0x0008,
    BIT_0000000000010000 = 0x0010,
    BIT_0000000000100000 = 0x0020,
    BIT_0000000001000000 = 0x0040,
    BIT_0000000010000000 = 0x0080,
    BIT_0000000100000000 = 0x0100,
    BIT_0000001000000000 = 0x0200,
    BIT_0000010000000000 = 0x0400,
    BIT_0000100000000000 = 0x0800,
    BIT_0001000000000000 = 0x1000,
    BIT_0010000000000000 = 0x2000,
    BIT_0100000000000000 = 0x4000,
    BIT_1000000000000000 = 0x8000,
};
int main( int argc, char* argv[] )
{
    char someArray[8];
    memset( someArray, 0, 8 );

    // create an int with the bits you want set
    int combinedBits = BIT_0000000000000001 |
                       BIT_0000000000000010 |
                       BIT_1000000000000000;

    // clear first two bytes
    memset( someArray, 0, 2 );

    // set the first two bytes in the array
    *(int*)someArray |= combinedBits;

    // retrieve the bytes
    int retrievedBytes = *(int*)someArray;

    // test if a bit is set
    if ( retrievedBytes & BIT_0000000000000001 )
    {
        // do something
    }
}
The naming of the enums is intentionally verbose for clarity. Also, you may notice that there are only 16 bits in the enum, instead of a possible 32 for an int. This is because you mentioned the first two bytes. Using this method with those enums, only the first two bytes of the array will be changed. I'm not sure if this code would be messed up by endianness, so you will have to test on your own machines. HTH.
You put bits in a character array using C or C++ the way you put anything into anything else -- they're all bits anyway.
Since sizeof(char) == 1 by definition, you can only put 8 bits per element of the array.
If you need help with how to twiddle bits, that's an entirely different issue and has nothing to do with chars and arrays.
C doesn't support binary literals, so you'll have to represent the value as hex.
char buf[2];
char *p = buf;   // point at real storage before writing through p
*p++ = 0x10;
*p++ = 0xFE;
Take a look at the functions htons() and htonl() for converting multi-byte values to network byte order.
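As a small sketch of that idea (assuming a POSIX system for <arpa/inet.h>), a 16-bit value can be placed into a char array in network byte order like this:

#include <arpa/inet.h>  /* htons */
#include <stdint.h>
#include <string.h>

int main(void)
{
    char buf[2];
    uint16_t value = 0x10FE;        /* example value */
    uint16_t be = htons(value);     /* host to network (big-endian) order */
    memcpy(buf, &be, sizeof be);    /* buf[0] holds 0x10, buf[1] holds 0xFE */
    return 0;
}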