I am trying to convert a char array to unsigned short, but it's not working as it should.
char szASCbuf[64] = "123456789123456789123456789";

int StoreToFlash(char szASCbuf[], int StartAddress)
{
    int iCtr;
    int ErrorCode = 0;
    int address = StartAddress;
    unsigned short *us_Buf = (unsigned short*)szASCbuf;

    // Write to flash
    for (iCtr = 0; iCtr < 28; iCtr++)
    {
        ErrorCode = Flash_Write(address++, us_Buf[iCtr]);
        if ((ErrorCode & 0x45) != 0)
        {
            Flash_ClearError();
        }
    }
    return ErrorCode;
}
When I inspect the conversion, us_Buf[0] has the value 12594, us_Buf[1] = 13108, and so on, but I have values only up to us_Buf[5]; after that, all the remaining addresses are "0".
I have also tried declaring the char array like this:
char szASCbuf[64] = {'1','2','3','4','5','6','7','8','9','1',.....'\0'};
I am passing the parameters to the function like this:
StoreToFlash(szASCbuf, FlashPointer); // FlashPointer = 0
I am using IAR Embedded Workbench for ARM, big-endian 32-bit.
Any suggestions as to where I am going wrong?
Thanks in advance.
Reinterpreting the char array szASCbuf as an array of short is not safe because of alignment issues. The char type has the least strict alignment requirements and short is usually stricter. This means that szAscBuf might start at address 13, whereas a short should start at either 12 or 14.
This also violates the strict aliasing rule, since szAscBuf and us_Buf are pointing at the same location while having different pointer types. The compiler might perform optimisations which don't take this into account and this could manifest in some very nasty bugs.
The correct way to write this code is to iterate over the original szASCBuf with a step of 2 and then do some bit-twiddling to produce a 2-byte value out of it:
for (size_t i = 0; i < sizeof(szAscbuf); i += 2) {
    uint16_t value = (szAscbuf[i] << 8) | szAscbuf[i + 1];

    ErrorCode = Flash_Write(address++, value);
    if (ErrorCode & 0x45) {
        Flash_ClearError();
    }
}
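As an aside, if you ever do want the bytes reinterpreted in the machine's native order (rather than the explicit big-endian shift-and-or), the aliasing- and alignment-safe way to do it is memcpy. A sketch, not part of the original answer:

```c
#include <stdint.h>
#include <string.h>

/* Aliasing- and alignment-safe reinterpretation: copy two bytes into
   a uint16_t. The result depends on the machine's byte order, unlike
   the explicit shift-and-or, which always builds a big-endian value.
   Compilers typically optimize this memcpy into a single load. */
uint16_t load_u16(const char *p)
{
    uint16_t v;
    memcpy(&v, p, sizeof v);
    return v;
}
```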
If you really intended to treat the digit characters with their numeric value, this will do it:
uint16_t value = (szAscbuf[i] - '0') + (szAscbuf[i + 1] - '0');
In case you just want the numeric value of each character in a 2-byte value (1, 2, 3, 4, ...), iterate over the array with a step of 1 and fetch it this way:
uint16_t value = szAscbuf[i] - '0';
That's normal!
Your char array is "123456789123456789123456789", i.e. {'1','2','3','4','5','6','7','8','9','1',.....'\0'}.
But in ASCII '1' is 0x31, so when you read the array through a short * on a big-endian architecture, it gives:
{ 0x3132, 0x3334, ... }
or, said differently, in decimal:
{ 12594, 13108, ... }
There is a wide string represented as unsigned short* as in the code:
/// data type unsigned short, data length 2 bytes
typedef unsigned short MI_U16;

static int _MI_PVR_UniStrlen(MI_U16 *pu16Str) {
    int iStrlen = 0;
    bool bStrEnd = false;
    MI_U16 *pu16UniStr = NULL;

    if (pu16Str == NULL) {
        return 0;
    }
    pu16UniStr = (MI_U16*)pu16Str;
    while (bStrEnd == false) {
        if (*pu16UniStr == 0) {
            bStrEnd = true;
            break;
        } else {
            pu16UniStr++; // changing this line to pu16UniStr += sizeof(MI_U16); makes correct output 5
        }
        iStrlen++;
    }
    return iStrlen;
}

int main(void) {
    unsigned short* str = L"abcde";
    int l1 = _MI_PVR_UniStrlen(str);
    printf("\nle1 = %d\n", l1); // 1
    return 0;
}
The output is 1, which means that the pointer pu16UniStr in the function is incremented by only 1 byte, and because the second byte is 0 the loop stops after only 1 iteration.
I can't figure out why it behaves this way.
Shouldn't L tell the compiler to increment the pointer by 2 bytes?
Also, the fact that after changing the increment of pu16UniStr (see the comment) we get the correct output of 5 makes me think that the function is implemented incorrectly. Am I right?
P.S.
There is no chance of using something like wcslen().
String literals prefixed with L are wide string literals whose elements have type wchar_t. This type is most likely 4 bytes on your system. So when you use an unsigned short * to index it, you only see half of the bytes for a given character.
Change the pointer type to unsigned int or uint32_t, or better yet just use wchar_t.
I'm developing an ARM embedded application. I'm kind of stuck on a silly problem - I have an array of unsigned 8-bit integers:
uint8_t days[42] = { 0 };
It's initialized with some data - the initialization algorithm introduces a lot of variables confusing and irrelevant to the problem, so I will not repost it here. I see this array in the debugger variable watch, and I'm certain it is filled with integer values from 0 to 31.
I'd like to take any element of this array, say the 15th, and convert it to a char* so that it can be displayed on my LCD screen. I convert it using the sprintf function:
char d[3] = { '0', '0', '0' };
sprintf(d, "%d", days[15]);
Just one note: no, I can't use the stdlib itoa() function, because it does not conform to MISRA-C standards, which I am obliged to follow.
As a result, I only get a binary zero value in my d buffer. Any ideas?
For MISRA-C compliance, you can certainly not use sprintf() or anything else from stdio.h either. You generally want to avoid sprintf like the plague on any embedded system anyhow.
Writing a simple decimal integer to string conversion routine is quite basic stuff... here's my attempt at a MISRA-C (2004 and 2012) compatible version:
#include <stddef.h>
#include <stdint.h>

void dec_to_str (char* str, uint32_t val, size_t digits);

int main (void)
{
    char str[3u + 1u]; // assuming you want null terminated strings?
    dec_to_str(str, 31u, 3u);
    return 0;
}

void dec_to_str (char* str, uint32_t val, size_t digits)
{
    size_t i = 1u;

    for (; i <= digits; i++)
    {
        str[digits - i] = (char)((val % 10u) + '0');
        val /= 10u;
    }
    str[i - 1u] = '\0'; // assuming you want null terminated strings?
}
Note: the uint32_t variable could get swapped out for a uint8_t, but then you need to add type casts all over the place to prevent implicit type promotions, as required by MISRA. The code then turns really ugly, like this:
str[digits-i] = (char)(uint8_t)((uint8_t)(val % 10u) + '0');
The only sane thing to do then, is to split that mess into several lines:
uint8_t ch = (uint8_t)(val % 10u);
ch = (uint8_t)(ch + '0');
str[digits-i] = (char)ch;
#include "stdafx.h"
#include <stdio.h>

int days[2] = {12, 14};
char d[3] = {'0', '0', 0};

int _tmain(int argc, _TCHAR* argv[])
{
    d[0] = days[1] / 10 + 0x30; // convert 10's digit to ASCII
    d[1] = days[1] % 10 + 0x30; // convert 1's digit to ASCII

    // Debugging help
    printf(d);
    getchar();
    return 0;
}
I'm making a program that builds a binary message. I'm using char strings to hold the binary values, so I've initialized a bunch of char strings with the default values. Then I combine them by running a for loop that reads them into a large string (aismsg/ais_packet). Everything worked fine until I added msg14Text[]; now the string I'm building (aismsg/ais_packet) gets shortened as shown below (even though I'm not using the variable). It seems like when I add msg14Text[], it changes the value of one of the other strings. Is this maybe a memory allocation problem?
Part of the code:
char ais_packet[257]; // Allocate array for the AIS data packet.
char aismsg[175]; // Allocate array for the message.
int burst_nr = 1; // Indicates which burst is being transmitted (1-7).
char ramp_up[] = "00000000"; // Ramp-up buffer.
char train_seq[] = "010101010101010101010101"; // Training sequence, 24 bits of alternating 0-1s.
char hdlc_flag[] = "01111110"; // HDLC start and end flag.
char buffer[] = "000000000000000000000000"; // Data packet buffer.
char msgID1[] = "000001"; // msg. 1.
char msgID14[] = "010100"; // msg. 14.
char repeat[] = "00"; // Repeated 0 times.
char mmsi[] = "000111010110111100110100010101"; // Gives 123456789 as the MMSI.
char nav_stat[] = "1111"; // Gives 15 = AIS-SART test; change to 14 (1110) for an active AIS-SART.
char rot[] = "10000000"; // Rate of Turn: -128 means not available.
char sogBin[] = "1111111111"; // Corresponds to 1023 = not available = default.
char pos_acc[] = "0"; // Position accuracy over 10 m; 1 = under 10 m.
char lonBin[] = "0110011110010001101011000000"; // Corresponds to 181 degrees, the default value for longitude.
char latBin[] = "011010000010010000101000000"; // Corresponds to 91 degrees, the default value for latitude.
char cogBin[] = "111000010000"; // Corresponds to 3600 = not available = default.
char headingBin[] = "111111111"; // 511 = not available = default.
char timestamp[] = "111100"; // Time since the message was generated; 60 = default = timestamp not available.
char spec_man[] = "01"; // Special manoeuvre: 0 = default, 1 = not engaged in a special manoeuvre.
char spare[] = "000";
char spareMSG14[] = "00"; // Reserved.
char raim[] = "0"; // RAIM: 0 = not in use.
char comm_state[] = "00011100000000000000"; // First 2 bits: sync state: 3 = no UTC sync = default, 0 = UTC sync.
char msg14Text[] = "100100101101111011111100"; // CAUSING TROUBLE!!!! For AIS message 14, spells "Test" in 6-bit ASCII encoding.
The entire code for the function can be found at pastebin.com/wj0RxyLX
Output of ais packet with msg14Text[]:
00000000
Output of ais packet without msg14Text[]:
0000000001010101010101010101010101111110000001000001110101101111001101000101011111100000000011010000000000000110100011000101111000000101100100000100001100101110000100000000110011111000100000011100000000000000001000100110100101111110000000000000000000000000
The AIS packet should consist of the following variables:
ramp_up[] + train_seq[] + hdlc_flag[] + Datapacket(168bit) + crc(16bit) + hdlc_flag[] + buffer[] + '\0'
"Is this maybe a memory allocation problem?"
You don't explicitly allocate any memory in your code. Note that char repeat[] = "00"; is statically allocated array whose size is equal to size of 3 chars and whose content is being initialized by string literal "00".
The problem is most likely in the copying of these strings into ais_packet, since you do it in a nonstandard way (character by character), which makes your code hard to read and quite easy to get wrong:
for (int k = 0; k < 256; k++)
{
    ...
    if (k == 256) // are you sure that the value of k will reach 256?
I recommend using the C string functions that were created for this purpose: create ais_packet by copying the first string into it with strcpy, then keep extending the content of ais_packet by appending the other strings with strcat.
This question will also help you: Using strcat in C
At the end of the ugly for (k=0; k < 168; k++) { if ... else if ... } loop:

else if (k == 168)
{
    aismsg[k] = '\0';
    k = 0;
}

This branch will either make the loop run forever (if the condition is k <= 168) or never be executed (if it is k < 168). (There are more instances of this pattern.)
BTW, another (also faster) way to do the same would be:

....
unsigned dst = 0;
memcpy(array + dst, src1, 123);
dst += 123;
memcpy(array + dst, src2, 234);
dst += 234;
...
array[dst] = 0;
Just a thought, but if you are building binary messages, why not use actual binary instead of char arrays? Here's a way to use structs inside a union to bit pack binary data.
// declaration
typedef union
{
    uint32_t packed;
    struct {
        uint16_t sample1 : 12; // 12 bits long
        uint16_t sample2 : 14;
        uint16_t         : 6;  // unused bits
    } data;
} u1;

// instantiation
u1 pack1;

// setting
pack1.data.sample1 = 1234;

// getting
uint16_t newval = pack1.data.sample2;

// setting bit 6 in sample1
pack1.data.sample1 |= (1 << 6);

// clearing the bits of sample1 that are 0 in the mask
pack1.data.sample1 &= 0b11110101;

// getting the whole packed value
uint32_t binmsg = pack1.packed;
n.b. I know that this question has been asked on StackOverflow before in a variety of different ways and circumstances, but the search for the answer I seek doesn't quite help my specific case. So while this initially looks like a duplicate of a question such as How can I convert an integer to a hexadecimal string in C? the answers given, are accurate, but not useful to me.
My question is how to convert a decimal integer into a hexadecimal string, manually. I know there are some neat tricks with stdlib.h and printf, but this is a college task, and I need to do it manually (professor's orders). We are, however, permitted to seek help.
I am using the good old "divide by 16 and convert the remainder to hex, then reverse the values" method of obtaining the hex string, but there must be a big bug in my code, as it is not giving me back, for example, "BC" for the decimal value 188.
It is assumed that the algorithm will NEVER need to find hex values for decimals larger than 256 (or FF). While the passing of parameters may not be optimal or desirable, it's what we've been told to use (although I am allowed to modify the getHexValue function, since I wrote that one myself).
This is what I have so far:
/* Function to get the hex character for a decimal (value) between
 * 0 and 16. Invalid values are returned as -1.
 */
char getHexValue(int value)
{
    if (value < 0) return -1;
    if (value > 16) return -1;
    if (value <= 9) return (char)value;
    value -= 10;
    return (char)('A' + value);
}
/* Function asciiToHexadecimal() converts a given character (inputChar) to
 * its hexadecimal (base 16) equivalent, stored as a string of
 * hexadecimal digits in hexString. This function will be used in menu
 * option 1.
 */
void asciiToHexadecimal(char inputChar, char *hexString)
{
    int i = 0;
    int remainders[2];
    int result = (int)inputChar;

    while (result) {
        remainders[i++] = result % 16;
        result /= 16;
    }

    int j = 0;
    for (i = 2; i >= 0; --i) {
        char c = getHexValue(remainders[i]);
        *(hexString + (j++)) = c;
    }
}
The char *hexString is the pointer to the string of characters which I need to output to the screen (eventually). The char inputChar parameter is the character I need to convert to hex (which is why I never need to convert values over 256).
If there is a better way to do this which still uses the void asciiToHexadecimal(char inputChar, char *hexString) signature, I am all ears. Other than that, my debugging seems to indicate the values are OK, but the output comes out like \377 instead of the expected hexadecimal alphanumeric representation.
Sorry if there are any terminology or other problems with the question itself (or with the code), I am still very new to the world of C.
Update:
It just occurred to me that it might be relevant to post the way I am displaying the value in case its the printing, and not the conversion which is faulty. Here it is:
char* binaryString = (char*) malloc(8);
char* hexString = (char*) malloc(2);
asciiToBinary(*(asciiString + i), binaryString);
asciiToHexadecimal(*(asciiString + i), hexString);
printf("%6c%13s%9s\n", *(asciiString + i), binaryString, hexString);
(Everything in this code snip-pit works except for hexString)
char getHexValue(int value)
{
    if (value < 0) return -1;
    if (value > 16) return -1;
    if (value <= 9) return (char)value;
    value -= 10;
    return (char)('A' + value);
}
You might wish to print out the characters you get from calling this routine for every value you're interested in. :) (printf(3) format %c.)
When you call getHexValue() with a number between 0 and 9, you return a number between 0 and 9, in the ASCII control-character range. When you call getHexValue() with a number between 10 and 15, you return a number between 65 and 70, in the ASCII letter range.
The sermon? Unit testing can save you hours of time if you write the tests about the same time you write the code.
Some people love writing the tests first. While I've never had the discipline to stick to this approach for long, knowing that you have to write tests will force you to write code that is easier to test. And code that is easier to test is less coupled (or 'more decoupled'), which usually leads to fewer bugs!
Write tests early and often. :)
Update: After you included your output code, I had to comment on this too :)
char* binaryString = (char*) malloc(8);
char* hexString = (char*) malloc(2);
asciiToBinary(*(asciiString + i), binaryString);
asciiToHexadecimal(*(asciiString + i), hexString);
printf("%6c%13s%9s\n", *(asciiString + i), binaryString, hexString);
hexString has been allocated one byte too small to be a C-string -- you forgot to leave room for the ASCII NUL '\0' character. If you were printing hexString by the %c format specifier, or building a larger string by using memcpy(3), it might be fine, but your printf() call is treating hexString as a string.
In general, when you see a
char *foo = malloc(N);
call, be afraid -- the C idiom is
char *foo = malloc(N+1);
That +1 is your signal to others (and yourself, in two months) that you've left space for the NUL. If you hide that +1 in another calculation, you're missing an opportunity to memorize a pattern that can catch these bugs every time you read code. (Honestly, I found one of these through this exact pattern on SO just two days ago. :)
Is the target purely hexadecimal, or shall the function be parameterizable? If it's constrained to hex, why not exploit the fact that a single hex digit encodes exactly four bits?
This is how I'd do it:
#include <stdlib.h>
#include <limits.h> /* implementation's CHAR_BIT */

#define INT_HEXSTRING_LENGTH (sizeof(int)*CHAR_BIT/4)

/* We define this helper array in case we run on an architecture
   with some crude, discontinuous charset -- THEY EXIST! */
static char const HEXDIGITS[0x10] =
    {'0', '1', '2', '3', '4', '5', '6', '7',
     '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};

void int_to_hexstring(int value, char result[INT_HEXSTRING_LENGTH+1])
{
    int i;
    unsigned v = (unsigned)value; /* avoid implementation-defined right shift of negative values */

    result[INT_HEXSTRING_LENGTH] = '\0';
    for (i = INT_HEXSTRING_LENGTH-1; v; i--, v >>= 4) {
        int d = v & 0xf;
        result[i] = HEXDIGITS[d];
    }
    for (; i >= 0; i--) { result[i] = '0'; }
}

int main(int argc, char *argv[])
{
    char buf[INT_HEXSTRING_LENGTH+1];

    if (argc < 2)
        return -1;

    int_to_hexstring(atoi(argv[1]), buf);
    puts(buf);
    return 0;
}
I made a library to do hexadecimal/decimal conversion without the use of stdio.h. It is very simple to use:
char* dechex (int dec);
This uses calloc() to return a pointer to a hexadecimal string; this way the quantity of memory used is optimized, so don't forget to use free().
Here is the link on GitHub: https://github.com/kevmuret/libhex/
You're very close - make the following two small changes and it will be working well enough for you to finish it off:
(1) change:
if (value <= 9) return (char)value;
to:
if (value <= 9) return '0' + value;
(you need to convert the 0..9 value to a char, not just cast it).
(2) change:
void asciiToHexadecimal(char inputChar, char *hexString)
to:
void asciiToHexadecimal(unsigned char inputChar, char *hexString)
(inputChar was being treated as signed, which gave undesirable results with %).
A couple of tips:
have getHexValue return '?' rather than -1 for invalid input (makes debugging easier)
write a test harness for debugging, e.g.
int main(void)
{
    char hexString[256];

    asciiToHexadecimal(166, hexString);
    printf("hexString = %s = %#x %#x %#x ...\n", hexString, hexString[0], hexString[1], hexString[2]);
    return 0;
}
#include <stdio.h>
#include <string.h>

char* inttohex(int);

int main(void)
{
    int i;
    char *c;

    printf("Enter the no.\n");
    scanf("%d", &i);
    c = inttohex(i);
    printf("c=%s", c);
    return 0;
}

char* inttohex(int i)
{
    int l1, j = 0, n;
    static char a[100];
    char t;

    while (i != 0)
    {
        l1 = i % 16;
        if (l1 >= 10) /* 10..15 map to 'A'..'F' */
        {
            a[j] = l1 - 10 + 'A';
        }
        else
            sprintf(a + j, "%d", l1);
        i = i / 16;
        j++;
    }
    n = strlen(a);
    /* reverse the digits */
    for (i = 0; i < n / 2; i++)
    {
        t = a[i];
        a[i] = a[n - i - 1];
        a[n - i - 1] = t;
    }
    return a;
}
In complement of the other good answers....
If the numbers represented by these hexadecimal or decimal character strings are huge (e.g. hundreds of digits), they won't fit in a long long (or whatever largest integral type your C implementation is providing). Then you'll need bignums. I would suggest not coding your own implementation (it is tricky to make an efficient one), but use an existing one like GMPlib
Alright, this one's been puzzling me for a bit.
The following function encodes a string into Base64:
void Base64Enc(const unsigned char *src, int srclen, unsigned char *dest)
{
    static const unsigned char enc[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    unsigned char *cp;
    int i;

    cp = dest;
    for (i = 0; i < srclen; i += 3)
    {
        *(cp++) = enc[((src[i + 0] >> 2))];
        *(cp++) = enc[((src[i + 0] << 4) & 0x30)
                    | ((src[i + 1] >> 4) & 0x0f)];
        *(cp++) = enc[((src[i + 1] << 2) & 0x3c)
                    | ((src[i + 2] >> 6) & 0x03)];
        *(cp++) = enc[((src[i + 2]     ) & 0x3f)];
    }
    *cp = '\0';
    while (i-- > srclen)
        *(--cp) = '=';
    return;
}
Now, on the function calling Base64Enc() I have:
unsigned char *B64Encoded;
which is the argument I pass as the unsigned char *dest parameter of the Base64 encoding function.
I've tried different initializations, from mallocs to NULL to other initializations. No matter what I do I always get an exception, and if I don't initialize it then the compiler (the VS2005 C compiler) warns me that it hasn't been initialized.
If I run this code with the uninitialized variable, sometimes it works and sometimes it doesn't.
How do I initialize that pointer and pass it to the function?
You need to allocate a buffer big enough to contain the encoded result. Either allocate it on the stack, like this:
unsigned char B64Encoded[256]; // the number here needs to be big enough to hold all possible variations of the argument
But it is easy to cause stack buffer overflow by allocating too little space using this approach. It would be much better if you allocate it in dynamic memory:
int cbEncodedSize = srclen * 4 / 3 + 1; // cbEncodedSize is calculated from the length of the source string
unsigned char *B64Encoded = (unsigned char*)malloc(cbEncodedSize);
Don't forget to free() the allocated buffer after you're done.
It looks like you would want to use something like this:
// allocate 4/3 bytes per source character, plus one for the null terminator
unsigned char *B64Encoded = malloc(srclen*4/3+1);
Base64Enc(src, srclen, B64Encoded);
It would help if you provided the error.
I can, with your function above, do this successfully:

int main() {
    unsigned char *B64Encoded;
    B64Encoded = (unsigned char *) malloc(1000);
    unsigned char *src = (unsigned char *) "ABC";

    Base64Enc(src, 3, B64Encoded);
    return 0;
}
You definitely need to malloc space for the data. You also need to malloc more space than src (1/3 more, I believe).
A base64 encoded string has four bytes per three bytes in-data string, so if srclen is 300 bytes (or characters), the length for the base64 encoded string is 400.
Wikipedia has a brief but quite good article about it.
So, rounding up srclen to the nearest tuple of three, divided by three, times four should be exactly enough memory.
I see a problem in your code: it may access bytes past the trailing null char, for instance if the string length is one char. The behavior is then undefined and may result in a thrown exception if buffer boundary checking is activated.
This may explain the message related to accessing uninitialized memory.
You should then change your code so that you handle the trailing chars separately.
int len = (srclen / 3) * 3;

for (int i = 0; i < len; i += 3)
{
    // your current code here, it is ok with this loop condition.
}

// Handle 0 bits padding if required
if (len != srclen)
{
    // add new code here
}
...
PS: Here is the Wikipedia page describing Base64 encoding.