I'm interfacing an accelerometer with a TivaC and displaying the raw data over UART.
void main(void){
    signed int accelerationX = getAcceleration_X();

    if (accelerationX >= 0){
        UART_OutString("\r\nX Axl: ");
        UART_OutUDec((unsigned short) accelerationX);
    } else {
        UART_OutString("\r\nX Axl: - ");
        UART_OutUDec((unsigned short) (accelerationX*-1));
    }
}
I found this sort of code on a forum.
I don't understand why "accelerationX*-1" is done when the acceleration is negative.
accelerationX is a signed integer, but it would seem that UART_OutUDec expects an unsigned integer. Therefore they have to print a minus sign followed by the absolute value of accelerationX (sign removed).
It's because the number is being sent as an unsigned short rather than as a signed quantity. It would be helpful to see what UART_OutUDec is doing, but it also doesn't really matter, because a UART will simply send whatever is dropped in its data register. Most likely UART_OutUDec translates the unsigned short into ASCII decimal digits. The receiver has no way to know the value was supposed to be negative, so the minus sign is transmitted explicitly, followed by what is effectively the absolute value of the acceleration.
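If it helps, that pattern could be wrapped in a helper; this is just a sketch (UART_OutSDec is a made-up name, not part of the library, and for simplicity it ignores the INT_MIN corner case):

/* Hypothetical helper: print a signed value using the unsigned routine. */
void UART_OutSDec(signed int value)
{
    if (value < 0) {
        UART_OutString("-");             /* emit the sign ourselves */
        value = -value;                  /* then print the magnitude */
    }
    UART_OutUDec((unsigned short) value);
}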
Something to consider is that not all receivers are equal. A lot of people assume the device on the other end is a computer or something that understands ASCII, but that's not always the case. I've worked on embedded systems that transmitted ASCII characters mixed with non-ASCII characters, which is confusing and hard to maintain, but these systems exist. That's almost certainly not applicable to your situation simply because it's rare, but, in the future, if you give additional details about the receiver it will help clarify how the data should be formatted and transmitted.
I am receiving hex data from a serial port.
I have converted the hex data to the corresponding int values.
I want to display the equivalent characters on a GTK label.
But if you look at the character map, the values from 0x00 to 0x20 are control characters.
So I was thinking of adding 256 to each converted int value and showing the corresponding Unicode character on the label.
But I am not able to convert the ints to Unicode. Say I have an array of ints 266, 267, 289...
How should I convert them to Unichar and display them on a GTK label?
I know this may seem like a very basic problem to you all, but I have struggled a lot and didn't find an answer. Please help.
The GTK functions that set text on UI elements all assume UTF-8 strings. A single unsigned byte representing a Unicode code point with value > 127 will not form a valid UTF-8 string if written out as an unsigned byte. I can think of a couple of ways around this.
Store the code point as a 32-bit integer (which is essentially UTF-32) and use the functions in the iconv library, or something similar, to do the conversion from UTF-32 to UTF-8. There are other conversion implementations in C widely available. Converting your unsigned byte to UTF-32 really amounts to padding it with three leading zero bytes -- which is easy to code.
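As a rough sketch of the iconv route (note the "UTF-32LE" source encoding assumes a little-endian host; a big-endian machine would use "UTF-32BE"):

#include <iconv.h>
#include <stdint.h>

/* Convert one code point, held in a uint32_t, to a NUL-terminated UTF-8 string. */
int codepoint_to_utf8(uint32_t cp, char *out, size_t outlen)
{
    iconv_t cd = iconv_open("UTF-8", "UTF-32LE");   /* to, from */
    if (cd == (iconv_t) -1)
        return -1;
    char *in = (char *) &cp;
    char *outp = out;
    size_t inleft = sizeof cp, outleft = outlen - 1;
    size_t r = iconv(cd, &in, &inleft, &outp, &outleft);
    iconv_close(cd);
    if (r == (size_t) -1)
        return -1;
    *outp = '\0';
    return 0;
}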
Generate a UTF-8 string yourself, based on the 8-bit code point value. Since you have a limited range of values, this is easy-ish. If you look at the way that UTF-8 is written out, e.g., here:
https://en.wikipedia.org/wiki/UTF-8
you'll see that the values you need to represent are written as two unsigned bytes, the first beginning with binary 110B, and the second with 10B. The bits of the code point value are split up and distributed between these two bytes. Doing this conversion will need a little masking and bit-shifting, but it's not hugely difficult.
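A minimal sketch of that masking and shifting, assuming every value stays in the two-byte range 0x80 to 0x7FF (which covers 266, 267, 289 and so on):

#include <stdint.h>

/* Encode one code point (0x80..0x7FF) as two UTF-8 bytes plus a terminator. */
void encode_2byte_utf8(uint32_t cp, char out[3])
{
    out[0] = (char) (0xC0 | (cp >> 6));     /* 110xxxxx: the top five bits */
    out[1] = (char) (0x80 | (cp & 0x3F));   /* 10xxxxxx: the low six bits */
    out[2] = '\0';
}

The result can go straight into gtk_label_set_text(). For what it's worth, GLib also provides g_unichar_to_utf8(), which performs this encoding for any code point.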
Having said all that, I have to wonder why you'd want to assign a character to a label that users will likely not understand? Why not just write the hex number on the label, if it is not a displayable character?
I'm writing a program in C which should visualize audio. As the audio source I use a microphone and the ALSA C sound library. I take the sound frames from the ALSA library, apply some transformations (Fourier analysis and the like), and then visualize them. I almost have a working program, with one exception: it seems I'm converting the ALSA frames to doubles in the wrong way.
This is how I do it:
unsigned char x = getFirstByte();              /* low byte of the sample */
signed char y = getSecondByte();               /* high byte, carries the sign */
double analog_signal = (y*256 + x) / 32768.;   /* scale to [-1.0, 1.0) */
Now this code works, but sometimes (relatively often) I get spikes where the value of analog_signal is about 0.99..., where it shouldn't be.
So I started printing the values of x and y whenever such a spike occurred.
The output was quite clear: every time a spike occurred, y was equal to 127 and x was some value around 230.
As far as I understand, my conversion is correct, but it seems ALSA treats its values differently, so that this special value of 127 in the second byte has to be converted in some other way, for whatever reason.
I don't want to believe that my microphone is broken, so could someone who has worked with the ALSA library kindly give me some advice on my problem?
I would also be happy with an ALSA library function that does this conversion for me; I haven't found one, but maybe I overlooked it.
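For reference, my conversion above should be equivalent to treating the two bytes as one little-endian signed 16-bit sample (assuming the capture format is S16_LE); here it is in that form:

#include <stdint.h>

/* Combine low/high bytes into one S16_LE sample and scale to [-1.0, 1.0). */
double sample_to_double(unsigned char lo, signed char hi)
{
    int16_t s = (int16_t) (((uint8_t) hi << 8) | lo);
    return s / 32768.0;
}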
I need to generate a PIN between 0 and 9999.
Leading zeros are important, since I'm going to use this PIN to encrypt some files, and encrypting with '0024' is different from encrypting with '24'.
I'm using an unsigned int and it isn't working.
Is an array the only way?
You can't. An int is an integer, and for integers it really doesn't make sense to talk about leading zeros: an integer (in the mathematical sense, which is what int is trying to model on a computer) simply cannot have leading zeros.
All the bits available to the int are used to store actual value bits; none are left over to store that kind of representational information.
It sounds as if you want a string, or an array of digits.
Convert your integer value to a string and format it so that you get the integer with leading zeros, in string form.
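A minimal sketch of that, with snprintf doing the zero padding (the variable names are just for illustration):

#include <stdio.h>

int main(void)
{
    unsigned int pin = 24;     /* the PIN value, 0..9999 */
    char pin_str[5];           /* four digits plus the terminating '\0' */
    snprintf(pin_str, sizeof pin_str, "%04u", pin);   /* yields "0024" */
    printf("%s\n", pin_str);
    return 0;
}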
I'm trying to send a converted temperature reading from my DS1820 to my PC using a PIC16F877 UART. I am using MPLABX and the XC8 compiler, which has a built-in usart.h, though that is only useful for the PIC18 series, so I'm using usart_pic16.h, which was written by a third party to work with the PIC16 series.
I am successfully collecting the temperature in its hex form from the DS1820 and have converted it to a human-readable float, but I can't find a way to forward the float value to the PC via the UART.
The usart_pic16.h library allows for direct sending of chars, strings, lines and ints over the USART using the following functions:
void USARTWriteChar(char ch);
void USARTWriteString(const char *str);
void USARTWriteLine(const char *str);
void USARTWriteInt(int16_t val, int8_t field_length);
I'm stuck finding a way to send the float value across the UART using this library, including extracting and sending the decimal point.
I did try sending a string like this:
USARTWriteString( "TempC= %7.3f degrees C \r\n", temp_c );
where temp_c is the float value of the temperature, but it failed to compile with "too many function arguments". Probably obvious to the C gurus out there, which I'm unfortunately not. :(
Maybe one way would be to extract each digit from the float and send it as an int, except for the decimal point, which could probably be found with an 'if' check on each value; when the decimal point is found it could be sent as a char, e.g. USARTWriteChar('.');, which does work. Unfortunately I don't know how to extract the individual digits of a float, or whether that's the best way to do it.
I wasn't sure if my code was required to solve this, so I thought I'd avoid spamming it unless someone asks.
Any help would be great.
Thanks.
The general equivalent would be to include <stdio.h> and do something like the following:
char s[64];                                      /* buffer for the formatted text */
sprintf(s, "TempC= %7.3f degrees C \r\n", temp_c);
USARTWriteString(s);
Although for an embedded platform you may be best off avoiding the printf-style functions, which can use a fair bit of code space on a small microcontroller. Also, in the above example it would make sense to put just the floating-point conversion through a separate sprintf and output the rest of the string separately, so the buffer s doesn't have to be so large.
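Something like this, reusing the functions from the question so only the number itself goes through sprintf:

char num[16];                      /* small buffer, just for the number */
USARTWriteString("TempC= ");
sprintf(num, "%7.3f", temp_c);     /* convert only the float */
USARTWriteString(num);
USARTWriteString(" degrees C \r\n");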
That should get you running for the moment, but in the longer term you might want to look at converting the temperature to an integer, say by multiplying it by 1000, and then decoding the value on the PC; that's assuming you eventually intend to write your own application to communicate with the microcontroller.
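A sketch of that integer approach, using the library's USARTWriteInt; it assumes the scaled value fits in an int16_t (roughly -32 C to +32 C at a scale of 1000) and that a field_length of 0 means no padding:

int16_t scaled = (int16_t) (temp_c * 1000.0f);   /* e.g. 23.471 C -> 23471 */
USARTWriteInt(scaled, 0);                        /* the PC divides by 1000 */
USARTWriteString("\r\n");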
I have a smart card on which I can store bytes (in multiples of 16).
If I do Save(byteArray, length), then I can do Receive(byteArray, length),
and I think I will get the byte array back in the same order I stored it.
Now I have an issue: I realized that if I store an integer on this card
and some other machine (with different endianness) reads it, it may get wrong data.
So I thought maybe the solution is to always store data on this card in a little-endian
way, and always retrieve it in a little-endian way (I will write the apps that read and write, so I am free to interpret the numbers as I like). Is this possible?
Here is something I have come up with. Embed the integer in a char array:
int x;
unsigned char buffer[250];
buffer[0] = LSB(x);        /* least significant byte first */
buffer[1] = LSB(x>>8);
buffer[2] = LSB(x>>16);
buffer[3] = LSB(x>>24);    /* most significant byte last */
The important point, I think, is that the LSB function should return the least significant byte regardless of the endianness of the machine. What would such an LSB function look like?
Now, to reconstruct the integer (something like this):
int x = buffer[0] | (buffer[1]<<8) | (buffer[2]<<16) | ((unsigned)buffer[3]<<24);  /* the cast keeps the top byte's shift from overflowing a signed int */
As I said, I want this to work regardless of the endianness of the machine that reads and writes it. Will this work?
The LSB function may be implemented via a macro, as below:
#define LSB(x) ((x) & 0xFF)
Provided x is unsigned.
If your C library is POSIX-compliant, then you have standard functions available to do exactly what you are trying to code: ntohl, ntohs, htonl, htons (network to host long, network to host short, and so on). That way you don't have to change your code when you compile it for a big-endian or a little-endian architecture. The functions are defined in arpa/inet.h (see http://linux.die.net/man/3/ntohl).
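A sketch of the round trip with those functions; note that they put the bytes on the card in network order (big endian), which is fine as long as your writer and reader both use them:

#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <string.h>      /* memcpy */

/* Store x on the card in a byte order every host can reconstruct. */
void put_int(unsigned char *buffer, int x)
{
    uint32_t wire = htonl((uint32_t) x);   /* host order -> network order */
    memcpy(buffer, &wire, sizeof wire);
}

int get_int(const unsigned char *buffer)
{
    uint32_t wire;
    memcpy(&wire, buffer, sizeof wire);
    return (int) ntohl(wire);              /* network order -> host order */
}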
I think the answer to your question is YES: you can write data on a smart card such that it is universally (and correctly) read by readers of both big-endian AND little-endian orientation. With one big caveat: it would be incumbent on the reader to do the interpretation, not your smart card interpreting the reader, would it not? That is, as you know, there are many routines to determine endianness (1, 2, 3), but it is the readers that would have to contain the code to test endianness, not your card.
Your code example works, but I am not sure it would be necessary, given the nature of the problem as presented.