Here is part of my code. I read some fields of a kernel data structure and compare them with an array, but oddly, when I print the contents of the array orig_poolinfo, the first element shows up as 103 even though it is actually 128.
int get_poolinfo_fields(vmi_instance_t vmi)
{
    int orig_poolinfo[] = {128,103,76,51,25,1,32,26,20,14,7,1};
    uint64_t poolinfo_table_addr = 0xffffffff81ca4fc0; // kernel 3.11
    int poolinfo_table;
    int i;

    //for( i=0;i<12;i++)
    //    printf("poolinfo_table=%d %d\n",i,orig_poolinfo[i]);

    for( i=0;i<12;i++)
    {
        vmi_read_64_va(vmi,poolinfo_table_addr, 0, &poolinfo_table);
        printf("poolinfo_table=%d orig_poolinfo[%d]=%d\n",poolinfo_table,i,orig_poolinfo[i]);
        if(poolinfo_table != orig_poolinfo[i])
            printf("hi\n"); //return(1);
        poolinfo_table_addr = poolinfo_table_addr + 0x4;
    }
    return(0);
}
and this is the output:
poolinfo_table=128 orig_poolinfo[0]=103
hi
poolinfo_table=103 orig_poolinfo[1]=103
poolinfo_table=76 orig_poolinfo[2]=76
poolinfo_table=51 orig_poolinfo[3]=51
poolinfo_table=25 orig_poolinfo[4]=25
poolinfo_table=1 orig_poolinfo[5]=1
poolinfo_table=32 orig_poolinfo[6]=32
poolinfo_table=26 orig_poolinfo[7]=26
poolinfo_table=20 orig_poolinfo[8]=20
poolinfo_table=14 orig_poolinfo[9]=14
poolinfo_table=7 orig_poolinfo[10]=7
poolinfo_table=1 orig_poolinfo[11]=1
You are mixing two different types, int and uint64_t. Their sizes might not be the same.
By using vmi_read_64_va() you copy 8 bytes into a 4-byte object. If sizeof(int) is 4 on your system, the extra 4 bytes land in whatever happens to sit next to poolinfo_table on the stack; here that is evidently orig_poolinfo[0], which is why it suddenly prints 103 (the second table entry). This is undefined behavior: anything can happen, and your program is not behaving correctly.
Use the read function appropriate to your type size and don't mix types.
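For example, a minimal fix sketch, assuming your LibVMI build provides vmi_read_32_va() with the same argument pattern as vmi_read_64_va(): read into a variable whose size matches the read.
uint32_t poolinfo_entry;   /* 4 bytes, matching the table entries */
for (i = 0; i < 12; i++)
{
    /* reads exactly 4 bytes, so nothing next to poolinfo_entry gets overwritten;
       real code should also check the returned status */
    vmi_read_32_va(vmi, poolinfo_table_addr, 0, &poolinfo_entry);
    if (poolinfo_entry != (uint32_t)orig_poolinfo[i])
        return 1;
    poolinfo_table_addr += 0x4;
}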
I was trying to make an array that contains Fibonacci numbers in C, but I got into trouble. I can't get all of the elements, some of the elements are calculated wrongly, and I don't know where I am going wrong.
#include <stdio.h>

int main(void){
    int serie[]={1,1},sum=0,size=2;
    while(size<=4000000){
        serie[size]=serie[size-1]+serie[size-2];
        printf("%d\n",serie[size-1]);
        size+=1;
    }
    return 0;
}
Output:
1
2
4
6
11
17
28
45
73
118
191
309
500
809
1309
2118
3427
5545
8972
14517
23489
38006
61495
99501
160996
260497
421493
681990
1103483
1785473
2888956
4674429
7563385
12237814
19801199
32039013
51840212
83879225
135719437
219598662
355318099
574916761
930234860
1505151621
-1859580815
-354429194
2080957287
1726528093
-487481916
1239046177
751564261
1990610438
-1552792597
437817841
-1114974756
-677156915
-1792131671
1825678710
33547039
1859225749
1892772788
-542968759
1349804029
806835270
-2138327997
-1331492727
825146572
-506346155
318800417
-187545738
131254679
-56291059
74963620
18672561
93636181
112308742
205944923
318253665
524198588
842452253
1366650841
-2085864202
-719213361
1489889733
770676372
-2034401191
-1263724819
996841286
-266883533
729957753
463074220
1193031973
1656106193
-1445829130
210277063
-1235552067
-1025275004
2034140225
1008865221
-1251961850
-243096629
-1495058479
-1738155108
1061753709
-676401399
385352310
-291049089
94303221
-196745868
-102442647
-299188515
-401631162
-700819677
-1102450839
-1803270516
1389245941
-414024575
975221366
561196791
1536418157
2097614948
-660934191
--------------------------------
Process exited after 2.345 seconds with return value 3221225477
Press any key to continue . . .
I don't understand why it is giving that output.
int serie[]={1,1}
Declares an array of two elements. As the array has two elements and indices start from zero, it has the valid indices 0 and 1, i.e. serie[0] is the first element and serie[1] is the second element.
int size=2;
while(..) {
    serie[size]= ...
    size+=1;
}
As size starts at 2, the expression serie[2] = ... is invalid. There is no third element in the array, so it writes to an unknown memory region. Executing such an action is undefined behavior. There could be some other variable there, some system variable, or memory of another program, or it could spawn nasal demons. It is undefined.
If you want to store the output in an array, you need to make sure the array has enough elements to hold all of the values.
And a tip:
int serie[4000000];
may not work, as it will try to allocate 4000000 * sizeof(int), which, assuming sizeof(int) = 4, is about 15 megabytes of memory. Some systems don't allow allocating that much memory on the stack, so you should move to dynamic allocation.
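A minimal sketch of the dynamic-allocation route (keeping the original int type, so it will still overflow as discussed below):
#include <stdio.h>
#include <stdlib.h>

int main(void){
    int count = 4000000;
    int *serie = malloc(count * sizeof *serie); /* heap, not stack */
    if (serie == NULL)
        return 1; /* allocation failed */
    serie[0] = 1;
    serie[1] = 1;
    for (int size = 2; size < count; size++) {
        serie[size] = serie[size-1] + serie[size-2]; /* note: signed overflow is still undefined behavior */
        printf("%d\n", serie[size]);
    }
    free(serie);
    return 0;
}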
You're having an integer overflow because an int is, at a certain point, not big enough to hold the numbers, so the value wraps around and gives false results.
Your program should be like:
#include <stdio.h>

int main(void){
    long long unsigned series[100] = {1,1};
    int size = 2;
    while(size < 100){
        series[size] = series[size-1] + series[size-2];
        printf("%llu\n", series[size-1]);
        size += 1;
    }
    return 0;
}
The size of long long unsigned is also limited, though, and Fibonacci numbers grow very quickly. So this will print more correct numbers, but it will still overflow eventually: it overflows once a number exceeds the constant ULLONG_MAX declared in limits.h.
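For example, the overflow can be detected before it happens by testing against ULLONG_MAX; a small sketch built on the code above:
#include <limits.h>
#include <stdio.h>

int main(void){
    long long unsigned series[100] = {1,1};
    int size = 2;
    while(size < 100){
        /* stop before the addition would wrap past ULLONG_MAX */
        if (series[size-2] > ULLONG_MAX - series[size-1])
            break;
        series[size] = series[size-1] + series[size-2];
        printf("%llu\n", series[size]);
        size += 1;
    }
    return 0;
}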
The problem with this code:
#include <stdio.h>

int main(void){
    int serie[]={1,1},sum=0,size=2;
    while(size<=4000000){
        serie[size]=serie[size-1]+serie[size-2];
        printf("%d\n",serie[size-1]);
        size+=1;
    }
    return 0;
}
... is that it attempts to store a very long series of numbers (4 million) into a very short array (2 elements). Arrays are fixed in size. Changing the variable size has no effect on the size of the array serie.
The expression serie[size]=... stores numbers outside the bounds of the array every time it's executed because the only legal array index values are 0 and 1. This results in undefined behavior and to be honest you were lucky only to see weird output.
There are a couple of possible solutions. The one that changes your code the least is to simply extend the array. Note that I've made it a static rather than automatic variable, because your implementation probably won't support something of that size in its stack.
#include <stdio.h>

int serie[4000000]={1,1};

int main(void){
    int size=2;
    while(size<4000000){ // note strict less-than: 4000000 is not a valid index
        serie[size]=serie[size-1]+serie[size-2];
        printf("%d\n",serie[size-1]);
        size+=1;
    }
    return 0;
}
The more general solution is to store the current term and the two previous terms in the series as three separate integers. It's a little more computationally expensive but doesn't have the huge memory requirement.
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int term0=0, term1=1, term2;
    while(1)
    {
        if (term0 > INT_MAX - term1) break; // overflow, stop
        term2 = term0 + term1;
        printf("%d\n",term2);
        term0 = term1;
        term1 = term2;
    }
    return 0;
}
This also has the benefit that it won't print any numbers that have "wrapped around" as a result of exceeding the limits of what can be represented in an int. Of course, you can easily choose another data type in order to get a longer sequence of valid output.
You have two problems:
1. You need to allocate more space in serie, as much as you are going to use.
2. Eventually the Fibonacci numbers become too big to fit in an integer, even a 64-bit unsigned integer (long long unsigned); I think the 90th or so is about the max.
See the modified code:
#include <stdio.h>

// Set maximum number of fib numbers
#define MAX_SIZE 90

int main(void) {
    // Use 64 bit unsigned integer (can't be negative)
    long long unsigned int serie[MAX_SIZE];
    serie[0] = 1;
    serie[1] = 1;

    int sum = 0;
    int size = 0;

    printf("Fib(0): %llu\n", serie[0]);
    printf("Fib(1): %llu\n", serie[1]);

    for (size = 2; size < MAX_SIZE; size++) {
        serie[size] = serie[size-1] + serie[size-2];
        printf("Fib(%i): %llu\n", size, serie[size]);
    }
    return 0;
}
As you are only printing out the numbers, you don't actually have to store all of them
(only the two previous numbers), but it really doesn't matter if there's only 90.
That code will run on a payment device (POS). I have to use legacy C (not C# or C++) for that purpose.
I am trying to prepare a simple Mifare card read/write program. The document below is my reference, and I am trying to achieve what is described on page 9, section 8.6.2.1, Value blocks.
http://www.nxp.com/documents/data_sheet/MF1S50YYX_V1.pdf
I just know the very basics of C. All my searches on the Internet have failed. According to the document:
1- There is an integer variable with the value 1234567.
2- There is a char array[4] which should hold the hex representation of the above value, which is 0x0012D687.
3- I am supposed to invert that char array[4] and arrive at the value 0xFFED2978.
I need to do some other things, but I am stuck at number 3 above. What I tried last is:
int value = 1234567;
char valuebuffer[4];
char invertbuffer[4];
sprintf(valuebuffer, "%04x", value);
for(i = 0; i < sizeof(valuebuffer); i++ )
{
invertbuffer[i] ^= valuebuffer[i];
}
When I print it, I see some other value in invertbuffer, not 0xFFED2978.
Seems like you're making it more complicated than it needs to be. You can do the binary inversion on the int variable rather than messing around with individual bytes.
int value = 1234567;
int inverted = ~value;
printf("%x\n",value);
printf("%x\n",inverted);
gives you output of
12d687
ffed2978
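If you also need the inverted value as a 4-byte buffer (your step 3), you can pull the bytes out of the inverted value with shifts. A minimal sketch, assuming a 32-bit int as in the snippet above and writing the least significant byte first (check the data sheet for the byte order the card actually expects):
#include <stdio.h>

int main(void)
{
    int value = 1234567;
    unsigned int inverted = ~(unsigned int)value; /* 0xFFED2978 */
    unsigned char invertbuffer[4];

    for (int i = 0; i < 4; i++)
    {
        /* byte i = bits 8*i .. 8*i+7 of the inverted value */
        invertbuffer[i] = (unsigned char)((inverted >> (8 * i)) & 0xFF);
    }

    for (int i = 0; i < 4; i++)
        printf("%02X ", invertbuffer[i]); /* prints: 78 29 ED FF */
    printf("\n");
    return 0;
}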
First of all, you must use the types from stdint.h and not char, because the latter has implementation-defined signedness and is therefore overall unsuitable for holding raw binary data.
With that sorted, you can use a union for maximum flexibility:
#include <stdint.h>
#include <stdio.h>

typedef union
{
    uint32_t u32;
    uint8_t  u8 [4];
} uint32_union_t;

int main (void)
{
    uint32_union_t x;
    x.u32 = 1234567;

    for(size_t i=0; i<4; i++)
    {
        printf("%X ", x.u8[i]);
    }
    printf("\n");

    x.u32 = ~x.u32;

    for(size_t i=0; i<4; i++)
    {
        printf("%X ", x.u8[i]);
    }
    printf("\n");
}
Notably, the access order of the u8 bytes is endianness-dependent. This might be handy when dealing with something like RFID, which doesn't necessarily use the same byte order as your MCU.
Let me start by saying that I openly admit this is for a homework assignment, but what I am asking is not related to the purpose of the assignment, just something I don't understand in C. This is just a very small part of a large program.
So my issue is, I have a set of data that consists of various data types, as follows:
[16 bit number][16 bit number][16 bit number][char[234]][128 bit number]
where each block represents a variable from elsewhere in the program.
I need to send that data 8 bytes at a time into a function that accepts uint32_t[2] as an input. How do I convert my 234-byte char array into uint32_t without losing the char values?
In other words, I need to be able to convert back from the uint32_t version to the original char array later on. I know a char is 1 byte, and the value can also be represented as a number via its ASCII value, but I'm not sure how to convert between the two since some letters have a 3-digit ASCII value and others have 2.
I tried to use sprintf to grab 8-byte blocks from the data set and store that value in a uint32_t[2] variable. It works, but then I lose the original char array because I can't figure out a way to go back/undo it.
I know there has to be a relatively simple way to do this, I'm just lacking enough skill in C to make it happen.
Your question is very confusing, but I am guessing you are preparing some data structure for encryption by a function that requires 8 bytes or 2 uint32_t's.
You can convert a char array to uint32_t as follows
#include <stdint.h>
#include <string.h>

#define NELEM 234
char a[NELEM];
uint64_t b[(NELEM+sizeof(uint64_t)-1)/sizeof(uint64_t)]; // rounds up to the nearest multiple of 8 bytes
memcpy(b,a,NELEM);                                       // the last element is only partially filled
for(size_t i = 0; i < sizeof(b)/sizeof(b[0]); i++) {
    encryption_thing(b[i]);                              // each b[i] is one 8-byte chunk
}
Or, if you need to change endianness or something, that is more complicated.
#include <stdint.h>

void f(uint32_t a[2]) {}

int main() {
    char data[234]; /* GCC can explicitly align this with: __attribute__ ((aligned (8))) */
    int i = 0;
    int stride = 8;
    /* walks the buffer in 8-byte chunks; note that with 234 bytes the trailing 2 bytes are never passed to f() */
    for (; i < 234 - stride; i += stride) {
        f((uint32_t*)&data[i]);
    }
    return 0;
}
I need to send that data 8 bytes at a time into a function that accepts
uint32_t[2] as an input. How do I convert my 234-byte char array into
uint32_t without losing the char values?
You could use a union for this:
typedef union
{
    unsigned char arr[128]; // use unsigned char
    uint32_t uints[32];     // 128 bytes / 4 bytes per uint32_t
} myvaluetype;

myvaluetype value;
memcpy(value.arr, your_array, sizeof(value.arr));
Say the prototype that you want to feed 2 uint32_t at a time into is something like
foo(uint32_t* p);
You can now send the data 8 bytes at a time by
for (int i = 0; i < 32; i += 2)
{
    foo(value.uints + i);
}
Then use the same union to convert back; a short sketch follows below.
Of course, some care must be taken about padding/alignment. You also don't mention whether the data is sent over a network, etc., so there are other factors to consider.
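As for converting back: assuming foo() modified value.uints in place, the byte view of the same storage is immediately valid again; a short sketch continuing the snippet above:
unsigned char result[128];
/* arr and uints alias the same bytes, so just copy the byte view back out */
memcpy(result, value.arr, sizeof(value.arr));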
Can I declare an int array, then initialize it with chars? I'm trying to print out the state of a game after each move, therefore initially the array will be full of chars, then each move an entry will be updated to an int.
I think the answer is yes, this is permitted and will work because an int is 32 bits and a char is 8 bits. I suppose that each of the chars will be offset by 24 bits in memory from each other, since the address of the n+1'th position in the array will be n+32 bits and a char will only make use of the first 8.
It's not a homework question, just something that came up while I was working on homework. Maybe I'm completely wrong and it won't even compile the way I've set everything up?
EDIT: I don't have to represent them in a single array, as per the title of this post. I just couldn't think of an easier way to do it.
You can also make an array of unions, where each element is a union of either char or int. That way you can avoid having to do some type-casting to treat one as the other and you don't need to worry about the sizes of things.
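A minimal sketch of that idea (the type and field names here are only illustrative, and you still have to remember which member is currently in use):
#include <stdio.h>

typedef union {
    char c; /* initial board marker */
    int  n; /* value written by a move */
} cell_t;

int main(void)
{
    cell_t board[3] = { {.c = 'A'}, {.c = 'B'}, {.c = 'C'} };

    board[1].n = 42; /* a move overwrites the char with an int */

    printf("%c %d %c\n", board[0].c, board[1].n, board[2].c);
    return 0;
}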
int and char are both integer types, and any char value fits in an int (so supplying a char where an int is expected is safe), so in a nutshell, yes, you can do that.
Yes it would work, because a char is implicitly convertible to an int.
"I think the answer is yes, this is permitted and will work because an int is 32 bits and a char is 8 bits." this is wrong, an int is not always 32 bits. Also, sizeof(char) is 1, but not necessarily 8 bits.
As explained, char is an int-compatible type.
From your explanation, you might initially start with an array of int whose values are chars. Then, as the game progresses, the char values will no longer be relevant and will become int values. Yes?
IMHO the problem is not putting char into an int, that works and is built into the language.
IMHO using a union to allow the same piece of space to store either type helps, but is not important. Unless you are using an amazingly small microcontroller, the saving in space is not likely to be relevant.
I can understand why you might want to make it easy to write out the board, but I think that is a tiny part of writing a game, and it is best to keep things simple for the rest of the game, rather than focus on the first few lines of code.
Let's think about the program; consider how to print the board.
At the start it could be:
for (int i=0; i<states; ++i) {
printf("%c ", game_state[i]);
}
Then as the game progresses, some of those values will be int.
The issue to consider is "which format is needed to print the value in the 'cell'?".
The %c format prints a single char.
I presume you would like to see the int values printed differently from ordinary printed characters? For example, you want to see the int values as integers, i.e. strings of decimal (or hex) digits? That needs a '%d' format.
On my Mac I did this:
#include <stdio.h>

#define MAX_STATE (90)

int main (int argc, const char * argv[]) {
    int game_state[MAX_STATE];
    int state;
    int states;
    for (states=0; states<MAX_STATE; ++states) {
        game_state[states] = states+256+32;
    }
    for (int i=0; i<states; ++i) {
        printf("%c ", game_state[i]);
    }
    return 0;
}
The expression states+256+32 guarantees the output character codes are not ASCII, or even ISO-8859-1 and they are not control codes. They are just integers. The output is:
! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y
I think you'd like the original character to be printed (no data conversion) when the value is the initial character (%c format), but you do want to see data conversion, from a binary number to a string of digit-characters (%d or a relative format). Yes?
So how would the program tell which is which?
You could ensure the int values are not characters (as my program did). Typically, this becomes a pain, because you are restricted on values, and you end up using funny expressions everywhere else just to make that one job easier.
I think it is easier to use a flag which says "the value is still a char" or "the value is an int"
The small saving of space from using a union is rarely worthwhile, and there are advantages to having both the initial state and the current move available.
So I think you end up with something like:
#include <stdio.h>

#define MAX_STATE (90)

int main (int argc, const char * argv[]) {
    struct GAME { int cell_state; int move; char start_value; } game_state[MAX_STATE];
    enum CELL_STATE_ENUM { start_cell, move_cell };
    int state;
    int states;

    for (states=0; (state=getchar())!=EOF && states<MAX_STATE; ++states) {
        game_state[states].start_value = state;
        game_state[states].cell_state = start_cell;
    }
    // should be some error checking ...

    // ... make some moves ... this is nonsense but shows an idea
    // (can_make_move() and new_move() stand in for the real game logic)
    for (int i=0; i<states; ++i ) {
        if (can_make_move(i)) {
            game_state[i].cell_state = move_cell;
            game_state[i].move = new_move(i);
        }
    }

    // print the board
    for (int i=0; i<states; ++i) {
        if (game_state[i].cell_state == start_cell) {
            printf("'%c' ", game_state[i].start_value);
        } else if (game_state[i].cell_state == move_cell) {
            printf("%d ", game_state[i].move);
        } else {
            fprintf(stderr, "Error, the state of the cell is broken ...\n");
        }
    }
    return 0;
}
The move can be any convenient value, there is nothing to complicate the rest of the program.
Your intent can be made a little more clear by using int8_t or uint8_t from the stdint.h header. This way you say "I'm using an eight-bit integer, and I intend for it to be a number."
It's possible and very simple. Here is an example:
#include <stdio.h>

int main()
{
    // int array initialized with chars
    int arr[5] = {'A', 'B', 'C', 'D', 'E'};
    int i; // loop counter

    for (i = 0; i < 5; i++) {
        printf("Element %d is %d/%c\n", i, arr[i], arr[i]);
    }
    return 0;
}
The output is:
Element 0 is 65/A
Element 1 is 66/B
Element 2 is 67/C
Element 3 is 68/D
Element 4 is 69/E
I am having trouble understanding the output of the following simple CUDA code. All that the code does is allocate two integer arrays: one on the host and one on the device each of size 16. It then sets the device array elements to the integer value 3 and then copies these values into the host_array where all the elements are then printed out.
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    int num_elements = 16;
    int num_bytes = num_elements * sizeof(int);

    int *device_array = 0;
    int *host_array = 0;

    // malloc host memory
    host_array = (int*)malloc(num_bytes);

    // cudaMalloc device memory
    cudaMalloc((void**)&device_array, num_bytes);

    // Constant out the device array with cudaMemset
    cudaMemset(device_array, 3, num_bytes);

    // copy the contents of the device array to the host
    cudaMemcpy(host_array, device_array, num_bytes, cudaMemcpyDeviceToHost);

    // print out the result element by element
    for(int i = 0; i < num_elements; ++i)
        printf("%i\n", *(host_array+i));

    // use free to deallocate the host array
    free(host_array);

    // use cudaFree to deallocate the device array
    cudaFree(device_array);

    return 0;
}
The output of this program is 50529027 printed line by line 16 times.
50529027
50529027
50529027
..
..
..
50529027
50529027
Where did this number come from? When I replace 3 with 0 in the cudaMemset call, I get the correct behaviour, i.e. 0 printed line by line 16 times.
I compiled the code with nvcc test.cu on Ubuntu 10.10 with CUDA 4.0.
I'm no CUDA expert, but 50529027 is 0x03030303 in hex. This means cudaMemset sets each byte in the array to 3, not each int. This is not surprising given the signature of cudaMemset (you pass in the number of bytes to set) and the general semantics of memset operations.
Edit: As to your (I guess) implicit question of how to achieve what you intended, I think you have to write a loop and initialize each array element, for example as sketched below.
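A sketch of that approach, reusing the variable names from the question's code: fill a staging array on the host with an ordinary loop, then copy it over with cudaMemcpy.
// fill a host-side staging buffer element by element, then copy it to the device
int *init_array = (int*)malloc(num_bytes);
for (int i = 0; i < num_elements; ++i)
    init_array[i] = 3;                /* a per-element value, not a per-byte one */
cudaMemcpy(device_array, init_array, num_bytes, cudaMemcpyHostToDevice);
free(init_array);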
As others have pointed out, cudaMemset works like the standard C memset: it sets byte values. From the CUDA documentation:
cudaError_t cudaMemset( void * devPtr, int value, size_t count)
Fills the first count bytes of the memory area pointed to by devPtr
with the constant byte value value.
If you want to set word size values, the best solution is to use your own memset kernel, perhaps something like this:
template<typename T>
__global__ void myMemset(T * x, T value, size_t count )
{
    size_t tid = threadIdx.x + blockIdx.x * blockDim.x;
    size_t stride = blockDim.x * gridDim.x;

    for(size_t i=tid; i<count; i+=stride) {
        x[i] = value;
    }
}
which could be launched with enough blocks to cover the number of MP in your GPU, and each thread will do as many iterations as required to fill the memory allocation. Writes will be coalesced, so performance shouldn't be too bad. This could also be adapted to CUDA's vector types, if you so desired.
memset sets bytes, and an int is 4 bytes, so what you get is 50529027 decimal, which is 0x03030303 in hex: each of the four bytes set to 0x03 (3 * 0x01010101 = 3 * 16843009 = 50529027). In other words, you are using it wrong, and it has nothing to do with CUDA.
This is a classic memset pitfall: it operates on 8-bit units, i.e. bytes, regardless of the data type. This means it sets every byte of the memory region to 3, not every int. You can confirm this with a simple C++ snippet:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main ()
{
    int x = 16;
    size_t bytes = x*sizeof(int);
    int *M = (int*)malloc(bytes);
    memset(M,3,bytes);
    for (int i = 0; i < x; ++i) {
        printf("%d\n", M[i]);
    }
    free(M);
    return 0;
}
The only value for which memset gives the result you expected for every data type is 0 (it sets every byte to 0 and hence all data to 0). If you change the data type to char, you'll see the desired output. cudaMemset is essentially a copy of memset, with the only difference that it takes a GPU pointer as input.
So memset and cudaMemset set every byte of the memory region given by the third argument to the byte value you pass (in your case 3), regardless of the data type.
Tip:
Google: 50529027 in binary and you'll get the answer :)
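Or let printf do the Googling; a tiny check that the mystery number is just the byte 0x03 repeated four times (3 * 0x01010101 = 3 * 16843009 = 50529027):
#include <stdio.h>

int main(void)
{
    int x = 50529027;
    printf("%d = 0x%08X\n", x, (unsigned)x); /* 50529027 = 0x03030303 */
    printf("0x03030303 = %d\n", 0x03030303); /* 50529027 */
    return 0;
}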