Is it possible to have bit fields array? [duplicate] - c

I was pondering (and therefore am looking for a way to learn this, not a better solution) whether it is possible to get an array of bits in a structure.
Let me demonstrate by an example. Imagine such a code:
#include <stdio.h>

struct A
{
    unsigned int bit0:1;
    unsigned int bit1:1;
    unsigned int bit2:1;
    unsigned int bit3:1;
};

int main()
{
    struct A a = {1, 0, 1, 1};
    printf("%u\n", a.bit0);
    printf("%u\n", a.bit1);
    printf("%u\n", a.bit2);
    printf("%u\n", a.bit3);
    return 0;
}
In this code, we have 4 individual bits packed in a struct. They can be accessed individually, leaving the job of bit manipulation to the compiler. What I was wondering is if such a thing is possible:
#include <stdio.h>

typedef unsigned int bit:1;

struct B
{
    bit bits[4];
};

int main()
{
    struct B b = {{1, 0, 1, 1}};
    for (int i = 0; i < 4; ++i)
        printf("%u\n", b.bits[i]);
    return 0;
}
I tried declaring bits in struct B as unsigned int bits[4]:1 or unsigned int bits:1[4] or similar things to no avail. My best guess was to typedef unsigned int bit:1; and use bit as the type, yet it still doesn't work.
My question is, is such a thing possible? If yes, how? If not, why not? The 1 bit unsigned int is a valid type, so why shouldn't you be able to get an array of it?
Again, I don't want a replacement for this, I am just wondering how such a thing is possible.
P.S. I am tagging this as C++, although the code is written in C, because I assume the method would be existent in both languages. If there is a C++ specific way to do it (by using the language constructs, not the libraries) I would also be interested to know.
UPDATE: I am completely aware that I can do the bit operations myself. I have done it a thousand times in the past. I am NOT interested in an answer that says use an array/vector instead and do bit manipulation. I am only thinking if THIS CONSTRUCT is possible or not, NOT an alternative.
Update: Answer for the impatient (thanks to neagoegab):
Instead of
typedef unsigned int bit:1;
I could use
typedef struct
{
    unsigned int value:1;
} bit;
together with proper use of #pragma pack.

NOT POSSIBLE - a construct like that is not possible (here).
One could try to do this, but the result will be that one bit is stored in one byte:
#include <cstdint>
#include <iostream>

using namespace std;

#pragma pack(push, 1)
struct Bit
{
    // one bit is stored in one BYTE
    uint8_t a_:1;
};
#pragma pack(pop)

typedef Bit bit;

struct B
{
    bit bits[4];
};

int main()
{
    struct B b = {{0, 0, 1, 1}};
    for (int i = 0; i < 4; ++i)
        cout << b.bits[i].a_ << endl;
    cout << sizeof(Bit) << endl;
    cout << sizeof(B) << endl;
    return 0;
}
output:
0 //bit[0] value
0 //bit[1] value
1 //bit[2] value
1 //bit[3] value
1 //sizeof(Bit), **one bit is stored in one byte!!!**
4 //sizeof(B), ** 4 bytes, each bit is stored in one BYTE**
In order to access individual bits from a byte, here is an example (please note that the layout of the bitfields is implementation-dependent):
#include <iostream>
#include <cstdint>

using namespace std;

#pragma pack(push, 1)
struct Byte
{
    Byte(uint8_t value):
        _value(value)
    {
    }
    union
    {
        uint8_t _value;
        struct {            // anonymous struct: a widely supported compiler extension
            uint8_t _bit0:1;
            uint8_t _bit1:1;
            uint8_t _bit2:1;
            uint8_t _bit3:1;
            uint8_t _bit4:1;
            uint8_t _bit5:1;
            uint8_t _bit6:1;
            uint8_t _bit7:1;
        };
    };
};
#pragma pack(pop)

int main()
{
    Byte myByte(8);
    cout << "Bit 0: " << (int)myByte._bit0 << endl;
    cout << "Bit 1: " << (int)myByte._bit1 << endl;
    cout << "Bit 2: " << (int)myByte._bit2 << endl;
    cout << "Bit 3: " << (int)myByte._bit3 << endl;
    cout << "Bit 4: " << (int)myByte._bit4 << endl;
    cout << "Bit 5: " << (int)myByte._bit5 << endl;
    cout << "Bit 6: " << (int)myByte._bit6 << endl;
    cout << "Bit 7: " << (int)myByte._bit7 << endl;
    if (myByte._bit3)
    {
        cout << "Bit 3 is on" << endl;
    }
}

In C++ you use std::bitset<4>. This will use a minimal number of words for storage and hide all the masking from you. It's really hard to separate the C++ library from the language because so much of the language is implemented in the standard library. In C there's no direct way to create an array of single bits like this, instead you'd create one element of four bits or do the manipulation manually.
EDIT:
The 1 bit unsigned int is a valid type, so why shouldn't you be able
to get an array of it?
Actually you can't use a 1-bit unsigned type anywhere other than in the context of creating a struct/class member. At that point it's so different from other types that it doesn't automatically follow that you could create an array of them.

C++ would use std::vector<bool> or std::bitset<N>.
In C, to emulate std::vector<bool> semantics, you use a struct like this:
struct Bits {
    size_t word_count;
    Word word[];   /* flexible array member must come last */
};
where Word is an implementation-defined type equal in width to the data bus of the CPU; wordsize, as used later on, is equal to the width of the data bus.
E.g. Word is uint_fast32_t for 32-bit machines, uint_fast64_t for 64-bit machines;
wordsize is 32 for 32-bit machines, and 64 for 64-bit machines.
You use functions/macros to set/clear/test bits:
#define GET_BIT(bits, bit)   ((bits)->word[(bit) / wordsize] &   ((Word)1 << ((bit) % wordsize)))
#define SET_BIT(bits, bit)   ((bits)->word[(bit) / wordsize] |=  ((Word)1 << ((bit) % wordsize)))
#define CLEAR_BIT(bits, bit) ((bits)->word[(bit) / wordsize] &= ~((Word)1 << ((bit) % wordsize)))
#define FLIP_BIT(bits, bit)  ((bits)->word[(bit) / wordsize] ^=  ((Word)1 << ((bit) % wordsize)))
To add resizeability as per std::vector<bool>, make a resize function which calls realloc on the whole Bits allocation and changes word_count accordingly. The exact details of this are left as an exercise.
The same applies for proper range-checking of bit indices.
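Below is a minimal sketch of how these pieces might fit together; the bits_new helper and the fixed 32-bit Word are assumptions for illustration, not part of the answer:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint_fast32_t Word;   /* assumed 32-bit data bus */
#define wordsize 32

struct Bits {
    size_t word_count;
    Word word[];              /* flexible array member must come last */
};

#define GET_BIT(bits, bit)   ((bits)->word[(bit) / wordsize] &   ((Word)1 << ((bit) % wordsize)))
#define SET_BIT(bits, bit)   ((bits)->word[(bit) / wordsize] |=  ((Word)1 << ((bit) % wordsize)))
#define CLEAR_BIT(bits, bit) ((bits)->word[(bit) / wordsize] &= ~((Word)1 << ((bit) % wordsize)))

/* hypothetical constructor: allocates enough zeroed words for n bits */
static struct Bits *bits_new(size_t n)
{
    size_t words = (n + wordsize - 1) / wordsize;
    struct Bits *b = calloc(1, sizeof *b + words * sizeof(Word));
    if (b)
        b->word_count = words;
    return b;
}

int main(void)
{
    struct Bits *b = bits_new(100);
    if (!b)
        return 1;
    SET_BIT(b, 42);
    printf("bit 42: %d\n", GET_BIT(b, 42) ? 1 : 0);   /* 1 */
    CLEAR_BIT(b, 42);
    printf("bit 42: %d\n", GET_BIT(b, 42) ? 1 : 0);   /* 0 */
    free(b);
    return 0;
}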

This is abusive, and relies on an extension... but it worked for me:

#include <stdio.h>

struct __attribute__ ((__packed__)) A
{
    unsigned int bit0:1;
    unsigned int bit1:1;
    unsigned int bit2:1;
    unsigned int bit3:1;
};

union U
{
    struct A structVal;
    int intVal;
};

int main()
{
    struct A a = {1, 0, 1, 1};
    union U u;
    u.structVal = a;
    for (int i = 0; i < 4; i++)
    {
        int mask = 1 << i;
        printf("%d\n", (u.intVal & mask) >> i);
    }
    return 0;
}

You can also use an array of integers (ints or longs) to build an arbitrarily large bit mask. The select() system call uses this approach for its fd_set type; each bit corresponds to the numbered file descriptor (0..N). Macros are defined: FD_CLR to clear a bit, FD_SET to set a bit, FD_ISSET to test a bit, and FD_SETSIZE is the total number of bits. The macros automatically figure out which integer in the array to access and which bit in the integer. On Unix, see "sys/select.h"; under Windows, I think it is in "winsock.h". You can use the FD technique to make your own definitions for a bit mask. In C++, I suppose you could create a bit-mask object and overload the [] operator to access individual bits.
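As an illustration, here is a minimal sketch of those POSIX macros in use; FD_ZERO, which initializes the set to empty, is also required even though the paragraph above doesn't mention it:

#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    fd_set set;

    FD_ZERO(&set);        /* clear all bits */
    FD_SET(3, &set);      /* set the bit for descriptor 3 */
    printf("fd 3 set? %d\n", FD_ISSET(3, &set) ? 1 : 0);  /* 1 */
    FD_CLR(3, &set);      /* clear it again */
    printf("fd 3 set? %d\n", FD_ISSET(3, &set) ? 1 : 0);  /* 0 */
    return 0;
}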

You can create a bit list by using a struct pointer. This will use more than a bit of space per bit written though, since each one-bit struct occupies at least one addressable byte:
struct bitfield {
    unsigned int bit : 1;
};

struct bitfield *bitstream;
Then after this:
bitstream = malloc(sizeof(struct bitfield) * numberofbitswewant);
You can access them like so:
bitstream[bitpointer].bit=...
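Assembled into a self-contained program (the bit count and the values written are arbitrary choices for illustration), the idea looks like this:

#include <stdio.h>
#include <stdlib.h>

struct bitfield {
    unsigned int bit : 1;
};

int main(void)
{
    size_t numberofbitswewant = 8;
    struct bitfield *bitstream = malloc(sizeof(struct bitfield) * numberofbitswewant);
    if (!bitstream)
        return 1;
    for (size_t i = 0; i < numberofbitswewant; i++)
        bitstream[i].bit = i & 1;       /* alternate 0,1,0,1,... */
    for (size_t i = 0; i < numberofbitswewant; i++)
        printf("%u", bitstream[i].bit); /* prints 01010101 */
    printf("\n");
    free(bitstream);
    return 0;
}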

Related

How to get the bit position of any member in structure

How can I get the bit position of any member in a structure?
For example:
typedef struct BitExamStruct_
{
    unsigned int v1: 3;
    unsigned int v2: 4;
    unsigned int v3: 5;
    unsigned int v4: 6;
} BitExamStruct;
Is there any macro to get the bit position of a member, like GetBitPos(v2, BitExamStruct)?
I thought the compiler might know each member's location based on the bit lengths in the structure, so I want to know whether I can get it with just a simple macro, without running code.
Thank you in advance.
There is no standard way that I know of to do so, but it doesn't mean you can't find a solution.
The following is not the prettiest code ever; it's a kind of hack to identify where the variable "begins" in memory. Please keep in mind that the following can give different results depending on the endianness:
#include <stdio.h>
#include <string.h>

typedef struct s_toto
{
    int a:2;
    int b:3;
    int c:3;
} t_toto;

int
main()
{
    t_toto toto;
    unsigned char *c;
    int bytes;
    int bits;

    memset(&toto, 0, sizeof(t_toto));
    toto.c = 1;
    c = (unsigned char *)&toto;
    for (bytes = 0; bytes < (int)sizeof(t_toto); bytes++)
    {
        if (*c)
            break;
        c++;
    }
    for (bits = 0; bits < 8; bits++)
    {
        if (*c & 0x80)   /* test the most significant bit */
            break;
        *c = (*c << 1);
    }
    printf("position (bytes=%d, bits=%d): %d\n", bytes, bits, (bytes * 8) + bits);
    return 0;
}
What I do is that I initialize the whole structure to 0 and I set 1 as the value of the variable I want to locate. The result is that only one bit is set to 1 in the structure. Then I read the memory byte by byte until I find one that's not zero. Once found, I can look at its bits until I find the one that's set.
There is no portable (aka standard C) way. But thinking outside the box, if you need full control or need this information badly, bitfields are the wrong approach. The proper solution is shifting and masking. Of course this is feasible only when you are in control of the source code.
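As a sketch of what that shifting-and-masking alternative looks like for a layout like the one above; the shift positions (V1_SHIFT and so on) are chosen by us here, which is precisely the control that bitfields don't give you:

#include <stdio.h>

/* pack v1 (3 bits), v2 (4 bits), v3 (5 bits), v4 (6 bits) into one word,
   at explicitly chosen positions instead of compiler-chosen ones */
#define V1_SHIFT 0
#define V2_SHIFT 3
#define V3_SHIFT 7
#define V4_SHIFT 12

#define GET_V2(w) (((w) >> V2_SHIFT) & 0xFu)   /* 4-bit mask */

int main(void)
{
    unsigned int w = 0;

    w |= (5u & 0xFu) << V2_SHIFT;   /* store v2 = 5 */
    printf("v2 = %u, starts at bit %d\n", GET_V2(w), V2_SHIFT);
    return 0;
}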

How to split and recombine an unsigned long into signed shorts?

I need to store a large number, but due to limitations in an old game engine, I am restricted to working with signed short (I can, however, use as many of these as I want).
I need to split an unsigned long (0 to 4,294,967,295) into multiple signed short (-32,768 to 32,767). Then I need to recombine the multiple signed short into a new unsigned long later.
For example, take the number 4,000,000,000. This should be split into multiple signed short and then recombined into unsigned long.
Is this possible in C? Thanks.
In addition to dbush's answer you can also use a union, e.g.:
union
{
    unsigned long longvalue;
    signed short shortvalues[2];
} value;
The array of two shorts overlays the single long value.
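A usage sketch, with the caveat that this assumes unsigned long is exactly twice the width of short, and that which half lands in shortvalues[0] depends on the machine's byte order:

#include <stdio.h>

union {
    unsigned long longvalue;
    signed short shortvalues[2];
} value;

int main(void)
{
    value.longvalue = 4000000000UL;
    /* split: read out the two halves */
    signed short a = value.shortvalues[0];
    signed short b = value.shortvalues[1];

    /* recombine later: write the halves back */
    value.shortvalues[0] = a;
    value.shortvalues[1] = b;
    printf("%lu\n", value.longvalue);   /* 4000000000 */
    return 0;
}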
I assume your problem is finding a place to store these large values. There are options we haven't yet explored which don't involve splitting the values up and recombining them:
Write them to a file, and read them back later. This might seem silly at first, but considering the bigger picture, if the values end up in a file later on then this might seem like the most attractive option.
Declare your unsigned long to have static storage duration, e.g. outside of any blocks of code (a.k.a. globally, though I hate that term) or using the static keyword inside a block of code.
None of the other answers so far are strictly portable, not that it seems like it should matter to you. You seem to be describing a two's complement 16-bit signed short representation and a 32-bit unsigned long representation (you should put assertions in place to ensure this is the case), which has implications that restrict the options for the implementation (that is, the C compiler, the OS, the CPU, etc.)... so the portability issues associated with them are unlikely to occur. In case you're curious, however, I'll discuss those issues anyway.
The associated portability issues are that one type or the other might have padding bits, causing the sizes to mismatch, and that there might be trap representations for short.
Changing the type but not the representation is far cleaner and easier to get right, though not portable; this includes the union hack. You could also avoid the union by casting an unsigned long * to a short *. These solutions are the cleanest, which makes Ken Clement's answer my favourite so far, despite the non-portability.
The bitwise shift (>> and <<), and (&), and or (|) operators introduce additional portability issues when you use them on signed types; they're also bulky and clumsy, leading to more code to debug and a higher chance that mistakes are made.
You need to consider that while ULONG_MAX is guaranteed to be at least 4,294,967,295, SHRT_MIN is not guaranteed by the C standard to be -32,768; it might be -32,767 (quite uncommon indeed, though still possible)... There might be a negative zero or trap representation in place of that -32,768 value.
This means you can't portably rely upon a pair of signed shorts being able to represent all of the values of an unsigned long; even when the sizes match up you need another bit to account for the two missing values.
With this in mind, you could use a third signed short holding 12-bit chunks, so that the sign bit is never touched... The implementation-defined and undefined behaviours of the shift approaches could be avoided that way.
signed short x = (value      ) & 0xFFF,
             y = (value >> 12) & 0xFFF,
             z = (value >> 24) & 0xFFF;

value = (unsigned long) x
      + ((unsigned long) y << 12)
      + ((unsigned long) z << 24);
You can do it like this (I used fixed size types to properly illustrate how it works):
#include <stdio.h>
#include <stdint.h>

int main()
{
    uint32_t val1;
    int16_t val2a, val2b;
    uint32_t val3;

    val1 = 0x11223344;
    printf("val1=%08x\n", val1);

    // to short
    val2a = val1 >> 16;
    val2b = val1 & 0xFFFF;
    printf("val2a=%04x\n", val2a);
    printf("val2b=%04x\n", val2b);

    // to long
    val3 = (uint32_t)val2a << 16;
    val3 |= (uint32_t)val2b & 0xFFFF;   // mask so a negative val2b can't corrupt the high half
    printf("val3=%08x\n", val3);
    return 0;
}
Output:
val1=11223344
val2a=1122
val2b=3344
val3=11223344
There are any number of ways to do it. One thing to consider is that unsigned long may not have the same size on different hardware/operating systems. You can use the exact-width types found in stdint.h to avoid ambiguity (e.g. uint8_t, uint16_t, etc.). One implementation incorporating exact types (and cheesy hex values) would be:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <limits.h>

int main (void) {

    uint64_t a = 0xfacedeadbeefcafe, b = 0;
    uint16_t s[4] = {0};
    uint32_t i = 0, n = 0;

    printf ("\n a : 0x%16"PRIx64"\n\n", a);

    /* separate uint64_t into 4 uint16_t */
    for (i = 0; i < sizeof a; i += 2, n++)
        printf (" s[%"PRIu32"] : 0x%04"PRIx16"\n", n,
                (s[n] = (a >> (i * CHAR_BIT))));

    /* combine 4 uint16_t into uint64_t */
    for (n = i = 0; i < sizeof b; i += 2, n++)
        b |= (uint64_t)s[n] << i * CHAR_BIT;

    printf ("\n b : 0x%16"PRIx64"\n\n", b);

    return 0;
}
Output
$ ./bin/uint64_16
a : 0xfacedeadbeefcafe
s[0] : 0xcafe
s[1] : 0xbeef
s[2] : 0xdead
s[3] : 0xface
b : 0xfacedeadbeefcafe
This is one possible solution (which assumes ulong is 32-bits, and sshort is 16-bits):
unsigned long L1, L2;
signed short S1, S2;

L1 = 0x12345678;   /* Initial ulong to store away into two sshort */
S1 = L1 & 0xFFFF;  /* Store component 1 */
S2 = L1 >> 16;     /* Store component 2 */
/* Retrieve ulong from two sshort; the casts keep sign extension from corrupting the high half */
L2 = (unsigned short)S1 | ((unsigned long)(unsigned short)S2 << 16);

/* Print results */
printf("Initial value: 0x%08lx\n", L1);
printf("Stored component 1: 0x%04hx\n", S1);
printf("Stored component 2: 0x%04hx\n", S2);
printf("Retrieved value: 0x%08lx\n", L2);

Manipulating bits in C. Is there a better way?

I have a program that uses the following two functions 99.9999% of the time:
unsigned int getBit(unsigned char *byte, unsigned int bitPosition)
{
    return (*byte & (1 << bitPosition)) >> bitPosition;
}

void setBit(unsigned char *byte, unsigned int bitPosition, unsigned int bitValue)
{
    *byte = (*byte | (1 << bitPosition)) ^ ((bitValue ^ 1) << bitPosition);
}
Can this be improved? The processing speed of the program mainly depends on the speed of these two functions.
UPDATE
I will do a benchmark for each provided answer below and write down the timings I get. For reference, the compiler used is gcc on the Mac OS X platform:
Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
I compile without any specific arguments like: gcc -o program program.c
If you think I should set some optimizations, feel free to suggest.
The CPU is:
2.53 GHz Intel Core 2 Duo
While processing 21.5 MB of data with my originally provided functions it takes about:
Time: 13.565221
Time: 13.558416
Time: 13.566042
Time is in seconds (these are three tries).
-- UPDATE 2 --
I've used the -O3 optimization (gcc -O3 -o program program.c) option and now I'm getting these results:
Time: 6.168574
Time: 6.170481
Time: 6.167839
I'll redo the other benchmarks now...
If you want to stick with functions, then for the first one:
unsigned int getBit(unsigned char *byte, unsigned int bitPosition)
{
    return (*byte >> bitPosition) & 1;
}
For the second one:
void setBit(unsigned char *byte, unsigned int bitPosition, unsigned int bitValue)
{
    if (bitValue == 0)
        *byte &= ~(1 << bitPosition);
    else
        *byte |= (1 << bitPosition);
}
However, I suspect that the function call/return overhead will swamp the actual bit-flipping. A good compiler might inline these function calls anyways, but you may get some improvement by defining these as macros:
#define getBit(b, p) ((*(b) >> (p)) & 1)
#define setBit(b, p, v) (*(b) = ((v) ? (*(b) | (1 << (p))) : (*(b) & (~(1 << (p))))))
@user694733 pointed out that branch prediction might be a problem and could cause a slowdown. As such it might be good to define separate setBit and clearBit functions:
void setBit(unsigned char *byte, unsigned int bitPosition)
{
    *byte |= (1 << bitPosition);
}

void clearBit(unsigned char *byte, unsigned int bitPosition)
{
    *byte &= ~(1 << bitPosition);
}
And their corresponding macro versions:
#define setBit(b, p) (*(b) |= (1 << (p)))
#define clearBit(b, p) (*(b) &= ~(1 << (p)))
The separate functions/macros would be useful if the calling code hard-codes the value passed for the bitValue argument in the original version.
Share and enjoy.
How about:
bool getBit(unsigned char byte, unsigned int bitPosition)
{
    return (byte & (1 << bitPosition)) != 0;
}
No need to use a shift operator to "physically" shift the masked-out bit into position 0, just use a comparison operator and let the compiler deal with it. This should of course also be made inline if possible.
For the second one, it's complicated by the fact that it's basically "assignBit", i.e. it takes the new value of the indicated bit as a parameter. I'd try using the explicit branch:
unsigned char setBit(unsigned char byte, unsigned int bitPosition, bool value)
{
    const uint8_t mask = 1 << bitPosition;
    if (value)
        return byte | mask;
    return byte & ~mask;
}
Generally, these things are best left to the compiler's optimizer.
But why do you need functions for such trivial tasks? A C programmer should not get shocked when they encounter basic stuff like this:
x |= 1<<n; // set bit
x &= ~(1<<n); // clear bit
x ^= 1<<n; // toggle bit
y = x & (1<<n); // read bit
There is no real reason to hide simple things like these behind functions. You won't make the code more readable, because you can always assume that the reader of your code knows C. It rather seems like pointless wrapper functions to hide away "scary" operators that the programmer isn't familiar with.
That being said, the introduction of the functions may cause a lot of overhead code. To turn your functions back into the core operations shown above, the optimizer would have to be quite good.
If you for some reason persist in using the functions, any attempt at manual optimization is going to be questionable practice. The use of inline, register and such keywords is likely superfluous. The compiler with the optimizer enabled should be far more capable of deciding when to inline and when to put things in registers than the programmer.
As usual, it doesn't make sense to manually optimize code, unless you know more about the given CPU than the person who wrote the compiler port for it. Most often this is not the case.
What you can harmlessly do as manual optimization is to get rid of unsigned char (you shouldn't be using the native C types for this anyhow). Instead use the uint_fast8_t type from stdint.h. Using this type means: "I would like to have a uint8_t, but if the CPU prefers a larger type for alignment/performance reasons, it can use that instead".
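A minimal sketch of that substitution, keeping the question's function shapes (and assuming bitValue is 0 or 1):

#include <stdint.h>
#include <stdio.h>

/* same operations as before; only the storage type is left to the implementation */
unsigned int getBit(uint_fast8_t *byte, unsigned int bitPosition)
{
    return (*byte >> bitPosition) & 1u;
}

void setBit(uint_fast8_t *byte, unsigned int bitPosition, unsigned int bitValue)
{
    *byte = (uint_fast8_t)((*byte & ~((uint_fast8_t)1 << bitPosition))
                         | ((uint_fast8_t)bitValue << bitPosition));
}

int main(void)
{
    uint_fast8_t b = 0;
    setBit(&b, 3, 1);
    printf("%u\n", getBit(&b, 3));  /* 1 */
    return 0;
}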
EDIT
There are different ways to set a bit to either 1 or 0. For maximum readability, you would write this:
uint8_t val = either_1_or_0;
...
if (val == 1)
    byte |= 1 << n;
else
    byte &= ~(1 << n);
This does however include a branch. Let's assume we know that the branch is a known performance bottleneck on the given system, to justify the otherwise questionable practice of manual optimization. We could then set the bit to either 1 or 0 without a branch, in the following manner:
byte = (byte & ~(1<<n)) | (val<<n);
And this is where the code is turning a bit unreadable. Read the above as:
Take the byte and preserve everything in it, except for the bit we want to set to 1 or 0.
Clear this bit.
Then set it to either 1 or 0.
Note that the whole right side sub-expression is pointless if val is zero. So on a "generic system" this code is possibly slower than the readable version. So before writing code like this, we would have to know that our CPU is very good at bit-flipping and not-so-good at branch prediction.
You can benchmark with the following variations and keep the best of all solutions.
inline unsigned int getBit(unsigned char *byte, unsigned int bitPosition)
{
    const unsigned char mask = (unsigned char)(1U << bitPosition);
    return !!(*byte & mask);
}

inline void setBit(unsigned char *byte, unsigned int bitPosition, unsigned int bitValue)
{
    const unsigned char mask = (unsigned char)(1U << bitPosition);
    bitValue ? (*byte |= mask) : (*byte &= ~mask);
}
If your algorithm expects only a zero vs. non-zero result from getBit, you can remove the !! from the return. (To return 0 or 1, I found the version from @BobJarvis really clean.)
If your algorithm can pass the bit mask to be set or reset to the setBit function, you won't need to calculate the mask explicitly; a sketch of that variant follows below.
So depending on the code calling these functions, it may be possible to cut down on time.
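Here is a sketch of that mask-passing variant; the name setBitMask is made up for illustration:

#include <stdio.h>

/* hypothetical variant: the caller precomputes the mask once and reuses it */
static inline void setBitMask(unsigned char *byte, unsigned char mask,
                              unsigned int bitValue)
{
    if (bitValue)
        *byte |= mask;
    else
        *byte &= (unsigned char)~mask;
}

int main(void)
{
    unsigned char b = 0;
    const unsigned char mask5 = 1u << 5;   /* computed once, e.g. outside a loop */

    setBitMask(&b, mask5, 1);
    printf("0x%02x\n", b);   /* 0x20 */
    return 0;
}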

C variable smaller than 8-bit

I'm writing a C implementation of Conway's Game of Life and am pretty much done with the code, but I'm wondering what the most efficient way to store the net in the program is.
The net is two-dimensional and stores whether cell (x, y) is alive (1) or dead (0). Currently I'm doing it with unsigned char, like this:
struct:
typedef struct {
    int rows;
    int cols;
    unsigned char *vec;
} net_t;
allocation:
n->vec = calloc(n->rows * n->cols, sizeof(unsigned char));
filling:
i = (n->cols * (x - 1)) + (y - 1);
n->vec[i] = 1;
searching:
if (n->vec[i] == 1)
but I don't really need 0-255 values - I only need 0 - 1, so I'm feeling that doing it like that is a waste of space, but as far as I know 8-bit char is the smallest type in C.
Is there any way to do it better?
Thanks!
The smallest addressable unit of memory you can declare and use is a single byte, implemented as unsigned char in your case.
If you want to really save on space, you could make use of masking off individual bits in a character, or using bit fields via a union. The trade-off will be that your code will execute a bit slower, and will certainly be more complicated.
#include <stdio.h>

union both {
    struct {
        unsigned char b0: 1;
        unsigned char b1: 1;
        unsigned char b2: 1;
        unsigned char b3: 1;
        unsigned char b4: 1;
        unsigned char b5: 1;
        unsigned char b6: 1;
        unsigned char b7: 1;
    } bits;
    unsigned char byte;
};

int main ( ) {
    union both var;
    var.byte = 0xAA;

    if ( var.bits.b0 ) {
        printf("Yes\n");
    } else {
        printf("No\n");
    }

    return 0;
}
References
Union and Bit Fields, Accessed 2014-04-07, <http://www.rightcorner.com/code/CPP/Basic/union/sample.php>
Access Bits in a Char in C, Accessed 2014-04-07, <https://stackoverflow.com/questions/8584577/access-bits-in-a-char-in-c>
Struct - Bit Field, Accessed 2014-04-07, <http://cboard.cprogramming.com/c-programming/10029-struct-bit-fields.html>
Unless you're working on an embedded platform, I wouldn't be too concerned about the size your net takes up by using an unsigned char to store only a 1 or 0.
To address your specific question: char is the smallest of the C data types. char, signed char, and unsigned char are all only going to take up 1 byte each.
If you want to make your code smaller you can use bitfields to decrease the amount of space you take up, but that will increase the complexity of your code.
For a simple exercise like this, I'd be more concerned about readability than size. One way you can make it more obvious what you're doing is switch to a bool instead of a char.
#include <stdbool.h>

typedef struct {
    int rows;
    int cols;
    bool *vec;
} net_t;
You can then use true and false which, IMO, will make your code much easier to read and understand when all you need is 1 and 0.
It will take up at least as much space as the way you're doing it now, but like I said, consider what's really important in the program you're writing for the platform you're writing it for... it's probably not the size.
The smallest types in C, as far as I know, are char, signed char (-128 to 127), and unsigned char (0 to 255); each of them takes a whole byte, so if you are storing multiple single-bit values in different variables, you can instead use an unsigned char as a group of bits.
unsigned char lives = 128;
At this moment, lives has the decimal value 128, which is 10000000 in binary, so now you can use a bitwise operator to get a single value from this variable (like an array of bits):
if ((lives >> 7) == 1) {
    // This code will run if the 8th bit from the right (decimal 128) is set
}
It's a little more complex, but in the end you get a bit array, so instead of using multiple variables to store single TRUE / FALSE values, you can use a single unsigned char variable to store 8 TRUE / FALSE values.
Note: as I've been out of the C/C++ world for some time, I'm not 100% sure it's lives >> 7, but it does use the > symbol; a little research on it and you'll be ready to go.
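A small self-contained sketch of the idea (the bit positions chosen are arbitrary):

#include <stdio.h>

int main(void)
{
    unsigned char lives = 0;

    lives |= 1u << 7;               /* set bit 7: lives is now 128 (10000000) */
    lives |= 1u << 0;               /* set bit 0: lives is now 129 (10000001) */

    if ((lives >> 7) & 1u)
        printf("bit 7 is set\n");

    lives &= ~(1u << 7);            /* clear bit 7 again */
    printf("lives = %d\n", lives);  /* 1 */
    return 0;
}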
You're correct that a char is the smallest type - it is typically (8) bits, which is the minimum the standard requires. And sizeof(char) or (unsigned char) is (1). So, consider using an (unsigned) char to represent (8) columns.
How many chars are required per row? It's (cols / 8), but we have to round up for an integer value:
int byte_cols = (cols + 7) / 8;
or:
int byte_cols = (cols + 7) >> 3;
which you may wish to store within the net_t data structure. Then:
calloc(n->rows * n->byte_cols, 1) is sufficient for a contiguous bit vector.
Address columns and rows by x and y respectively. Setting (x, y) (relative to 0):
n->vec[y * byte_cols + (x >> 3)] |= (1 << (x & 0x7));
Clearing:
n->vec[y * byte_cols + (x >> 3)] &= ~(1 << (x & 0x7));
Searching:
if (n->vec[y * byte_cols + (x >> 3)] & (1 << (x & 0x7)))
    /* ... (x, y) is set... */
else
    /* ... (x, y) is clear... */
These are bit manipulation operations. And it's fundamentally important to learn how (and why) this works. Google the term for more resources. This uses an eighth of the memory of a char per cell, so I certainly wouldn't consider it premature optimization.
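A compact sketch tying these pieces together; the function names are invented for illustration and error handling is minimal:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int rows, cols, byte_cols;
    unsigned char *vec;
} net_t;

/* hypothetical helpers wrapping the expressions above */
static void net_set(net_t *n, int x, int y)
{
    n->vec[y * n->byte_cols + (x >> 3)] |= (unsigned char)(1 << (x & 0x7));
}

static int net_get(const net_t *n, int x, int y)
{
    return (n->vec[y * n->byte_cols + (x >> 3)] >> (x & 0x7)) & 1;
}

int main(void)
{
    net_t n = { .rows = 4, .cols = 10 };
    n.byte_cols = (n.cols + 7) >> 3;
    n.vec = calloc((size_t)n.rows * n.byte_cols, 1);
    if (!n.vec)
        return 1;

    net_set(&n, 9, 3);
    printf("(9,3) = %d\n", net_get(&n, 9, 3));  /* 1 */
    printf("(0,0) = %d\n", net_get(&n, 0, 0));  /* 0 */

    free(n.vec);
    return 0;
}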

Sign extension from 16 to 32 bits in C

I have to do a sign extension for a 16-bit integer and for some reason, it seems not to be working properly. Could anyone please tell me where the bug is in the code? I've been working on it for hours.
int signExtension(int instr) {
    int value = (0x0000FFFF & instr);
    int mask = 0x00008000;
    int sign = (mask & instr) >> 15;
    if (sign == 1)
        value += 0xFFFF0000;
    return value;
}
The instruction (instr) is 32 bits and inside it I have a 16bit number.
What is wrong with:
int16_t s = -890;
int32_t i = s; //this does the job, doesn't it?
What's wrong with using the builtin types?
int32_t signExtension(int32_t instr) {
    int16_t value = (int16_t)instr;
    return (int32_t)value;
}
or better yet (this might generate a warning if passed an int32_t)
int32_t signExtension(int16_t instr) {
    return (int32_t)instr;
}
or, for all that matters, replace signExtension(value) with ((int32_t)(int16_t)value)
You obviously need to include <stdint.h> for the int16_t and int32_t data types.
Just bumped into this looking for something else, maybe a bit late, but maybe it'll be useful for someone else. AFAIAC all C programmers should start off programming assembler.
Anyway sign extending is much easier than the proposals. Just make sure you are using signed variables and then use 2 shifts.
long value;                    // 32-bit storage assumed
value = 0xffff;                // 16-bit 2's complement -1, value is now 0x0000ffff
value = ((value << 16) >> 16); // value is now 0xffffffff
If the variable is signed then the C compiler translates >> to Arithmetic Shift Right, which preserves the sign. Strictly speaking this behaviour is implementation-defined in the C standard, but virtually every compiler implements it this way.
So, assuming value holds the 9-bit two's complement number 0x1ff in 16-bit storage, << 7 will SL (Shift Left) the value so it is now 0xff80, then >> 7 will ASR (Arithmetic Shift Right) it so the value is now 0xffff.
If you really want to have fun with macros, then try something like this (the syntax works in GCC; I haven't tried it in MSVC):
#include <stdio.h>

#define INT8  signed char
#define INT16 signed short
#define INT32 signed long
#define INT64 signed long long

#define SIGN_EXTEND(to, from, value) ((INT##to)((INT##to)(((INT##to)value) << (to - from)) >> (to - from)))

int main(int argc, char *argv[], char *envp[])
{
    INT16 value16 = 0x10f;
    INT32 value32 = 0x10f;

    printf("SIGN_EXTEND(8,3,6)=%i\n", SIGN_EXTEND(8,3,6));
    printf("LITERAL SIGN_EXTEND(16,9,0x10f)=%i\n", SIGN_EXTEND(16,9,0x10f));
    printf("16 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=%i\n", SIGN_EXTEND(16,9,value16));
    printf("32 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=%i\n", SIGN_EXTEND(16,9,value32));

    return 0;
}
This produces the following output:
SIGN_EXTEND(8,3,6)=-2
LITERAL SIGN_EXTEND(16,9,0x10f)=-241
16 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=-241
32 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=-241
Try:
int signExtension(int instr) {
    int value = (0x0000FFFF & instr);
    int mask = 0x00008000;
    if (mask & instr) {
        value += 0xFFFF0000;
    }
    return value;
}
People pointed out casting and a left shift followed by an arithmetic right shift. Another way that requires no branching:
((0xffff & n) ^ 0x8000) - 0x8000
If the upper 16 bits are already zeroes:
(n ^ 0x8000) - 0x8000
Community wiki as it's an idea from "The Aggregate Magic Algorithms, Sign Extension".
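A quick sketch checking the identity on a few boundary values (it relies on the usual two's complement conversion when casting back to a signed type):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t tests[] = { 0x0000u, 0x7fffu, 0x8000u, 0xffffu };

    for (int i = 0; i < 4; i++) {
        uint32_t n = tests[i];
        int32_t extended = (int32_t)(((n & 0xffffu) ^ 0x8000u) - 0x8000u);
        printf("0x%04x -> %d\n", (unsigned)n, (int)extended);
    }
    /* 0x0000 -> 0, 0x7fff -> 32767, 0x8000 -> -32768, 0xffff -> -1 */
    return 0;
}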
