I was wondering if you could define a type in bits.
Specifically, I want to define a 24-bit type, in order to store the cumulative number of packets lost in RTP.
If not, how can I memcpy 3 bytes from an int?
If I do this, I'm not sure how it'll end:
memcpy(pkg + 29, (&clamped_pkgs_lost)+(sizeof(char)), 3*sizeof (char));
You can define a type with at least 24 bits using a bitfield, but a bitfield must be a member of a struct:
struct {
    unsigned pkgs_lost : 24;
};
Whether you use such a bitfield, or just a simple type with at least 24 bits like unsigned long to store the value within your application, when you copy it to the RTP packet the simplest portable way to do it is to copy it a byte at a time. This is because the value in the RTP packet is always big-endian, and the endianness of your host is unknown.
Assuming that pkg is of type unsigned char *, you would do something like:
pkg[33] = pkgs_lost >> 16;
pkg[34] = pkgs_lost >> 8;
pkg[35] = pkgs_lost;
to place the 24-bit big-endian number at byte position 33 in the outgoing packet.
In C you can define integer types only in terms of the fundamental types or bitfields thereof.
Bitfields are quirky. You can't take their address. And they won't save you any space if you need just 24 bits, but your platform only has fundamental types of 8, 16 and 32 bits. You'd still need to use either 3 8-bit integers or 1 32-bit integer (or 1 16-bit and 1 8-bit) to store those 24 bits of yours.
For something as simple as a counter, I'd just use a 32-bit integer. If I'm interested in limiting it to 24 bit values, I have two options:
zeroing out the 8 most significant bits and thus simulating a wrap around
limiting the value to 2^24 - 1, so it never grows beyond it nor wraps around
You can store a narrow integer in a larger integer. Just mask-off the bits you want.
int main(void) {
    long data = 0x1234567;
    data &= 0xFFFFFF;   /* keep only the low 24 bits */
    return 0;
}
Or, you can define a bitfield on a structure member. But don't try to write the struct to disk and open it on a different system because bitfield layouts are not standardized.
struct {
    unsigned int data : 24;  /* bit-field types other than int/unsigned int/_Bool are not portable */
};
Related
Hi, I want to declare a 12-bit variable in C, or any "unconventional"-size variable (a variable whose size is not a power of two). How would I do that? I looked everywhere and I couldn't find anything. If that is not possible, how would you go about saving certain data in its own variable?
Use a bitfield:
struct {
    unsigned int twelve_bits : 12;
} wrapper;
Unlike Ada, C has no way to specify types with a limited range of values. C relies on predefined types with implementation defined characteristics, but with certain guarantees:
Types short and int are guaranteed by the standard to hold at least 16 bits, you can use either one to hold your 12 bit values, signed or unsigned.
Similarly, type long is guaranteed to hold at least 32 bits and type long long at least 64 bits. Choose the type that is large enough for your purpose.
Types int8_t, int16_t, int32_t, int64_t and their unsigned counterparts defined in <stdint.h> have more precise semantics but might not be available on all systems. Types int_least8_t, int_least16_t, int_least32_t and int_least64_t are guaranteed to be available, as well as similar int_fastXX_t types, but they are not used very often, probably because the names are somewhat cumbersome.
Finally, you can use bit-fields with widths from 1 bit up to the width of the declared type, but these are only available as struct members. Bit-fields of size one should be declared as unsigned, since a signed one-bit field can hold only the values 0 and -1.
Data is stored in bytes (8 bits each on virtually all modern machines).
In C, variables can be declared of 1 byte (a char, 8 bits), 2 bytes (a short int, 16 bits on many computers), and 4 bytes (a long int, 32 bits on many computers).
On a more advanced level, you are looking for "bitfields".
See this perhaps: bitfield discussion
I'm trying to celebrate 10,000,000 questions on StackOverflow with a simple console application written in C, but I don't want to waste any memory. What's the most efficient way to store the number 10,000,000 in memory?
The type you're looking for is int_least32_t, from stdint.h, which will give you the smallest type with at least 32 bits. This type is guaranteed to exist on C99 implementations.
Exact-width typedefs such as int32_t are not guaranteed to exist, though you'd be hard pressed to find a platform without it.
The number 10000000 (ten million) requires 24 bits to store as an unsigned value.
Most C implementations do not have a 24-bit type. Any implementation that conforms to the 1999 C standard or later must provide the <stdint.h> header, and must define all of:
uint_least8_t
uint_least16_t
uint_least32_t
uint_least64_t
each of which is (an alias for) an unsigned integer type with at least the specified width, such that no narrower integer type has at least the specified width. Of these, uint_least32_t is the narrowest type that's guaranteed to hold the value 10000000.
On the vast majority of C implementations, uint_least32_t is the type you're looking for -- but on an implementation that supports 24-bit integers, there will be a narrower type that satisfies your requirements.
Such an implementation would probably define uint24_t, assuming that its unsigned 24-bit type has no padding bits. So you could do something like this:
#include <stdint.h>
#ifdef UINT24_MAX
typedef uint24_t my_type;
#else
typedef uint_least32_t my_type;
#endif
That's still not 100% reliable (for example if there's a 28-bit type but no 24-bit type, this would miss it). In the worst case, it would select uint_least32_t.
If you want to restrict yourself to the predefined types (perhaps because you want to support pre-C99 implementations), you could do this:
#include <limits.h>
#define TEN_MILLION 10000000
#if UCHAR_MAX >= TEN_MILLION
typedef unsigned char my_type;
#elif USHRT_MAX >= TEN_MILLION
typedef unsigned short my_type;
#elif UINT_MAX >= TEN_MILLION
typedef unsigned int my_type;
#else
typedef unsigned long my_type;
#endif
If you merely want the narrowest predefined type that's guaranteed to hold the value 10000000 on all implementations (even if some implementations might have a narrower type that can hold it), use long (int can be as narrow as 16 bits).
If you don't require using an integer type, you can simply define a type that's guaranteed to be 3 bytes wide:
typedef unsigned char my_type[3];
But actually that will be wider than you need if CHAR_BIT > 8:
typedef unsigned char my_type[24 / CHAR_BIT];
but that will fail if 24 is not a multiple of CHAR_BIT.
Finally, your requirement is to represent the number 10000000; you didn't say you need to be able to represent any other numbers:
enum my_type { TEN_MILLION };
Or you can define a 1-bit bitfield with the value 1 denoting 10000000 and the value 0 denoting not 10000000.
Technically a 24-bit integer can store that, but there are no 24-bit primitive types in C. You will have to use a 32-bit int, or a long.
For performance that would be the best approach, wasting 1 unused byte of memory is irrelevant.
Now, if for study purposes you really want to store 10 million in the smallest piece of memory possible, and you are even willing to invent your own storage method to achieve that, you can fit it in 1 byte by borrowing the idea behind floating-point formats. You only need 4 bits to represent 10 and another 3 bits to represent 7, so one byte holds all the data you need to compute pow(10, 7). It even leaves you an extra free bit you can use as a sign.
The largest value that can be stored in an uint16_t is 0xFFFF, which is 65535 in decimal. That is obviously not large enough.
The largest value that can be stored in an uint32_t is 0xFFFFFFFF, which is 4294967295 in decimal. That is obviously large enough.
Looks like you'll need to use uint32_t.
If you want to store a number n > 0 in an unsigned integer data type, then you need at least ⌈lg (n+1)⌉ bits in your integer type. In your case, ⌈lg (10,000,000 + 1)⌉ = 24, so you'd need at least 24 bits in whatever data type you picked.
To the best of my knowledge, the C spec does not include an integer type that holds specifically 24 bits. The closest option would be to use something like uint32_t, but (as was mentioned in a comment) this type might not exist on all compilers. The type int_least32_t is guaranteed to exist, but it might be way larger than necessary.
Alternatively, if you just need it to work on one particular system, your specific compiler may have a compiler-specific data type for 24-bit integers. (Spoiler: it probably doesn't. ^_^)
The non-smart-alecky answer is uint32_t or int32_t.
But if you're really low on memory,
uint8_t millions_of_questions;
But only if you are a fan of fixed-point arithmetic and are willing to deal with some error (or are using some specialized numerical representation scheme). Depends on what you're doing with it after you store it and what other numbers you want to be storable in your "datatype".
typedef struct {
    uint16_t low;   // Lower 16 bits
    uint8_t  high;  // Upper 8 bits
} my_uint24_t;
This provides 24 bits of storage and can store values up to 16,777,215. It's not a native type and would need special accessor functions, but it takes less space than a 32-bit integer on most platforms (packing may be required).
uint32_t from <stdint.h> would probably be your go-to data type. Since uint16_t would only be able to store 2^16-1=65535, the next available type is uint32_t.
Note that uint32_t stores unsigned values; use int32_t if you want to store signed numbers as well. You can find more info, e.g., here: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdint.h.html
Is it possible to declare a bitfield of very large numbers e.g.
struct binfield{
uber_int field : 991735910442856976773698036458045320070701875088740942522886681;
}wordlist;
Just to clarify, I'm not trying to represent that number in 256 bits; that's how many bits I want to use. Or maybe there aren't that many bits in my computer?
C does not support numeric data types of arbitrary size. You can only use the integer sizes provided by the compiler, and when you want your code to be portable, you had better stick to the minimum guaranteed sizes for the standardized types: char (8 bits), short (16 bits), long (32 bits), and long long (64 bits).
But what you can do instead is create a char[]. A char is always at least 8 bits (and rarely more, except on some very exotic platforms), so you can use an array of char to store as many bit values as you can afford memory for. However, when you want to use a char array as a bitfield, you need some boilerplate code to access the correct byte.
For example, to get the value of bit n of a char array, use
(bitfield[n/8] >> (n % 8)) & 0x1
I want to define a datatype for boolean true/false value in C. Is there any way to define a datatype with 1 bit size to declare for boolean?
Maybe you are looking for a bit-field:
struct bitfield
{
    unsigned b0 : 1;
    unsigned b1 : 1;
    unsigned b2 : 1;
    unsigned b3 : 1;
    unsigned b4 : 1;
    unsigned b5 : 1;
    unsigned b6 : 1;
    unsigned b7 : 1;
};
There are so many implementation-defined features to bit-fields that it is almost unbelievable, but each of the elements of the struct bitfield occupies a single bit. However, the size of the structure may be 4 bytes even though you only use 8 bits (1 byte) of it for the 8 bit-fields.
There is no other way to create a single bit of storage. Otherwise, C provides bytes as the smallest addressable unit, and a byte must have at least 8 bits (historically, there were machines with 9-bit or 10-bit bytes, but most machines these days provide 8-bit bytes only — unless perhaps you're on a DSP where the smallest addressable unit may be a 16-bit quantity).
Try this:
#define bool int
#define true 1
#define false 0
In my opinion use a variable of type int. That is what we do in C. For example:
int flag=0;
if(flag)
{
//do something
}
else
{
//do something else
}
EDIT:
You can specify the width of each field of a bit-field structure in bits. However, the compiler rounds the structure up to at least the nearest byte (usually to the size of the underlying type), so you may save nothing; the fields are not addressable, and the order in which bits are laid out within a bit-field is implementation-defined. E.g.:
struct A {
    int a : 1;  // 1 bit wide
    int b : 1;
    int c : 2;  // 2 bits
    int d : 4;  // 4 bits
};
Bit-fields are only allowed inside structures. And other than bit-fields, no object is allowed to be smaller than sizeof(char).
The answer is _Bool or bool.
C99 and later have a built-in type _Bool which is guaranteed to be large enough to store the values 0 and 1. It is an unsigned integer type; as a standalone object it occupies at least one byte, though a _Bool bit-field may be a single bit.
They also provide the header <stdbool.h>, which defines a macro bool that expands to _Bool, and macros true and false that expand to 1 and 0 respectively.
If you are using an older compiler, you will have to fake it.
I have no idea what to call it, so I have no idea how to search for it.
unsigned int odd : 1;
Edit:
To elaborate, it comes from this snippet:
struct bitField {
    unsigned int odd : 1;
    unsigned int padding : 15;  // to round out to 16 bits
};
I gather this involves bits, but I'm still not all the way understanding.
They are bitfields. odd and padding will be stored in one unsigned int (16 bits in this example), where odd will typically occupy the lowest bit and padding the upper 15 bits of the unsigned int (the exact layout is implementation-defined).
It's a bitfield - Check the C FAQ.
It's:
1 bit of "odd" (e.g. 1)
15 bits of "padding" (e.g. 000000000000000)
and (potentially) whatever other bits round out the unsigned int. On modern platforms where unsigned int is 32 bits, the struct carries another 16 padding bits (and sizeof returns 4 in that case).
Bitfields can save memory but potentially add instructions to computations. In some cases compilers may ignore your bitfield settings. You can't make any assumptions about how the compiler will choose to actually lay out your bit field, and it can depend on the endianness of your platform.
The main thing I use bitfields for is when I know I will be doing a lot of copying of the data, and not necessarily a lot of computation on or reference of the specific fields in the bit field.