Is there a limit to the received packet count / bytes in Linux? - c

I want to write a program that reports the network's current received packet count and byte count.
I get these data from /proc/net/dev.
But I can't decide what type to use to store these values.
I just have a feeling that using unsigned long long int is wasteful.
Is there a limit for the received packet count / bytes, like RLIMIT_*?

uint64_t, uint_fast64_t, or unsigned long long are the correct types to use here. The first two are available from <stdint.h> or <inttypes.h>, and are what I'd recommend. unsigned long long is perfectly acceptable too. [*]
You are suffering from a misguided instinct towards premature optimization.
Even if you had a thousand of these counters (and you usually do not), they would take a paltry amount of RAM, some 8192 bytes. This is a tiny fraction of the RAM use of a typical userspace process, because even the standard C library (especially functions like printf(), and anything that does file I/O using <stdio.h>) uses a couple of orders of magnitude more.
So, when you worry about how much memory you're "wasting" by using an unsigned integer type that might be larger than strictly necessary for most cases, you're probably wasting an order of magnitude more by not choosing a better approach or a better algorithm in the first place.
It is make-work worry. There are bigger things you are not thinking about at all yet (because you lack the experience or the knowledge, or both) that affect the results you care about (efficiency, memory footprint, run time to complete the task at hand), often by an order of magnitude more than these small details. You need to learn to think of the big picture instead: Is this needed? Is this useful, or is there a better way to look at it?
[*] You can verify this by looking at how the data is generated, in net/core/net-procfs.c:dev_seq_printf_stats(), as well as at the underlying data structure, include/uapi/linux/if_link.h:struct rtnl_link_stats64.
__u64 is what the Linux kernel calls the type, and %llu is how the Linux kernel's seq_printf() implementation formats 64-bit unsigned integers.
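For illustration, a minimal sketch of reading those two receive counters into uint64_t; the interface name "eth0" is just an example, and error handling is kept minimal:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch: read the receive byte and packet counters for one
   interface from /proc/net/dev into uint64_t.  Returns 0 on success. */
static int read_rx_counters(const char *ifname, uint64_t *rx_bytes, uint64_t *rx_packets)
{
    FILE *fp = fopen("/proc/net/dev", "r");
    if (!fp)
        return -1;

    char line[512];
    int found = -1;

    while (fgets(line, sizeof line, fp)) {
        char *colon = strchr(line, ':');
        if (!colon)
            continue;                       /* skips the two header lines */
        *colon = '\0';

        char *name = line;
        while (*name == ' ')                /* interface names are space-padded */
            name++;

        if (strcmp(name, ifname) == 0) {
            /* The first two fields after the colon are RX bytes and RX packets. */
            if (sscanf(colon + 1, "%" SCNu64 " %" SCNu64, rx_bytes, rx_packets) == 2)
                found = 0;
            break;
        }
    }
    fclose(fp);
    return found;
}

int main(void)
{
    uint64_t bytes, packets;
    if (read_rx_counters("eth0", &bytes, &packets) == 0)
        printf("rx: %" PRIu64 " bytes, %" PRIu64 " packets\n", bytes, packets);
    return 0;
}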

Which type to use depends on pragmatic considerations like:
How long you want to keep counting received packets
What the worst-case average data rate is over that period
How many copies of this number you will be storing (or transferring over a network)
Once you've figured this out, calculate the rough maximum value of this number. Then, depending on how many times you want to store (or transfer) it, you can determine the storage (or transfer) volume per candidate type.
Finally, select the type that has a comfortably broad value margin and doesn't take up too much space.
I wouldn't expect long long to become wasteful quickly. However, when you're counting packets, a 4-byte integer seems to be more than sufficient, unless you're in an extreme environment with crazy data volumes.
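As a rough worked example of the estimate described above (the link speed, frame size, and recording period are made-up numbers, not measurements):

#include <stdint.h>
#include <stdio.h>

/* Back-of-the-envelope sketch of the worst-case estimate described above.
   All the input numbers are made-up examples. */
int main(void)
{
    const double bits_per_sec   = 1e9;                 /* assume a 1 Gbit/s link          */
    const double min_frame_size = 84.0;                /* minimal Ethernet frame, on-wire */
    const double seconds        = 365.0 * 24 * 3600;   /* record for one year             */

    double max_packets = bits_per_sec / 8.0 / min_frame_size * seconds;
    double max_bytes   = bits_per_sec / 8.0 * seconds;

    printf("worst-case packets: %.3e (uint32_t max: %.3e)\n", max_packets, (double)UINT32_MAX);
    printf("worst-case bytes:   %.3e (uint64_t max: %.3e)\n", max_bytes, (double)UINT64_MAX);
    return 0;
}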

Related

Is there an optimal batch size for arc4random_buf?

I need billions of random bytes from arc4random_buf, and my strategy is to request X random bytes at a time, and repeat this many times.
My question is how large should X be. Since the nbytes argument to arc4random_buf can be arbitrarily large, I suppose there must be some kind of internal loop that generates some entropy each time its body is executed. Say, if X is a multiple of the number of random bytes generated each iteration, the performance can be improved because I’m not wasting any entropy.
I’m on macOS, which is unfortunately closed-source, so I cannot simply read the source code. Is there any portable way to determine the optimal X?
Doing some benchmarks on typical target systems is probably the best way to figure this out, but looking at a couple of implementations, it seems unlikely that the buffer size will make much difference to the cost of arc4random_buf.
The original implementation implements arc4random_buf as a simple loop around a function which generates one byte. As long as the buffer is big enough to avoid excessive call overhead, it should make little difference.
The FreeBSD library implementation appears to attempt to optimise by periodically computing about 1K of random bytes. Then arc4random_buf uses memcpy to copy the bytes from the internal buffer to the user buffer.
For the FreeBSD implementation, the optimal buffer size would be the amount of data available in the internal buffer, because that minimizes the number of calls to memcpy. However, there's no way to know how much that is, and it will not be the same on every call because of the rekeying algorithm.
My guess is that you will find very little difference between buffer sizes greater than, say, 16K, and probably even less. For the FreeBSD implementation, it will be very slightly more efficient if your buffer size is a multiple of 8.
Addendum: All the implementations I know of have a global rekey threshold, so you cannot influence the cost of rekeying by changing the buffer size passed to arc4random_buf. The library simply rekeys every X bytes generated.
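If you want to check this on your own machine, a crude benchmark sketch along the following lines is usually enough to see whether the per-call buffer size matters at all. The total size and the candidate chunk sizes are arbitrary example values, and arc4random_buf is assumed to be available (macOS, the BSDs, and recent glibc):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Fill the same total number of bytes using different per-call buffer
   sizes and compare wall-clock time. */
int main(void)
{
    const size_t total = 1UL << 28;               /* 256 MiB generated per run */
    const size_t sizes[] = { 64, 1024, 16384, 262144, 1048576 };

    for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
        size_t chunk = sizes[i];
        unsigned char *buf = malloc(chunk);
        if (!buf)
            return 1;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t done = 0; done < total; done += chunk)
            arc4random_buf(buf, chunk);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("chunk %8zu bytes: %.3f s (%.1f MiB/s)\n",
               chunk, sec, total / sec / (1 << 20));
        free(buf);
    }
    return 0;
}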

Use int or char in arrays?

Suppose I have an array in C with 5 elements which hold integers in the range [0..255]; would it be generally better to use unsigned char, unsigned int, or int with regard to performance? A char would be only one byte, but an int is easier for the processor to handle, as far as I know. Or does it mostly depend on how the elements are accessed?
EDIT: Measuring is quite difficult, because the code belongs to a library and the array is accessed externally.
Also, I encounter this problem not only in this very case, so I'm asking for a more general answer.
While the answer really depends on the CPU and how it handles loading and storing small integers, you can assume that the byte array will be faster on most modern systems:
A char only takes 1/4 of the space that an int takes (on most systems), which means that working on a char array takes only a quarter of the memory bandwidth. And most code is memory bound on modern hardware.
This one is quite impossible to answer, because it depends on the code, compiler, and processor.
However, one suggestion is to use uint8_t instead of unsigned char. (The code will be the same, but the explicit version conveys the meaning much better.)
Then there is one more thing to consider. You might be best off by packing four 8-bit integers into one 32-bit integer. Most arithmetic and bitwise logical operations work fine as long as there are no overflows (division being the notable exception).
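A minimal sketch of that packing idea; the helper names pack4/unpack are purely illustrative:

#include <stdint.h>
#include <stdio.h>

/* Pack four 8-bit values into one uint32_t, as suggested above. */
static uint32_t pack4(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    return (uint32_t)a | ((uint32_t)b << 8) | ((uint32_t)c << 16) | ((uint32_t)d << 24);
}

static uint8_t unpack(uint32_t packed, unsigned lane)   /* lane in [0..3] */
{
    return (uint8_t)(packed >> (lane * 8));
}

int main(void)
{
    uint32_t p = pack4(10, 20, 200, 255);

    /* Lane-wise addition works as long as no individual lane overflows. */
    uint32_t q = p + pack4(1, 1, 1, 0);

    for (unsigned lane = 0; lane < 4; lane++)
        printf("lane %u: %u\n", lane, (unsigned)unpack(q, lane));
    return 0;
}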
The golden rule: Measure it. If you can't be bothered measuring it, then it isn't worth optimising. So measure it one way, change it, measure it the other way, and take whatever is faster. Be aware that when you switch to a different compiler, or to a different processor (say, one introduced in 2015 or 2016), the other version might now be faster.
Alternatively, don't measure it, but write the most readable and maintainable code.
And consider using a single 64 bit integer and shift operations :-)

uint32_t vs int as a convention for everyday programming

When should one use the datatypes from stdint.h?
Is it right to always use them as a convention?
What was the purpose of the design of nonspecific size types like int and short?
When should one use the datatypes from stdint.h?
When the programming task specifies the integer width, especially to accommodate some file or communication protocol format.
When a high degree of portability between platforms is required over performance.
Is it right to always use them as a convention (then)?
Things are leaning that way. The fixed width types are a more recent addition to C. Original C had char, short, int, long, and that was progressive, as it tried, without being too specific, to accommodate the various integer sizes available across a wide variety of processors and environments. As C is 40ish years old, it speaks to the success of that strategy. Much C code has been written and successfully copes with the soft integer specification size. With increasing needs for consistency, char, short, int, long and long long are not enough (or at least not so easy), and so int8_t, int16_t, int32_t, int64_t were born. New languages tend to require very specific fixed integer size types and 2's complement. As they succeed, that Darwinian pressure will push on C. My crystal ball says we will see a slow migration to increasing use of fixed width types in C.
What was the purpose of the design of nonspecific size types like int and short?
It was a good first step to accommodate the wide variety of various integer widths (8,9,12,18,36, etc.) and encodings (2's, 1's, sign/mag). So much coding today uses power-of-2 size integers with 2's complement, that one may not realize that many other arrangements existed beforehand. See this answer also.
My work demands that I use them and I actually love using them.
I find them useful when I have to implement a protocol and use them inside a structure, which can be a message that needs to be sent out or a holder of certain information.
If I have to use a sequence number that needs to be incremented, I wouldn't use int because sequence numbers aren't supposed to be negative. I use uint32_t instead. I will hence know the sequence number space and can plan/code accordingly.
The code we write will be running on 32- as well as 64-bit machines, so using "int" on machines of different widths results in subtle bugs which can be a pain to identify. Using uint16_t will allocate 16 bits on both 32- and 64-bit architectures.
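For example, a made-up message layout using fixed-width types might look like the following sketch; it is purely illustrative, and real protocol code would also pin down byte order and structure padding:

#include <stdint.h>

/* Illustrative wire message: every field has the same width on
   32- and 64-bit builds. */
struct message_header {
    uint16_t msg_type;     /* protocol-defined message type           */
    uint16_t payload_len;  /* payload length in bytes                 */
    uint32_t seq_num;      /* unsigned sequence number, wraps at 2^32 */
    uint64_t timestamp_ns; /* send time in nanoseconds                */
};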
No, I would say it's never a good idea to use those for general-purpose programming.
If you really care about number of bits, then go ahead and use them but for most general use you don't care so then use the general types. The general types might be faster, and they are certainly easier to read and write.
Fixed width datatypes should be used only when really required (e.g. when implementing transfer protocols, accessing hardware, or requiring a certain range of values; for the latter you should use the ..._least_... variants). Otherwise your program won't adapt to changed environments (e.g. using uint32_t for file sizes might have been OK 10 years ago, but off_t will adapt to recent needs). As others have pointed out, there might also be a performance impact, as int might be faster than uint32_t on 16-bit platforms.
int itself is very problematic due to its signedness; it is better to use e.g. size_t when a variable holds the result of strlen() or sizeof(), as sketched below.
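A small sketch of that last point, using nothing beyond the standard library:

#include <stdio.h>
#include <string.h>

/* Lengths and sizes are naturally size_t, not int: they cannot go
   negative and they match what strlen() and sizeof yield. */
static size_t total_length(const char **strings, size_t count)
{
    size_t total = 0;
    for (size_t i = 0; i < count; i++)
        total += strlen(strings[i]);
    return total;
}

int main(void)
{
    const char *words[] = { "fixed", "width", "types" };
    printf("%zu\n", total_length(words, 3));   /* prints 15 */
    return 0;
}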

How to choose the type based on the run-time needs?

I have the following situation and I'm expecting some expert advice from the SO folks.
I'm writing an application and as a part of that I need to expose an API for creation, modification and deletion of object(s). Each object that gets created should be uniquely identified (only with positive identifiers)!
The system will have the following number of objects in a given day.
Minimal - <50,000 objects (60% of the day, ~14.4/24 hrs)
Average - >50,000 but <65,000 objects (30% of the day, ~7.2/24 hrs)
Peak - >65,000 but <100,000 objects (10% of the day, ~2.4/24 hrs)
Now, the question is: what should be the type of the object identifier? Cases #1 and #2 will fit within an unsigned short int (2 bytes), but that cannot accommodate the objects in case #3. So case #3 needs a wider type, like int (4 bytes).
I don't want to use an int when the system is in case #1 or case #2 (90% of the time), because, say there are currently 65k objects in the system: if we use int to hold the object-id, then we will use double the memory compared to using unsigned short int. OTOH, when the system is at peak (10% of the time) we definitely need int to store the object-id.
And there could be times when the system fluctuates between cases #2 and #3 based on the users' needs.
In C, is there a way to handle this situation in a memory-efficient way, i.e. by changing the type of the object-id based on the usage at run-time?!
NOTE: when objects get deleted, the deleted object-id will be reused for the creation of the next object. Object-id wrapping will be done only in the corner case (if and when it is absolutely required).
The C language doesn't support changing the type of something dynamically. You could probably figure out how to do it one way or another, but it could involve compiling most of your code twice (once for the 16-bit ints and once for the 32-bit ints) and then choosing at run time which version of the code to run. This sounds like a massive pain, and it will only save you 200 kB of memory at most (if anything).
Your computer probably has gigabytes of memory already so I can't imagine 200 kB will make a difference. If you're actually working on an ancient machine with 16 MB of memory then ask your boss for a better machine. Programmers are expensive and hardware is cheap.
If memory usage is critical for you, you can use a composite id consisting of an unsigned short and an unsigned char: you'll get a 24-bit id, which is enough for 2^24 = 16777216 objects. Of course it will have some impact on performance, but this way you avoid reallocating the space used for identifiers.
If this is premature optimization, just don't do it.
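A sketch of that 24-bit idea (all names and sizes are illustrative). Note that a struct holding an unsigned short plus an unsigned char would usually be padded back up to 4 bytes, so to actually spend 3 bytes per id the two halves are kept in parallel arrays here:

#include <inttypes.h>
#include <stdio.h>

/* Each id is split across a uint16_t and a uint8_t, so a table of N ids
   costs 3*N bytes instead of 4*N.  The table size is an example value. */
enum { ID_TABLE_SIZE = 100000 };

static uint16_t id_low[ID_TABLE_SIZE];   /* low 16 bits of each id */
static uint8_t  id_high[ID_TABLE_SIZE];  /* high 8 bits of each id */

static void set_id(size_t slot, uint32_t id)   /* id must be < 2^24 */
{
    id_low[slot]  = (uint16_t)(id & 0xFFFFu);
    id_high[slot] = (uint8_t)((id >> 16) & 0xFFu);
}

static uint32_t get_id(size_t slot)
{
    return (uint32_t)id_low[slot] | ((uint32_t)id_high[slot] << 16);
}

int main(void)
{
    set_id(0, 70000);                          /* does not fit in 16 bits */
    printf("%" PRIu32 "\n", get_id(0));        /* prints 70000            */
    return 0;
}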
This appears to be a case of premature optimization, you're trying to optimize your memory foot print before you even know if it will become an issue for the production server you're running on.
As stated above, there are many issues to do with padding and alignment, which means that any saving you envisage could ultimately be rendered moot by the compiler. At the same time you're making your code harder to understand and debug with the proposed optimisation of changing the type of the object ID at runtime.
In other words, code it using the smallest type that fits the problem then optimize if it proves that the memory usage is too much. Even if you do get some errors because it is using too much memory, memory is cheap, buy more.
If memory efficiency matters, take a look at the trick that UTF-8 encoding uses: a variable-length encoding that spends more bytes only on the larger values.
This may be analogous to the implementation of the pid_t type for process-ids in Linux. The Linux user is given an option to increase the maximum number of processes/threads created by modifying the /proc/sys/kernel/pid_max file. Another idea is to try to create opaque types, as mentioned here.

Faster to use Integers as Booleans?

From a memory access standpoint... is it worth attempting an optimization like this?
int boolean_value = 0;
//magical code happens and boolean_value could be 0 or 1
if(boolean_value)
{
//do something
}
Instead of
unsigned char boolean_value = 0;
//magical code happens and boolean_value could be 0 or 1
if(boolean_value)
{
//do something
}
The unsigned char of course takes up only 1 byte as opposed to the int's 4 (assuming a 32-bit platform here), but my understanding is that it would be faster for a processor to read the integer value from memory.
It may or may not be faster, and the speed depends on so many things that a generic answer is impossible. For example: hardware architecture, compiler, compiler options, amount of data (does it fit into L1 cache?), other things competing for the CPU, etc.
The correct answer, therefore, is: try both ways and measure for your particular case.
If measurement does not indicate that one method is significantly faster than the other, opt for the one that is clearer.
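A crude measurement sketch in that spirit, as a starting point only: the array size is arbitrary, and a good optimizer may compile both loops to essentially the same code, so treat any numbers it prints with suspicion.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)          /* number of flags; an arbitrary example size */

static volatile long sink;   /* keeps the results from being optimized away */

/* Time a walk over a large array of int flags. */
static double time_int_flags(void)
{
    int *flags = malloc(N * sizeof *flags);
    if (!flags)
        exit(1);
    for (long i = 0; i < N; i++)
        flags[i] = rand() & 1;           /* values unknown at compile time */

    long count = 0;
    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        if (flags[i])
            count++;
    double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

    sink = count;
    free(flags);
    return sec;
}

/* Same walk over unsigned char flags. */
static double time_char_flags(void)
{
    unsigned char *flags = malloc(N);
    if (!flags)
        exit(1);
    for (long i = 0; i < N; i++)
        flags[i] = (unsigned char)(rand() & 1);

    long count = 0;
    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        if (flags[i])
            count++;
    double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

    sink = count;
    free(flags);
    return sec;
}

int main(void)
{
    printf("int flags:  %.4f s\n", time_int_flags());
    printf("char flags: %.4f s\n", time_char_flags());
    return 0;
}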
From a memory access standpoint... is it worth attempting an optimization like this?
Probably not. In almost all modern processors, memory will get fetched based on the word size of the processor. In your case, even to get one byte of memory out, your processor probably fetches the entire 32-bit word or more, depending on that processor's caching. Your architecture may vary, so you will want to understand how your CPU works to gauge this.
But as others have said, it doesn't hurt to try it and measure it.
This is almost never a good idea. Many systems can only read word-sized chunks from memory at once, so reading a byte then masking or shifting will actually take more code space and just as much (data) memory. If you're using an obscure tiny system, measure, but in general this will actually slow down and bloat your code.
Asking how much memory unsigned char takes versus int is only meaningful when it's in an array (or possibly a structure, if you're careful to order the elements to take care of alignment). As a lone variable, it's very unlikely that you save any memory at all, and the compiler is likely to generate larger code to truncate the upper bits of registers.
As a general policy, never use smaller-than-int types except in arrays unless you have a really good reason other than trying to save space.
Follow the standard rules of optimization. First, don't optimize. Then test if your code needs it at some point. Then optimize that point. This link provides an excellent intro to the topic of optimization.
http://www.catb.org/~esr/writings/taoup/html/optimizationchapter.html
