Determining TASK_SIZE from a C Program

TASK_SIZE is a kernel constant that defines the upper limit of the memory accessible to code running at the lowest privilege level, i.e. user space.
Its value is usually set to 0xC0000000 on systems with less than 1 GB of physical memory (all examples in this article refer to this value). The memory above this limit contains the kernel code.
Is there a way to determine the running kernel's TASK_SIZE from a C program?

After a lot of Google searching and analysis, I came up with the following logic.
Assume the total virtual address space is 4 GB and it is divided in a 1:3 (kernel:user) ratio.
Rough assumptions:
Kernel (upper 1 GB): 0xC0000000 - 0xFFFFFFFF
User space (lower 3 GB): 0x00000000 - 0xC0000000
Then:
#define GB 1073741824UL
unsigned int num;
unsigned long task_size;
task_size = ((unsigned long)&num + GB) / GB * GB;
(the process's stack area is allocated just below the kernel space)
So the address of num (on the stack) is somewhere near the top of the 3 GB range, e.g. 3214369612.
Adding 1 GB: 3214369612 + 1073741824 = 4288111436.
Dividing by 1 GB: 4288111436 / 1073741824 = 3.9936..., which truncates to 3 in unsigned integer arithmetic.
Multiplying by 1 GB: 3 * 1073741824 = 3221225472, i.e. 0xC0000000.
Hence I get the kernel starting address (TASK_SIZE).
I tried it assuming a 2:6 ratio as well and got the correct result.
Is this fair logic? Please comment.
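For what it's worth, here is that logic as a complete program. It is a sketch of the heuristic only, not an API: it assumes a 32-bit kernel with a GB-aligned user/kernel split and a stack that lives just below TASK_SIZE.

#include <stdio.h>

#define GB 1073741824UL

int main(void)
{
    unsigned long num;                       /* lives on the stack */
    unsigned long addr = (unsigned long)&num;
    /* Round the stack address up to the next 1 GB boundary. With addr just
       below 0xC0000000, (addr + GB) still fits in 32 bits, the truncating
       division yields 3, and 3 * GB is 0xC0000000. */
    unsigned long task_size = (addr + GB) / GB * GB;
    printf("address of num:    %#lx\n", addr);
    printf("guessed TASK_SIZE: %#lx\n", task_size);
    return 0;
}

On a 64-bit kernel, or with a non-default split (e.g. CONFIG_VMSPLIT_2G), the guess comes out wrong, which is the main weakness of the approach.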

Related

C Hexadecimal Memory locations

Okay, I'm working on learning C and working with some memory locations.
Assume the sizeof(double) is 8
and we have a double dbl_array[5] with a memory location of 0x7fffffffe360.
What would the memory location be for &(dbl_array[4])?
When running locally I can see that the location goes from b34b0 to b34d0, but I'm stuck on how to apply this to the assumed location.
Any tips would be amazing!
Calculate the address as base + 4*sizeof(double): 0x7fffffffe360 + 4*8 = 0x7fffffffe360 + 0x20 = 0x7fffffffe380.
What would the memory location be for &(dbl_array[4])?
Without knowing the type of computer and/or operating system running this program, this question cannot be answered; it is only possible to say which address this element would have on 99% of all computers:
If X is an array of the data type Y and the address of the element X[n] is Z, the address of the element X[n+1] is Z+sizeof(Y).
For this reason, the address of the element X[n+M] is Z+M*sizeof(Y).
The address of an array is the address of element X[0].
Now simply take a calculator that can handle hexadecimal numbers and perform the following calculation: 0x7fffffffe360 + 4 * 8 = 0x7fffffffe380
However, there are counterexamples where the address calculation is done differently: the "huge" memory model on x86-16, for example.
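To see the rule in action, here is a small self-contained check. The base address printed will of course differ from the 0x7fffffffe360 assumed in the question; the point is the 0x20-byte offset.

#include <stdio.h>

int main(void)
{
    double dbl_array[5];
    /* &dbl_array[4] is the base address plus 4 * sizeof(double). */
    printf("&dbl_array[0] = %p\n", (void *)&dbl_array[0]);
    printf("&dbl_array[4] = %p\n", (void *)&dbl_array[4]);
    printf("difference    = %td bytes\n",      /* 4 * 8 = 32 = 0x20 */
           (char *)&dbl_array[4] - (char *)&dbl_array[0]);
    return 0;
}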

What is size (in bytes) of Mbed TLS rsa_context?

I use Mbed TLS on an STM32F103 device, which has very little SRAM (20 KB).
I would like to calculate the RAM used by mbedtls_rsa_context.
How do I do this? Is it:
sizeof(mbedtls_rsa_context) + 13 * sizeof(mbedtls_mpi) + mbedtls_mpi_size(D) + ..... + mbedtls_mpi_size(Vf)
Thanks,
Regards.
Note that the struct mbedtls_rsa_context contains these 13 mbedtls_mpi structs, so sizeof(mbedtls_rsa_context) already includes the 13 * sizeof(mbedtls_mpi) part. There is no need to add it again.
As for the RAM that each mbedtls_mpi consumes: as you can see in mbedtls_mpi_grow, the size that is allocated is the number of limbs (X->n) multiplied by the number of chars per limb (ciL).
If you use mbedtls_mpi_size on every mpi, it gives you the size in bytes that the big integer uses without any leading zeros, even though leading zero limbs also consume RAM.
Note that this means accessing internal members of the struct, which is not recommended; however, there isn't any public API to get that information.
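A minimal sketch of that accounting, assuming Mbed TLS 2.x, where the mpi members of the context and the limb count n of mbedtls_mpi are directly visible (in 3.x they are marked private); as noted above, this peeks at internals with no API stability guarantee:

#include <mbedtls/rsa.h>
#include <stddef.h>

/* Heap bytes behind one mpi: mbedtls_mpi_grow allocates X->n limbs of
   sizeof(mbedtls_mpi_uint) each, including any leading zero limbs. */
static size_t mpi_heap_bytes(const mbedtls_mpi *X)
{
    return X->n * sizeof(mbedtls_mpi_uint);
}

/* Total RAM of an initialized context: the struct itself (which already
   contains the 13 mbedtls_mpi headers) plus each mpi's limb buffer. */
static size_t rsa_context_ram(const mbedtls_rsa_context *ctx)
{
    const mbedtls_mpi *mpis[] = {
        &ctx->N, &ctx->E, &ctx->D, &ctx->P, &ctx->Q,
        &ctx->DP, &ctx->DQ, &ctx->QP,
        &ctx->RN, &ctx->RP, &ctx->RQ, &ctx->Vi, &ctx->Vf
    };
    size_t total = sizeof(*ctx);
    for (size_t i = 0; i < sizeof(mpis) / sizeof(mpis[0]); i++)
        total += mpi_heap_bytes(mpis[i]);
    return total;
}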
If you are constrained with SRAM, have you considered using ECDSA keys, as same security strength keys consume less RAM?
Regards

maximum size for an array in Matlab

I tried zeros(1500*64) but it says "Maximum variable size allowed by the program is exceeded."
But [C,MAXSIZE] = COMPUTER returns MAXSIZE = 2.1475e+009
So why isn't it working? Also, after trying to issue this command on the Matlab command line a few times, I tried everything from zeros(500*64) to zeros(1500*64) to find the maximum allowed, and sometimes it returned "Maximum variable size allowed by the program is exceeded." for 500*64 and sometimes returned an "Out of memory." error. What could be the reason for that? This is what the memory command returns:
Maximum possible array:            486 MB (5.094e+008 bytes) *
Memory available for all arrays:  1436 MB (1.506e+009 bytes) **
Memory used by MATLAB:             353 MB (3.697e+008 bytes)
Physical Memory (RAM):            3070 MB (3.219e+009 bytes)
*  Limited by contiguous virtual address space available.
** Limited by virtual address space available.
Output of [u,s] = memory
[u, s] = memory

u =
    MaxPossibleArrayBytes: 509411328
    MemAvailableAllArrays: 1.5057e+009
    MemUsedMATLAB: 369819648

s =
    VirtualAddressSpace: [1x1 struct]
    SystemMemory: [1x1 struct]
    PhysicalMemory: [1x1 struct]
How do I calculate the maximum allowed size from this information, both in terms of number of elements and total bytes occupied?
The command
x = zeros(1500*64);
attempts to create a square matrix of double-precision zeros, 96000 elements per side, requiring 96000^2 * 8 bytes, i.e. about 73 gigabytes.
I suspect you want to use
x = zeros(1500,64);
which creates a 1500-by-64 array of double precision zeros, requiring 0.8 megabytes of memory.
When I google that error message, the first hit is a descriptive help page from MathWorks, the developer of MATLAB:
Why do I get the error 'Maximum variable size allowed by the program is exceeded.'?
According to that, you should use the computer command, not memory, to learn the maximum matrix size supported by your edition of MATLAB.
For the "Out of memory" error, take the "Maximum possible array: 486 MB (5.094e+008 bytes)" reported by memory and divide by the size of an array element (8 bytes for double-precision real values, which is what MATLAB uses by default). The reason it's so low is address space fragmentation; that is what the memory command is telling you when it says "Limited by contiguous virtual address space available".

How to Calculate FAT

I am learning about FAT file system and how to calculate FAT size. Now, I have this question:
Consider a disk whose size is 32 MB with a block size of 1 KB. Calculate the size of the FAT16.
Now, I know that to calculate it, we multiply the number of bits per entry by the number of blocks.
So the first step is to calculate the number of blocks = 32 MB / 1 KB = 2^15 = 32K blocks.
Then we put that into the first equation to get 16 * 2^15 = 2^4 * 2^15 = 2^19.
Now, up to here I understand, and I had thought that that was the answer (it is also how I found it calculated at http://pcnineoneone.com/howto/fat1.html).
However, the answer I was given goes one step further and divides 2^19 by (8*1024), which gives an answer of 64 KB. Why is that? I have searched for hours but could find nothing.
Can someone explain why we would perform the extra step of dividing 2^19 by (8*1024)?
Oh, and another question stated that the block size is 2 KB, and so it divided the end result by (8*1024*1024)... where are the 8 and 1024 coming from?
Please help.
You are using FAT16, so each cluster entry is represented with 16 bits, i.e. 16/8 = 2 bytes. Your 2^19 figure is in bits: to get the size in bytes, divide by 8; to get the result in kilobytes, divide by 8*1024.
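Putting the whole calculation together:

    number of blocks  = 32 MB / 1 KB = 2^25 / 2^10 = 2^15
    FAT size in bits  = 2^15 entries * 16 bits/entry = 2^19 bits
    2^19 bits / 8     = 2^16 bytes
    2^16 bytes / 1024 = 64 KB

The 8 converts bits to bytes, and each further 1024 moves up one unit (bytes to KB, KB to MB), which is where the 8*1024*1024 in the other question comes from.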

Array index limit in C

On Linux, with 16 GB of RAM, why would the following segfault:
#include <stdlib.h>

#define N 44000

int main(void) {
    long width = N*2 - 1;
    int *c = (int *) calloc(width*N, sizeof(int));
    c[N/2] = 1;
    return 0;
}
According to GDB the problem comes from c[N/2] = 1, but what is the reason?
It's probably because the return value of calloc was NULL.
The amount of physical RAM in your box does not directly correlate to how much memory you can allocate with calloc/malloc/realloc. That is more directly determined by the remaining amount of Virtual Memory available to your process.
Your calculation overflows the range of a 32-bit signed integer, which is what "long" may be. You should use size_t instead of long. This is guaranteed to be able to hold the size of the largest memory block that your system can allocate.
You're allocating around 14-15 GB of memory, and for whatever reason the allocator cannot give you that much at the moment, so calloc returns NULL and you segfault because you're dereferencing a NULL pointer.
Check if calloc returns NULL.
That's assuming you're compiling a 64-bit program under a 64-bit Linux. If you're doing something else, you might overflow the calculation of the first argument to calloc if a long is not 64 bits on your system.
For example, try
#include <stdlib.h>
#include <stdio.h>

#define N 44000L

int main(void)
{
    size_t width = N * 2 - 1;
    printf("Longs are %zu bytes. About to allocate %zu bytes\n",
           sizeof(long), width * N * sizeof(int));
    int *c = calloc(width * N, sizeof(int));
    if (c == NULL) {
        perror("calloc");
        return 1;
    }
    c[N / 2] = 1;
    return 0;
}
You are asking for 2.6 GB of RAM (no, you aren't: you are asking for about 14 GB on 64-bit; 2.6 GB is what the overflowed size calculation wraps to on 32-bit). Apparently, Linux's heap is utilized enough that calloc() can't allocate that much at once.
This works fine on Mac OS X (both 32- and 64-bit), but just barely (and would likely fail on a different system with a different dyld shared cache and frameworks).
And, of course, it should work dandy under 64 bit on any system (even the 32 bit version with the bad calculation worked, but only coincidentally).
One more detail; in a "real world app", the largest contiguous allocation will be vastly reduced as the complexity and/or running time of the application increases. The more of the heap that is used, the less contiguous space there is to allocate.
You might want to change the #define to:
#define N 44000L
just to make sure the math is being done at long resolution. You may be generating a negative number for the calloc.
calloc may be failing and returning NULL, which would cause the problem.
Dollars to donuts calloc() returned NULL because it couldn't satisfy the request, so attempting to dereference c caused the segfault. You should always check the result of *alloc() to make sure it isn't NULL.
Create a 14 GB file, and memory map it.
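A sketch of that approach, using the question's sizes and an illustrative file path; mmap hands out address space backed by the file, and pages are only materialized as they are touched:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define N 44000L

int main(void)
{
    size_t nbytes = (size_t)(N * 2 - 1) * N * sizeof(int);      /* ~15.5 GB */
    int fd = open("/tmp/bigarray.bin", O_RDWR | O_CREAT, 0600); /* illustrative path */
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)nbytes) != 0) { perror("ftruncate"); return 1; }
    int *c = mmap(NULL, nbytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (c == MAP_FAILED) { perror("mmap"); return 1; }
    c[N / 2] = 1;   /* touching a page faults it in on demand */
    munmap(c, nbytes);
    close(fd);
    return 0;
}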
