iOS memory warning and crash while comparing phone numbers from two large arrays - iOS 7.1

I need to compare an array coming from the server with my address book array and check for matching phone numbers; if a number isn't already present, I need to add that entry to my address book. Both arrays may be huge. While comparing the phone numbers in nested for loops, I see large memory leaks; memory grows to more than 1 GB for large data sets, and the app crashes under memory pressure when running on a device. How can I solve this problem?

Related

Is stack an efficient data structure?

If two stacks growing in opposite directions are implemented within the same array, would this be a better solution, or would it raise issues? Can somebody please explain?
A priori, there is no issue. You can totally grow two stacks in opposite directions within an array.
In general, I don't think it's better or worse than growing the two stacks in two different arrays. Since most libraries provide an implementation for stacks, we usually don't bother implementing our own 2-stacks-in-one-array and we use simple stacks as they are implemented.
In some particular cases, using one array instead of two might be better, and in some other particular cases, it might be worse.
When allocating an array, we ask the operating system for a contiguous region of memory. If you want to allocate two size-10000 arrays, you ask for a contiguous region of 10000 free spaces plus another contiguous region of 10000 free spaces. If you want to allocate one size-20000 array, you ask for a single contiguous region of 20000 free spaces.
Obviously, it's easier to find two regions of 10000 spaces each, than one contiguous region of 20000 spaces. So in the case of two huge stacks, it might be better to use two arrays.
However, if you are working with two stacks with the property that the total number of elements is approximately constant, but the distribution of the elements among the two stacks is changing, then it might be more efficient to store the two stacks in the same array. For instance, if you start with 10000 elements in the first stack and 0 in second, then at some point there are 5000 elements in each stack, then later there are no elements in first stack and 10000 elements in second stack, then storing the two stacks in the same array might be much more efficient than using two separate arrays.

How does array offset access actually work

We all are aware of how easy it is to access elements of an array in the blink of an eye:
#include <stdio.h>

int main(void)
{
    int array[10];
    array[5] = 6;              // "set at" operation at index 5
    printf("%d\n", array[5]);  // "get at" operation
    return 0;
}
The question may sound a bit stupid, but how does the compiler get you to the index you want, for storing or reading data, so fast? Does it traverse the array up to that index on its own to complete the set-at and get-at operations?
After all, the obvious approach would be: if you are asked to pick the 502nd element from a row of 1000 units, you would start counting until you reach 502 (501 in zero-based terms). Is that what happens inside the computer?
The array is stored in random-access memory (RAM). RAM is divided into equal-sized, individually addressable units, such as bytes. Addressable means enumerable by an address, which is a number. Random access means that the processor doesn't have to traverse addresses 0 through 499 in order to access location 500; it proceeds directly to 500. How it works is that the computer places a binary representation of the address 500 onto a collection of signal lines called the "address bus". All of the devices connected to the address bus simultaneously examine the address, and their circuitry answers the question "is this address in my range?". The device for which the answer is yes then springs into action.
In the case of RAM, its circuitry further decodes the address to determine which row and column of which bank to activate. The values read out are placed onto the data bus for the processor to collect. The actual implementation is considerably more complicated due to caching, but that's the basic idea.
The main idea is that the machine accesses memory and memory-like resources (such as I/O ports) using an address, and the address is distributed, as a set of electrical signals, in parallel to all of the devices, which can look at it at once; and those devices themselves have parallel circuitry to further analyze the address to identify a specific resource within their innards. So addressing happens very fast, without having to search through resources that are not being addressed.
C arrays are a very low-level concept. A C array sits at some address in memory and holds equal-sized objects, which are accessed by performing arithmetic. For instance, if the array elements are 8 bytes wide, then accessing the element at index 17 means the machine has to multiply 17 × 8 to produce the offset 136, which is then added to the address of the array to produce the address of the element.
In your program you have the expression array[5]. The value 5 is known to the C compiler at compile time, before the program is linked and executed. The size of the array elements, which are of type int, is also known at compile time. The address of array isn't known at compile time, however. Therefore the offset calculation likely takes place at compile time: the 5 is converted to a sizeof (int) * 5 offset, a constant such as 20, which is then added to the address of array at run time to calculate the address of array[5] and fetch its value from that address.

Cannot get memory allocated from `flex_array_alloc` when requesting a relatively big size in linux kernel

I'm doing some linux kernel development.
And I'm going to allocate some memory space with something like:
ptr = flex_array_alloc(size=136B, num=1<<16, GFP_KERNEL)
And ptr turns out to be NULL every time I try.
What's more, when I change the size to 20B or num to 256, there's nothing wrong and the memory can be obtained.
So I want to know if there are some limitations for requesting memory in linux kernel modules. And how to debug it or to allocate a big memory space.
Thanks.
And kzalloc shows similar behavior in my environment: requesting a 136B * (1<<16) space fails, while 20B or 1<<8 succeeds.
There are two limits on the size of an array allocated with flex_array_alloc. First, the object size itself must not exceed a single page, as indicated in https://www.kernel.org/doc/Documentation/flexible-arrays.txt:
The down sides are that the arrays cannot be indexed directly, individual object size cannot exceed the system page size, and putting data into a flexible array requires a copy operation.
Second, there is a maximum number of elements in the array.
Both limitations are the result of the implementation technique:
…the need for memory from vmalloc() can be eliminated by piecing together an array from smaller parts…
A flexible array holds an arbitrary (within limits) number of fixed-sized objects, accessed via an integer index.… Only single-page allocations are made…
The array is "pieced" together by using an array of pointers to the individual parts, where each part is one system page. Since this pointer array is also allocated, and only single-page allocations are made (as noted above), the maximum number of parts is slightly less than the number of pointers that fit in a page (slightly less because there is also some bookkeeping data). In effect, this limits the total size of a flexible array to about 2MB on systems with 8-byte pointers and 4KB pages. (The precise limit will vary depending on the amount of wasted space in a page if the object size is not a power of two.)

Theta of 1 in big arrays

If I successfully allocated an array of 1,000,000,000 members, how can I access the member at index 999,999,999 in Θ(1)?
According to the properties of arrays, access to any member should be Θ(1). However, isn't there some sort of internal loop that counts through the indices until it reaches the required member? If there is, shouldn't access be Θ(n)?
No, there's no internal loop. Arrays are random access, meaning any element can be accessed in Θ(1) time. All the computer has to do is take the array's starting address, add an offset to the desired element, and look up the value at the computed address.
In practice, you are unlikely to ever have an array with a billion elements. Arrays aren't well suited to such large data sets as they'd be several gigabytes or more in size. More sophisticated data structures and/or algorithms are typically employed. For instance, a naïve program might read a 2GB file into a 2GB byte array, whereas a smarter one would read it in small chunks, say 4KB at a time.
It is indeed Θ(1). When you declare arr = int[100000000], the variable arr stores the first address of the allocated memory.
When you write arr[n], it is evaluated as *(arr + n): n is added directly to the starting address, and the element is accessed at the resulting address.
Arrays are always stored contiguously.
For more info, please read https://www.ics.uci.edu/~dan/class/165/notes/memory.html
Ask in the comments if you need more resources.

Fortran: insufficient virtual memory

I - not a professional software engineer - am currently extending a quite large scientific software.
At runtime I get an error stating "insufficient virtual memory".
At this point during runtime, the program is using about 550 MB of working memory, and the error occurs when a rather big three-dimensional array is dynamically allocated. The array, if it could be allocated, would be about 170 MB in size. Adding this to the 550 MB already in use, the program would still be well below the 2 GB boundary set for 32-bit applications. There is also more than enough working memory available on the system.
Visual Studio is currently set that it allocates arrays on the stack. Allocating them on the heap does not make any difference anyway.
Splitting the array into smaller arrays (whose sizes sum to that of the one big array) results in the program running just fine. So I guess that the dynamically allocated memory has to be available in one contiguous block.
So there I am and I have no clue how to solve this. I can not deallocate some of the already used 550mb as the data is still required. I also can not change very much of the configuration (e.g. the compiler).
Is there a solution for my problem?
Thank you so much in advance and best regards
phroth248
The virtual memory is the memory your program can address. It is usually the sum of the physical memory and the swap space. For example, if you have 16GB of physical memory and 4GB of swap space, the virtual memory will be 20GB. If your Fortran program tries to allocate more than those 20 addressable GB, you will get an "insufficient virtual memory" error.
To get an idea of the required memory of your 3D array:
allocate (A(nx,ny,nz))
You have nx*ny*nz elements, and each element takes 8 bytes in double precision or 4 bytes in single precision. I'll let you do the math.
Some things:
1. It is usually preferable to allocate huge arrays using operating system services rather than language facilities. That will circumvent any underlying library problems.
2. You may have a problem with 550MB in a 32-bit system. Usually there is some division of the 4GB address space into dedicated regions.
3. You need to make sure you have enough virtual memory.
a) Make sure your page file space is large enough.
b) Make sure that your system is not configured to limit process address space sizes to smaller than what you need.
c) Make sure that your account settings are not limiting your process address space to smaller than allowed by the system.
