I save the data read from a microphone via the *HAL_I2S_Receive_DMA* API using a circular buffer.
I store the received data in an int16_t data_i2s[SAMPLE_COUNT] array. Because the buffer is circular, if I don't move the data to a 'safe' location in time, it gets overwritten every time the buffer wraps around.
I think there are two solutions:
1) Every time the *HAL_I2S_RxCpltCallback* function is triggered, I move/store half of the circular buffer to another array or to memory.
2) I don't move 'data blocks' as in case 1), but instead I move each sample into another array in real time as it is received (actually directly into an SRAM address chosen by me before debugging).
I want to implement this second option, but I don't really know the syntax/commands to do it.
In particular, I previously used a for loop to move each element from an array into SRAM, but I don't know how to do that here; I think that was an 'end of read' solution, i.e. for when you already have all the data stored in the array and are no longer receiving.
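For reference, option 1) is usually handled with the DMA half-transfer and transfer-complete callbacks. Here is a minimal sketch, assuming a SAMPLE_COUNT-sized DMA buffer and a hypothetical safe_buffer destination; the HAL header name depends on your MCU family:

#include <string.h>
#include <stdint.h>
#include "stm32f4xx_hal.h"   /* adjust to the HAL header for your MCU */

#define SAMPLE_COUNT 1024               /* illustrative size */
extern int16_t data_i2s[SAMPLE_COUNT];  /* circular DMA buffer */
static int16_t safe_buffer[SAMPLE_COUNT];

void HAL_I2S_RxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
    (void)hi2s;
    /* First half of the circular buffer is ready: copy it away */
    memcpy(&safe_buffer[0], &data_i2s[0],
           (SAMPLE_COUNT / 2) * sizeof(int16_t));
}

void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
    (void)hi2s;
    /* Second half of the circular buffer is ready: copy it away */
    memcpy(&safe_buffer[SAMPLE_COUNT / 2], &data_i2s[SAMPLE_COUNT / 2],
           (SAMPLE_COUNT / 2) * sizeof(int16_t));
}

If you really want a fixed SRAM address as in option 2), you can make the destination a pointer to that address, e.g. int16_t *safe_buffer = (int16_t *)0x20010000; (the address here is purely illustrative), and copy into it the same way.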
Related
I'm still a little new to C and I'm a little confused about storing data in a buffer.
So let me first explain what I have achieved so far and what I want to achieve next.
Achieved
I am able to continuously read data from a sensor (let's say), store it in a buffer, and send the data serially over Wirepas.
To do
I know that Wirepas allows a payload buffer of 102 bytes.
What I want to do is store the sensor data in a contiguous buffer and send the complete buffer at once, so that I am using the full bandwidth of Wirepas.
Let's say each of my sensor data packets is 27 bytes long, so I can fit 3 complete packets in a single buffer and then send it over Wirepas.
Now I want to know how to combine and store the data in a single buffer so that I utilize the complete buffer.
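A minimal sketch of that packing idea, assuming a fixed 27-byte packet and a 102-byte transmit buffer; queue_packet() and wirepas_send() are placeholder names, not the actual Wirepas API:

#include <stdint.h>
#include <string.h>

#define PACKET_LEN     27
#define TX_BUFFER_LEN  102

extern void wirepas_send(const uint8_t *buf, size_t len);  /* placeholder */

static uint8_t tx_buffer[TX_BUFFER_LEN];
static size_t  tx_fill;   /* bytes currently stored in tx_buffer */

/* Call this each time a new 27-byte sensor packet is available. */
void queue_packet(const uint8_t packet[PACKET_LEN])
{
    memcpy(&tx_buffer[tx_fill], packet, PACKET_LEN);
    tx_fill += PACKET_LEN;

    /* Not enough room for another full packet: send what we have
     * (3 packets, 81 bytes, with these sizes) and start filling again. */
    if (tx_fill + PACKET_LEN > TX_BUFFER_LEN) {
        wirepas_send(tx_buffer, tx_fill);
        tx_fill = 0;
    }
}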
The default way to implement dynamic arrays is to use realloc. Once len == capacity we use realloc to grow our array. This can cause the whole array to be copied to another heap location. I don't want this copying to happen, since I'm designing a dynamic array that should be able to store a large number of elements, and the system that will run this code won't be able to handle such a heavy operation.
Is there a way to achieve that?
I'm fine with losing some performance - O(log N) lookups instead of O(1) are okay. I was thinking that I could use a hashtable for this, but it looks like I'm stuck in a chicken-and-egg situation, since in order to implement such a hashtable I would need a dynamic array in the first place.
Thanks!
Not really, not in the general case.
The copy happens when the memory manager can't increase the current allocation and needs to move the memory block somewhere else.
One thing you can try is to allocate fixed-size blocks and keep a dynamic array of pointers to the blocks. This way the blocks themselves never need to be reallocated, keeping the large payloads in place. If you do need to reallocate, you only reallocate the array of pointers, which should be much cheaper (moving 8 bytes per block instead of 1 or more MB). In the ideal case the block size is about sqrt(N), so a single fixed size doesn't work well in the fully general case (any fixed size will be too large or too small for some values of N).
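A minimal sketch of that idea, assuming int elements and a fixed block size; the names are illustrative only:

#include <stdlib.h>

#define BLOCK_SIZE 4096   /* elements per block */

typedef struct {
    int   **blocks;   /* small dynamic array of pointers to fixed-size blocks */
    size_t  n_blocks;
    size_t  len;      /* total number of elements stored */
} block_array_t;

static int block_array_push(block_array_t *a, int value)
{
    if (a->len == a->n_blocks * BLOCK_SIZE) {
        /* Only this small pointer table is ever reallocated; the big
         * data blocks themselves never move. */
        int **p = realloc(a->blocks, (a->n_blocks + 1) * sizeof *p);
        if (!p)
            return -1;
        a->blocks = p;
        a->blocks[a->n_blocks] = malloc(BLOCK_SIZE * sizeof(int));
        if (!a->blocks[a->n_blocks])
            return -1;
        a->n_blocks++;
    }
    a->blocks[a->len / BLOCK_SIZE][a->len % BLOCK_SIZE] = value;
    a->len++;
    return 0;
}

static int block_array_get(const block_array_t *a, size_t i)
{
    return a->blocks[i / BLOCK_SIZE][i % BLOCK_SIZE];
}

Initialise with block_array_t arr = {0}; and push/get as usual. Access stays O(1), because finding the block and the offset is just a division and a remainder.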
If you are not against small allocations, you could use a singly linked list of tables, where each new table doubles the capacity and becomes the front of the list.
If you want to access the last element, you can just get the value from the last allocated block, which is O(1).
If you want to access the first element, you have to run through the list of allocated blocks to get to the correct block. Since the length of each block is two times the previous one, this means the access complexity is O(logN).
This data structure relies on the same principle that dynamic arrays use (doubling the capacity when expanding), but instead of copying the values after allocating a new block, it keeps a link to the previous block, so accessing the earlier blocks adds overhead but accessing the last one does not.
The index is not a position in a specific block, but in an imaginary concatenation of all the blocks, starting from the first allocated block.
Thus, this data structure cannot be implemented as a recursive type, because it needs a wrapper keeping track of the total capacity to compute which block is referred to.
For example:
There are three blocks, of sizes 100, 200, 400.
Accessing the 150th value (index 149 if counting from 0) means the 50th value of the second block. The interface needs to know that the total capacity is 700 and compare the index to 700 - 400 = 300 to determine whether the index refers to the last block (if the index is 300 or above) or to a previous block.
Then the interface repeats the comparison against the capacity of the remaining blocks (300 - 200 = 100); since index 149 is at least 100, it knows the value is in the second block, at offset 149 - 100 = 49.
This algorithm can have as many iterations as there are blocks, which is O(logN).
Again, if you only try to access the last value, the complexity becomes O(1).
If you have concerns about copy times in real-time applications or with large amounts of data, this data structure could be better than contiguous storage, where you occasionally have to copy all of your data.
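A sketch of the index lookup in that doubling-block list, using the example sizes above (100, 200, 400); the types are illustrative, not a full implementation:

#include <stddef.h>

typedef struct block {
    struct block *prev;    /* previously allocated (smaller) block */
    size_t        cap;     /* capacity of this block */
    int           data[];  /* the stored values */
} block_t;

typedef struct {
    block_t *last;         /* most recently allocated (largest) block */
    size_t   total_cap;    /* sum of all block capacities, e.g. 700 */
} geo_array_t;

/* Walk back through at most log2(N) blocks: O(log N) in general,
 * O(1) when the index falls in the last block. */
static int geo_array_get(const geo_array_t *a, size_t index)
{
    const block_t *b = a->last;
    size_t before = a->total_cap - b->cap;  /* elements held by earlier blocks */

    while (index < before) {                /* not in this block: step back */
        b = b->prev;
        before -= b->cap;
    }
    return b->data[index - before];         /* e.g. index 149 -> block 2, offset 49 */
}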
I ended up with the following:
Implement "small dynamic array" that can grow, but only up to some maximum capacity (e.g. 4096 words).
Implement an rbtree
Combine them together to make a "big hash map", where "small array" is used as a table and a bunch of rbtrees are used as buckets.
Use this hashmap as a base for a "big dynamic array", using indexes as keys
While the capacity is less than the maximum, the table grows according to the load factor. Once the capacity has reached the maximum, the table stops growing and new elements are simply inserted into the buckets. In theory this structure should give O(log(N/k)) lookups, where k is the (capped) table capacity.
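Roughly, the shape of that combination as described; rb_tree_t and its operations are placeholders for the rbtree part, which is not shown:

#include <stddef.h>

#define MAX_TABLE 4096             /* cap on the "small dynamic array" */

typedef struct rb_tree rb_tree_t;  /* bucket: an rbtree keyed by the index */

typedef struct {
    rb_tree_t **table;     /* grows like a normal dynamic array, up to MAX_TABLE */
    size_t      capacity;  /* current table size (<= MAX_TABLE) */
    size_t      len;       /* total number of elements ("array length") */
} big_dynarray_t;

/* Element i is stored in table[i % capacity] under key i. A lookup hashes
 * to its bucket in O(1) and then searches the tree, which holds roughly
 * N / capacity elements, giving the O(log(N/k)) behaviour mentioned above. */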
Let's say I have a 2D array. Along the first axis I have a series of properties for one individual measurement. Along the second axis I have a series of measurements.
So, for example, the array could look something like this:
         personA   personB   personC
height   1.8       1.75      2.0
weight   60.5      82.0      91.3
age      23.1      65.8      48.5
or anything similar.
I want to change the size of the array very often - for example, ignoring personB's data and including personD and personE. I will be looping through "time", probably with >10^5 timesteps. Each timestep, there is a chance that each "person" in the array could be deleted and a chance that they will introduce several new people into the simulation.
From what I can see there are several ways to manage an array like this:
Overwriting and infrequent reallocation
I could use a very large array with an extra row, in which I put a "skip" flag for each person. So, if I decide I no longer need personB, I set the flag to 1 and ignore personB every time I loop through the list of people. When I need to add personD, I search through the list for the first person with skip == 1, replace the data with the data for personD, and set skip = 0. If there aren't any people with skip == 1, I copy the array, deallocate it, reallocate it with several more columns, and then fill the first new column with personD's data. (A sketch of this approach is shown after the pros and cons below.)
Advantages:
infrequent allocation - possibly better performance
easy access to array elements
easier to optimise
Disadvantages:
if my array shrinks a lot, I'll be wasting a lot of memory
I need a whole extra row in the data, and I have to perform checks to make sure I don't use the irrelevant data. If the array shrinks from 1000 people to 1, I'm going to have to loop through 999 extra records
could encounter memory issues, if I have a very large array to copy
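A rough sketch of how option 1 might look in C, with persons as columns and a parallel skip array; the names, the fixed three properties, and the growth amount are all illustrative:

#include <stdlib.h>
#include <string.h>

#define N_PROPS 3   /* height, weight, age */

typedef struct {
    double *data;     /* N_PROPS values per person, stored person by person */
    int    *skip;     /* 1 = slot unused, 0 = slot holds a live person */
    size_t  capacity; /* number of person slots allocated */
} population_t;

/* Reuse a skipped slot if one exists; otherwise grow by `extra` slots.
 * Returns the slot index, or -1 on allocation failure. */
static long add_person(population_t *p, const double props[N_PROPS], size_t extra)
{
    for (size_t i = 0; i < p->capacity; i++) {
        if (p->skip[i]) {
            memcpy(&p->data[i * N_PROPS], props, N_PROPS * sizeof(double));
            p->skip[i] = 0;
            return (long)i;
        }
    }

    /* No free slot: the infrequent reallocation path. */
    size_t new_cap = p->capacity + extra;
    double *d = realloc(p->data, new_cap * N_PROPS * sizeof *d);
    if (!d) return -1;
    p->data = d;
    int *s = realloc(p->skip, new_cap * sizeof *s);
    if (!s) return -1;
    p->skip = s;

    for (size_t i = p->capacity; i < new_cap; i++)
        p->skip[i] = 1;                       /* mark the new slots as free */

    memcpy(&p->data[p->capacity * N_PROPS], props, N_PROPS * sizeof(double));
    p->skip[p->capacity] = 0;
    long idx = (long)p->capacity;
    p->capacity = new_cap;
    return idx;
}

Removing a person is then just p->skip[i] = 1;.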
Frequent reallocation
Every time I want to add or remove some data, I copy and reallocate the entire array.
Advantages:
I know every piece of data in the array is relevant, so I don't have to check them
easy access to array elements
no wasted memory
easier to optimise
Disadvantages:
probably slow
could encounter memory issues, if I have a very large array to copy
A linked list
I refactor everything so that each individual's data includes a pointer to the next individual's data. Then, when I need to delete an individual I simply remove it from the pointer chain and deallocate it, and when I need to add an individual I just add some pointers to the end of the chain.
Advantages:
every record in the chain is relevant, so I don't have to do any checks
no wasted memory
less likely to encounter memory problems, as I don't have to copy the entire array at once
Disadvantages:
no easy access to array elements. I can't slice with data(height,:), for example
more difficult to optimise
I'm not sure how this option will perform compared to the other two.
--
So, to the questions: are there other options? When should I use each of these options? Is one of these options better than all of the others in my case?
I am currently writing a mail client in C. But I have a question about storing the password.
I just want to store it as long as the program runs.
As a password is a "string" I could store it in a char array, which gets overwritten shortly before the program ends. But this would be relatively insecure.
How can I securely store a password just during the program's runtime?
You're quite right to avoid holding the password in cleartext.
If you have to retain the password (for instance, you need to send it to the server periodically), it's best to keep it in an encrypted form. Which encrypted form is up to you. This makes it harder for someone doing memory profiling on the program to see the password. (Harder, not impossible; if you have to send the password to a server, and someone has physical access to the machine to do memory profiling, all you can do is make it difficult.)
When using it, you'd do this:
Allocate a temporary array
Decrypt the password into that array
Use the array (for instance, to send the password to the server)
Overwrite the array
Release the array
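A minimal sketch of that pattern, assuming hypothetical decrypt_password() and send_to_server() helpers; the wipe goes through a volatile pointer so the compiler can't optimise it away (explicit_bzero() or memset_s() are alternatives where available):

#include <stdlib.h>

extern size_t decrypt_password(char *out, size_t out_cap);   /* placeholder */
extern void   send_to_server(const char *pw, size_t len);    /* placeholder */

static void secure_wipe(void *buf, size_t len)
{
    volatile unsigned char *p = buf;
    while (len--)
        *p++ = 0;
}

void use_password(void)
{
    size_t cap = 128;
    char *tmp = malloc(cap);                  /* 1. allocate a temporary array */
    if (!tmp)
        return;

    size_t len = decrypt_password(tmp, cap);  /* 2. decrypt into it */
    send_to_server(tmp, len);                 /* 3. use it */

    secure_wipe(tmp, cap);                    /* 4. overwrite it */
    free(tmp);                                /* 5. release it */
}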
If you want to be really paranoid, when overwriting the array, do several passes in sequence, for instance (this is just an example):
All zeros
All ones
Choose a random byte value
Fill the array with that byte value
Fill again with the value bitshifted once
Fill again with the value bitshifted again
Fill again with the value bitshifted a third time
Fill again with the value bitshifted a fourth time
Fill again with the value bitshifted a fifth time
Fill again with the value bitshifted a sixth time
Fill again with the value bitshifted a seventh time
All ones
All zeros
...as apparently some researchers have been able, on occasion, to retrieve an echo of previous data through forensic analysis of the memory cells in RAM. But I think this falls in the category of paranoia. (But it's cheap to do. ;-) )
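If you do want that multi-pass wipe, it could look roughly like this; rand() is only a stand-in for whatever random source you prefer:

#include <stdlib.h>

static void paranoid_wipe(volatile unsigned char *buf, size_t len)
{
    unsigned char patterns[11];
    patterns[0] = 0x00;                          /* all zeros */
    patterns[1] = 0xFF;                          /* all ones */
    unsigned char r = (unsigned char)rand();     /* random byte value */
    for (int i = 0; i < 8; i++)                  /* the byte, then 7 shifts */
        patterns[2 + i] = (unsigned char)(r << i);
    patterns[10] = 0xFF;                         /* all ones again */

    for (size_t p = 0; p < sizeof patterns; p++)
        for (size_t i = 0; i < len; i++)
            buf[i] = patterns[p];

    for (size_t i = 0; i < len; i++)             /* finish with all zeros */
        buf[i] = 0x00;
}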
Let's say I have an array such as:
int a[] = {4, 5, 7, 10, 2, 3, 6};
when I access an element, such as a[3], what does actually happen behind the scenes?
Why do many algorithm books (such as the Cormen book...) say that it takes a constant time?
(I'm just a noob in low-level programming so I would like to learn more from you guys)
The array, effectively, is known by a memory location (a pointer). Accessing a[3] can be found in constant time, since it's just location_of_a+3*sizeof(int).
In C, you can see this directly. Remember, a[3] is the same as *(a+3) - which is a bit more clear in terms of what it's doing (dereferencing the pointer "3 items" over).
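A tiny demonstration of that equivalence (the printed addresses will vary, but the two forms always refer to the same element):

#include <stdio.h>

int main(void)
{
    int a[] = {4, 5, 7, 10, 2, 3, 6};

    printf("%d %d\n", a[3], *(a + 3));                  /* prints: 10 10 */
    printf("%p %p\n", (void *)&a[3], (void *)(a + 3));  /* same address twice */
    return 0;
}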
an array of 10 integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, … 2036, so that the element with index i has the address 2000 + 4 × i.
This process takes one multiplication and one addition. Since these two operations take constant time, we can say that array access can be performed in constant time.
Just to be complete: "what structure is accessed in linear time?" A linked list is accessed in linear time. To get the nth element you have to travel through the n-1 previous elements. You know, like a tape recorder or a VHS cassette, where you had to wait a long time to get to the end of the tape :-)
An array is more similar to a hard disk: every point is accessible in "constant" time :-)
This is the reason the RAM of a computer is called RAM: Random Access Memory. You can go to any location if you know its address without traversing all the memory before that location.
Some people told me that HD access isn't really constant time (where by "access" I mean "time to position the head and read one sector of the HD"). I have to say that I'm not sure about that. I have googled around and haven't found anyone discussing it. I do know that the time isn't linear, because the disk is still accessed randomly. In the end, if you think that HD access isn't constant enough for you (but then, what is constant? access to RAM, considering cache, prefetching, data locality and compiler optimizations?), feel free to read the sentence as: an array is more similar to a USB stick: every point is accessible in "constant" time :-)
Because arrays are stored in memory sequentially. So actually, when you access array[3] you are telling the computer, "Get the memory address of the beginning of array, then add 3 times the element size to it, then access that spot." Since that calculation takes constant time, so does array access!
"constant time" really means "time that doesn't depend on the 'problem size'". For the 'problem' "get something out of a container", the 'problem size' is the size of the container.
Accessing an array element takes basically the same amount of time (this is a simplification) regardless of the container size, because a fixed set of steps is used to retrieve the element: its location in memory (this is also a simplification) is calculated, and then the value at that location in memory is retrieved.
A linked list, for example, can't do this, because each link indicates the location of the next one. To find an element you have to work through all the previous ones; on average, you'll work through half the container, so the size of the container obviously matters.
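For contrast, a sketch of getting element n of a singly linked list: the loop below takes n steps, so the time grows with the position (the node type is illustrative):

struct node { int value; struct node *next; };

int list_get(const struct node *head, int n)
{
    while (n-- > 0)
        head = head->next;   /* one step per preceding element */
    return head->value;
}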
An array is a collection of data of the same type, so every element takes the same amount of memory. Therefore, if you know the base address of the array and the type of data it holds, you can get element array[i] in constant time using the calculation below.
Suppose int takes 4 bytes on a 32-bit system:
int array[10];
base address = 4000
address of array[7] = 4000 + 7 * 4 = 4028
Hence, it is clear that you can get the ith element in constant time if you know the base address and the type of data the array holds.
An array is sequential, meaning that the next element's address in the array comes right after the present one's. So if you want to get the 5th element of an array, you calculate its address by adding 5 times the element size to the base address of the array. That address is then used directly to fetch the element.
You may now ask, "How does the computer know/access the calculated address directly?" That is the nature of computer memory (RAM). If you are interested in how RAM can access any address in constant time, you can find it in any computer organization text, or you can just google it.
If we know the memory location that needs to be accessed, then the access takes constant time. In the case of an array, the memory location is calculated from the base pointer, the index of the element, and the size of each element. This involves a multiplication and an addition, which take constant time to execute. Hence, element access in an array takes constant time.