I am working on a problem where three memory pages are available and data is written to one of them.
To keep a history, the data is first written to the first page; when that is full, the next page is used. When the last page is also full, we erase the data in the first page and use the first page again. And so on...
How can I know which of the pages is the 'oldest'? How do I determine which to erase?
I think a counter is needed that increments every time a new page is used. The counter values are read at startup to find which page is the newest; the next page is then the oldest (since the approach is circular). However, the counter will eventually overflow and restart, and then it is no longer possible to be sure which value is the highest (since the newest value is 0).
Example:
0 0 0 (from beginning)
1 0 0 (page0 was used)
1 2 0 (page1 was used)
1 2 3 (page2 was used)
4 2 3 (page0 was used)
4 5 3 (page1 was used)
...
255 0 254 (I don't know...)
Is the problem clear? Otherwise I can try to re-explain.
This is a technique used in EEPROM wear leveling. Since EEPROM usually has a limited life of write/erase cycles, we balance the wear across the memory so that the effective life increases. Since data in EEPROM survives power-off, we may have to store log values of some variables periodically in the EEPROM for later use.
One simple approach, as suggested in the comments, is to keep updating the counter as (counter modulo 3).
Another (more general) approach is to have three registers for the counter. Whenever you have to write to a page, first scan these three registers and look for the position where the sequence breaks, i.e. where (C[i] != C[i-1] + 1):
0 0 0
1 0 0 // 1 to 0
1 2 0 // 2 to 0
1 2 3 // 3 to 1
4 2 3 // 4 to 2
...
255 0 254 // 0 to 254.
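A minimal sketch of that scan, assuming 8-bit counters and pages written in the circular order page0 → page1 → page2 → page0: the newest page is the one whose successor does not hold its counter + 1, and unsigned wrap-around makes the comparison safe at 255 → 0.

```c
#include <stdint.h>

/* Sketch (assumption: one 8-bit counter per page, written in circular
 * page order).  Page i is the newest if the next page does NOT hold
 * c[i] + 1, because the sequence "breaks" right after the most recent
 * write.  uint8_t arithmetic wraps 255 -> 0 automatically. */
int newest_page(const uint8_t c[3])
{
    for (int i = 0; i < 3; i++) {
        uint8_t expected = (uint8_t)(c[i] + 1); /* wraps 255 -> 0 */
        if (c[(i + 1) % 3] != expected)
            return i;
    }
    return 2; /* fallback; not reached for counters written in order */
}
```

The oldest page (the one to erase next) is then simply `(newest_page(c) + 1) % 3`. For the tricky overflow case in the question, {255, 0, 254}, the break is between page1 (0) and page2 (254), so page1 is correctly identified as newest and page2 as oldest.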
This link has more information about this subject: Is there a general algorithm for microcontroller EEPROM wear leveling?
Your idea of using a circular buffer is a good one. All you need in addition are two indices, one pointing at the oldest page and one at the newest. You need to update those indices whenever you add or replace a page.
The reason you need both is that in the beginning -- until the buffer is full -- only one of them advances while the other remains stationary.
I handle this kind of cycling like this:
// init
int page0 = address_of_page0; // oldest data
int page1 = address_of_page1; // old data
int page2 = address_of_page2; // actual data (page being written)

// after page2 is full, rotate:
int tmp = page0;
page0 = page1;
page1 = page2;
page2 = tmp;
This way you always know which page is which:
page0 always holds the oldest data
page1 always holds the old data
page2 always holds the actual data
It is easily extendable to any number of pages.
Instead of the address you can store the page number ... use whatever is more suitable for your task.
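Extending the idea to any number of pages is just a rotation of an array of page numbers; a minimal sketch (using page numbers instead of addresses, as suggested above):

```c
#define N_PAGES 3

/* page[0] = oldest data ... page[N_PAGES-1] = current write target */
static int page[N_PAGES] = {0, 1, 2};

/* Call when the write page is full: the oldest page moves to the end
 * of the array, where it is erased and reused for writing. */
void rotate_pages(void)
{
    int oldest = page[0];
    for (int i = 0; i < N_PAGES - 1; i++)
        page[i] = page[i + 1];
    page[N_PAGES - 1] = oldest;
}
```

After one rotation the array is {1, 2, 0}: physical page 0 (the old oldest) has become the new write target, exactly the swap shown above, but for any N_PAGES.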
So, I am working on a feature in a web application. The problem is like this:
I have four different entities, say Item1, Item2, Item3, Item4. The feature has two phases. In the first phase (Choose entities), the user can choose multiple items for each entity, and for every resulting combination I need to do some calculation. Then in the second phase (Relocate), based on the calculation done in the first phase, for each combination the user chooses another combination, and the value of the first combination is moved to the row of the second combination.
Here's the data model for further clarification -
EntityCombinationTable
(
Id
Item1_Id,
Item2_Id,
Item3_Id,
Item4_Id
)
ValuesTable
(
Combination_Id,
Value
)
So suppose I have the following values in both tables -
EntityCombinationTable
Id -- Item1_Id -- Item2_Id -- Item3_Id -- Item4_Id
1 1 1 1 1
2 1 2 1 1
3 2 1 1 1
4 2 2 1 1
ValuesTable
Combination_Id -- Value
1 10
2 0
3 0
4 20
So if in the first phase I choose (1,2) for Item1, (1,2) for Item2, and 1 for both Item3 and Item4, the total number of combinations would be 2*2*1*1 = 4.
Then in the second phase, for each of the combination that has value greater than zero, I would have to let the user choose different combination where the values would get relocated.
For example - as only the combinations with Id 1 and 4 have values greater than zero, only two relocation combinations would need to be shown in the second dialog. So if the user chooses (3,3,3,3) and (4,4,4,4) as relocation combinations in the second phase, new rows will need to be inserted into EntityCombinationTable for (3,3,3,3) and (4,4,4,4), and the values of (1,1,1,1) and (2,2,1,1) will be relocated respectively to the rows corresponding to (3,3,3,3) and (4,4,4,4) in the ValuesTable.
So the problem is - each entity can have up to 100 items or even more. So in the worst case the total number of combinations can be 10^8, which would put a very heavy load on the database (inserting and updating a huge number of rows), and generating all the combinations in code would also take substantial time.
I have thought about an alternative approach: not keeping the items as combinations, but keeping a separate table for each entity and building the combinations at runtime. That would also cause performance issues, as there are several other stages where I might need the combinations, so I would have to generate all of them every time.
I have also thought about a key-value-pair style table, where I would keep the combination as a string. But with this approach I am not actually reducing the number of rows to be inserted; only the number of columns is reduced.
So my question is - is there any better approach for this kind of situation, where I can keep track of the combinations and manipulate them in an optimized way?
Note - I am not sure if this helps or not, but a lot of the rows in the values table will probably have zero as their value. So in the second phase we would need to show far fewer rows than the actual number of possible combinations.
Here's my problem. I have a set of 20 objects stored in memory as an array. I want to store a second piece of data that defines an order for the objects to be displayed.
The simplest way to store the order is as an array of 20 unsigned integers, each of which is 5 bits (i.e. 0-31). The position of the object in the output list would be defined by the number stored in this array at the same index as the object in its array.
But.. I know from statistics that there are only 20! (that's 20 factorial) ways to arrange these objects.
This could be stored in 62 bits, since 2^62 > 20!
I'm currently using 100 bits to store the same information.
So my question is this: Is there a space efficient way to store ORDER as a sequence of bits?
I have some additional constraints as well. This will run on an embedded device, so I can't use any huge arrays or high-level math functions. I need a simple iterative method.
Edit: Some clarification on why this is necessary. Say for example the objects are pictures, and they're stored in ROM (i.e. they can't be moved around). Now let's say I want to keep track of what order to display the images in, and I'm going to update that order every second. My device has 1k of storage with wear leveling, but each bit in the storage can only be written 1000 times before it becomes unreliable. If I need 1 kb to store the order, then my device will only work for 1000 seconds. If I need 0.1 kb, it will work for 10k seconds, and so on. Thus the device's longevity is inversely proportional to the number of bits I need to update every cycle.
You can store the order in a single 64-bit value x:
For the first choice, 20 possibilities, compute the index as x % 20 and update x as x /= 20,
For the next choice, only 19 possibilities, compute x % 19 and update x as x /= 19.
Continue this process 17 more times and you are done.
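A sketch of that scheme in C, assuming at most 20 items so the code fits in a uint64_t (2^62 > 20!). The swap-with-last removal keeps each step O(1), with no large tables or factorial arithmetic, so it suits an embedded target:

```c
#include <stdint.h>

/* Decode a 64-bit code into an ordering of n items (n <= 20).
 * Step i has n-i remaining choices, so take x % (n-i) and divide. */
void decode_order(uint64_t x, int n, int order[])
{
    int items[20];
    for (int i = 0; i < n; i++)
        items[i] = i;
    for (int i = 0; i < n; i++) {
        int r = n - i;                     /* remaining choices   */
        int k = (int)(x % (uint64_t)r);    /* index among them    */
        x /= (uint64_t)r;
        order[i] = items[k];
        items[k] = items[r - 1];           /* remove: swap w/ last */
    }
}

/* Inverse: pack an ordering back into one 64-bit code. */
uint64_t encode_order(const int order[], int n)
{
    int items[20], pos[20], digit[20];
    for (int i = 0; i < n; i++) {
        items[i] = i;
        pos[i] = i;
    }
    for (int i = 0; i < n; i++) {
        int r = n - i;
        int k = pos[order[i]];             /* where the item sits  */
        digit[i] = k;
        items[k] = items[r - 1];           /* same swap-with-last  */
        pos[items[k]] = k;
    }
    uint64_t x = 0;                        /* mixed-radix pack:    */
    for (int i = n - 1; i >= 0; i--)       /* radices n, n-1, ...  */
        x = x * (uint64_t)(n - i) + (uint64_t)digit[i];
    return x;
}
```

For n = 20 the code is always below 20!, so it fits in 62 bits; the earlier 5,5,5,5,4,... bit-packing wastes the gap between 20 and 32 at each step, while the modulo/divide chain wastes nothing.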
I think I've found a partial solution to my own question. Assuming I start at the left side of the order array, for every move right there are fewer remaining possibilities for the position value. The numbers of possibilities are 20, 19, 18, etc. I can take advantage of this by populating the order array in a relative fashion. The first index places a value in the order array; there are 20 possibilities, so this takes 5 bits. Placing the next value, there are only 19 positions available (still 5 bits). Proceeding through the whole array, the bits required are now 5,5,5,5,4,4,4,4,4,4,4,4,3,3,3,3,2,2,1,0. That gets me down to 69 bits, much better.
There's still some "wasted" precision in each of the values, since for example the first position can store 32 possible values even though there are only 20. I'm not sure how to deal with this, but I think it will have something to do with carrying a remainder from one calculation to the next.
The title might have been a bit unclear. Sadly enough, I could not come up with a better one.
So the problem is the following. There is an array of a fixed size where each position can have one of four states: empty, blocked, positive (a limited value between 0 and 1), or negative (a limited value between 0 and 1, though this could be 0 to -1). By limited I mean that the value can only take the forms 0.1, 0.2, ..., 1.0, and each value also occurs only once. Depending on how the array is filled, I would like to predict what the next version of the array will look like. I tried to represent each position in the array as one input node, but I could not figure out how to do this (how to represent all four states as one number). It should also be noted that the maximum amount of each state is known. So rather than having each node represent an index in the array, I could have each node represent a state (blocked, -1.0, -0.9, ..., 0.9, 1.0) and then give the index at which that state occurs as the input value for that node.
Which way is more practical or efficient for a neural network?
By the way, it is a neural network with one input layer, one hidden layer and one output layer.
I would suggest you start by using 3 neurons for each cell in your array:
One which tells the network if the cell is empty. You can just use the values 0 for empty and 1 for non-empty (just an example, other values should work as well, just make sure to be consistent).
One which does the exact same thing, but it tells the network if the cell is blocked. (again 0 and 1 as possible inputs)
And finally the neuron to receive the positive/negative value. You can try various things here. You should probably first try setting the value if there is one, and setting the input to 0 if there is no value. This might confuse the network a little though, since it can't see the difference between 0 = null and 0 = 0. If that doesn't work, you could try to map all your input values between 0 and 1 (in your case just add 1 and then divide by 2 before passing the value to the network) and input -1 if the cell is blocked/empty.
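A minimal sketch of this three-inputs-per-cell encoding (the type and function names here are illustrative, and it uses the "map values to [0, 1], sentinel for missing values" variant):

```c
typedef enum { CELL_EMPTY, CELL_BLOCKED, CELL_VALUE } CellState;

typedef struct {
    CellState state;
    float     value;   /* only meaningful for CELL_VALUE, in [-1.0, 1.0] */
} Cell;

/* Encode one cell into three network inputs:
 * in[0]: 1 if the cell is non-empty, else 0
 * in[1]: 1 if the cell is blocked, else 0
 * in[2]: the value mapped from [-1, 1] to [0, 1], or -1 as "no value" */
void encode_cell(const Cell *c, float in[3])
{
    in[0] = (c->state != CELL_EMPTY)   ? 1.0f : 0.0f;
    in[1] = (c->state == CELL_BLOCKED) ? 1.0f : 0.0f;
    in[2] = (c->state == CELL_VALUE)   ? (c->value + 1.0f) / 2.0f : -1.0f;
}
```

The sentinel -1 is outside the [0, 1] range of real values, so the network can always distinguish "no value" from a genuine 0.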
If you can't get this to work, first play around with the parameters for a while: change the number of hidden layers and neurons, and vary the learning rate and the momentum for training.
If the inputs turn out not to be very good for your task, I would suggest you do what you mentioned before: one neuron for every possible state of each cell. This method will take longer to train, but it should definitely work (if not, your task is simply too complex for the network, or you need more training time).
CREATE SEQUENCE has a CACHE option.
MSDN defines it as:
[ CACHE [<constant> ] | NO CACHE ]
Increases performance for applications that use sequence objects by
minimizing the number of disk IOs that are required to generate
sequence numbers. Defaults to CACHE. For example, if a cache size of
50 is chosen, SQL Server does not keep 50 individual values cached. It
only caches the current value and the number of values left in the
cache. This means that the amount of memory required to store the
cache is always two instances of the data type of the sequence object.
I understand it improves performance by avoiding disk IO and keeping in memory some information that helps reliably generate the next number in the sequence, but I cannot picture what a simple memory representation of the cache would look like for the example MSDN describes.
Can someone explain how the cache would work with this sequence,
CREATE SEQUENCE s
AS INT
START WITH 0
INCREMENT BY 25
CACHE 5
describing what the cache memory would hold when each of the following statements is executed independently:
SELECT NEXT VALUE FOR s -- returns 0
SELECT NEXT VALUE FOR s -- returns 25
SELECT NEXT VALUE FOR s -- returns 50
SELECT NEXT VALUE FOR s -- returns 75
SELECT NEXT VALUE FOR s -- returns 100
SELECT NEXT VALUE FOR s -- returns 125
This paragraph in the doc is very helpful:
For an example, a new sequence is created with a starting value of 1 and a cache size of 15. When the first value is needed, values 1
through 15 are made available from memory. The last cached value (15)
is written to the system tables on the disk. When all 15 numbers are
used, the next request (for number 16) will cause the cache to be
allocated again. The new last cached value (30) will be written to the
system tables.
So, in your scenario
CREATE SEQUENCE s
AS INT
START WITH 0
INCREMENT BY 25
CACHE 5
You will have 0, 25, 50, 75 and 100 in memory, and you will get only one disk write: 100.
The problem you could have, as explained in the doc, is that if the server goes down and you haven't used all 5 values, the next time you ask for a value you'll get 125.
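As an illustrative toy model (not SQL Server's actual implementation), the "two instances of the data type" that the MSDN quote mentions map naturally to a current value plus a count of values remaining; disk is touched only when a new batch is allocated:

```c
typedef struct {
    long current;     /* next value to hand out                       */
    int  remaining;   /* values left in the current cache batch       */
    long increment;
    int  cache_size;
    long on_disk;     /* last value persisted to the system tables    */
} Sequence;

long next_value(Sequence *s)
{
    if (s->remaining == 0) {
        /* Cache exhausted: allocate a new batch and persist only the
         * LAST value of the batch -- one disk write per cache_size
         * values handed out. */
        s->on_disk = s->current + (long)(s->cache_size - 1) * s->increment;
        s->remaining = s->cache_size;
    }
    long v = s->current;
    s->current += s->increment;
    s->remaining--;
    return v;
}
```

With START WITH 0, INCREMENT BY 25, CACHE 5, the first call writes 100 to disk and returns 0; calls two through five come purely from memory; the sixth call allocates the next batch (disk write: 225) and returns 125. After a crash, only on_disk survives, so the sequence resumes at on_disk + increment, which is why unused cached values are skipped.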
I have a list of devices and a bitmask of channels they are on (channels are numbered 0..3). There can be up to 256 devices.
For example:
Device1: 1 0 0 1 (on channels 0, 3)
Device2: 0 1 1 0 (on channels 1, 2)
Device3: 1 1 0 0 (on channels 2, 3)
I need to find a bitmask of channels which will result in the message being received by all devices with the fewest possible unnecessary messages.
Correct result bitmasks for example data are 1 0 1 0 (channel 1 delivers to Device2 and channel 3 to Device1 and Device3) and 0 1 0 1 (channel 0 delivers to Device1 and channel 2 to Device2 and Device3), either one of them is OK.
Result bitmask 1 1 0 0 would be bad because Device3 would get the message twice.
Since there may not be a perfect solution, and we only have 16 possibilities for the result, I would just use a brute-force approach: iterate through all 16 possible masks and see which one(s) is/are optimal (minimum number of repeated messages).
Take a look at backtrack search.
You could add up the number of 1s in each column to find out how many "receptions" will occur for a message on that channel. That way, for any valid mask (one that reaches all devices) you can easily add up the total number of messages received by all devices. You can then brute-force all 16 possible masks, seeing which ones actually work and choosing the one that both works and has the lowest total number of receptions. Getting around the brute-force part would require operations on the entire matrix.
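A brute-force sketch along those lines, assuming 4 channels with one 4-bit mask per device (bit k = channel k; `__builtin_popcount` is the GCC/Clang population-count builtin, which here counts receptions per device, so duplicate deliveries are penalized naturally):

```c
#include <stdint.h>
#include <limits.h>

/* Try all 15 non-empty 4-bit masks.  A mask is valid if every device
 * shares at least one channel with it.  Among valid masks, pick the
 * one with the fewest total receptions: n receptions is the minimum
 * for n devices, anything above that is a duplicate delivery. */
int best_mask(const uint8_t *devices, int n)
{
    int best = -1, best_recv = INT_MAX;
    for (int mask = 1; mask < 16; mask++) {
        int recv = 0, ok = 1;
        for (int i = 0; i < n; i++) {
            int hits = __builtin_popcount(devices[i] & mask);
            if (hits == 0) { ok = 0; break; } /* device i unreachable */
            recv += hits;
        }
        if (ok && recv < best_recv) {
            best_recv = recv;
            best = mask;
        }
    }
    return best; /* -1 if no single mask reaches every device */
}
```

For the example data (Device1 = 0x9 on channels 0 and 3, Device2 = 0x6 on channels 1 and 2, Device3 = 0xC on channels 2 and 3), this returns 0x5, i.e. the "0 1 0 1" solution from the question; the "1 0 1 0" mask (0xA) ties with three receptions but is visited later in the loop.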
Oddly, if you actually had 256 devices you'd probably have to send on all channels anyway.