Fast way to get bitmask for delivery to all devices - C

I have a list of devices and a bitmask of channels they are on (channels are numbered 0..3). There can be up to 256 devices.
For example:
Device1: 1 0 0 1 (on channels 0, 3)
Device2: 0 1 1 0 (on channels 1, 2)
Device3: 1 1 0 0 (on channels 2, 3)
I need to find a bitmask of channels which will result in the message being received by all devices with the fewest possible unnecessary messages.
Correct result bitmasks for the example data are 1 0 1 0 (channel 1 delivers to Device2, and channel 3 to Device1 and Device3) and 0 1 0 1 (channel 0 delivers to Device1, and channel 2 to Device2 and Device3); either one is OK.
Result bitmask 1 1 0 0 would be bad because Device3 would get the message twice.

Since there may not be a perfect solution and there are only 16 possibilities for the result, I would just use a brute-force approach: iterate through all 16 possible masks and see which one(s) are optimal (minimum number of repeated messages).

Take a look at backtrack search.

You could add up the number of 1's in each column to find out how many "receptions" will occur for a message on that channel. That way, for any valid mask (one that reaches all devices), you can easily add up the total number of messages received by all devices. You can then brute-force all 16 possible masks, seeing which ones actually work and choosing the one that both works and has the lowest total number of receptions. Getting around the brute-force part is going to require operations on the entire matrix.
Oddly, if you actually had 256 devices you'd probably have to send on all channels anyway.
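For illustration, here is a minimal sketch of that brute force in C, assuming the per-device channel masks are stored in an array (the function name is made up; __builtin_popcount is a GCC/Clang builtin):

#include <stdint.h>

// Return the channel mask that reaches every device with the fewest
// total receptions, or 0 if no mask covers all devices.
uint8_t best_mask(const uint8_t *devices, int n)
{
    uint8_t best = 0;
    int best_receptions = -1;
    for (uint8_t mask = 1; mask < 16; mask++) {
        int receptions = 0, covers_all = 1;
        for (int i = 0; i < n; i++) {
            int hits = __builtin_popcount(devices[i] & mask);
            if (hits == 0) { covers_all = 0; break; } // device i never hears it
            receptions += hits; // each extra hit is a repeated message
        }
        if (covers_all && (best_receptions < 0 || receptions < best_receptions)) {
            best = mask;
            best_receptions = receptions;
        }
    }
    return best;
}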

How to encode list of 16 numbers in less than 2 bytes

I need to convey the availability of 16 items, with IDs 0-15, in a variable.
char item_availablity[16];
I can encode it with 2 bytes, where every bit is mapped to one item ID, 1 representing available and 0 unavailable.
For example, 0000100010001000
encodes that the items with IDs 4, 8, and 12 are available.
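For reference, a minimal sketch of that 2-byte packing in C (using the array from above; the function names are made up):

#include <stdint.h>

uint16_t pack(const char item_availablity[16])
{
    uint16_t bits = 0;
    for (int id = 0; id < 16; id++)
        if (item_availablity[id])
            bits |= (uint16_t)1 << id; // bit `id` holds item `id`'s availability
    return bits; // exactly 2 bytes
}

int is_available(uint16_t bits, int id)
{
    return (bits >> id) & 1;
}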
I need to encode this information using less than 2 bytes.
Is this possible? If so, how?
To put it simply:
No, you cannot. It's simply impossible to store 1 bit of information about 16 separate things in less than 16 bits. That is, in the general case.
An exception is if there are some restrictions. For instance, let's call the items i_1, i_2 ... i_16. If you know that i_1 is available if and only if i_2 also is available, then you can encode the availability about these two items in just one bit.
A more complicated example is that i_1 is available iff i_2 or i_3 is available. Then you could store the availability for these three items in two bits.
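A minimal sketch of that constrained case in C (names made up; the point is that i_1 need not be stored at all):

// Store only i2 and i3 in 2 bits; i1 is implied by the constraint.
unsigned encode(int i2, int i3)
{
    return (unsigned)(i2 << 1) | (unsigned)i3;
}

void decode(unsigned bits, int *i1, int *i2, int *i3)
{
    *i2 = (bits >> 1) & 1;
    *i3 = bits & 1;
    *i1 = *i2 | *i3; // i1 is available iff i2 or i3 is
}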
But for the general case, nope, it's completely impossible.
Trying to think outside the box here: if some items are rarer than others, you could use a variable-length encoding so that, on average, it takes less than 16 bits to store the information. However, there will be combinations of availabilities that take more than 16 bits.

Algorithm to find newest of n values

I am working on a problem where three memory pages are available and data is supposed to be written to one of the pages.
To keep history, the data is first written to the 1st page; when that is full, the next page is used. When the last page is also full, we erase the data in the first page and use the first page again. And so on...
How can I know which of the pages is the 'oldest'? How do I determine which to erase?
I think that a counter is needed, and this counter increments every time a new page is used. The counter values are read at startup to find which page is the newest; the next page is then the oldest (since the approach is circular). However, the counter will eventually overflow and restart, and then it will not be possible to be sure which value is the highest (since the new value is 0).
Example:
0 0 0 (from beginning)
1 0 0 (page0 was used)
1 2 0 (page1 was used)
1 2 3 (page2 was used)
4 2 3 (page0 was used)
4 5 3 (page1 was used)
...
255 0 254 (I don't know...)
Is the problem clear? Otherwise I can try to re-explain.
This is a technique used in EEPROM wear leveling. The concept is that since EEPROM usually has a limited life of write/erase cycles, we balance out the wear in the memory so that the effective life increases. Since the data in EEPROM stays in the controller even on power-off, we may have to periodically store log values of some variables in the EEPROM for later use.
One simple approach you can follow, as suggested in the comments, is to keep updating the counter modulo 3.
The other (more general) approach is to have three registers for the counter, one per page. Whenever you have to write to a page, first scan these three registers and find where the sequence breaks, i.e. where (C[i] != C[i-1] + 1):
0 0 0
1 0 0 // 1 to 0
1 2 0 // 2 to 0
1 2 3 // 3 to 1
4 2 3 // 4 to 2
...
255 0 254 // 0 to 254.
This link has more information about this subject: Is there a general algorithm for microcontroller EEPROM wear leveling?
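A minimal sketch of that scan in C, assuming three 8-bit counters (one per page) so that the arithmetic wraps at 256:

#include <stdint.h>

// Return the index of the newest page by finding where the circular
// counter sequence breaks: the counter after the newest page does not
// numerically follow it.
int newest_page(const uint8_t c[3])
{
    for (int i = 0; i < 3; i++) {
        if ((uint8_t)(c[i] + 1) != c[(i + 1) % 3])
            return i; // sequence breaks after page i
    }
    return -1; // unreachable: three counters cannot form a closed cycle mod 256
}

For the sequence above, {4, 5, 3} gives page 1 (counter 5), and the overflow case {255, 0, 254} also gives page 1 (counter 0).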
Your idea of using a circular buffer is a good one. All you need in addition to that are two indices, one pointing at the oldest page and one at the newest. You need to update those indices whenever you add or replace a page.
The reason you need two is that in the beginning -- until the buffer is full -- only one of them will be advancing while the other remains stationary.
I do this kind of cycle like this:
// init (addresses of the three pages; PAGEn_ADDRESS is platform-specific)
uint8_t *page0 = (uint8_t *)PAGE0_ADDRESS; // oldest data
uint8_t *page1 = (uint8_t *)PAGE1_ADDRESS; // old data
uint8_t *page2 = (uint8_t *)PAGE2_ADDRESS; // actual data (page for write)
// after page2 is full, rotate the three pointers:
uint8_t *tmp = page0;
page0 = page1;
page1 = page2;
page2 = tmp; // the former oldest page is reused for writing
This way you always know which page is which:
page0 always holds the oldest data
page1 always holds the old data
page2 always holds the actual data
It is easily extendable to any number of pages.
Instead of an address you can store the page number ... use whatever is more suitable for your task.

fast poker hand ranking

I am working on a simulation of poker and now I have to rank hands efficiently:
Every hand is a combination of 5 cards and is represented as a uint64_t.
Every bit from 0 (Ace of Spades), 1 (Ace of Hearts) to 51 (Two of Clubs) indicates if the corresponding card is part (bit == 1) or isn't part (bit == 0) of the hand. The bits from 52 to 63 are always set to zero and don't hold any information.
I already know how I could theoretically generate a table so that every valid hand can be mapped to a rank (represented as uint16_t) between 1 (2,3,4,5,7 - not all of the same suit) and 7462 (Royal Flush), and all the others to rank zero.
So a naive lookup table (with the integer value of the card as index) would have an enormous size of
2 bytes * 2^52 >= 9.007 PB.
Most of this memory would be filled with zeros, because almost all uint64_t values from 0 to 2^52-1 are invalid hands and therefore have a rank equal to zero.
The valuable data occupies only
2 bytes * 52!/(47!*5!) = 5.198 MB.
What method can I use for the mapping so that I only have to store the ranks of the valid hands plus some overhead (max. 100 MB) and still don't have to do an expensive search...
It should be as fast as possible!
If you have any other ideas, you're welcome! ;)
You need only a table of size 13^5 * 2, with the extra bit indicating whether all the cards are of the same suit. If for some reason 'heart' outranks 'diamond', you still need at most a table of size 13^6, as the last piece of information encodes '0 = no pattern, 1 = all spades, 2 = all hearts, etc.'.
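A minimal sketch of that indexing in C, assuming the five card ranks have already been extracted and sorted (names made up):

#include <stdint.h>

// r[0..4]: card ranks 0..12 in sorted order; flush: 1 if all suits are equal.
uint32_t table_index(const int r[5], int flush)
{
    uint32_t idx = 0;
    for (int i = 0; i < 5; i++)
        idx = idx * 13 + (uint32_t)r[i]; // five base-13 digits
    return idx * 2 + (flush ? 1u : 0u); // < 13^5 * 2 = 742,586 entries
}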
A hash table is probably also a good and fast approach -- creating a table from the nCk(52,5) combinations doesn't take much time (compared to all possible bit patterns). One would, however, need to store 65 bits of information for each entry, to hold both the key (52 bits) and the rank (13 bits).
To speed up evaluation of a hand, one first rules out illegal combinations from the mask: if (popcount(mask) != 5) the hand is invalid. Afterwards one can use enough bits from e.g. crc32(mask), which has instruction-level support on the i7 architecture at least.
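A minimal sketch of those two steps in C, assuming GCC/Clang builtins and SSE4.2 (compile with -msse4.2; the table size is a made-up example):

#include <stdint.h>
#include <nmmintrin.h> // SSE4.2 CRC32 intrinsics

#define TABLE_BITS 23 // hypothetical hash-table size, a power of two

static inline int is_valid_hand(uint64_t mask)
{
    return __builtin_popcountll(mask) == 5; // exactly five cards set
}

static inline uint32_t hand_slot(uint64_t mask)
{
    // take the low TABLE_BITS bits of the crc32 of the card mask
    return (uint32_t)_mm_crc32_u64(0, mask) & ((1u << TABLE_BITS) - 1);
}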
If I understand your scheme correctly, you only need to know that the Hamming weight of a particular hand is exactly 5 for it to be a valid hand. See Calculating Hamming Weight in O(1) for information on how to calculate it.
From there, it seems you could probably work out the rest on your own. Personally, I'd want to store the result in some persistent memory (if it's available on your platform of choice) so that subsequent runs are quicker since they don't need to generate the index table.
This is a good source: Cactus Kev's poker hand evaluator.
For a hand you can take advantage of there being at most 4 suits and 13 ranks:
4 bits for the rank (0-12) and 2 bits for the suit,
so 6 bits * 5 cards is just 30 bits - call it 4 bytes.
There are only 2,598,960 hands,
so the total size is a little under 10 MB.
A simple implementation that comes to mind would be to change your scheme to a 5-digit number in base 52. The resulting table to hold all of these values would still be larger than necessary, but very simple to implement and it would easily fit into RAM on modern computers.
edit: You could cut down even more by only storing the rank of each card plus an additional flag (e.g., the lowest bit) to specify whether all cards are of the same suit (i.e., a flush is possible). The representation would then be base 13 plus one bit. You would presumably need to store the specific suits of the cards separately to reconstruct the exact hand for display and such.
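A minimal sketch of the base-52 key in C (the gaps between valid keys are what make the table larger than necessary):

#include <stdint.h>

// hand: the bitmask from the question, with exactly 5 of bits 0..51 set.
uint32_t base52_key(uint64_t hand)
{
    uint32_t key = 0;
    for (int card = 0; card < 52; card++)
        if (hand & (1ULL << card))
            key = key * 52 + (uint32_t)card; // append one base-52 digit
    return key; // < 52^5 = 380,204,032, so it fits in 32 bits
}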
I would represent your hand in a different way:
There are only 4 suits = 2 bits and only 13 ranks = 4 bits, for a total of 6 bits * 5 = 30 - so we fit into a 32-bit int - and we can force this to always be sorted as per your ordering:
[suit 0][suit 1][suit 2][suit 3][suit 4][value 0][value 1][value 2][value 3][value 4]
Then I would use a separate hash for:
consecutive values (very small) [mask off the suits]
1 or more multiples (pair, 2 pair, full house) [mask off the suits]
suits that are all the same (very small) [mask off the values]
Then use the 3 hashes to calculate your rankings
At 5 MB you will likely have enough caching issues that a bit of math and three small lookups will be faster.

pattern recognition - "is this a pattern?"

I have a large vector of numbers, say 500 numbers. I would like a program to detect patterns (recurrence, in this case) in such a vector, based on the following rules:
A sequence of numbers is a pattern if:
The size of the sequence is between 3 and 20 numbers.
The RELATIVE positions of the numbers in the sequence are repeated at least one other time in the vector. So if I have a sequence (1,4,3) and then (3,6,5) somewhere else in the vector, then (1,4,3) is a pattern (as are (2,5,4), (3,6,5), etc.).
The sequences can't intersect. So a vector (1,2,3,4,5) does not contain the patterns (1,2,3) and (3,4,5) (we can't use the same number for both sequences). However, (1,2,3,3,4,5) does contain the pattern (1,2,3) (or (3,4,5)).
A subset A of a pattern B is a pattern ONLY IF A appears somewhere else outside B. So a vector (1,2,3,4,7,8,9,2,3,4,5) contains the patterns (1,2,3,4) and (1,2,3), because (1,2,3,4) is repeated (in the form of (2,3,4,5)) and (1,2,3) is repeated (in the form of (7,8,9)). However, if the vector were (1,2,3,4,2,3,4,5), the only pattern would be (1,2,3,4), because (1,2,3) appears only in the context of (1,2,3,4).
I'd like to know several things:
First of all, I hope the rules don't contradict each other. I made them myself, so there might be a clash somewhere that I didn't notice; please let me know if you spot one.
Secondly, how would one implement such a system in the most efficient way? Maybe someone can point me towards some literature on the subject? I could go number by number, searching for a repetition of every subsequence of length 3, then 4, 5, and so on up to 20. But that does not seem very efficient...
I am interested in implementation of such system in C, but any general guidance is very welcome.
Thank you in advance!
Just a couple of observations:
If you're interested in relative values, then your first step should be to calculate the differences between adjacent elements of the vector. For example, in the vector below the groups (1,4,3), (3,6,5), (2,5,4) and (1,4,3) again all share the same relative positions:
Original numbers:
1 4 3 2 5 1 1 3 6 5 6 2 5 4 4 4 1 4 3 2
Difference values:
3 -1 -1 3 -4 0 2 3 -1 1 -4 3 -1 0 0 -3 3 -1 -1
Each of those groups now shows up as a literal repetition of the pair (3, -1) in the difference values.
Once you've done that, you could use an autocorrelation method to look for repeated patterns in the data. This can be computed in O(n log n) time, and possibly even faster if you're only concerned with exact matches.
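A minimal sketch of the differencing step in C:

#include <stddef.h>

// Fill diff[0..n-2] with the differences of adjacent elements of v[0..n-1];
// sequences that are equal up to an additive shift become literally equal.
void adjacent_differences(const int *v, size_t n, int *diff)
{
    for (size_t i = 0; i + 1 < n; i++)
        diff[i] = v[i + 1] - v[i];
}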

FFT and convolution

I'm writing a 2D FFT for school, to be used for image filtering.
I have a problem with the filter matrix.
I made my FFT so it accepts input of size 2^n, but all filter matrices have odd dimensions.
So I need a way to transform a filter matrix into acceptable input for my function.
I have the following idea, and I'm not sure how well it will work.
If I have filter matrix:
1 2 3
4 5 6
7 8 9
To transform it to:
0 0 0 0
1 2 3 0
4 5 6 0
7 8 9 0
And when matching the "center" of the matrix with my pixel, I would match the center of the "submatrix" and then extract the values I need.
Is that possible?
Also, can someone tell me the maximum filter size I can use? Can it be larger than, say, 32x32?
Filter masks are used to express filters with compact support. Compact support means that the signal has non-zero values only in a limited range. By extending your filter mask with zero values, you are in fact doing a natural thing. The zeros are part of the original filter.
The real problem however is a different thing. I assume that you use FFT according to the convolution theorem. For that, you need element-wise multiplication. You can only do element-wise multiplication when both your filter and your signal have the same number of elements. So you would need to extend your filter to the signal size (using zeros).
There is no limit on filter mask size. For convolution the only restriction is compact support (as explained above).
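A minimal sketch of that zero-extension in C, copying the mask into the top-left corner of an n x n buffer that matches the FFT size (names made up):

#include <string.h>

// k: kw x kh filter mask; out: n x n buffer with n >= kw, kh.
void pad_kernel(const double *k, int kw, int kh, double *out, int n)
{
    memset(out, 0, sizeof(double) * (size_t)n * (size_t)n); // the zeros are part of the filter
    for (int y = 0; y < kh; y++)
        for (int x = 0; x < kw; x++)
            out[y * n + x] = k[y * kw + x];
}

Note that with circular convolution one usually also shifts the padded mask so that its center lands at index (0,0); otherwise the filtered image comes out translated.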
