There is a square grid containing empty and filled blocks. A number of contiguous filled blocks form an object. My bot can find out the status of its 8 neighbouring blocks (whether they are filled or not), and it can move to a neighbouring position if there is no filled block there.
My code works when there are no tight spaces (i.e., when objects have at least 2 empty blocks between them). But when there can be a single empty block between two objects, the bot has no way of knowing whether the neighbouring filled blocks belong to the same object or to different ones, so it fails to encircle the object.
Is there a way to get around this problem?
How about this: while the bot is encircling an object, it modifies its internal map, changing the border squares of the object from "unknown" to "filled". If it finds itself adjacent to two (or more) solid blocks, the one it should pay attention to is the one still marked "unknown" that has a "filled" neighbor.
There are still some peculiar cases to deal with, and a lot depends on design choices: whether diagonal neighbors count as "contiguous" (and if so, how to deal with checkerboard patterns), whether to mark a square before or after finding the next one, and so on. But the combination of map marks and visible blocks should give the bot enough information to keep its bearings.
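For concreteness, here is a minimal C sketch of that selection rule; the map representation and every name in it (Cell, pick_next, and so on) are my own assumptions, not from the question:

typedef enum { UNKNOWN, EMPTY, FILLED } Cell;

enum { W = 64, H = 64 };
static Cell map[H][W];                /* internal map, initially all UNKNOWN */

static const int dx[8] = { -1,  0,  1, -1, 1, -1, 0, 1 };
static const int dy[8] = { -1, -1, -1,  0, 0,  1, 1, 1 };

static int in_bounds(int x, int y)
{
    return x >= 0 && x < W && y >= 0 && y < H;
}

/* Does (x, y) touch a square already marked FILLED on the map? */
static int touches_filled(int x, int y)
{
    for (int k = 0; k < 8; k++) {
        int nx = x + dx[k], ny = y + dy[k];
        if (in_bounds(nx, ny) && map[ny][nx] == FILLED)
            return 1;
    }
    return 0;
}

/* solid[k] says whether the bot's k-th neighbour is visibly solid.
   Prefer the solid neighbour still marked UNKNOWN that touches a FILLED
   mark: that square continues the border of the object being traced.
   Returns an index into dx/dy, or -1 if no neighbour qualifies. */
int pick_next(int bx, int by, const int solid[8])
{
    for (int k = 0; k < 8; k++) {
        int nx = bx + dx[k], ny = by + dy[k];
        if (solid[k] && in_bounds(nx, ny)
            && map[ny][nx] == UNKNOWN && touches_filled(nx, ny))
            return k;
    }
    return -1;
}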
I'm trying to read the amplitude of a waveform and light a green, yellow or red indicator depending on the amplitude of the signal. I'm fairly new to LabVIEW and couldn't get an idea to work that would have worked in any other programming language I know. What I'm trying to do is take the value of the signal and, every time it updates, store the amplitude in an index of a large array, with each measurement being stored at index n+1.
After a certain number of data points I want to start over and replace values in the array (I use the Formula Node with the modulus for this). By keeping a finite number of indices to check for the maximum value, I restrict my amplitude check to a certain time period.
However, my problem is that whenever I use Replace Array Subset to insert a new value at index n, all the other indices get erased, rendering it pretty much useless. I was thinking the Initialize Array was causing problems, but I just can't seem to wrap my head around what to do here.
I tried creating basic arrays on the front panel, but those are either control or indicator arrays and can't seem to be both written and read: a control can be read but not written, an indicator written but not read. Maybe it's just not possible to do what I had in mind in an elegant way in LabVIEW. If it isn't possible with arrays in LabVIEW, I will look for a different way to do it.
I'm pretty sure I have most of the rest of the code down, except for an unfinished part here and there. It's just my issue with the arrays not working as I want them to.
I expected the array to retain the previously entered data at index n-1 when index n is written, and for it only to be replaced once the index wraps back around to that specific point.
Instead, it's as if a new array is initialized every time a new index is written.
download link for the VI
What you want to do:
Carry the content of the modified array over into the next iteration of the WHILE loop.
What happens:
On each iteration, the array has the same content: that of the initial array you created outside the loop.
To solve this, right-click the orange square on the left border of the loop and make it a "shift register". The symbol changes, and a matching symbol appears on the right border. Now wire the modified array to the symbol on the right. Whatever flows out into the right symbol comes back in through the left symbol on the next iteration.
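LabVIEW is graphical, so there is no direct text equivalent, but in C terms (illustrative only; read_amplitude and should_stop are hypothetical stand-ins for your acquisition and stop button) the shift register is simply a loop-carried variable:

#define N 100

/* Hypothetical stand-ins for the DAQ read and the stop button. */
static double read_amplitude(void) { return 0.5; }
static int should_stop(unsigned long i) { return i >= 1000; }

int main(void)
{
    double arr[N] = { 0.0 };          /* the array initialised outside the loop */
    for (unsigned long i = 0; ; i++) {          /* the WHILE loop */
        arr[i % N] = read_amplitude();          /* Replace Array Subset at i mod N */
        /* arr survives into the next iteration: this loop-carried
           variable is exactly what the shift register gives you. */
        if (should_stop(i))
            break;
    }
    return 0;
}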
Edit:
I have optimized your code a little. There is a modulo function, and a case structure can handle ranges: "..3" means "values less than or equal to 3", the next case is "Default", and the next is "7..". Unfortunately this only works for integers; otherwise, one would use nested case structures with the < comparison or similar.
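For reference, the same ranged selection written in C (illustrative only; the thresholds 3 and 7 come from the "..3" and "7.." cases above):

/* C equivalent of the ranged cases: "..3", Default, "7.." */
typedef enum { GREEN, YELLOW, RED } Light;

Light pick_light(int amplitude)
{
    if (amplitude <= 3)       /* case "..3"   */
        return GREEN;
    else if (amplitude >= 7)  /* case "7.."   */
        return RED;
    else                      /* case Default */
        return YELLOW;
}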
I'm working on a cardiac simulation tool that uses 4-dimensional data, i.e. several (3-30) variables at locations in 3D space.
I'm now adding some tissue geometry which will leave over 2/3 of the points in the containing 3D box outside of the tissue to be simulated, so I need a way to efficiently store the active points and not the others.
Crucially, I need to be able to:
Iterate over all of the active points within a constrained 3D box (iterator, perhaps?)
Having accessed one point, find its orthogonal neighbours (x,y,z) +/- 1.
That's probably more than one question! My main concern is how to efficiently represent the sparse data.
I'm using C.
How often do you add the tissue, and how much time can it take?
One simple solution is using a linked list+hash with pointers from one to the other.
Meaning:
Save a linked list containing all of the relevant points and their data
Save a hash to easily get to this data: key = coordinates, data = pointer to the corresponding linked-list node.
The implementation of the operations would be:
Add a box: Go over the full linked list, and take only the relevant elements into the "work" linked list
Iterate: Go over the "work" linked list
Find neighbors: Seek each of the neighbors in the hash.
Complexity:
Add: O(n). Iterate: O(1) to find the next element. Neighbor: O(1) on average (thanks to the hash).
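A minimal C sketch of that layout (all names and sizes here, NVARS, NBUCKETS, the FNV-style mixing, are illustrative choices of mine):

#include <stddef.h>

#define NVARS 30                      /* up to 30 state variables per point */
#define NBUCKETS 65536

typedef struct Point {
    int x, y, z;
    double vars[NVARS];
    struct Point *prev, *next;        /* doubly linked list of active points */
    struct Point *chain;              /* next entry in the same hash bucket */
} Point;

static Point *head;                   /* entry point for iteration */
static Point *buckets[NBUCKETS];

static unsigned hash3(int x, int y, int z)
{
    unsigned h = 2166136261u;         /* FNV-style mixing, one choice of many */
    h = (h ^ (unsigned)x) * 16777619u;
    h = (h ^ (unsigned)y) * 16777619u;
    h = (h ^ (unsigned)z) * 16777619u;
    return h % NBUCKETS;
}

/* Neighbor lookup: an orthogonal neighbour is one hash probe away. */
Point *lookup(int x, int y, int z)
{
    for (Point *p = buckets[hash3(x, y, z)]; p; p = p->chain)
        if (p->x == x && p->y == y && p->z == z)
            return p;
    return NULL;                      /* not an active point */
}

/* Iterate: walk the list of active points. */
void for_each_active(void (*fn)(Point *))
{
    for (Point *p = head; p; p = p->next)
        fn(p);
}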
If you want to use plain array indexing, you can create a sparse array on POSIX systems using mmap():
#include <sys/mman.h>

int main(void)
{
    float (*a)[500][500];
    a = mmap(0, (size_t)500 * sizeof a[0], PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if ((void *)a != MAP_FAILED) {
        /* a is now a 500 x 500 x 500 sparse array of floats */
    }
    return 0;
}
You can then just access a[x][y][z] as you like, and it will only actually allocate real memory for each page that's touched. The array will be initialised to zero bytes.
If your system doesn't have MAP_ANONYMOUS you can achieve the same effect by mapping from /dev/zero.
Note that on many systems, swap space will be reserved (though not used) for the whole array.
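A sketch of that /dev/zero fallback (map_sparse is a hypothetical helper name, not a standard function):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

float (*map_sparse(void))[500][500]
{
    float (*a)[500][500];
    int fd = open("/dev/zero", O_RDWR);
    if (fd < 0)
        return 0;
    a = mmap(0, (size_t)500 * sizeof a[0], PROT_READ | PROT_WRITE,
             MAP_PRIVATE, fd, 0);
    close(fd);                        /* the mapping stays valid after close */
    return (void *)a == MAP_FAILED ? 0 : a;
}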
First off, I think it's worth considering what your real requirement is. I suspect that it's not just "store the active points and none of the others in as space-efficient a manner as possible", but also a certain amount of "store adjacent points in nearby memory locations so that you get good caching behavior" and "store points in a manner for which lookups can be done efficiently".
With that said, here's what I would suggest. Divide the full 3D region into cubical blocks, all the same size. For each block, store all of the points in the block in dense arrays, including a boolean isTissue array for whether each point is in the tissue region or not. Allocate only the blocks that have points in them. Make a (dense) array of pointers to blocks, with NULL pointers for non-allocated blocks.
Thus, to find the point at (i,j) (shown in 2D for brevity; the third coordinate works the same way), you first compute ii = i/blocksize, jj = j/blocksize, and then look in the pointer-to-block table at (ii,jj) to find the block that contains your point. If that pointer is NULL, your point isn't in the tissue. If it's non-NULL, you look at (i mod blocksize, j mod blocksize) in that block, and there is your (i,j) point. You can check its isTissue flag to see whether it is a "present" point or not.
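A sketch of that lookup in C, written out for the three dimensions of the actual problem (BLOCKSIZE, Block and point_at are hypothetical names; a single variable per point is shown for brevity):

#include <stdbool.h>
#include <stddef.h>

#define BLOCKSIZE 16
#define NBLOCKS 32                    /* blocks per axis: 32 * 16 = 512 points */

typedef struct Block {
    bool isTissue[BLOCKSIZE][BLOCKSIZE][BLOCKSIZE];
    float vars[BLOCKSIZE][BLOCKSIZE][BLOCKSIZE];
} Block;

static Block *blocks[NBLOCKS][NBLOCKS][NBLOCKS];  /* NULL = no points inside */

/* Returns a pointer to the point's data, or NULL if it is not tissue. */
float *point_at(int i, int j, int k)
{
    Block *b = blocks[i / BLOCKSIZE][j / BLOCKSIZE][k / BLOCKSIZE];
    if (!b)
        return NULL;                  /* whole block is outside the tissue */
    int ii = i % BLOCKSIZE, jj = j % BLOCKSIZE, kk = k % BLOCKSIZE;
    return b->isTissue[ii][jj][kk] ? &b->vars[ii][jj][kk] : NULL;
}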
You'll want to choose the block size as a balance between minimizing the number of times you do adjacent-point computations that cross block boundaries, and minimizing the number of points that are in blocks but not in the tissue region. I'd guess that at a minimum you want a row of the block to be a cache-line long. Probably the optimum is rather larger than that, though it will depend at least somewhat on your geometry.
To iterate over all the points in a 3D box, you would either just do lookups for each point, or (more efficiently) figure out which blocks the box touches, and iterate over the regions in those blocks that are within your box, skipping the ones where isTissue is false.
If you're doing a lot of deallocation and re-allocation of blocks, you probably want to "deallocate" blocks by dropping them into an "unused" pool, and then pulling blocks out of that pool rather than reallocating them. This also has the advantage that those blocks will already have all of their points set to "not present" (because that's why you deallocated the block), so you don't need to initialize them.
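A sketch of such a pool (hypothetical names; the per-point data is elided):

#include <stdlib.h>

typedef struct PoolBlock {
    struct PoolBlock *next_free;
    /* ... point data as above ... */
} PoolBlock;

static PoolBlock *free_list;

void release_block(PoolBlock *b)
{
    /* b's points are already all "not present", so no reset is needed */
    b->next_free = free_list;
    free_list = b;
}

PoolBlock *acquire_block(void)
{
    if (free_list) {
        PoolBlock *b = free_list;
        free_list = b->next_free;
        return b;
    }
    return calloc(1, sizeof(PoolBlock));  /* fresh blocks start zeroed */
}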
The experienced reader will probably recognize similarities between this and ways of laying out data for parallel computations; if you have a really big simulation, you can easily distribute the blocks across multiple nodes, and you only need to do cross-node communication for the cross-block computations. For that sort of application, you may find it useful to do nested levels of blocks, where you have meta-blocks (for the cross-node communication) containing smaller blocks (for the geometry).
Let's say I have a 2D array. Along the first axis I have a series of properties for one individual measurement. Along the second axis I have a series of measurements.
So, for example, the array could look something like this:
          personA   personB   personC
height    1.8       1.75      2.0
weight    60.5      82.0      91.3
age       23.1      65.8      48.5
or anything similar.
I want to change the size of the array very often: for example, ignoring personB's data and including personD and personE. I will be looping through "time", probably with more than 10^5 timesteps. At each timestep, there is a chance that each "person" in the array will be deleted, and a chance that they will introduce several new people into the simulation.
From what I can see there are several ways to manage an array like this:
Overwriting and infrequent reallocation
I could use a very large array with an extra row, in which I put a "skip" flag. So, if I decide I no longer need personB, I set the flag to 1 and ignore personB every time I loop through the list of people. When I need to add personD, I search the list for the first person with skip == 1, replace their data with the data for personD, and set skip = 0. If there isn't anyone with skip == 1, I copy the array, deallocate it, reallocate it with several more columns, and fill the first new column with personD's data. (A sketch of this scheme follows the disadvantages list below.)
Advantages:
infrequent allocation - possibly better performance
easy access to array elements
easier to optimise
Disadvantages:
if my array shrinks a lot, I'll be wasting a lot of memory
I need a whole extra row in the data, and I have to perform checks to make sure I don't use the irrelevant data. If the array shrinks from 1000 people to 1, I'm going to have to loop through 999 extra records
could encounter memory issues, if I have a very large array to copy
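A sketch of this scheme in C for concreteness (the question's language isn't stated; I use an array of structs rather than the 2D layout above, all names are hypothetical, and error handling is omitted):

#include <stdlib.h>

#define NPROPS 3                      /* height, weight, age */

typedef struct {
    double props[NPROPS];
    int skip;                         /* 1 = slot is free for reuse */
} Person;

typedef struct {
    Person *p;
    size_t cap;
} Population;

/* Reuse the first skipped slot; grow the array only when none is free. */
size_t add_person(Population *pop, const double props[NPROPS])
{
    size_t slot = pop->cap;
    for (size_t i = 0; i < pop->cap; i++)
        if (pop->p[i].skip) { slot = i; break; }

    if (slot == pop->cap) {           /* no free slot: infrequent reallocation */
        pop->cap = pop->cap ? pop->cap * 2 : 16;
        pop->p = realloc(pop->p, pop->cap * sizeof *pop->p);
        for (size_t i = slot + 1; i < pop->cap; i++)
            pop->p[i].skip = 1;       /* mark the new tail as free */
    }
    for (int k = 0; k < NPROPS; k++)
        pop->p[slot].props[k] = props[k];
    pop->p[slot].skip = 0;
    return slot;
}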
Frequent reallocation
Every time I want to add or remove some data, I copy and reallocate the entire array.
Advantages:
I know every piece of data in the array is relevant, so I don't have to check them
easy access to array elements
no wasted memory
easier to optimise
Disadvantages:
probably slow
could encounter memory issues, if I have a very large array to copy
A linked list
I refactor everything so that each individual's data includes a pointer to the next individual's data. Then, when I need to delete an individual, I simply remove it from the pointer chain and deallocate it, and when I need to add one, I link a new record onto the end of the chain. (A sketch follows this option's disadvantages below.)
Advantages:
every record in the chain is relevant, so I don't have to do any checks
no wasted memory
less likely to encounter memory problems, as I don't have to copy the entire array at once
Disadvantages:
no easy access to array elements. I can't slice with data(height,:), for example
more difficult to optimise
I'm not sure how this option will perform compared to the other two.
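For comparison, a C sketch of the chain itself (hypothetical names):

#include <stdlib.h>

#define NPROPS 3

typedef struct Individual {
    double props[NPROPS];             /* height, weight, age */
    struct Individual *next;
} Individual;

/* Unlink one individual and free it; prev is NULL when dead is the head. */
void remove_individual(Individual **head, Individual *prev, Individual *dead)
{
    if (prev)
        prev->next = dead->next;
    else
        *head = dead->next;
    free(dead);
}

Note that collecting all heights still means walking the whole chain, which is exactly the lost-slicing disadvantage above.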
--
So, to the questions: are there other options? When should I use each of these options? Is one of these options better than all of the others in my case?
I'm trying to apply only the minimal number of changes when a table's data is updated (it's an iOS app and the table view is UITableView of course, but I don't think that's relevant here). Those changes include adding new items, removing old ones, and also moving some existing ones to a different position without updating their content. I know there are similar questions on SO, but most of them only take adds and removes into account; existing items are either ignored or simply reloaded.
Usually the moves involve no more than a few existing elements, and the table can have up to 500 elements.
Items in the arrays are unique.
I can easily get the added items by subtracting the set of items in the old array from the set of items in the new array, and the opposite operation yields the set of deleted items.
So the problem comes down to finding the minimal differences between two arrays having the same elements.
[one, two, three, four]
[one, three, four, two]
Diffing those arrays should result in just a move from index 1 to 3.
Of course the algorithm doesn't know that there is only one such move. The change could just as well be:
[one, two, three, four, five]
[one, four, five, three, two]
This should result in moving index 1 to 4 and index 2 to 3, not in moving items 3 and 4 two indices to the left, because the latter approach could end up moving 300 items when the actual change is much simpler, at least in terms of applying the visual change to the view. Moves may require recalculating cell heights or performing lots of animations and related operations, and I would like to avoid those. As an example: marking an item as favorite, which moves that item to the top of a list of 300 items, takes about 400 milliseconds. That's because with the algorithm I'm currently using, about 100 items are moved one index up, one is moved to index 0, and the other 199 are left untouched. If I unmark it, one item is moved 100 indices down, which is great, but that is the perfect, and very rare, case.
I have tried finding an item's index in the old array and checking whether it changed in the new array. If it had changed, I moved the item from the new index back to the old one, recorded the opposite change, and compared the arrays until they were equal in terms of element order. But that sometimes results in moving huge chunks of items that were not actually changed, depending on those items' positions.
So the question is: what can I do?
Any ideas or pointers? Maybe a modified Levenshtein distance algorithm? Could the unmodified one work for this? If so, I'll probably have to implement it in one form or another.
Rubber duck talked:
I'm thinking about finding all unchanged sequences of items and moving around all the other items. Could that be the right direction?
I have an idea, I don't know if it would work; just my two cents. How about implementing an algorithm similar to longest common subsequence over your array items?
The idea would be to find large "substrings" of data that have kept their initial sequence, largest ones first. Once you've covered a certain threshold percentage of items with such long sequences, apply a more trivial algorithm to solve the remaining problems.
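For what it's worth, the length-only dynamic-programming LCS is short to implement; here is a C sketch over integer item ids (illustrative: items in the LCS can stay put, everything else moves, and a full solution would backtrack through the table to recover which items those are; with at most 500 items the 500 x 500 table is cheap):

#include <stdlib.h>

/* Length of the longest common subsequence of a[0..n) and b[0..m),
   in O(n * m) time and space. */
size_t lcs_length(const int *a, size_t n, const int *b, size_t m)
{
    size_t *dp = calloc((n + 1) * (m + 1), sizeof *dp);
    for (size_t i = 1; i <= n; i++)
        for (size_t j = 1; j <= m; j++) {
            if (a[i - 1] == b[j - 1])
                dp[i * (m + 1) + j] = dp[(i - 1) * (m + 1) + (j - 1)] + 1;
            else {
                size_t up = dp[(i - 1) * (m + 1) + j];
                size_t left = dp[i * (m + 1) + (j - 1)];
                dp[i * (m + 1) + j] = up > left ? up : left;
            }
        }
    size_t len = dp[n * (m + 1) + m];
    free(dp);
    return len;
}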
Sorry for being rather vague, it's only meant as a suggestion. Hope you solve your problem.