I have multiple instances of 4096 items. I need to search for and find an item on a recurring basis, and I'd like to optimize this. Since not all 4096 items may be in use, I thought an approach to speed things up would be to use a linked list instead of an array. Whenever I have to search for an item and find it, I'd move it to the head of the list so that next time it comes around, I'd only need minimal search (loop) effort. Does this sound right?
EDIT1
I don't think the binary search tree idea is really something I can use, as I have ordered data, like an array, i.e. every node following the previous one is larger, which defeats the purpose, doesn't it?
I have attempted to solve my problem with caching and came up with something like this:
pending edit
But the output I get suggests that it doesn't work the way I'd like it to:
Any suggestions on how I can improve this?
When it comes to performance there is only one important rule: measure it!
In your case you could, for example, look at it in two ways: a theoretical runtime analysis, and what is really going on on the machine. Both depend heavily on the characteristics of your 4096 items. If your data is sorted you can have an O(log n) search; if it is unsorted it is worst case O(n), etc.
Regarding your idea of a linked list: you might get more hardware cache misses because the data is no longer stored contiguously (spatial locality), so you could end up with a slower implementation even if your theoretical reasoning is right.
If you are interested in such problems in general, I recommend this cool talk from GoingNative 2013:
http://channel9.msdn.com/Events/GoingNative/2013/Writing-Quick-Code-in-Cpp-Quickly
Worst case, your search is still O(N) unless you sort the array or list, like Brett suggested. With a sorted list you increase the complexity of insertion (to insert in order), but your searching will be much faster. What you are suggesting is almost like a "cache". It's hard for us to say how useful that will be without any idea of how often a found item is searched for again in the near term. Clearly there are benefits to caching; it's why we have the whole L1, L2, L3 cache architecture. But whether it will work out for you is hard to say.
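For reference, if the items are kept sorted in a plain array, a lookup is only about 12 comparisons for 4096 elements. A minimal sketch, assuming the items are plain ints (find_sorted is just a name I picked):
#include <algorithm>   // std::lower_bound

// Returns the index of 'key' in the sorted array 'items', or -1 if absent.
// Assumes 'items' is sorted ascending and count <= 4096.
int find_sorted(const int *items, int count, int key) {
    const int *pos = std::lower_bound(items, items + count, key);
    if (pos != items + count && *pos == key)
        return (int)(pos - items);
    return -1;
}
Because the array is contiguous, those few probes are also friendly to the hardware cache, unlike chasing linked-list pointers.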
If your data can be put in a binary search tree: http://en.wikipedia.org/wiki/Binary_search_tree
Then you can use a data structure called Splay tree: "A splay tree is a self-adjusting binary search tree with the additional property that recently accessed elements are quick to access again" http://en.wikipedia.org/wiki/Splay_tree
Response to EDIT1:
If your data elements are not large, say only a few bytes or even tens of bytes each, 4096 of them can easily fit into memory. In that case what you need is a hash table. In C++ you would use unordered_map; for example, you can define unordered_map<int, ptr_to_your_node_type> and get an element in O(1) if your key type is int.
The fastest search can be O(1) if you design your hash well, and the worst case is O(n). If the items are large and cannot all fit into memory, you can use the so-called least recently used (LRU) cache algorithm to save memory.
Example code for an LRU cache:
#include <list>
#include <unordered_map>
using namespace std;

// Tracks how recently each key was used: the front of key_list holds the
// oldest key, the back holds the most recently used one.
template <typename K>
class Key_Age {
    list<K> key_list;
    unordered_map<K, typename list<K>::iterator> key_pos;
public:
    void access(K key) {            // key was just used again: move it to the back
        key_list.erase(key_pos[key]);
        insert_new(key);
    }
    void insert_new(K key) {        // brand-new key: append it and remember its position
        key_list.push_back(key);
        key_pos[key] = --key_list.end();
    }
    K pop_oldest() {                // evict the least recently used key
        K t = key_list.front();
        key_list.pop_front();
        key_pos.erase(t);           // drop the stale iterator as well
        return t;
    }
};

class LRU_Cache {
    int capacity;
    Key_Age<int> key_age;
    unordered_map<int, int> lru_cache;
public:
    LRU_Cache(int capacity) : capacity(capacity) {}

    int get(int key) {              // returns -1 if the key is not cached
        if (lru_cache.find(key) != lru_cache.end()) {
            key_age.access(key);
            return lru_cache[key];
        }
        return -1;
    }

    void set(int key, int value) {
        if (lru_cache.count(key) < 1) {                 // new key
            if ((int)lru_cache.size() >= capacity) {    // cache full: evict the oldest entry
                int oldest_key = key_age.pop_oldest();
                lru_cache.erase(oldest_key);
            }
            key_age.insert_new(key);
            lru_cache[key] = value;
            return;
        }
        key_age.access(key);                            // existing key: refresh its age
        lru_cache[key] = value;
    }
};
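A quick usage sketch; the capacity of 64 and the idea of caching array indices are just assumptions for illustration:
#include <cstdio>

int main() {
    // Cache up to 64 recently looked-up items, mapping an item key to,
    // for example, its index in the 4096-slot array.
    LRU_Cache cache(64);
    cache.set(42, 7);                  // remember that key 42 lives at index 7
    printf("%d\n", cache.get(42));     // prints 7; key 42 becomes most recently used
    printf("%d\n", cache.get(99));     // prints -1: not cached
    return 0;
}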
I decided to polish my knowledge of data structures and came across linked lists.
I was wondering if anyone has a practical example of where we use them in mobile development, or in any other kind of project that uses Swift as its language, since to me it seems the Array class already has enough flexibility as it is.
You should use linked lists anytime you need their primary benefit: O(1) insertion and deletion at arbitrary points in the list.
The mistake most people make with linked lists is trying to implement them as a value type in Swift. That's very hard, and the resulting data structure is generally useless. The most common form is with an enum:
enum List<T> {
    indirect case Cons(T, List<T>)
    case Nil
}
let list = List.Cons(1, .Cons(2, .Cons(3, .Nil)))
I've never seen anyone come up with an efficient implementation of that. Efficient collections in Swift require copy-on-write, and that's very hard to build for an enum.
The natural way to build linked lists in Swift is as a mutable reference type. That's very easy to make efficient and is very useful.
class List<T> {
    var value: T
    var next: List<T>?
    init(_ value: T, _ next: List<T>?) {
        self.value = value
        self.next = next
    }
}
let list = List(1, List(2, List(3, nil)))
And in that form, you'd use it any time you want O(1) insertion and deletion at arbitrary points in the list.
You could also build this as an immutable reference type (using let) if you had several large lists that shared a significant tail. I've never seen that come up in practice.
Most folks I know who go down this route want an immutable linked list so they can implement the many very beautiful recursive algorithms that are built on them and which are very common in FP languages (this may say more about the people I know than the question itself). But Swift is at best mildly hostile to recursion, so this is rarely worth the trouble IMO except as an exercise. But recursive algorithms are just one use of linked lists. As an O(1) arbitrary-insertion data structure they're as useful in Swift as they are in Pascal and C, and you build them roughly the same way.
I want to sort a linked list, but my code doesn't want to :)
Here it is:
void swap(element *p, element *q) {
    /* exchanges only the info fields, not the nodes themselves */
    int aux;
    aux = p->info;
    p->info = q->info;
    q->info = aux;
}

void ordonare(element *lista) {
    element *p;
    /* stop at the last node: p->urmator must not be NULL when we compare */
    for (p = lista; p != NULL && p->urmator != NULL; p = p->urmator) {
        if (p->info > p->urmator->info) {
            swap(p, p->urmator);
        }
    }
}
If this works, it will only sort the values, without changing the nodes' positions.
I can't seem to find the bug here, and I would appreciate it if you could also indicate a solution where the nodes themselves change position.
Thanks,
Radu
UPDATE
The code above works, but as @Daniel.S mentioned, it only does one pass through the list.
What condition am I supposed to use so that it keeps iterating until the list is sorted?
Thanks!!:)
Look up merge sort; it is perfect for linked lists and easy to implement. As the Wikipedia article (http://en.wikipedia.org/wiki/Merge_sort) notes:
Merge sort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a merge sort in such a way that it requires only Θ(1) extra space, and the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible.
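If you want to try it, here is a rough, untested sketch using the element/info/urmator names from your code; split, merge and merge_sort are names I made up:
element *merge(element *a, element *b) {
    /* merge two already sorted lists into one sorted list */
    if (a == NULL) return b;
    if (b == NULL) return a;
    if (a->info <= b->info) {
        a->urmator = merge(a->urmator, b);
        return a;
    } else {
        b->urmator = merge(a, b->urmator);
        return b;
    }
}

void split(element *lista, element **prima, element **adoua) {
    /* slow/fast pointer trick: 'rapid' advances two nodes per step */
    element *lent = lista, *rapid = lista->urmator;
    while (rapid != NULL) {
        rapid = rapid->urmator;
        if (rapid != NULL) {
            lent = lent->urmator;
            rapid = rapid->urmator;
        }
    }
    *prima = lista;
    *adoua = lent->urmator;   /* second half starts after the middle node */
    lent->urmator = NULL;     /* cut the list in two */
}

void merge_sort(element **lista) {
    if (*lista == NULL || (*lista)->urmator == NULL) return;  /* 0 or 1 node */
    element *prima, *adoua;
    split(*lista, &prima, &adoua);
    merge_sort(&prima);
    merge_sort(&adoua);
    *lista = merge(prima, adoua);
}
The recursive merge() keeps the sketch short; for very long lists you would want to rewrite it as a loop to avoid deep recursion.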
Your sorting algorithm is not complete. You're missing parts of the loop; one iteration over the list is not enough. Example:
You have the list {3,2,1}. If you sort it with your algorithm, it will first compare 3 and 2 and swap them, so you get
{2,3,1}. It will then compare 3 and 1 and swap those, too, giving {2,1,3}. Your algorithm finishes here, but the list isn't sorted yet.
Depending on the algorithm you want to implement (e.g. bubble sort), you have to add additional code.
What algorithm do you want to implement?
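If you go with bubble sort, for example, the missing condition is simply "repeat the pass until a full pass makes no swap". A rough sketch reusing your swap(); ordonare_bubble is just a new name so it doesn't clash with your function:
void ordonare_bubble(element *lista) {
    int schimbat;                            /* did the last pass swap anything? */
    if (lista == NULL) return;
    do {
        schimbat = 0;
        for (element *p = lista; p->urmator != NULL; p = p->urmator) {
            if (p->info > p->urmator->info) {
                swap(p, p->urmator);         /* still swaps only the info fields */
                schimbat = 1;
            }
        }
    } while (schimbat);                      /* done when a full pass changes nothing */
}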
I started with a programming assignment. I had to design a DFA based on graphs. Here is the data structure I used for it:
typedef struct n {
    struct n *next[255];   /* pointer to the next state; NULL if there is no
                              transition for that input character (indexed by
                              its ASCII value) */
    bool start_state;
    bool end_state;
} node;
Now I have a graph-based DFA structure ready. I need to use this DFA in several places, and it will get modified in each of them, but I want an unmodified DFA to be passed to each of these functions. One way is to create a copy of the DFA. What is the most elegant way of doing this, so that all the next pointers of the copy are initialized either to NULL or to a pointer to the corresponding copied state?
NOTE:
I want the copy to be created in the called function, i.e. I pass the DFA, and the called function creates its own copy and operates on that. This way, my original DFA remains untouched.
MORE NOTES:
From each node of the DFA I can have a directed edge connecting it to another node: if the transition takes place on input character c, then state->next[c] holds a pointer to the next node. It is possible that several elements of the next array are NULL. Modifying the DFA means both adding new nodes and altering existing ones.
If you need a private copy on each call, and since this is a linked data structure, I see no way to avoid copying the whole graph (except perhaps to do a copy-on-write to some sub branches if the performance is that critical, but the complexity is significant and so is the chance of bugs).
Had this been C++, you could have done this in a copy constructor, but in C you need to clone explicitly in every function. One way is to clone the entire structure (like Mark suggested); it's somewhat involved, since you need to track cycles/back edges in the graph (which show up as pointers to previously visited nodes that you don't want to reallocate, but rather reuse from what you've already allocated).
Another way, if you're willing to change your data structure, is to work with arrays: keep all the nodes in a single array of type node. The array should be big enough to accommodate all nodes if you know the limit, or can be reallocated on demand, and each "pointer" is replaced by a simple index.
Building this array is a bit different: instead of mallocing a new node, use the next available index (keep it on the side), or, if you're going to add/remove nodes on the fly, keep a queue/stack of "free" indices (populate it at the beginning with 1..N, and pop/push whenever you need a new slot or are about to free an old one).
The upside is that copying becomes much faster: since all the links are relative to the instance of the array, you just copy a chunk of contiguous memory (memcpy would now work fine).
Another upside is that the performance of using this data structure should be superior to the linked one, since the memory accesses are spatially close and easily prefetchable.
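A minimal sketch of that layout; MAX_NODES, NO_EDGE, array_node, dfa_pool and copy_dfa_pool are all names and bounds I'm making up here:
#include <string.h>    /* memcpy */
#include <stdbool.h>

#define MAX_NODES 1024          /* assumed upper bound on the number of DFA states */
#define NO_EDGE   (-1)          /* replaces a NULL pointer */

typedef struct {
    int  next[255];             /* index of the target state, or NO_EDGE */
    bool start_state;
    bool end_state;
} array_node;

typedef struct {
    array_node nodes[MAX_NODES];
    int        used;            /* how many slots are currently in use */
} dfa_pool;

/* Copying the whole DFA is now one contiguous copy, no traversal needed. */
void copy_dfa_pool(dfa_pool *dst, const dfa_pool *src) {
    memcpy(dst, src, sizeof(*src));
}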
You'll need to write a recursive function that visits all the nodes, with a global dictionary that keeps track of the mapping from the source graph nodes to the copied graph nodes. This dictionary will be basically a table that maps old pointers to new pointers.
Here's the idea. I haven't compiled it or debugged it...
#define MAX_NODES 1024   /* upper bound on the number of DFA states; adjust as needed */

struct {
    node* existing;
    node* copy;
} dictionary[MAX_NODES] = {0};   /* maps original nodes to their copies */

node* do_copy(node* existing)
{
    node* copy;
    int i;

    /* Already copied (shared state or cycle)? Reuse the existing copy. */
    for (i = 0; dictionary[i].existing; i++) {
        if (dictionary[i].existing == existing) return dictionary[i].copy;
    }

    copy = (node*)malloc(sizeof(node));
    /* Register the mapping before recursing, so cycles terminate. */
    dictionary[i].existing = existing;
    dictionary[i].copy = copy;

    for (int j = 0; j < 255; j++) {
        /* copy each transition, or keep the "no transition" marker */
        copy->next[j] = existing->next[j] ? do_copy(existing->next[j]) : NULL;
    }
    copy->end_state = existing->end_state;
    copy->start_state = existing->start_state;
    return copy;
}
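Usage from each caller is then just something like the following (modify_dfa is a hypothetical name); note that with the global dictionary above you would also want to reset it between two independent copies:
void modify_dfa(node *original) {
    node *private_copy = do_copy(original);   /* the original stays untouched */
    /* ... mutate private_copy freely ... */
}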
I'm trying to solve this for fun but I'm having a little trouble with the implementation. The problem goes like this:
Given n stacks of blocks containing m blocks each, design a program in C that controls a robotic arm that moves the blocks from an initial configuration to a final one using the minimum number of moves possible. The arm can only move one block at a time and can only take the block at the top of a stack. Your solution should use either pointers or recursive methods.
In other words the blocks should go from this (supposing there are 3 stacks and 3 blocks):
| || || |
|3|| || |
|2||1|| |
to this:
| ||1|| |
| ||2|| |
| ||3|| |
using the smallest number of moves, printing each move.
I was thinking that maybe I could use a tree of some sort to solve it (an n-ary tree maybe?), since that is the perfect use of pointers and recursive methods, but so far it has proved unsuccessful. I'm having a lot of trouble defining the structure that will store all the movements, since every time I want to add a new move to the tree I would have to check that the move has not been made before; I want each leaf to be unique so that when I find the solution it gives me the shortest path.
This is the data structure I was thinking of:
typedef struct tree {
    char value[MAX_BLOCK][MAX_COL];
    struct tree *kids;
    struct tree *brothers;
} Tree;
(I'm really new at C so sorry beforehand if this is all wrong, I'm more used to Java)
How would you guys do it? Do you have any good ideas?
You have the basic idea - though I am not sure why you have elected to choose brothers over the parent.
You can do this problem with a simple BFS search, but that is a slightly less interesting solution, and not the one you seem to have set yourself up for.
I think it will help if we concisely and clearly state our approach to the problem as a formulation of either Dijkstra's, A*, or some other search algorithm.
If you are unfamiliar with Dijkstra's, it is imperative that you read up on the algorithm before attempting any further. It is one of the foundational algorithms of shortest-path search.
With a familiarity with Dijkstra's, A* can readily be described as:
Dijkstra's minimizes distance from the start. A* adds a heuristic which minimizes the (expected) distance to the end.
With this algorithm in mind, let's state the specific inputs to an A* search algorithm:
Given a start configuration S-start and an ending configuration S-end, can we find the shortest path from S-start to S-end, given a set of rules R governed by a reward function T?
Now, we can envision our data structure not as a tree, but as a graph. Nodes will be board states, and we can transition from state to state using our rules, R. We will pick which edge to follow using the reward function T, the heuristic to A*.
What is missing from your data-structure is the cost. At each node, you will want to store the current shortest path, and whether it is finalized.
Let's make a modification to your data-structure which will allow us to readily traverse a graph and store the shortest path information.
typedef struct node {
    char **boardState;
    struct node *children;
    struct node *parent;
    int distance;
    char status;    // pseudo boolean
} node;
You may want to stop here if you were interested in discovering the algorithm for yourself.
We now consider the rules of our system: one block at a time, taken from the top of a stack. Each move will constitute an edge in our graph, whose weight is governed by the shortest number of moves from S-start plus our added heuristic.
We can then sketch a draft of the algorithm as follows:
node *curr = S-start;
while (curr != S-end) {
    curr->status = 'T';   // 'T' for True: curr is now finalized
    for (each node *child of curr) {
        // only update if going through curr is cheaper than the child's current best distance
        int updated = setMin(child->distance, curr->distance + 1 + heuristic(child->board));
        if (updated == 1) child->parent = curr;
    }
    // set curr to the unexplored node with the globally minimal distance
}
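As for heuristic(), one plausible choice (an assumption on my part, not the only option) is the number of blocks that are not yet in their final position: every such block must be picked up at least once, so it never overestimates the remaining moves. A sketch, assuming the char[MAX_BLOCK][MAX_COL] board layout from your question with ' ' marking an empty cell:
// Counts blocks that are not already where the goal configuration wants them.
int heuristic(char board[MAX_BLOCK][MAX_COL], char goal[MAX_BLOCK][MAX_COL]) {
    int misplaced = 0;
    for (int r = 0; r < MAX_BLOCK; r++)
        for (int c = 0; c < MAX_COL; c++)
            if (board[r][c] != ' ' && board[r][c] != goal[r][c])
                misplaced++;
    return misplaced;
}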
You can then find the shortest path by tracing the parents backwards from S-end to S-start.
If you are interested in these types of problems, you should consider taking an upper-level AI course, where they approach these types of problems :-)
I have a very simple binary tree structure, something like:
struct nmbintree_s {
    unsigned int size;
    int (*cmp)(const void *e1, const void *e2);
    void (*destructor)(void *data);
    nmbintree_node *root;
};

struct nmbintree_node_s {
    void *data;
    struct nmbintree_node_s *right;
    struct nmbintree_node_s *left;
};
Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'.
I was thinking on two approaches:
1) Using a recursive function, something like:
unsigned int nmbintree_size(nmbintree_node *node) {
    if (node == NULL) {
        return 0;
    }
    return nmbintree_size(node->left) + nmbintree_size(node->right) + 1;
}
2) A preorder / inorder / postorder traversal done in an iterative way (using stack / queue) + counting the nodes.
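Roughly, what I have in mind for option 2 is something like this; MAX_DEPTH is an arbitrary bound I would have to pick (or I'd grow the stack dynamically):
#define MAX_DEPTH 1024   /* assumed bound on tree depth for this sketch */

unsigned int nmbintree_size_iter(nmbintree_node *root) {
    nmbintree_node *stack[MAX_DEPTH];
    int top = 0;
    unsigned int count = 0;

    if (root == NULL) return 0;
    stack[top++] = root;
    while (top > 0) {                       /* preorder traversal, counting nodes */
        nmbintree_node *n = stack[--top];
        count++;
        if (n->left)  stack[top++] = n->left;
        if (n->right) stack[top++] = n->right;
    }
    return count;
}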
What approach do you think is more 'memory failure proof' / performant ?
Any other suggestions / tips ?
NOTE: I am probably going to use this implementation in the future for small projects of mine. So I don't want to unexpectedly fail :).
Just use a recursive function. It's simple to implement this way and there is no need to make it more complicated.
If you were doing it "manually" you'd basically end up implementing the same thing, just that you wouldn't use the system call stack for temporary variables but your own stack. Usually this won't have any advantages outweighing the more complicated code.
If you later find out that a substantial amount of time in your program is spent calculating the sizes of trees (which probably won't happen), you can still profile things and try how a manual implementation performs. But then it might also be better to make algorithmic improvements, like keeping track of the change in size during the extraction process itself.
If your "very simple" binary tree isn't balanced, then the recursive option is scary, because of the unconstrained recursion depth. The iterative traversals have the same time problem, but at least the stack/queue is under your control, so you needn't crash. In fact, with flags and an extra pointer in each node and exclusive access, you can iterate over your tree without any stack/queue at all.
Another option is for each node to store the size of the sub-tree below it. This means that whenever you add or remove something, you have to track all the way up to the root updating all the sizes. So again if the tree isn't balanced that's a hefty operation.
If the tree is balanced, though, then it isn't very deep. All options are failure-proof, and performance is estimated by measurement :-) But based on your tree node struct, either it's not balanced or else you're playing silly games with flags in the least significant bits of pointers...
There might not be much point being very clever with this. For many practical uses of a binary tree (in particular if it's a binary search tree), you realise sooner rather than later that you want it to be balanced. So save your energy for when you reach that point :-)
How big is this tree, and how often do you need to know its size? As sth said, the recursive function is the simplest and probably the fastest.
If the tree is like 10^3 nodes, and you change it 10^3 times per second, then you could just keep an external count, which you decrement when you remove a node, and increment when you add one. Other than that, simple is best.
Personally, I don't like any solution that requires decorating the nodes with extra information like counts and "up" pointers (although sometimes I do it). Any extra data like that makes the structure denormalized, so changing it involves extra code and extra chances for errors.
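If you do go the external-count route, a sketch could look like this; nmbintree_insert_node and nmbintree_delete_node are placeholders for whatever the real insert/remove routines are called, and the count just lives in the tree struct's existing size field:
/* Placeholders for the real insert/remove routines. */
void nmbintree_insert_node(struct nmbintree_s *tree, void *data);
void nmbintree_delete_node(struct nmbintree_s *tree, void *data);

/* Wrappers that keep the cached size in sync on every change. */
void nmbintree_add(struct nmbintree_s *tree, void *data) {
    nmbintree_insert_node(tree, data);
    tree->size++;
}

void nmbintree_remove(struct nmbintree_s *tree, void *data) {
    nmbintree_delete_node(tree, data);
    tree->size--;
}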