I encountered a question where, given an array of integers, I needed to find a pair that adds up to a given sum.
The first solution that came to me was to check all possible pairs, which takes about O(n^2) time, but the interviewer asked me to come up with an improved running time. I then suggested sorting the array and doing a binary search, but that is still O(n log n).
Overall I failed to come up with the O(n) solution. Googling afterwards, I learned that it can be achieved with extra memory by using a set.
I know there cannot be any fixed rules for thinking about algorithms, but I am optimistic that there must be some heuristic or mental model for reasoning about problems on arrays. I want to know if there is any generic strategy, or array-specific way of thinking, that would help me explore the solution space rather than freezing up.
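For reference, here is a rough C sketch of that set-based idea: each element is tested against the set of values seen so far, giving O(n) expected time. Since C has no built-in set, the sketch uses a small fixed-size open-addressing hash table; the table size, the sentinel value and the helper names are all made up for the example.

    #include <stdio.h>
    #include <stdbool.h>

    #define TABLE_SIZE 1024          /* illustrative size; should exceed n */
    #define EMPTY_SLOT 0x7fffffff    /* sentinel assumed not to occur in the input */

    static int table[TABLE_SIZE];

    static unsigned hash(int x) { return ((unsigned)x * 2654435761u) % TABLE_SIZE; }

    static void set_insert(int x) {
        unsigned i = hash(x);
        while (table[i] != EMPTY_SLOT)      /* linear probing */
            i = (i + 1) % TABLE_SIZE;
        table[i] = x;
    }

    static bool set_contains(int x) {
        unsigned i = hash(x);
        while (table[i] != EMPTY_SLOT) {
            if (table[i] == x) return true;
            i = (i + 1) % TABLE_SIZE;
        }
        return false;
    }

    /* Returns true and fills *a, *b if some pair sums to target. */
    bool find_pair(const int *arr, int n, int target, int *a, int *b) {
        for (int i = 0; i < TABLE_SIZE; i++) table[i] = EMPTY_SLOT;
        for (int i = 0; i < n; i++) {
            int need = target - arr[i];
            if (set_contains(need)) { *a = need; *b = arr[i]; return true; }
            set_insert(arr[i]);
        }
        return false;
    }

    int main(void) {
        int arr[] = {2, 7, 11, 15};
        int a, b;
        if (find_pair(arr, 4, 9, &a, &b))
            printf("%d + %d = 9\n", a, b);
        return 0;
    }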
Generally, think about how to do it naively first. If you're in an interview, make clear what you are doing: say "well, the naive algorithm would be ...".
Then see if you can spot any repeated work or redundant steps. Interview questions tend to be a bit unrealistic, mathematical special-case type questions. Real problems more often come down to using hash tables or sorted arrays. A sort is O(N log N), but it makes all subsequent searches O(log N), so it's usually worth sorting data. If the data is dynamic, keep it sorted via a binary search tree (C++ "set").
Secondly, can you "divide and conquer" or "build up"? Is the N = 2 case trivial? In that case, can we divide N = 4 into two N = 2 cases, plus a combining step? You might need to divide the input into two groups, low and high, in which case it is "divide and conquer", or you might need to start with random pairs, then merge into fours, eights and so on, in which case it is "build up".
If the problem is geometrical, can you exploit local coherence? If the problem is realistic rather than mathematical, are there typical inputs you can exploit (real travelling salesmen don't travel between cities on a random grid, but over a hub-and-spoke transport system with fast roads connecting major cities and slow roads branching out to customer destinations)?
Let's say we're programming a snake game.
The game has a 20x20 playing field. That means there are 400 individual cells.
In snake, a new apple has to generate on a random unoccupied field/cell.
There are two common ways to do this:
Try to place the apple on a random cell, until you hit a free one
Create a list of all free cells, and randomly choose one to place the apple on
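For concreteness, here is a rough C sketch of both approaches. The grid layout and the names W, H and grid are just placeholders for the example, and rand() is used without worrying about modulo bias.

    #include <stdlib.h>

    #define W 20
    #define H 20

    int grid[H][W];   /* nonzero = occupied by the snake */

    /* Approach 1: retry random cells until a free one is hit. */
    void place_apple_retry(int *ax, int *ay) {
        int x, y;
        do {
            x = rand() % W;
            y = rand() % H;
        } while (grid[y][x] != 0);
        *ax = x; *ay = y;
    }

    /* Approach 2: collect all free cells, then pick one uniformly. */
    void place_apple_list(int *ax, int *ay) {
        int free_x[W * H], free_y[W * H], count = 0;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (grid[y][x] == 0) { free_x[count] = x; free_y[count] = y; count++; }
        int pick = rand() % count;          /* assumes at least one free cell */
        *ax = free_x[pick]; *ay = free_y[pick];
    }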
Most people on here would recommend the second approach, because of "better performance". And I thought the same. But after having had statistics in my computer science class, I'm not so sure anymore:
At the start of the game, by logic, the first approach would be faster, because it would very likely instantly find a free cell, whereas the second approach has the big overhead of creating a list of free cells.
And a little performance test in JS confirms this:
At the end of the game - which isn't reached often - when there is only one free field left, the second approach would probably win in speed, because it would always find the field in one go. The first approach needs way more tries - using logarithms we can calculate how many.
50% of the time, it takes fewer than 276 tries. 90% of the time, it takes fewer than 919 tries. 99.9999% of the time, it takes fewer than 5519 tries. And so on: the number of tries is log base 399/400 of (1 - p), where p is the desired success probability. A few thousand extra tries are nothing for a modern computer, so it should only be a little bit slower. This is confirmed by another performance test:
0%-4% slower on average ... negligible.
And most of the time, most cells are free, which means the first approach is way faster on average.
On top of that, in many languages, for example in C, the first approach would be shorter in terms of code. There is no overhead for a second array.
This brings me to the conclusion that randomly choosing a cell until you find a free one is the best practice, and that creating a list of empty cells is premature optimization which actually does the opposite (makes performance worse on average because of the added overhead).
Do you agree? Did I miss something?
What's the best practice in your opinion and why?
In C++, a map is used to look up a value by its key.
How can the same be achieved in the C programming language, without using C++ STL concepts?
All you really need to do is define a struct that contains at least members for your key and value, and then define a container that can hold multiple instances of that struct. The container just needs to support three basic operations: Insert, Remove, and Find.
Almost any sort of data structure will do, but with different tradeoffs between easy implementation and efficiency. Some of the most likely options are:
An array or linked list: If you know there will never be a large amount of data and efficiency isn't really a concern, you could just go simple. The Find operation can just be a simple linear search.
A sorted array: You could also choose to keep a simple array sorted every time you Insert or Remove an entry. This lets the Find algorithm use a binary search, and might be appropriate if Find will be needed much more often than Insert or Remove.
A red-black tree: If you want the O(log N) Insert/Remove/Find performance provided by C++'s std::map for large data sets, a red-black tree is a good choice. This also guarantees the elements are sorted by key, which is useful if you need to deal with subranges of the data.
A hash table: C++11 introduced std::unordered_map, and if you can find or produce a hash function for your key type, you can imitate it in C by implementing a hash table. This data structure has O(1) average case Insert/Remove/Find, but O(N) worst case Insert/Remove/Find. The entries are not sorted.
Implementing a red-black tree or hash table can be a bit tricky, but there are a number of free implementations available that you could find and use.
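Going back to the simplest option above (the plain unsorted linked list), a rough C sketch of the struct and the three operations might look like this; the key and value types are just placeholders for the example.

    #include <stdlib.h>
    #include <string.h>

    typedef struct Node {
        char *key;            /* placeholder key type */
        int value;            /* placeholder value type */
        struct Node *next;
    } Node;

    /* Find: linear search, O(N). Returns NULL if the key is absent. */
    Node *map_find(Node *head, const char *key) {
        for (Node *n = head; n != NULL; n = n->next)
            if (strcmp(n->key, key) == 0)
                return n;
        return NULL;
    }

    /* Insert: overwrite if the key exists, else push a new node at the front. */
    Node *map_insert(Node *head, const char *key, int value) {
        Node *n = map_find(head, key);
        if (n != NULL) { n->value = value; return head; }
        n = malloc(sizeof *n);
        n->key = malloc(strlen(key) + 1);
        strcpy(n->key, key);
        n->value = value;
        n->next = head;
        return n;                 /* new head */
    }

    /* Remove: unlink and free the node with the given key, if any. */
    Node *map_remove(Node *head, const char *key) {
        Node **link = &head;
        while (*link != NULL) {
            if (strcmp((*link)->key, key) == 0) {
                Node *dead = *link;
                *link = dead->next;
                free(dead->key);
                free(dead);
                break;
            }
            link = &(*link)->next;
        }
        return head;
    }

The same three-function interface carries over unchanged if you later swap the linked list for a sorted array, a red-black tree, or a hash table.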
There is no such data structure available from the standard C library. You need to implement it yourself.
You'd have to implement a dictionary (it is usually a red-black tree in the case of std::map) on your own, or use one of the libraries that provide one.
Which would be the best method for uniformly distributing values into buckets? The values are generated using a Gaussian distribution, hence most values are near the median.
I am implementing bucket sort in CUDA. Since most of the values are generated near the median, they end up in only 4-5 buckets. I can make a large number of buckets, and would like to distribute the values evenly across all/most buckets instead of just 3-4 of them.
It seems you're looking for a histogram.
If you are looking for performance, go for the CUB or Thrust libraries as the two comments point out; otherwise you'll end up spending a lot of time and still not achieving those performance levels.
If you decide to implement the histogram yourself, I'd recommend starting with the simplest implementation: a two-step approach. In the first step you count the number of elements that fall into each bucket, so you can create the container structure with the right array sizes. The second step simply copies the elements into the corresponding array of the structure.
From here, you can evolve to more complex versions, for example using a prefix sum to calculate the starting position of each bucket within one large array.
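Here is a plain sequential C sketch of that two-step scheme (count, prefix sum, then scatter into one large array). The bucket boundaries are just an even split of an assumed [min_val, max_val) range, and all names are illustrative; in CUDA, the counting and scatter loops would typically become kernels using atomic increments, and the prefix sum would be a scan (for example from Thrust or CUB).

    #include <stdlib.h>

    /* Distributes n values into num_buckets contiguous segments of out[].
     * start[] receives the offset of each bucket; values are assumed to
     * lie in [min_val, max_val). */
    void bucket_scatter(const float *in, int n, float min_val, float max_val,
                        int num_buckets, float *out, int *start)
    {
        int *count  = calloc(num_buckets, sizeof *count);
        int *offset = malloc(num_buckets * sizeof *offset);
        float width = (max_val - min_val) / num_buckets;

        /* Step 1: count how many values fall into each bucket. */
        for (int i = 0; i < n; i++) {
            int b = (int)((in[i] - min_val) / width);
            if (b >= num_buckets) b = num_buckets - 1;   /* clamp edge case */
            count[b]++;
        }

        /* Exclusive prefix sum gives each bucket's starting position. */
        int sum = 0;
        for (int b = 0; b < num_buckets; b++) {
            start[b] = sum;
            offset[b] = sum;
            sum += count[b];
        }

        /* Step 2: copy each value to its bucket's next free slot. */
        for (int i = 0; i < n; i++) {
            int b = (int)((in[i] - min_val) / width);
            if (b >= num_buckets) b = num_buckets - 1;
            out[offset[b]++] = in[i];
        }

        free(count);
        free(offset);
    }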
The application is bound by memory traffic (there is hardly any arithmetic workload), so try to improve the locality and the access patterns as much as you can.
Of course, check out the open source code to get some ideas.
Here is the problem:
You have a list of ingredients (assume each has unit value) with their respective quantities, and a list of products. Each product has a price and a recipe, which contains the needed ingredients and their quantities.
You need to maximize the total proceeds from those products with the given ingredients.
The first thing that came to mind is to compute a price/(number of needed items) ratio and start making the products with the highest ratio. I know that this is some kind of greedy algorithm (if I'm not wrong) and does not always lead to the best solution, but I had no other implementable ideas.
Another way may be to brute-force all the possibilities, but I can't see how to implement it; I'm not so familiar with brute-forcing. My first brute-force algorithm was this one, but it was easy because it dealt with plain numbers and, furthermore, the next element was not constrained by the previous elements.
Here things are different, because the next element is a function of the available ingredients, which are affected by the previously made products, and so on.
Have you any hint? This is some kind of homework, so I prefer not a direct solution, but something to start from!
The language I have to use is C
Many thanks in advance :)
I would first try looking at this as a linear programming problem; there are algorithms available to solve them efficiently.
If your problem can't accept a fractional number of items, then it is actually an integer programming problem. There are algorithms available to solve these as well, but in general it can be difficult (as in time-consuming) to solve large integer programming problems exactly.
Note that a linear programming solution may be a good first approximation to an integer programming solution, e.g. if your production quantities are large.
If you have the CPU cycles to do it (and efficiency doesn't matter), brute force is probably the best way to go, because it's the simplest and also guaranteed to always (eventually) find the best answer.
Probably the first thing to do is figure out how to enumerate your options -- i.e. come up with a way to list all the different possible combinations of pastries you could make with the given ingredients. Don't worry about prices at first.
As a (contrived) example, with a cup of milk and a dozen eggs and some flour and sugar, I could make:
12 brownies
11 brownies and 1 cookie
10 brownies and 2 cookies
[...]
1 brownie and 11 cookies
12 cookies
Then once you have that list, you can iterate over the list, calculate how much money you would make on each option, and choose the one that makes the most money.
As far as generating the list of options goes, I would start by calculating how many cookies you could make if you were to make only cookies; then how many brownies you could make if you were to make only brownies, and so on. That will give you an absolute upper bound on how many of each item you ever need to consider. Then you can just consider every combination of items with per-type-numbers less than or equal to that bound, and throw out any combinations that turn out to require more ingredients than you have on hand. This would be really inefficient and slow, of course, but it would work.
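Since this is homework I'll keep it a sketch rather than a full solution, but the enumeration described above could be structured like this in C: recursively choose how many units of each product to make (bounded by what the remaining ingredients allow) and keep the best revenue seen. All names and the fixed array sizes are made up for the example.

    #define MAX_ING  16
    #define MAX_PROD 8

    int stock[MAX_ING];                 /* available quantity of each ingredient  */
    int recipe[MAX_PROD][MAX_ING];      /* ingredients needed per unit of product */
    int price[MAX_PROD];
    int num_ing, num_prod;

    /* Can we make one more unit of product p with the current stock? */
    static int can_make(int p) {
        for (int i = 0; i < num_ing; i++)
            if (stock[i] < recipe[p][i]) return 0;
        return 1;
    }

    static void use(int p, int sign) {      /* sign = -1 to consume, +1 to restore */
        for (int i = 0; i < num_ing; i++)
            stock[i] += sign * recipe[p][i];
    }

    /* Try every count of product p (0, 1, 2, ...) and recurse on the rest. */
    int best_revenue(int p) {
        if (p == num_prod) return 0;
        int best = best_revenue(p + 1);              /* make zero units of p */
        int made = 0, revenue = 0;
        while (can_make(p)) {
            use(p, -1); made++; revenue += price[p];
            int r = revenue + best_revenue(p + 1);
            if (r > best) best = r;
        }
        while (made-- > 0) use(p, +1);               /* restore the stock */
        return best;
    }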
What is the best algorithm to sort a linked list [in C/C++]?
Merge sort is suitable for sorting linked lists. Some details here. Sample C code here.
Why is it suitable? Well, to put it simply, mergesort's main component - the merging of sorted subsequences - can be easily done on two linked lists, because all it ever needs is comparing the head elements, so it doesn't require the random access an array has.
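To make that concrete, here is a small C sketch of just that merge step on singly-linked lists (the node layout is assumed for the example); a full merge sort only adds the splitting and recursion around it.

    struct node {
        int value;
        struct node *next;
    };

    /* Merge two already-sorted lists into one sorted list.
     * Only the head elements are ever compared, so no random access is needed. */
    struct node *merge(struct node *a, struct node *b) {
        struct node dummy;              /* temporary head to simplify the loop */
        struct node *tail = &dummy;
        dummy.next = NULL;

        while (a != NULL && b != NULL) {
            if (a->value <= b->value) { tail->next = a; a = a->next; }
            else                      { tail->next = b; b = b->next; }
            tail = tail->next;
        }
        tail->next = (a != NULL) ? a : b;   /* append whatever is left */
        return dummy.next;
    }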
Also, here's an article named "A Comparative Study of Linked List Sorting Algorithms" that you may find interesting.
Merge sort.
Better yet, just use std::list and its sort() method.
Depends on what you mean by "best". Do you mean quickest to implement, or most efficient runtime? The size and characteristics of what you're sorting also play a role. Sorting 10 items versus sorting 10 million items will likely lead to a different "best" algorithm. For large datasets where "best" means fastest runtime, I like quicksort.
A stack-based mergesort implementation is the weapon of choice.
Some time ago, I needed to sort a doubly-linked list. Wikipedia told me to use mergesort as it's the only one of the three common general-purpose sorting algorithms (heapsort, mergesort and quicksort) which performs well when random-access is slow and even provided a link to Simon Tatham's implementation.
First, I re-implemented his algorithm, which showed that it was basically the well-known iterative version. It's a bad choice as it relies on random access and therefore performs poorly.
The recursive version is better suited for linked lists as it only needlessly traverses one of the lists when splitting and not both (as does Simon's variant).
What you should use is a stack-based version as this implementation gets entirely rid of unnecessary list traversals.
It's worth thinking about how you got to this point. If you're going to need your items sorted in one particular way, consider keeping the list sorted and inserting them directly into the sorted list. That way you don't have to sort it when you need it.
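For example, a minimal C sketch of inserting into an already-sorted singly-linked list (the node layout is assumed for the example):

    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Insert value into an already-sorted list and return the (possibly new) head. */
    struct node *insert_sorted(struct node *head, int value) {
        struct node *n = malloc(sizeof *n);
        n->value = value;

        struct node **link = &head;
        while (*link != NULL && (*link)->value < value)
            link = &(*link)->next;          /* find the first node not smaller */

        n->next = *link;
        *link = n;
        return head;
    }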