Comparison of routing algorithms for pedestrian navigation [closed] - mobile

I'm currently working on software for pedestrian navigation, and the part that is hard for me is finding the routing algorithm best suited for that task. I have heard that A* is one of the algorithms actually used in this kind of software.
Could you suggest other algorithms that solve this problem? How do they compare in terms of performance? How much time and memory do they require?
Thanks in advance for answers.

Well, you could have a look at iterative deepening depth-first search (IDDFS). It's a very smart approach to your problem in my opinion, but you risk quite bad time complexity compared to A* with a good heuristic, since the algorithm is designed to do a complete exploration.
From Wikipedia:
IDDFS combines depth-first search's space-efficiency and breadth-first search's completeness (when the branching factor is finite). It is optimal when the path cost is a non-decreasing function of the depth of the node. The space complexity of IDDFS is O(bd), where b is the branching factor and d is the depth of the shallowest goal. Since iterative deepening visits states multiple times, it may seem wasteful, but it turns out to be not so costly, since in a tree most of the nodes are in the bottom level, so it does not matter much if the upper levels are visited multiple times.
And again, regarding time complexity:
The time complexity of IDDFS in well-balanced trees works out to be the same as depth-first search: O(b^d).
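To make the comparison concrete, here is a minimal A* sketch in Python over an explicit graph, with made-up node names and a straight-line-distance heuristic. It is only meant to show the shape of the algorithm being weighed against IDDFS, not a production routing implementation.

    # Minimal A* sketch (hypothetical graph and heuristic, for illustration only).
    import heapq
    import math

    def a_star(graph, coords, start, goal):
        """graph: {node: [(neighbor, edge_cost), ...]}, coords: {node: (x, y)}."""
        def h(n):  # admissible heuristic: straight-line distance to the goal
            (x1, y1), (x2, y2) = coords[n], coords[goal]
            return math.hypot(x1 - x2, y1 - y2)

        open_heap = [(h(start), 0.0, start, [start])]  # entries are (f, g, node, path)
        best_g = {start: 0.0}
        while open_heap:
            f, g, node, path = heapq.heappop(open_heap)
            if node == goal:
                return g, path
            for neighbor, cost in graph.get(node, []):
                new_g = g + cost
                if new_g < best_g.get(neighbor, float("inf")):
                    best_g[neighbor] = new_g
                    heapq.heappush(open_heap,
                                   (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
        return None  # goal unreachable

    # Tiny made-up street graph: A -> B -> D is shorter than A -> C -> D.
    coords = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (2, 0)}
    graph = {"A": [("B", 1.0), ("C", 1.5)],
             "B": [("D", 1.0)],
             "C": [("D", 2.5)]}
    print(a_star(graph, coords, "A", "D"))  # (2.0, ['A', 'B', 'D'])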

Developing algorithmic thinking [closed]

I encountered a question where, given an array of integers, I needed to find a pair that adds up to a given sum.
The first solution that came to me was to check all possible pairs, which is about O(n^2) time, but the interviewer asked me to come up with an improved running time, at which point I suggested sorting the array and then doing a binary search, but that is still O(n log n).
Overall I failed to come up with the O(n) solution. After googling, I learned that it can be achieved with extra memory, using a set.
I know that there cannot be any fixed rules for thinking about algorithms, but I am optimistic and think that there must be some heuristic or mental model for thinking about algorithms on arrays. I want to know if there is any generic strategy, or array-specific way of thinking, that would help me explore the solution space rather than getting stuck.
Generally, think about how to do it naively first. If you're in an interview, make clear what you are doing: say "well, the naive algorithm would be ...".
Then see if you can spot any repeated work or redundant steps. Interview questions tend to be a bit unrealistic, mathematical special-case type questions. Real problems more often come down to using hash tables or sorted arrays. A sort is O(N log N), but it makes all subsequent searches O(log N), so it's usually worth sorting data. If the data is dynamic, keep it sorted via a binary search tree (C++ std::set).
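For the pair-sum question above, the hash-table idea looks roughly like the following sketch (names are my own; this is the standard O(n) set-based technique, not code from any particular source):

    # Minimal sketch of the O(n) pair-sum approach using a hash set.
    def find_pair_with_sum(numbers, target):
        """Return a pair (a, b) from numbers with a + b == target, or None."""
        seen = set()
        for value in numbers:
            complement = target - value
            if complement in seen:      # O(1) average lookup in the hash set
                return complement, value
            seen.add(value)
        return None

    print(find_pair_with_sum([4, 9, 2, 7, 5], 11))  # (9, 2), since 9 + 2 == 11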
Secondly, can you "divide and conquer" or "build up"? Is the N = 2 case trivial? If so, can we divide N = 4 into two N = 2 cases plus a combination step? You might need to divide the input into two groups, low and high, in which case it is "divide and conquer", or you might need to start with random pairs, then merge into fours, eights and so on, in which case it is "build up".
If the problem is geometrical, can you exploit local coherence? If the problem is realistic rather than purely mathematical, are there typical inputs you can exploit? (Real travelling salesmen don't travel between cities on a random grid, but over a hub-and-spoke transport system with fast roads connecting major cities and slower roads branching out to customer destinations.)

If all data were put into memory, what's the fastest way to do a "SELECT ...WHERE ..." thing? [closed]

If all data were put into memory, which means the medium is much faster, what's the fastest way to do a "SELECT .. WHERE .." query (i.e. filter the data)? So far the options in my mind are:
1) B-tree-like structures, but they may still need an index and more space
2) a fixed-length array, which is smaller but may be slower.
So are there any other, better ways, if both speed and size are concerns?
It depends on your specific case - which operations you need to be fast, the exact size of the data, and more. Some examples:
For AND queries, a set of sorted lists is usually maintained (a list for each feature). This data structure is called an inverted index, and is often used by search engines to get the relevant documents for a given query (Apache Lucene uses this data structure, for example).
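As a rough illustration (document ids and terms are made up, and Python set intersection stands in for the merge-based posting-list intersection a real engine like Lucene would use):

    # Minimal inverted-index sketch answering an AND query.
    from collections import defaultdict

    docs = {
        1: "fast in memory filter",
        2: "b tree index on disk",
        3: "in memory hash filter",
    }

    # Build the inverted index: term -> sorted list of document ids containing it.
    index = defaultdict(list)
    for doc_id in sorted(docs):
        for term in set(docs[doc_id].split()):
            index[term].append(doc_id)

    def and_query(*terms):
        """Intersect the posting lists of all terms, starting from the smallest."""
        postings = sorted((index.get(t, []) for t in terms), key=len)
        result = set(postings[0])
        for plist in postings[1:]:
            result &= set(plist)
        return sorted(result)

    print(and_query("memory", "filter"))  # [1, 3]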
If arrays can be used - and iteration over the data is needed - they are a very efficient approach, since arrays are basically the most cache-efficient data structure there is. Reading sequentially from an array is in most cases much faster than from any other data structure, since it gives you the fewest cache misses, which are often the bottleneck when iterating over your data.
If your data is strings, for example, and you are going to filter on some string attribute (a prefix, for example), a data structure designed for strings, such as a trie or a radix tree, might get you the best performance.
Bottom line: if you are going to build something custom in order to outperform the default libraries, you should consider the specific details of the problem before choosing your data structure.

Is there any advantage to limiting the length of a password if the password is stored as a hash? [closed]

I've seen a lot of sites that limit the length of a password to something like 10 or 12 characters. I understand that that could be a sign that they are storing the password in plain text and they limit the length because they think it will save space, but if they are storing a password as a hash, is there any advantage to this limit?
Edit: I am quite aware that longer passwords are stronger and that a hash is a hash, with the same length regardless of the input. My real question is: is there some sort of convoluted reason that system designers use to rationalize this inherently insecure practice?
A hash function takes an input of arbitrary length and returns a value of fixed length. So no, the length of the input has no effect on the length of the output; the output length is always the same for a given hash function.
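A quick illustration with Python's hashlib (SHA-256 is used here only to show the fixed output length; real password storage should use a dedicated scheme such as bcrypt, scrypt, or Argon2):

    # Digests of a 4-character and a 200-character password have the same length.
    import hashlib

    short_pw = "abcd"
    long_pw = "x" * 200

    for pw in (short_pw, long_pw):
        digest = hashlib.sha256(pw.encode("utf-8")).hexdigest()
        print(len(pw), len(digest))  # prints "4 64" then "200 64"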
Putting lower bounds on the length of passwords is only ever done to encourage users to use stronger passwords. Upper bounds, I couldn't say. It could be something against spam bots, or they may not want to crunch a 200-character password for performance reasons.
And nobody stores passwords in plaintext.
No. And there's not much of an advantage if they're storing it in plain text either, really, in real life.
To directly answer your question: no, there is no benefit to deliberately implementing a low maximum length. What you'll find is that this often happens when there's a legacy dependency; your password can only be 10 characters long because it's back-ending into a system which enforces this limit.
I suspect this is the case in scenarios such as Tesco's. You've got a system that's 13-plus years old and, as you say, it's (allegedly) storing passwords in the clear, and there are possibly multiple points where that 10-character limit is enforced (DB column, SQL command parameters, etc.).
The only reason I can possibly think of - and it's a stretch - is that a text box with no limit could allow the maximum request length to be exceeded, but we're talking ridiculously long passwords here.

Normalizing too much vs too little, examples? [closed]

I don't usually design databases, so I'm having some doubts about how to normalize (or not) a table for registering users. The fields I'm having doubts about are:
locationTown: I plan to normalize countries into a separate table, but should I do the same for towns? I guess users would type this in when registering rather than choose it from a dropdown. Can one normalize when the input may be coming from users?
maritalStatus: I would have a choice of about 5 or so different statuses.
Also, does anyone know of a good place to find real world database schema/normalizing examples?
Thanks
locationTown - just store it directly in the user table. Otherwise you will have to search for an existing town, taking typos and letter case into account. Also, some people use non-standard characters and languages (Kraków vs. Krakow vs. Cracow; see also: romanization). If you really want a table of towns, at least provide an auto-complete box so users are more likely to choose an existing town. Otherwise, prepare for lots of duplicates or near-duplicates.
maritalStatus - this, on the other hand, should be in a separate table. Or more accurately: use a single character or a number to represent the marital status. An extra table mapping this to a human-readable form is just for convenience (remember about i18n), and a foreign key constraint makes sure incorrect statuses aren't used.
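As a rough sketch of that lookup-table idea, using SQLite via Python's sqlite3 (table and column names are made up for illustration; any database with foreign key support works the same way):

    # Minimal sketch: a marital_status lookup table plus a foreign key constraint.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs

    conn.executescript("""
    CREATE TABLE marital_status (
        code TEXT PRIMARY KEY,      -- e.g. 'S', 'M', 'D', 'W'
        label TEXT NOT NULL         -- human-readable form, kept for convenience/i18n
    );
    CREATE TABLE app_user (
        id INTEGER PRIMARY KEY,
        location_town TEXT,                           -- stored directly, as suggested
        marital_status TEXT REFERENCES marital_status(code)
    );
    """)

    conn.executemany("INSERT INTO marital_status VALUES (?, ?)",
                     [("S", "Single"), ("M", "Married"),
                      ("D", "Divorced"), ("W", "Widowed")])
    conn.execute("INSERT INTO app_user (location_town, marital_status) VALUES (?, ?)",
                 ("Kraków", "M"))

    # This insert fails because 'X' is not a valid status code.
    try:
        conn.execute("INSERT INTO app_user (location_town, marital_status) VALUES (?, ?)",
                     ("London", "X"))
    except sqlite3.IntegrityError as e:
        print("rejected:", e)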
I wouldn't worry about it too much - database normalization (3NF, et al) has been over-emphasized in academia and isn't overly practical in industry. In addition, we would need to see your whole schema in order to judge where these implementations are appropriate. Focus on indexing commonly-used columns before you worry about normalization.
You might want to take a look at this SO question before you dive in any further.

Join slows down sql [closed]

We are having a discussion about SQL Server 2008 and joins. One half says the more joins, the slower your SQL runs. The other half says that it does not matter, because SQL Server takes care of business, so you will not notice any performance loss. Which is true?
Instead of asking the question the way you have, consider this:
Can I get the data I want without the join?
No => You need the join, end of discussion.
It is also a matter of degree. It is impossible for a join not to add additional processing. Even if the Query Optimizer takes it out (e.g. a left join with nothing used from the joined table), it still costs CPU cycles to parse it.
Now, if the question is about comparing joins to another technique, such as the special case of LEFT JOIN + IS NULL vs NOT EXISTS for a "records in X that are not in Y" scenario, then let's discuss specifics - table sizes (X vs Y), indexes, etc.
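For reference, the two query shapes look like this. The sketch below uses SQLite via Python's sqlite3 purely for illustration (the original discussion is about SQL Server 2008, and the table names are made up):

    # "Rows of X with no match in Y", written two equivalent ways.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE x (id INTEGER PRIMARY KEY);
    CREATE TABLE y (id INTEGER PRIMARY KEY);
    INSERT INTO x VALUES (1), (2), (3);
    INSERT INTO y VALUES (2);
    """)

    # LEFT JOIN ... IS NULL form:
    left_join = conn.execute("""
        SELECT x.id FROM x
        LEFT JOIN y ON y.id = x.id
        WHERE y.id IS NULL
        ORDER BY x.id
    """).fetchall()

    # NOT EXISTS form, returning the same rows:
    not_exists = conn.execute("""
        SELECT x.id FROM x
        WHERE NOT EXISTS (SELECT 1 FROM y WHERE y.id = x.id)
        ORDER BY x.id
    """).fetchall()

    print(left_join, not_exists)  # [(1,), (3,)] [(1,), (3,)]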
It will slow it down: the more complicated a query, the more work the database server has to do to execute it.
But "performance loss" compared to what? Is there another way to get at the same data? If so, you can profile the various options against each other to see which is fastest.
