We've had this question already, but I want to narrow it down to already high-speed typists.
The original poster had hit a barrier of 75 WPM and wanted to increase his speed. I'm at a barrier where I can reliably type around 130, and I can sometimes hit 150, probably depending on the distribution of words in the text.
I feel that methods to increase speed from this high end to higher might be different than going from 30 to 60, or even 75 to 100. Anybody have any suggestions?
You need to start looking at the hardware... Try out different keyboards to find the one that responds best to your typing style. I find myself typing much faster on some keyboards even though they are identically sized with identical features... the key response times are slightly different.
If you're trying to learn how to type your favorite language faster, may I suggest that you learn a better language?
When I typed dictation a lifetime ago, I could type over 120 WPM on a dictaphone. When I type C I rarely exceed 90 WPM, but when I write Lisp, I don't even reach 50 WPM.
The time you're spending typing isn't being spent thinking.
If you're trying to learn how to type copy more quickly, learning to read more quickly can help. I had great success using rapid serial visual presentation to increase my reading speed.
I understand that an algorithm is a set of instructions. AI is essentially the same thing, only more complicated? Let's say I use a minimax algorithm to choose moves on a tic-tac-toe board; generally people would consider this AI. But if I implement an algorithm to solve a Rubik's Cube, is that considered AI?
I guess what I'm asking is: is it the complexity of the algorithm, the fact that situations change on the fly in an algorithm, the ignorance of the user/programmer as to how the algorithm works, or all/some of the above? Or am I missing something?
I feel like this field is quite arbitrary. I imagine for good reason: because complexity is complex.
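For concreteness, this is the kind of minimax move-picker I have in mind for tic-tac-toe. It's only a minimal sketch; the board representation and function names are just illustrative:

    #include <stdio.h>

    static char winner(const char b[9]) {
        static const int lines[8][3] = {
            {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}};
        for (int i = 0; i < 8; i++) {
            char c = b[lines[i][0]];
            if (c != ' ' && c == b[lines[i][1]] && c == b[lines[i][2]])
                return c;
        }
        return ' ';
    }

    /* Score from X's point of view: +1 forced X win, -1 forced O win, 0 draw. */
    static int minimax(char b[9], int x_to_move) {
        char w = winner(b);
        if (w == 'X') return 1;
        if (w == 'O') return -1;

        int best = x_to_move ? -2 : 2, moved = 0;
        for (int i = 0; i < 9; i++) {
            if (b[i] != ' ') continue;
            moved = 1;
            b[i] = x_to_move ? 'X' : 'O';
            int s = minimax(b, !x_to_move);
            b[i] = ' ';
            if (x_to_move ? s > best : s < best) best = s;
        }
        return moved ? best : 0;   /* board full, nobody won: draw */
    }

    /* Pick the empty square with the best minimax score for the side to move. */
    static int best_move(char b[9], int x_to_move) {
        int best = x_to_move ? -2 : 2, move = -1;
        for (int i = 0; i < 9; i++) {
            if (b[i] != ' ') continue;
            b[i] = x_to_move ? 'X' : 'O';
            int s = minimax(b, !x_to_move);
            b[i] = ' ';
            if (move < 0 || (x_to_move ? s > best : s < best)) { best = s; move = i; }
        }
        return move;
    }

    int main(void) {
        char board[9] = {'X','O','X',  ' ',' ',' ',  ' ',' ','O'};
        printf("minimax picks square %d for X\n", best_move(board, 1));
        return 0;
    }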
It is indeed quite arbitrary.
If you consult Wikipedia you might find the following definition, which in my personal opinion captures the essence quite accurately:
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."
To take your Rubik's Cube as an example, there would be at least two ways you could write the algorithm to solve the puzzle. Firstly, any cube can be solved by following a hardcoded path or set of instructions once you have a certain start position. Implementing this would not be considered AI in my opinion, as the machine itself is not learning anything. It just follows a well-defined path of instructions until the end.
A second way to implement this would be to have the program just start solving it randomly. But the machine remembers its moves and learns the most effective path to reach the solution. When solving the next cube, the machine can build upon this newly learned information to solve it faster, and again learn from this iteration to improve its algorithm.
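To make the contrast concrete, here is a minimal sketch of the second approach. The "puzzle" is a deliberately tiny stand-in for the cube (an array that must be sorted, where a "move" swaps two adjacent elements), and the "learning" is nothing more than remembering the shortest successful move sequence seen so far; the names and numbers are just illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define N 4           /* size of the toy puzzle state             */
    #define MAX_MOVES 64  /* give up an attempt after this many moves */

    static int solved(const int s[N]) {
        for (int i = 0; i + 1 < N; i++)
            if (s[i] > s[i + 1]) return 0;
        return 1;
    }

    static void apply_move(int s[N], int m) {   /* m in [0, N-2]: swap s[m] and s[m+1] */
        int t = s[m]; s[m] = s[m + 1]; s[m + 1] = t;
    }

    int main(void) {
        const int start[N] = {3, 1, 4, 2};      /* a fixed scrambled start position */
        int best_seq[MAX_MOVES], best_len = -1;
        srand((unsigned)time(NULL));

        for (int attempt = 0; attempt < 1000; attempt++) {
            int s[N], seq[MAX_MOVES], len = 0;
            memcpy(s, start, sizeof s);

            /* "just start solving it randomly" */
            while (!solved(s) && len < MAX_MOVES) {
                int m = rand() % (N - 1);
                apply_move(s, m);
                seq[len++] = m;
            }

            /* "the machine remembers its moves and learns the most effective path" */
            if (solved(s) && (best_len < 0 || len < best_len)) {
                best_len = len;
                memcpy(best_seq, seq, (size_t)len * sizeof seq[0]);
            }
        }

        if (best_len >= 0) {
            printf("shortest solution found: %d moves:", best_len);
            for (int i = 0; i < best_len; i++) printf(" %d", best_seq[i]);
            printf("\n");
        }
        return 0;
    }

The hardcoded approach would instead ship a fixed sequence of instructions for each recognised start position; the difference is exactly whether the program improves its own solution over repeated runs.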
So in short, as far as I'm concerned, it can be considered AI when a machine is capable of optimizing/extending its own algorithms to become more efficient in its tasks.
Is there a way to estimate the energy consumed by a program on an ARM CPU? In embedded systems, energy consumption is one of the most important parameters and I was wondering whether it is possible for a programmer to know approximately how much energy is needed to run the program?
For example, since on an ARM CPU division executes over multiple cycles, I imagine that code using divisions would consume more energy than code that doesn't. But this reasoning is quite intuitive; is there a better way to quantify the energy consumed by a CPU when executing code?
I don't think there are any ARM-specific tricks here (and 'ARM' covers umpteen different things anyway). You usually look at the current consumption in the various different power states you use (run, sleep, etc) and then estimate what proportion of time is spent in each state. This lets you calculate average current/power.
It doesn't usually make much sense to say 'this instruction uses a lot of power' - what you might instead care about is 'this sequence of instructions takes a lot of time to run, hence I can't get back to sleep quickly'.
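As a rough sketch of that calculation (the currents and times below are made-up placeholders, not measured values; real numbers come from the part's datasheet and from measuring how long your code actually keeps the core awake):

    #include <stdio.h>

    int main(void) {
        double i_run_mA    = 4.0;     /* supply current while running (assumed) */
        double i_sleep_mA  = 0.002;   /* supply current in deep sleep (assumed) */
        double t_run_ms    = 5.0;     /* time spent awake per wakeup (assumed)  */
        double t_period_ms = 1000.0;  /* wakeup period (assumed)                */

        double duty     = t_run_ms / t_period_ms;
        double i_avg_mA = duty * i_run_mA + (1.0 - duty) * i_sleep_mA;

        double battery_mAh = 220.0;   /* e.g. a coin cell (assumed) */
        printf("average current: %.4f mA\n", i_avg_mA);
        printf("estimated battery life: %.0f hours\n", battery_mAh / i_avg_mA);
        return 0;
    }

The interesting knob from the software side is usually the awake time: anything that shortens it, or lets you drop into a deeper sleep state, moves the average far more than micro-optimising individual instructions.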
The closest you'll get with off-the-shelf tools is something similar to the ARM Energy Probe: http://ds.arm.com/ds-5/optimize/arm-energy-probe/
Generally, battery-run systems have fuel gauges which are exposed through sysfs entries and can report how much current is flowing. Think of it like a smartphone battery/charge indicator. Those are generally not that reliable and are hard to correlate with the exact run time of an application, but they may give you a rough estimate.
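For what it's worth, here is a minimal sketch of reading such a gauge on Linux. The path is an assumption (a battery named BAT0 exposing current_now) and varies per platform, so check what actually exists under /sys/class/power_supply/ on your board:

    #include <stdio.h>

    int main(void) {
        /* Assumed path: battery name and available attributes differ by platform. */
        const char *path = "/sys/class/power_supply/BAT0/current_now";
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }

        long current_uA = 0;   /* typically microamps; sign conventions vary by driver */
        if (fscanf(f, "%ld", &current_uA) == 1)
            printf("battery current: %.1f mA\n", current_uA / 1000.0);
        fclose(f);
        return 0;
    }

Sampling this periodically while the application runs, and comparing against an idle baseline, gives the rough estimate mentioned above.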
I have been given the task of writing a C language analyser using an AFD (a deterministic finite automaton). I can choose whichever language I want, so I think I will go for Ruby. However, this task is a little overwhelming to grasp at the beginning.
The problem I stumble across is: how do I even represent the AFD of the entire C language?
I have been doing a little bit of digging and I ended up reading this on lexical analysis. In this paper the author defines every token of the language as a transition between two states (which is very logical). I find it almost impossible not to miss a few, or to build such a big AFD by hand without many mistakes. Any tips?
The task you have is similar to one posed to many undergraduate students in compiler courses every year in thousands of universities, and the notes you cite are a good sample of the many sets of course notes available on the topic.
The solution is the same as any software engineering problem: testing against the specification.
Although the intellectual problem of analysing and creating AFDs for a whole language by hand might seem overwhelmingly error-prone, don't forget you are also tasked with implementing this (in your chosen language, Ruby).
This implementation can be tested by feeding it carefully graded and selected samples of C language input. When it does not deliver the expected result, the error will be either in the coding of the AFD or in the AFD you constructed. You make the necessary change and go around the testing loop again.
You will eventually end up with a valid AFD for the entire C language and an analyser for it written in Ruby.
It is often a good idea to start small: implement a subset of the C language, get that working first, and then add more to it using stepwise refinement. This is a less risky strategy than attempting to do the whole thing in one go.
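As a sketch of what such a small subset could look like, here is a tiny AFD with the transition function written out by hand, recognising only identifiers and integer literals and skipping whitespace. The state names and driver loop are just illustrative; the real analyser would grow this transition function token by token:

    #include <stdio.h>
    #include <ctype.h>

    /* States of the automaton. DONE_* mean "the character just seen is not part
     * of the current token; the token before it is complete". */
    enum state { START, IN_IDENT, IN_NUMBER, DONE_IDENT, DONE_NUMBER, ERROR };

    /* The transition function: one step of the AFD for this tiny subset. */
    static enum state step(enum state s, int c) {
        switch (s) {
        case START:
            if (isalpha(c) || c == '_') return IN_IDENT;
            if (isdigit(c))             return IN_NUMBER;
            return ERROR;
        case IN_IDENT:
            return (isalnum(c) || c == '_') ? IN_IDENT : DONE_IDENT;
        case IN_NUMBER:
            return isdigit(c) ? IN_NUMBER : DONE_NUMBER;
        default:
            return ERROR;
        }
    }

    int main(void) {
        const char *p = "int count42 = 7";                  /* sample input */
        while (*p) {
            while (*p && isspace((unsigned char)*p)) p++;   /* skip whitespace */
            if (!*p) break;

            const char *tok = p;
            enum state s = step(START, (unsigned char)*p);
            if (s == ERROR) {            /* character not covered by this subset */
                printf("unhandled character '%c'\n", *p);
                p++;
                continue;
            }
            p++;
            while (*p) {                 /* run the automaton until the token ends */
                enum state next = step(s, (unsigned char)*p);
                if (next == DONE_IDENT || next == DONE_NUMBER) break;
                s = next;
                p++;
            }
            printf("%s: %.*s\n", s == IN_NUMBER ? "NUMBER" : "IDENT",
                   (int)(p - tok), tok);
        }
        return 0;
    }

The same structure carries over to Ruby; the point is that each new token class is just more states and transitions, added and tested one at a time.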
You need to take all those techniques you should have learned about building specifications, designs, and programs, and about testing, and apply them here. Just apply good computer science and software engineering to the problem.
I know there have been a number of discussions of whether break and continue should be considered harmful generally (with the bottom line being - more or less - that it depends; in some cases they enhance clarity and readability, but in other cases they do not).
Suppose a new project is starting development, with plans for nightly builds including a run through a static analyzer. Should it be part of the coding guidelines for the project to avoid (or strongly discourage) the use of continue and break, even if it can sacrifice a little readability and require excessive indentation? I'm most interested in how this applies to C code.
Essentially, can the use of these control operators significantly complicate the static analysis of the code, possibly resulting in additional false negatives: faults that the analyzer would have flagged had break or continue not been used?
(Of course, a complete static analysis proving the correctness of an arbitrary program is undecidable, so please keep responses to any hands-on experience you have with this, not theoretical impossibilities.)
Thanks in advance!
My immediate reaction is that the hoops you'd have to jump through to avoid break and continue would probably hurt the code overall, and make static analysis (or much of anything else) considerably more difficult.
It'll depend a bit on the exact sort of code you're dealing with, though. Just for example, if you have something that would really be best implemented as a switch statement, a prohibition against break would essentially force you to use nested if/elses, which would make the code much more difficult to analyze correctly and, depending on the circumstances, would be very likely to negatively impact the output code as well.
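To illustrate both points with toy code (the functions and values are purely illustrative): the first pair shows the flag-variable hoop a "no break" rule forces on a simple search loop, and the second pair shows a switch collapsing into an if/else ladder:

    #include <stdio.h>

    /* With break: stop scanning as soon as the key is found. */
    static int find_with_break(const int *a, int n, int key) {
        int found = -1;
        for (int i = 0; i < n; i++) {
            if (a[i] == key) { found = i; break; }
        }
        return found;
    }

    /* Without break: the exit condition moves into a flag variable, one more
     * piece of state for both the reader and the analyzer to track. */
    static int find_without_break(const int *a, int n, int key) {
        int found = -1, searching = 1;
        for (int i = 0; searching && i < n; i++) {
            if (a[i] == key) { found = i; searching = 0; }
        }
        return found;
    }

    /* With break: the natural switch. */
    static int price_with_switch(int item) {
        int price;
        switch (item) {
        case 1:  price = 100; break;
        case 2:  price = 250; break;
        default: price = 0;   break;
        }
        return price;
    }

    /* Without break the switch is unusable, so the same logic becomes a ladder. */
    static int price_with_ifs(int item) {
        int price;
        if (item == 1)      price = 100;
        else if (item == 2) price = 250;
        else                price = 0;
        return price;
    }

    int main(void) {
        int a[] = {7, 3, 9, 3};
        printf("%d %d\n", find_with_break(a, 4, 9), find_without_break(a, 4, 9));
        printf("%d %d\n", price_with_switch(2), price_with_ifs(2));
        return 0;
    }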
Just out of curiosity: assuming there exists a software life form, how would you detect it? What are your criteria for figuring out whether something/someone is intelligent or not?
It seems to me that it should be quite simple to create such software once you set the right target (not just following the naive "mimic a human -> pass the Turing Test" route).
When posting an answer, try also to find a counterexample. I have real difficulty inventing anything consistent that I myself agree with.
Warmup
First we need to understand what a life form is.
Take this explanation, for example:
An entity which exists and tries to continue its existence through nourishment or procreation.
If we accept this explanation then in fact many programs represent a life form.
They exist, that's obvious. They attempt to continue their existence by spawning child processes, surviving in persistent data storage, and carrying on the next day.
So, here we are, among digital life forms around us.
On the other hand, there's the idea of evolving and being sentient.
With evolving, it's easy. Many programs have been written to be able to modify their own code to adapt to certain scenarios. Computer viruses were among the first examples of that.
With sentience, it is a different story. An entity needs to be aware of its existence, understand itself and the environment around it, and also take active decisions about its life activities.
A computer program has nothing of that kind. In fact, as far as I know, scientists still haven't settled on a definition of "being aware of itself" or of consciousness. So until we know what that means, we can't attribute that quality to an entity, nor, the other way around, take it away.
The bottom line is, you can argue that a computer program is a life form, but it does not qualify as a sentient being.
Thinks humanly, acts humanly.
OR
Thinks rationally, acts rationally.