Just out of curiosity: assuming a software life form exists, how would you detect it? What are your criteria for deciding whether something or someone is intelligent?
It seems to me that it should be quite simple to create such software once you set the right target (not just following the naive "mimic a human -> pass the Turing Test" route).
When posting an answer, try also to find a counterexample. I have real difficulty inventing anything consistent that I myself agree with.
Warmup
First we need to understand what a life form is.
Take this explanation, for example:
An entity which exists and tries to continue its existence through nourishment or procreation.
If we accept this explanation, then in fact many programs represent a life form.
They exist; that much is obvious. They attempt to continue their existence by spawning child processes, surviving in persistent data storage, and carrying on the next day.
So here we are, surrounded by digital life forms.
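As a toy illustration of that claim, here is a sketch (POSIX-only, and the log file name is invented for the example) of a program that leaves persistent state behind and spawns a child process:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* "Survival" through persistent storage: leave a trace on disk. */
    FILE *log = fopen("lineage.log", "a");
    if (log != NULL) {
        fprintf(log, "process %ld was here\n", (long)getpid());
        fclose(log);
    }

    /* "Procreation": spawn a child process that briefly outlives the parent. */
    pid_t child = fork();
    if (child == 0) {
        sleep(1);   /* a real daemon would loop, re-spawn itself, and so on */
    }
    return 0;
}

Nobody would call this sentient, which is exactly the distinction drawn below.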
On the other hand, there's the idea of evolving and being sentient.
With evolving, it's easy. Many programs have been written that can modify their own code to adapt to certain scenarios; computer viruses are early examples of this.
With sentience, it is a different story. An entity needs to be aware of its own existence, understand itself and its environment, and make active decisions about its life.
A computer program has nothing of that kind. In fact, as far as I know, scientists still haven't settled on a definition of "being aware of oneself" or of consciousness. Until we know what that means, we can neither attribute that quality to an entity nor deny it.
The bottom line: you can argue that a computer program is a life form, but it does not qualify as a sentient being.
Thinks humanly, acts humanly.
OR
Thinks rationally, acts rationally.
I understand that an algorithm is a set of instructions. AI is essentially the same thing, only more complicated? Let's say I use a minimax algorithm to choose moves on a tic-tac-toe board; generally people would consider this AI. But if I implement an algorithm to solve a Rubik's Cube, is that considered AI?
I guess what I'm asking is: is it the complexity of the algorithm, the fact that situations change on the fly within an algorithm, the ignorance of the user/programmer as to how the algorithm works, or all/some of the above? Or am I missing something?
I feel like this field is quite arbitrary. I imagine for good reason; I imagine because complexity is complex.
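For concreteness, the kind of minimax move search the question refers to can be sketched in a few dozen lines of C; this is a hypothetical toy, not anyone's actual code.

#include <stdio.h>

static char winner(const char b[9])
{
    static const int lines[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
    };
    for (int i = 0; i < 8; i++) {
        char a = b[lines[i][0]];
        if (a != ' ' && a == b[lines[i][1]] && a == b[lines[i][2]])
            return a;
    }
    return ' ';
}

/* Score a position by exhaustive search: +1 if X can force a win,
   -1 if O can, 0 for a forced draw. */
static int minimax(char b[9], int x_to_move)
{
    char w = winner(b);
    if (w == 'X') return +1;
    if (w == 'O') return -1;

    int best = x_to_move ? -2 : +2, moved = 0;
    for (int i = 0; i < 9; i++) {
        if (b[i] != ' ') continue;
        b[i] = x_to_move ? 'X' : 'O';
        int score = minimax(b, !x_to_move);
        b[i] = ' ';
        moved = 1;
        if (x_to_move ? score > best : score < best)
            best = score;
    }
    return moved ? best : 0;   /* board full, no winner: draw */
}

/* Pick X's best move on the given board. */
static int best_move(char b[9])
{
    int best = -2, move = -1;
    for (int i = 0; i < 9; i++) {
        if (b[i] != ' ') continue;
        b[i] = 'X';
        int score = minimax(b, 0);   /* O moves next */
        b[i] = ' ';
        if (score > best) { best = score; move = i; }
    }
    return move;
}

int main(void)
{
    /* X to move; O threatens the middle column, so the only non-losing move is cell 7. */
    char board[9] = { 'X','O','X',  ' ','O',' ',  ' ',' ',' ' };
    printf("X should play cell %d\n", best_move(board));
    return 0;
}

It searches every reachable position and plays perfectly, yet it learns nothing between games, which is roughly where the answer below draws the line.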
It is indeed quite arbitrary.
If you consult Wikipedia you will find the following definition, which in my personal opinion captures the idea quite accurately:
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."
To take your Rubik's Cube as an example, there are at least two ways you could write the algorithm that solves the puzzle. First, any cube can be solved by following a hardcoded path or set of instructions from a given start position. Implementing this would not be considered AI in my opinion, as the machine itself is not learning anything; it just follows a well-defined sequence of instructions to the end.
A second way would be to have the program start solving the cube randomly, but remember its moves and learn the most effective path to the solution. When solving the next cube, the machine can build upon this newly learned information to solve it faster, and again learn from that iteration to improve its algorithm.
So in short, as far as I'm concerned, it can be considered AI when a machine is capable of optimizing or extending its own algorithms to become more efficient at its tasks.
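The distinction can be sketched on something far smaller than a real cube. The toy below is an invented 3-tile puzzle, not a cube solver: it starts out solving purely at random, but it remembers the shortest successful move count per scramble, so repeated attempts never get worse and usually improve.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STATES   6      /* 3! arrangements of the toy puzzle */
#define MAX_WALK 1000   /* give up on a random attempt after this many moves */

static int best_len[STATES];   /* shortest solution found so far per scramble, -1 = none */

static int  encode(const int p[3]) { return p[0] * 2 + (p[1] > p[2] ? 1 : 0); }
static int  solved(const int p[3]) { return p[0] == 0 && p[1] == 1 && p[2] == 2; }
static void apply_move(int p[3], int m) { int t = p[m]; p[m] = p[m + 1]; p[m + 1] = t; }

/* One solving attempt: wander randomly, and if the walk reaches the solved
   state in fewer moves than anything remembered for this scramble, keep it. */
static int attempt_solve(const int start[3])
{
    int p[3], n = 0, s = encode(start);
    memcpy(p, start, sizeof p);
    while (!solved(p) && n < MAX_WALK) {
        apply_move(p, rand() % 2);   /* two possible moves: swap (0,1) or swap (1,2) */
        n++;
    }
    if (solved(p) && (best_len[s] < 0 || n < best_len[s]))
        best_len[s] = n;             /* "learn": remember the better path length */
    return best_len[s] < 0 ? n : best_len[s];
}

int main(void)
{
    for (int i = 0; i < STATES; i++) best_len[i] = -1;

    int scramble[3] = { 2, 0, 1 };   /* the same scramble, handed over repeatedly */
    for (int i = 1; i <= 5; i++)
        printf("attempt %d: best known solution = %d moves\n", i, attempt_solve(scramble));
    return 0;
}

A hardcoded solver, by contrast, would look the answer up in a fixed table and never change its behaviour between runs.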
I have been given a task to write a C language analyser using an AFD (a deterministic finite automaton). I can choose whichever language I want, so I think I will go for Ruby. However, this task is a little overwhelming to grasp at the beginning.
The problem I stumble over is: how do I even represent the AFD of the entire C language?
I have been doing a little digging and I ended up reading this on lexical analysis. In this paper the author defines every token of the language as a transition between two states (which is very logical). I find it almost impossible not to miss a few, or to build such a big AFD by hand without many mistakes. Any tips?
The task you have been given is similar to one posed to undergraduate students in compiler courses every year at thousands of universities, and the notes you cite are a good sample of the many sets of course notes available on the topic.
The solution is the same as any software engineering problem: testing against the specification.
Although the intellectual problem of analysing a whole language and creating its AFD by hand might seem overwhelming and error prone, don't forget that you are also tasked with implementing it (in your chosen language, Ruby).
This implementation can be tested by feeding it carefully graded and selected samples of C language input. When it does not deliver the expected result, the error will be either in the coding of the AFD or a fault in the AFD you constructed. You make the necessary change and go around the testing loop again.
You will eventually end up with a valid AFD for the entire C language and an analyser for it written in Ruby.
It is often a good idea to start small: implement a subset of the C language, get that working first, and then add to it using stepwise refinement. This is a less risky strategy than attempting to do the whole thing in one go.
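To make "start small" concrete, here is a deliberately tiny slice of such an analyser, written in C purely for illustration (the asker plans to use Ruby, and the state names and samples here are invented): a hand-coded AFD that recognises only identifiers and integer literals.

#include <ctype.h>
#include <stdio.h>

enum state { START, IN_IDENT, IN_NUMBER, REJECT };

/* Transition function of the AFD: current state + next character -> new state. */
static enum state step(enum state s, int c)
{
    switch (s) {
    case START:
        if (isalpha(c) || c == '_') return IN_IDENT;
        if (isdigit(c))             return IN_NUMBER;
        return REJECT;
    case IN_IDENT:
        return (isalnum(c) || c == '_') ? IN_IDENT : REJECT;
    case IN_NUMBER:
        return isdigit(c) ? IN_NUMBER : REJECT;
    default:
        return REJECT;
    }
}

int main(void)
{
    const char *samples[] = { "counter1", "42", "4abc" };   /* tiny graded test set */
    for (int i = 0; i < 3; i++) {
        enum state s = START;
        for (const char *p = samples[i]; *p && s != REJECT; p++)
            s = step(s, (unsigned char)*p);
        /* IN_IDENT and IN_NUMBER are the accepting states of this mini-AFD. */
        printf("%-8s -> %s\n", samples[i],
               s == IN_IDENT ? "identifier" : s == IN_NUMBER ? "number" : "rejected");
    }
    return 0;
}

A real analyser would also apply maximal munch, emitting the longest accepted prefix as a token and restarting, but the state-by-state structure stays the same as you extend it, token class by token class, towards the full specification.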
You need to take all the techniques you should have learned about building specifications, designs, programs and tests and apply them to this problem. Just apply good computer science and software engineering.
An experienced programmer claims that passing values by pointer can slow the program, or at least the compiler. Is that true?
https://www.youtube.com/watch?feature=player_embedded&v=w7ay7QXmo_o#t=288
I watched the given segment of the video.
Situation:
A guy has a small third-party struct and passes it by value.
Why it is good:
1. A small struct doesn't take much space, so passing it on the stack is not slow, and you can (theoretically) get better memory/cache usage since you don't go through a pointer to reach the data. It's possible that the compiler/optimizer couldn't do this for you, as the guy mentions.
2. It is a third-party struct, so it is not very likely that its size will change during the development of the program.
3. The function signature says something different about access and ownership of the struct depending on whether it takes a const pointer, a non-const pointer, or a value (sketched below).
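A small sketch of those three signatures; the struct and function names are invented for illustration and are not from the video.

#include <stdio.h>

struct vec2 { float x, y; };   /* small: two floats, cheap to copy */

/* By value: the callee gets its own copy; the caller's data cannot change. */
static float length_sq_by_value(struct vec2 v) { return v.x * v.x + v.y * v.y; }

/* By const pointer: no copy, read-only access, but every use is an indirection. */
static float length_sq_by_cptr(const struct vec2 *v) { return v->x * v->x + v->y * v->y; }

/* By non-const pointer: the signature now implies the callee may modify it. */
static void scale_in_place(struct vec2 *v, float k) { v->x *= k; v->y *= k; }

int main(void)
{
    struct vec2 v = { 3.0f, 4.0f };
    printf("%.1f %.1f\n", length_sq_by_value(v), length_sq_by_cptr(&v));
    scale_in_place(&v, 2.0f);
    printf("%.1f %.1f\n", v.x, v.y);
    return 0;
}

For a struct this small the copy is typically no more expensive than passing the pointer, which is the point being made.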
What is questionable:
1. The guy doesn't really explain in depth what is going on and why he did this optimization. Why do it, and talk about it, at all then?
2. I don't see how this would slow down a compiler/optimizer in any way, but I'm not an expert on this matter.
Why this shouldn't be a general programming rule:
1. If you're not using a third-party struct, it is quite probable that your struct will change during development, and you will end up with either inefficient code or a lot to rewrite. Or the compiler will probably do the job for you anyway, in which case there's no point in starting down this path in the first place.
2. When writing new code, the only performance concerns worth thinking about are the efficiency of the core algorithms and data structures. If you write a terrible sort algorithm, you won't save it by passing a struct by value. As mentioned in the comments, it depends on the consequences: I doubt anyone can really foresee that something as marginal (performance-wise) as passing a small struct by value rather than by pointer will make a significant difference. Such a decision should be based either on knowing the consequences very well (ideally from having solved this exact issue before) or on a profiler report stating that there is a performance problem here.
Taking that into account, a function that updates the game(?) window and runs 60 or possibly even 120 times per second can be assumed to be the core of the program and should be optimized as much as possible. And it seems that the guy was working on exactly that and found that he gets better results by passing the struct by value instead of by pointer.
I am prone to writing code like this:
if (*t) while (*++t);
It reads: if string t does not start with '\0', then move to the end.
Note the while loop has no body, so the semicolon terminates it.
I'd like to know whether it is good practice to do this. Why or why not?
C is one of the oldest popular languages in use today. I believe there's a good chance of finding one or more established style guides.
I know that Google has one for their C++ open source projects - http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml
Can anyone point me to resources on why to write, or not to write, code in a certain manner?
Usually it is good practice to write code on separate lines: with large pieces of code, debugging is clearer when each statement has its own line.
It depends! Who is going to have to read and maintain this code? Coding standards exist for two major reasons:
To make code more readable and maintainable. When there are multiple developers, it makes the code more consistent (which is more readable).
To discourage common errors. For example, a standard might require putting literals first in conditionals to discourage the assignment-as-comparison bug.
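For instance, a minimal illustration of that second rule (a hypothetical snippet):

#include <stdio.h>

int main(void)
{
    int x = 4;

    if (x == 5)        /* intended comparison */
        puts("equal");

    /* The typo "if (x = 5)" would still compile and always be true.      */
    /* With the literal first, the same typo, "if (5 = x)", is a compile  */
    /* error, which is what the convention is meant to catch.             */
    if (5 == x)
        puts("equal, literal-first style");

    return 0;
}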
How do these goals apply to your specific code? Are you prone to making mistakes? If this is Linux kernel code, it's a lot more tolerable to have code like this than if it's a web app maintained by entry level programmers.
It reads: if string t does not start with '\0', then move to the end.
Then consider putting a comment on it that says that.
Surprisingly, it is usually more expensive to maintain code over time than to write it in the first place. Maintenance costs are minimized when code is more readable.
There are three audiences for your code. You should think about how valuable their time is while you are formatting:
Fellow coders, including your co-workers and code reviewers. You want these people to think highly of you, so write code that is easy for them to understand.
Your future self. Convoluted code may be obvious while you are writing it, but pick it up again in two weeks and you will not remember what it means. The 'concise' statement that you wrote in 10 minutes will someday take you 20 minutes to decipher.
The Optimizing Compiler, which will produce efficient code whether your line is concise or not. The compiler does not care, so try to save time for the other two. (Cue angry remarks about this item. I am in favor of writing efficient code, but concise styles like the one we are describing here will not affect compiler efficiency.)
Bad practice, because it is not easy to parse. I'd do
while (*t) ++t;
and let the compiler do the tiny bit of optimization.
Its textual translation even reads shorter than yours:
advance t until it points to a 0
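For comparison, here are both forms side by side with the comment the earlier answer suggests; a small self-contained sketch, and both leave the pointer at the terminating '\0'.

#include <stdio.h>

int main(void)
{
    const char *s = "hello";

    /* Original form: if the string is non-empty, keep advancing past each
       remaining character; the while loop has an empty body. */
    const char *t = s;
    if (*t) while (*++t);

    /* Plainer form: advance u until it points to the terminating 0. */
    const char *u = s;
    while (*u) ++u;

    printf("%d %d\n", (int)(t - s), (int)(u - s));   /* prints: 5 5 */
    return 0;
}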
Although you can write some pretty clever code in one line in C, it's usually not good practice in terms of readability and ease of maintenance. What's straightforward for you to understand may look completely foreign to someone maintaining your code in future.
You need to strike a balance between conciseness and readability. To this end, it's usually better to separate the code out so each line does one thing.
I know there have been a number of discussions of whether break and continue should be considered harmful generally (with the bottom line being - more or less - that it depends; in some cases they enhance clarity and readability, but in other cases they do not).
Suppose a new project is starting development, with plans for nightly builds that include a run through a static analyzer. Should the project's coding guidelines avoid (or strongly discourage) the use of continue and break, even if that sacrifices a little readability and requires deeper indentation? I'm most interested in how this applies to C code.
Essentially, can the use of these control operators significantly complicate the static analysis of the code, possibly producing additional false negatives, i.e. potential faults that would have been flagged had break or continue not been used?
(Of course a complete static analysis proving the correctness of an arbitrary program is an undecidable proposition, so please base responses on any hands-on experience you have with this, not on theoretical impossibilities.)
Thanks in advance!
My immediate reaction is that the hoops you'd have to jump through to avoid break and continue would probably hurt the code overall, and make static analysis (or much of anything else) considerably more difficult.
It'll depend a bit on the exact sort of code you're dealing with, though. Just for example, if you have something that would really be best implemented as a switch statement, a prohibition against break would essentially force you to use nested if/elses, which would make the code much more difficult to analyze correctly and, depending on the circumstances, would be very likely to negatively impact the output code as well.
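As a small illustration of that last point, here is a hypothetical classification helper written both ways; the names are invented and not from the question.

#include <stdio.h>

/* switch version: break is what keeps the cases from falling through */
static const char *kind_with_switch(int c)
{
    const char *k;
    switch (c) {
    case ' ': case '\t': case '\n':
        k = "space";
        break;
    case '+': case '-':
        k = "sign";
        break;
    default:
        k = "other";
        break;
    }
    return k;
}

/* the same logic with break banned: nested if/else and deeper indentation */
static const char *kind_without_break(int c)
{
    if (c == ' ' || c == '\t' || c == '\n') {
        return "space";
    } else {
        if (c == '+' || c == '-') {
            return "sign";
        } else {
            return "other";
        }
    }
}

int main(void)
{
    printf("%s %s\n", kind_with_switch('+'), kind_without_break('\t'));
    return 0;
}

The break-free version pays with an extra level of nesting for every additional case, which is exactly the kind of restructuring the answer warns about.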