Can I minimax a battleship 2 player game? [closed] - artificial-intelligence

I have a game project to implement and was thinking of building a battleship game (https://en.wikipedia.org/wiki/Battleship_(game)).
The project requires me to build an AI computer player that runs a minimax algorithm.
Is it possible to implement minimax on this kind of game?

Short answer: No.
The minimax algorithm needs some sort of evaluation of the game state at every node. Battleship is a game of imperfect information: as a player (or as the AI) you do not know where the opponent's ships are, so there is no complete game state to evaluate. You could of course cheat and let the AI peek at the hidden ships and search X moves ahead, but I would say this is against the rules.
The AI would then always find the ships and always score hits, which would also make it very boring to play against.
You can find some inspiration on other algorithms to use, for example here.
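One alternative that works well in practice for Battleship bots is a probability-density heuristic rather than minimax. Below is a minimal sketch of the idea, assuming a 10x10 board, a single remaining ship of length 4, and a `miss` array recording shots that missed; the names, sizes, and the example miss are assumptions for illustration, not part of the question.

```c
/* Sketch of a probability-density targeting heuristic for Battleship:
 * for every cell, count how many legal placements of a ship of length
 * SHIP_LEN would cover it given the misses recorded so far, then fire
 * at the cell with the highest count. */
#include <stdio.h>

#define N 10
#define SHIP_LEN 4

int miss[N][N];   /* 1 = we already shot here and missed */
int counts[N][N]; /* number of possible placements covering each cell */

static int fits(int r, int c, int dr, int dc)
{
    for (int k = 0; k < SHIP_LEN; k++) {
        int rr = r + k * dr, cc = c + k * dc;
        if (rr < 0 || rr >= N || cc < 0 || cc >= N || miss[rr][cc])
            return 0;
    }
    return 1;
}

void best_shot(int *out_r, int *out_c)
{
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            counts[r][c] = 0;

    /* Count every horizontal and vertical placement that is still possible. */
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            for (int d = 0; d < 2; d++) {
                int dr = d, dc = 1 - d;
                if (fits(r, c, dr, dc))
                    for (int k = 0; k < SHIP_LEN; k++)
                        counts[r + k * dr][c + k * dc]++;
            }

    int best = -1;
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            if (!miss[r][c] && counts[r][c] > best) {
                best = counts[r][c];
                *out_r = r;
                *out_c = c;
            }
}

int main(void)
{
    int r, c;
    miss[5][5] = 1;              /* example: one recorded miss */
    best_shot(&r, &c);
    printf("fire at (%d, %d)\n", r, c);
    return 0;
}
```

A fuller version would also handle recorded hits (only counting placements that pass through them) and sum the counts over every ship still afloat, but this already gives an opponent that plays sensibly without cheating.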

Related

Plotting fidelity graph [closed]

I want to plot a graph for fidelity with respect to time.
My function is
Here F is the fidelity, and take alpha = sqrt(5).
How do I plot this?
Can you give me some sample code and recommend an online site for plotting it?
Plot Online (wolframalpha.com)
When it comes to programming, that depends on the desired output. This could be the screen directly (GDI, DirectX, OpenGL), raster graphics (BMP, JPG, PNG, ...) or vector graphics (SVG, glTF, ...).
But before you can put anything out, you have to calculate the plot points first. Here you definitely need the standard math library (math.h).
Once you have your set of points, you can plot them using additional software such as plotutils.
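As a minimal sketch of that pipeline: tabulate (t, F(t)) pairs with math.h and hand the columns to a plotting tool. The fidelity() body below is only a placeholder, since the formula from the question is not reproduced here; alpha = sqrt(5) is taken from the question, and the time range and step are assumptions.

```c
#include <stdio.h>
#include <math.h>

/* Placeholder for the asker's fidelity formula -- substitute the real
 * expression here. */
static double fidelity(double t, double alpha)
{
    (void)alpha;                 /* the real formula would use alpha */
    return cos(t) * cos(t);      /* placeholder, NOT the asker's F(t) */
}

int main(void)
{
    const double alpha = sqrt(5.0);

    /* Tabulate (t, F(t)) pairs; redirect stdout to a file for plotting. */
    for (double t = 0.0; t <= 10.0; t += 0.01)
        printf("%f %f\n", t, fidelity(t, alpha));
    return 0;
}
```

Compile with something like `cc fidelity.c -lm`, write the output to a file, and feed it to a plotter, e.g. `graph -T png < data > plot.png` with GNU plotutils.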

When does an algorithm become considered artificial intelligence? [closed]

I understand that an algorithm is a set of instructions. AI is essentially the same thing, only more complicated? Let's say I use a minimax algorithm to pick moves on a tic-tac-toe board; generally people would consider this AI. But if I implement an algorithm to solve a Rubik's cube, is that considered AI?
I guess what I'm asking is: is it the complexity of the algorithm, the fact that situations change on the fly in an algorithm, the ignorance of the user/programmer as to how the algorithm works, or all/some of the above? Or am I missing something?
I feel like this field is quite arbitrary, and I imagine for good reason: complexity is complex.
It is indeed quite arbitrary.
If you consult Wikipedia you might find the following definition, which in my personal opinion captures it quite accurately:
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."
To take your Rubik's cube as an example, there would be at least two ways you could write the algorithm to solve the puzzle. Firstly, any cube can be solved by following a hardcoded path or set of instructions from a given start position. Implementing this would not be considered AI in my opinion, as the machine itself is not learning anything: it just follows a well-defined path of instructions to the end.
A second way to implement this would be to have the program start solving it randomly. But the machine remembers its moves and learns the most effective path to reach the solution. When solving the next cube, the machine can build on this newly learned information to solve it faster, and again learn from this iteration to improve its algorithm.
So in short, as far as I'm concerned, it can be considered AI when a machine is capable of optimizing/extending its own algorithms to become more efficient in its tasks.
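For a concrete reference point, here is a minimal sketch of the tic-tac-toe minimax the question mentions: it searches the full game tree and returns +1, 0, or -1 for a forced win, draw, or loss for the side to move. The board encoding, scoring convention, and example position are assumptions made for the illustration.

```c
/* Tic-tac-toe minimax (negamax form): the board is a 9-char array of
 * 'X', 'O' or ' ', and minimax() returns the game-theoretic value for
 * the player whose turn it is. */
#include <stdio.h>

static const int lines[8][3] = {
    {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
};

static char winner(const char *b)
{
    for (int i = 0; i < 8; i++)
        if (b[lines[i][0]] != ' ' &&
            b[lines[i][0]] == b[lines[i][1]] &&
            b[lines[i][1]] == b[lines[i][2]])
            return b[lines[i][0]];
    return ' ';
}

static int minimax(char *b, char me, char opp)
{
    char w = winner(b);
    if (w == me)  return  1;
    if (w == opp) return -1;

    int best = -2, moves = 0;
    for (int i = 0; i < 9; i++) {
        if (b[i] != ' ') continue;
        moves = 1;
        b[i] = me;
        int score = -minimax(b, opp, me);  /* opponent's best reply, negated */
        b[i] = ' ';
        if (score > best) best = score;
    }
    return moves ? best : 0;               /* no moves left: draw */
}

int main(void)
{
    char board[10] = "X OO X   ";           /* example position, X to move */
    printf("value for X: %d\n", minimax(board, 'X', 'O'));
    return 0;
}
```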

graph traversal in C [closed]

I have to implement a backtracing algorithm in C which will emulate hardware purely in software and trace a single path from the output of a system to an input pin, registering all the gates, their outputs and the inputs taken on the path, in reverse order. I figured out that this can be done with a graph traversal algorithm, but I am not able to implement it in C. Any useful suggestions would be helpful indeed!
I've done a few maze-solving algorithms, using both breadth-first and depth-first search.
I'd say you should first build the graph and make sure it's correctly built and free of inconsistencies; something I found very useful was a way to print my graph to check for errors :).
Other than that, good luck !
Depending on what kind of path tracing you need, it can be done with either breadth-first search or depth-first search. I have tried both and they work.
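To make that concrete, here is a minimal sketch of the kind of backtrace the question describes, assuming the circuit is stored as nodes that record their fan-in. The `struct node` layout and the tiny example netlist are assumptions for illustration, not the asker's data structures.

```c
/* Trace a single path from an output node back to a primary input pin by
 * repeatedly following one input of each gate, printing the path in
 * reverse order (output first). */
#include <stdio.h>

#define MAX_FANIN 4

struct node {
    const char *name;
    int n_in;                        /* 0 means this is a primary input pin */
    struct node *in[MAX_FANIN];      /* nodes driving this gate's inputs */
};

static void backtrace(struct node *out)
{
    struct node *cur = out;
    while (cur) {
        printf("%s\n", cur->name);   /* gate/pin on the path, reverse order */
        cur = cur->n_in > 0 ? cur->in[0] : NULL;
    }
}

int main(void)
{
    /* tiny example netlist: in1 -> and1 -> or1 (output) */
    struct node in1  = { "in1",  0, { 0 } };
    struct node in2  = { "in2",  0, { 0 } };
    struct node and1 = { "and1", 2, { &in1, &in2 } };
    struct node or1  = { "or1",  1, { &and1 } };

    backtrace(&or1);
    return 0;
}
```

A full breadth- or depth-first search over the same structure would visit all paths instead of just one; printing the graph first, as suggested above, makes it much easier to see whether the fan-in pointers were built correctly.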

Writing a syntax analyser using an AFD for the C language [closed]

I have been given a task to write a C language analyser using an AFD. I can choose whichever language I want, so I think I will go for Ruby. However, this task is a little overwhelming to grasp at the beginning.
The problem I stumble across is: how do I even represent the AFD of the entire C language?
I have been doing a little bit of digging and ended up reading this on lexical analysis. In this paper the author defines every token of the language as a transition between two states (which is very logical). I find it almost impossible not to miss a few transitions, or to build such a big AFD by hand without many mistakes. Any tips?
The task you have is similar to one posed to many undergraduate students in compiler courses every year in thousands of universities, and the notes you cite are a good sample of the many sets of course notes available on the topic.
The solution is the same as any software engineering problem: testing against the specification.
Although the intellectual problem of analysing and creating AFDs for a whole language by hand might seem overwhelming and error-prone, don't forget you are also tasked with implementing it (in your chosen language, Ruby).
This implementation can be tested by feeding it carefully graded and selected samples of C language input. When it does not deliver the expected result, the error will be either in the coding of the AFD or in the AFD you constructed. You make the necessary change and go around the testing loop again.
You will eventually end up with a valid AFD for the entire C language and an analyser for it written in Ruby.
It is often a good idea to start small and implement a subset of the C language and get that working first and then add more to it using stepwise refinement. This is a less risky strategy than attempting to do the whole thing in one go.
Apply all the techniques you should have learned about building specifications, designs, programs and tests to this problem. In short, apply good computer science and software engineering.
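The usual way to avoid drawing one enormous AFD by hand is to encode it as a transition table indexed by (state, character class) and grow it one token class at a time, which also fits the "start small and refine" advice above. The toy automaton below (shown in C for brevity; the asker plans to use Ruby) only separates identifiers from integer literals; its states, character classes and table are assumptions for the example, not a full C lexer.

```c
/* Table-driven AFD sketch: each row is a state, each column a character
 * class, and each entry the next state; DONE marks the end of a token. */
#include <stdio.h>
#include <ctype.h>

enum state { START, IDENT, NUMBER, DONE, N_STATES };
enum cls   { C_ALPHA, C_DIGIT, C_OTHER, N_CLS };

static const enum state trans[N_STATES][N_CLS] = {
    /* ALPHA   DIGIT   OTHER */
    {  IDENT,  NUMBER, DONE  },   /* START  */
    {  IDENT,  IDENT,  DONE  },   /* IDENT  */
    {  DONE,   NUMBER, DONE  },   /* NUMBER */
    {  DONE,   DONE,   DONE  },   /* DONE   */
};

static enum cls classify(int c)
{
    if (isalpha(c) || c == '_') return C_ALPHA;
    if (isdigit(c))             return C_DIGIT;
    return C_OTHER;
}

int main(void)
{
    const char *input = "count42 123 x";
    const char *p = input;

    while (*p) {
        if (classify(*p) == C_OTHER) { p++; continue; }   /* skip separators */

        enum state s = START, next;
        const char *start = p;
        while (*p && (next = trans[s][classify(*p)]) != DONE) {
            s = next;
            p++;
        }
        printf("token: %.*s (%s)\n", (int)(p - start), start,
               s == NUMBER ? "number" : "identifier");
    }
    return 0;
}
```

Extending the table row by row (keywords, operators, string literals, comments, ...) and rerunning the graded test inputs after each extension keeps the construction manageable and catches mistakes early.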

Increasing WPM - High End [closed]

We've had this question already, but I want to narrow it down to already high-speed typists.
The original poster had hit a barrier of 75 WPM and wanted to increase his speed. I'm at a barrier where I can reliably type around 130, and I can sometimes hit 150, probably depending on the distribution of words in the text.
I feel that methods to increase speed from this high end to higher might be different than going from 30 to 60, or even 75 to 100. Anybody have any suggestions?
You need to start looking at the hardware... Try out different keyboards to find the one that responds best to your typing method. I find myself typing much faster on some keyboards even though they are identically sized with identical features... the key response times are slightly different.
If you're trying to learn how to type your favorite language faster, may I suggest that you learn a better language?
When I typed dictation a lifetime ago, I could type over 120 WPM on a Dictaphone. When I type C I rarely exceed 90 WPM, but when I write Lisp, I don't even reach 50 WPM.
The time you're spending typing isn't being spent thinking.
If you're trying to learn how to type copy more quickly, learning to read more quickly can help. I had great success using rapid serial visual presentation to increase my reading speed.
