Math Library in C & Exercises [closed] - c

I tried to find a good place to ask my question, which isn't strictly about programming, though it involves programming in C.
Our schoolteachers told us that we need to start exercising our programming skills in C with math exercises. I searched the web for the best ways to solve such exercises and came across the math library (<math.h>), but I couldn't find a good page with many worked examples. My best source so far is Wikipedia, but I understand that Wikipedia can't cover everything the library offers. I have looked at some examples, and I want to find a complete coursebook for solving all the good math exercises we can do with paper and a pen!
Does anyone have any good ideas?

So, you are trying to learn programming to solve math problems; that's good. But I think you are getting the wrong idea about programming: programming does not solve problems for you. To solve a problem, you have to decide on an algorithm and then express it as program statements in whichever language you like; the program you have created will then give you an outcome based on that algorithm. You have to take the outcome and decide for yourself what to do next.
For example, finding the factorial of a number:
int i = number;    /* number is the value whose factorial we want */
int fact = 1;
while (i > 0)
    fact *= i;     /* bug: i is never decremented, so the loop never ends */
This way you get the factorial of the number you specify. It is you who has to decide whether your algorithm works correctly, by comparing its output against manual work or known records. As you can see, the above program is an infinite loop, and it is me who has to debug such issues; the program does not do that for me. So what does a program do? It just automates what you do manually, helping you save time and work more efficiently.
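For completeness, the fix is a one-line change: decrement i inside the loop. A minimal, runnable version of the corrected sketch (number = 5 is just a sample value) might look like this:
#include <stdio.h>

int main(void)
{
    int number = 5;                    /* sample input */
    unsigned long long fact = 1;

    for (int i = number; i > 0; i--)   /* decrementing i is the missing step */
        fact *= i;

    printf("%d! = %llu\n", number, fact);
    return 0;
}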
So to solve exercises, you have to understand the problem statement and then write a program. I see you are talking about the math library; there are good resources online for studying its most useful functions, and for an in-depth understanding of how they work you can open up the library sources and study them.
A program can never solve your problem without you writing it and deciding on its efficiency.

See if <cmath> solves your problem.
The header <cmath> declares a set of functions to compute common mathematical operations and transformations.
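Since the question is about C, a minimal sketch using <math.h>, the C counterpart of <cmath>, might look like this (on most Unix toolchains you link with -lm):
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 2.0;
    double pi = acos(-1.0);                      /* portable way to get pi */

    printf("sqrt(%g) = %g\n", x, sqrt(x));       /* square root */
    printf("pow(%g, 10) = %g\n", x, pow(x, 10)); /* exponentiation */
    printf("sin(pi/6) = %g\n", sin(pi / 6.0));   /* trigonometry, in radians */
    printf("log(%g) = %g\n", x, log(x));         /* natural logarithm */
    return 0;
}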

Related

When does an algorithm become considered artificial intelligence? [closed]

I understand that an algorithm is a set of instructions. Is AI essentially the same thing, only more complicated? Let's say I use a minimax algorithm to play moves on a tic-tac-toe board; generally people would consider this AI. But if I implement an algorithm to solve a Rubik's Cube, is that considered AI?
I guess what I'm asking is: is it the complexity of the algorithm, the fact that situations change on the fly within an algorithm, the ignorance of the user/programmer as to how the algorithm works, or all/some of the above? Or am I missing something?
I feel like this field is quite arbitrary, and I imagine for good reason: complexity is complex.
It is indeed quite arbitrary.
If you consult Wikipedia you will find the following definition, which in my personal opinion captures it quite accurately:
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."
To take your Rubik's Cube as an example, there are at least two ways you could write the algorithm to solve the puzzle. Firstly, any cube can be solved by following a hardcoded path or set of instructions once you have a certain start position. Implementing this would not be considered AI in my opinion, as the machine itself is not learning anything; it just follows a well-defined path of instructions to the end.
A second way to implement this would be to have the program start solving it randomly. But the machine remembers its moves and learns the most effective path to reach the solution. When solving the next cube, the machine can build upon this newly learned information to solve it faster, and again learn from that iteration to improve its algorithm.
So in short, as far as I'm concerned, it can be considered AI when a machine is capable of optimizing/extending its own algorithms to become more efficient in its tasks.
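To make the "remember and improve" idea concrete, here is a toy sketch in C; the three strategies and their cost model are invented purely for illustration, and a real cube solver would be far more involved:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* The program tries three made-up solving strategies, tracks the
   average cost of each, and increasingly favors the cheapest one.
   Only the learning loop is the point here. */

#define STRATEGIES 3
#define TRIALS 1000

static int moves_needed(int strategy)
{
    static const int base[STRATEGIES] = { 120, 90, 60 };  /* fake costs */
    return base[strategy] + rand() % 30;
}

int main(void)
{
    double avg[STRATEGIES] = { 0.0 };
    int tries[STRATEGIES] = { 0 };
    srand((unsigned)time(NULL));

    for (int t = 0; t < TRIALS; t++) {
        int s = rand() % STRATEGIES;           /* explore occasionally... */
        if (t >= STRATEGIES && rand() % 10 != 0) {
            s = 0;                             /* ...or exploit best-so-far */
            for (int k = 1; k < STRATEGIES; k++)
                if (tries[k] > 0 && (tries[s] == 0 || avg[k] < avg[s]))
                    s = k;
        }
        int cost = moves_needed(s);
        tries[s]++;
        avg[s] += (cost - avg[s]) / tries[s];  /* incremental mean */
    }

    for (int s = 0; s < STRATEGIES; s++)
        printf("strategy %d: tried %4d times, average %.1f moves\n",
               s, tries[s], avg[s]);
    return 0;
}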

Getting started with coding unix commands [closed]

I have been learning C and data structures for quite some time now, and I wanted to see whether I could apply what I have learnt. I searched a bit and found out that I could start with util-linux, but before doing so, I thought I'd check and perhaps dabble a bit with the code for basic Unix commands like "cat". I was able to understand what parts of the code might be trying to do, but I was not able to understand the entire code as a unit.
For example, in the "cat" code, pointers to the output buffer and input buffer are declared and used appropriately, which I could understand. What I could not understand are parts of the code like io_blksize (stat_buf), which have no description whatsoever of what they do. Or how do two pointers declared as pointers to the input and output buffers actually correspond to the input and output buffers?
So my question is: how do I approach this type of code, how can I understand something that has no description of what it does (as in the example above), and how can I make changes to the code so that I can see them when I run a command?
(I would really appreciate references or topics I should start with, so that I can relate what I have learnt to how command code can be modified. I also apologize if the question is too abstract.)
This is a bit of a subjective question, so my answer will just be my opinion, of course.
A good place to start when you run into something you don't recognise while reading source code is the manpages. Each function will generally have a manpage, e.g. man 2 read or man 3 printf. Beyond that, I feel perhaps you should get more of a foundation in Unix before attempting to read the straight source code, a good book is Advanced Programming in the Unix Environment. I've been working through it myself and am finding my Unix knowledge improving considerably.
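As a concrete starting point, the heart of cat is just a read/write loop. A stripped-down sketch (not the real GNU source, which among other things sizes its buffers via io_blksize()) could look like:
#include <unistd.h>

/* A minimal cat: copy standard input to standard output.
   The 4096-byte buffer is an arbitrary stand-in for the
   block size the real cat computes. */
int main(void)
{
    char buf[4096];
    ssize_t n;

    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        if (write(STDOUT_FILENO, buf, (size_t)n) != n)
            return 1;

    return n < 0 ? 1 : 0;
}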
Just my two cents.

Writing a syntax analyser using an AFD for the C language [closed]

I have been given a task to write a C language analyser using an AFD (a deterministic finite automaton, DFA). I can choose whichever language I want, so I think I will go for Ruby. However, this task is a little overwhelming to grasp at the beginning.
The problem I stumble across is: how do I even represent the AFD of the entire C language?
I have been doing a little bit of digging and ended up reading this on lexical analysis. In this paper the author defines every token of the language as a transition between two states (which is very logical). I find it almost impossible not to miss a few tokens, or to build such a big AFD by hand without many mistakes. Any tips?
The task you have is similar to one posed to many undergraduate students in compiler courses every year in thousands of universities, and the notes you cite are a good sample of the many sets of course notes available on the topic.
The solution is the same as any software engineering problem: testing against the specification.
Although the intellectual problem of analysing and creating an AFD for a whole language by hand might seem overwhelmingly error-prone, don't forget you are also tasked with implementing it (in your chosen language, Ruby).
This implementation can be tested by feeding it carefully graded and selected samples of C language input. When it does not deliver the expected result, the error will be either in the coding of the AFD or a fault in the AFD you constructed. You make the necessary change and go around the testing loop again.
You will eventually end up with a valid AFD for the entire C language and an analyser for it written in Ruby.
It is often a good idea to start small: implement a subset of the C language, get that working first, and then extend it using stepwise refinement. This is a less risky strategy than attempting the whole thing in one go.
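To make "start small" concrete, here is a toy DFA in C that classifies complete tokens as identifiers or integer literals. The state set is invented for illustration; a real C lexer needs many more states, and your version would be in Ruby:
#include <ctype.h>
#include <stdio.h>

/* A tiny DFA: identifiers, integer literals, or rejection. */
enum state { START, IDENT, NUMBER, REJECT };

static enum state step(enum state s, int c)
{
    switch (s) {
    case START:
        if (isalpha(c) || c == '_') return IDENT;
        if (isdigit(c))             return NUMBER;
        return REJECT;
    case IDENT:
        return (isalnum(c) || c == '_') ? IDENT : REJECT;
    case NUMBER:
        return isdigit(c) ? NUMBER : REJECT;
    default:
        return REJECT;
    }
}

static const char *classify(const char *tok)
{
    enum state s = START;
    for (const char *p = tok; *p && s != REJECT; p++)
        s = step(s, (unsigned char)*p);
    if (s == IDENT)  return "identifier";
    if (s == NUMBER) return "integer";
    return "unrecognized";
}

int main(void)
{
    const char *samples[] = { "count", "42", "_tmp1", "4x" };
    for (int i = 0; i < 4; i++)
        printf("%-6s -> %s\n", samples[i], classify(samples[i]));
    return 0;
}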
You need to apply all the techniques you should have learned about building specifications, designs, programs, and tests. In short, apply good computer science and software engineering to this problem.

Taking notes when programming? [closed]

I just got my first programming book, and I just started programming.
I have a little question. Should I take notes while reading the book, or should I just memorize, and refer back if I forget something?
Thanks
Never read from just one book; read multiple books on the subject to get a better picture.
For C, read The C Programming Language, C How to Program, C: The Complete Reference, and then C: A Reference Manual (by Harbison and Steele), touching on at least C99.
Take notes and keep a book handy at all times; think before you ink, though.
Always sit by a computer + text editor + compiler (yes, do not use an IDE; learn with manual compilation, as in the example commands after this list).
Learn good debugging techniques; gdb is fine to start off with (although it has a significant learning curve).
Be attentive to what is being said in the books, and also do not forget to experiment all the time. Programming is best learnt by doing and practicing it.
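For the manual-compilation workflow mentioned above, a typical first session (hello.c is just an example file name) looks like this:
gcc -Wall -Wextra -g -o hello hello.c
./hello
gdb ./hello
-Wall and -Wextra enable warnings worth reading, and -g keeps the debug information that gdb needs.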
The best thing to do is to understand what the book is telling you; I've tried memorizing before, and it doesn't work. I would suggest a bit of both, but mostly understanding, because that way you will know what you are coding. Also keep practicing: your confidence in coding will increase and you'll want to code more.

Are there any solid large integer implementations in C? [closed]

I am working on a project where I need to crunch large integers (like 3^361) with absolute precision and as much speed as possible. C is the fastest language I am familiar with, so I am trying to code my solution in that language.
The problem is that I have not been able to find a good implementation of any data type for representing limitless integers in C, other than Python's source code. It is taking me time to go through that code and determine what I need.
I would much rather use someone else's tested code with a full set of functionality (addition, subtraction, multiplication, division, modulo, exponentiation, equality checking... even bitwise operations would be sweet) than spend the weeks it would take me to even begin to get my own version up to par. While it would be a great learning experience, it is not the focus of my problem, and I'd rather get to the part that interests me :)
A couple of people have already mentioned GMP. I would only add that, at least the last time I looked, it was pretty well restricted to working with gcc.
If you want to use other compilers, a couple you might consider are NTL and MIRACL. I've tested MIRACL a bit, and it seems to work reasonably well. I've used NTL quite a bit more, and while large integers are more of a sideline for it, it still handles them quite nicely. It doesn't claim to be as fast as GMP (and, in fact, can use GMP for basic operations), but when I did some minimal benchmarking between the two I didn't find many significant differences (though that was long enough ago that I doubt it is still valid).
Gnu MP provides a bignum library.
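As a taste of the API, a minimal sketch that computes the 3^361 from the question exactly (link with -lgmp):
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t result;

    mpz_init(result);
    mpz_ui_pow_ui(result, 3, 361);       /* result = 3^361, exactly */
    gmp_printf("3^361 = %Zd\n", result);
    mpz_clear(result);
    return 0;
}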
The OpenSSL library also provides a solid BigNum implementation (<openssl/bn.h>).
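For comparison, the same computation with OpenSSL's BIGNUM might look like this (a minimal sketch with error checking omitted; link with -lcrypto):
#include <stdio.h>
#include <openssl/bn.h>

int main(void)
{
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *base = BN_new(), *exp = BN_new(), *result = BN_new();

    BN_set_word(base, 3);
    BN_set_word(exp, 361);
    BN_exp(result, base, exp, ctx);      /* result = 3^361 */

    char *dec = BN_bn2dec(result);       /* decimal string, caller frees */
    printf("3^361 = %s\n", dec);

    OPENSSL_free(dec);
    BN_free(base);
    BN_free(exp);
    BN_free(result);
    BN_CTX_free(ctx);
    return 0;
}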
I use MAPM, which is a portable arbitrary-precision (integer and floating point) library.
libtommath, from libtomcrypt, is probably the smallest, simplest, and fastest. (Funny how those three superlatives almost always come together...) If you can't find an upstream, you can get the source from the Dropbear SSH source tree.
If you want ANSI Standard C, get the code in Dave Hanson's C Interfaces and Implementations. Very clear and well designed.
If gcc and gcc extensions are OK, then as others have pointed out the Gnu Multiprecision Library (GMP) is well thought of and widely used.
Mbed has a bignum implementation that serves as the basis for its crypto functions.
It is heavily used on microcontrollers.
