What are the key differences between Ruby and C?
They are almost totally different.
Ruby
Strong, dynamic typing
Purely object oriented
Automatic garbage collection and no pointers
Interpreted (or JIT compilation with JRuby/IronRuby)
Reflective
Supports functional programming (closures, coroutines, etc.)
No preprocessor or macros
C
Weak, static typing
Procedural (not object oriented)
Not garbage collected and has pointers (see the sketch after this list)
Compiled
No reflection
No real support for functional programming (function pointers exist, but there are no closures or first-class functions)
Has a preprocessor and supports macros
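To make the garbage-collection bullet concrete, here is a minimal C sketch (purely illustrative): every allocation through a raw pointer must be paired with an explicit free, a step Ruby's garbage collector performs for you.

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* In C you manage memory through raw pointers yourself. */
        char *copy = malloc(6);
        if (copy == NULL)
            return 1;
        memcpy(copy, "hello", 6);
        /* ... use copy ... */
        free(copy);  /* forget this and the memory leaks; Ruby has no such step */
        return 0;
    }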
See also the official guide "To Ruby From C and C++" (on ruby-lang.org).
Why do you ask? Do you have a specific project or goals in mind?
In addition to what others have already mentioned, I'd also say that a key difference to keep in mind is that the C family is much more portable, or rather, much easier to distribute as finished software. C programs will also be much faster than Ruby; whether that matters depends on what you are building (well, that's always important, but it isn't a make-or-break proposition for a lot of programs).
Ruby is simply a beautiful language to work with (do not underestimate the importance of a language that works with you); developing programs is much quicker in Ruby than in C (C is a compiled language, so that is to be expected). Ruby is also a pretty simple language to learn, while most people consider C fairly tough for newbies to pick up.
Edit: wow, just saw this was a 3-year-old thread... my bad.
As far as I know, it is almost true that any code that can be represented in the LLVM intermediate language can also be represented in C, with two important exceptions:
Exceptions. (No pun intended.)
Signed integer arithmetic with well-defined behavior on overflow.
Is there anything else that can be represented in LLVM but not in C?
In addition to exception handling, other big features are garbage collection and out-of-the-box coroutines. Going to a lower level, there are trampoline intrinsics, patch points for JITs, and direct support for Obj-C ARC Runtime intrinsics.
C is Turing complete, so all of these things can be brought to C with libraries and so on, but I list them because they are built into the LLVM language.
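To illustrate the signed-overflow point from the question: in C, signed overflow is undefined behavior, but it can be emulated by detouring through unsigned arithmetic, which is defined to wrap. A minimal sketch, assuming 32-bit types (the helper name is mine, not a standard function):

    #include <stdint.h>

    /* Wrapping signed 32-bit addition: unsigned arithmetic wraps
       modulo 2^32, so we add in uint32_t and convert back. Note that
       converting an out-of-range value back to int32_t was
       implementation-defined before C23, so this is a sketch, not a
       fully portable guarantee. */
    static int32_t add_wrap_i32(int32_t a, int32_t b)
    {
        return (int32_t)((uint32_t)a + (uint32_t)b);
    }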
Another example is metadata, including LLVM's branch-weight and debugloc metadata.
Except that they can if you're willing to be tortuous enough about the C you write. I think that's general: IF you're willing to write really tortuous, unidiomatic C, THEN you can write anything. So I vote to close this as unclear.
EDIT: Most things probably are expressible in C given enough discipline, verbosity and preprocessing directives, but I wonder about aliasing.
How is functional programming useful compared to normal procedural languages like C, or object-oriented languages like C++, and where does it shine?
C lacks several features of functional programming that need to be worked around (likewise, while you can write in an object-oriented style in C, you need to work around several missing features as well).
C functions are not first-class objects. You cannot return a function from a function, store a function in a variable, or pass a function to another function. You cannot nest functions, and you cannot create anonymous functions. The workaround is that C does allow you to use pointers to functions, so you can write a function that takes a pointer to a function as an argument, but this is not as clean as what you can do in a language oriented towards functional programming.
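As a sketch of that workaround (all names here are hypothetical, for illustration only):

    #include <stdio.h>

    static int square(int x) { return x * x; }

    /* A "higher-order" function in C: it receives a pointer to a
       function rather than a function value. */
    static int apply(int (*f)(int), int x)
    {
        return f(x);
    }

    int main(void)
    {
        printf("%d\n", apply(square, 5));  /* prints 25 */
        return 0;
    }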
C lacks closures, which are a way of capturing the “environment” of execution at a particular point in a program (namely, what variable names are bound to).
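The usual C workaround is to pass the captured environment by hand as an extra context argument. A minimal sketch, with all names hypothetical:

    #include <stdio.h>

    struct adder_env { int increment; };  /* the captured "environment" */

    /* Instead of a closure capturing "increment", the code receives
       the environment explicitly through a context pointer. */
    static int add_with_env(int x, void *ctx)
    {
        const struct adder_env *env = ctx;
        return x + env->increment;
    }

    int main(void)
    {
        struct adder_env env = { 3 };             /* "capture" by hand */
        printf("%d\n", add_with_env(10, &env));   /* prints 13 */
        return 0;
    }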
C lacks generics, except in the most broad sense. In most functional languages, it is possible to write one function which applies to a large number of different types because they don’t depend on specific attributes of those types.
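The standard library's qsort is the canonical example of this broadest sense of generics: it works on untyped memory plus an element size and a comparison callback:

    #include <stdio.h>
    #include <stdlib.h>

    /* qsort knows nothing about int; the element type is conveyed
       only through sizeof and the comparator. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int v[] = { 3, 1, 2 };
        qsort(v, 3, sizeof v[0], cmp_int);
        printf("%d %d %d\n", v[0], v[1], v[2]);  /* 1 2 3 */
        return 0;
    }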
C is a low-level language that gives the programmer full control over program execution. It was never designed to be a high-level, abstract language.
Functional programming is less widespread, and most languages in common use nowadays are object-oriented.
If you need a language that gives you the ability to control the program and its environment, you should consider C++.
When I compile programs in Ada, I typically notice a longer compile time for code of similar length and of similar content to programs written in C or C++.
While it is true that compile time ultimately comes down to the compiler and the system, Ada compilation generally takes longer. Is this process radically different from the compile/link process of C or C++? Does it consist of different stages?
What about the Ada compilation process makes it take longer than that of C or C++?
It is all about the amount of time and effort put into making the compiler fast.
Compilers with a broader market tend to have more money to invest in making them fast; however, sometimes there are other elements at stake. For example, a compiler might perform static type checking, various "extra" correctness checks, and other work (programming-contract compliance, code-quality checks, etc.) that adds to the compile time.
Ada has tended to have less money thrown at its compilers, and it is likely a slightly more complex language to parse than C. Both of these factors make it likely that its compilers will be slower.
Note that speed of compilation has little to do with the "quality" of the language. While C might have a larger footprint, Ada has made its mark on the programming world in other ways.
I have been given a task to write a C language analyser using an AFD (a DFA: deterministic finite automaton). I can choose whichever language I want, so I think I will go for Ruby. However, this task is a little overwhelming to grasp at the beginning.
The problem I stumble across is: how do I even represent the AFD of the entire C language?
I have been doing a little digging and I ended up reading this on lexical analysis. In this paper the author defines every token of the language as a transition between two states (which is very logical). I find it almost impossible not to miss a few tokens, or to build such a big AFD by hand without many mistakes. Any tips?
The task you have is similar to one posed to many undergraduate students in compiler courses every year at thousands of universities, and the notes you cite are a good sample of the many sets of course notes available on the topic.
The solution is the same as any software engineering problem: testing against the specification.
Although analysing and creating an AFD for a whole language by hand might seem overwhelmingly error-prone, don't forget you are also tasked with implementing it (in your chosen language, Ruby).
This implementation can be tested by feeding it carefully graded and selected samples of C language input. When it does not deliver the expected result, the error will be either in the coding of the AFD or a fault in the AFD you constructed. You make the necessary change and go around the testing loop again.
You will eventually end up with a valid AFD for the entire C language and an analyser for it written in Ruby.
It is often a good idea to start small and implement a subset of the C language and get that working first and then add more to it using stepwise refinement. This is a less risky strategy than attempting to do the whole thing in one go.
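To show how small that first subset can be, here is a hand-rolled sketch of a DFA covering only identifiers and integer literals. It is written in C for illustration (your version would be in Ruby), and all names are made up; the state-machine shape is what carries over:

    #include <ctype.h>
    #include <stdio.h>

    /* A tiny DFA (AFD) for identifiers and integer literals only;
       every other character is skipped. A real analyser grows by
       adding states and transitions for the rest of C. */
    enum state { START, IN_IDENT, IN_NUMBER };

    static void lex(const char *src)
    {
        enum state s = START;
        const char *tok = src;

        for (const char *p = src; ; p++) {
            char c = *p;
            switch (s) {
            case START:
                tok = p;
                if (isalpha((unsigned char)c) || c == '_') s = IN_IDENT;
                else if (isdigit((unsigned char)c))        s = IN_NUMBER;
                else if (c == '\0')                        return;
                break;                     /* other chars: stay in START */
            case IN_IDENT:
                if (!isalnum((unsigned char)c) && c != '_') {
                    printf("IDENT  %.*s\n", (int)(p - tok), tok);
                    s = START; p--;        /* reprocess c from START */
                }
                break;
            case IN_NUMBER:
                if (!isdigit((unsigned char)c)) {
                    printf("NUMBER %.*s\n", (int)(p - tok), tok);
                    s = START; p--;
                }
                break;
            }
        }
    }

    int main(void)
    {
        lex("int x1 = 42;");   /* IDENT int / IDENT x1 / NUMBER 42 */
        return 0;
    }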
You need to take all those techniques you should have learned about building specifications, designs, programs, and tests, and apply them to this problem. In short: apply good computer science and software engineering.
So, throughout my programming experience I have come across two styles of type annotation in statically typed languages, which I call 'before' and 'after'. C-style languages use the format
int i = 5
While most non-C-family languages use the format
var c:int = 5
Examples of the former category would be C, C++, Java; examples of the latter category would be Scala, Haxe, Go.
This may seem to some to be superficial, but my question is: what are the advantages of each style? Why use one over the other? Why did C adopt that style in the first place?
The machine doesn't care: the people who designed certain languages simply felt that some kinds of syntax are better or more easily readable than others. Modern compilers usually have several stages of processing, and almost all of these syntactic differences are lost after the first stage, which parses the text and converts it into the compiler's internal structures (an AST, or abstract syntax tree).
There are some historical precedents, e.g. "prefix" vs "infix" vs "postfix" notation (http://en.wikipedia.org/wiki/Polish_notation, http://en.wikipedia.org/wiki/Infix_notation, http://en.wikipedia.org/wiki/Reverse_Polish_notation). These mattered at the edges of computer engineering history: infix notation is usually harder to parse and requires more memory than postfix/RPN notation, so it was avoided where resources were really scarce (several KiB of memory or less). Most of those reasons are now obsolete, as hardware is powerful enough.
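To make the parsing point concrete, here is a toy sketch showing why postfix needs so little machinery: one left-to-right pass and a small value stack, with no precedence rules at all (the hard-coded expression and names are mine):

    #include <stdio.h>
    #include <stdlib.h>

    /* Evaluates the postfix expression "3 4 + 2 *" (= (3 + 4) * 2)
       in a single pass over the tokens. */
    int main(void)
    {
        const char *rpn[] = { "3", "4", "+", "2", "*" };
        int stack[16];
        int top = 0;

        for (int i = 0; i < 5; i++) {
            const char *t = rpn[i];
            if (t[0] == '+' || t[0] == '*') {
                int b = stack[--top];          /* operator: pop two, */
                int a = stack[--top];          /* apply, push result */
                stack[top++] = (t[0] == '+') ? a + b : a * b;
            } else {
                stack[top++] = atoi(t);        /* operand: push its value */
            }
        }
        printf("%d\n", stack[0]);  /* prints 14 */
        return 0;
    }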
Today, when designing a language, the choice of such syntax details is influenced by trying to make the language similar to some other popular language or group of languages for which there are already existing programmers, to avoid making a "language from Mars" which few people will use.
tl;dr: it depends on the person who created the language and what they thought was more readable or "the right thing to do".