Which is a strongly typed language: Python or Prolog?

I am new to Python and Prolog. From my understanding, Python is a strongly typed language. Is Prolog also a strongly typed language?

Like Python, Prolog will give you a type error if you try to add things that aren't integers. But that's just about the limit of what Prolog will do for you. It's not terribly useful to say that Prolog is or is not "strongly typed"—I have already written so many answers to questions about "strongly typed", and rewritten other people's wrong answers to questions about "strongly typed", that I never want to hear the words again. And yet somewhere, someone on the Internet is wrong.
Here's what's useful to know:
Both Prolog and Python are dynamically typed, which is to say that programs are not checked for "type errors" until run time. A typical "type error" in this case is a function/method (Python) or a relation (Prolog) applied to the "wrong" kinds of values. And Python will detect cases where you apply something to the wrong number of arguments.
In Python, there are quite a few terms (expressions) that are ill-typed, i.e., that will be rejected at run time because of type errors.
In Prolog, almost every term is type-correct by definition. For example, a user-defined functor may be applied to any list of terms of any length, and Prolog will cheerfully try to interpret it as a well-formed relation. If you give the "wrong" number of arguments to a relation, Prolog doesn't treat this as a type error; it just assumes you have two different relations of the same name with different arities. (Whether this behavior is useful is subject to debate, but that's the way Prolog behaves.) Prolog is a bit stricter with built-in relations like is, as in
X is Y + Z
What's true and useful to know is that in Prolog, the dynamic type system rejects very few terms—many fewer than Python's dynamic type system. If on this account you choose to call Prolog "weaker" and Python "stronger", you can do that, because the terms "strong" and "weak" don't have any universally agreed-upon technical meaning. But you'd be better off thinking, and saying, that Prolog's dynamic type system accepts almost all relations and terms as well typed—unlike Python's. That way you will communicate what is actually going on.

Python is strongly typed. For example:
"1" + 1
raises a TypeError.
I believe Prolog is not strongly typed though.

Prolog is not a strongly typed language.
ref: http://scom.hud.ac.uk/scomtlm/book/node187.html

[...] Is Prolog a strongly typed language also?
No

Related

How much lisp to implement in C before writing extension in itself?

I am implementing a Lisp interpreter in C. So far I have implemented a few primitives like cons, car, cdr, and eq, along with basic arithmetic.
Just before starting to implement define and lambda, it occurred to me that I need to implement an environment. I am unsure whether I could implement it in Lisp itself.
My intent is to implement a minimal amount of Lisp so that I can write extensions to the language in itself. I am not sure how much is minimal. Would implementing an FFI qualify as minimal?
The answer to your question depends on the meaning that you give to the word “minimal”.
Given your question, and assuming that you don't want to make an implementation competing with the fine implementations of Common Lisp and Scheme available nowadays, my hypothesis is that by "minimal" you mean: Turing complete, that is, capable of expressing any computation expressible in a general-purpose programming language.
With this assumption, you need to implement three other things:
conditional forms (cond)
lambda expressions (lambda)
a way of defining recursive lambda expressions (labels or defun)
Your interpreter should then be able to evaluate forms. This should be sufficient to get a language equivalent to the original LISP, one that allows any computable function to be expressed in the language.
First off, you are talking about writing a LISP interpreter. You have a lot of choices to make when it comes to scoping, and LISP-1 vs. LISP-2, since these questions alter the implementation core. An interpreter is a general-purpose program that reads and evaluates code. It can support abstractions, but it won't extend itself by creating more native functionality.
If you are interested in that kind of thing you can perhaps make a compiler instead. E.g. there are many Scheme-like subsets that compile to C or Java code, but you can make your own VM. Thus it can indeed compile itself to run on its own target machine (self-hosting), provided all the forms and procedures you use have been implemented using the primitives supported by the compiler.
Making a dumb compiler is not much different from making an interpreter. That is very clear if you've watched the SICP videos (10A is about compilation, 7A-7B are about interpreters).
The environment can be a chain of pairs, just as in a LISP interpreter. It would be difficult to implement the environment in Lisp itself without making the language very difficult to use (unless it's compiled, that is).
You may use the data structures of Lisp and the primitives from the C code, though.
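For instance, a minimal sketch of such an environment in C (names are illustrative; values are plain ints where a real interpreter would use tagged Lisp objects):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* An environment as a chain of pairs: each binding points at the rest
       of the chain, and inner scopes simply cons new bindings on front. */
    typedef struct Binding {
        const char *name;
        int value;
        struct Binding *next;
    } Binding;

    /* Extend an environment; shadowing falls out naturally because
       lookup stops at the first match. */
    static Binding *env_define(Binding *env, const char *name, int value) {
        Binding *b = malloc(sizeof *b);
        b->name = name;
        b->value = value;
        b->next = env;
        return b;
    }

    static int env_lookup(const Binding *env, const char *name, int *out) {
        for (; env; env = env->next)
            if (strcmp(env->name, name) == 0) { *out = env->value; return 1; }
        return 0;   /* unbound variable */
    }

    int main(void) {
        Binding *global = env_define(NULL, "x", 1);
        Binding *local = env_define(global, "x", 2);   /* inner scope shadows */
        int v;
        if (env_lookup(local, "x", &v))  printf("inner x = %d\n", v);  /* 2 */
        if (env_lookup(global, "x", &v)) printf("outer x = %d\n", v);  /* 1 */
        return 0;
    }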
Making an FFI is a fast way to give your language lots of features. It solves the chicken-and-egg problem by using other people's work from within your language. It fuses the top (primitives and syntax) and the bottom layer (a runtime) of your system. It's the ultimate primitive, and you can think of it as a system call or message bus to the runtime.
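For instance, the classic introductory libffi example makes one dynamic call to puts; an interpreter's FFI would build the call descriptor from its own type information at run time instead of hard-coding it:

    #include <stdio.h>
    #include <ffi.h>

    int main(void) {
        ffi_cif cif;
        ffi_type *args[1];
        void *values[1];
        char *s;
        ffi_arg rc;

        /* Describe the signature of puts: one pointer argument, int result. */
        args[0] = &ffi_type_pointer;
        values[0] = &s;

        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1,
                         &ffi_type_sint, args) == FFI_OK) {
            s = "Hello World!";
            ffi_call(&cif, FFI_FN(puts), &rc, values);
            /* rc now holds the result of the call to puts */
        }
        return 0;
    }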
I strongly suggest reading Queinnec's book Lisp In Small Pieces. It is a book dedicated entirely to answering your question, and it explains in detail the many trade-offs and the internals of Lisp implementations and definitions, giving many explained examples of Lisp interpreters and compilers.
You might also consider using libffi. You could be interested in the internals of M. Serrano's Bigloo & Hop implementations. You might even look inside my MELT lisp-like language for customizing the GCC compiler.
You also need to learn more about garbage collection (you might read the GC handbook). You could use Boehm's conservative garbage collector (or something else, e.g. my Qish or MPS) or write your own GC.
You may want to learn more about Chicken, Scheme 48, Guile and read their papers and look inside their code.
See also J.Pitrat's blog: it is not about Lisp (but about bootstrapping strong AI) and has several fascinating entries related to bootstrapping.

C bindings to Ada package

While I can find plenty of information on generating Ada bindings to C libraries, I can't seem to find anything on the opposite. Specifically, I have some math/science packages that I would like to export to C, to make them more widely usable. Exporting functions or procedures through pragmas/aspects is easy enough, but I'm confused about how to deal with types declared in Ada.
Say, for example, I have a package implementing unit-agnostic planar angles. Along with arithmetic operators, there are functions for handling the different units, like Degrees(«Value») return «Angle» as a constructor and Gradians(«Angle») return «Value» as an accessor. This way, there's a common representation, carefully picked, to allow the programmer to work with any unit.
What I'm confused about is this: when the functions or procedures are exported, how does C know about and understand the angle type that it is to pass?
Edit:
Okay, so I was adding "with Export, Convention => C;" to the types, when there should not have been an "Export". Silly oversight.
Still closely related, I now don't know how to actually call the code from C. As stated, there's plenty of documentation about importing stuff from C, but even the LRM doesn't provide examples about exporting stuff to C.
I know C includes libraries/files through the preprocessor, specifically "#include" directives. I wouldn't think this would work though, as the C compiler would then try to parse the Ada spec. How does C then see the Ada stuff?
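Edit 2: My current understanding (a sketch, assuming GNAT; every identifier below is invented for illustration) is that C never parses the Ada spec at all. You hand-write (or generate) ordinary C prototypes matching the External_Name of each exported subprogram, pick a C-compatible representation for the type, and call the binder-generated adainit/adafinal around the calls:

    /* Hand-written C view of the Ada package (hypothetical names).  The Ada
       side would export, e.g.:
         function Degrees (Value : Long_Float) return Angle
           with Export, Convention => C, External_Name => "angles_degrees";  */
    #include <stdio.h>

    typedef double angle_t;   /* must match the representation chosen in Ada */

    extern angle_t angles_degrees(double value);   /* constructor */
    extern double  angles_gradians(angle_t a);     /* accessor    */

    /* Elaboration hooks generated by the GNAT binder; the Ada run time
       must be initialized before the first call into the library. */
    extern void adainit(void);
    extern void adafinal(void);

    int main(void) {
        adainit();
        angle_t a = angles_degrees(90.0);
        printf("90 degrees = %f gradians\n", angles_gradians(a));
        adafinal();
        return 0;
    }

    /* Compiles on its own, but of course only links together with the
       object code built from the Ada package. */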

Is a C implementation of the C++ STL technically infeasible?

There are a few questions on here asking whether such a thing exists, usually with answers promoting void pointers.
I'd like the data structures and algorithms to be strongly typed. I think this is feasible, though the implementation would be lengthy!
The plan is to parameterise the data structures, iterators and algorithms by type name, performing name mangling with the C preprocessor. It'll lean hard on the #include directive to write out the permutations. Finally, wrap the result in a friendly polymorphic interface to hide the mangled names (C11 _Generic based).
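To sketch what I have in mind (all names invented; a toy vector rather than a real container, and using a macro where the real thing would use an #include-able template header):

    #include <stddef.h>
    #include <stdio.h>

    /* Name mangling with the preprocessor: DECL_VEC(T) stamps out a
       vector-of-T type plus functions vec_T_push and vec_T_get. */
    #define VEC(T)        vec_##T
    #define VEC_FN(T, op) vec_##T##_##op

    #define DECL_VEC(T)                                             \
        typedef struct { T data[16]; size_t len; } VEC(T);          \
        static void VEC_FN(T, push)(VEC(T) *v, T x) {               \
            if (v->len < 16) v->data[v->len++] = x;                 \
        }                                                           \
        static T VEC_FN(T, get)(const VEC(T) *v, size_t i) {        \
            return v->data[i];                                      \
        }

    DECL_VEC(int)
    DECL_VEC(double)

    /* C11 _Generic front end that hides the mangled names. */
    #define vec_push(v, x) _Generic((v),              \
        vec_int *:    vec_int_push,                   \
        vec_double *: vec_double_push)(v, x)

    int main(void) {
        vec_int vi = {0};
        vec_double vd = {0};
        vec_push(&vi, 42);
        vec_push(&vd, 3.14);
        printf("%d %f\n", vec_int_get(&vi, 0), vec_double_get(&vd, 0));
        return 0;
    }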
The ideal answer would be either "sure, that'll work" or "nope, C can't do X". I'm essentially looking for guidance on whether this looks like a fool's errand. Thank you.
1992 is calling and wants its technology back.
The first C++ compilers actually converted C++ into C, so yes it can be done, but you need to look a bit deeper into why you want to do it and whether it's worthwhile.

What parser-generators with code separation and language extensibility would you recommend?

I'm looking for a context-free grammar parser generator with grammar/code separation and the possibility of adding support for new target languages. For instance, if I want a parser in Pascal, I can write my own Pascal code generator without reimplementing the whole thing.
I understand that most open-source parser generators can in theory be extended, but I'd prefer something that has extensibility planned and documented.
Feature-wise I need the parser to at least support Python-style indentation, maybe with some additional work. No requirement on the type of parser generated, but I'd prefer something fast.
Which are the most well-known/maintained options?
Popular parser generators seem to mostly use a mixed grammar/code approach, which I really don't like. The comparison list on Wikipedia shows a few, but I'm a novice at this and can't tell which to try.
Why I don't like mixing grammar and code: this approach seems like a mess. Grammar is grammar, implementation details are implementation details. They're different things written in different languages, and it's intuitive to keep them in separate places.
What if I want to reuse parts of grammar in another project, with different implementation details? What if I want to compile a parser in a different language? All of this requires grammar to be kept separate.
Most parser generators won't handle arbitrary context-free grammars. They handle some subset (LL(1), LL(k), LL(*), LALR(1), LR(k), ...). If you choose one of these, you will almost certainly have to hack your grammar to fit the limitations of the parser generator (no left recursion, limited lookahead, ...). If you want a truly context-free parser generator, you want an Earley parser generator (inefficient), a GLR parser generator (the most practical of the lot), or a PEG parser generator (and the last isn't context-free; it requires rules to be ordered to determine which ones take precedence).
You seem to be worried about mixing syntax and parser-actions used to build the trees.
If the tree you build isn't a direct function of the syntax, there has to be some way to tie the tree-building machinery to the grammar productions. Placing it "near" the grammar production is one way, but leads to your "mixed" notation objection.
Another way is to give each rule a name (or some unique identifier) and set the tree-building machinery off to the side, indexed by the names. This way your grammar isn't contaminated with the "other stuff", which seems to be your objection. None of the parser generator systems I know of do this. An awkward issue is that you now have to invent lots of rule names, and any time you have a few hundred names, that's inconvenient by itself and it is hard to make them mnemonic.
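To make "off to the side, indexed by the names" concrete, a sketch in C (every name here is invented):

    #include <stdio.h>
    #include <string.h>

    /* Tree-building actions live in a table keyed by rule name, so the
       grammar file itself stays free of code. */
    typedef void (*Action)(void);

    static void build_add(void) { puts("building '+' node"); }
    static void build_num(void) { puts("building number leaf"); }

    static const struct { const char *rule; Action act; } actions[] = {
        { "expr_plus", build_add },   /* expr : expr '+' term */
        { "term_num",  build_num },   /* term : NUMBER        */
    };

    /* The generated parser would call this on every reduction. */
    static void on_reduce(const char *rule) {
        for (size_t i = 0; i < sizeof actions / sizeof actions[0]; i++)
            if (strcmp(actions[i].rule, rule) == 0) { actions[i].act(); return; }
    }

    int main(void) {
        on_reduce("term_num");
        on_reduce("expr_plus");
        return 0;
    }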
A third way is to make the tree a direct function of the syntax, and auto-generate the tree-building steps. This requires no extra stuff off to the side at all to produce the ASTs. The only tool I know of that does this (there may be others, but I've been looking for 20-odd years and haven't seen one) is my company's product, the DMS Software Reengineering Toolkit. [DMS isn't just a parser generator; it is a complete ecosystem for building program analysis and transformation tools for arbitrary languages, using a GLR parsing engine; yes, it handles Python-style indents.]
One objection is that such trees are concrete, bloated and confusing; if done right, that's not true.
My SO answer to the question "What is the difference between an Abstract Syntax Tree and a Concrete Syntax Tree?" discusses how we get the benefits of ASTs from automatically generated compressed CSTs.
The good news about DMS's scheme is that the basic grammar isn't bloated with parsing support. The not so good news is that you will find lots of other things you want to associate with grammar rules (prettyprinting rules, attribute computations, tree synthesis,...) and you come right back around to the same choices. DMS has all of these "other things" and solves the association problem a number of ways:
By placing other related descriptive formalisms next to the grammar rule (producing the mixing you complained about). We tolerate this for pretty-printing rules because in fact it is nice to have the grammar (parse) rule adjacent to the pretty-print (anti-parse) rule. We also allow attribute computations to be placed near the grammar rules to provide an association.
While DMS allows rules to have names, this is only for convenient access by procedural code, not associating other mechanisms with the rule.
DMS provides a third way to associate these mechanisms (esp. attribute grammar computations): by using the rule itself as a kind of giant name. So, you write the grammar and prettyprint rules in one place, and somewhere else you can write the grammar rule again with an associated attribute computation. In principle, this is just like giving each rule a name (well, a signature) and associating the computation with the name. But it also allows us to define many, many different attribute computations (for different purposes) and associate them with their rules, without cluttering up the base grammar. Our tools check that a (rule, associated-computation) pair has a valid rule in the base grammar, so it is relatively easy to track down what needs fixing when the base grammar changes.
This being my tool (I'm the architect), you shouldn't take this as a recommendation, just a bias. That bias is supported by DMS's ability to parse (without whimpering) C, C++, Java, C#, IBM Enterprise COBOL, Python, F77/F90/F95 (including column-6 continuations, F90 continuations, and embedded C preprocessor directives, under most circumstances), Mumps, PHP4/5 and many other languages.
First off, any decent parser generator is going to be robust enough to support Python's indenting. That isn't really all that weird as languages go. You should try parsing column-sensitive languages like Fortran77 some time...
Secondly, I don't think you really need the parser itself to be "extensible" do you? You just want to be able to use it to lex and parse the language or two you have in mind, right? Again, any decent parser-generator can do that.
Thirdly, you don't really say what about the mix between grammar and code you don't like. Would you rather it be all implemented in a meta-language (kinda tough), or all in code?
Assuming it is the latter, there are a couple of in-language parser generator toolkits I know of. The first is Boost's Spirit, which is implemented in C++. I've used it, and it works. However, back when I used it you pretty much needed a graduate degree in "boostology" to be able to understand its error messages well enough to get anything working in a reasonable amount of time.
The other I know about is OpenToken, which is a parser-generation toolkit implemented in Ada. Ada doesn't have the error-novel problem that C++ has with its templates, so OpenToken is far easier to use. However, you have to use it in Ada...
Typical functional languages allow you to implement any sublanguage you like (mostly) within the language itself, thanks to their inherently good support for things like lambdas and metaprogramming. However, their parsers tend to be slower. That's really no problem at all if you are just parsing a configuration file or two. It's a tremendous problem if you are parsing hundreds of files at a go.

Does C's FILE have an object-oriented interface?

Does the FILE type used through standard C functions fopen, etc. have an object-oriented interface?
I'm looking for opinions with reasoning rather than an absolute answer, as definitions of OO vary by who you ask. What are the important OO concepts it meets or doesn't meet?
In response to JustJeff's comment below, I am not asking whether C is an OO language, nor whether C (easily or not) allows OO programming. (Isn't that a separate issue?)
Is C an object-oriented language?
Was OOP (object-oriented programming) anything more than a laboratory concept when C and FILE were created?
Answering these questions will answer your question.
EDIT:
Further thoughts:
Object orientation specifically means several behaviors, including:
Inheritance: Can you derive new classes from FILE?
Polymorphism: Can you treat derived classes as FILEs?
Encapsulation: Can you put a FILE inside another object?
Methods & properties: Does a FILE have methods and properties specific to it? (e.g. myFile.Name, myFile.Size, myFile.Delete())
Although there are well-known C "tricks" to accomplish something resembling each of these behaviors, this is not built into FILE, and was not the original intent.
I conclude that FILE is not Object Oriented.
If the FILE type were "object oriented", presumably we could derive from it in some meaningful way. I've never seen a convincing instance of such a derivation.
Let's say I have a new hardware abstraction, a bit like a socket, called a wormhole. Can I derive from FILE (or socket) to implement it? Not really - I've probably got to make some changes to tables in the OS kernel. This is not what I call object orientation.
But this whole issue comes down to semantics in the end. Some people insist that anything that uses a jump-table is object oriented, and IBM have always claimed that their AS/400 boxes are object-oriented, through & through.
For those of you that want to dip into the pit of madness and stupidity that is the USENET comp.object newsgroup, this topic was discussed quite exhaustively there a few years ago, albeit by mad and stupid people. If you want to trawl those depths, the Google Groups interface is a good place to start.
Academically speaking, certainly the actual files are objects. They have attributes and you can perform actions on them. Doesn't mean FILE is a class, just saying, there are degrees of OO-ness to think about.
The trouble with trying to say that the stdio FILE interface qualifies as OO, however, is that the stdio FILE interface doesn't represent the 'objectness' of the file very well. You could use FILEs under plain old C in an OO way, but of course you forfeit the syntactic clarity afforded by Java or C++.
It should probably further be added that you can't get 'inheritance' from FILE, which further disqualifies it as OO, but you could argue that's more a fault of its environment (plain C) than of the abstract idea of the file-as-object itself.
In fact... you could probably make a case for FILE being something like a Java interface. In the Linux world, you can operate almost any kind of I/O device through the open/close/read/write/ioctl calls; the FILE functions are just covers on top of those; therefore in FILE you have something like an abstract class that defines the basic operations (open/read/etc.) on an 'abstract I/O device', leaving it up to the various sorts of derived types to flesh those out with type-specific behavior.
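To illustrate that reading (a hedged sketch; none of these names come from any real header):

    #include <stdio.h>
    #include <string.h>

    /* An "abstract I/O device" as a table of function pointers; each
       concrete device fills in its own behavior. */
    typedef struct Device Device;
    struct Device {
        size_t (*read)(Device *self, char *buf, size_t n);
        void   (*close)(Device *self);
    };

    /* One "derived type": a device backed by an in-memory string. */
    typedef struct {
        Device base;        /* first member, so a MemDevice* is a Device* */
        const char *data;
        size_t pos;
    } MemDevice;

    static size_t mem_read(Device *self, char *buf, size_t n) {
        MemDevice *m = (MemDevice *)self;
        size_t left = strlen(m->data) - m->pos;
        if (n > left) n = left;
        memcpy(buf, m->data + m->pos, n);
        m->pos += n;
        return n;
    }

    static void mem_close(Device *self) { (void)self; }

    int main(void) {
        MemDevice m = { { mem_read, mem_close }, "hello", 0 };
        Device *d = &m.base;   /* used only through the abstract interface */
        char buf[8] = {0};
        d->read(d, buf, sizeof buf - 1);
        printf("%s\n", buf);
        d->close(d);
        return 0;
    }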
Granted, it's very hard to see the OO in a pile of C code, and very easy to break the abstractions, which is why the actual OO languages are so much more popular these days.
It depends. How do you define an "object-oriented interface"? As the comments to abelenky's post show, it is easy to construct an argument that FILE is object-oriented. It depends on what you mean by "object-oriented". It doesn't have any member methods, but it does have functions specific to it.
It cannot be derived from in the "conventional" sense, but it does seem to be polymorphic. Behind a FILE pointer, the implementation can vary widely. It may be a file, it may be a buffer in memory, it may be a socket or the standard output.
Is it encapsulated? Well, it is essentially implemented as a pointer. There is no access to the implementation details of where the file is located, or even the name of the file, unless you call the proper API functions on it. That sounds encapsulated to me.
The answer is basically whatever you want it to be. If you don't want FILE to be object-oriented, then define "object-oriented" in a way that FILE can't fulfill.
C has the first half of object orientation:
Encapsulation, i.e. you can have compound types like FILE* or structs, but you can't inherit from them, which is the second (although less important) half.
No. C is not an object-oriented language.
I know that's an "absolute answer," which you didn't want, but I'm afraid it's the only answer. The reasoning is that C is not object-oriented, so no part of it can have an "object-oriented interface".
Clarification:
In my opinion, true object-orientation involves method dispatch through subtype polymorphism. If a language lacks this, it is not object-oriented.
Object-orientation is not a "technique" like GTK. It is a language feature. If the language lacks the feature, it is not object-oriented.
If object-orientation were merely a technique, then nearly every language could be called object-oriented, and the term would cease to have any real meaning.
There are different definitions of oo around. The one I find most useful is the following (inspired by Alan Kay):
objects hold state (i.e. references to other objects)
objects receive (and process) messages
processing a message may result in
messages being sent to the object itself or other objects
a change in the object's state
This means you can program in an object-oriented way in any imperative programming language - even assembler. A purely functional language has no state variables, which makes oo impossible or at least awkward to implement (remember: LISP is not pure!); the same should go for purely declarative languages.
In C, message passing is most often implemented as function calls with a pointer to a struct holding the object's state as the first argument, which is the case for the file-handling API. Still, C as a language can't be classified as oo, as it doesn't have syntactic support for this style of programming.
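For example, stdio itself already reads this way if you squint (though stdio is not consistent about the receiver's position: fgets and fputs take the FILE* last):

    #include <stdio.h>

    /* The FILE* is the "receiver"; each call is a message sent to it. */
    int main(void) {
        FILE *f = tmpfile();              /* "constructor" */
        if (f == NULL) return 1;

        fputs("hello, world\n", f);       /* f.puts("hello, world") */
        rewind(f);                        /* f.rewind()             */

        char line[32];
        if (fgets(line, sizeof line, f))  /* f.gets(...)            */
            fputs(line, stdout);

        fclose(f);                        /* "destructor" */
        return 0;
    }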
Also, some other definitions of oo include things like class-based inheritance (so what about prototypal languages?) and encapsulation - which aren't really essential in my opinion - but some of them can be implemented in C with some pointer and casting magic.
