Wikipedia defines reification as:
Reification is the process by which an abstract idea about a computer
program is turned into an explicit data model or other object created
in a programming language.
The definition of implementation reads:
Implementation is the realization of an application, or execution of a
plan, idea, model, design, specification, standard, algorithm, or
policy.
When I was looking into the meaning of reification, its definition struck me as something for which I would use the word implementation, e.g. translating theory into practice in one of several possible ways.
Is implementation a superset of reification? What is the difference between these two words?
I'd say you're about right when you hypothesize that a reification is a subset of an implementation.
Reification applies to an abstraction that gets transposed into a concrete object, whereas implementation is about the whole program.
For example, if you were to implement a video game (thus the implementation) you may need to translate abstract concepts such as coordinates into concrete objects, i.e. Point structures (which would be ONE reification).
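A minimal C sketch of that example (the names are illustrative): the abstract notion of a 2D coordinate is reified as a Point type, which the rest of the implementation then manipulates.

#include <stdio.h>

/* The abstract idea of "a position in 2D space" reified as a concrete type. */
typedef struct {
    int x;
    int y;
} Point;

/* The surrounding game code (the implementation) manipulates Point values. */
static Point move_right(Point p) {
    p.x += 1;
    return p;
}

int main(void) {
    Point player = { 0, 0 };
    player = move_right(player);
    printf("player is at (%d, %d)\n", player.x, player.y);
    return 0;
}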
I wrote code which has all the rules of Sudoku written into it (one occurrence of a digit per column, row, and box). The code takes an input (an unfilled sudoku grid) and returns a solution by translating logical clauses into DIMACS format and using a SAT solver.
Given that the algorithm respects rules, takes in data, and uses that data to form conclusions based on implications (e.g. if there is a 1 in the first cell, there cannot be a 1 in the second cell), is this code considered an "expert system"? Thank you.
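To show what that translation looks like, here is a small sketch using a hypothetical variable numbering (not necessarily the exact one my code uses): the proposition "digit d is in row r, column c" becomes a numbered DIMACS variable, and the implication becomes a binary clause.

#include <stdio.h>

/* Hypothetical numbering: "digit d (1..9) is placed at row r, column c
   (both 0..8)" gets the DIMACS variable number 81*r + 9*c + d. */
static int var(int r, int c, int d) {
    return 81 * r + 9 * c + d;
}

int main(void) {
    /* "If there is a 1 in the first cell, there cannot be a 1 in the
       second cell" becomes the clause NOT x(0,0,1) OR NOT x(0,1,1),
       written in DIMACS as two negated literals terminated by 0. */
    printf("-%d -%d 0\n", var(0, 0, 1), var(0, 1, 1));
    return 0;
}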
Whether a program is an expert system is subjective, but I'd say unless your program is encoding non-trivial knowledge acquired from a domain expert, it's not an expert system. If you can't teach another person to practically do what your program is doing, it's not an expert system.
By that definition, what you've done is probably not an expert system since it would be too time consuming for a person to use the same technique. I've written a sudoku solver using a production system (https://sourceforge.net/p/clipsrules/code/HEAD/tree/branches/63x/examples/sudoku/) that I would consider to be an expert system. The encoded knowledge was acquired from websites with advanced techniques for humans to use for solving sudoku puzzles. All of the encoded techniques can be practically used by humans for solving puzzles (although some of the more complex techniques push that boundary).
Although my sudoku solver can solve much more complicated puzzles than I could, calling it an expert system is not an indication of its sophistication. There are better approaches for solving extremely complex sudoku puzzles than emulating approaches humans might take.
In the '80s, I wrote a clone of the EMYCIN expert system engine. One important characteristic was the ability for the user to ask WHY the expert system reached some conclusion. The system could reply (in almost natural language) that it applied such and such rules to get to the conclusion.
With this kind of system, the knowledge is modeled and implemented (by a knowledge engineer) as an explicit set of rules. These rules are objects known to the engine. The engine can trigger the rules (forward or backward chaining, perhaps using metarules...) and can log the triggered rules and thus explain its conclusions.
(This is my sense of what expert systems are.)
I see the term 'construct' come up very often in programming readings. The current book I am reading, "Programming in C" by Stephen Kochan, has used it a few times throughout the book. One example is in the chapter on looping, which says:
"When developing programs, it sometimes becomes desirable to have the
test made at the end of the loop rather than at the beginning.
Naturally, the C language provides a special language construct to
handle such a situation. This looping statement is known as the do
statement."
In this case what does the term 'construct' mean, and does the word 'construct' have any relation to an object 'constructor' in other languages?
In this case you can replace the word construct with syntax.
Does the word 'construct' have any relation to an object 'constructor' in other languages?
No. The two terms are different; there is nothing like a constructor in C.
It's a generic term that normally refers to some particular syntax included in the language to perform some task (like a loop with the condition at the end). It has no relation at all to constructors. [1]
[1] Well, besides the fact that constructors are a particular language construct in many OO languages.
Does the word 'construct' have any relation to an object 'constructor' in other languages?
The sentence uses the noun meaning of the word "construct", not the verb:
construct (n) - something (such as an idea or a theory) that is formed in people's minds.
In this case, "construct" refers to an abstract way of describing something (namely, a loop) in terms of the syntax of that particular language. "Language construct" means "a way to do [something] with that language".
A construct is simply a concept implementation mechanism used by a given programming language - the language's syntax.
In your case, the concept here is a loop and its construct is the manner in which it is implemented by the C programming language.
Programming languages provide constructs for various programming concepts that define how these programming concepts are implemented in that language.
Does the word 'construct' have any relation to an object 'constructor' in other languages?
The two terms are different: a constructor is used in object-oriented languages such as Java; it is not available in the C programming language.
First of all, remember that C is not an object-oriented programming language; "constructor" is OOP terminology. Here, "construct" refers to syntax and predefined keywords, like the do and while in this case.
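As a concrete illustration of the construct being discussed, here is a small do statement, whose test is made at the end of the loop so the body always runs at least once:

#include <stdio.h>

int main(void) {
    int n = 0;
    /* The do statement: the test is made at the end of the loop,
       so the body executes at least once. */
    do {
        printf("n = %d\n", n);
        n++;
    } while (n < 3);
    return 0;
}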
The word “construct” as used in computer programming has very broad or general meaning. I hope these thoughts will help to explain what is meant by the word “construct” in programming.
A computer program is a list of instructions that the computer is able to (a) understand and (b) execute (perform or carry out). The simplest program would be a list of - let’s call them statements - that the computer would execute in sequence, one after the other, from the first to the last, and then end. But that would be incredibly limiting - so limiting in fact that I don’t think computers would ever have become much more than simple calculators. One of the fundamental differences between a simple calculator and a computer is that the statements do not have to be executed in sequence. The sequence can be interrupted by “special” instructions (statements) which can divert the flow of execution from one stream to a totally different stream which has a completely different agenda.
The first obvious way this is done is with methods (functions or procedures). When a method is called, the flow of execution is diverted from one stream of statements to a totally different stream of statements, often unrelated to the stream from which it came. If that concept is accepted, then I think that an instruction that calls a method could also be regarded as a “construct”.
Let’s divert this discussion for a moment to talk about “blocks” of code.
Programmers who work in languages like C, C++ or Java know that pairs of opening and closing braces (curly brackets) are used to identify blocks of code. And it’s blocks of code that divide a program up into different processes or procedures. A block of code that is headed by, say, a while() loop is just as valid as a method, in that it interrupts the otherwise unimpeded flow of execution through a program. The same applies to the many categories of operators. “new” will divert the flow of statement execution to a constructor method. So we have all these various syntactical expressions that have one thing in common - they divert the flow of execution which, left to its own devices, would happily proceed executing statements of code in sequence one after the other, without any interruption.
So I am suggesting that the word “construct” is a useful collective noun that embraces all of these different and diverse syntactical expressions, e.g. if(), for(), switch(), etc., that use the different categories of operators to perform functions that are defined in their respective blocks of code. Would love to hear other opinions.
In short, all built-in features of a programming language (such as arrays, loops, and if-else statements) are language constructs.
Member functions can be emulated in C by passing the this pointer explicitly. Virtual functions can be emulated by explicitly storing in every object a pointer to a global array of function pointers. Fine.
Now my question is, do people actually do this? I am wondering if it's worth teaching this technique, because I do not want to teach something to C freshmen that is practically never used in the real world.
(I need to fill the last day of a two-week introductory C course for people already familiar with OOP.)
Are there any relevant projects, libraries or frameworks that emulate OO in C in the manner described?
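To make the technique concrete, here is the kind of sketch I have in mind (the Shape/Circle names are made up for illustration): the this pointer is passed explicitly, and virtual dispatch goes through a shared, per-type table of function pointers.

#include <stdio.h>

struct Shape;   /* forward declaration so the vtable can mention it */

/* One global table of function pointers per concrete type ("vtable"). */
typedef struct {
    double (*area)(const struct Shape *self);
} ShapeVTable;

/* Base "class": every object stores a pointer to its type's vtable. */
typedef struct Shape {
    const ShapeVTable *vtable;
} Shape;

/* Derived "class": the base part comes first, then its own fields. */
typedef struct {
    Shape base;
    double radius;
} Circle;

/* Member function with the "this" pointer passed explicitly. */
static double circle_area(const Shape *self) {
    const Circle *c = (const Circle *)self;
    return 3.14159265358979 * c->radius * c->radius;
}

static const ShapeVTable circle_vtable = { circle_area };

/* A "virtual call" dispatches through the object's vtable. */
static double shape_area(const Shape *s) {
    return s->vtable->area(s);
}

int main(void) {
    Circle c = { { &circle_vtable }, 2.0 };
    printf("area = %f\n", shape_area(&c.base));
    return 0;
}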
I have about twenty years' experience in C. It was the first compiled language I learned and I've never needed to move on, so it's been C and only C, all the way. I write code constantly at work and at home. I have published a library of lock-free data structures. I think I'm a competent C programmer.
With regard to your question, OO consists of a number of concepts. One, for example, is instantiation, e.g. a library with new() and delete() functions and instances of a given entity (stack, list, etc.). C supports this and it is, of course, a very functional and useful approach. I've used this approach for about fifteen years.
Many years ago I began experimenting with another OO concept, well supported in C++: inheritance. I wanted an entity which contained other entities. The problem then is exposing the API of the contained entities. You can do it, but the fact is, the C language does not naturally express such a concept and approach. It is not something I now use.
My advice is: a knife is a knife, a fork is a fork. You can use either as the other, but it doesn't work well. C does not naturally support some (important) OO concepts, such as inheritance. Don't try to make C do these things. If you want to do this, use C++.
Yes, they do.
Are there any relevant projects, libraries or frameworks that emulate OO in C in the manner described?
I wouldn't call it "emulating" just because there's no first-class language support. See GObject.
A lot of projects use object-oriented paradigms in C codebases. For various reasons they don't use C++ directly. For system-level or performance-intensive projects, other languages don't cut it, so it's a battle between C++ and C.
Why people emulate OO in C instead of using full-blown C++ is a topic of heated argument. Linus Torvalds once famously stated that C++ compilers are not trustworthy; he has little faith in C++-generated code.
The Linux kernel is a good example of implementing OO design patterns in C. You can read about how the Linux kernel did it in this LWN.net article series:
part1
part2
There is an extensive free document available on the internet which covers implementing a full range of OO design patterns in C:
ooc.pdf
You can find many other projects along the same lines.
Examples:
pjsip
sofia
It may not be used in practice, but it is incredibly valuable to learn the concept of the equivalence between member functions and functions that take the object as the first parameter. Having this concept in the back of their head will help them in many problems they will encounter down the road.
Day in and day out I see people asking questions on Stack Overflow about why it doesn't work to pass a member function to something requiring a function pointer, and things like that. They think that member functions are just some magical functions that are part of an object, and over-complicate the whole situation. If they had realized that member functions are equivalent to functions that take the object as the first parameter, then the problem they're having (that to call the method they would somehow need both the member function pointer and the object), as well as the possible solutions (somehow pass the object in separately, or make some kind of closure that captures the object), would become apparent. Apparently, too many people just pretend that OO is "magic" and don't understand this.
In functional programming, we often teach people how data structures and local variables and all that stuff could be written purely in terms of manipulation of functions. Not that this is practical -- it would probably be inefficient -- but this impresses upon them something about the power of functions. And it helps them to understand things in a different way. And maybe down the road if they write a compiler or something, these equivalences will come in handy.
Computer science is all about equivalences and reductions, and how to think about one problem in terms of another. We reduce 3-SAT to subset sum, not because that's how we would actually solve the 3-SAT problem, but because this teaches us that subset sum is NP-complete.
Every once in a while, I come across a piece of code written by someone else, where non-instance methods take a pointer to a structure as an argument, and I see a pattern and a light bulb goes off in my head, and I say, ah-ha, this can be re-factored into an instance method, because I know about this equivalence. So you see, knowing these equivalences also helps us to write better, simpler code.
Check out TI's "DSP Algorithm Standard" / xDAIS framework.
There's a generic C API that every conforming DSP algorithm implementation implements (sorry for the tautology). The need for all this "art" stems from several issues common in the DSP world:
relatively small RAMs
multiple data channels (often parallel/concurrent)
complex algorithm usage patterns
something else I forget
The standard and framework aim at making it easier for DSP engineers to use 3rd party DSP algorithms.
There's an interface to configure an algorithm instance and query its memory requirements (based on the configuration) and there are support functions that actually manage the memory.
Some memory areas, scratchpads, can be allocated temporarily and given to an algorithm instance when it's active and taken away from it when it's inactive and given to another instance, effectively shared.
There's also functionality (and APIs) to move instance memory buffers to defragment memory.
There's more, but I'd need to reread the docs to recall the details.
See IALG_*() and ALG_*() interface methods for example.
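Very roughly, the pattern looks something like the sketch below. This is purely illustrative, with made-up names and signatures rather than the actual IALG/ALG interfaces; see the TI documents for the real thing.

#include <stddef.h>

/* Purely illustrative sketch of the general pattern, NOT the real
   IALG_*() / ALG_*() signatures - see the TI documents for those. */
typedef struct {
    size_t size;        /* bytes requested                                 */
    int    is_scratch;  /* scratch buffers may be shared between instances */
} MemRequest;

typedef struct {
    /* Report the buffers an instance will need for a given configuration. */
    int  (*query_memory)(const void *config, MemRequest *reqs, int max_reqs);
    /* Initialize an instance inside the buffers the framework provides. */
    int  (*init)(void *instance, const void *config, void **buffers);
    /* Process one frame of samples. */
    void (*process)(void *instance, const short *in, short *out, int n);
} AlgInterface;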
Also, there are tools to validate implementations of the generic APIs. 3rd parties can request official validation of them from TI.
Some relevant links: spru352g.pdf, spru360e.pdf.
In C-type languages, there is a strong emphasis on structs/records and objects from the very beginning and in every introductory book, and complete systems are then designed around managing such structs, their mutual relations, and inheritance.
In Lisp documentation, you can usually find one or two pages about how Lisp "also" has a defstruct, a simple example, and that's usually it. Nesting of structures is never mentioned at all.
For someone coming from a C background, it first seems that organizing different data types hierarchically isn't the preferred method in Lisp; but apart from CLOS, which is a full-blown object system and too complicated if you just want structs, and apart from cramming everything into lists, there isn't an apparent way to transfer your C struct knowledge.
What is the idiomatic Lisp way of hierarchically organizing data which most resembles C structs?
--
I think the summary answer to my question would be: for beginner learning purposes, defstruct and/or plists, although "legacy features", can be used, since they most closely resemble C structs; but they have been largely superseded by the more flexible defclass/CLOS, which is what most Lisp programs use today.
This was my first question on SO, so thanks everyone for your time answering it.
Use CLOS. It isn't complicated.
Otherwise use structures.
If you have a specific question how to use them, then just ask.
(defclass point ()
((x :type number)
(y :type number)))
(defclass rectangle ()
((p1 :type point)
(p2 :type point)
(color :type color)))
Stuff like that eventually leads to interfaces like Rectangles in CLIM (the Common Lisp Interface Manager).
History
To expand on it a bit: Historically 'structures' have been used in some low-level situations. Structures have single inheritance and slot access is 'fast'. Some Lisp dialects have more to structures than what Common Lisp offers. Then from the mid-70s on various forms of object-oriented representations have been developed for Lisp. Most of the representation of structured objects moved from structures to some kind of object-oriented Lisp extension. Popular during the 80s were class-based systems like Flavors, LOOPS and others. Frame-based or prototype-based systems like KEE Units or Object Lisp were also popular. The first Macintosh Common Lisp used Object Lisp for all its UI and IO facilities. The MIT Lisp machine used Flavors basically everywhere. Starting in the mid 80s ANSI CL was developed. A common OO-system was developed especially for Common Lisp: CLOS. It was based on Flavors and Loops. During that time mostly nothing was done to really improve structures - besides implementors finding ways to improve the implementation and providing a shallow CLOS integration. For example structures don't provide any packing of data. If there are two slots of 4 bits content, there is no way to instruct Common Lisp to encode both slots into a single 8 bit memory region.
As an example you can see in the Lisp Machine Manual, chapter on structures (PDF), that it had much more complex structures than what Common Lisp provides. Some of that was already present in Maclisp in the 70s: DEFSTRUCT in the Maclisp manual.
CLOS, the Common Lisp Object System
Most people would agree that CLOS is a nice design. It sometimes leads to 'larger' code, mostly because identifiers can get long. But there is some CLOS code, like the one in the AMOP book, that is really nicely written and shows how it is supposed to be used.
Over time implementors had to deal with the challenge that developers wanted to use CLOS, but also wanted to have the 'speed' of structures. This is even more of a challenge with the 'full' CLOS, which includes the almost-standard Meta Object Protocol (MOP) for CLOS. So there are some tricks that implementors provide. During the 80s some software used a switch, so it could be compiled using either structures or CLOS - CLX (the low-level Common Lisp X11 interface) was an example. The reason: on some computers and implementations CLOS was much slower than structures. Today it would be unusual to provide such a compilation switch.
If I look today at a good Common Lisp implementation, I would expect that it uses CLOS almost everywhere. STREAMs are CLOS classes. CONDITIONs are CLOS classes. The GUI toolkit uses CLOS classes. The editor uses CLOS. It might even integrate foreign classes (say, Objective C classes) into CLOS.
In any non-toy Common Lisp implementation CLOS will be the tool to provide structured data, generic behavior and a bunch of other things.
As mentioned in some of the other answers, in some places CLOS might not be needed.
Common Lisp can return more than one value from a function:
(defun calculate-coordinates (ship)
(move-forward ship)
(values (ship-x ship)
(ship-y ship)))
One can store data in closures:
(defun create-distance-function (ship x y)
(lambda ()
(point-distance (ship-x ship) (ship-y ship) x y)))
For configuration one can use some kind of lists:
(defship ms-germany :initial-x 0 :initial-y 0)
You can bet that I would implement the ship model in CLOS.
A lesson from writing and maintaining CLOS software is that it needs to be carefully designed and CLOS is so powerful that one can create really complex software with it - a complexity which is often not a good idea. Refactor and simplify! Fortunately, for many tasks basic CLOS facilities are sufficient: DEFCLASS, DEFMETHOD and MAKE-INSTANCE.
Pointers to CLOS introductions
For a start, Richard P. Gabriel has his CLOS papers for download.
Also see:
http://cl-cookbook.sourceforge.net/clos-tutorial/index.html
A Brief Guide to CLOS
Book chapter from Practical Common Lisp, Object Reorientation, Classes
Book chapter from Practical Common Lisp, Object Reorientation, Generic Functions
C++ Coder’s Newbie Guide to Lisp-style OO
Book: The Art of the Metaobject Protocol. According to some guy named Alan Kay the most important computer science book in a decade, unfortunately written for Lispers ;-). The book explains how to modify or extend CLOS itself. It also includes a simple CLOS implementation as source. For normal users this book is not really needed, but the programming style is that of real Lisp experts.
Examples with defstruct are short and simple because there isn't much to say about them. C's structs are complicated:
memory management
complicated memory layout due to unions, inline nested structures
In C, structs are also used for other purposes:
to access memory
due to the lack of polymorphism or ability to pass a value of 'any' type: it is idiomatic to pass around a void*
due to the inability to pass data by other means; e.g., in Lisp you can pass a closure which carries the needed data (see the sketch after this list)
due to the lack of advanced calling conventions; some functions accept their arguments inside structures
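As a rough, made-up illustration of the void* and closure points above: a struct often exists only to bundle the data a callback needs, smuggled through a void* because C has no closures.

#include <stdio.h>

/* A struct bundling the data a callback needs, passed through a void*. */
typedef struct {
    const char *label;
    int         threshold;
} FilterContext;

static void visit(int value, void *user_data) {
    FilterContext *ctx = (FilterContext *)user_data;
    if (value > ctx->threshold)
        printf("%s: %d\n", ctx->label, value);
}

static void for_each(const int *values, int n,
                     void (*fn)(int, void *), void *user_data) {
    int i;
    for (i = 0; i < n; i++)
        fn(values[i], user_data);
}

int main(void) {
    int data[] = { 1, 5, 9 };
    FilterContext ctx = { "big", 4 };
    for_each(data, 3, visit, &ctx);
    return 0;
}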
In Common Lisp, defstruct is roughly equivalent to Java's/C#'s class: single inheritance, fixed slots, can be used as specifiers in defmethods (analogous to virtual methods). Structures are perfectly usable for nested data structures.
Lisp programs tend not to use deeply nested structures (Lisp source code itself being the primary exception) because simpler representations are often possible.
Property Lists (plists)
I think the idiomatic equivalent of a C struct is to not need to store the data in structs in the first place. For at least 50% of the C-style code I've ported to Lisp, rather than storing the data in some elaborate structure, I just compute whatever it is I want to compute. C needs structs to store everything temporarily because its expressions are so weak.
If you have a specific example of some C-style code, I'm sure we could demonstrate an idiomatic way to implement it in Lisp.
Beyond that, remember that Lisp's s-exps are hierarchical data. An if expression in Lisp, for example, is itself hierarchical data.
(defclass point ()
(x y z))
(defun distance-from-origin (point)
(with-slots (x y z)
point
(sqrt (+ (* x x) (* y y) (* z z)))))
I think this is kind of what I was looking for; it can be found here:
http://cl-cookbook.sourceforge.net/clos-tutorial/index.html
Why not use hash tables? Every member in the struct can be a key in a hash table.
If a language has control structures and variables, but no support for arrays, lists, memory access and allocation, etc, can it be Turing-complete?
Maybe if there were no limit to the number of variables you can create, you could simulate arrays by creating variables like array_1, array_2, ..., array_6000, manually loop through them, and somehow create complex data structures and recursion?
Edit: Even if you cannot access variables by name manipulation (array_10+i is not allowed)?
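For example, a rough sketch of that idea in C (purely illustrative): with no arrays and no computed names, selecting a cell by a runtime index has to be spelled out as explicit branching over the fixed set of variables.

#include <stdio.h>

/* Three separate variables stand in for "array" cells; "indexing" by a
   runtime value is spelled out as explicit branching, since computed
   names like array_(10+i) are not allowed. */
int cell_0 = 10, cell_1 = 20, cell_2 = 30;

int get(int i) {
    if (i == 0) return cell_0;
    if (i == 1) return cell_1;
    return cell_2;
}

int main(void) {
    int i;
    for (i = 0; i < 3; i++)          /* manually "looping through" them */
        printf("%d\n", get(i));
    return 0;
}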
Certainly. Have a look at Lambda Calculus, which is one of the most minimal Turing Complete languages I've ever seen. Basically, all you have are lambdas (function literals); no assignment, no declaration, no data structures. It's all very very slimmed-down.
You can, however, simulate a linear data structure like a List by chaining functions together. It gets pretty verbose, but it's certainly possible and it's much nicer than having a large series of sequentially named variables.
Generally speaking, whether or not a language is Turing Complete has nothing to do with whether it has Arrays. Functional languages like SML and Haskell lack arrays, just like Lambda Calculus, and these are actually useful languages! Saying a language is "Turing Complete" is merely another way of saying that there is no Turing Computable function which cannot be expressed in said language. This is a surprisingly loose qualification, allowing many languages which would be completely impractical (like Lambda Calculus).
There are plenty of Turing-complete languages that don't even have the notion of a "variable"! Memory access and allocation are implementation details, so they're completely irrelevant. You have to realize that Turing machines and Turing completeness are very theoretical concepts, useful for proving things, but completely divorced from the reality of actual hardware.
Paul Graham has written a long, but very, very interesting essay on the history of computer languages where he describes the two very different main traditions of computer languages:
Lisp, Scheme, etc. - derived from theoretical considerations, very simple, yet conceptually powerful languages, but for the longest time impractical because of their complete disregard for what's easy and efficient to implement
Assembler, FORTRAN, C and pretty much all "mainstream" languages - derived more or less directly from what the hardware could do, easy to implement, efficient, but for the longest time inferior to the (older!) Lisp family in terms of expressiveness.
It sounds like you know only the second tradition, but Turing completeness is a concept that originates from the same principles as the first tradition and makes little sense if you don't know those principles.