How to compile picoProlog from source code?

I am a student in Computer Science, and I am learning about logic programming with Prolog.
I have found an interesting Prolog interpreter, picoProlog (http://spivey.oriel.ox.ac.uk/corner/Logic_Programming).
To learn more about Prolog, I am trying to compile its source code, but I have failed.
On that web page, the author says:
The interpreter source is written in a minimal dialect of Pascal, avoiding many features including pointers, but using macros to overcome some of Pascal's limitations, in a style inspired by Kernighan and Plauger's book Software tools in Pascal. It comes with a translator from the Pascal dialect into C that can be used to build the interpreter and also source for the macro processor that is needed.
To build the interpreter on a Linux machine, just extract the tar file and type make. The building happens in several stages:
First, the Pascal-to-C translator ptc is built from C source, including a lexer and parser written with lex and yacc. The file README gives some details of the very restricted Pascal subset accepted by this translator.
Next, ptc is used to build the macro processor ppp.
Finally, the picoProlog interpreter is built from the source code in the file pprolog.x by first expanding macros using ppp to obtain a file pprolog.p, then translating to C with ptc, and lastly compiling the C code.
Text and software copyright © J. M. Spivey, 1996, 2002, 2010.
The instructions cover compiling on Linux only, so I don't know how to build this source code on a Windows machine. Can I compile it with Turbo Pascal 7.0 (without any other requirements) on Windows XP? Can the build be reduced to the Pascal compilation steps only?

I found this question while googling, and though it's old, I thought it would be helpful to add a definitive answer from the author of the program.
It is indeed not too hard to get picoProlog to compile with the Free Pascal Compiler. I've incorporated Marco's suggestions into the source, fixed a small bug that was revealed, and added a workaround for an odd feature of Free Pascal. The results can be found on the GitHub page:
https://github.com/Spivoxity/pprolog
with instructions for building in the README.
Note: I built this with Free Pascal under Linux on x86_64, but haven't tested it on Windows. I can't see a reason why it wouldn't work.
Edit 18 Oct 2022 -- Replaced BitBucket with GitHub.

Since you are probably interested only in the interpreter and not its *nix bootstrapping, rather than spending more time getting the P2C/PTC bootstrapping to run, I think it is easier to forget the PTC stuff and focus on getting the Pascal parts to compile and work with FPC 2.6.x. The steps below took about 10 minutes and produce a standalone Windows EXE with 10-20 added lines of code.
Start with ppp; hmm, that compiles (and works!) out of the box:
D:\dls\prlg\pprolog>fpc ppp.p
Free Pascal Compiler version 2.6.2 [2013/02/12] for i386
Copyright (c) 1993-2012 by Florian Klaempfl and others
Target OS: Win32 for i386
Compiling ppp.p
Linking ppp.exe
394 lines compiled, 0.1 sec , 30352 bytes code, 1692 bytes data
The code does look like it is meant to have its input piped in. We haul pprolog.x through ppp, and the result (pprolog.pp) almost compiles. There are four problems, but all are fixable by adding some code at the top without changing the original code (changes are marked with MVDV: in the source):
Some range check errors occur because the integer type is too small for the 1 MB stack space that is set up. This rules out Turbo Pascal, but we can work around it by defining integer as longint.
It seems the code assumes that forward-declared functions don't need their arguments repeated, while in FPC they generally do. Fixed.
In the final function ("initialize"), some non-standard ptc library functions borrowed from C (argv, argc) are used instead of their typical Pascal equivalents. Fixed.
(Reported by the original author after testing) ParseFactor has a right-hand-side recursion that is instead interpreted as reading the function result. Enable TP mode ({$mode tp} above the uses line), or add () to disambiguate.
After these fixes, pprolog.pp compiles with FPC:
Free Pascal Compiler version 2.6.2 [2013/02/12] for i386
Copyright (c) 1993-2012 by Florian Klaempfl and others
Target OS: Win32 for i386
Compiling pprolog.pp
pprolog.pp(487,19) Warning: unreachable code
pprolog.pp(532,19) Note: Local variable "dummy" is assigned but never used
Linking pprolog.exe
2150 lines compiled, 0.1 sec , 84400 bytes code, 13932 bytes data
1 warning(s) issued
1 note(s) issued
Some notes:
UNTESTED
I don't know if I got the range of argv/argc exactly right (C uses 0..argc-1, while ParamStr is 1-based, etc.). Check if necessary; see the sketch after these notes.
The string system predates the TP String type and is convoluted (probably because of PTC, see README); I don't know if it will work.
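For reference on that argv/argc note, this is the C convention the ptc-style functions mimic (a minimal sketch, nothing picoProlog-specific; the usual mapping is ParamStr(0) ~ argv[0] and ParamStr(1)..ParamStr(ParamCount) ~ argv[1]..argv[argc-1]):
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* argc counts argv[0] (the program name), so user arguments
       run from argv[1] to argv[argc-1]. */
    int i;
    for (i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}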
I've put the resulting, compiling source code at http://www.stack.nl/~marcov/files/pprolog.pp
Good luck!

Given how many different variations of Pascal have existed, my gut feeling is that it's easier to get hold of a Linux environment than to adjust the Pascal source code to fit the compiler you have. And this is only the first step.
Getting a Linux environment? Try VirtualBox - https://www.virtualbox.org

Related

How is the type sf_count_t in sndfile.h defined in libsndfile?

I am trying to work with Nyquist (a music programming platform, see: https://www.cs.cmu.edu/~music/nyquist/ or https://www.audacityteam.org/about/nyquist/) as a standalone program and it utilizes libsndfile (a library for reading and writing sound, see: http://www.mega-nerd.com/libsndfile/). I am doing this on an i686 GNU/Linux machine (Gentoo).
After successful set up and launching the program without errors, I tried to generate sound via one of the examples, "(play (osc 60))", and was met with this error:
*** Fatal error : sizeof (off_t) != sizeof (sf_count_t)
*** This means that libsndfile was not configured correctly.
Investigating this further (and emailing the author) has proved somewhat helpful, but the solution is still far from my grasp. The author recommended looking at /usr/include/sndfile.h to see how sf_count_t is defined, and (this portion of) my file is identical to his:
/* The following typedef is system specific and is defined when libsndfile is
** compiled. sf_count_t will be a 64 bit value when the underlying OS allows
** 64 bit file offsets.
** On windows, we need to allow the same header file to be compiled by both GCC
** and the Microsoft compiler.
*/
#if (defined (_MSCVER) || defined (_MSC_VER))
typedef __int64 sf_count_t ;
#define SF_COUNT_MAX 0x7fffffffffffffffi64
#else
typedef int64_t sf_count_t ;
#define SF_COUNT_MAX 0x7FFFFFFFFFFFFFFFLL
#endif
The author notes that in the above there is no option for a 32-bit offset. I'm not sure how I would proceed. Here is the particular file the author of Nyquist recommended I investigate: https://github.com/erikd/libsndfile/blob/master/src/sndfile.h.in , and here is the entire source tree: https://github.com/erikd/libsndfile
Here are some relevant snippets from the author's email reply:
"I'm guessing sf_count_t must be showing up as 32-bit and you want
libsndfile to use 64-bit file offsets. I use nyquist/nylsf which is a
local copy of libsndfile sources -- it's more work keeping them up to
date (and so they probably aren't) but it's a lot easier to build and
test when you have a consistent library."
"I use CMake and nyquist/CMakeLists.txt to build nyquist."
"It may be that one 32-bit machines, the default sf_count_t is 32
bits, but I don't think Nyquist supports this option."
And here is the source code for Nyquist: http://svn.code.sf.net/p/nyquist/code/trunk/nyquist/
This problem is difficult for me to solve because it involves a niche use case of relatively obscure software. This also makes the support outlook for the problem a bit worrisome. I know a little C++, but I am far from confident in my ability to solve this. Thanks for reading and happy holidays to all. If you have any suggestions, even in terms of formatting or editing, please do not hesitate!
If you look at the sources for the bundled libsndfile in nyquist, i.e. nylsf, then you see that sndfile.h is provided directly. It defines sf_count_t as a 64-bit integer.
The libsndfile sources, however, do not have this file; rather, they have a sndfile.h.in. This is an input file for autoconf, a tool that generates the proper header file from this template. It currently has the following definition of sf_count_t for Linux systems (and has had it for a while):
typedef #TYPEOF_SF_COUNT_T# sf_count_t ;
The #TYPEOF_SF_COUNT_T# would be replaced by autoconf to generate a header with a working type for sf_count_t on the system being built for. The header file provided by nyquist is therefore already configured (presumably for the author's system).
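For example, on a typical 64-bit Linux build the configured header would simply contain (matching the #else branch of the header quoted earlier):
typedef int64_t sf_count_t ;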
off_t is a type specified by the POSIX standard and defined in the system's libc. On a 32-bit system using the GNU C library, its size is 32 bits.
This causes the sanity check in question to fail, because the sizes of sf_count_t and off_t don't match. The error message is also correct: we are using a wrongly configured sndfile.h for the build.
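You can reproduce the comparison outside of nyquist with a few lines of standalone C (a minimal sketch; int64_t stands in for the sf_count_t that nylsf's pre-configured header defines):
#include <stdio.h>
#include <stdint.h>     /* int64_t, what nylsf's sndfile.h uses for sf_count_t */
#include <sys/types.h>  /* off_t */

int main(void)
{
    /* On a 32-bit glibc system without large-file support this prints
       4 and 8, which is exactly the mismatch the sanity check rejects. */
    printf("sizeof(off_t)   = %zu\n", sizeof(off_t));
    printf("sizeof(int64_t) = %zu\n", sizeof(int64_t));
    return 0;
}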
As I see it you have the following options:
Ask the nyquist author to provide the unconfigured sndfile.h.in and to use autoconf to configure this file at build time.
Do not use the bundled libsndfile and link against the system's one. (This requires some knowledge and work to change the build scripts and header files, maybe additional unexpected issues)
If you are using the GNU C library (glibc): The preprocessor macro _FILE_OFFSET_BITS can be set to 64 to force the size of off_t and the rest of the file interface to use the 64bit versions even on 32bit systems.
This may or may not work, depending on whether your system supports it, and it is not a clean solution, as there may be additional misconfigurations of libsndfile going unnoticed. This flag could also introduce other interface changes that the code relies on, causing further build or runtime errors/vulnerabilities.
Nonetheless, I think the syntax for cmake would be to add:
add_compile_definitions(_FILE_OFFSET_BITS=64)
or depending on cmake version:
add_definitions(-D_FILE_OFFSET_BITS=64)
in the appropriate CMakeLists.txt.
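The effect of that macro can be checked in isolation with a hedged, glibc-specific sketch; the define must appear before any system header is included:
/* Force the 64-bit file interface on glibc, even on 32-bit systems. */
#define _FILE_OFFSET_BITS 64

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* With the macro honored, off_t is 64-bit and this prints 8. */
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    return 0;
}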
Actually, the README in nyquist/nylsf explains how the files for it were generated. You may try to obtain the source code of the same libsndfile version it is based on and repeat the steps given, to produce an nylsf configured for your system. This may cause fewer problems than options 2 and 3, because no version or interface changes would be introduced.

RV32E version of the soft-float methods such as __divdi3 and __mulsi3

I have managed to build an RV32E cross-compiler on my Intel Ubuntu machine by using the official riscv GitHub toolchain (github.com/riscv/riscv-gnu-toolchain) with the following configuration:-
./configure --prefix=/home/riscv --with-arch=rv32i --with-abi=ilp32e
The ilp32e specifies soft float for RV32E. This generates a compiler that works fine on my simple C source code. If I disassemble the resulting application, it does indeed stick to the RV32E specification: it only generates assembly for my code that uses the first 16 registers.
I use static linking, and it pulls in the expected set of soft-float routines such as __divdi3 and __mulsi3. Unfortunately, the pulled-in routines use all 32 registers, not just the lower 16 that RV32E restricts itself to. Hence, not very useful!
I cannot find where this statically linked code comes from. Is it compiled from C source and therefore compiled without the RV32E restriction? Or was it written as hand-coded assembly targeting the full RV32I rather than RV32E? I tried to grep around the source but have had no luck finding anything like the actual code that is statically linked.
Any ideas?
EDIT: Just checked in more detail, and the compiler is not restricting itself to the first 16 registers either. It turns out a simple test routine happens to use only the first 16, but more complex code uses the others as well. Maybe RV32E is not implemented yet?
The configure.ac file contains this code:
AS_IF([test "x$with_abi" == xdefault],
[AS_CASE([$with_arch],
[*rv64g* | *rv64*d*], [with_abi=lp64d],
[*rv64*f*], [with_abi=lp64f],
[*rv64*], [with_abi=lp64],
[*rv32g* | *rv32*d*], [with_abi=ilp32d],
[*rv32*f*], [with_abi=ilp32f],
[*rv32*], [with_abi=ilp32],
[AC_MSG_ERROR([Unknown arch])]
)])
This seems to map your input of rv32i to the ABI ilp32, ignoring the e. So yes, it seems support for the ...e ABIs is not fully implemented yet.
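As for where the statically linked routines come from: they live in libgcc, which is built alongside the cross-compiler (generic C implementations in libgcc2.c, plus RISC-V-specific assembly under libgcc/config/riscv/ in the GCC tree, if memory serves). A minimal C sketch of the kind of code that pulls them in (function names are just for illustration):
/* On rv32 targets gcc lowers these operations to libgcc calls. */

long long sdiv64(long long a, long long b)
{
    return a / b;      /* emits a call to __divdi3 */
}

int mul32(int a, int b)
{
    return a * b;      /* without the M extension: a call to __mulsi3 */
}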

Mingw compiling error on Linux with a program made on Windows

I've recently migrated from Windows 7 to Linux (Ubuntu 14.04) and want to compile a C program that I made. The program worked perfectly under Code::Blocks 12.11 using the GNU GCC compiler's basic settings. When compiling on Linux under Code::Blocks 13.12 with the GNU GCC compiler's basic settings, I get the following error messages:
undefined reference to __mingw_vprintf
undefined reference to __chstk.ms
undefined reference to _fopen
... and so on with fscanf, malloc, etc...
I'm new to Linux and I am not used to C coding, or even programming in general. Does someone have an idea about what's going on?
You have three separate problems going on here.
(1) For _fopen: Microsoft has a nasty habit of renaming all the POSIX functions so that they start with an underscore, while your Linux distribution is looking for the standard POSIX name, i.e. fopen. Welcome to the wonderfully frustrating world of cross-platform development :). One solution would be to add something along these lines:
#ifdef _WIN32
#define fopen _fopen
#endif
This in effect says: if compiling on a Windows machine (which typically has _WIN32 defined as a preprocessor macro; if it does not, you can always make sure that it does), replace every occurrence of fopen with _fopen. The preprocessor will do this for you.
(2) For __mingw_vprintf: I've never seen this function, but from the name I would surmise that it is an implementation of vprintf specific to MinGW. I personally would rewrite my code to stick with the standard C function vprintf. You can read the manual page for vprintf here; the MSDN information can be found here. Again, notice that many of the Microsoft-provided functions have an underscore prepended to the name. You can do something like what you did in case (1) above.
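For illustration, here is a portable variadic wrapper that uses only standard C (the function name log_msg is just an example, not from the OP's code):
#include <stdarg.h>
#include <stdio.h>

/* Forwards a printf-style format string and arguments to vprintf.
   Only ISO C facilities are used, so it links the same way with
   mingw on Windows and with glibc on Linux. */
void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vprintf(fmt, ap);
    va_end(ap);
}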
N.B. Actually, if I were to rewrite the program I would use C++ IO-streams, but I am sticking to a pure C answer.
(3) For __chstk.ms: again, I've never seen this function. My suspicion is that it is something inserted into your code to perform stack checking, to help prevent stack-based exploits. To the best of my knowledge there is no way you are going to get that to work on a Linux machine.

Bootstrapping a compiler [duplicate]

I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either
writing an initial compiler in a different language.
hand-coding an initial compiler in Assembly, which seems like a special case of the first
To me, neither of these seem to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language?
Is there a way to actually write a compiler in its own language?
You have to have some existing language to write your new compiler in. If you were writing a new, say, C++ compiler, you would just write it in C++ and compile it with an existing compiler first. On the other hand, if you were creating a compiler for a new language, let's call it Yazzleof, you would need to write the new compiler in another language first. Generally, this would be another programming language, but it doesn't have to be. It can be assembly, or if necessary, machine code.
If you were going to bootstrap a compiler for Yazzleof, you generally wouldn't write a compiler for the full language initially. Instead you would write a compiler for Yazzle-lite, the smallest possible subset of the Yazzleof (well, a pretty small subset at least). Then in Yazzle-lite, you would write a compiler for the full language. (Obviously this can occur iteratively instead of in one jump.) Because Yazzle-lite is a proper subset of Yazzleof, you now have a compiler which can compile itself.
There is a really good writeup about bootstrapping a compiler from the lowest possible level (which on a modern machine is basically a hex editor), titled Bootstrapping a simple compiler from nothing. It can be found at https://web.archive.org/web/20061108010907/http://www.rano.org/bcompiler.html.
The explanation you've read is correct. There's a discussion of this in Compilers: Principles, Techniques, and Tools (the Dragon Book):
Write a compiler C1 for language X in language Y
Use the compiler C1 to write compiler C2 for language X in language X
Now C2 is a fully self-hosting environment.
The way I've heard of is to write an extremely limited compiler in another language, then use that to compile a more complicated version, written in the new language. This second version can then be used to compile itself, and the next version. Each time it is compiled the last version is used.
This is the definition of bootstrapping:
the process of a simple system activating a more complicated system that serves the same purpose.
EDIT: The Wikipedia article on compiler bootstrapping covers the concept better than me.
A super interesting discussion of this is in Unix co-creator Ken Thompson's Turing Award lecture.
He starts off with:
What I am about to describe is one of many "chicken and egg" problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler.
and proceeds to show how he wrote a version of the Unix C compiler that would always allow him to log in without a password, because the C compiler would recognize the login program and add in special code.
The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere.
Check out podcast Software Engineering Radio episode 61 (2007-07-06) which discusses GCC compiler internals, as well as the GCC bootstrapping process.
Donald E. Knuth actually built WEB by writing the compiler in it, and then hand-compiled it to assembly or machine code.
As I understand it, the first Lisp interpreter was bootstrapped by hand-compiling the constructor functions and the token reader. The rest of the interpreter was then read in from source.
You can check for yourself by reading the original McCarthy paper, Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I.
Every example of bootstrapping a language I can think of (C, PyPy) was done after there was a working compiler. You have to start somewhere, and reimplementing a language in itself requires writing a compiler in another language first.
How else would it work? I don't think it's even conceptually possible to do otherwise.
Another alternative is to create a bytecode machine for your language (or use an existing one if its features aren't very unusual) and write a compiler to bytecode, either in the bytecode itself, or in your desired language using another intermediate, such as a parser toolkit which outputs the AST as XML, then compiling the XML to bytecode using XSLT (or another pattern-matching language and tree-based representation). It doesn't remove the dependency on another language, but it could mean that more of the bootstrapping work ends up in the final system.
It's the computer science version of the chicken-and-egg paradox. I can't think of a way not to write the initial compiler in assembler or some other language. If it could have been done, I suspect Lisp would have been the language to do it.
Actually, I think Lisp almost qualifies. Check out its Wikipedia entry. According to the article, the Lisp eval function could be implemented on an IBM 704 in machine code, with a complete compiler (written in Lisp itself) coming into being in 1962 at MIT.
Some bootstrapped compilers or systems keep both the source form and the object form in their repository:
ocaml is a language which has both a bytecode interpreter (i.e. a compiler to OCaml bytecode) and a native compiler (to x86-64 or ARM, etc. assembler). Its svn repository contains both the source code (files */*.{ml,mli}) and the bytecode (file boot/ocamlc) form of the compiler. So when you build it, it first uses its bytecode (of a previous version of the compiler) to compile itself. Later, the freshly compiled bytecode is able to compile the native compiler. So the OCaml svn repository contains both *.ml[i] source files and the boot/ocamlc bytecode file.
The rust compiler downloads (using wget, so you need a working Internet connection) a previous version of its binary to compile itself.
MELT is a Lisp-like language to customize and extend GCC. It is translated to C++ code by a bootstrapped translator. The generated C++ code of the translator is distributed, so the svn repository contains both *.melt source files and melt/generated/*.cc "object" files of the translator.
J. Pitrat's CAIA artificial intelligence system is entirely self-generating. It is available as a collection of thousands of generated [A-Z]*.c files (along with a generated dx.h header file) and a collection of thousands of _[0-9]* data files.
Several Scheme compilers are also bootstrapped. Scheme48, Chicken Scheme, ...

Delphi dcu to obj

Is there a way to convert a Delphi .dcu file to an .obj file so that it can be linked using a compiler like GCC? I've not used Delphi for a couple of years but would like to use it for a project again if this is possible.
Delphi can output .obj files, but they are in a 32-bit variant of Intel OMF. GCC, on the other hand, works with ELF (Linux, most Unixes), COFF (on Windows) or Mach-O (Mac).
But that alone is not enough. It's hard to write much code without using the runtime library, and the implementation of the runtime library will be dependent on low-level details of the compiler and linker architecture, for things like correct order of initialization.
Moreover, there's more to compatibility than just the object file format; code on Linux, in particular, needs to be position-independent, which means it can't use absolute values to reference global symbols, but rather must index all its global data from a register or relative to the instruction pointer, so that the code can be relocated in memory without rewriting references.
DCU files are a serialization of the Delphi symbol tables and code generated for each proc, and are thus highly dependent on the implementation details of the compiler, which changes from one version to the next.
All this is to say that it's unlikely that you'd be able to get much Delphi (dcc32) code linking into a GNU environment, unless you restricted yourself to the absolute minimum of non-managed data types (no strings, no interfaces) and procedural code (no classes, no initialization section, no data that needs initialization, etc.)
(An answer to various FPC remarks; I needed more room than a comment allows.)
For a good understanding, you have to know that a Delphi .dcu translates to two different FPC files: a .ppu file with the mentioned symtable stuff, which includes non-linkable code like inline functions and generic definitions, and a .o which is mingw-compatible (COFF) on Windows. Cygwin is mingw-compatible too at the linking level (but the runtime is different and scary). Anyway, mingw32/64 is our reference gcc on Windows.
The PPU has a similar version problem as Delphi's DCU, probably for the same reasons. The ppu format is different in nearly every major release (so 2.0, 2.2, 2.4), and it typically changes 2-3 times a year in trunk.
So while FPC on Windows uses its own assemblers and linkers, the .o files it generates are still compatible with mingw32. In general, FPC's output is very gcc-compatible, and it is often possible to link in gcc static libs directly, allowing e.g. mysql and postgres linklibs to be linked into apps with a suitable license (like e.g. GPL). On 64-bit they should be compatible too, but this is probably less tested than win32.
The textmode IDE even links in the entire GDB debugger in library form. GDB is one of the main reasons for gcc compatibility on Windows.
While Barry's points about the runtime in general hold for FPC too, it might be slightly easier to work around them. It might only require calling certain functions to initialize the FPC RTL from your startup code, and similarly for finalization. Compile a minimal FPC program with -al and look at the resulting assembler (in the .s file; most notably initializeunits and finalizeunits). Moreover, the RTL is more flexible and probably more easily cut down to a minimum.
Of course, as soon as you also require exceptions to work across gcc<->fpc boundaries, you are out of luck. FPC does not use SEH, or any scheme compatible with anything else at the moment. (Contrary to Delphi, which uses SEH, which at least in theory should give you an advantage there, Barry?) OTOH, gcc might use its own libunwind instead of SEH.
Note that the default calling convention of FPC on x86 is the Delphi-compatible register convention, so you might need to insert proper cdecl modifiers (which should be gcc-compatible), or you can even set the convention for entire units at a time using {$calling cdecl}
On *nix this is bog standard (e.g. apache modules); I don't know many people that do this on win32, though.
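To make the calling-convention point concrete, here is a hedged sketch of the C side of such a boundary (the routine name pp_add and its Pascal declaration are hypothetical, and the FPC side may additionally need a public/alias name for the symbol):
/* C side, compiled with gcc.  Assumes the FPC side declared
   something like: function pp_add(a, b: longint): longint; cdecl;
   With cdecl on the Pascal side, a plain extern declaration works. */
extern int pp_add(int a, int b);

int main(void)
{
    return pp_add(2, 3);   /* calls into the FPC-compiled object file */
}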
About compatibility: FPC can compile packages like Indy, Teechart, Zeos, ICS, Synapse, VST, and reams more with little or no modifications. The dialect levels of released versions are a mix of D7 and up, with the focus on D7. The dialect level is slowly creeping to the D2006 level in trunk versions (with for-in, class abstract, etc.).
Yes. Have a look at the Project Options dialog box.
As far as I am aware, Delphi only supports the OMF object file format. You may want to try an object format converter such as Agner Fog's.
Since the DCU format is proprietary and has a tendency of changing from one version of Delphi to the next, there's probably no reliable way to convert a DCU to an OBJ. Your best bet is to build them in OBJ format in the first place, as per Andreas's answer.
