Using make in Linux - C

What exactly does the "make" command in Linux do? If I have a makefile, does that correspond with it? When I execute my code, I type make at the command line and it runs, but I don't really know what it actually does. If you could explain what's going on, I'd be more familiar with it in the future. Thanks!

When using unfamiliar commands in Unix, you can usually type man <name-of-command> and you'll get output like this: http://unixhelp.ed.ac.uk/CGI/man-cgi?make (you can also google "man make" to get an online version if you're not on a Unix platform).
This is called a man page, and it is one of the most common forms of documentation for Unix programs.
To answer your original questions, yes make uses a Makefile. Essentially make reads the Makefile, and then determines a set of commands to create new files. If you want to understand a little more about make/Makefiles, check out the documentation: http://www.gnu.org/software/make/manual/html_node/Makefiles.html

The make command offers a DSL (domain-specific language) specialized for expressing how to build things efficiently. The core programming model lets a programmer express a series of dependencies. Given these dependencies, when something changes, make can determine the minimal set of targets that must be rebuilt, and in what order. It then executes the rules (which the programmer also states) to build those targets.
In addition to the language constructs that support this core dependency model, there are other features, like variables, functions, and text-transformation functions, that come in useful in the domain of building things. But it's mostly about expressing dependencies and the rules to build things.
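As a sketch of that model, here is a minimal hypothetical Makefile (all file names are made up): `app` depends on two object files, each of which depends on its source file and a shared header. Touching util.h makes make rebuild everything; touching only main.c rebuilds just main.o and relinks.

```make
# Hypothetical layout: main.c, util.c, util.h -> executable 'app'.
# Note: recipe lines must be indented with a tab character.
CC     = gcc
CFLAGS = -Wall -O2

app: main.o util.o                        # target: its prerequisites
	$(CC) $(CFLAGS) -o app main.o util.o  # run only when 'app' is out of date

main.o: main.c util.h                     # rebuilt if main.c or util.h changed
	$(CC) $(CFLAGS) -c main.c

util.o: util.c util.h
	$(CC) $(CFLAGS) -c util.c

clean:                                    # housekeeping target
	rm -f app *.o
```

Running make compares each target's timestamp against its prerequisites and executes only the recipes whose targets are stale, which is exactly the "minimal set, in order" behavior described above.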

Related

Is it possible to build a C standard library that is agnostic to both the OS and compiler being used?

First off, I know that any such library would need at least some interface shimming to interact with system calls or board support packages or whatever (for example, newlib has a limited and well-defined interface to the OS/BSP that doesn't assume intimate access to the OS). I can also see where there will be a need for something similar for interacting with the compiler (e.g. details of some things left as "implementation defined" by the standard).
However, most libraries I've looked into end up being far more wedded to the OS and compiler than that. They seem to assume access to whatever parts of a specific OS environment they want, and their compiler interactions are practically in collusion with the compiler implementation itself.
So, the basic questions end up being:
Would it be possible (and practical) to implement a full C standard library that has only a very limited, well-defined set of prerequisites and/or interfaces for being built by any compiler and run on any OS?
Do any such implementations exist? (I'm not asking for a recommendation of one to use, though I would be interested in examples that I can evaluate on my own.)
If either of the above answers is "no", then why? What fundamentally makes it impossible or impractical? Or why hasn't anyone bothered?
Background:
What I'm working on that has led me down this rabbit hole is an attempt to make a fully versioned, fully hermetic build chain. The goal is that I should be able to build a project without any dependencies on the local environment (beyond access to the needed source-control repo and, say, "a valid POSIX shell" or "language-agnostic-build-tool-of-choice is installed and runs"). Given those dependencies, the build should do exactly the same thing, with the same compiler and libraries, regardless of which versions of which compilers and libraries are or are not installed. Universal, repeatable, byte-identical output is the target I want to move toward.
Getting the compiler-of-choice to run from a repo isn't too hard, but I have yet to find a C standard library that "works right out of the box". They all seem to assume some magic undocumented interface between them and the compiler (e.g. __gnuc_va_list), or at best want to do some sort of config probing of the hosting environment, which would be somewhere between useless and counterproductive for my goals.
This looks to be a bottomless rabbit hole that I'm reluctant to start down without first trying to locate alternatives.

PICK/BASIC, FlashBASIC, and C Interoperability

I stumbled across some interesting documentation regarding PICK programming:
http://www.d3ref.com/?token=flash.basic
It says FlashBASIC is a compiled, rather than interpreted, version of PICK programs that is interoperable with PICK. This is great. I am curious about how it describes object code:
converts Pick/BASIC source code into a list of binary instructions
called object code.
Is this object code interoperable with other languages, or is it limited to the PICK & Universe operating environment? In other words, could a C program call a FlashBASIC program?
This is helpful in defining the C version, but I cannot find any clear definition of the FlashBASIC version:
What's an object file in C?
You're asking a few different questions which I'll try to answer.
Here is an article I wrote that might help your understanding of FlashBASIC. In short, where traditional MV BASIC is compiled and then run by assembler, the Flash compiler is C and generates an object module that sits below the standard BASIC object in frame space. At runtime that code is then interpreted by a C runtime. For our purposes here, there is no C interface, this is just an internal mechanism for getting code to run faster.
Note from the above that this is not related to the "What's an object file in C?" topic, because object modules in D3 are stored in D3 frames, completely unrelated to common OS-level object modules.
Now about C calling Pick - in your case D3: You can use the CP library - the docs are in the same area as the link you cited. Rather than binding with the database itself, you can also use your code in a client/server mode with the MVSP library if you're using Managed C (.NET). Or you can use any common web service client mechanism in C and set up D3 as a web service server with a number of technologies including MVST, mv.NET, Java, or C/C++.
I know that response is rather vague, but you're asking a question which has been discussed at length in forums over a period of years. If you ask a more specific question you'll get a specific answer. Feel free to refine your query in a comment and we can focus the answer.
Also note that you tagged this question as "u2". If you are really using the U2 variant of MV/Pick (Universe or Unidata) then the reference to the D3 docs was misleading and none of the above applies, as they do this differently in U2 and there is no FlashBASIC there. I know, you're confused. Let's work it out...
Yep, FlashBASIC just translates to C, is compiled, and the resulting object files are dynamically loaded and linked, then run from the Pick OS. The feature of C programs running and interacting with BASIC was certainly possible, but we did not implement that feature.

How can I best check for C library dependencies?

I'm building something that installs a high-level stack, and to do that, I need to install the lower-level stuff.
The simplest way to check whether, say, Java is installed is to shell out a which java in a shell script and check whether it can find it. I'm now to the point where I need to check for some libraries without an obvious binary: basically stuff that is an include from within C. libxml, for example.
I'm woefully green at C in general, so this makes things a little tricky for me. :) Ideally I could just make a shell script that calls a little C application containing #include <xxxx>, where xxxx is the library whose existence I'm checking. If it can't find it, it errors out. Unfortunately, of course, all that happens prior to compilation, so it's not as dynamic as I'd like.
I'm doing this on a system that probably doesn't have anything installed on it (be it high-level languages or package managers or what have you), so I'm looking more for a basic shell-script way of doing things (or maybe some clever C or command-line gcc options). Or maybe just manually search the include paths that gcc would look in anyway (/usr/local/include, /usr/include, etc.). Any thoughts?
Autotools is really what you need. It's a huge (and bizarre) framework for dealing with this very problem:
http://www.gnu.org/software/autoconf/
You can also use pkg-config, which will work with newer software making use of that mechanism:
http://pkg-config.freedesktop.org/wiki/
This is the purpose of configure (part of automake and autoconf).

Writing application for both Unix and Windows

I'll write a program for Interactive UNIX (http://en.wikipedia.org/wiki/INTERACTIVE_UNIX). But in a year it will be ported to Windows. I'll write it in ANSI C and/or sh script. When it runs on Windows, it will run as a Windows service. How do I make this as easy as possible for myself?
I want to change as little as possible when I port it, but to make it good code.
Unfortunately, Interactive UNIX is an old system and the only shell that exists is /bin/sh.
If you are even considering doing this as an sh script, then you should give serious consideration to Python, which is already portable.
Port early and frequently
Encapsulate non-portable code. (Don't spread too many #ifdefs all over your code; rather, create functions implemented separately for each OS in separate source files.)
Be very strict with data types (use long or short in structs/classes, not int).
Also, switch on the highest warning level and resolve all warnings.
You can use platform-dependent #ifdef'd includes and as strict types as possible. GLib has some nice ones defined which can be used on nearly every platform or architecture.
A shell-script-only option is not a viable alternative, as on Windows platforms there's no Bourne shell, Bash, or ksh by default, and unfortunately PowerShell seems to be rare on XP machines. But you can create both a traditional batch file and a Bourne shell script.
But as others said, it's easier if you use a higher level language that's platform independent. And why wouldn't you? :)
I would recommend using ANSI C and Lua (a small embeddable script interpreter). Try to use this with the basic required C functions you need.
You need to port and test often. If you work for a year on Unix and then try to switch, it will be much harder, because often the best porting solution is a different design that is implemented on all platforms.
Windows can't run sh scripts directly; you need Cygwin for that. So if you really want to run on vanilla Windows, you'd better use C. Stick to C89 and be careful. If you use any system calls, stick to POSIX ones and you should find them or equivalents on Windows. Windows also has a fairly comprehensive Berkeley-sockets-like library, so you can use that too, within reason.
You're still going to have to do some #ifdefing.
You'll end up compiling it with MinGW if you make it a Windows task; if you stray too far into the UNIX den, you'll have to make it a Cygwin binary instead, which has some baggage associated with it.
If it is not an option to add something that is inherently portable like Python, Ruby, Perl, Java, etc., then your best option is probably ANSI C. One reason for C's initial popularity was its (relatively good) portability. That said, anything that is closely tied to the OS, such as graphics, networking, etc., is much less portable in C than in something like Python. You should strive to make "wrappers" for OS-specific functions and keep them partitioned off from the main code. This way, when it comes time to port, you're only rewriting the wrappers, and everything else should compile without many issues.
All that said, it is a LOT easier to write something in Python and have it work everywhere. Plus it is more "fun" to write. So if you can avoid "interactive unix" in the future, do so.

Is an extensible program in C possible?

I am looking into making a C program that is divided into a core and extensions. These extensions should allow the program to be extended by adding new functions. So far I have found c-pluff, a plugin framework which claims to do the same. If anybody has any other ideas or references I can check out, please let me know.
You're not mentioning a platform, and this is outside the scope of the language itself.
For POSIX/Unix/Linux, look into dlopen() and friends.
In Windows, use LoadLibrary().
Basically, these will allow you to load code from a platform-specific file (.so and .dll, respectively), look up addresses of named symbols/functions in the loaded file, and access/run them.
I tried to limit myself to the low-level stuff, but if you want to have a wrapper for both of the above, look at glib's module API.
The traditional way on Windows is with DLLs, but this is kind of obsolete. If you want users to actually extend your program (as opposed to your developer team releasing official plugins), you will want to embed a scripting language like Python or Lua, because they are easier to code in.
You can extend your core C/C++ program using a script language, for example Lua.
There are several C/C++-Lua integration tools (toLua, toLua++, etc.).
Do you need to be able to add these extensions to the running program, or at least after the executable file is created? If you can re-link (or even re-compile) the program after having added an extension, perhaps simple callbacks would be enough?
If you're using Windows you could try using COM. It requires a lot of attention to detail, and is kind of painful to use from C, but it would allow you to build extension points with well-defined interfaces and an object-oriented structure.
In this usage, extensions label themselves with a 'Component Category' defined by your app, which allows the core to find and load them without having to know where their DLLs are. The extensions also implement interfaces that are specified using IDL and are consumed by the core.
This is old tech now, but it does work.
