PICK/BASIC, FlashBASIC, and C Interoperability

I stumbled across some interesting documentation regarding PICK programming:
http://www.d3ref.com/?token=flash.basic
It says FlashBASIC is a compiled, rather than interpreted, form of PICK programs that remains interoperable with PICK. This is great. I am curious about how it describes object code:
converts Pick/BASIC source code into a list of binary instructions
called object code.
Is this object code interoperable with other languages, or is it limited to the PICK & Universe operating environment? In other words, could a C program call a FlashBASIC program?
This is helpful in defining the C version of an object file, but I cannot find any clear definition of the FlashBASIC one:
What's an object file in C?

You're asking a few different questions which I'll try to answer.
Here is an article I wrote that might help your understanding of FlashBASIC. In short, where traditional MV BASIC is compiled and then run by the assembler-based runtime, the Flash compiler is written in C and generates an object module that sits below the standard BASIC object in frame space. At runtime that code is then interpreted by a C runtime. For our purposes here, there is no C interface; this is just an internal mechanism for getting code to run faster.
Note from the above that this is not related to the "What's an object file in C?" topic, because object modules in D3 are stored in D3 frames and are completely unrelated to common OS-level object modules.
Now, about C calling Pick (in your case D3): you can use the CP library; the docs are in the same area as the link you cited. Rather than binding with the database itself, you can also use your code in client/server mode with the MVSP library if you're using Managed C (.NET). Or you can use any common web-service client mechanism in C and set up D3 as a web-service server with a number of technologies, including MVST, mv.NET, Java, or C/C++.
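If you go the web-service route, the C side needs nothing MV-specific at all. Here is a rough sketch using libcurl; the host name and route are made up, and the D3 side would still have to be set up as a web-service server using one of the technologies above.

/* Sketch: calling a D3-hosted web service from plain C with libcurl.
 * The URL and route are hypothetical placeholders.
 * Build with: gcc call_d3.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* hypothetical endpoint that fronts a FlashBASIC subroutine */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://d3server.example.com/api/calc-balance?acct=1234");

    /* libcurl's default write callback prints the response body to stdout */
    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}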
I know that response is rather vague, but you're asking a question which has been discussed at length in forums over a period of years. If you ask a more specific question you'll get a specific answer. Feel free to refine your query in a comment and we can focus the answer.
Also note that you tagged this question as "u2". If you are really using the U2 variant of MV/Pick (UniVerse or UniData), then the reference to the D3 docs was misleading and none of the above applies, as U2 does this differently and there is no FlashBASIC there. I know, you're confused. Let's work it out...

Yep, Flash BASIC just translates to C, is compiled, and the resulting object files are dynamically loaded and linked, then run from the Pick OS. Having C programs run and interact with BASIC would certainly have been possible, but we did not implement that feature.

Related

How can I inject or dynamically load a C function into another C program

I want to build an interface in a C program which is running on an embedded system. It should accept some bytecode that represents a C function. This code will then be loaded into memory and executed. The result is something like remotely injecting code into a running app; the only difference here is that I can implement or change the running code and provide an interface.
The whole thing should be used to inject test code on a target system.
My current problem is that I do not know how to build such bytecode out of an existing C function. Mapping and executing it would be no problem if I knew the start address of the function.
Currently I am working on Ubuntu for testing purposes; this allows me to try some techniques which are not possible on the embedded system (due to missing operating-system libraries).
I built a shared object and used dlopen() and dlsym() to run the function. This works fine; the problem is just that I do not have these functions on the embedded system. I read something about loading a shared object into memory and running it, but I could not find examples of that. (see http://www.nologin.org/Downloads/Papers/remote-library-injection.pdf)
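Roughly, that part looks like this (libtest.so and run_test are just placeholder names for my test library and test function):

/* Load a shared object at runtime and look up a function by name.
 * This is what works on Ubuntu but is not available on the target.
 * Build with: gcc host.c -ldl */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("./libtest.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* hypothetical test function: void run_test(void) */
    void (*run_test)(void) = (void (*)(void))dlsym(handle, "run_test");
    if (run_test)
        run_test();

    dlclose(handle);
    return 0;
}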
I also took some simple byte code that just prints hello world to stdout. I stored this code in memory using mmap() and executed it. This also worked fine. Here the problem is that I don't know how to create such byte code myself; I just used a hello world example from the internet. (see https://www.daniweb.com/programming/software-development/threads/353077/store-binary-code-in-memory-then-execute-it)
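The mmap() part looks roughly like this; the byte array is only a placeholder, because producing the real machine code for my own functions is exactly the part I don't know how to do:

/* Rough sketch of the mmap()-and-jump pattern described above.
 * The byte array is a placeholder: real machine code is
 * architecture-specific and would normally be extracted from a
 * compiled object (e.g. with objcopy -O binary). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* placeholder: 0xC3 is "ret" on x86-64, so calling it just returns */
    unsigned char code[] = { 0xC3 };

    void *mem = mmap(NULL, sizeof code,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memcpy(mem, code, sizeof code);

    /* cast the executable buffer to a function pointer and call it */
    void (*func)(void) = (void (*)(void))mem;
    func();

    munmap(mem, sizeof code);
    puts("returned from injected code");
    return 0;
}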
I also found something here: https://stackoverflow.com/a/12139145/2479996 which worked very well. But there I need an additional linker script, even for such a simple program.
Further I looked at this post: https://stackoverflow.com/a/9016439/2479996
According to that answer, my problem would be solved by the "X11 project".
But I did not really find much about that; maybe some of you can provide me with a link.
Is there another solution for doing this, or did I miss something? Can someone point me in the right direction?
Thanks in advance
I see no easy solution. The closest that I am aware of is GCC's JIT backend (libgccjit). Here is a blog post about it.
As an alternative, you could use a scripting language for the code that needs to be injected - for instance, ChaiScript or Lua. In this question there is a summary of options. As you are on an embedded device, the overhead might be significant, though.
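A minimal sketch of the Lua route, assuming the Lua 5.2+ headers and library are available on the target; here the "injected" test code arrives as a source chunk rather than machine code:

/* Embed a Lua interpreter and run code handed in as a string.
 * Build roughly: gcc host.c -llua -lm -ldl */
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int main(void)
{
    lua_State *L = luaL_newstate();   /* create interpreter state */
    luaL_openlibs(L);                 /* load the standard libraries */

    /* the injected test code, received e.g. over the interface */
    const char *chunk = "function test(x) return x * 2 end";
    if (luaL_dostring(L, chunk) != LUA_OK) {
        fprintf(stderr, "load error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }

    /* call the injected function from C */
    lua_getglobal(L, "test");
    lua_pushinteger(L, 21);
    if (lua_pcall(L, 1, 1, 0) != LUA_OK) {
        fprintf(stderr, "call error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }
    printf("test(21) = %lld\n", (long long)lua_tointeger(L, -1));

    lua_close(L);
    return 0;
}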
If using an LLVM-based backend instead of GCC is possible, you can have a look at Cling. It is a C++ interpreter based on LLVM and Clang. In my personal experience it was not always stable, but it is used in production at CERN. I would expect the dynamic-compilation features to be more advanced in LLVM than in GCC.

Using make in Linux

What exactly does the "make" command in Linux do? If I have a makefile, does that correspond with it? When I build my code, I type make at the command line and it runs, but I don't really know what exactly it does. If you could explain what's going on, I'd be more familiar with it in the future. Thanks!
When using unfamiliar commands on Unix, you can usually type man <name-of-command> and you'll get output like this: http://unixhelp.ed.ac.uk/CGI/man-cgi?make (you can also Google "man make" and get an online version if you are not on a Unix platform).
This is called a man page, and it is one of the most common forms of documentation for Unix programs.
To answer your original question: yes, make uses a Makefile. Essentially, make reads the Makefile and then determines a set of commands to run in order to create new files. If you want to understand a little more about make and Makefiles, check out the documentation: http://www.gnu.org/software/make/manual/html_node/Makefiles.html
The make command offers a DSL (domain-specific language) specialized for expressing how to build things efficiently. The core programming model is the expression of dependencies: the programmer states a series of dependencies, and given those, when something changes, make can determine the minimal set of targets that must be rebuilt, and in what order. It then executes the rules (which the programmer also writes) to build those targets.
In addition to the language constructs that support this core model of expressing dependencies, there are other features (variables, functions, text-transformation functions, etc.) that come in useful in this domain. But it is mostly about expressing dependencies and the rules for building things.
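A minimal sketch of what such a Makefile looks like for a small C program (the file names are made up, and recipe lines must start with a tab character):

# "app" depends on two object files; each object file depends on
# its source file and a shared header. If only main.c changes,
# make recompiles main.o and relinks app, leaving util.o alone.

CC     = gcc
CFLAGS = -Wall -O2

app: main.o util.o
	$(CC) $(CFLAGS) -o app main.o util.o

main.o: main.c util.h
	$(CC) $(CFLAGS) -c main.c

util.o: util.c util.h
	$(CC) $(CFLAGS) -c util.c

clean:
	rm -f app main.o util.o

Running make after touching only main.c rebuilds main.o and app and nothing else; make clean removes the generated files.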

In what languages besides C can I write a C library?

I want to write a library that is dynamically loadable and callable from C code, but I really don't want to write it in C - the code is security critical, so I want a language that makes it easier to have confidence that my code is correct. What are my options?
To be more specific, I want C programmers to be able to #include this, and -l that, and start using my library just as if I had written it in C. I'd like programmers in other languages to be able to use their favourite tools for linking to C libraries to link to it. Ideally I'd like that to be possible on every platform that supports C, but I'll settle for Linux, Windows and MacOS.
Anything that compiles to native code. So you might Google for that - "languages that compile to native code." See, e.g., Programming languages that compile to native code and have the batteries included
C++ is often the choice for this. It compiles to native code and, provided you keep your interfaces simple, it is easy to write an adapter layer.
Objective-C and Fortran are also possible.
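For the C++ (or any other native-language) route, the usual trick is to expose a plain-C header so the C compiler and linker never see anything language-specific. A rough sketch, with all names hypothetical:

/* mylib.h - sketch of a C-callable interface for a library whose
 * implementation might be C++ (or another native language).
 * The extern "C" guard keeps the exported symbols unmangled. */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef __cplusplus
extern "C" {
#endif

/* hypothetical API: an opaque handle plus plain-C types only */
typedef struct mylib_ctx mylib_ctx;

mylib_ctx *mylib_create(void);
int        mylib_process(mylib_ctx *ctx, const char *input,
                         char *out, int outlen);
void       mylib_destroy(mylib_ctx *ctx);

#ifdef __cplusplus
}
#endif

#endif /* MYLIB_H */

C client code just includes the header and links against the library as usual; the implementation behind it can be C++ as long as exceptions and C++ types never cross the boundary.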
It sounds like you are looking for a language with C ABI compatibility, which in practice means one that compiles to native code. As long as it can be compiled to a valid object file (typically a .obj or .o file) that the linker accepts, that is the main criterion. You will also want to write a header file as a convenience for any client code written in C (or a closely related language or variant).
As mentioned by others, you need a pretty good reason for choosing a language other than C, as it is the lingua franca of low-level/systems software. Assembler is an option, although it is harder to port between platforms. D is a more portable, but less widespread, alternative which is designed to produce secure, efficient native code with a minimum of fuss. There are many others.
Almost every security-critical application I know of is written in C. I don't believe any other language has a better real-world track record for producing secure applications.
C is said to be a poor language for security by people who don't understand it.
If you want C programmers to use your library, use C. Doing anything else is tying one hand behind your back whilst trying to walk on a balance beam (the gymnastics equipment). Sure, there are dozens of other languages that are capable of interfacing with C, but it typically involves using a C layer and then stuffing the C data types into language-specific data types (Java objects, Python objects, etc.), and when the call is finished, converting back to a C data type. That just makes it harder to work with, and potentially slower if you don't get all the design decisions right. And people won't understand the source code, so they won't like to use it (more about this below).
If you want security, then write very good code with your "security aspects" hat firmly on at all times, find a security mailing list or website and post the code there for review, take the review comments on board, understand them, and fix anything that is meaningful to fix. Distribute the source code to your users, so people can see what your code does. Those who understand security will know what to look for and will recognise whether you have done a good job or a bad one; those who don't will hopefully trust the right people. If it's good, people will use it. If it's "hidden" and not easy to access, you won't get many customers, no matter what language you use.
Don't worry, you won't reveal anything more by releasing the source. If there is a flaw in the code, and it is popular (or important) enough, someone will find the flaw even if you publish only binaries. For those skilled in reverse engineering, not having source code is only a small obstacle.
Security doesn't stem from using a specific language or a specific tool, it stems from good design and good basic understanding of the problems with security.
And remember that security by obscurity (whether that means "hidden source code", "an unusual language", or something else obscure) is false security.
You might be interested in ATS, http://ats-lang.sourceforge.net/. ATS compiles via C, can be as efficient as C, and can be used in a way that is ABI-compatible with C. From the project website:
ATS is a statically typed programming language that unifies implementation with formal specification. It is equipped with a highly expressive type system rooted in the framework Applied Type System, which gives the language its name. In particular, both dependent types and linear types are available in ATS. The current implementation of ATS (ATS/Anairiats) is written in ATS itself. It can be as efficient as C/C++ (see The Computer Language Benchmarks Game for concrete evidence) and supports a variety of programming paradigms
ATS's dependent and linear type system helps produce static guarantees about your code, including various aspects of resource management safety.
Chris Double has been writing a series of articles exploring the power of ATS's type system for systems programming here: http://bluishcoder.co.nz/tags/ats/. Of particular note is this article: http://bluishcoder.co.nz/2012/08/30/safer-handling-of-c-memory-in-ats.html
This document covers aspects of calling back and forth between ATS and C code: https://docs.google.com/document/d/1W6DYQApEqKgyBzMbvpCI87DBfLdNAQ3E60u1hUiMoU0
The main downside is that dependently-typed programming is still a daunting prospect, even outside systems programming. The syntax of the language is also a bit weird: consider lexical quirks such as the use of abst@ype as a keyword. Finally, ATS is to some degree a research project, and I personally don't know whether it would be sensible to adopt it for a commercial endeavour.
Theoretically, it's going to be Fortran: less indirection (as in: my array is [here], not just a pointer to here, and this is true of most but not all of your data structures and variables).
However... there are many gotchas and quirks in Fortran: not, perhaps, as many as in C, but you probably know your way around C rather better than Fortran. Which is the point behind most of the comments saying "know your code" - but do you really know what your compiler is doing?
Knowing you, I'm prepared to take it on trust that you do, for C. Most programmers don't. You do not know and cannot know what a local JVM or JIT compiler does, and that's a black hole in your security model if you're using Java, C#, or scripting languages.
Ignore anyone who tells you that the hairy-chested he-men of secure computing write their own assembler: they probably don't even know the security errors they're making in any and all nontrivial projects they release. Know your compiler, indeed.
You could write it in Lua - providing a C API to a Lua library is relatively straightforward. C++ is also an option, though of course you'd have to write C wrappers and make sure no exceptions can escape your functions. But honestly, if it's security critical, the minor inconveniences of the C language shouldn't be that big a deal. What you really should be doing is proving the correctness of your program where feasible, and testing extensively where that's not.
You can write a library in Java. JNI is normally used to call C from Java, but it can be used the other way around.
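A rough sketch of that direction using the JNI invocation API; the MyLib class and its add method are hypothetical, and the build flags depend on your JDK layout:

/* Call into a Java "library" from C by hosting a JVM.
 * Build roughly: gcc host.c -I$JAVA_HOME/include \
 *   -I$JAVA_HOME/include/linux -L$JAVA_HOME/lib/server -ljvm */
#include <stdio.h>
#include <jni.h>

int main(void)
{
    JavaVM *jvm;
    JNIEnv *env;
    JavaVMOption opt = { .optionString = "-Djava.class.path=." };
    JavaVMInitArgs args = {
        .version = JNI_VERSION_1_8,
        .nOptions = 1,
        .options = &opt,
        .ignoreUnrecognized = JNI_FALSE
    };

    if (JNI_CreateJavaVM(&jvm, (void **)&env, &args) != JNI_OK) {
        fprintf(stderr, "could not start JVM\n");
        return 1;
    }

    /* hypothetical Java class with: public static int add(int a, int b) */
    jclass cls = (*env)->FindClass(env, "MyLib");
    if (cls) {
        jmethodID mid = (*env)->GetStaticMethodID(env, cls, "add", "(II)I");
        if (mid) {
            jint sum = (*env)->CallStaticIntMethod(env, cls, mid, 2, 3);
            printf("MyLib.add(2, 3) = %d\n", (int)sum);
        }
    }

    (*jvm)->DestroyJavaVM(jvm);
    return 0;
}

Note that this means every C client of the library ends up embedding a JVM, which is a heavy dependency compared with a plain native library.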
There is finally a decent answer to this question: Rust.

How to use multiple development languages

I program in Delphi (D7 and D2006) on Windows XP (migrating in the near future to Windows 7). I need to use a mathematical library for some of the work I am doing, and most of the math libraries I have looked at (I am inclining towards Mathematica at present) will produce compiled C code. Such code will provide specific functionality to my main programs.
I have a very basic question: given this development setup, how do I start utilising the compiled C code from Delphi? I really need baby steps to get me started on the process.
I've done quite a bit of this with my FE product, OrcaFlex. You have two options for linking to your C code from Delphi: static or dynamic. I link statically because it makes distribution and versioning much easier, but it's really quite a trick to get static linking to work and you have to rely on a number of undocumented aspects of Delphi.
I suspect that for your needs dynamic linking is best. Basically you need to compile and link your C code into a DLL. I recommend using the Borland C compiler to do this; you can use the free command-line version, BCC55. The advantage of using Borland C is that it makes the same assumptions about the 8087 floating-point unit as Delphi does. If you build with MSVC you will find that MS have elected not to raise floating-point exceptions, whereas Borland C does raise them. This is a bit of a corner case, but it becomes relevant if you are trying to ship a product that needs to be robust.
You should know that the C code will, by default, use the C calling convention, and I'd just stick with that. You bring it into Delphi by declaring the external routine with the cdecl calling convention.
The other thing you need to take care over is defining a clear interface between the two modules. Make sure that exceptions don't cross the module boundary and that you don't pass any special types (e.g. Delphi strings) across it. So for a string use a PChar (or, even better, a PAnsiChar or PWideChar, to be sure it won't change meaning when you upgrade to Delphi 2009 and later).
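To make that concrete, here is a rough sketch of the C side of such a DLL; the function and file names are made up, and the point is only the cdecl convention and the plain-C types at the boundary:

/* mathlib.c - sketch of a C DLL that Delphi can call.
 * Build, for example, with BCC55: bcc32 -WD mathlib.c */
#ifdef _WIN32
#define MATHLIB_EXPORT __declspec(dllexport)
#else
#define MATHLIB_EXPORT
#endif

/* exported with the C (cdecl) calling convention; no exceptions or
 * language-specific types cross this boundary */
MATHLIB_EXPORT double __cdecl interpolate(double x, const double *table, int count)
{
    /* trivial placeholder computation */
    if (count < 2) return x;
    return (table[0] + table[count - 1]) * 0.5 + x;
}

On the Delphi side this would be imported with a matching cdecl external declaration pointing at mathlib.dll, as described above.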
I have been very happy with the SDL Library from Lohninger (http://www.lohninger.com/mathpack.html). It is written in Delphi and compiles right into your application, so there are none of the bundling, calling-convention, or floating-point problems discussed in the other responses in this thread.
Take a look at what he includes. If you're lucky, your needs will be met by his library and you'll be able to use it!
If you currently have Mathematica installed, go to the Documentation Center and look up guide/CLanguageInterface; otherwise that guide is available on the web. Have a good read there.
My understanding is that Mathematica can generate C programs that link up with the Mathematica engine via MathLink if you need full functionality; if you only need lower-level features, it can generate code that can be statically linked with the compiled Mathematica libraries, so standalone code is possible.
See the Code Generator documentation.
If you can convert the C programs into DLLs, then accessing such external functions from Delphi is relatively simple with external declarations:
function MathematicaRoutine(const x: double): double; cdecl; external 'MyInterface.dll';
There are bound to be a great number of complexities in getting this to work if you need to achieve a static bind (for use where Mathematica is not installed), if indeed that is possible. I have never attempted it.
You can mix Delphi and C++Builder code in one project using RAD Studio: put the automatically generated C code into a C++Builder file (.cpp) and add Delphi files for the rest.

Light C data structures and strings library?

I'm searching for something on the level of the GNU extensions for C, but going a little beyond (some basic data-structure management). Best would be something BSD/MIT licensed.
If there were something just for strings, containing equivalents of the GNU extensions plus some more, that would be great.
I would prefer something that can be simply compiled into a project (no external libraries) based totally on the C standard (ANSI C89 if possible).
Edit: it's for an open-source project that has a weird license, so no GPL code can be added, and working with plain K&R/ANSI C is pure pain.
This question already seems to be addressed here.
I actually wrote a somewhat lengthy response (recommending GLib and mentioning that Lua, since 5.0, is MIT licensed, not BSD), but my machine crashed halfway through :(
Thinking outside the box, a viable but off-the-wall approach is to use Lua. It is small, written in the subset of ANSI C that also happens to be valid C++, and supplies a rich garbage-collected environment for strings and associative arrays.
It can be built as a shared library, but it can also be statically linked.
It can admittedly feel a tad verbose when driving its data types entirely from the C side, but it is easy to move some of the higher level logic of your application into the Lua side where its data just works. Its VM is highly tuned, allowing it to perform better than one would expect for an interpreted scripting language, and there is a JIT compiler available as well for those times when its existing VM just isn't quite fast enough.
It is also open source and MIT licensed.
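As a rough sketch of what "driving its data types from the C side" looks like, here is a tiny program that uses a Lua table as a string-keyed map; it assumes the Lua 5.x headers and library are compiled into the project:

/* Use an embedded Lua state as a string-keyed associative array from C. */
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int main(void)
{
    lua_State *L = luaL_newstate();

    /* create a table and store it as the global "config" */
    lua_newtable(L);
    lua_pushstring(L, "hello world");
    lua_setfield(L, -2, "greeting");   /* config.greeting = "hello world" */
    lua_setglobal(L, "config");

    /* read the value back out */
    lua_getglobal(L, "config");
    lua_getfield(L, -1, "greeting");
    printf("greeting = %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}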
