I'm trying to embed LuaJIT into a C application. The code is like this:
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
#include <stdlib.h>
#include <stdio.h>
int barfunc(int foo)
{
    /* a dummy function to test with FFI */
    return foo + 1;
}

int main(void)
{
    int status, result;
    lua_State *L;

    L = luaL_newstate();
    luaL_openlibs(L);

    /* Load the file containing the script we are going to run */
    status = luaL_loadfile(L, "hello.lua");
    if (status) {
        fprintf(stderr, "Couldn't load file: %s\n", lua_tostring(L, -1));
        exit(1);
    }

    /* Ask Lua to run our little script */
    result = lua_pcall(L, 0, LUA_MULTRET, 0);
    if (result) {
        fprintf(stderr, "Failed to run script: %s\n", lua_tostring(L, -1));
        exit(1);
    }

    lua_close(L); /* Cya, Lua */
    return 0;
}
The Lua code is like this:
-- Test FFI
local ffi = require("ffi")
ffi.cdef[[
int barfunc(int foo);
]]
local barreturn = ffi.C.barfunc(253)
io.write(barreturn)
io.write('\n')
It reports an error like this:
Failed to run script: hello.lua:6: cannot resolve symbol 'barfunc'.
I've searched around and found that there's very little documentation on the FFI module. Thanks a lot.
The FFI library requires LuaJIT, so you must run your Lua code with LuaJIT.
From the doc:
"The FFI library is tightly integrated into LuaJIT (it's not available as a separate module)".
How do I embed LuaJIT?
Look here: http://luajit.org/install.html, under "Embedding LuaJIT".
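In practice this just means building your host program against the LuaJIT library instead of stock Lua; the C API calls in your code (luaL_newstate, luaL_openlibs, luaL_loadfile, lua_pcall) stay the same. On Linux the link line is roughly as follows, where host.c stands for your embedding source file and the include path and library name are assumptions that depend on how LuaJIT was installed:
gcc -o host host.c -I/usr/local/include/luajit-2.0 -lluajit-5.1 -lm -ldl
On Windows you link against the LuaJIT DLL's import library instead.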
Under MinGW your example runs if I add
__declspec(dllexport) int barfunc(int foo)
to the barfunc function.
Under Windows, LuaJIT is linked as a DLL.
As misianne pointed out, you need to export the function. If you are compiling the host program as C++ with GCC, also give the function C linkage with extern "C":
extern "C" int barfunc(int foo)
{
    /* a dummy function to test with FFI */
    return foo + 1;
}
If you are experiencing problems with undefined symbols under Linux using GCC, take care to have the linker add all symbols to the dynamic symbol table by passing the -rdynamic flag to GCC:
g++ -o application source.cpp -rdynamic -I... -L... -llua
For those of you trying to make this work on Windows with VC++ (2012 or later), using the C++ compiler:
make sure you use the .cpp extension, as this will do C++ compilation
make the function have external C linkage so that ffi can link to it, with extern "C" { ... }
export the function from the executable, with __declspec(dllexport)
optionally specify the __cdecl calling convention; this is not strictly required (it is the default) and is not portable
wrap the Lua headers in an extern "C" { include headers }, or better just #include "lua.hpp"
#include "lua.hpp"
extern "C" {
__declspec(dllexport) int __cdecl barfunc(int foo) {
return foo + 1;
}}
Related
I copied the example code in the Haskell FFI guide as a first step to exporting my Haskell program as a C library, but can't get it to compile. I have foo.hs:
module Foo where
foreign export ccall foo :: Int -> IO Int
foo :: Int -> IO Int
foo n = return (length (f n))
f :: Int -> [Int]
f 0 = []
f n = n:(f (n-1))
This successfully compiled to foo_stub.h and foo_stub.o. Here's foo_stub.h:
#include "HsFFI.h"
#ifdef __cplusplus
extern "C" {
#endif
extern HsInt foo(HsInt a1);
#ifdef __cplusplus
}
#endif
But then my C program didn't compile:
#include "foo_stub.h"
main() { foo(1); } // I realize this is probably wrong, and would also like advice on doing this part correctly, but note the error is not here.
error:
gcc foo.c
In file included from foo.c:1:0:
foo_stub.h:1:19: fatal error: HsFFI.h: No such file or directory
#include "HsFFI.h"
^
compilation terminated.
I assume I'm missing some header files or haven't pointed gcc to them correctly. I can provide more information if necessary. Any ideas on how to fix this?
I found "HsFFI.h" at /usr/local/lib/ghc-7.10.2/include/HsFFI.h. You should be able to direct GCC to look there with the -I option. More information here.
I'm trying to call a function from a C project but I don't know how to do it.
Here is my code ("treatments.c"):
#include <string.h>
#include "lua.h"
#include "lauxlib.h"
static int treatments_load_image (lua_State * L) {
    lua_pushnumber(L, 10);
    return 1;
}

static const luaL_Reg RegisterFunctions[] =
{
    { "treatments", treatments_load_image },
    { NULL, NULL }
};

int luaopen_treatments(lua_State *L)
{
    lua_newtable(L);
#if LUA_VERSION_NUM < 502
    luaL_register(L, NULL, RegisterFunctions);
#else
    luaL_setfuncs(L, RegisterFunctions, 0);
#endif
    return 1;
}
In my .lua file, I'm trying to do something like this:
local treatments = require 'treatments'
And I get the error below:
lua: run.lua:15: module 'treatments' not found:
no field package.preload['treatments']
no file './treatments.lua'
...
The ".c" file is in the same folder than ".lua" file. I'm not using any MAKEFILE file.
If it helps, I'm using Lua 5.1 =)
Thanks!!
The ".c" file is in the same folder than ".lua" file. I'm not using
any MAKEFILE file
No. You must build your C source file into a shared library (a DLL on Windows, a .so on Linux), then put that shared library somewhere on Lua's package.cpath; see http://lua-users.org/wiki/ModulesTutorial
Lua's require deals with files containing Lua code, or with shared libraries in the case of C. You need to compile the C source code into a shared library and have require load that; the luaopen_treatments function should "return" a table (i.e. push it onto the stack), as usual.
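As a concrete illustration, on Linux with Lua 5.1 the build might look like this (the header path is an assumption; adjust it for your system):
gcc -O2 -fPIC -shared -o treatments.so treatments.c -I/usr/include/lua5.1
With treatments.so somewhere on package.cpath (for example the current directory, if cpath contains ./?.so), require 'treatments' will find it, call luaopen_treatments, and use the table it pushes as the module.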
I am in the process of refactoring some old legacy code, written in C. The code is very tightly coupled and I am struggling to refactor it into clear logical, loosely coupled modules.
In my early iterations, I have managed to identify logical modules; however, the tight coupling is causing me problems, as many of the functions have intimate knowledge of other parts of the system.
The way I intend to fix this is to use extern declarations. The pseudocode below hopefully explains the situation:
Assume I have two logically separate modules Foo and FooBar. Each module is to be built into a separate library (the FooBar module has a dependency on the Foo module).
/*######################################*/
/* Foo Module */
/*######################################*/
/* Foo.h */
#ifndef FOO_MODULE_H
#define FOO_MODULE_H
int foo(int id);
int do_something();
...
#endif /* FOO_MODULE_H */

/* Foo.c */
#include "Foo.h"

extern int foobar(); /* defined in FooBar module (a separate library) */

int foo(int id){
    int var;
    switch (id){
    case 1:
        var = do_something();
        break;
    case 2:
        /* the module that gets here, has the required functions defined in it */
        var = foobar();
    }
    return var;
}
/*###############################*/
/* FOOBar module */
/*###############################*/
/* FooBar.h */
#ifndef FOOBAR_MODULE_H
#define FOOBAR_MODULE_H
#include "Foo.h"
int foobar();
void do_something_else();
...
#endif /* FOOBAR_MODULE_H */
/* FooBar.c */
#include <stdio.h>
#include "FooBar.h"

int foobar(){
    return 42;
}

void do_something_else(){
    int ret = foo(2);
    printf("Function returned: %d", ret);
}
Is this a valid way to refactor the existing code into logically separate modules whilst allowing executables that link to libfoo.so and libfoobar.so to continue to work correctly?
My underlying assumption is that modules that link to libfoo.so only will not be able to resolve foobar() - but that should be fine since they do not need that function, so they will never have to call it. On the other hand, for modules that link to libfoobar.so, when they call foo(), foobar() would have been resolved (the function definition is in the module).
Will the scheme I describe above work as I expect, or is there some gotcha that I am missing?
I tried to compile your files into shared libs and then use them (I use Cygwin).
Here's for Foo:
cm@Gregor-PC ~/src
$ gcc -I. -c --shared -o libFoo.so Foo.c
With the binutils tool nm you can check the symbols (grep for 'foo' to limit the output):
cm@Gregor-PC ~/src
$ nm libFoo.so | grep foo
0000000a T _foo
U _foobar
Where it gives an offset and a 'T' (text section), that tells you the symbol is defined in the lib; the 'U' next to _foobar means it is still undefined.
Now the FooBar lib has to be linked against Foo in order to have the foo symbol:
cm@Gregor-PC ~/src
$ gcc -I. -L. --shared -o libFooBar.so FooBar.c libFoo.so
cm@Gregor-PC ~/src
$ nm libFooBar.so | grep foo
61f0111e T _foo
61f010e0 T _foobar
With this I can compile against FooBar only and get foo as a known symbol:
cm@Gregor-PC ~/src
$ gcc -I. -o tst1 tst1.c libFooBar.so
cm@Gregor-PC ~/src
$ ./tst1
Function returned: 42
So it seems to work OK.
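The tst1.c used in the commands above is not part of the question; a minimal sketch of what it might look like:
/* tst1.c -- hypothetical driver for the link test above */
#include "FooBar.h"

int main(void)
{
    do_something_else();   /* calls foo(2), which in turn calls foobar() */
    return 0;
}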
You can improve on your approach to modularize your C code by having header files that contain only public data types, exported function prototypes (declared as extern) and maybe even public global variables or constants. Such a header file declares the interface to the module and would have to be included where the module is used.
This is explained with much more detail in the wonderful book 'Functional C' (Hartel, Muller, 1997, Addison Wesley) in the chapter on modules.
The benefit is that your dependencies are clearer to see (as includes in the source files) and you don't have to have that unsightly extern declaration in the Foo source.
For your example:
/* Foo.h */
extern int foo(int id); /* exported to FooBar */
/* FooBar.h */
extern int foobar(); /* export to Foo */
extern void do_something_else(); /* export to tst1 */
/* Foo.c */
#include <Foo.h>
#include <FooBar.h>
int do_something() {
    return 11;
}

int foo(int id){
    int var;
    switch (id){
    case 1:
        var = do_something();
        break;
    case 2:
        /* the module that gets here, has the required functions defined in it */
        var = foobar();
    }
    return var;
}
/* FooBar.c */
#include <stdio.h>
#include <Foo.h>
#include <FooBar.h>
int foobar(){
    return 42;
}

void do_something_else(){
    int ret = foo(2);
    printf("Function returned: %d", ret);
}
I am currently working on my first "serious" C project, a 16-bit VM. When I split the files from one big source file into multiple source files, the linker (whether invoked through clang, gcc, cc, or ld) spits out the error:
ld: duplicate symbol _registers in register.o and main.o for inferred
architecture x86_64
There is no declaration of registers anywhere in the main file. It is a uint16_t array, if that helps. I am on Mac OS 10.7.3 using the built-in compilers (not GNU gcc). Any help?
It sounds like you've defined a variable in a header then included that in two different source files.
First you have to understand the distinction between declaring something (declaring that it exists somewhere) and defining it (actually creating it). Let's say you have the following files:
header.h:
void printIt(void); // a declaration.
int xyzzy; // a definition.
main.c:
#include "header.h"
int main (void) {
    xyzzy = 42;
    printIt();
    return 0;
}
other.c:
#include <stdio.h>
#include "header.h"
void printIt (void) { // a definition.
    printf ("%d\n", xyzzy);
}
When you compile the C programs, each of the resultant object files will get a variable called xyzzy since you effectively defined it in both by including the header. That means when the linker tries to combine the two objects, it runs into a problem with multiple definitions.
The solution is to declare things in header files and define them in C files, such as with:
header.h:
void printIt(void); // a declaration.
extern int xyzzy; // a declaration.
main.c:
#include "header.h"
int xyzzy; // a definition.
int main (void) {
    xyzzy = 42;
    printIt();
    return 0;
}
other.c:
#include <stdio.h>
#include "header.h"
void printIt (void) { // a definition.
    printf ("%d\n", xyzzy);
}
That way, other.c knows that xyzzy exists, but only main.c creates it.
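Applied to the registers array from the question: declare it (extern) in a header and define it in exactly one .c file. A sketch, where the header name and the array size are assumptions:
/* register.h */
#include <stdint.h>
extern uint16_t registers[16];   /* declaration only; the size 16 is hypothetical */

/* register.c */
#include "register.h"
uint16_t registers[16];          /* the single definition */
Every other file, main.c included, just includes register.h and uses the array.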
I'm doing some tests to learn how to create a shared library.
The template for shared libraries in Code::Blocks is this:
library.c
// The functions contained in this file are pretty dummy
// and are included only as a placeholder. Nevertheless,
// they *will* get included in the shared library if you
// don't remove them :)
//
// Obviously, you 'll have to write yourself the super-duper
// functions to include in the resulting library...
// Also, it's not necessary to write every function in this file.
// Feel free to add more files in this project. They will be
// included in the resulting library.
// A function adding two integers and returning the result
int SampleAddInt(int i1, int i2)
{
    return i1 + i2;
}

// A function doing nothing ;)
void SampleFunction1()
{
    // insert code here
}

// A function always returning zero
int SampleFunction2()
{
    // insert code here
    return 0;
}
I tried to compile it, and it compiled without any error or warning. But when I tried to use it with ctypes.cdll.LoadLibrary("library path.dll") in Python 3 (which should work like the equivalent C call), it said it wasn't a valid Win32 application. Both Python and Code::Blocks are 32-bit (Code::Blocks compiles with GCC; I also tried an installed version of MinGW on my system, but it gives some error about a missing library), while I'm working on Windows 7 64-bit.
Do you know what the problem can be, or if I'm doing something wrong?
EDIT1:
I'm on Windows 7 64-bit. The specs file of the compiler says: "Thread model: win32, gcc version 3.4.5 (mingw-vista special r3)",
and I used the command
gcc.exe -shared -o library.dll library.c
In Python I used
from ctypes import *
lib = cdll.LoadLibrary("C:\\Users\\Francesco\\Desktop\\C programmi\\Python\\Ctypes DLL\\library.dll")
and the error was
WindowsError: [Error 193] %1 is not a valid Win32 application
I installed both Python 3.1 and MinGW from binary packages, not by compiling them on my system.
EDIT2:
After reading Marc's answer:
main.h
#ifndef __MAIN_H__
#define __MAIN_H__
#include <windows.h>
#ifdef BUILD_DLL
#define DLL_EXPORT __declspec(dllexport)
#else
#define DLL_EXPORT __declspec(dllimport)
#endif
#ifdef __cplusplus
extern "C"
{
#endif
DLL_EXPORT int MySimpleSum(int A, int B);
#ifdef __cplusplus
}
#endif
#endif // __MAIN_H__
main.c
#include "main.h"
// a sample exported function
DLL_EXPORT int MySimpleSum(int A, int B)
{
    return A + B;
}
compiling options:
gcc -c -DBUILD_DLL main.c
gcc -shared -o library.dll main.o -Wl,--out-implib,liblibrary.a
with gcc 4.5.2
I still get the same error.
I believe that in the Windows environment you need to use the __declspec annotation. How to create a shared library and how to use __declspec is described here: DLL Creation in MinGW.