Module for Python 3.6.2 (Spyder environment) from C source code

I'm quite new to Python, which I use by writing scripts in Spyder and running them in its IPython console with Python 3.6.2.
I'm trying to write a simple module from a "swig_example.c" file, following a couple of SWIG tutorials (http://www.swig.org/tutorial.html, http://www.swig.org/Doc1.3/Python.html#Python_nn6).
My aim is to be able to run a script "main_python.py" which should look like:
import swig_test
print(swig_test.fact(4))
where fact is a function defined in the original C source.
The source file is "swig_example.c":
/* File: swig_example.c */
#include "swig_example.h"
int fact(int n) {
    if (n == 0) {
        return 1;
    }
    else {
        return n * fact(n-1);
    }
}
The header file is as simple as:
/* File: swig_example.h */
int fact(int n);
and the interface one:
/* File: swig_example.i */
%module swig_test
%{
#include "swig_example.h"
%}
%include "swig_example.h"
When I run the following in the terminal:
swig -python swig_example.i
a "swig_example_wrap.c" and a "swig_test.py" files are created.
How should I proceed to get my "main_python.py" working?
(It currently fails with a "No module named '_swig_test'" error.)
I would like to have some script (maybe using distutils?) so that each time I modify the .c source I can easily update the module without changing the "main_python.py" file.
A solution that uses Xcode instead of Spyder would also be welcome.
I think this question could be useful to many who are new to Python (and to Mac, actually...) and who want to use it without throwing away their previous work...
EDIT:
I partially solved the problem. The main remaining issue is Spyder.
I create the ".c", ".h" and ".i" files the way I described. Then, following this post (Python.h not found using swig and Anaconda Python), I create, in the same folder, my "setup.py" file:
from distutils.core import setup, Extension
example_module = Extension('_example', sources=['example.c','example.i'])
setup(name='example', ext_modules=[example_module], py_modules=["example"])
Then, in Anaconda Navigator, I open the terminal of the environment I'm working in, move to the right folder and run:
python setup.py build_ext --inplace
If I now open Spyder, everything works the desired way. But if I then want to modify my C source, say to add a new function, problems arise. I modify the ".c", ".h" and ".i" files and re-run the previous line in the terminal. The "example.py" file turns out to be correctly modified (it includes the attribute for the new function), but when I try to import the module in Spyder (import example) the changes are not registered, and an "_example has no attribute 'new function'" error is given in the IPython console unless I restart Spyder itself.
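For what it's worth, a quick way to check which copy of the wrapper module the running kernel has actually loaded (plain standard-library calls; a diagnostic sketch, not a fix):
import os, time
import example  # the SWIG-generated wrapper module
print(example.__file__)  # which example.py was imported
print(time.ctime(os.path.getmtime(example.__file__)))  # when that file was last rebuilt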
Is there a faster way to fix this? (Maybe this is the interaction mentioned in the comments...)
Thank you all :-)

Related

GCC cannot recognize the directory path inside a file

The problem I encountered using GCC is that I cannot build my program with the make command, because some files contain only the path of the file that holds their actual contents.
Say I have a file named "machine.h" whose only content is the text target-pisa/pisa.h. In the same working directory there is a folder named "target-pisa" containing a file named "pisa.h"; the actual code of the header "machine.h" lives inside that "pisa.h" file.
Assume that for some reason I cannot simply copy and paste the code from "pisa.h" into "machine.h"; that is, I have to stick with what the prof provided. The make command does not work on my laptop because the compiler does not interpret target-pisa/pisa.h as a path and open the actual header "pisa.h"; instead, it tries to parse target-pisa/pisa.h as C code (if I am not mistaken).
Some additional info that may be helpful:
In machine.h, there is only one line of code as shown below:
target-pisa/pisa.h
I have checked that almost all .c files in the working directory have #include "machine.h".
How can I solve this problem? Please help, I have been stuck on this for a long time. By the way, my friend also used Git Bash for this lab and the problem doesn't happen to him.
I tried reinstalling Git Bash to see if that would solve the problem, but it didn't.
All in all, I want to build the program successfully using the make command in Git Bash.
machine.h needs to have an #include directive to tell the compiler to pull in the nested header.
#include "target-pisa/pisa.h"
Just writing target-pisa/pisa.h by itself isn't valid C code.
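For illustration, a minimal sketch of what machine.h would then contain (the include guard is just a conventional extra, not required by the fix):
/* File: machine.h -- wrapper that pulls in the real header */
#ifndef MACHINE_H
#define MACHINE_H
#include "target-pisa/pisa.h"
#endif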

Adding custom C library in Arduino 1.5.7 IDE

Context:
I would like to add a custom library to a piece of Arduino code in the Arduino 1.5.7 IDE, to keep the code modular and readable.
Attempted solution:
I make a folder called "mathsfunctions". In it I put two text files, one with a .c extension and the other with a .h extension.
The .c file is called "mathsfunctions.c" and has the following code in it:
#include "mathsfunctions.h"
int multiply(int a, int b)
{
    return a * b;
}
The .h file is called "mathsfunctions.h" and has the following code in it:
int multiply (int, int);
In the main file, I add in the following include preprocessor directive:
#include "mathsfunctions.h"
//The rest of the code
After the above was coded, I imported the library. To do this, I did the following:
Toolbar -> Sketch -> Add Library -> c:.....\mathsfunctions
I can confirm that this is indeed imported because, after doing so, the same mathsfunctions folder appears in the Arduino libraries folder:
C:.....\Arduino\libraries\mathsfunctions
Problem: Upon compiling, the error dialogue box gives the following error:
mathsfunctions.h: No such file or directory
Assistance Required: Any idea on what the problem could be?
You could simply put the header and the source in the same directory as your main sketch file. I would also suggest putting the implementation in the header, since that is a common way to pull extra functions into a sketch. I am unsure whether the Arduino build picks up extra source files, but it does support extra headers.
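A minimal sketch of the header-only variant suggested above, keeping the original file name (the static keyword is only there to avoid duplicate-definition errors if the header ever gets included from more than one file):
/* File: mathsfunctions.h -- header-only variant */
#ifndef MATHSFUNCTIONS_H
#define MATHSFUNCTIONS_H

static int multiply(int a, int b)
{
    return a * b;
}

#endif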

How do I generate included files using cmake?

I've got a tool that generates files that contain definitions and declarations. These files need to be included from other source files or headers - they aren't usable standalone.
The obvious thing to do is have a custom command to generate them. My CMakeLists.txt that does this is as follows. I'm currently using this with the GNU makefile generator.
project(test_didl)
cmake_minimum_required(VERSION 3.0)
add_custom_command(
    OUTPUT test_didl_structs.h test_didl_structs.c
    COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/didl.py --decls=test_didl_structs.h --defs=test_didl_structs.c ${CMAKE_CURRENT_SOURCE_DIR}/test_didl_structs.py
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/didl.py ${CMAKE_CURRENT_SOURCE_DIR}/test_didl_structs.py
    MAIN_DEPENDENCY ${CMAKE_CURRENT_SOURCE_DIR}/test_didl_structs.py)
add_executable(test_didl test_didl.c)
target_include_directories(test_didl PRIVATE ${CMAKE_CURRENT_BINARY_DIR})
target_link_libraries(test_didl shared_lib)
test_didl.c is very simple:
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "test_didl_structs.h"
#include "test_didl_structs.c"
int main(void) {
}
But on the first build, make tries to build test_didl.c, which of course fails, because test_didl_structs.* haven't been generated yet. Naturally, before the first successful build of test_didl.c, the dependency information isn't known, so make doesn't know to run the python command first.
I tried a custom target, but that's no good, because custom targets are assumed to be always dirty. This means the C file is recompiled on every build and the EXE is linked. This approach won't scale.
My eventual solution was to make the output .h file an input to the executable:
add_executable(test_didl test_didl.c test_didl_structs.h)
.h file inputs are treated as dependencies, but don't otherwise do anything interesting for makefile generators. (I am not currently interested in other generators.)
So that works, but it feels a bit ugly. It doesn't actually state explicitly that the custom commands need to be run first, though in practice this seems to happen. I'm not quite sure how (I'm not up to speed on reading the CMake-generated Makefiles just yet).
Is this how it's supposed to work? Or is there something neater I'm supposed to be doing instead?
(What I'm imagining, I suppose, is something like a Visual Studio pre-build step, in that it's considered for running on every build, before the normal dependency checking. But I want this pre-build step to have dependency checking, so that it's skipped if its inputs are older than its outputs.)
My eventual solution was to make the output .h file an input to the executable.
This way is correct.
It actually states that building the executable depends on the given file, and, if that file is an OUTPUT of some add_custom_command(), that command will be executed before the executable is built.
Another way is to generate the needed headers at the configuration stage using execute_process(). In that case there is no need to add the header files as sources for add_executable(): CMake autodetects dependencies for compilation, so test_didl will be rebuilt after test_didl_structs.h is regenerated.
execute_process(COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/didl.py --decls=test_didl_structs.h --defs=test_didl_structs.c ${CMAKE_CURRENT_SOURCE_DIR}/test_didl_structs.py)
# ...
add_executable(test_didl test_didl.c)
The drawback of this approach is that you need to manually rerun the configuration stage after changing your .py files. See also that question and the answer to it.
Another problem is that the header file will be updated every time configuration is run.
You can try telling CMake that you are using a generated (external) source; see the docs for set_source_files_properties and this past post.
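A minimal sketch of that idea, assuming the generated files land in the build directory as in the question:
# Mark the files as GENERATED so CMake does not expect them to exist at configure time
set_source_files_properties(
    ${CMAKE_CURRENT_BINARY_DIR}/test_didl_structs.h
    ${CMAKE_CURRENT_BINARY_DIR}/test_didl_structs.c
    PROPERTIES GENERATED TRUE)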

How to open files from a NaCl Dev Environment application?

I'm trying to get a simple command line application to run in the NaCl Development Environment. But I don't understand why it doesn't want to open files:
#include <stdio.h>
#include <ppapi_simple/ps_main.h>
int my_main(int argc, char **argv) {
    FILE *f = fopen("out.txt", "w");
    if (f) {
        fputs("output to the file", f);
        fclose(f);
    } else {
        puts("could not open file");
    }
    return 0;
}
PPAPI_SIMPLE_REGISTER_MAIN(my_main)
Running:
bash.nmf-4.3$ gcc -I"$NACL_SDK_ROOT/include" test.c -lppapi_simple -lnacl_io -lppapi
bash.nmf-4.3$ ./a.out
could not open file
bash.nmf-4.3$
It's clearly possible for an application to open files in arbitrary locations within the dev environment - I'm using nano to edit the test code! But the naclports version of nano doesn't look like it's been changed in ways that are immediately connected to file manipulation..?
Lua is another app that appears to have only been modified very slightly. It falls somewhere in between, in that it can run test files but only if they're placed in /mnt/html5, and won't load them from the home folder. My test program shows no difference in behaviour if I change it to look in /mnt/html5 though.
NB. my goal here is to build a terminal application I can use within the dev environment alongside Lua and nano and so on, not a browser-based app - I assume that makes some difference to the file handling rules.
Programs run in the NaCl Dev Environment currently need to be linked with -lcli_main (which in turn depends on -lnacl_spawn) for an entry point which understands how to communicate with the JavaScript "kernel" in naclprocess.js. They need this to know which current working directory they were run from, as well as to hear about mounted file systems.
Programs linked against just ppapi_simple can be run, but will not set up all the mount points the dev environment may expect.
There is a linker script in the dev env, -lmingn, that simplifies linking a command line program. For example, the test program from the question can be compiled with:
gcc test.c -o test -lmingn
NOTE: This linker script had a recently resolved issue; a new version with the fix was published to the store on 5/5/2015.
In the near future, we have plans to simplify things further, by allowing main to be the entry point.
Thanks for pointing out the lua port lacks the new entry point!
I've filed an issue and will look into fixing it soon:
https://code.google.com/p/naclports/issues/detail?id=215
I found a solution to this, although I don't fully understand what it's doing. It turns out that the small changes made to nano are important, because they cause some other functions elsewhere in the NaCl libraries to get pulled in that correctly set up the environment for file handling.
If the above file is changed to:
#include <stdio.h>
int nacl_main(int argc, char **argv) {
    FILE *f = fopen("out.txt", "w");
    if (f) {
        fputs("output to the file", f);
        fclose(f);
    } else {
        puts("could not open file");
    }
    return 0;
}
...and compiled with two more libraries:
gcc -I"$NACL_SDK_ROOT/include" test.c -lppapi_simple -lnacl_io -lppapi -lcli_main -lnacl_spawn
...then it will work as expected and write the file.
Instead of registering our own not-quite-main function with PPAPI_SIMPLE_REGISTER_MAIN, pulling in cli_main registers an internal function that sets some things up, presumably including what is needed for file writing to work, and then expects to be able to call nacl_main, which is left for the program to define with external visibility (several layers of fake-main stacking going on). This is why the changes to nano look so minimal.
nacl_spawn needs to be linked because cli_main uses it for ...something.

Installing a new library in Linux, and accessing it from my C code

I am working on a project which requires me to download and use this. When the downloaded folder is extracted, I am presented with three things:
A folder called "include"
A folder called "src"
A file called "Makefile"
After some research, I found out that I have to navigate to the directory which contains these files, and just type in the command make.
It seemed to install the library in my system. So I tried a sample bit of code which should use the library:
csp_conn_t * conn;
csp_packet_t * packet;
csp_socket_t * socket = csp_socket(0);
csp_bind(socket, PORT_4);
csp_listen(socket, MAX_CONNS_IN_Q);
while (1) {
    conn = csp_accept(socket, TIMEOUT_MAX);
    packet = csp_read(conn, TIMEOUT_NONE);
    printf("%s\r\n", packet->data);
    csp_buffer_free(packet);
    csp_close(conn);
}
That's all that was given for the sample server end of the code. So I decided to add these to the top:
#include <csp.h>
#include <csp_buffer.h>
#include <csp_config.h>
#include <csp_endian.h>
#include <csp_interface.h>
#include <csp_platform.h>
Thinking I was on the right track, I tried to compile the code with gcc, but I was given this error:
csptest_server.c:1: fatal error: csp.h: No such file or directory
compilation terminated.
I thought I might not have installed the library correctly after all, but to make sure, I found I could check by running this command, which gave this result:
find /usr -iname csp.h
/usr/src/linux-headers-2.6.35-28-generic/include/config/snd/sb16/csp.h
/usr/src/linux-headers-2.6.35-22-generic/include/config/snd/sb16/csp.h
So it seems like csp.h is installed; maybe I am referencing it incorrectly in the include line? Any insight? Thanks a lot.
The make command is probably only building the library, not installing it. You could try sudo make install. This is the "common" method, but I recommend checking the library's documentation, if any.
The sudo command is only necessary if you do not have permission to write to the system's include and library directories, which may be your case.
Another possibility (instead of installing the library) is telling GCC the location of the library's source code and generated binaries (by means of the -I and -L options of the gcc command).
That Makefile will not install anything, just translate the source into a binary format.
The csp.h in the Linux kernel has nothing to do with your project, it's just a naming collision, likely to happen with three letter names.
In your case, I would presume you need to add the include directory to the compilation flags for your server, like gcc -I/path/to/csp/include/csp csptest_server.c.
(Next, you'll run into linker errors because you'll also want to specify -L/path/to/csp -lcsp so that the linker can find the binary code to link to.)
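Putting the two flags together in one command (the paths are placeholders, not the library's actual install layout):
# Compile and link against libcsp; adjust the placeholder paths to wherever the library was built
gcc -I/path/to/csp/include/csp csptest_server.c -L/path/to/csp -lcsp -o csptest_server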
