This question has been asked before (some of the existing questions are 5 or 10 years old) but never really answered; usually a different approach was used instead.
I'm working on a project where a different approach is simply not possible. We are using a third-party post-build step that needs the version as part of its input arguments. The version is set inside the C code using #define, because some settings depend on different parts of the version.
After some major changes we have to recompile the code with different versions, so I'd rather keep the version in a single location (preferably in main.h). Is there any way to do this in Eclipse, or do I have to bear the pain and change it in multiple locations manually?
I'm using Eclipse Neon.3 Release (4.6.3), since I'm using System Workbench and that's their default version.
You have some toolchain that does:
1. your build
2. this post-build step
Extract the version #define from your C project (in 1) and store it in a build-system variable instead. Then pass it as a -D parameter to the files that need it (in 1), and pass it to step 2 in whatever way that tool expects.
Using the -D parameter (in Project Properties > C/C++ Build > Settings > Tool Settings > Preprocessor) did not do the job for me, as the macros, and the build variables defined from them, were not expanded in the post-build step.
My workaround was to write a shell script that reads the version from the header file and then passes it on to the post-build step: my script extracts the version and then calls the third-party script with it. This way I can change the version inside the code rather than in the labyrinth of Eclipse settings.
This line extracts the version:
fw_version=$(grep "FW_VERSION" "$projectdir/../Inc/main.h" | cut -d ' ' -f 3-)
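For context, a minimal sketch of the whole wrapper, assuming the third-party step is a script called post_build.sh that takes the version as its first argument (both the name and the argument convention are hypothetical):

#!/bin/bash
# Project directory handed over by the Eclipse post-build command, e.g. ${ProjDirPath}
projectdir="$1"
# Pull the version out of the "#define FW_VERSION ..." line in main.h
fw_version=$(grep "FW_VERSION" "$projectdir/../Inc/main.h" | cut -d ' ' -f 3-)
# Hand the extracted version to the original third-party post-build step
"$projectdir/post_build.sh" "$fw_version"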
Related
I have a simple CMake project with CTest and CPack. It uses the Lua C API to load and execute a script file called script.lua.
This script will be in a different location when built vs when installed/packed; its location would be:
[build] : ${CMAKE_CURRENT_SOURCE_DIR}/src/scripts
[install]: ../scripts (relative to app which is in bin directory)
What I'm trying to achieve here is to have the install step regenerate the configure_file output, rebuild using the new output, and only then proceed with the normal install step, and of course revert the configured file back to its original state afterwards.
Any help regarding this issue is appreciated.
My understanding is that CMake's configure_file command has its full effect during the execution of the cmake program. It has no representation in generated makefiles, or whatever other build system components cmake generates. Thus, if you want to configure a file differently for installation than for pre-installation testing,
You would need to perform completely separate builds (starting with executing cmake) for the two cases, and
You would need to use some attribute of the cmake command line or execution environment to convey the wanted information, such as using a -D option to define a CMake variable on the command line.
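For concreteness, the two-build approach would look roughly like this, where SCRIPT_DIR is a hypothetical cache variable that your CMakeLists.txt would feed into configure_file:

# build used only for testing: point at the scripts in the source tree
cmake -S . -B build-test -DSCRIPT_DIR="$PWD/src/scripts"
cmake --build build-test && (cd build-test && ctest)

# separate build used only for install/packaging: use the relative runtime path
cmake -S . -B build-install -DSCRIPT_DIR="../scripts"
cmake --build build-install && cmake --install build-install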
I advise you not to pursue this route. Aside from being overcomplicated, it's also poor form to install a different build of the software than the one you tested.
You have a variety of alternatives that could serve better. Among those are
Give the program itself the ability to accept a custom location for the Lua script. That is, make it recognize a command-line argument or environment variable that serves this purpose, and make use of that during pre-installation testing (see the sketch after this list).
If indeed the program is using a relative path to locate the script at runtime, then just (have CMake) put a copy of the script at the appropriate location in the build tree, so that the program will find it normally during testing.
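To illustrate the first alternative, pre-installation testing could then be driven by an environment variable; LUA_SCRIPT_DIR and /path/to/source are just assumed names, and the program would check the variable before falling back to its relative default:

# run from the build directory; point the program at the scripts in the source tree
LUA_SCRIPT_DIR=/path/to/source/src/scripts ctest
# an installed copy, run without the variable, keeps using ../scripts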
C: Clarity needed on Cython, Cythonize, setup.py/MSVC, GCC issues and creating/using header files
Hello all,
I am relatively new to Cython and very new to C but do have some programming experience with Python, Visual Basic and Java.
For my current project, my machine is running on Windows Pro 10 1909 x64, Python 3.7.9 x64 and Cython 0.29.21 and my ultimate goal is to create an EXE file with all the modules included.
I have not included any cdef statements or such like at this time and I plan to add these incrementally. Essentially what I am doing at the moment is at the proof-of-concept stage to show that I can compile and run current and future code without issues.
I have a __main__ module stored in the root project folder, and my other (some very large) Python modules, renamed as .pyx files and stored in an 'includes' folder, which handle different types of files (.csv, .json, .html, .xml, etc.), each with its own characteristics and method of extraction.
As I understand it, the header files contain function declarations which are then called upon as needed to act as a bridge between the subroutines and the main module. I have not created any header files at this time as I need clarity on a few points.
I am also having trouble Cythonizing with setup.py (setuptools) through MSVC and GCC.
Below is a discussion of the steps taken so far to reach this point regarding setup.py, GCC and running directly from the prompt, with my main questions at the end.
Step 1
My first attempt at compiling the code is to prepare a setup.py file and run it from an elevated command prompt.
from setuptools import Extension, setup
from Cython.Build import cythonize
extensions = [
Extension("main", ["__main__.pyx"],
include_paths=r"C:\path\to\parsing_company_accounts_master_cython\includes"),
]
setup(
name="iXBRLConnect",
ext_modules=cythonize(extensions, compiler_directives={'language_level' : "3"}),
)
However, in Python 3.7.9 x64, I get the following output.
python setup.py bdist
running bdist
running bdist_dumb
running build
running build_ext
building 'main' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
I get this same error with all versions of Python, and after installing many variants of the Build Tools starting from 2015, even when running in an elevated x64 Native Tools Command Prompt (any version of VS or the standalone Build Tools).
Searching on this site points to many different SDKs, libraries and so forth that need to be added, but after following many answers and numerous restarts I am still unable to get setup.py to run.
I CAN compile with the VS Community Edition GUI, but all efforts via the command line seem to be confounded (I only prefer the command line in order to keep a lean installation). It's not clear why the prompt route does not work.
Step 2
Not to be outdone, I attempt to install GCC - MinGW-w64 (https://wiki.python.org/moin/WindowsCompilers), an alternative compiler that is supported up to Python 3.4.
Noting that Python 3.4 is past end of life, I uninstall Python 3.7.9 x64, install 3.4 and reinstall my pip site-packages.
However, installing BeautifulSoup4 gives me this message:
RuntimeError: Python 3.5 or later is required
I would take the EOL issue for Python 3.4 with a large pinch of salt, but BS4 is a key library for my project, so this is pretty much a showstopper.
Step 3
Finally, I attempt to build the files directly on the command line.
First, I move my other .pyx modules (9 in total) into "c:\path_with_spaces\to\includes", keeping __main__.pyx in the main project folder, then run the next command from the project folder.
cython -3 --annotate --embed=main __main__.pyx --include-dir "c:\path_with_spaces\to\includes"
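As I understand it, --embed only generates a C file with a main() entry point, which still has to be compiled and linked against the Python runtime separately; from a Native Tools prompt that would be roughly the following (the Python install paths are assumptions I would adjust to my setup):

cl __main__.c /I C:\Python37\include /link /LIBPATH:C:\Python37\libs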
Questions
So, all the above said and done (phew!), here are the points I need clarity on:
Q1: It seems to me that the 'include_paths'/'--include-dir' arguments only specify additional directories to search when creating new C files. I presume the reason is that there are no header files alongside the existing *.pyx modules? [Initially, I naively thought Cython would automatically generate the headers and .c files; instead, nothing at all, neither .c nor .h, is generated for them.] Is there something wrong with my command-line syntax for '--include-dir', since the .c files should have been generated regardless and I would just 'slot' the header files in? There is no error to say so. Or are the included files just meant to be read, with no other action taken on them, as you would expect from a library file?
Q2: As I continue to learn more, it is increasingly clear that the header files need to be prepared in advance, according to this: https://cython.readthedocs.io/en/latest/src/userguide/external_C_code.html and this: http://hplgit.github.io/primer.html/doc/pub/cython/cython-readable.html However, as far as I can ascertain from their examples (unless I am looking at the wrong thing), they only call their modules from the main module at some point. Taking the last link, I am not clear about 'dice6_cwrap' in the dice6_cwrap.pyx example (I think it should be referenced in the main module, but that is not directly shown in the example). Also, might I need other files, perhaps a manifest of some sort?
Q3: In partial answer to Q2, I think I can 'chain' modules together as explained here: How does chain of includes function in C++? This is important to me because the way my code has worked up to now is to load each module (depending on what files are found) and then run through the modules in a 'chain' sequence: first parse all elements into a soup object, then run through each line element, and finally extract each attribute and insert it into a common database. In practice that can mean up to 8 'links' in total, counting from the 'start' method in the submodule and depending on the attribute in question. FYI, some of the modules also use pandas, numpy and multiprocessing. Thinking aloud: including header files, that means prepping 16 files? Eww! (But, with a little luck and fingers crossed, there will be speed gains from C compilation vs Python interpretation, other bottlenecks permitting.)
Apologies for my waffle; I welcome your thoughts on how I can move forward on this.
Thanks in advance.
Original
I am looking for a way to create a non-isolated development environment for a C-library.
I will most likely use cmake to build the library and my IDE is a simple text editor.
The problem now is that I do not only create the library but also some sample "applications" using the library.
Therefore I need to install the library's headers and the shared object (I'm using GNU/Linux) somewhere, and I do not want to install it to /usr/local/lib or (even worse) /usr/lib.
Is there a way to create a virtual environment similar to Python's pyvenv (and similar) into which I can install everything while still having access to the host libraries?
Also, I do not want to rewrite my $PATH/$LD_LIBRARY_PATH, set up a VM, container, or chroot.
The usage would then look like:
# switch to environment somehow
loadenv library1
# for library
cd library
make && make install
# for application
cd ../application1
make && ./application1
Is this possible?
Edit 1
So basically my directory structure will look like this:
library/
library/src/
library/src/<files>.c
library/include/<files>.h
application/
application/src/
application/src/<files>.c
First I need to compile the library and install the binary and header files.
These should be installed in a fake system-location.
Then I can compile the application and run it.
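With CMake I imagine the fake system-location would simply be a throwaway install prefix, roughly like this ($HOME/devroot is just an example path):

cmake -S library -B library/build -DCMAKE_INSTALL_PREFIX="$HOME/devroot"
cmake --build library/build
cmake --install library/build
# headers end up in $HOME/devroot/include, the shared object in $HOME/devroot/lib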
Edit 2
I thought a bit about it and it seems all I need is a filesystem sandbox.
So basically I want to open up a shell where every write to disk is not committed to the filesystem but is instead temporarily stored in e.g. a ramfs/tmpfs, to be dropped when the shell exits.
This way I can exactly test how everything would behave if compiled, deployed and executed on the real machine without any danger to existing files or directories and without accidentally creating files or directories without cleaning them up.
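A rough, untested sketch of what I have in mind, using a private mount namespace and an overlay so that writes under /usr/local land in a throwaway upper directory (all paths are just examples):

mkdir -p /tmp/sandbox/upper /tmp/sandbox/work
sudo unshare -m bash -c '
  # shadow /usr/local with an overlay; the lower layer stays read-only
  mount -t overlay overlay \
    -o lowerdir=/usr/local,upperdir=/tmp/sandbox/upper,workdir=/tmp/sandbox/work \
    /usr/local
  exec bash
'
# everything written below /usr/local in that shell stays in /tmp/sandbox/upper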
You don't really need to 'install' the library; you can work in the development tree.
(1) For compilation, all you need to do is use the -I flag to specify where the library's header files are, and this can be a relative path; for example, in your case you could do -I../../library/include
(2) For linking, you need to tell the linker where the library is located; you can use the -L flag to append to the library search path.
(3) for testing the application, you are correct that the application needs to be able to find the library. You have a couple of options:
(a) make sure the library and the executable are in the same directory
(b) you can temporarily modify your LD_LIBRARY_PATH, in your current shell only, for testing:
export LD_LIBRARY_PATH=abs_path_to_library:$LD_LIBRARY_PATH
Note that this will only affect the current shell (command terminal) you are working in. Any other shells you may have open, or open later, will have your normal LD_LIBRARY_PATH. I know you specified that you don't want to modify your PATH or LD_LIBRARY_PATH, but since the change is local to the shell in which the command is executed, it is a nice, easy way to do this.
(c) embed the path to the library in the client executable. To do this you need to pass an option to the linker. The command for gcc is:
-Wl,-rpath,$(DEFAULT_LIB_INSTALL_PATH)
see this how-to
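Putting (1), (2) and (3c) together for your directory layout, a rough sketch (it assumes the built shared object is liblibrary.so sitting in library/build and is linked as -llibrary; adjust the names to your project):

# run from the application/ directory
gcc src/*.c -I../library/include \
    -L../library/build -llibrary \
    -Wl,-rpath,"$PWD/../library/build" \
    -o application1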
I have a set of Debian packaging scripts and I would like the version number of the package to be incremented each time it is built (i.e. the debian_revision as specified in the Debian Policy Manual). That is, the first build should be PACKAGE-1.0-0, then PACKAGE-1.0-1, and so on (where 1.0 is the upstream_version). Is there an easy way to specify this "extra" version number without having to create a new entry in the changelog?
I'm looking to have this done automatically by the project's Makefile whenever a particular target (i.e. deb) is built.
dh_* scripts read debian/changelog to build a .changes file and to set the versions, among other things. You should not change the version without editing the changelog, but if your problem is making the changes manually, you can write a script that invokes
dch -i
or, if your problem is the changes made to debian/changelog itself, you can write a bash script to change the version automatically.
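For example, a rough sketch of such a script, to be invoked from your Makefile's deb target (it assumes the version always has a single dash-separated Debian revision):

#!/bin/bash
# read the current "upstream-revision" version from debian/changelog
version=$(dpkg-parsechangelog --show-field Version)
upstream=${version%-*}
rev=${version##*-}
# append a new changelog entry with the revision bumped by one
dch -b -v "${upstream}-$((rev + 1))" "Automated rebuild"
# then build the package unsigned
dpkg-buildpackage -us -uc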
I'm in the process of converting a project to Eclipse CDT. It consists of 2 (static) libraries and produces about 12 binaries, a few of which have 2-3 different build configurations, and it is currently built with SCons.
How should I structure this in an Eclipse workspace? One project for everything? One project for each of the binaries/libs? Something else?
I'd suggest you use CMake for this problem. It should be able to target the Eclipse build system; if not, it can generate a normal 'make' config for you. It is far better to go down this route since it's more portable in the long term, and writing a hierarchical build system is quite straightforward.
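For instance, CMake ships an Eclipse generator, so an out-of-source setup could be as simple as this (the ../myproject path is just a placeholder for your source tree):

mkdir build && cd build
cmake -G "Eclipse CDT4 - Unix Makefiles" ../myproject
# then import the generated project into Eclipse via File > Import > Existing Projects into Workspace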
I personally have used Eclipse CDT before, but only in makefile mode, i.e. to build anything I'd manually run the makefile. Basically I used Eclipse as a glorified editor. Here's how I worked things:
Everything that was part of the overall solution came under the same workspace. Each library/binary was its own directory and project, so that I could make each as required. I also had a separate folder (project) for tests, with a makefile that built all the test exes I wanted to run, so I could run valgrind on simple bits of it.
As I said, I used make and not Eclipse CDT's built-in build routines, so to that end I'd say it really doesn't matter how you structure it: do whatever makes sense / conforms best to UNIX principles.