Why am I getting the error "cannot find -lncurses"?

I have just started a university course in C, and we have been instructed to run a makefile through Cygwin (which uses the GCC compiler). However, I have very little knowledge about computers and am out of ideas as to how to solve this. When I run the makefile it says:
cannot find -lncurses.
I understand ncurses is a library, and that the compiler is looking for it because some of the files in the makefile need it, but I don't understand how it can be missing, where it is, or how I point the compiler to it. Can anyone offer me any advice?

You need additional packages.
From https://cygwin.com/ml/cygwin-announce/2001/msg00124.html
The ncurses package has been updated to ncurses-5.2-7. ncurses is a
package that provides character and terminal handling libraries,
including 'gui-like' panels and menus. It is often used instead of
termcap.
MAJOR CHANGES to the ncurses package:
The ncurses package has been split into three separate packages:
ncurses-5.2-7 (contains the static libs, header files, man pages, etc.)
libncurses6-5.2-2 (contains the new DLLs)
terminfo-5.2-1 (contains the terminfo database)
libncurses5-5.2-1 is a new package containing the old DLLs from ncurses-5.2-5, for backward compatibility.
ncurses is now built using the 'auto-import' features of recent binutils.
ncurses-5.2-5a: if it's necessary to roll back, this package contains the files from ncurses-5.2-5 (post split-up). Thus, this package + terminfo + libncurses5 = old ncurses-5.2-5.
See NOTES below for additional information.
INSTALLATION:
To update your installation, click on the "Install Cygwin now" link on the http://cygwin.com/ web page. This downloads setup.exe to your system. Then, run setup TWICE and answer all of the questions EACH TIME. The FIRST time, update ONLY the ncurses package. The SECOND time, install the terminfo, libncurses5, and libncurses6 packages. You MUST do BOTH steps.
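Once the right packages are installed, the linker should resolve -lncurses on its own. If a library ever ends up in a non-standard location, you can point the linker at its directory with -L. A minimal sketch (the file names and the /usr/local/lib path are illustrative, not from the question):

gcc -o app main.c -lncurses                    # works once libncurses is on the default search path
gcc -o app main.c -L/usr/local/lib -lncurses   # hypothetical hand-installed location

The second form only matters for hand-installed copies; the Cygwin packages put the library where GCC already looks.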

Related

C app deployment and managing dependencies in C

I'm new to C development, but I have some experience in other modern languages, so the first thing I found hard is dependencies and deployment. In those languages we have Gradle, Maven, NuGet, pip, and so on, but in C I find it a bit difficult to manage this process.
For example, I have an app that should use mongo-c-library, log4c, and libarchive. In my development environment I downloaded and unzipped the tar files for each of these libraries, followed their instructions (usually some make steps), and installed them in order to include them in my code and make it work.
I have studied CMake a bit, but I couldn't get a clear picture of how it could actually solve the problem.
At this moment my best solution is to create an install bash script, zip all the dependencies' unpacked folders together with that install script, and then send it to the production server to deploy it.
1. The first question is: is it possible to just copy and paste all the .so, .h, etc. files in /path/of/installed/dependencies/include and /path/of/installed/dependencies/lib into the destination server's library path?
2. If not, what is the faster way?
While I was browsing the CMake source tree, I found that its developers just use these packages' source code directly:
cmxxx contains the xxx sources and header files.
3. How can apt-get and Linux package managers help in the deployment process?
The first two questions were more about dependencies. Imagine we have a simple C app and we want to install it (build and make a usable executable) quickly; how can this be related to .deb packages?
1. The first question is: is it possible to just copy and paste all the .so, .h, etc. files in /path/of/installed/dependencies/include and /path/of/installed/dependencies/lib into the destination server's library path?
Yes, technically it's possible. That's essentially what package managers do under the hood. However, doing that by hand is a colossal mistake and screams bad practice. If that's what you want, then at the very least you should look into package managers to build your own installer, since they already handle this sort of thing for you.
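For instance, on a Debian-style system you can see exactly which files a package installs and where (libarchive-dev is just an arbitrary example here):

dpkg -L libarchive-dev    # lists the package's files: headers under /usr/include, libraries under /usr/lib/...

That listing is the same copying you would otherwise be doing by hand.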
2. If not, what is the faster way?
You're actually asking an entirely different question, which is: how should I distribute my code, and how do I expect users to use/deploy it?
If you want users to access your source code and build it locally, then since you've mentioned CMake, you just need to set up your project right, as CMake already supports that use case.
If instead you just want to distribute binaries for a platform, then you'll need to build and package that code. Again, CMake can help you with that, as CMake's CPack supports generating some types of packages, like the DEB packages used by Debian and Ubuntu and handled by apt.
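As a rough sketch of that flow (the project name is hypothetical, and it assumes your CMakeLists.txt sets the relevant CPACK_* variables and calls include(CPack)):

cmake -S . -B build        # configure
cmake --build build        # compile
cd build && cpack -G DEB   # produces something like myapp-1.0.0-Linux.deb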
3. How can apt-get and Linux package managers help in the deployment process?
apt is designed to download and install packages from a repository.
Under the hood, apt uses DEB packages, which can be installed with dpkg.
If you're targeting a system that uses apt/deb, you can build DEB packages whenever you release a version to allow people to install your software.
You can also go a step beyond and release your DEB packages in a Personal Package Archive.
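To make the DEB mechanics concrete, here is a minimal sketch of hand-rolling a package with dpkg-deb; every name, path, and field value below is made up for illustration:

mkdir -p myapp_1.0-1/DEBIAN myapp_1.0-1/usr/bin
cp build/myapp myapp_1.0-1/usr/bin/
cat > myapp_1.0-1/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0-1
Architecture: amd64
Maintainer: Your Name <you@example.com>
Depends: libc6
Description: Example application
EOF
dpkg-deb --build myapp_1.0-1          # produces myapp_1.0-1.deb
sudo apt install ./myapp_1.0-1.deb    # installs it and resolves Depends

In practice you would let CPack or debhelper generate this layout, but the Depends: field is where you reference the distribution packages your app needs instead of bundling them.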
You would typically NOT download and install source packages. Instead you should generally rely on the libraries and development packages of the distribution. When building your own package, you would typically just reference the packages or files that your package depends on. Then you build your own package and you're done. Upon installation of your package, all dependencies will automatically be resolved in an appropriate order.
What exactly needs to be done depends on the package management system, but generally the above statements apply. Be advised: package management is apparently pretty hard, because so many third-party developers screw it up.

C: Clarity needed on creating/using header files and Cython, Cythonize setup.py, MSVC, & GCC issues?

Hello all,
I am relatively new to Cython and very new to C but do have some programming experience with Python, Visual Basic and Java.
For my current project, my machine is running Windows 10 Pro 1909 x64, Python 3.7.9 x64, and Cython 0.29.21, and my ultimate goal is to create an EXE file with all the modules included.
I have not included any cdef statements or such like at this time and I plan to add these incrementally. Essentially what I am doing at the moment is at the proof-of-concept stage to show that I can compile and run current and future code without issues.
I have a __main__ module stored in the root project folder, and my other (some very large) Python modules, renamed as .pyx files, stored in an 'includes' folder; they handle different types of files (.csv, .json, .html, .xml, etc.), each with its own characteristics and method of extraction.
As I understand it, the header files contain function definitions which are then called upon as needed to act as a bridge between the subroutines and the main module. I have not created any header files at this time as I need clarity on a few points.
I am also having trouble Cythonizing with setup.py (setuptools) through MSVC and GCC.
Below is a discussion of the steps taken so far regarding setup.py, GCC, and running directly from the prompt, with my main questions at the end.
Step 1
My first attempt at compiling the code is to prepare a setup.py file from an elevated command prompt.
from setuptools import Extension, setup
from Cython.Build import cythonize

extensions = [
    Extension("main", ["__main__.pyx"],
              include_paths=r"C:\path\to\parsing_company_accounts_master_cython\includes"),
]

setup(
    name="iXBRLConnect",
    ext_modules=cythonize(extensions, compiler_directives={'language_level': "3"}),
)
However, in Python 3.7.9 x64, I get the following output.
python setup.py bdist
running bdist
running bdist_dumb
running build
running build_ext
building 'main' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
I get this same error with all versions of Python, and after installing many variants of the Build Tools starting from 2015, when running in an elevated x64 Native Tools Command Prompt (any version of VS or the standalone Build Tools).
Searching on this site points to many different SDKs, libraries, and so forth that need to be added, but after following many answers and numerous restarts I am still unable to get setup.py to run.
I CAN compile with VS Community Edition as a GUI, but all efforts seem to be confounded when using the command line (which I prefer for no other reason than keeping a lean installation). It's not clear why the prompt route does not work.
Step 2
Not to be outdone, I attempt to install GCC via MinGW-w64 (https://wiki.python.org/moin/WindowsCompilers), an alternative compiler that is supported up to Python 3.4.
Noting that Python 3.4 is past end of life, I uninstall Python 3.7.9 x64, install 3.4, and reinstall my pip site-packages.
However, installing BeautifulSoup4 gives me this message:
RuntimeError: Python 3.5 or later is required
I would take the EOL issue for Python 3.4 with a large pinch of salt, but BS4 is a key library for my project, so this is pretty much a showstopper.
Step 3
Finally, I attempt to build the files directly on the command line.
First, I move my other .pyx modules (9 in total) into "c:\path_with_spaces\to\includes", keeping __main__.pyx in the main project folder, then run the next command from the project folder.
cython -3 --annotate --embed=main __main__.pyx --include-dir "c:\path_with_spaces\to\includes"
Questions
So, with all the above said and done (phew!), here are the points I need clarity on:
Q1: It seems to me that the 'include_paths'/'include-dir' arguments specify additional directories only for creating new C files - I presume this is because there are no header files alongside the existing *.pyx modules? [Initially, I naively thought Cython would automatically generate the headers and .c files; instead, nothing at all - neither .c nor .h - is generated for them.] Is there something wrong with my command-line syntax for '--include-dir', given that the .c files should have been generated regardless and I would just 'slot' the header files in? There is no error to say so. Or are the included files just meant to be read, with no other action taken on them, as you would expect from a library file?
Q2: As I continue to learn, it is increasingly clear that the header files need to be prepared in advance, according to this: https://cython.readthedocs.io/en/latest/src/userguide/external_C_code.html and this: http://hplgit.github.io/primer.html/doc/pub/cython/cython-readable.html However, as far as I can ascertain from their examples (unless I am looking at the wrong thing), they only call their modules from the main module at some point. Taking the last link, I am not clear about 'dice6_cwrap' in the dice6_cwrap.pyx example (I think it should be referenced in the main module, but it is not directly shown in this example). Also, might I need other files, perhaps a manifest of some sort?
Q3: In partial answer to Q2, I think I can 'chain' modules together as explained here: How does chain of includes function in C++? This is important to me because the way my code has worked up to now is to load each module (depending on what files are found) and then run through the modules in a 'chain' sequence: first parse all elements into a soup object, then run through each line element, and finally extract each attribute and insert it into a common database. In practice that can mean up to 8 'links' in total, counting from the 'start' method in the submodule and depending on the attribute in question. FYI, some of the modules also use pandas, numpy, and multiprocessing. Thinking aloud - including header files, that means prepping 16 files? Eww! (BUT, with a little luck and fingers crossed - speed gains from C compilation vs Python interpretation, other bottlenecks permitting.)
Apologies for the waffle; I welcome your thoughts on how I can move forward on this.
Thanks in advance.

Linker directory for Qt5

I want to run an application based on Qt5 shared objects.
Although I have apt-installed qt5-default, qttools5-dev and qttools5-dev-tools, I get the errors below:
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5: version `Qt_5.7' not found
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5: version `Qt_5' not found
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5: version `Qt_5' not found
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5: version `Qt_5' not found
I have also tried changing some environment variables such as LD_LIBRARY_PATH and DYLD_LIBRARY_PATH, with no success!
What do you suggest?
When you built your application, which version of Qt5 did you build against? You can see this in QtCreator by looking at the currently selected kit.
If you just installed QtCreator from a binary, it ships with its own set of Qt5 shared libraries that your application is linked against; however, your OS's versions of those libraries (those installed from apt-get and similar) may not match.
When you try to run the application on its own outside QtCreator, it may try to link against the OS versions of the libs, which are usually much older.
There are many ways to resolve this. One way, preferable if you don't need the newest version of Qt, is simply building against the Qt libs supplied by the OS. You can do this by creating a new kit that specifies to build against the OS libraries, following this procedure.
Another way is shipping the shared libraries that you used from QtCreator together with the application, so that those override the OS ones. On Linux the dynamic linker won't search the executable's directory by itself, so you also need to point it there, for example with an rpath of $ORIGIN or a wrapper script that sets LD_LIBRARY_PATH; that way your bundled libraries are found before the ones under /usr/lib/whatever etc.
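A minimal sketch of that approach, assuming a hypothetical binary called myapp (the Qt path is made up; use wherever your QtCreator installation keeps its Qt libraries):

ldd ./myapp | grep Qt    # check which Qt libraries the binary currently resolves
cp /path/to/Qt/5.7/gcc_64/lib/libQt5Core.so.5 .    # likewise Gui, Widgets, ...
cat > run_myapp.sh <<'EOF'
#!/bin/sh
# prepend the app's own directory so its bundled Qt libs are found first
DIR="$(dirname "$(readlink -f "$0")")"
LD_LIBRARY_PATH="$DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" exec "$DIR/myapp" "$@"
EOF
chmod +x run_myapp.sh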
Yet another way is to build your own static version of Qt and link with that. This has some benefits and some drawbacks. This is an advanced topic, so I won't go into detail (you can see here). But in this case the Qt libs are built into your app and will not depend on any external Qt libs version.

cmake install multiple versions of the same library

I am trying to come up with a scheme for my libraries that is coherent and usable/reusable.
I work in a team with continuous integration, but sometimes I need to use an old version of the same library, because some parts of the software have not been updated to use the new version.
I'm actually in the middle of a headache trying to understand how to use CMake to get something like this:
PATH/Library/Processor/Library_X/Version/static_library_and_includes
Where Library is a common name where to put my stuff
Processor could be attiny24, atmega, lxpXXXX, etc
Library_X is the name of the library
Version a progressive number from 0 to X
static_library_and_includes the static libraries built within that cmake module and the include files needed for using it.
How can I do this using cmake?
I work with different microprocessors, cross-compiling with gcc. This is not a problem.
I work with static libraries; this is not a problem.
I can install them in the right directory. Not a problem.
I can't get the executable to link to the right .a file. Sometimes cmake picks the right one, sometimes not.
Can you please give me a hint on how you guys do it?
Thanks in advance,
Andrea
See the search paths here: https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure .
CMake will find packages in directories named name*, so you can install to <prefix>/FizzBuzz-1.0.0 and <prefix>/FizzBuzz-2.0.0.
As long as each has a correct ConfigVersion.cmake file, it should do what you want.
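A minimal sketch of what that can look like, assuming a package named FizzBuzz that installs a proper CMake package configuration (the names and prefixes here are hypothetical):

cmake -S FizzBuzz-1.0.0 -B build-1.0.0 -DCMAKE_INSTALL_PREFIX=/opt/FizzBuzz-1.0.0
cmake --build build-1.0.0 --target install
cmake -S FizzBuzz-2.0.0 -B build-2.0.0 -DCMAKE_INSTALL_PREFIX=/opt/FizzBuzz-2.0.0
cmake --build build-2.0.0 --target install

A consuming project can then pin the version it needs with find_package(FizzBuzz 1.0.0 EXACT REQUIRED) and configure with -DCMAKE_PREFIX_PATH=/opt, letting the name* search rule pick the matching directory.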

What is better: downloading libraries from repositories or installing from *.tar.gz?

gcc 4.4.4 c89 Fedora 13
I am wondering which is better. To give you a couple of examples: the Apache Portable Runtime (apr) and log4c.
The apr version in my fedora repository is 1.3.9. The latest stable version on the apr website is 1.4.2.
Questions
Would it be better to download from the website and install, or install using yum?
When you install from yum, it can sometimes put things in many directories. When installing from the tarball, you can put the includes and libraries where you want.
For log4c the versions are the same, as this is an old project.
I downloaded log4c using yum. I copied all the includes and libraries to my development project directory.
i.e.
project_name/tools/log4c/inc
project_name/tools/log4c/libs
However, I noticed that I had to look for some headers in the /usr/include directory.
Many thanks for any suggestions,
If the version in your distribution's package repository is recent enough, just use that.
Advantages are automatic updates via your distribution, easy and fast installs (including the automatic fetching and installing of dependencies!) and easy removals of packages.
If you install stuff from .tar.gz by yourself, you have to play distributor yourself - keeping track of security issues and bugs.
Using distribution packages, you still keep an eye on security problems, but the distributor does a lot of the work for you (like developing patches, repackaging, testing, and catching serious issues). Of course, each distributor has a policy for how to deal with different classes of issues in different package repositories. With your own .tar.gz installs you have none of this.
It's an age-old question I think. And it's the same on all Linux distributions.
The package is created by someone - that person has an opinion as to where stuff should go. You may not agree - but by using a package you are spared chasing down all the dependencies needed to compile and install the software.
So for full control: roll your own - but be prepared for the work involved.
Otherwise, use the package.
My view:
Use packages until it's impossible to do so (conflicts, compile parameters needed, ...). I'd much rather spend time getting the software to work for me than spend time compiling.
I usually use the packages provided by my distribution if they are of a new enough version. There are two reasons for that:
1) Someone will make sure that I get new packages if security vulnerabilities in the old ones are uncovered.
2) It saves me time.
When I set up a development project, I never create my own include/lib directories unless the project itself is the authoritative source for the relevant files I put there.
I use pkg-config to provide the locations of the necessary libraries and include files to my compiler. pkg-config uses .pc files as its source of information about where things are supposed to be, and these are maintained by the same people who create the packages for your distribution. Some libraries do not provide this file, but an alternative '-config' script instead. I'll provide two examples:
I'm not running Fedora 13, but here is an example on Ubuntu 10.04:
*) Install liblog4c-dev
*) The command "log4c-config --libs" returns "-L/usr/lib -llog4c" ...
*) The command "log4c-config --cflags" returns "-I/usr/include"
And for an example using pkg-config (I'll use SDL for the example):
*) Install libsdl1.2-dev
*) The command "pkg-config sdl --libs" returns "-lSDL"
*) The command "pkg-config sdl --cflags" returns "-D_GNU_SOURCE=1 -D_REENTRANT -I/usr/include/SDL"
... So even if another distribution decides to put things in different paths, there are scripts that are supposed to give you a reliable answer as to where things are - so things can be built on most distributions. Autotools (automake, autoconf, and the like) and cmake are quite helpful for making sure you don't have to deal with these problems.
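To tie this to an actual build, the output of these scripts is normally substituted straight into the compile line; a minimal sketch (the source file name is made up):

gcc main.c $(log4c-config --cflags) $(log4c-config --libs) -o main
gcc main.c $(pkg-config sdl --cflags --libs) -o main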
If you want to build something that has to work with the Apache that's included with Fedora, then it's probably best to use the apr version in Fedora. That way you get automatic security updates etc. If you want to develop something new yourself, it might be useful to track upstream instead.
Also, the headers that your distro provides should normally be found by gcc & co. without you needing to copy them, so it doesn't matter where they are stored by yum/rpm.
