I am writing a simple application that links to libxml2 on the system. It works for almost all users, but one user reported this error when reading a particular xml file from the web:
Unsupported encoding ISO8859-1
This error typically indicates that libxml2 was built --without-iconv. Is there any way I can explicitly test if the libxml2 dynamic library on the system has iconv support?
I can think of two ways to do this:
Write a short, simple test program that uses the iconv-dependent encoding support of libxml2; it should behave differently if iconv is not present (see the sketch below). This is what GNU configure scripts do - they test for features being present by exercising them.
This one is a hack - find a symbol that is present in libxml2 only when it was built with iconv, and use a utility like nm to list the symbols in the library file (example command below).
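For the first approach, here is a minimal sketch (my assumptions: a libxml2 development install; compile with something like gcc test.c $(xml2-config --cflags --libs)). It asks libxml2 for a handler for the exact spelling "ISO8859-1", which is not one of the built-in encoding names and is normally only resolved through iconv:

#include <stdio.h>
#include <libxml/encoding.h>

int main(void)
{
    /* Built-in handlers cover UTF-8, UTF-16, ASCII and "ISO-8859-1"
     * (with the dash); the alias "ISO8859-1" normally needs iconv. */
    xmlCharEncodingHandlerPtr h = xmlFindCharEncodingHandler("ISO8859-1");
    if (h == NULL) {
        printf("No handler for ISO8859-1: probably built without iconv\n");
        return 1;
    }
    printf("Handler found: iconv support appears to be present\n");
    return 0;
}

For the second approach, something like

nm -D /usr/lib/libxml2.so | grep -i iconv

should show iconv-related symbols (for example an undefined reference to iconv_open) only when the library was built against iconv; the exact library path varies by distribution.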
Or just avoid the issue entirely by packaging a known-good libxml2 with your application.
I have a program written in C that uses dlopen for loading plug-in modules. When a library is dynamically loaded, it runs constructor code which registers a pointer to a structure of function implementations with the main application, by calling a function the application exports. I want to use an absolute path when specifying the file to dlopen.
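A minimal sketch of that registration mechanism, with hypothetical names (struct plugin_ops, register_plugin). One detail worth noting: the main program must be linked with -rdynamic (or -Wl,--export-dynamic) so that the module's constructor can resolve the exported registration function:

/* plugin_api.h (hypothetical shared header) */
struct plugin_ops {
    int (*work)(int);
};
void register_plugin(const struct plugin_ops *ops); /* exported by the main app */

/* plug-in module: the constructor runs automatically when dlopen()ed */
#include "plugin_api.h"

static int do_work(int x) { return 2 * x; }

static const struct plugin_ops ops = { .work = do_work };

__attribute__((constructor))
static void plugin_init(void)
{
    register_plugin(&ops);
}

/* main-program side, loading by absolute path (link with -ldl on glibc):
 *     void *handle = dlopen("/usr/lib/myapp/plugins/foo.so", RTLD_NOW);
 */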
Then I have another part of the program which takes a file, determines whether it is ELF, looks in the ELF header for a specific section, reads that section, and extracts the pertinent information from it. This way it filters out only the shared libraries which I have previously tagged as plug-in modules.
However, I am still solving the problem of how to discover the modules on the fly (in a portable Linux way, i.e. it should run on Debian, on Fedora, and so on) from the main program. I have been thinking about using ldconfig for this (as the modules will be installed via the distro's packaging system, APT for example). Is there any way to programmatically get the list of known libraries from a C program, other than directly reading the /etc/ld.so.cache file? I was thinking that maybe there is some library which will give me a char** when I ask.
Or maybe there is a better solution to my problem?
(I am a proponent of using standard system components rather than programming one-off solutions which will need support in the future.)
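For what it's worth, here is a rough sketch of the ldconfig idea from the question: rather than parsing the binary /etc/ld.so.cache yourself (its format is not a stable public interface), ask ldconfig to print the cache and read its output. This assumes the usual "name (flags) => path" output format of ldconfig -p:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* "ldconfig -p" prints one line per cached library,
     * e.g. "libz.so.1 (libc6,x86-64) => /lib/libz.so.1". */
    FILE *p = popen("/sbin/ldconfig -p", "r");
    if (p == NULL)
        return 1;

    char line[1024];
    while (fgets(line, sizeof line, p) != NULL) {
        char *arrow = strstr(line, "=> ");
        if (arrow != NULL)
            fputs(arrow + 3, stdout); /* library path, newline included */
    }
    pclose(p);
    return 0;
}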
We are using libxml2 with zlib. There is an option to build libxml2 without zlib as well. What is the difference? Is performance affected?
Yes, you can do that.
There is an explicit configure-time option for excluding zlib.
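With the usual autotools build of libxml2, that is typically:

./configure --without-zlib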
From the libxml2 FAQ:
What other libraries are needed to compile/install libxml2 ?
Libxml2 does not require any other library; the normal ANSI C API should be sufficient (please report any violation of this rule you may find).
However, if found at configuration time, libxml2 will detect and use the following libs:
libz: a highly portable and widely available compression library.
iconv: a powerful character encoding conversion library.
The option itself depends on the compilation environment.
Take a look at this article
http://www.tuan.nguoianphu.com/LibXML2_compile_for_Linux_Solaris_Windows
libxml2 uses zlib for reading and writing compressed files directly, so if you do not need this functionality the dependency can be removed.
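As a concrete illustration, here is a minimal sketch (data.xml.gz is a placeholder file name): a libxml2 built with zlib decompresses gzip-compressed input transparently, while a build without zlib fails to parse the same file.

#include <stdio.h>
#include <libxml/parser.h>

int main(void)
{
    /* With zlib compiled in, libxml2 reads .gz files on the fly;
     * without it, this returns NULL for a compressed file. */
    xmlDocPtr doc = xmlReadFile("data.xml.gz", NULL, 0);
    if (doc == NULL) {
        fprintf(stderr, "parse failed (compressed input unsupported?)\n");
        return 1;
    }
    xmlFreeDoc(doc);
    return 0;
}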
I am developing a tool in C which takes a file as input. I need to compute the SHA1 sum of the file. This tool needs to be compatible with Linux, Mac, and Windows. The files are huge, ranging from 150 MB to 2 GB. I need the tool to compute the SHA at least as fast as the sha1sum Linux utility.
Any suggestions on how I could go about incorporating SHA1? I am wary of using the OpenSSL interface, since the clients for the tool would not necessarily have OpenSSL installed.
OpenSSL is the de facto standard in C. It needn't be installed, since you can ship the library along with your program (or even link it statically).
OpenSSL is BSD-style licensed, so you can even use just their SHA1 code directly in your program (giving credit), though it may be tricky to decouple it from the library.
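As a minimal sketch of that route (this uses OpenSSL's classic SHA1_* API from <openssl/sha.h>, which still works but is marked deprecated as of OpenSSL 3.0; link with -lcrypto): hashing in fixed-size chunks keeps memory use constant even for multi-gigabyte files.

#include <stdio.h>
#include <openssl/sha.h>

/* Stream a file through SHA1 in 64 KiB chunks. */
static int sha1_file(const char *path, unsigned char out[SHA_DIGEST_LENGTH])
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;

    SHA_CTX ctx;
    SHA1_Init(&ctx);

    unsigned char buf[65536];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA1_Update(&ctx, buf, n);

    int err = ferror(f);
    fclose(f);
    if (err)
        return -1;

    SHA1_Final(out, &ctx);
    return 0;
}

int main(int argc, char **argv)
{
    unsigned char md[SHA_DIGEST_LENGTH];
    if (argc != 2 || sha1_file(argv[1], md) != 0) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x", md[i]);
    putchar('\n');
    return 0;
}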
My digest package for R includes short standalone C sources for md5, sha1, sha256, ... which were written by Christophe Devine. It is nice code, and it includes a few tests replicating the reference results from the specifications.
I believe his site no longer exists, but when I checked Google Code Search for it, the same functions seemed to have been included in a number of other open-source projects. You can easily extract these sources from my tarballs, or even directly from the SVN directory of the digest repo at R-Forge.
I'm a bit naive when it comes to application development in C. I've been writing a lot of code for a programming language I'm working on, and I want to include stuff from ICU (for internationalization and Unicode support).
The problem is, I'm just not sure if there are any conventions for including a third-party library. For something like readline, where lots of systems are probably going to have it installed already, it's safe to just link to it (I think). But what if I wanted to include a version of the library in my own code? Is this common, or am I thinking about this all wrong?
If your code requires 3rd-party libraries, you need to check for them before you build. On Linux, at least with open source, the canonical way to do this is to use Autotools to write a configure script that checks both for the presence of libraries and for how to use them. Thankfully this is pretty automated, and there are tons of examples. Basically you write a configure.ac (and/or a Makefile.am), which are the source files for autoconf and automake respectively. They're transformed into configure and Makefile.in, and ./configure conditionally builds the Makefile with any configure-time options you specify (see the sketch below).
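For instance, a minimal sketch for an ICU-using project (hypothetical project name; this assumes ICU ships the pkg-config modules icu-uc and icu-i18n, which modern ICU does):

configure.ac:

AC_INIT([mylang], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
dnl Aborts ./configure with a clear message if ICU is not found
PKG_CHECK_MODULES([ICU], [icu-uc icu-i18n])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

Makefile.am:

bin_PROGRAMS = mylang
mylang_SOURCES = main.c
mylang_CFLAGS = $(ICU_CFLAGS)
mylang_LDADD = $(ICU_LIBS)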
Note that this is really only for Linux. I guess the canonical way to do it on Windows is with a project file for an IDE...
If it is a .lib with no runtime-linked libraries, it gets compiled into your code. If you need to link to dynamic libraries, you will have to ensure they are present: provide an installer, or point the user to where they can obtain them.
If you are talking about shipping your software off to end users and are worried about dependencies, you have to provide correct packages/installers that include the dependencies needed to run your software, or otherwise make sure the user can get them (subject to local laws, export laws, etc., but that's all about licensing).
You could build your software and statically link in ICU and whatever else you use, or you can ship your software and the ICU shared libraries.
It depends on the OS you're targeting. For Linux and Unix systems, you will typically see dynamic linking, so the application uses the library that is already installed on the system. If you do this, it's up to the user to obtain the library if they don't already have it. Package managers on Linux will do this for the user, if you package your application in the distro's package format.
On Windows you typically see static linking, which means the application bundles the library and uses that specific version. Many different applications may use the same library but include their own versions, so you can end up with many copies of the library floating around on a system.
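To make that concrete with ICU (hypothetical file names and paths): dynamic linking picks up whatever copy the system provides, while static linking bakes a specific version into your binary. Note that a static ICU also needs its data archive and, since ICU is implemented in C++, the C++ runtime:

# dynamic: use the ICU already installed on the system
gcc -o myapp main.c $(pkg-config --cflags --libs icu-uc)

# static (sketch): link ICU's archives directly into the binary
gcc -o myapp main.c /usr/lib/libicuuc.a /usr/lib/libicudata.a -lstdc++ -lpthread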
The problem with shipping a copy of the library with your code is that you don't get the library maintainers' bug fixes for free. Obscure, small, and unsupported libraries are generally worth linking statically; otherwise I'd just add the dependency and make sure that whatever packages you ship declare it appropriately.
I am using Ubuntu 10.04, and studying programming of kernel objects.
I have come across some rather complicated structs which I have difficulties reading, so I thought I'd try to find some tool that can help me visualise them.
The only thing I could find so far is VCG, which has a C Struct Visualization Example (shown as a graph image on that page), and that looks like something I'd like to use.
The first thing to note is that the last VCG packaged for Ubuntu is vcg (1.30debian-6) in Hardy - but the .deb package can be downloaded and installed on Ubuntu Lucid without problems.
However, it seems this package is only a VCG viewer (similar to vcgviewer, I'd guess). The vcgviewer page notes:
To generate compiler graph data with newest gcc compilers use:
gcc -g -da -dv -fdump-tree-original-raw -fdump-tree-all-all
So, apparently I'd have to use those switches along with gcc while compiling, to generate .vcg graph files from the C source.
The problem, however, is that I'm building a kernel module which only references the Linux headers, as I want to avoid recompiling the entire kernel as much as I can. And it seems that as soon as I try to use the -fdump-tree-... switches in that context (a kernel module), gcc wants to start compiling the rest of the kernel too (and obviously fails, both at compiling and at generating the .vcg graphs, as I don't have the kernel sources, only the headers).
So my question is: is there a tool that can produce .vcg or .dot graphs of structs, simply using a plain-text header file as input? (It would not have to resolve all dependencies - just those in header files in the same directory.)
EDIT: it is actually not that important for me that the backend is .vcg or .dot in particular, I mentioned them just because I've found them so far; any sort of software that would allow similar struct visualization, regardless of backend, is welcome :)
PS: Note that if you do not want to use VCG viewers for viewing .vcg graphs, you can convert the .vcg format to the .dot format and use graphviz for the visualisation instead. What worked for me is graph-easy (see search.cpan.org for the Perl module), which first got packaged in Ubuntu with the Maverick release as libgraph-easy-perl (though the .deb file can, again, be downloaded and installed without problems on Lucid). libgraph-easy-perl installs a graph-easy script, which then allows you to do things like:
graph-easy test.vcg --as_dot | dot -Tpng -o test.vcg.png
See also "[graphviz-interest] VCG files" and "Diego Novillo - Re: can't find VCG viewer" for another vcg-to-dot script (which, unfortunately, didn't work for me).
I have had good experiences using doxygen for that task. It is designed to create documentation out of annotated source files, but it can already give you a lot of things without the annotations, including various graphs.
Doxygen uses dot for the graph creation.
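For example, a minimal Doxyfile along these lines (a sketch using standard doxygen options) will draw struct relationship graphs from plain, uncommented headers; run doxygen on it and look at the generated collaboration diagrams:

# Document everything, even without special comment blocks
EXTRACT_ALL         = YES
# Use Graphviz dot to render the diagrams
HAVE_DOT            = YES
INPUT               = ./include
FILE_PATTERNS       = *.h
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES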
I've managed to successfully build a kernel module with vcg generation by doing the following:
Creating a hard-linked copy of the kernel source or header directory using cp -al /usr/src/linux-srcdir /tmp/tmp-srcdir, since gcc wants to write to the current working directory.
Adding EXTRA_CFLAGS="-g -da -dv -fdump-tree-original-raw -fdump-tree-all-all" to the make command line, e.g.:
make -C /tmp/tmp-srcdir M=$(pwd) EXTRA_CFLAGS="-g -da -dv -fdump-tree-original-raw -fdump-tree-all-all"
The .vcg files are generated in /tmp/tmp-srcdir.