The LoadString function defined in the Windows API can be used to load strings from a resource in a DLL or EXE.
What is the LoadString equivalent function in Linux?
As noted in the comments, Linux has no single operating-system facility for extracting resources from executables. There are, however, multiple options on Linux for internationalization (i18n) and localization (l10n) that may address your requirements.
Depending on your goals (externalization of messages, support for i18n, ...), similar functionality exists in different programming languages:
Java has resources (which can be added into JAR files),
glibc provides gettext (message catalogs compiled from external ".po" files). See https://en.wikipedia.org/wiki/Gettext
Many scripting environments (Python, Perl) provide interfaces to gettext via modules.
Most GUI frameworks (GNOME, Xt/X11, ...) support external resources.
As a side note, it is possible to implement a "LoadString"-like lookup, assuming the messages are compiled into the executable (as "C" code or similar), using dlsym dynamic symbol lookup. This is probably a last-resort option.
Related
This might be an oddball question: I have a C library that needs to read in a relatively large configuration file (10MB). The configuration files are static and preferably should not be read or viewed by casual library users. Suppose I don't care about the distribution size; what would be the best way to embed such info? I thought about encrypting it in some form and decrypting it on the fly, but then I would have to deal with cleanup, and it doesn't add much beyond obfuscation. Any suggestions would be appreciated.
C does not, itself, provide any support for this at all. You can do it using third-party libraries or operating system features, but it's highly platform/environment-specific.
In Linux, with ELF executables, you can use the ELF Resource Tools to embed resources.
In Windows, the Windows Resource Compiler embeds resources into PE executables. It is integrated very well with Visual Studio, but can be used separately.
Both of the above (and any embedded resource solution on any platform) rely on:
Resource support in the executable format (ELF, PE, etc.)
Library support to extract resources at runtime
Compiler/linker support to embed resources at compile time
I am developing a simple tool that will be used across a variety of platforms (mostly Solaris, Linux and HP-UX). The tool relies on the module Proc::ProcessTable however I would like to avoid having to build/install the module across all the systems it will be used on.
Rather, I would like to 'embed' the Proc::ProcessTable code inside my tool. The result I am seeking is to have a single file that will work in all systems, without having to install the module separately.
Is this possible at all? Embedding a Perl-only module would be trivial, but this module compiles some OS-specific C code. Assuming I could compile that code on each of the OSes I need, how would I go about including that pre-compiled C code inside my Perl script in order to make the embedded module work?
I would like to avoid having to build/install the module across all the systems it will be used on
Set up a local build system/farm, and produce packages (e.g. RPM) for the target operating systems. One prerequisite is that you turn your tool into a CPAN-ready distribution, and mark Proc::ProcessTable as a run-time dependency.
I'm a bit naive when it comes to application development in C. I've been writing a lot of code for a programming language I'm working on and I want to include stuff from ICU (for internationalization and Unicode support).
The problem is, I'm just not sure if there are any conventions for including a third-party library. For something like readline, where lots of systems are probably going to have it installed already, it's safe to just link to it (I think). But what if I wanted to include a version of the library in my own code? Is this common, or am I thinking about this all wrong?
If your code requires 3rd party libraries, you need to check for them before you build. On Linux, at least with open-source, the canonical way to do this is to use Autotools to write a configure script that looks for both the presence of libraries and how to use them. Thankfully this is pretty automated and there are tons of examples. Basically you write a configure.ac (and/or a Makefile.am) which are the source files for autoconf and automake respectively. They're transformed into configure and Makefile.in, and ./configure conditionally builds the Makefile with any configure-time options you specify.
Note that this is really only for Linux. I guess the canonical way to do it on Windows is with a project file for an IDE...
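A minimal configure.ac sketch for the ICU case. The project name and version are placeholders; the pkg-config module names (icu-uc, icu-i18n) are the ones ICU ships, though some installs may only provide the older icu-config script:

```m4
AC_INIT([mytool], [1.0])          dnl hypothetical project name/version
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
dnl Abort ./configure if ICU is missing; exports ICU_CFLAGS and ICU_LIBS
dnl for use in Makefile.am.
PKG_CHECK_MODULES([ICU], [icu-uc icu-i18n])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```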
If it is a .lib with no run-time library dependencies, it gets compiled into your code. If you need to link against dynamic libraries, you will have to ensure they are present: provide an installer, or point the user to where they can obtain them.
If you are talking about shipping your software off to end users and are worried about dependencies - you have to provide them correct packages/installers that include the dependencies needed to run your software, or otherwise make sure the user can get them (subject to local laws, export laws, etc, etc, etc, but that's all about licensing).
You could build your software and statically link in ICU and whatever else you use, or you can ship your software and the ICU shared libraries.
It depends on the OS you're targeting. For Linux and Unix systems, you will typically see dynamic linking, so the application uses the library that is already installed on the system. If you do this, it's up to the user to obtain the library if they don't already have it. Package managers on Linux will handle this for you if you package your application in the distro's package format.
On Windows you typically see static linking, which means the application bundles the library and uses that specific version. Many different applications may use the same library but each include their own version, so you can have many copies of the library floating around on your system.
The problem with shipping a copy of the library with your code is that you don't get the benefit of the library's maintainers' bug fixes for free. Obscure, small, and unsupported libraries are generally worth linking statically. Otherwise I'd just add the dependency and ensure that whatever packages you ship indicate it appropriately.
I am looking into making a C program which is divided into a core and extensions. These extensions should allow the program to be extended by adding new functions. So far I have found C-Pluff, a plugin framework that claims to do this. If anybody has other ideas or references I can check out, please let me know.
You're not mentioning a platform, and this is outside the support of the language itself.
For POSIX/Unix/Linux, look into dlopen() and friends.
In Windows, use LoadLibrary().
Basically, these will allow you to load code from a platform-specific file (.so and .dll, respectively), look up addresses to named symbols/functions in the loaded file, and access/run them.
I tried to limit myself to the low-level stuff, but if you want to have a wrapper for both of the above, look at glib's module API.
The traditional way on Windows is with DLLs, but this is somewhat obsolete. If you want users to actually extend your program (as opposed to your developer team releasing official plugins), you will want to embed a scripting language like Python or Lua, because they are easier to code in.
You can extend your core C/C++ program using some script language, for example - Lua
There are several C/C++ - Lua integration tools (toLua, toLua++, etc.)
Do you need to be able to add these extensions to the running program, or at least after the executable file is created? If you can re-link (or even re-compile) the program after having added an extension, perhaps simple callbacks would be enough?
If you're using Windows you could try using COM. It requires a lot of attention to detail, and is kind of painful to use from C, but it would allow you to build extension points with well-defined interfaces and an object-oriented structure.
In this usage, extensions label themselves with a 'Component Category' defined by your app, which allows the core to find and load them without having to know where their DLLs are. The extensions also implement interfaces that are specified using IDL and are consumed by the core.
This is old tech now, but it does work.
What is the relationship between the Windows API and the C run time library?
In a nutshell: The Windows API contains all the functions defined specifically for Windows. The C run-time library contains all the functions that are required by standard C.
The physical libraries that implement these functions may be a single file (library), split across two separate libraries or split into many libraries, depending on the operating system and the actual API/service you are using.
For example, when creating files, the C standard includes the function:
fopen
to open and create files, etc., while the Win32 API (for example) defines functions like:
CreateFile
to create and manipulate files. The first one will be available wherever a standard C run-time library is available while the second one will only be available on a Windows machine that supports the Win32 API.
If you mean the standard C library (msvcrt.dll, I assume), then not much at all. The majority of the Windows API is implemented in separate DLLs (much of it in user32.dll and kernel32.dll). In fact, some Windows API functions are just thin wrappers around system calls, where the actual work is done in the kernel itself.
Also, as ocdecio said, it is entirely reasonable to assume that certain parts of the C standard library are implemented using windows APIs. And for certain cases like string manipulations, vice versa.
EDIT: Since the question of which DLLs are implemented in terms of others has come up, I've checked with Dependency Walker, and here are my findings:
kernel32.dll depends on:
ntdll.dll
user32.dll depends on:
gdi32.dll
kernel32
ntdll.dll
advapi32.dll
msimg32.dll
powerprof.dll (this dll references msvcrt.dll for some string functions)
winsta.dll
msvcrt.dll depends on:
kernel32.dll (yes it does have imports for CreateFileA)
ntdll.dll
Based on this, I believe that msvcrt is built on top of the Win32 API.
Win32 is a completely different beast to the CRT.
CRT is something that needs to be linked into your project when you use C or C++ functions/features (such as printf or cout).
Win32 is a set of libraries that need to be linked into your project when you use Windows features (like GetWindowText).
What they are:
The Windows API is the API exported by the Microsoft Windows[TM] Operating System
The C run-time library is the "standard library" shipped with the C compiler by the compiler vendor, and it is available on whichever operating system (for example, Unix) the compiler targets
What their relationship is:
They are distinct, but both equally available to C++ applications running on Windows
On Windows, the C standard library is implemented by invoking the underlying Windows API (to allocate memory, open files, etc.).
The C run-time library is built on top of the Windows API.
Unix system calls are analogous to the Windows API.