I have created a portable class library with the following configuration:
After this I added my library to SmartAssembly 6.8 and tried to build, but the following error occurred:
How can I avoid this? I see that SA found the right mscorlib.dll, but why does it need System.Console?
It's most likely an issue with SmartAssembly. I'd contact their customer service.
Depending on the selected platforms, Portable Class Libraries expose different assemblies. In many cases, tools like SmartAssembly that inspect or rewrite the assemblies make hard-coded assumptions about which assemblies declare which types. In the past, that often worked because those assumptions happened to match reality.
The correct way would be to resolve types against the same set of assemblies the IDE/the compilers are referencing.
First off, I know that any such library would need at least some interface shimming to interact with system calls or board support packages or whatever (for example, newlib has a limited and well-defined interface to the OS/BSP that doesn't assume intimate access to the OS). I can also see where something similar will be needed for interacting with the compiler (e.g. details of some things left as "implementation defined" by the standard).
However, most libraries I've looked into end up being far more wedded to the OS and compiler than that. They seem to assume access to whatever parts of a specific OS environment they want, and their compiler interactions are practically in collusion with the compiler implementation itself.
So, the basic questions end up being:
Would it be possible (and practical) to implement a full C standard library that only has a very limited and well defined set of prerequisites and/or interface for being built by any compiler and run on any OS?
Do any such implementations exist? (I'm not asking for a recommendation of one to use, though I would be interested in examples that I can evaluate myself.)
If either of the above answers is "no"; then why? What fundamentally makes it impossible or impractical? Or why hasn't anyone bothered?
Background:
What I'm working on that has led me to this rabbit hole is an attempt to make a fully versioned, fully hermetic build chain. The goal is that I should be able to build a project without any dependencies on the local environment (beyond access to the needed source control repo and, say, "a valid POSIX shell" or "language-agnostic-build-tool-of-choice is installed and runs"). Given those dependencies, the build should do the exact same thing, with the same compiler and libraries, regardless of which versions of which compilers and libraries are or are not installed. Universal, repeatable, byte-identical output is the target I want to move toward.
Causing the compiler-of-choice to be run from a repo isn't too hard, but I've yet to find a C standard library that "works right out of the box". They all seem to assume some magic undocumented interface between them and the compiler (e.g. __gnuc_va_list), or at best want to do some sort of config-probing of the hosting environment, which would be something between useless and counterproductive for my goals.
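To make that coupling concrete, here is a sketch (names prefixed `my_` are mine, not from any real libc) of the kind of compiler dependence a typical `<stdarg.h>` carries: the varargs machinery cannot be expressed in portable C at all, so the header just forwards to compiler builtins.

```c
#include <assert.h>

/* Sketch of the compiler coupling hidden in a typical <stdarg.h>:
 * the libc header delegates to compiler builtins because there is
 * no portable-C way to walk a variadic argument list. */
#if defined(__GNUC__) /* GCC and Clang */
typedef __builtin_va_list my_va_list;
#define my_va_start(ap, last) __builtin_va_start(ap, last)
#define my_va_arg(ap, type)   __builtin_va_arg(ap, type)
#define my_va_end(ap)         __builtin_va_end(ap)
#else
#error "no portable fallback exists -- exactly the problem described above"
#endif

/* A variadic function built on the shimmed interface. */
int sum_ints(int count, ...)
{
    my_va_list ap;
    int total = 0;
    my_va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += my_va_arg(ap, int);
    my_va_end(ap);
    return total;
}
```

This is why "just recompile the libc with any compiler" breaks down: the library's headers are, in effect, part of the compiler's contract.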
This looks to be a bottomless rabbit hole that I'm reluctant to start down without first trying to locate alternatives.
I'm looking for a way to introduce a formal public API into a program (PostgreSQL) that presently lacks any formal boundary between extension-accessible interfaces and those for internal use only.
It has a few headers that say they're for internal use only, and it uses a lot of statics, but there's still a great deal that extensions can just reach into ... but shouldn't. This makes it impractical to offer binary compatibility guarantees across even patch releases. The current approach boils down to "it should work but we don't promise anything," which is not ideal.
It's a large piece of software and it's not going to be practical to introduce a hard boundary all at once since there's such a lively extension community outside the core codebase. But we'd need a way to tell people "this interface is private, don't use it or speak up and ask for it to be declared public if you need it and can justify it".
Portably.
All I've been able to come up with so far is to add macros around gcc's __attribute__ ((deprecated)) and MSVC's __declspec(deprecated) that are empty when building the core server, but defined normally when building extensions. That'll work, but "deprecated" isn't quite right; it's more a case of "use of non-public internal API".
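A sketch of that macro arrangement (the names PG_INTERNAL_API and BUILDING_CORE_SERVER are illustrative, not actual PostgreSQL symbols):

```c
#include <assert.h>

/* pg_api_visibility.h (sketch): expands to nothing when compiling
 * the server itself, and to a deprecation-style warning when
 * compiling extensions. */
#if defined(BUILDING_CORE_SERVER)
#define PG_INTERNAL_API /* core may use internal interfaces freely */
#elif defined(__GNUC__)
#define PG_INTERNAL_API __attribute__((deprecated("non-public internal API")))
#elif defined(_MSC_VER)
#define PG_INTERNAL_API __declspec(deprecated("non-public internal API"))
#else
#define PG_INTERNAL_API /* unknown compiler: no warning, still compiles */
#endif

/* Example internal function carrying the annotation; calling it from
 * extension code produces a compile-time warning, not an error. */
PG_INTERNAL_API int internal_refcount_bump(int refcount);

int internal_refcount_bump(int refcount)
{
    return refcount + 1;
}
```

The message argument to the attribute (supported by GCC 4.5+, Clang, and MSVC) at least lets the warning say "internal API" rather than "deprecated", even though the diagnostic category stays the same.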
Is there any better way than using deprecated annotations? I'd quite like to actually be able to use them for deprecating use of functionality in-core too, as the server grows and it becomes impractical to always change everything all in one sweep.
Basically, I want to separate some common functionality from existing projects into a separate library project, but also allow a project to remain cross-platform when I include this library.
I should clarify that when I say "cross-platform" I'm primarily concerned with compiling for multiple CPU architectures (x86/x86_64/ARM).
I have a few useful functions which I use across many of my software projects. So I decided that it was bad practice to keep copying these source code files between projects, and that I should create a separate library project from them.
I decided that a static library would suit my needs better than a shared library. However, it occurred to me that the static library would be platform dependent, and by including it with my projects I would make those projects platform dependent as well. This is clearly a disadvantage over including the source code itself.
Two possible solutions occur to me:
Include a static library compiled for each platform.
Continue to include the source code.
I do have reservations about both of the above options. Option 1 seems overly complex/wasteful. Option 2 seems like bad practice, as it's possible for the "library" to be modified per project and become out-of-sync; especially if the library source code is stored in the same directory as all the other project source code.
I'd be really grateful for any suggestions on how to overcome this problem, or information on how anyone else has overcome it.
You could adopt the standard approach of open source projects (even if your project is not open source). There would be one central point where one can obtain the source code, presumably under revision control (subversion, git...). Anyone who wishes to use the library checks out the source code, compiles it (a Makefile or something similar should be included), and they are all set. If someone needs to change something in the library, they do so, test their changes, and send you a patch so that you can apply the change to the project (or not, depending on your opinion of the patch).
Context: C language, 8 bit microprocessor
We have identified components which can be reused between projects (products). But I cannot work out which is the best infrastructure for handling the reusable components.
Two possibilities I have found so far:
Static libraries
Shared files in subversion
Both shared libraries and shared source let you share the common code among projects. Libraries are the better of the two alternatives, so you should use them if they are available on your platform. This lets you guard the library source from inadvertent modifications, which can happen when code pulled from source control is changed locally.
The only problem with sharing code through libraries may be lack of support for source-level debugging of library code by some of the tools in your embedded tool chain (e.g. debuggers attached to in-circuit emulators). In this case reusing code through the source may be acceptable. If possible, you should guard the source from modification through the file system access controls.
If you have reusable components, libraries are the way to go.
It's easier to maintain and you have a clear interface. It's also easier to incorporate into new projects.
You can easily do individual unit tests on library code
Less risk of copy-and-paste errors.
Programmers are more aware that this code is shared when they have to use it from a library.
Several good arguments have been made for the library approach.
However, there's at least one good argument for re-building (perhaps from the same source repository) each time you build a dependent project: the ability to apply target-, project-, or development-stage-specific compile settings to all of the code, including the shared portion.
At my company, we used both approaches at the same time:
We do two checkouts: one for the project, the other for the library.
When the project needs to be compiled (via Makefile), we compile the library first.
The library is then linked as if it were a binary-only library.
When we release a project, we check whether the other projects still compile against the new library.
When we release a project, we tag the library along with the project.
This way you get the best of both worlds:
common code is shared: all projects benefit from bug fixes and improvements
source code is always fully available for understanding and debugging
source code availability encourages library maintenance (fixes, improvements, and experiments)
the library boundaries impose a more API-like approach: clearer interface and project embedding
you can pass compile-time flags to the library to build different flavors
you can always go back in time if needed without library-vs-project mismatching hassles
if you are in a hurry, you can put off the library check.
The only drawback to this approach is that developers have to know what they are doing. If they modify the library, they should be aware that the change will impact all projects. But you are already using a version control system, and if you use branches and communication within your team is good, there should be no problem at all.
I am playing around with some C code, writing a small webserver. The purpose of what I am doing is to write the server using different networking techniques so that I can learn more about them (multithread vs multiprocess vs select vs poll). Much of the code stays the same, but I would like the networking code to be able to be "swapped out" to do some performance testing against the different techniques. I thought about using ifdefs but that seems like it will quickly ugly up the code. Any suggestions?
Dynamic library loading? e.g. dlopen in Linux.
Just craft an API common to the component that requires dynamic loading.
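A minimal sketch of the dlopen approach. To keep it runnable anywhere, libm and its cos() symbol stand in for a real backend .so and a function from your common networking API (on older glibc, link with -ldl):

```c
#include <assert.h>
#include <dlfcn.h>
#include <stddef.h>

/* Sketch: resolve one function from a shared object at runtime.
 * In the webserver, "libm.so.6"/"cos" would instead be one of your
 * backend libraries (select, poll, threads, ...) and a symbol from
 * the API they all share. */
typedef double (*unary_fn)(double);

unary_fn load_backend_symbol(const char *soname, const char *symbol)
{
    void *handle = dlopen(soname, RTLD_LAZY);
    if (handle == NULL)
        return NULL;                 /* library not found */
    /* Casting the void* from dlsym is the usual POSIX idiom. */
    return (unary_fn)dlsym(handle, symbol);
}
```

Swapping implementations then just means passing a different .so name, with no recompile of the server itself.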
I prefer pushing "conditional compilation" from the C/C++ source into makefiles, i.e. having the same symbols produced from multiple .c/.cpp files but only linking in the objects selected by the build option.
Also take a look at nginx if you haven't already - might give you some ideas about web server implementation.
Compile the networking part into its own lib with a flexible interface. Compile that lib as needed into the various wrappers. You may even be able to find a preexisting lib that meets your requirements.
Put the different implementations of the networking-related functions into different .c files sharing a common header, and then link with the one you want to use. Starting from this, you can make your Makefile create a different executable for each of the implementations you have done, so you can just say "make httpd_select" or "make httpd_poll", etc.
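As a sketch of what that common header and one backend might look like (the names are mine, not from any real project; each backend .c file defines the same symbols, and the Makefile links exactly one of them per executable):

```c
#include <assert.h>
#include <poll.h>
#include <string.h>
#include <unistd.h>

/* net_backend.h (sketch): the interface every backend implements. */
const char *backend_name(void);
int backend_wait_readable(int fd, int timeout_ms);

/* backend_poll.c (sketch): the poll()-based implementation.
 * backend_select.c, backend_threads.c, etc. would define these same
 * two symbols, and "make httpd_poll" links only this object file. */
const char *backend_name(void) { return "poll"; }

int backend_wait_readable(int fd, int timeout_ms)
{
    struct pollfd p = { .fd = fd, .events = POLLIN, .revents = 0 };
    return poll(&p, 1, timeout_ms);  /* >0 ready, 0 timeout, <0 error */
}
```

Because the symbol names are identical across backends, the rest of the server compiles once and is unaware of which implementation it was linked against.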
Especially for benchmarking to find the best approach, doing the selection at the compile/link level will probably give you more reliable results than shared libraries or function pointers, as those can introduce extra overhead at runtime.