I'm trying to port my project to another platform and I've found a few differences between this new platform and the one I started on. I've seen the autotools package and configure scripts which are supposed to help with that, but I was wondering how feasible it would be to just have a separate branch for each new platform.
The only problem I see is how to do development on the target platform and then merge in changes to other branches without getting the platform-dependent changes. If there is a way to do that, it seems to me it'd be much cleaner.
Has anyone done this who can recommend/discourage this approach?
I would definitely discourage that approach.
You're just asking for trouble if you keep the same code in branches that can't be merged. It's going to be incredibly confusing to keep track of what changes have been applied to what branches and a nightmare should you forget to apply a change to one of your platform branches.
You didn't mention the language, but whatever it is, use its features to separate the code differences between platforms while keeping a single branch. In C++, for example, you should first use file-based separation: if you have sound code for the Mac, Linux and Windows platforms, create a sound_mac.cpp, sound_windows.cpp and sound_linux.cpp file, each containing the same classes and methods but very different platform-specific implementations. Obviously, you only add the appropriate file to the IDE on the particular platform, so your Xcode project gets the sound_mac.cpp file, while your Visual Studio project uses the sound_windows.cpp file. The files which reference those classes and methods will use #ifdefs to determine which headers to include.
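A rough sketch of what that file-based separation can look like in C++ (the file, class and method names here are purely illustrative):

    // sound.h -- one interface shared by every platform
    #pragma once
    class SoundSystem {
    public:
        void playTone(int frequencyHz, int durationMs);  // same signature everywhere
    };

    // sound_windows.cpp -- only added to the Visual Studio project
    #include "sound.h"
    #include <windows.h>
    void SoundSystem::playTone(int frequencyHz, int durationMs) {
        Beep(frequencyHz, durationMs);                   // Win32 implementation
    }

    // sound_mac.cpp -- only added to the Xcode project
    #include "sound.h"
    void SoundSystem::playTone(int frequencyHz, int durationMs) {
        // Core Audio / AudioToolbox implementation would go here
    }

Where the platform headers themselves differ, callers can pick the right one with #ifdefs:

    #ifdef _WIN32
    #include "sound_windows.h"
    #elif defined(__APPLE__)
    #include "sound_mac.h"
    #else
    #include "sound_linux.h"
    #endif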
You'll use a similar approach for things like installer scripts. You may have a different installer on the Mac than on Windows, but the files for both will be in the branch. Your build script on the Mac will simply utilize the Mac-specific installer files and ignore the Windows-specific files.
Keeping things in one branch and just ignoring what doesn't apply to the current platform allows you to merge back and forth between topic branches and the master, making your life much saner.
Branching to work out compatibility for a target platform is doable. Just be sure to separate changes that aren't specific to the target platform into another branch.
I have a radio chip (connected to an embedded processor) which I have written a library for. I want to develop the protocol to use with the RF chip on a PC (Ubuntu). In order to do so I have copied the header file of my library into a new folder, but created an entirely new implementation in a new C file, which I compile for the PC with gcc. This approach has worked better than expected and I'm able to prototype code that calls the RF lib on the PC and simply copy it right over to the real project with little or no changes.
I do have one small problem. Any changes I make in the library's header file need to be manually copied between the two project folders. Not a big deal, but since this has worked so well, I can see doing things like this again in the future, and would like to link the API headers between the real and "emulated" environments when doing so. I have thought about using git submodules to do so, but I'm not fond of lots of folders in my projects, especially if most of them only contain one or two files each. I could use the C preprocessor to swap in the right code at compile time, but that doesn't cover the changes in my Makefile to call the right compiler with the right flags.
I'm wondering if anyone else has ever done something similar, and what their approach was.
Thanks guys!
Maybe you should create an "rflib" and treat it as an external library that you use within your embedded project.
Develop it on one side and update to the newest version on the other.
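As a sketch (all names here are invented), the shared header would live in the rflib repository and each project would supply its own implementation file:

    /* rflib.h -- the shared API, kept in the rflib repository */
    #ifndef RFLIB_H
    #define RFLIB_H
    #include <stdint.h>

    int rf_init(void);
    int rf_send(const uint8_t *buf, uint16_t len);
    int rf_receive(uint8_t *buf, uint16_t maxlen);

    #endif

    /* rflib_target.c -- the real driver, built only in the embedded project    */
    /* rflib_sim.c    -- the PC stand-in, built only with gcc on the Ubuntu box */

Both projects then check out (or update) the same rflib copy, so a header change made on one side shows up on the other the next time it updates.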
An obvious (but fairly hacky) solution is to use a symlink.
I think the best solution, since they will share so much code, would be to just merge the two projects and have two different makefile targets for the binaries.
Basically, I want to separate some common functionality from existing projects into a separate library project, but also allow a project to remain cross-platform when I include this library.
I should clarify that when I say "cross-platform" I'm primarily concerned with compiling for multiple CPU architectures (x86/x86_64/ARM).
I have a few useful functions which I use across many of my software projects. So I decided that it was bad practice to keep copying these source code files between projects, and that I should create a separate library project from them.
I decided that a static library would suit my needs better than a shared library. However, it occurred to me that the static library would be platform dependent, and that including it with my projects would make those projects platform dependent as well. This is clearly a disadvantage over including the source code itself.
Two possible solutions occur to me:
Include a static library compiled for each platform.
Continue to include the source code.
I do have reservations about both of the above options. Option 1 seems overly complex/wasteful. Option 2 seems like bad practice, as it's possible for the "library" to be modified per project and become out-of-sync; especially if the library source code is stored in the same directory as all the other project source code.
I'd be really grateful for any suggestions on how to overcome this problem, or for information on how anyone else has dealt with it.
You could adopt the standard approach of open source projects (even if your project is not open source). There would be one central point where one can obtain the source code, presumably under revision control (Subversion, git...). Anyone who wishes to use the library checks out the source code, compiles it (a Makefile or something similar should be included), and then they are all set. If someone needs to change something in the library, they do so, test their changes, and send you a patch so that you can apply the change to the project (or not, depending on your opinion of the patch).
Context: C language, 8 bit microprocessor
We have identified components which can be reused between projects (products), but I cannot work out the best infrastructure for handling the reusable components.
Two possibilities I found up to now:
Static libraries
Shared files in subversion
Both shared libraries and shared source let you share the common code among projects. Libraries are the better of the two alternatives, so you should use them if they are available on your platform. This lets you guard the source of the library from inadvertent modifications, which could happen if the code from source control is changed locally.
The only problem with sharing code through libraries may be lack of support for source-level debugging of library code by some of the tools in your embedded tool chain (e.g. debuggers attached to in-circuit emulators). In this case reusing code through the source may be acceptable. If possible, you should guard the source from modification through the file system access controls.
If you have reusable components, libraries are the way to go.
It's easier to maintain and you have a clear interface. It's also easier to incorporate into new projects.
You can easily do individual unit tests on library code
Less risk of copy-and-paste code.
Programmers are more aware that this code is shared when they have to use it from a library.
Several good arguments have been made for the library approach.
However, there's at least one good argument for re-building (perhaps from the same source repository) each time you build a dependent project, and that is the ability to apply target-, project-, or development-stage-specific compile settings to all of the code, including the shared portion.
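For instance (the flag name here is invented, just to illustrate the point), a project-wide compile switch can only affect the shared code if that code is rebuilt per project:

    #include <stdio.h>

    /* Part of the shared code: behaviour depends on a per-project compile flag.
       PROJECT_VERBOSE_LOGGING is a hypothetical flag used only for illustration. */
    #ifdef PROJECT_VERBOSE_LOGGING
    #define SHARED_LOG(msg) fprintf(stderr, "[shared] %s\n", (msg))
    #else
    #define SHARED_LOG(msg) ((void)0)
    #endif

    /* Project A compiles the shared source with -DPROJECT_VERBOSE_LOGGING,
       project B compiles the very same source without it. A pre-built library
       would have baked one choice in for everybody. */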
At my company, we use both approaches at the same time:
We do two checkouts: one for the project, the other for the library.
When the project needs to be compiled (via Makefile), we compile the library first.
The library is then linked as if it was a binary-only library.
When we release a project, we check whether the other projects still compile against the new library.
When we release a project, we tag the library along with the project.
This way you get the best of both worlds:
common code is shared: all projects benefit from bug fixes and improvements
source code is always fully available for understanding and debugging
source code availability encourages library maintenance (fixes, improvements, and experiments)
the library boundaries impose a more API-like approach: clearer interface and project embedding
you can pass compile-time flags to the library to build different flavors
you can always go back in time if needed without library-vs-project mismatching hassles
if you are in a hurry, you can put off the library check.
The only drawback to this approach is that developers have to know what they are doing. If they modify the library, they should know that the change will impact all projects. But you are already using a version control system and, if you use branches and the communication within your team is good, there should be no problem at all.
We have a project that is going to require linking against libcurl and libxml2, among other libraries. We seem to have essentially two strategies for managing these dependencies:
Ask each developer to install those libraries under the "usual" locations, e.g. /usr/lib, or
Include the sources to these libraries under a dedicated folder in the project's source tree.
Approach 1 requires everyone to make sure those libraries are installed on their system, but appears to be the approach used by many open source projects. On such projects, the build will detect that those libraries are missing and will fail.
Approach 2 might make the project tree unmanageably large in some instances and make the compilation time much longer. In addition, this approach can obviously be taken too far. I wouldn't put the compiler under the project tree, for instance (right?).
What are the best practices with regard to external dependencies? Can/should one require every developer to have certain libraries installed to build the project? Or is it considered better to include all the dependencies in the project tree?
Don't worry about their exact location in your code. If they're common, locating them should be handled by the compiler/linker being used (or by the user, by setting variables). For very uncommon dependencies (or ones with customized/modified files) you might want to include them in your source (if possible due to licensing etc.).
If you'd like to make this more convenient, you should use some script (e.g. configure or CMake) to set up/create the build files being used. CMake, for example, allows you to mark packages (libcurl and libxml2 in your example) as optional or required. When building the project it will try to locate those, and if that fails it will ask the user. This IS an additional step and might make building a bit more cumbersome, but it will also make downloading faster (smaller source) as well as updating easier (as all you have to do is rebuild your program).
So in general I'd follow approach 1, if there's special/rare/customized stuff being used, approach 2.
The normal way is to have the respective dependencies and have the developer install them. Later, if the project is packaged into .deb or .rpm, these packages will require the respective libraries to be installed, and the source packages will have the -devel packages as dependencies.
Best practice is not to include the external libraries in your source tree - instead, include a text file called INSTALL in your project root, which gives instructions on building the project and includes a list of the library dependencies, including minimum versions.
Introduction:
I am currently developing document classifier software in C/C++, and I will be using a Naive Bayes model for classification. But I want users to be able to use any algorithm they want (or that I want in the future), hence I decided to separate the algorithm part of the architecture into a plugin that is attached to the main app at app start-up. That way any user can write his own algorithm as a plugin and use it with my app.
Problem Statement:
The way I intend to develop this is to have each algorithm the user wants to use built as a DLL file and put into a specific directory. At start-up, my app will search for all the DLLs in that directory and load them.
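To make that concrete, the loading step I have in mind looks roughly like this (a Win32 sketch; the "Classify" export name is just a placeholder for whatever the plugin framework mandates):

    #include <windows.h>
    #include <string>
    #include <vector>

    typedef int (*ClassifyFn)(const char *documentText);

    // Scan a directory for DLLs and keep the ones that export the expected entry point.
    std::vector<HMODULE> loadPlugins(const std::string &dir) {
        std::vector<HMODULE> loaded;
        WIN32_FIND_DATAA fd;
        HANDLE h = FindFirstFileA((dir + "\\*.dll").c_str(), &fd);
        if (h == INVALID_HANDLE_VALUE) return loaded;
        do {
            HMODULE mod = LoadLibraryA((dir + "\\" + fd.cFileName).c_str());
            if (!mod) continue;
            ClassifyFn fn = reinterpret_cast<ClassifyFn>(GetProcAddress(mod, "Classify"));
            if (fn) loaded.push_back(mod);   // looks like a plugin: keep it
            else    FreeLibrary(mod);        // not a plugin: unload it again
        } while (FindNextFileA(h, &fd));
        FindClose(h);
        return loaded;
    }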
My Questions:
(1) What if malicious code is built as a DLL (one that exports the same functions mandated by the plugin framework) and put into my plugins directory? In that case, my app will think that it's a plugin, pick it up and call its functions, so the malicious code can easily bring my entire app down (in the worst case it could turn my app into a malicious-code launcher!!!).
(2) Is using DLLs the only way available to implement the plugin design pattern? (Not only out of fear of malicious plugins; it's a generic question out of curiosity :) )
(3) I think a lot of software is written with a plugin model for extensibility; if so, how do such programs defend against these attacks?
(4) In general, what do you think about my decision to use a plugin model for extensibility (do you think I should look at any other alternatives?)
Thank you
-MicroKernel :)
Do not worry about malicious plugins. If somebody managed to sneak a malicious DLL into that folder, they probably also have the power to execute stuff directly.
As an alternative to DLLs, you could hook up a scripting language like Python or Lua, and allow scripted plugins. But maybe in this case you need the speed of compiled code?
For embedding Python, see here. The process is not very difficult. You can link statically to the interpreter, so users won't need to install Python on their system. However, any non-builtin modules will need to be shipped with your application.
However, if the language does not matter much to you, embedding Lua is probably easier because it was specifically designed for that task. See this section of its manual.
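If Lua is chosen, the embedding side can stay very small. A minimal sketch (the "classify" function the plugin script is expected to define is an invented name):

    // Assumes the Lua 5.x headers and library are available.
    extern "C" {
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    }

    int runScriptedPlugin(const char *scriptPath, const char *documentText) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);                      // standard libraries; restrict these to sandbox
        if (luaL_dofile(L, scriptPath) != 0) { // load and run the plugin script
            lua_close(L);
            return -1;                         // script failed to load
        }
        lua_getglobal(L, "classify");          // function the plugin is expected to define
        lua_pushstring(L, documentText);
        if (lua_pcall(L, 1, 1, 0) != 0) {      // call classify(documentText)
            lua_close(L);
            return -1;
        }
        int category = (int)lua_tointeger(L, -1);
        lua_close(L);
        return category;
    }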
See 1. They don't.
Using a plugin model sounds like a fine solution, provided that a lack of extensibility really is a problem at this point. It might be easier to hard-code your current model and add the plugin interface later, if it turns out that there is actually a demand for it. It is easy to add, but hard to remove once people have started using it.
Malicious code is not the only problem with DLLs. Even a well-meaning DLL might contain a bug that could crash your whole application or gradually leak memory.
Loading a module in a high-level language somewhat reduces the risk. If you want to learn about embedding Python for example, the documentation is here.
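A minimal sketch of what that embedding can look like (the "plugin" module and its "classify" function are invented names for illustration):

    #include <Python.h>

    int runPythonPlugin(const char *documentText) {
        Py_Initialize();

        PyObject *mod = PyImport_ImportModule("plugin");  // loads plugin.py from sys.path
        if (!mod) { PyErr_Print(); Py_Finalize(); return -1; }

        PyObject *fn  = PyObject_GetAttrString(mod, "classify");
        PyObject *res = fn ? PyObject_CallFunction(fn, "s", documentText) : NULL;

        int category = (res && PyLong_Check(res)) ? (int)PyLong_AsLong(res) : -1;

        Py_XDECREF(res);
        Py_XDECREF(fn);
        Py_DECREF(mod);
        Py_Finalize();
        return category;
    }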
Another approach would be to launch the plugin in a separate process. It does require a bit more effort on your part to implement, but it's much safer. The separate-process approach is used by Google's Chrome web browser, and they have a document describing the architecture.
The basic idea is to provide a library for plugin writers that includes all the logic for communicating with the main app. That way, the plugin author has an API that they use, just as if they were writing a DLL. Wikipedia has a good list of ways for inter-process communication (IPC).
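A bare-bones POSIX sketch of the idea, with a single request/reply exchanged over pipes (the plugin path and the one-shot protocol are invented for illustration). A crash in the plugin then only kills the child process, not the host:

    #include <unistd.h>
    #include <sys/wait.h>
    #include <string>

    // Run a plugin executable as a child process, write it one request, read one reply.
    std::string askPlugin(const char *pluginPath, const std::string &request) {
        int toChild[2], fromChild[2];
        if (pipe(toChild) != 0 || pipe(fromChild) != 0) return "";

        pid_t pid = fork();
        if (pid == 0) {                                   // child: become the plugin
            dup2(toChild[0], STDIN_FILENO);
            dup2(fromChild[1], STDOUT_FILENO);
            close(toChild[1]); close(fromChild[0]);
            execl(pluginPath, pluginPath, (char *)nullptr);
            _exit(127);                                   // exec failed
        }
        close(toChild[0]); close(fromChild[1]);

        (void)write(toChild[1], request.c_str(), request.size());  // send the request
        close(toChild[1]);                                          // signal end of input

        char buf[4096];
        ssize_t n = read(fromChild[0], buf, sizeof buf);            // read the reply
        close(fromChild[0]);
        waitpid(pid, nullptr, 0);
        return n > 0 ? std::string(buf, n) : std::string();
    }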
1) If there is a malicious dll in your plugin folder, you are probably already compromised.
2) No, you can load assembly code dynamically from a file, but this would just be reinventing the wheel; just use a DLL.
3) Firefox extensions don't, not even with their JavaScript-based plugins. Everything else I know of uses native code from dynamic libraries, so it's impossible to guarantee safety. Then again, Chrome has NaCl, which does extensive analysis of the binary code and rejects it if it can't be 100% sure it doesn't violate bounds and whatnot, although I'm sure they will have more and more vulnerabilities as time passes.
4) Plugins are fine, just restrict them to trusted people. Alternatively, you could use a safe language like Lua, Python, Java, etc., and load a file into that language but restrict it to a subset of the API that won't harm your program or environment.
(1) Can you use OS security facilities to prevent unauthorized access to the folder the DLLs are searched for and loaded from? That should be your first approach.
Otherwise: run a threat analysis - what's the risk, what are known attack vectors, etc.
(2) Not necessarily. It is the most straightforward if you want compiled plugins - which is mostly a question of performance, access to OS functions, etc. As mentioned already, consider scripting languages.
(3) Usually by writing "to prevent malicious code execution, restrict access to the plugin folder".
(4) There's quite a bit of additional cost - even more so when using a plugin framework you are not yet familiar with. It increases the cost of:
the core application (plugin functionality)
the plugins (much higher isolation)
installation
debugging + diagnostics (bugs that occur only with a certain combination of plugins)
administration (users must know of, and manage plugins)
That pays off only if
installing/updating the main software is much more complex than updating the plugins
individual components need to be updated individually (e.g. a user may combine different versions of plugins)
other people develop plugins for your main application
(There are other benefits of moving code into DLL's, but they don't pertain to plugins as such)
What if malicious code is made as a DLL
Generally, if you do not trust a DLL, you can't safely load it one way or another.
This is true for almost any other language, even an interpreted one.
Java and some other languages work very hard to limit what the user can do, and this works only because they run in a virtual machine.
So no: DLL-loaded plug-ins can come from trusted sources only.
Is using DLLs the only way available to implement the plugin design pattern?
You may also embed an interpreter in your code; for example, GIMP allows writing plugins in Python.
But be aware that this will be much slower, because of the nature of any interpreted language.
We have a product very similar in that it uses modules to extend functionality.
We do two things:
We use BPL files, which are DLLs under the covers. This is a specific technology from Borland/CodeGear/Embarcadero within C++ Builder. We take advantage of some RTTI-type features to publish a simple API similar to main's argv[], so any number of parameters can be pushed onto the stack and popped off by the DLL (see the sketch below).
We also embed Perl into our application for things that are more business logic in nature.
Our software is an accounting/ERP suite.
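Roughly the shape of that argv[]-style entry point, sketched here as a plain exported function (the real BPL mechanism is Embarcadero-specific and richer than this; the names are invented):

    // Each module exports a single well-known entry point; the host pushes an
    // arbitrary number of string parameters, much like main(argc, argv).
    extern "C" __declspec(dllexport)
    int ModuleMain(int argc, const char *argv[]) {
        // argv[0] might name the requested operation, the rest are its parameters
        return 0;  // status code back to the host
    }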
Have a look at existing plugin architectures and see if there is anything that you can reuse. http://git.dronelabs.com/ethos/about/ is one link I came across while googling glib + plugin. GLib itself might make it easier to develop a plugin architecture. GStreamer uses GLib and has a very nice plugin architecture that may give you some ideas.