I'd like to know the best way to organize a workspace, and more specifically how to set up unit-test projects (I use the Google Test framework) when working with a small group of people, so that my code:
Is easily portable
Doesn't need extra setup to be compiled by others
Is not dependent on a specific IDE or compiler
A real-life example for clarity, even though I'd prefer general "good practices":
I'm working as an intern this summer and we're making a communication protocol for one of our products. We're three people developing it with different IDEs, OSes and compilers. Right now the project is in an SVN repository, with one folder holding the project settings for our respective IDEs and another folder containing the source code.
This lets us reference the source code from our projects via relative paths, so everyone can import it wherever they please with their own settings, while commits still touch the same .c and .h files for everyone. Is this an acceptable practice?
Also, I'm writing the unit tests, and right now:
Each module Foo has a separate FooTester project
Each FooTester includes Foo's .h files and links against a compiled static library of Foo
The internals are exposed via static libs
Is this the right way to do it? Making separate projects and linking the library "manually" seems to make it dependent on my personal settings.
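For what it's worth, here is a minimal sketch of how such a setup could be driven by a plain Makefile instead of per-IDE project files, so any compiler can build both the module and its tester from the same tree. All file names (src/foo.c, test/foo_test.cc) are made up, and it assumes a system-wide Google Test install:

    # Sketch only: builds module Foo as a static library plus a Google Test runner.
    # src/foo.c, src/foo.h and test/foo_test.cc are hypothetical names.
    # Recipe lines must be indented with a tab.
    CC      ?= cc
    CXX     ?= c++
    CFLAGS  ?= -Wall -Isrc

    libfoo.a: src/foo.o
        ar rcs $@ $^

    src/foo.o: src/foo.c src/foo.h
        $(CC) $(CFLAGS) -c $< -o $@

    foo_tester: test/foo_test.cc libfoo.a
        $(CXX) $(CFLAGS) test/foo_test.cc libfoo.a -lgtest -lgtest_main -pthread -o $@

    check: foo_tester
        ./foo_tester

Each of you can still keep a personal IDE project that simply invokes make (or imports the same files), while the repository itself stays toolchain-neutral.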
We are currently exploring using Thrift in our project. One of my questions is how to handle the Thrift source files and the language-specific generated files in version control (Git).
Let's say I have project A, a server implemented in Java; project B, a mobile application written in Objective-C; and project C, which contains the Thrift files. What's currently on top of my mind is having each project as a separate Git repository, with project C as a submodule of projects A and B. The pro is that we get a consistent Thrift source and we don't need to put generated source files into the repositories.
Then let's say I have another Thrift file that differs from what's in project C and is used only by projects A and D. Should I also put those files in project C? If so, how would project B know that some files in project C are not meant for it?
Another approach might be to commit the generated source files into each project. Or maybe there's another approach I don't know about.
Thanks!
Irrespective of the musings about whether to split the projects in your highly specific case into Git submodules or not, these are the general guidelines that apply to all kinds of generated code, including Thrift, but not limited to it.
General rules
The source documents (IDL, DSL, ...) belong in the repository.
Any code that can easily be generated from these sources does not.
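As a rough illustration of that rule (the IDL path idl/service.thrift is made up, and the exact thrift options may vary between versions), the bindings can simply be regenerated as a build step from the committed source:

    # Sketch: regenerate the Thrift bindings at build time instead of committing them.
    # idl/service.thrift is a hypothetical path; pick the --gen option for your
    # target language (cpp, java, ...). Recipe lines must be indented with a tab.
    THRIFT ?= thrift

    generate:
        mkdir -p gen
        $(THRIFT) --gen cpp -out gen idl/service.thrift

    .PHONY: generate

The gen/ directory then goes into .gitignore rather than into the repository.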
Exceptions
As with nearly every rule, there are exceptions. The exceptions come in various flavours, such as
the generated artifacts are not easily producible (for whatever reason)
the generated code needs hotfixes to work around bugs 1)
etc.
Additional notes
Strive to have one source, and one source only (not counting branches etc., of course) for these files. A good idea could be to set it up as a library to be used across projects. Treat that library as you would any other third-party module, including proper versioning and all. If you use Git, Git submodules may indeed be an approach to achieve that.
Then let's say I have another Thrift file that differs from what's in project C
If it differs, it is either an entirely different IDL or a different version of the IDL in question.
1) In the real world, such things may happen. Unfortunately.
I have a radio chip (connected to an embedded processor) for which I have written a library. I want to develop the protocol to use with the RF chip on a PC (Ubuntu). In order to do so, I have copied the header file of my library into a new folder, but created an entirely new implementation in a new .c file, which I compile for the PC with gcc. This approach has worked better than expected, and I'm able to prototype code that calls the RF library on the PC and simply copy it right over to the real project with little or no changes.
I do have one small problem. Any changes I make in the library's header file need to be manually copied between the two project folders. Not a big deal, but since this has worked so well, I can see doing things like this again in the future, and would like to link the API headers between the real and "emulated" environments when doing so. I have thought about using Git submodules for this, but I'm not fond of lots of folders in my projects, especially if most of them only contain one or two files each. I could use the C preprocessor to swap in the right code at compile time, but that doesn't cover the changes in my Makefile to call the right compiler with the right flags.
I'm wondering if anyone else has ever done something similar, and what their approach was?
Thanks guys!
Maybe you should create an "rflib" and treat it as an external library that you use within your embedded project.
Develop on one side and update to the newest version on the other.
An obvious (but fairly hacky) solution is to use a symlink.
I think the best solution, since they will share so much code, would be to just merge the two projects and have two different makefile targets for the binaries.
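A rough sketch of what those two targets could look like, assuming a shared rflib.h API header; the cross-compiler name (arm-none-eabi-gcc) and the file names are placeholders for whatever toolchain and sources you actually have:

    # Sketch: one source tree, two binaries sharing the same rflib.h API.
    # File and toolchain names are placeholders.
    # Recipe lines must be indented with a tab.
    HOST_CC  ?= gcc
    CROSS_CC ?= arm-none-eabi-gcc

    # PC prototype: protocol code plus the emulated RF implementation
    host_proto: proto.c rflib_emulated.c rflib.h
        $(HOST_CC) -Wall proto.c rflib_emulated.c -o $@

    # Embedded firmware: the same protocol code plus the real RF driver
    target_fw: main.c proto.c rflib.c rflib.h
        $(CROSS_CC) -Wall main.c proto.c rflib.c -o $@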
Basically, I want to separate some common functionality from existing projects into a separate library project, but also allow a project to remain cross-platform when I include this library.
I should clarify that when I say "cross-platform" I'm primarily concerned with compiling for multiple CPU architectures (x86/x86_64/ARM).
I have a few useful functions which I use across many of my software projects. So I decided that it was bad practice to keep copying these source code files between projects, and that I should create a separate library project from them.
I decided that a static library would suit my needs better than a shared library. However, it occurred to me that the static library would be platform dependent, and including it with my projects would make those projects platform dependent as well. This is clearly a disadvantage compared to including the source code itself.
Two possible solutions occur to me:
Include a static library compiled for each platform.
Continue to include the source code.
I do have reservations about both of the above options. Option 1 seems overly complex/wasteful. Option 2 seems like bad practice, as it's possible for the "library" to be modified per project and become out-of-sync; especially if the library source code is stored in the same directory as all the other project source code.
I'd be really grateful for any suggestions on how to overcome this problem, or for information on how anyone else has overcome it.
You could adopt the standard approach of open source projects (even if your project is not open source). There would be one central point where one can obtain the source code, presumably under revision control (Subversion, Git...). Anyone who wishes to use the library checks out the source code, compiles it (a Makefile or something similar should be included), and then they are all set. If someone needs to change something in the library, they do so, test their changes, and send you a patch so that you can apply the change to the project (or not, depending on your opinion of the patch).
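A minimal sketch of the kind of Makefile that could ship with such a library, so each consumer builds the static archive for their own platform (the file and library names are made up):

    # Sketch: build the shared utility code as a static archive on whatever
    # platform the checkout is compiled on. Names are hypothetical.
    # Recipe lines must be indented with a tab.
    CC     ?= cc
    CFLAGS ?= -Wall -O2
    OBJS    = strutil.o mathutil.o

    libmyutils.a: $(OBJS)
        ar rcs $@ $(OBJS)

    %.o: %.c
        $(CC) $(CFLAGS) -c $< -o $@

    clean:
        rm -f $(OBJS) libmyutils.a

Each project then links against the archive it built itself, so nothing platform specific ever needs to be committed.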
We have a project that is going to require linking against libcurl and libxml2, among other libraries. We seem to have essentially two strategies for managing these dependencies:
Ask each developer to install those libraries under the "usual" locations, e.g. /usr/lib, or
Include the sources to these libraries under a dedicated folder in the project's source tree.
Approach 1 requires everyone to make sure those libraries are installed on their system, but appears to be the approach used by many open source projects. On such projects, the build will detect that those libraries are missing and will fail.
Approach 2 might make the project tree unmanageably large in some instances and make the compilation time much longer. In addition, this approach can obviously be taken too far. I wouldn't put the compiler under the project tree, for instance (right?).
What are the best practices with respect to external dependencies? Can/should one require every developer to have certain libraries installed to build the project? Or is it considered better to include all the dependencies in the project tree?
Don't worry about their exact location in your code. If they're common, locating them should be handled by the compiler/linker being used (or by the user setting variables). For very uncommon dependencies (or ones with customized/modified files) you might want to include them in your source tree (if the licensing etc. allows it).
If you'd like it to be more convenient, you should use some script (e.g. configure or CMake) to set up/create the build files being used. CMake, for example, allows you to mark packages (libcurl and libxml2 in your example) as optional or required. When configuring the project it will try to locate them; if that fails, it will tell the user. This IS an additional step and might make building a bit more cumbersome, but it will also make downloading faster (smaller source) and updating easier (all you have to do is rebuild your program).
So in general I'd follow approach 1; if there's special/rare/customized stuff being used, approach 2.
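If you stick with plain Makefiles rather than CMake, here is a sketch of the same idea using pkg-config (libcurl and libxml-2.0 are the usual pkg-config module names, but check what your distribution actually installs):

    # Sketch: let pkg-config supply the compile and link flags for installed
    # libraries instead of hard-coding their locations.
    # Recipe lines must be indented with a tab.
    PKGS    = libcurl libxml-2.0
    CFLAGS += $(shell pkg-config --cflags $(PKGS))
    LDLIBS += $(shell pkg-config --libs $(PKGS))

    myapp: check-deps main.o
        $(CC) $(CFLAGS) main.o $(LDLIBS) -o $@

    # Fails with a readable error message if a dependency is not installed.
    check-deps:
        pkg-config --print-errors --exists $(PKGS)

    .PHONY: check-deps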
The normal way is to list the respective dependencies and have each developer install them. Later, if the project is packaged into a .deb or .rpm, these packages will require the respective libraries to be installed, and the source packages will have the -devel packages as dependencies.
Best practice is not to include the external libraries in your source tree - instead, include a text file called INSTALL in your project root, which gives instructions on building the project and includes a list of the library dependencies, including minimum versions.
Hey guys,
I want to create a self-contained C project that is machine-independent.
An example? I want to "make all" my project on a machine where the external libraries are not installed (but are included in my project), and I want everything to keep working :)
The library I'm talking about is the GSL; you can find it in the libgsl0-dev Ubuntu package.
Now, I want to include all the header and .c files in my project, uninstall the package, and have the project build and run as before :)
Ideas?
Thanks!
Bye!
Don't forget about dependencies.
There are reasons why libraries like GSL are distributed as independent entities:
Users can upgrade the library independently of the software that uses it, saving you from having to constantly update your project when the GSL version changes.
Licensing issues.
Dependencies. If GSL has dependencies and you want to build GSL as part of your project, then you will also need to include ALL the source code for ALL dependencies...and their dependencies...and their dependencies...and so on. If you are going to require that some sub-dependency is already installed, then you may as well require that GSL is already installed.
Other reasons I can't be bothered to think up because I have other things to do.
Just copy the library's source code somewhere into your project's hierarchy, and start either creating or modifying Makefiles (or whatever GSL uses) to get it to build.
For instance, you could have it in a directory external/libgsl, and then set up a Makefile target for your project that does the building. Then you make your project's code dependent on the library's, so that the library is always built first.
Of course, you also need to think about any license issues that might arise if/when you distribute your project.
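A rough sketch of what that dependency ordering could look like. The directory external/libgsl is a made-up location, the path of the built archive is a guess (GSL uses its own autotools build, which the rule below simply delegates to), and you will likely also need GSL's bundled CBLAS library when linking:

    # Sketch: build the bundled GSL before the project, then link against it.
    # Paths are assumptions; GSL's own configure/make does the real work.
    # Recipe lines must be indented with a tab.
    GSL_DIR = external/libgsl
    GSL_LIB = $(GSL_DIR)/.libs/libgsl.a   # exact location depends on GSL's build

    myproject: main.o $(GSL_LIB)
        $(CC) main.o $(GSL_LIB) -lm -o $@

    $(GSL_LIB):
        cd $(GSL_DIR) && ./configure && $(MAKE)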