I have to develop a patch for the Linux kernel (2.6) for a university course I'm attending. I have to develop this patch with a friend of mine, and we need to edit the same files on our computers (2-3 PCs). So we want to use something like a collaborative editor and/or a version control system. The problem is that we have never used anything like that, and we cannot release our software as open source until we take the exam. Obviously, we're using Linux. I'm here to ask for suggestions on how to manage our work in the best way.
Thank you
You'll definitely want to use a version control system.
The Linux kernel itself is developed with the Git version control system, so you'll probably find tons of resources on how to use Git in kernel development.
If you are looking for more traditional version control software, Subversion is a safe bet.
With both systems you can host the repository on your own computers, so nobody else can access it. This way you do not need to publish the code before you are ready.
There are also hosting services (for example GitHub) that provide hosting for VCS repositories. However, in the case of GitHub, if you want a private repository, you need to pay for it.
Ask your professor or university IT department what version control systems are installed on the computers. Use whatever they have installed; they all pretty much serve the same purpose.
When I was in school the computers had CVS installed, so I set up a repository for myself and my team members in my home directory.
As Juha said, if you want to stay linux-y use the distributed version control system Git. I think you'll be able to use git without a central "head" repository server by pulling patches.
git does not require a git server. Simply having a shared network file folder is sufficient.
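For example, something like this works over any folder both of you can reach (the path is just an example; it could be an NFS/Samba share or even a USB stick):

    git init --bare /mnt/shared/kernel-patch.git   # the shared "central" repository
    git clone /mnt/shared/kernel-patch.git         # each of you clones from that path
    cd kernel-patch
    # ...edit files, commit locally...
    git push origin master                         # publish your commits to the shared repository
    git pull                                       # pick up your friend's commits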
I personally use Subversion; I pay for a host (http://versionshelf.com) which lets me create and manage Subversion or Mercurial repositories through a web interface with ease.
I've not looked into Mercurial much but it seems to be very well supported and has lots of IDE integration and GUI clients (https://www.mercurial-scm.org/wiki/OtherTools).
I'm looking for some advice on how to properly handle versioning when managing a distro using Yocto.
I have several embedded systems in production and have to rely on a third party to apply updates. Updates include one or more .ipk packages that get installed via opkg. I have no control over when the third parties apply the update(s) I send them. This introduces the issue I am trying to find a solution to. The systems are likely to be in various different states as some updates are applied and others are not. Is there a common solution to tell what state a system is in?
One problem I'm not clear on is how to ensure the embedded system has certain updates applied before applying further updates. How do distros such as Ubuntu or Red Hat handle this? Or do they?
Ubuntu and Red Hat have remote repositories. The systems keep an internal list of installed packages. When you update the repository metadata you get a new list of available packages. You can then compare the list of installed packages against the new package list and install the updates. This is basically what apt-get update && apt-get upgrade and the yum equivalent do.
Yocto actually supports the rpm and deb package formats and their remote repositories. I am not familiar with opkg or whether it has the option of a remote repository.
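For reference, the cycle looks roughly like this on the two families (the opkg line is my assumption of the equivalent command, since I haven't used it with a remote feed):

    apt-get update && apt-get upgrade    # Debian/Ubuntu: refresh package lists, then upgrade
    yum check-update && yum update       # Red Hat/CentOS equivalent
    dpkg -l                              # list what is currently installed (rpm -qa on Red Hat)
    opkg list-installed                  # opkg's installed-package list, if you end up using it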
When I implemented a system I narrowed it down to the following options:
have a repository (deb and rpm definitely work here)
have a version package
using images
Version packages have the big disadvantage that you have to implement your own logic for deciding which packages to install. But you can require that version-1.deb needs software-v2.deb and tool-v1_5.deb. That works well with your own software, but it is a big manual overhead for the entire Yocto package stack.
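For example, a minimal version metapackage could look like this if you build .deb packages (all names and versions are placeholders; an .ipk is put together the same way in spirit):

    # build a "version" metapackage whose only job is to pull in a known-good set
    mkdir -p version-1/DEBIAN
    cat > version-1/DEBIAN/control <<'EOF'
    Package: version
    Version: 1
    Architecture: all
    Maintainer: You <you@example.com>
    Depends: software (= 2.0), tool (= 1.5)
    Description: pins a known-good combination of packages
    EOF
    dpkg-deb --build version-1           # produces version-1.deb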
The repository approach would be the usual way, e.g. apt-get update && apt-get -y upgrade. It works well and is easy, but it also lacks a risk-free path to newer Yocto versions.
The image way is more complicated and depends on your hardware, the size of the image, and how you transfer it. If you have a huge image and a slow network you might not want to do this. Basically you push your new image to the device (or have the device automatically pull it based on a version). The device dd's it to a secondary partition and you then flip the bootloader to use the other partition. Depending on the bootloader and/or hardware you can automatically flip back in case the partition you booted into is broken. That is helpful for remote systems. The advantage is that you can also easily upgrade the entire OS without manually picking versions. And you can have (automatic) fail-over.
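A rough sketch of that flow, assuming two root partitions and a U-Boot bootloader whose environment is editable with fw_setenv (the device names and the rootpart variable are just examples):

    dd if=new-rootfs.img of=/dev/mmcblk0p3 bs=4M conv=fsync   # write the new image to the spare partition
    fw_setenv rootpart 3                                       # tell the bootloader to boot the spare partition next time
    reboot
    # if the new partition fails to boot, the bootloader (or a watchdog) flips rootpart back to the old partition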
I've been investigating software artifact repositories like Artifactory, Nexus, and Archiva. Artifact repositories like these perform well in a continuous integration environment, with dependency management and automated build tools. They are largely influenced by Maven.
I am also aware of Linux package repositories, like what Debian or RedHat use. Downloading and installing software, with all necessary dependencies, is very easy with these systems.
What are the major differences between Maven-like repositories and Linux package repositories? Are there any fundamentally different design goals? Platform implementations aside, could they be interchangeable?
Artifactory has a YUM add-on which enables it to act just like any standard HTTP-exposed YUM server (for both deployment and resolution).
Support for Debian packages is on the roadmap, too.
The various artifact creation applications out there don't do anything that rpm or deb don't do, in fact they are in general far less capable and much more convoluted. I suspect the biggest issue with avoidance of deb or rpm is ignorance. The other is that the artifact is OS agnostic (although the build certainly is not) and distribution agnostic, so there is that, but I don't see any significant advantage to that.
Linux package repositories
Primarily serve end-user software, but can also serve developer libraries
Release builds are prevalent, snapshot builds are sometimes available
Transitive dependencies commonly installed or removed automatically
Used to install packages frequently, and remove them infrequently
Packages are installed in a single, system-wide location
Serve only distribution-specific Linux software
Servers run on Linux
Command-line or graphical tools for browsing and downloading
Software artifact repositories
Primarily serve developer libraries, but can also serve end-user software
Release and snapshot builds are both common
Transitive dependencies handled various ways, depending on the package
Packages are essentially cached on the user's machine, and may be cleared frequently
Packages are cached per-project, not system-wide
Serve software for any operating system
Servers run on any desktop or server operating system
Web interface for browsing and downloading
Support a variety of protocols (Maven, Ivy, NuGet, P2, etc.)
I am writing a C program. What I have seen from my earlier experience is that I make some change to a correct version of my program, and after that change the program computes incorrectly.
Now, on some occasions it is easy to detect where I made that change and undo it or do it in some other way, and on other occasions I find it laborious to detect where exactly the problem is.
Can you suggest some platform or tool which lets you put the new version and the old version of the program side by side and marks the changes that were made in the new version?
I am using gcc 4.3.2 to compile C programs on Ubuntu 10.04.
Any suggestion is welcome.
regards,
Anup
Use a version control system. I recommend Subversion. This will allow you to compare your newer version with the older one to see exactly what changed and you can revert to the older working version if you break your code.
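For example (the revision number is just a placeholder):

    svn diff -r 100:HEAD src/main.c   # see exactly what changed since revision 100
    svn revert -R .                   # throw away uncommitted local changes
    svn update -r 100                 # go back to the older, working revision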
If you want a tiny, portable, one-file personal version control system, I can suggest Fossil. Documentation is available here.
What you are requesting is some diff-like tool. There is a plethora of such tools, ranging from the diff command-line utility to GUI frontends such as http://meld.sourceforge.net/. Most of these tools can be combined with (or have counterparts in) revision control systems such as Subversion.
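For instance, if you keep the old and new copies in separate directories (the paths are just examples):

    diff -u old/main.c new/main.c     # plain unified diff on the command line
    meld old/ new/                    # the same comparison in a graphical tool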
I would do this with some version control system, like Git. Refer to Git for beginners: The definitive practical guide if you're new to version control systems and/or git.
That's the main purpose of all version control software.
I recommend looking at Subversion, Git or Mercurial.
There are thousands of internet resources for these programs.
And they all have good comparison programs integrated into their GUI tools.
Also, try to use Git/Mercurial/whatever together with Dropbox; it will allow you to continue developing on other computers. It will also teach you how to collaborate with others and with yourself.
This is slightly easier to do with the newer version control systems like Git and Mercurial.
If you use a version control system, such as Git or Mercurial, and follow a few good practices:
committing small, self contained changes (which includes committing, i.e. saving changes to version control, often)
committing known good state (or at least trying to, e.g. ensuring that code compiles before you commit)
you will always be able to go back to a known good state.
If the bug is in your current changes, you can compare the current state with the previous version: somewhere in the changed lines there is a bug... or something in the changed lines uncovered an existing bug.
If the bug was introduced by some earlier, unknown commit, you can use bisection to search through history and find the first commit that exhibits the bug: see for example the git bisect manpage.
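A typical session looks roughly like this (v1.0 is a placeholder for your last known-good revision):

    git bisect start
    git bisect bad                    # the current version is broken
    git bisect good v1.0              # the last version known to work
    # git checks out a commit halfway in between; build and test it, then mark it:
    git bisect good                   # or: git bisect bad
    # repeat until git reports the first bad commit, then clean up:
    git bisect reset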
HTH
We have two web servers with load balancing. We need to share some files between those servers. These would be uploaded files, session files, various files that php applications create.
We don't want to use a heavyweight, no-longer-maintained, or commercial solution. We're looking for some lightweight open-source software that would work as a shared file system. It should be really easy to set up, must be highly available, and must be very fast. It should work on Red Hat Linux.
We looked at solutions such as DRBD with synchronous replication, but we can't use them because they can't be used with an ordinary filesystem like ext3 on top.
OCFS2 may be up to snuff by now; it's worth checking out at least. It's in the mainline Linux kernel tree, and http://oss.oracle.com/projects/ocfs2/ has some info on it. I've set it up before, and it was pretty easy to get going.
DRBD is good for syncing over a network (direct crossover connection if at all possible), but EXT3 is not designed to be aware of changes that occur underneath it, at the block device level. For that reason you need a filesystem designed for such purposes such as the Global File System (GFS). To the best of my knowledge Red Hat has support for GFS.
The DRBD manual will give you an overview of how to use GFS with DRBD.
http://www.drbd.org/users-guide/ch-gfs.html
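A heavily trimmed sketch of what the dual-primary resource looks like (host names, devices and addresses are just examples, and the file location depends on your DRBD version; the GFS side is covered in the manual chapter above):

    cat > /etc/drbd.d/r0.res <<'EOF'
    resource r0 {
      protocol C;
      net {
        allow-two-primaries;
      }
      on web1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on web2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }
    EOF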
Don't take this as a final answer - I have not researched or used a multi-master system before, but at least this might give you something to go on.
Ideally, you would only sync the part of the data that's shared between the webservers.
I'm working on a free software (BSD license) project with others. We're searching for a system that checks out our source code (SVN), builds it, and also tests it (unit tests with Check / other tools).
It should have a web-based interface and generate reports.
I hope we don't have to write such a system from scratch ourselves...
You surely do not have to code this yourself - there are a lot of continuous integration systems which are able to check out source code from systems such as SVN and they are generally easy to extend with your own tasks, so running custom test scripts/programs should not be a problem.
While these CI systems are probably not written in C, this does not matter, since they just need to be able to access and compile your source code, for which they will use an external compiler anyways.
Just to list some of the well known CI tools:
CruiseControl
Hudson
TeamCity
You might also be interested in other questions on Stack Overflow tagged as continuous-integration. :)
I don't think there's a single build system capable of doing all these tasks - but what about combining them?
SCons is a nice build system that runs on every machine that has Python. It can even build directly from SVN. For automatic building you can try Buildbot.
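A trivially small example, in case it helps to see how little an SCons build file needs (the file and target names are just examples):

    cat > SConstruct <<'EOF'
    # SCons build files are plain Python
    Program('myprog', ['main.c'])
    EOF
    scons        # builds myprog from main.c
    scons -c     # cleans the build outputs again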
Check out buildbot
My vote would be CruiseControl.NET; it has everything you are asking for. It is open source so the costs are low, and it has a very active user community on Google Groups to help you with your problems as you grow accustomed to it. Also, although it is .NET-based, it runs very nicely under Mono on Linux and Mac build servers as well, so you have everything covered.