Adding system' actions should change the rule history - shake-build-system

I have this kind of rule:
"foo" *> \out do
need something
create "foo" somehow
It builds correctly, and running the build twice doesn't rebuild the target.
Then I add a system' to this rule:
"foo" *> \out do
...
system' something
Running Shake now doesn't rebuild the "foo" target, because no dependencies changed. However, the rule itself changed, so I expected the newly added system' action to change the history of the rule and in turn force a rebuild of "foo", but that is not the case.
Usually in autoconf/automake systems, or even in non-trivial makefiles, the rules depend on the Makefile itself, so that whenever it changes the project is rebuilt.
In Shake I would expect this to work, and to be fine-grained.
In the source code of system' I can't see anything that adds an implicit dependency on the command being run.
Am I doing something wrong? Is it intentional not to support this kind of dependency, or is it simply not implemented?

The behaviour you are seeing is expected, and §6.2 of the ICFP 2012 paper outlines some strategies for coping with it. It would be ideal if each output depended on the rule used to build it, but that requires some notion of equality over rules (which are just functions, whose source code is unavailable during execution). One day I hope Shake will work as you describe, but I don't think it's possible in GHC at the moment.
To minimize the number of build system changes, it is best to keep anything you think will change on a regular basis out of the build system. Any list of files (for example which files are required to link a C executable) can be put in configuration files or depended upon properly through the use of the oracle mechanism.
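For example, a link rule can read its object list from a configuration file; readFileLines is a tracked read, so editing the list triggers a relink without touching the build script itself. Below is a minimal sketch using the same *> and system' API as in the question, with hypothetical file names (objects.txt, result.exe):

import Development.Shake

-- Hypothetical link rule: the object list lives in objects.txt, so
-- changing that file (rather than the build script) causes a relink.
linkRules :: Rules ()
linkRules =
    "result.exe" *> \out -> do
        objs <- readFileLines "objects.txt"   -- tracked read: adds a dependency
        need objs
        system' "gcc" (["-o", out] ++ objs)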
Of course, the rules will still change from time to time, and there are two approaches you can take:
Rebuild everything if anything changes
A simple conservative approach, as adopted by many Makefiles, is to rebuild everything if anything changes. An easy way to implement this strategy is to hash the Shake scripts, and use that as the shakeVersion field. For projects that rebuild reasonably quickly this can work well.
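A minimal sketch of that strategy, assuming the build script lives in a single file called Build.hs. Recent Shake releases provide getHashedShakeVersion for exactly this; with older versions you can hash the file contents yourself:

import Development.Shake

main :: IO ()
main = do
    -- Any edit to Build.hs changes the version string, which invalidates
    -- the Shake database and forces a complete rebuild.
    ver <- getHashedShakeVersion ["Build.hs"]
    shakeArgs shakeOptions{shakeVersion = ver} $ do
        want ["foo"]
        return ()   -- rules as before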
Rebuild with manual control
For larger projects, the build system is often in daily flux, but most parts of the build system have no impact on most rules. For some rules I explicitly need the relevant source files within the rule: for example, if a large generator lives in its own file, I need that file in the rule that produces the generated output. If you also use writeFileChanged to write the generated file, then many modifications to the generator will cause only minimal rebuilding.
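A sketch of that pattern, with hypothetical names throughout (gen.py is the generator, input.spec its input, output.c the generated file): the rule needs the generator's own source, and writes the result with writeFileChanged, so generator edits re-run the generation step but only propagate further when the generated contents actually change.

import Development.Shake
import Development.Shake.FilePath ((<.>))
import Control.Monad.IO.Class (liftIO)

generatedRules :: Rules ()
generatedRules =
    "output.c" *> \out -> do
        need ["gen.py", "input.spec"]   -- the generator itself is a tracked input
        let tmp = out <.> "tmp"
        system' "python" ["gen.py", "input.spec", tmp]
        contents <- liftIO (readFile tmp)   -- untracked read of our own scratch file
        -- Only rewrite output.c when the contents differ, so downstream
        -- rules that need "output.c" are left alone for cosmetic changes.
        writeFileChanged out contents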
If the changes are more invasive, I manually increment the shakeVersion field, and force a complete rebuild. In a mature build system with proper dependency tracking, this event may happen only a few times a year.

Related

Derived objects in Clearcase

I want to ask what exactly a derived object is in ClearCase and how it works.
Additionally, I want to ask if there is another program with the same function, because in Git, MKS or IBM® Rational Team Concert™ I cannot find anything similar. Is it obsolete?
This is closely tied to dynamic views, which are very specific to ClearCase and not found in other, more recent VCSs.
See "ClearCase Build Concepts"
Developers perform builds, along with all other work related to ClearCase, in views. Typically, developers work in separate, private views. Sometimes, a team shares a single view (for example, during a software integration period).
As described in Developing Software, each view provides a complete environment for building software that includes a particular configuration of source versions and a private work area in which you can modify source files, and use build tools to create object modules, executables, and so on.
As a build environment, each view is partially isolated from other views. Building software in one view never disturbs the work in another view, even another build of the same program at the same time. However, when working in a dynamic view, you can examine and benefit from work done previously in another dynamic view. A new build shares files created by previous builds, when appropriate. This sharing saves the time and disk space involved in building new objects that duplicate existing ones.
You can (but need not) determine what other builds have taken place in a directory, across all dynamic views. ClearCase includes tools for listing and comparing past builds.
The key to this scheme is that the project team's VOBs constitute a globally accessible repository for files created by builds, in the same way that they provide a repository for the source files that go into builds.
A file produced by a software build is a derived object (DO). Associated with each derived object is a configuration record (CR), which clearmake or omake uses during subsequent builds to determine whether the DO can be reused or shared.
A derived object (DO) is a file created in a VOB during a build or build audit with clearmake or omake.
Each DO has an associated configuration record (CR), which is the bill of materials for the DO. The CR documents aspects of the build environment, the assembly procedure for a DO, and all the files involved in the creation of the DO.
The build tool attempts to avoid rebuilding derived objects.
If an appropriate derived object exists in the view, clearmake or omake reuses that DO.
If there is no appropriate DO in the view, clearmake or omake looks for an existing DO built in another view that can be winked in to the current view.
The search process is called shopping.
This is relevant for very large C or C++ makefile-based projects.
I think the TL;DR version of this is:
Derived objects contain information that describes
What was accessed to build the object, including dependencies that may not be in your build files.
Other files created during the build process ("Sibling Derived Objects")
The commands used to build the object (the "build script"), assuming that clearmake, omake, or the ANT listener was used to run the build.
In the case of clearmake and omake, this information is used to avoid rebuilds, potentially speeding up builds. The lookup is referred to as DO "shopping" and the build avoidance as "winkin."
If you have regulatory or security compliance needs where this level of auditing is critical, there really isn't anything else that does this.

Software versioning in large-scale systems

My company has been developing a system for 10 years. The system has 15 subsystems that are almost independent (they may share libraries, packages or DBs), and these subsystems are built locally by separate teams. A simple main system has also been developed that reads the subsystem configs and builds a page with menus and submenus from them (with shortcuts). Our company's product is the exe of this main system.
Unfortunately, we do not use standard version numbers in our company. We have now decided to enforce a standard, and I found Semantic Versioning a satisfactory one, but I have some questions about our case:
How should changes to a subsystem's version affect the main system's version? Often, even after major changes in a subsystem, the code of the main system remains unaffected. I think changes in the main system should shape that system's version number, but in this case it does not make much sense. Is there any solution for versioning large applications that consist of multiple subsystems?
You gave a link to MAJOR.MINOR.PATCH, where you increment the
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner,
PATCH version when you make backwards-compatible bug fixes
Any change in a subsystem might make the main system incompatible: too much to keep track of.
Each subsystem should use its own version. In the main system you should keep an overview of the dependencies and the main system's own versioning.
You can manage this in different ways. One way would be using Maven and pom.xml files. Another way would be using a config file with the different versions.
Each team can develop their own code independently and assign semantic versions.
Maybe some day you will put the shared libraries, packages and DBs in a repository and all teams can refer to them with their own pom.xml.
You are flexible this way. You might even want to make a second main system (for top clients, or for free accounts?) and change its pom.xml to include/exclude subsystems. The second main system would have its own versioning.
From the SemVer FAQ:
What should I do if I update my own dependencies without changing the public API?
That would be considered compatible since it does not affect the public API. Software that explicitly depends on the same dependencies as your package should have their own dependency specifications and the author will notice any conflicts. Determining whether the change is a patch level or minor level modification depends on whether you updated your dependencies in order to fix a bug or introduce new functionality. I would usually expect additional code for the latter instance, in which case it's obviously a minor level increment.
I would apply this to your scenario as follows: If your main application takes upgrades of its dependent subsystems without having to make major changes in itself, it should increase either the minor or the patch version. You shouldn't need to increment the major version, as its own public API is still considered compatible.
If the main application makes use of new features added to the subsystems, it should be a minor version increase.
Otherwise, it should be a patch version increase.
I think you're on the right track for this - literally treat everything with a SemVer version as its own thing.
If you need to rebuild against an updated linked dependency, then at the least it'd be a patch version update - if they're simply soft links (i.e., someone can update the subsystem without rebuilding or needing fixes) then it's purely a case of wanting a version number that records when it was changed.
Keeping to the SemVer idea, if things cause incompatible changes then taking the highest-level change could make sense. I.e., when updating 5 subsystems at the same time, where 4 are patch updates and one is a minor update, you'd bump only the minor version of the main system (and put them all into that single change). Same with major changes, etc.
The other, similar idea is to downplay the changes slightly if the main build is purely an interface on top - any minor or patch subsystem change would only cause a minor bump on it, and any major subsystem change would also only cause a minor - that way you can keep the major for breaking front-end changes, etc.
Something to bear in mind is that you don't have to follow SemVer perfectly - consistency is the key for something like this - so you could update the patch level for when subsystem changes are merged in, and never reset it, while updating the minor and major versions only for the main system. (There are a couple of open source projects that work this way, I just can't remember which ones off the top of my head).
Not sure if it's any use, but you could look at some of the nodejs packages around and how they update their versions with differing dependency versions - they all include a list of what they use in a package.json file with the version listed in a certain format (ie, exactly x.y.z or >x.y.z etc - https://docs.npmjs.com/files/package.json#dependencies)

How should spec files be organised in a javascript application using MVC

I would like to know your opinion about how you would organize the files/directories in a big web application using MVC (Backbone, for example).
I would use the following layout. Please tell me your opinion:
js
js/models/myModel.js
js/collections/myCollection.js
js/views/myView.js
spec/model/myModel.spec.js
spec/collections/myCollection.spec.js
spec/views/myView.spec.js
This is how I've traditionally organized my files. However, I've found that with larger applications it really becomes a pain to keep everything organized, named uniquely, etc. A 'new' way that I've been going about it is organizing my files by feature rather than type. So, for example:
js/feature1/someView.js
js/feature1/someController.js
js/feature1/someTemplate.html
js/feature1/someModel.js
But, oftentimes there are global "things" that you need, like the "user" or a collection of locations that the user has built. So:
js/application/model/user.js
js/application/collection/location.js
This pattern was suggested to me because then you can work on feature sets, and package and deploy them using requirejs with relatively little effort. It also reduces the possibility of dependencies occurring between feature sets, so if you want to remove a feature or update it with brand new code, you can just replace a folder of 'stuff' rather than hunting for every file. Also, in IDEs, it just makes the files you're working on easier to find.
My two cents.
Edit: What about the spec files?
A few thoughts - you'll just have to pick the one that seems most natural to you I think.
You could follow the same 'feature folder' pattern with the spec files. The upside is that all of the specs are in one place. The downside is that now, much like what you're currently doing, you have two places for one feature's files.
You could put the specs in a 'spec' folder of the feature folder. The upside is that you now have actual packages that can be wrapped up in a single zip file with no chance of clobbering other work. It's also easier to find directly related files when writing tests - they're all in the same parent folder. The downside is that now your production code and test code are in the same folder, publishing it (possibly) to the world. Granted, you'll probably end up compiling the production JavaScript down to one file at some point... so I'm not sure that's much of an issue.
My suggestion - if this is a large application and you figure you're going to have a few hands touching the files, leave something like a 'package.json/yml/xml' file in the folder. In there, list out the production, spec, and any data files you need for testing (you can most likely write a quick shell script to do this for you). Then write a quick script to look through your source folder for 'package.whateverYouChose' files, get the test files and build your unit testing page with them. So, let's say you add another package: run 'updateSpecRunner', or whatever you name the script, and it'll generate you another SpecRunner.html file (or whatever you named the file you're running the specs on). Then you can manually test it in a browser, or automate it using phantomjs/rhino.
Does that make sense?
You can find a good example of how to organize your application at this link:
Backbone Jasmine examples
It looks more or less like your implementation.

Centralized Config Spec for ClearCase Snapshot Views

We have a config spec that we use for our builds that we encourage all developers in our organization to use so that they can run any task in our build without fear of failure. Every now and again we need to update that config spec to include new elements or exclude old elements.
When we do this, the process is to write a quick mail to all of our developers telling them to manually update any views that they use to build our system with the current config spec.
This is annoying and error prone and thus leads many developers to just ignore those mails and then we get called because the build's broken.
I'm very interested in defining the config spec centrally somehow so that all views can use that config spec and we can update it underneath of people. This may seem draconian but when you have hundreds of developers and they're all supposed to be running the same builds, it seems to make sense.
I've already investigated the idea of using a share to store the config spec and then pulling it into developers' views using an include line, but as the documentation states: "Include files are re-read on each execution of setcs and edcs." In testing, this appears to mean exactly what it says: the only time the rules are re-evaluated is when the config spec is set or edited in some way.
The solution I'm looking for would re-evaluate the config spec every time you interact with clearcase, or at the very least update. In that way, I could manage the config spec for everyone.
Thoughts?
It can work, especially if your included config spec doesn't change too often.
Each time it will change, your users will have to run
cleartool setcs -current
(as explained in example #2 of this technote)
You will then need to decide where to store that common config spec:
on a shared drive
in a ClearCase view, in order to benefit from the history feature for that common config spec content.
You can see a full debate in this thread:
However, I have encountered situations where a version controlled
include file was necessary because it referred to plenty of elements
from a legacy code which users had to use to continue their work on some
of the new code. It was a pain and we had to live with it.
Just like with any other 'process', this too needs some 'education' of
the users.

Version Control for a total newbie

I'm totally new to the world of programming and understand very little in terms of jargon and typical methodology.
A while ago I was writing some code, but accidentally deleted some good code while I was deleting bad code. From then on I started creating versions of my files, I would name each file with the date and a version number.
However, this is a pain in the ass, having to give a unique name to each file and then going to my core file and changing the reference to the name of the new file.
And then, just the other day, I accidentally overwrote something important even with this method, probably because of a typo in naming.
Needless to say, this method sucks.
I'm looking for suggestions on better practices and better tools. I've been looking at version control, but a lot of them (git, svn) look really complicated. The idea is to speed up the whole versioning process, not make it harder by having to use the command line.
Right now I'm hoping that there's a tool that would save a unique version of the file every time I hit ctrl-s, and give me one button to create a finalized version.
Of course if there are suggestions for totally different ways of doing things, that would be more awesome.
Thanks everyone.
There are two approaches to this problem:
Versioning on demand. This is the model used by subversion, CVS, etc., etc. When you have made a 'significant' change, you decide to tell the system "keep this version".
Automatic versioning. This is the model used by some old VAXen, Eclipse, IDEA, every wiki ever, and a few writer's tools. Every time you save, a new version is implicitly created. At some remove, old versions may be culled (e.g., only one version is kept from work performed a week ago, rather than every save).
It sounds like you would prefer #2, because it is "fool-proof" -- you never have to go, "oops, I should have 'checked in' / 'kept' my work before making this change." You can always roll back. One downside is that you have to manually step through the old versions to find something, because unlike with #1 you generally are not giving a description of each change.
Another downside is that for large files, or ones that are not easily diffed/patched (i.e. binary files), you will start burning through disk space pretty fast.
As an aside, it sounds like you don't need 90% of the features in a standard SCM system -- branching, labeling, etc. -- but you might find uses for them eventually. So learning one may be a win in the long run. You can do this with svn, etc. but it will take some customizing. If you use a scriptable editor (emacs, vi, TextMate, whatever) you could redefine the "Save" command as "Save and make a new version".
Subversion is more or less the gold standard.
I'd suggest (especially for a newbie) that you check out BeanStalk (www.Beanstalkapp.com) to run your subversion server and TortoiseSVN for your client.
Good luck!
Whatever you do, if someone mentions Visual SourceSafe -- run as fast as you can. VSS was created by Satan himself and handed down to torment developers the world over.
I think you're in a position where you have to get a little bit out of your comfort zone and take some time to learn git. It's pretty easy to learn and use.
Believe me, it's really worth it. Time spent learning git is time well spent.
If you are not working in a team, you could use something like Eclipse's local history feature. It stores versions of your files locally, and you can revert to previous versions whenever you feel like it. More details here: http://help.eclipse.org/ganymede/index.jsp (Search for "local history"). I am pretty sure other IDEs have such a feature too.
If you are collaborating with others on your code, there probably is no way around learning one of the standard tools like SVN, CVS or git. For most of them, there are plugins for many IDEs available, so you don't have to use the command line.
I currently use Subversion, but my source control experience is limited.
I would however suggest reading the tutorial by Eric Sink.
http://www.ericsink.com/scm/source_control.html
It's best to learn how to use an existing 'industry standard' versioning tool like Subversion. Even if you're new to programming and version control, SVN isn't that hard to learn and will serve you well. I personally use and recommend VisualSVN Server and TortoiseSVN for Windows. Both are free and quite simple to use.
For a system that creates a revision on every save, perhaps you should look into a Versioning File System.
I think TortoiseSVN would be a good Subversion client for you to try if you're in Windows. It won't do what you're looking for with every-time-I-save-I-get-a-new-version--you'll have to manually "commit" versions to the repository. When you do a commit, that creates a new version, essentially saving your progress at that point. TortoiseSVN is pretty user-friendly, and it's a GUI, so you won't be working at the command line. You'll be able to do things like right-click a file in Windows Explorer and choose Commit to save your progress. Plus, TortoiseSVN is free and open source.
Subversion is not really complicated. If you are using Windows, TortoiseSVN will help a lot; if you are using Eclipse, the Subclipse plug-in is awesome. (You probably should be using Eclipse regardless :) )
Some of the others are a bit complicated, but you just have to know the pattern with eclipse. Maybe you could "Try it out" with an open source project or some existing subversion server.
The cycle would be:
First you "Check out" a repository. This fills up your specified directory with the contents from the repository.
If you are doing it from the command line--it's "svn co"--there is enough help there to figure out the rest.
Second you edit your files. You don't have to lock them or anything.
If you add a new file, you use "svn add filename" as soon as you add it. This won't actually change the repository until you commit your changes.
When a group of edits are done, you check them in with "svn ci" (also svn commit works).
This one has a SLIGHT twist that you'll always forget--every commit needs a comment. You don't have to specify the files you are committing or anything, but you do need to be in the top level of your project (it will commit everything below your directory).
So the procedure here is, go to the "root" of your project tree and type:
svn ci -m "comment"
piece of cake.
Finally, IF someone else is checking stuff in, things get SLIGHTLY stranger. Before you commit, you should "update" and get their changes. "svn up" is all it takes, but it may warn you that there were merges. This only happens when both of you edited the same file, and 90% of the time the merges will go okay. The rest of the time, it will put little markers in your file telling you what you changed and what they changed. The "up" command will tell you which files it did this to. Go look at them and clean the file up before you check it in.
Always test between "svn up" and "svn ci", you never know if their crappy changes busted your pristine code.
That's really it. It's so easy from the CLI, that the graphics environments are hardly worth it (but subclipse is really nice if you are in eclipse anyway because it will visually show you modified files that need to be checked in).
If you ever forget, svn's command line help is extremely terse and useful, tells you JUST what you need to know, and has help on all the sub-commands and options.
If you're looking for an easy-to-set-up version control system for Windows, I highly recommend TortoiseHg, an easy-to-use Mercurial frontend for Windows. You don't have to worry about setting up and keeping track of a repository separate from your files, but you always can do so if you'd like to. Mercurial is a great tool because it can grow with your needs. It has all the usual features like easy merging, etc. and is quite a bit easier to wrap your head around than Git in my experience.
I think Git is really easy to use especially when you use GitHub. They also provide lots of good guides to get up and running.
http://github.com
http://github.com/guides/home
I've used Git, SVN, CVS, and Perforce. On both Windows and Unix environments.
My vote is definitely for SVN, for its ease of use and flexibility. I prefer to use the command line now, but at one time I was using TortoiseSVN for Windows, which we were able to get non-technical people to use without a hitch.
Use SVN.
You're definitely on the right track in recognizing the need for version control, but you sound unsure about what it might mean for you and your work. Once you learn the concepts behind version control systems, you will really come to appreciate them.
The concepts are simple: a source code control system is a piece of software designed to help you store and manage your code. How you get code into and out of it differs based on which system you choose: one paradigm is that you deliberately "check out" a file, make your changes to it, test it and make sure it's good, then check it back in. Another is that you simply save every change you make, because disk space is dirt cheap, much cheaper than the time and effort you spent to create the source in the first place.
Another important concept is the "baseline" or "label". When your product is in a ready-to-ship state, you tell the source code control system to create a "label" and tag every current item in your entire source code base with that label. That way, when someone reports a bug in version 4.1, you can go to your system, request all the files with the "Version 4.1" label and get exactly the source code they're having a problem with.
Having a source control tool integrated with your development environment makes the whole process much easier than having to mess with command lines. (Don't discount command-line tools because of their complexity; they deliver elegant control to an experienced user, and you will eventually become an experienced user.) But for now, I'd recommend a source code tool that can automate the process as much as possible.
Some things to consider: are you now sharing, or planning to share, development with another developer? That might make a difference in how you want to set up a server. If you're developing alone on your own box, you can set it all up locally, but that's probably not the best approach for a team. (If you're unsure, git is very flexible in that arena.) Are you going to be storing large multimedia files, or just source code? Some source code systems are designed to efficiently store only text files, and will not handle movies, sounds or image files very well.
Something else to know is that most newer source control systems require some kind of "daemon" program running on the server (Subversion, git, Perforce, Microsoft Team Foundation Server) while the older, simpler systems just use the file system directly (Visual Source Safe, cvs) and don't require a server program.
If you don't want to learn much and your demands are low, the simpler solutions should suffice. Microsoft's Visual Source Safe used to come with their Visual Studio products, and was a very simple tool to use. It's not very robust, it's Microsoft-only, and it can't handle large files well, but it's very, very easy to set up and use. If you don't want to spend money, Subversion and git are two stellar open source solutions, and there is a lot of documentation for both on the web.
If you like to spend money, Perforce is considered an excellent choice for professional development teams (and I believe they have a free single-developer version.) If you really like to spend lots of money and want to make Bill Gates happy, Microsoft's Team Foundation Server is a complete software development lifecycle manager, is extremely easy to use in the Windows environment, and very powerful; but you'd probably want to devote an entire Windows server (plus SQL Server) instance to host it, and it will cost you several thousand dollars just on licenses. Unfortunately it is not the right tool for a one-man shop, or if you have no Windows admin experience.
If you have the budget or the connections, bringing in an experienced software engineer to help you get things started might be the quickest path to success. Otherwise, you'll have to do some more research to learn which systems best fit your situation.
Whatever VCS you use, if you choose versioning on demand instead of automatic versioning (to borrow terms from Alex's post), you will have to go through some ceremony to:
-create,
-rename,
-move,
-copy, or
-delete
a file that is under source control.
When you create a new file, you have to Add it to source control before you Commit your changes to the repository.
When you rename, move, copy, or delete a file under source control, do so with your VCS client. In TortoiseSVN and TortoiseGit, the move and copy operations are done with a right-click-and-drag, whereas the rename and delete operations are available via a right-click.
As you can imagine, changing things like the name of a project can be quite the hassle, hence the case for automatic versioning.
Ordinary file edits, and any changes to files not under source control, do not require you to tell your VCS client about them.
Finally, for one-man projects, I prefer git over SVN because SVN requires at least 2 copies of everything: a repository (the "master" copy of the files and history) and a working copy (the copy you do your work on). With git, the repository and working copy are the same thing, which makes my experience simpler.
We use SourceGear Vault, which has great integration with Visual Studio, and is free for a single user. Depending on what framework and languages you're using, though, Subversion is a great free solution.
First of all, please read these articles by Eric Sink. Eric Sink runs a company that creates a source control system called Vault. He explains in a newbie-friendly manner how to do source control, best practices, etc.:
Introduction to Source Control
I found it invaluable when I first wanted to understand Source Control.
SourceGear Vault is FREE for a single user. Its interface is intuitive and integrates well with Visual Studio.
If it's just you, you might want to try Bazaar. It's distributed like Git (so it'll be nice for a single person--no server to deal with), but one of their main goals was to make it much easier to use than Git.
Also, there is a handy GUI tool called TortoiseBzr that should make it amazingly easy to use. http://bazaar-vcs.org/TortoiseBzr
There is in fact such a tool. It is called emacs.
Just create yourself a "~/.emacs" file and put the following lines in it:
(setq version-control t)   ; always make numbered backups (filename~n~)
(setq kept-new-versions 5) ; keep the 5 newest backup versions
(setq kept-old-versions 5) ; and the 5 oldest
And then restart emacs.
This tells emacs to save your 5 oldest and 5 newest versions of that file. They will be kept in files named filename~n~ where "filename" is your file's normal name, and "n" is the backup number.
If you develop your project alone (and don't need a server for collaboration), Mercurial might be your system of choice. I personally value one of its features: it uses only one place to save its information, the .hg directory in the root of your project. It doesn't put its data into every directory (like SVN). This way the archive and the project directory are easy to manage.
I've used Visual Source Safe, Perforce, and Subversion. They were all fine, but I would have to say that the support and extensions for Subversion just seemed slightly better. If you're planning on entering/staying in the software industry, you MUST know the fundamentals of source control, and I would highly recommend setting up one of these source control systems. Subversion would be my recommendation and is free as well. It will be complicated at first, but you really should use an SVN client to add a GUI, to increase utility and cut down on all the complication you're observing.
A quick google of "dreamweaver svn" reveals that many people are working with Subversion in Dreamweaver. I'm an advocate of version control, and SVN in particular, so I would recommend you look into that :)
If you don't want to use a full-on version control system (as noted above), you may be able to improve your lot by refining and automating the procedure you described originally. Depending on your comfort with the tools, you should be able to put together a script in Dreamweaver itself or in Windows scripting (PowerShell, VBA, Perl, etc.) that will at least make date-named copies of the folder you are working in every so often. This will keep you from having to do it yourself and make sure there aren't any typo-related problems. Further down that path you can have your script put a copy of your work on a backup drive or remote server, and then you'd have a backup, too.
I'm afraid I don't know much about Dreamweaver, but if it has much scripting support built in, you may even be able to "hook" into the Save/Auto-Save functions and have them do exactly what you want.
Hope this helps,
adricnet

Resources