There are many different version schemes, and it seems like every major software company uses a different one, but I would like to know which scheme is best for a mISV.
Also, if you can, I would like you to write which scheme you use in your company, the pros and cons of that scheme, and why you chose it.
Related Questions
Deciding on version numbers
How to do version numbers?
How do you know what version number to use?
What is your preferred style of product version number and why? (this answers my second question)
Versioning Style Guide
Semantic Versioning (this is probably the best versioning scheme for components, not sure about applications for end-users)
Hints
This is a list of hints I've found on the internet:
Use at least three-part version numbers (2.9.0, 2.10.0) so it is obvious they are not decimals.
Use date/time based versioning (source)
Method to extract compile time from .NET automatic version scheme
Automatic versioning using MSBuild.Community.Tasks
mISV programs can be characterized by a high release rate and, at the very least, a high build rate (up to several builds a day).
In that context, it can be interesting to monitor:
a build id (which can be an SVN revision number or a Git or Mercurial SHA1)
a classic Major.Minor.Build version
For the team, you can communicate in terms of build ids, while in terms of release management for the client, a more classic version scheme is in order.
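To make that concrete, here is a minimal sketch (the macro names and the idea of injecting the VCS id through a compiler flag are my own illustration, not a standard mechanism):

```c
#include <stdio.h>

/* Classic client-facing version, bumped by hand or by the build. */
#define VERSION_MAJOR 2
#define VERSION_MINOR 9
#define VERSION_BUILD 14

/* Internal build id, e.g. an SVN revision or Git SHA1 injected by the
   build script: cc -DBUILD_ID="\"3fa9c2e\"" version.c */
#ifndef BUILD_ID
#define BUILD_ID "unknown"
#endif

int main(void)
{
    /* The client sees the classic version; the team also sees the build id. */
    printf("v%d.%d.%d (build %s)\n",
           VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD, BUILD_ID);
    return 0;
}
```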
Quite frankly, I think this one is solved. Use date and time stamps. Your version numbers then turn into readable strings that people understand.
On the off chance that you have many different builds out in the wild, each should be tagged as such. Besides a version number (which is a date and time stamp) you'd have a simple tag. Mostly these are simply debug and release, but there are a variety of other tags that make sense, such as staging, testing, or feature/branch-specific tags.
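A minimal sketch of generating such a stamp (the format and the tag names here are just illustrative):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Build a date/time-stamp version string, e.g. "2013.06.21.1430-release". */
    char stamp[32];
    time_t now = time(NULL);
    strftime(stamp, sizeof stamp, "%Y.%m.%d.%H%M", localtime(&now));

    const char *tag = "release";   /* or "debug", "staging", "testing", ... */
    printf("%s-%s\n", stamp, tag);
    return 0;
}
```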
If you want to embed metadata about your build process/environment, I consider that to go beyond simple version numbering, but it is sometimes very helpful.
That's about it.
I want to do an automatic story generation demonstration, and the approach I am taking is AI planning. I have been reading several relevant papers and have figured out that PDDL is perhaps the most widely used language to formulate the planning problem. I have been looking at the syntax and several example programs to learn how to use it.
The part where I am stuck is how to get a planner to work. I have found some popular planners (fast-forward, MBP, IPP) but have not been able to make them work, even using the instructions from their own sources.
I am using Gnome Terminal on Ubuntu 13.04.
I am very new to planning, and this may be a very naive question, but I assure you that I have been searching for more than 3-4 days without any luck. Also, suggestions on using some other planning system are welcome.
If you are using Linux then I strongly suggest using Fast Downward (it has its own web page - just google it). First of all, it is currently one of the best-known planning systems in the AI planning community and, further, it is really easy to get running. Well, you still need half an hour or so, but there is an easy-to-follow step-by-step description telling you where to check out the code and which commands you need to run.
It also implements most of the known planning heuristics that are required to solve problems fast or even optimally. (Planning requires search, and heuristics make the search "goal-oriented" rather than blind; if the heuristic is admissible and/or monotone (depending on the kind of search algorithm chosen -- see "fast forward and pddl: is the computed solution the best?"), the planner is guaranteed to find optimal solutions.)
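As a rough illustration only (the driver script name and search options are assumptions that depend on the Fast Downward version you check out - normally you would just type the command in your terminal), an invocation with A* and the admissible LM-Cut heuristic, shelled out from C, might look like this:

```c
#include <stdlib.h>

int main(void)
{
    /* Assumed invocation of the Fast Downward driver script with A* search
       and the LM-Cut heuristic; domain.pddl/problem.pddl are placeholders. */
    return system("./fast-downward.py domain.pddl problem.pddl "
                  "--search \"astar(lmcut())\"");
}
```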
Concerning literature, I suggest reading/skimming through the following two journal articles:
Porteous, J.; Cavazza, M.; and Charles, F. 2010. Applying planning to interactive storytelling: Narrative control using state constraints. ACM Trans. Intell. Syst. Tech. 10:1-10:21.
http://dl.acm.org/citation.cfm?id=1869399
Patrik Haslum. "Narrative Planning: Compilations to Classical Planning". Journal of AI Research, vol. 44, p. 383-395, 2012
http://www.jair.org/papers/paper3602.html
Well, both MBP and IPP are really, really old systems. If you're just looking for a ready-made planner to use in an off-the-shelf manner, I'd suggest you follow the pointers to the authors (and software) that took part in the last International Planning Competition (2011):
http://www.plg.inf.uc3m.es/ipc2011-deterministic/ParticipatingPlanners.html
I want to create a new data type and new operators in PostgreSQL.
I saw that in the documentation, it is possible to incorporate new source files in C (for example) and create a new data type and operators. PostgreSQL is extensible in that direction. More information at: documentation
But PostgreSQL is also open source, so I could alter the source code to add a new data type and compile a new version.
With that in mind, I want to know the differences, advantages, and disadvantages of each method of including a new data type in PostgreSQL. I'm very concerned about performance in query processing.
Thank you.
If you modify PostgreSQL you have to maintain the whole code base, and you have to redo your patching every time you want to upgrade, even between minor versions. If you make an extension, you only have your little extension to maintain. It's also much easier to distribute a small extension if you ever want to do that.
I fully agree with Jachim's answer.
Another thing is:
Developing your own C-language extension for PostgreSQL is (rather) well documented - simple programs can be done by compiling just one file of code and writing the corresponding function declarations. Adding a custom data type is a bit more complicated, but still doable. The extension I developed was even written in C++, with just a bit of wrapper glue to PostgreSQL's plain C - that made development much more flexible.
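To give a feel for how small such an extension can be, here is the classic "add one" example in the style of the PostgreSQL documentation (version-1 calling convention); the SQL-side CREATE FUNCTION declaration that registers the compiled library is omitted for brevity:

```c
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* A minimal C-language function: takes an int4 and returns it plus one. */
PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}
```

Custom data types work the same way, except that you also provide input/output functions and declare the type and its operators on the SQL side.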
Altering the PostgreSQL core, however, is more complicated in terms of how you start and what you do. And in the end, you achieve the same thing.
To sum it up, C-language functions give you all the advantages:
High performance by utilising PostgreSQL's internal data types
Simple programming interface
Just a small bit of code with a documented and presumably very stable function interface
I cannot see any advantages to altering PostgreSQL's core, but many disadvantages:
Long compilation times
Maintaining your own code branch and regularly reapplying your patch to the current release
Higher risk of bugs.
If you need examples of a lot of different ways to use the C-language interface, have a look at the PostGIS source code - they use nearly all function types and have a lot of fancy tricks in their code.
There are no differences between internal operators and data types and custom operators and data types, so they have the same performance. External and internal implementations respect the same rules and patterns, so there is no reason to hack Postgres for this.
We have patched PostgreSQL at GoodData, and we have our own extensions too. Everywhere it is possible and practical we use custom extensions, and where it is not possible we use our own hacks: backports from 9.2 and 9.3, some enhancements to pg_dump and psql, statistics. But we have active PostgreSQL hackers in the company, which is not usual. For users without PostgreSQL hacking experience, creating extensions is a safe and well-performing solution.
I was thinking of starting a project that very clearly needs a persistent store. I was about to reluctantly decide on a RDBMS, when I came across an article which briefly mentions CouchDB. Seems some advancements in DB technology have happened since I last looked, so I thought I would ask here about databases before I got into it.
Here are my criteria. (I list the criteria again at the end, so if you want to skip the explanations just scroll down.)
The project is open source and I will not be asking anything for it, so preferably the database is open source and free. Furthermore the software has to run on both Linux and Windows.
There are parts of the project that have to be in C++. The project is not large enough code-wise to justify using a second language, so basically the whole thing will be C++.
This project will not have anything to do with the web, so preferably the database will not require the detritus of a web library.
The objects I want to store fall into one of two categories: basic objects and container objects. The difference is that container objects contain further objects, i.e. a parts-of-parts problem. I need a database that can handle such cases cleanly and efficiently.
I also expect the schema to evolve rapidly, at least initially. I also suspect that some of the old data simply will not fit into the new schemas, so I would like to keep different versions of the schema around. When possible, I would like to be able to transform data in one schema into another schema.
For the application to work the way intended, people would have to exchange large chunks of database with each other. So I would want simple ways of importing and exporting data, which I could automate to some degree.
Finally, it would be nice if the database could in some way be simulated in unit tests.
Those are my requirements. I have replicated them below to make it easier for people answering.
Thank you
Non-technical requirements
1. Open source, preferably free.
2. Runs on Windows and Linux.
Technical requirements
1. Has a C++ interface.
2. Is able to handle a non-web application, preferably without REST.
3. Can handle a "parts of parts" problem fairly well.
4. Can handle multiple indexes.
5. Has some concept of schema versions, can handle multiple schema versions, and can migrate tables from one schema to another.
6. Should have a simple mechanism for moving data from one instance of the database to another.
7. Preferably has some mechanism for testing.
HDF5 is a binary format which behaves like a hierarchical database. It has bindings and libraries for C++ and Python (I only use the latter), and it is used to store large amounts of data, like those produced in certain physics and astronomy experiments.
http://www.hdfgroup.org/HDF5/
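As a tiny illustration of the C API (the C++ bindings are similar; file and dataset names here are placeholders):

```c
#include "hdf5.h"

int main(void)
{
    /* Create a file and write a 1-D dataset of four integers. */
    hid_t file = H5Fcreate("parts.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    hsize_t dims[1] = { 4 };
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "/part_ids", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    int data[4] = { 1, 2, 3, 4 };
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```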
I've looked at a few NoSQL databases some time ago (I had a different requirement than you, though - I needed it to be a standalone server). The ones that I remember as particularly interesting are Redis and Kyoto Cabinet. Have a look.
BTW, you don't mention any performance requirements. If they are modest, have you considered SQLite? Simple, embedded, stable, and with the flexibility of SQL after all. With prepared statements, the performance penalty of SQL should not be very high.
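For instance, a minimal sketch with the SQLite C API and a prepared statement (table and file names are placeholders; error checking is trimmed to the essentials):

```c
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;

    /* Open (or create) the database file and make sure the table exists. */
    sqlite3_open("parts.db", &db);
    sqlite3_exec(db,
                 "CREATE TABLE IF NOT EXISTS part(id INTEGER PRIMARY KEY, name TEXT)",
                 NULL, NULL, NULL);

    /* Insert a row through a prepared statement with a bound parameter. */
    sqlite3_prepare_v2(db, "INSERT INTO part(name) VALUES (?)", -1, &stmt, NULL);
    sqlite3_bind_text(stmt, 1, "gearbox", -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) != SQLITE_DONE)
        fprintf(stderr, "insert failed: %s\n", sqlite3_errmsg(db));
    sqlite3_finalize(stmt);

    sqlite3_close(db);
    return 0;
}
```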
EDIT: oops, I just noticed that you asked this more than a year ago... Well, perhaps you can tell us what you've chosen :)
I see this time and time again. The UAT test manager wants the new build to be ready to test by Friday. One of the first questions asked in the pre-testing meeting is, "what version will I be testing against?" (which is a fair question to ask). The room goes silent, then someone will come back with, "All the assemblies have their own version, just right-click and look at the properties...".
From the testing manager's point of view, this is no use. They want a version/label/tag across everything that tells them what they are working on. They want this information easily available.
I have seen solutions where the versions of different areas of a system are stored in a datastore and then shown in the main application's about box. The problem is, this needs to be maintained.
What solutions have you seen that get around this ongoing problem?
EDIT: The distributed system covers VB6, Classic ASP, VB.Net, C#, Web Services (across departments, so which version are we using?), and SQL Server 2005.
I think the problem is that you and your testing manager are speaking of two different things. Assembly versions are great for assemblies, but your test manager is speaking of a higher-level version, a "system version", if you will. At least that's my read of your post.
What you have to do in such situations is map all of your different component assemblies into a system version. You say something along the lines of "Version 1.5 of the system is composed of Foo.Bar.dll v1.4.6 and Baz.Qux.dll v2.6.7 and (etc.)". Hell, in a distributed system, you may want different versions for each of your services, which may, in and of themselves, be composed of different versions of .dlls. You might say, for example: "Version 1.5 of the system is composed of the Foo service v1.3, which is composed of Foo.dll v1.9.3 and Bar.dll v1.6.9, and the Bar service v1.9, which is composed of Baz.dll v1.8.2 and Qux.dll v1.5.2 and (etc.)".
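As a hedged sketch of the idea (language-agnostic in spirit; the names and versions are the hypothetical ones from the example above), such a mapping is essentially just a manifest:

```c
#include <stdio.h>

/* An illustrative "system version" manifest: the system version on one side,
   the component assemblies it is composed of on the other. */
struct component { const char *name; const char *version; };

static const struct component system_v1_5[] = {
    { "Foo.Bar.dll", "1.4.6" },   /* hypothetical versions from the text */
    { "Baz.Qux.dll", "2.6.7" },
};

int main(void)
{
    printf("System version 1.5:\n");
    for (size_t i = 0; i < sizeof system_v1_5 / sizeof system_v1_5[0]; ++i)
        printf("  %s v%s\n", system_v1_5[i].name, system_v1_5[i].version);
    return 0;
}
```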
Doing stuff like this is typically the job of the software architect and/or build manager in your organization.
There are a number of tools that you can use to handle this issue that have nothing to do with your language of choice. My personal favorite is currently Jira, which, in addition to bug tracking, has great product versioning and roadmapping support.
Might want to have a look at this page that explains some ways to integrate consistent versioning into your build process.
There are a number of different things that contribute to the problem. Off of the top of my head, here's one:
One of the benefits of a distributed architecture is that we gain huge potential for re-use by creating services and publishing their interfaces in some form or another. What that then means is that releases of a client application are not necessarily closely synchronized with releases of the underlying services. So, a new version of a business application may be released that uses the same old reliable service it's been using for a year. How shall we then apply a single release tag in this case?
Nevertheless, it's a fair question, but one that requires a non-trivial answer to be meaningful.
Don't use build-based version numbering for anything but internal references. When the UAT manager asks the question, you say "Friday's*".
The only trick then is to make sure labelling happens reliably in your source control.
* insert appropriate datestamp/label here
We use .NET and Subversion. All of our application assemblies share a version number, which is derived from manually updated major and minor revision numbers and the Subversion revision number (<major>.<minor>.<revision>). We have a prebuild task that updates this version number in a shared AssemblyVersionInfo.vb file. Then when testers ask for the version number, we can give them either the full three-part number or just the Subversion revision. The libraries we consume either aren't changing, or the changes are not relevant to the tester.
Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (a Progress database, with Progress 4GL for the language) incurs quite a bit of licensing expense for our customers due to multi-user license fees for database connections, etc.
After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things:
Create the system with a nice GUI front end (it is currently CHUI, and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and GUI... shudder).
Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them.
Use proper design patterns to make the product easy to add to or change any part at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is compiled directly against the database; small changes in the database require lots of code recompilation.
Our new system will be Linux based, with the possibility of a client application providing functionality from one or more Windows boxes.
So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone who has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise-level DB), but that would limit us to Windows (as far as I know, anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySQL based implementation.
To summarize we are looking to do the following:
Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
Deliver a solution that is easily maintainable and supportable.
A solution that has a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would cost a ton of cash to convert over to PCs).
Suggestions would be appreciated.
Thanks
[UPDATE]
I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into and include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).
[UPDATE]
For anyone who is interested, we ended up going with Microsoft Dynamics NAV, LS Retail (a plugin for the point of sale and various other things) and then did some (and are currently working on) customization work on top of that. This setup gave us the added benefit of having a fully integrated g/l system, which our current system lacked.
Java for the language (or Scala if you want to be "bleeding edge"; depending on how you plan to support it and what your developers are like, it might be better, but also worse)
H2 for database
Swing for GUI
Reason: Free, portable and pretty standard.
Update: Missed the part where the system should be a client-server setup. My assumption was that the database and client should run on the same machine.
I suggest you first research your constraints a bit more - you made a passing reference to a client using a particular type of terminal - this may limit your options, unless the client agrees to upgrade.
You need to do a lot more legwork on this. It's great to get opinions from web forums, but we can't possibly know your environment as well as you do.
My broad strokes advice would be to aim for technology that is widely used. This way, expertise on the platform is cheaper than "niche" technologies, and it will be easier to get help if you hit a brick wall. Of course, following this advice may not be possible if you have non-negotiable technology already in place at customers.
My second suggestion would be to complete a full project plan, with detailed specs and proper cost estimates, before going with the "rewrite from scratch" option. Right now, you're saying that it would be cheaper to rewrite the system than maintain it, and you don't really know how much it would cost to re-write.
I suggest you use the browser for the UI.
Organize your application as a web application.
There are tons of options for the back end. You can use Java + MySQL. A Java back end will save you from the Windows/Linux debate, as it will run on both platforms, and you won't have any licensing costs for either Java or MySQL. (Edit: there are definitely a lot of other languages that have runtimes for both Linux and Windows, including PHP, Ruby, Python, etc.)
If you go this route, you may also want to consider Google Web Toolkit (GWT) for creating the browser based front-end in a modular fashion.
One word of caution, though. Browsers can be pesky when it comes to memory management. In our experience, this was the most significant challenge in doing a browser-based POS. You may want to check out Adobe Flex, which runs in the browser but might be more civil in its memory management.
What is CHUI? Character-UI, as in VT terminals? Or even 3270 style?
It sounds like you need a 3-tier system - the database backend, a middle-layer that runs the bulk of the back-end business processes, and a front-end layer for the CHUI / GUI / data-gateway.
All three layers can reside on one machine; or you can distribute the tiers out to various servers. The front-end layer would control the actual terminals, whether they are VT-terminals, or a web-browser, or a custom-written 'client' application.
Make sure you have considered the hardware needs here -- are you going to have barcode scanners, cash drawers, POS debit/credit terminals, et cetera? If you are using a standard browser, it might be hard to reliably integrate those items. (At the very least, you're likely going to have to write special applets to handle them.)
Finally, consider the possibility of thin-client technology on Windows. It greatly simplifies system management, since you only have to upgrade the software centrally. Thin-client PCs are cheap -- sub-$200.
Golden Code Development (see www.goldencode.com) has a technology that does automated conversion of Progress 4GL (the schema and code... the entire application) to a Java application with a relational database backend (e.g. PostgreSQL). They currently support a very complete CHUI environment and they do refactor the code. For example, the conversion separates the UI, the data model and the business logic into separate Java classes. The entire result is a drop-in replacement that is compatible with the original (users don't need retraining, processes don't need to be modified, the data is migrated too). This is possible because they provide an application server and a set of runtime classes that provide that compatibility. The result of the automated conversion is not something that needs further editing before you can compile and run it. True terminal support is included so hardware terminals still work (it requires a small JNI library to access NCURSES from Java). All the rest of the code in the runtime is pure Java. No Progress Software Corp technology is used in the resulting system and it runs on Linux.
At least one converted system is already in production, running a 24 by 7 mission critical environment. It is a converted ERP system that their mid-sized pilot customer uses to run their entire business.