Cross Platform, Open Source, Data Synchronization Strategy - mobile

The goal is to have occasionally connected clients belonging to one user, running on an array of different platforms, share a common dataset. The server will be GNU/Linux. Platforms and devices initially targeted include GNU/Linux, Android and Windows on x86_64, i686, ARMv7 and ARMv8, with possible consideration given to OS X and iOS devices. The UI toolkit will likely be Qt5. The non-interface logic will be in C++. My code will be licensed under the GPL or AGPL.
Many applications, Google's Maps for instance, require a reliable connection to work effectively. My design decision is that each client should be fully functional without a network connection, so long as the user has not changed the data from a different device/platform; in that case, a synchronization (which does require network access) brings the client up to date.
Some applications are written with hand-rolled data persistence and synchronization layers customized for each platform or subset of platforms. While this is quite effective, it is beyond the skill and manpower available (just me :) ).
Also, though I'm somewhat conditioned by past experience to think of things from a relational database perspective, the data I'm expecting is probably better suited to a JSON/NoSQL database. Though there is UnQLite, it has no more synchronization capabilities than SQLite. And things like MongoDB, CouchDB, etc. are, as far as I have found, solely targeting the data center or cloud at the moment. Really, the same seems to be true of SQL solutions. Are there truly cross-platform databases that have native synchronization/replication capabilities?
I've spent the past weeks looking at a variety of options, but have found little that is truly useful. The closest I've seen to something useful is Couchbase; however, it seems to be rather unstable (from an API standpoint, not in the sense of crashes), has a huge set of dependencies, appears to be little more than a SQLite wrapper on mobile platforms, and has limited developer docs at the moment.
If there is an applicable library or toolkit, please point it out. If not, or if this is excessively "discussion oriented", please kindly point me to an appropriate resource to find help before closing this question.
Notes:
How would you design & implement a Cross Platform synchronization mechanism? This one talks about hand-rolling solutions. If that is the only way to go, I would appreciate references specifically applicable to SQLite3, since I know that it is stable and available on just about everything but the kitchen sink. (A rough sketch of the kind of thing I have in mind appears after these notes.)
Cross platform data synchronization: does not really address anything significant. One of the main concerns I have is synchronizing data effectively while minimizing network bandwidth requirements. Even in the US, there are vast areas where high-speed data is simply not available. To me, usability in the widest range of circumstances is important.
How to Synchronize Android Database with an online SQL Server? This post is somewhat helpful, but I am a total, absolute novice at developing for anything but *NIX on servers and workstations. Again, if SQL-based persistence is necessary for this set of requirements, more pointers to guidance would be helpful.
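For what it's worth, here is a minimal sketch (Python, stdlib sqlite3; all table and column names are invented for the example) of the trigger-maintained changelog approach that hand-rolled SQLite sync schemes tend to use. Each device ships only the changelog rows its peer has not yet seen, which keeps bandwidth down; conflict resolution (last-writer-wins on mtime, say) would still have to be defined on top of it.

    import sqlite3

    # Hypothetical schema: a data table plus a trigger-maintained changelog
    # that a sync pass can replay against a peer.
    conn = sqlite3.connect("notes.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS notes (
        id    TEXT PRIMARY KEY,   -- UUIDs avoid key collisions across devices
        body  TEXT NOT NULL,
        mtime INTEGER NOT NULL    -- unix epoch, set by the application
    );
    CREATE TABLE IF NOT EXISTS changelog (
        seq   INTEGER PRIMARY KEY AUTOINCREMENT,
        op    TEXT NOT NULL,      -- 'I', 'U' or 'D'
        id    TEXT NOT NULL,
        mtime INTEGER NOT NULL
    );
    CREATE TRIGGER IF NOT EXISTS notes_ins AFTER INSERT ON notes BEGIN
        INSERT INTO changelog(op, id, mtime) VALUES ('I', NEW.id, NEW.mtime);
    END;
    CREATE TRIGGER IF NOT EXISTS notes_upd AFTER UPDATE ON notes BEGIN
        INSERT INTO changelog(op, id, mtime) VALUES ('U', NEW.id, NEW.mtime);
    END;
    CREATE TRIGGER IF NOT EXISTS notes_del AFTER DELETE ON notes BEGIN
        INSERT INTO changelog(op, id, mtime) VALUES ('D', OLD.id, OLD.mtime);
    END;
    """)

    def changes_since(last_seq):
        """Rows a peer has not seen yet; ship only these to save bandwidth."""
        return conn.execute(
            "SELECT seq, op, id, mtime FROM changelog WHERE seq > ? ORDER BY seq",
            (last_seq,)).fetchall()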

Related

JitterBit vs Dell Boomi vs Celigo

We've narrowed our selection for an iPaaS down to the above 3.
Initially we're looking to pass data from a cloud based HR system to Netsuite, and from Netsuite to Salesforce, and sometimes JIRA.
I've come from a Mulesoft background, which I think would be too complex for this. On the other hand, it seems that Celigo is VERY drag and drop, and there's not much room for modification/customisation.
Of the three, do you have any experience/recommendations? We aren't looking for any code heavy custom APIs, most will just be simple scheduled data transfers but there may be some complexity within the field mapping, and we want to set ourselves up for the future.
I spent a few years removing Celigo from NetSuite and Salesforce. The best way I can describe Celigo is that it is like the old school anti-virus programs which were often worse than the viruses... lol... It digs itself into the end system, making removing it a nightmare.
Boomi does the job, but is very counter-intuitive, and overly complex. You can't do everything from one screen, you can't easily bounce back and forth between tasks/operations/etc. And, sometimes it is very difficult to find where endpoints are used, as they are not always shown in their "where is this used" feature. Boomi has a ton of endpoint connectors pre-built (the most, I believe), but I have not seen an easy way to just create your own. Boomi also has much more functionality than just the integrations, if that is something that may be needed.
Jitterbit, my favorite, is ridiculously simple to use. You can access everything from one main screen, and you can connect to anything (as long as it can reach out to the network, or you can reach it via the network - internal or external). Jitterbit has a lot of pre-built endpoint connectors, and it is also extremely easy to just create a connection to anything you want. The win with Jitterbit is that it is super easy to use, super easy to learn, it always works, and they have amazing support (if you need it). I have worked with Jitterbit the most (about 6 years), and I have never been unable to complete an integration task in less than a couple of days, max.
I have extensive experience with the Dell Boomi platform but none with JitterBit or Celigo. Dell Boomi offers a very versatile and well-supported iPaaS solution. The technical challenges of Boomi are some UI/usability issues (#W3BGUY mentioned the main ones) and the lack of out-of-the-box support for CI/CD and DevOps processes (code management, versioning, deployments etc.).
One more important component to consider here is the pricing of the platform. Boomi charges its clients yearly connection prices. A connection is defined as a unique combination of URL, username and password. The yearly license costs vary and can range anywhere from $1,000 to $12,000 per license per year. The price depends greatly on your integration landscape and the discounts provided, so I would advise engaging with the vendor early to understand your costs. Would be great to hear from others on pricing for JitterBit and Celigo.
Boomi is also more than just an iPaaS platform. They offer other modules of their platform to customers: API Management, Boomi Flow (workflow and automation module), Master Data Hub (master data management). Some of these modules are well developed and some are in their infancy (API Management).
From my limited experience with MuleSoft platform, I share the OP's sentiments about it being too complex for simple integrations. They do provide great CI/CD and DevOps functionality though if that is something that is needed.
There is no simple answer to a question like this. One needs to look at multiple aspects of the platform and make a decision based on a multitude of factors. I would advise looking at Gartner and Forrester reports for general guidelines and working out the pricing (initial and recurring) with the vendor.
I have only used Jitterbit, so can only comment on that. It works fine. It is pretty intuitive and easy to use, yet still has some flexibility with writing your own queries, defining and mapping file formats, and choosing different transfer protocols.
I've only used the free version (which you need to host somewhere yourself, and which is not supported) and it was good enough for production tasks. If you have the luxury of time, I'd say download it and try it out. If it works for you, throw it on a server or upgrade to the cloud version.
One note: Jitterbit uses background services. If you run it locally and then decide to migrate your account to a server, you need to stop those services on your local machine. Otherwise, it will try to run jobs from both locations, and that doesn't turn out well.
Consider checking out Choreo as well. It has a novel simultaneous code + low-code approach for integration development. And provides rich AI support for performance monitoring, debugging, and data mapping.
Disclaimer: I'm a member of the project.

Netsuite Salesforce Integration ESB vs Prebuilt Connectors

Is there a benefit to using prebuilt connectors to and from SaaS billing platforms like Aria/Zuora when they live between Salesforce and Netsuite, used as pure CRM and ERP/Accounting/Finance respectively? That is, versus using an ESB/integration platform like Mulesoft or Boomi.
We are currently looking at changing billing and ERP systems and having them integrate together and with Salesforce CRM. So the chain would look like:
CRM -- Billing Solution -- ERP
Many of the billing systems have prebuilt connectors that work with ERP systems like Netsuite or Fusion, as well as connectors for Salesforce. Not to mention web service end points/APIs.
But there are integration vendors like Mulesoft and Boomi (basically Enterprise Service Bus PaaS providers) that also allow integration between the services.
I come from a SOA background and tend to favour a standalone ESB to connect the systems, but due to my lack of familiarity with SaaS ERP systems I don't understand the benefits and pitfalls in the prebuilt connector vs ESB debate. I understand the concepts behind avoiding point-to-point integration, which would count as a benefit of using ESBs. But is there a benefit to using the prebuilt connectors within the SaaS platforms ... and are there serious downsides (my main concern)?
Can anyone provide some insight here? I am not asking for "which one is best", just some real world experience good or bad that could help someone make these kinds of decisions.
I cannot provide a comprehensive comparison between the services you plan on using, but your question is quite interesting so I thought I'd share my thoughts and experience and hope you'll benefit from it.
Prebuilt connectors are not something new - they existed long before SaaS and iPaaS became a thing. So their pros and cons are still the same, the main issues you will be looking at are still very much related to the lack of flexibility you'll be facing and of course, the shortcomings of point-to-point integration. Things are somewhat refracted via the prism of SaaS/iPaaS but I believe that most aspects are still relevant.
Prebuilt Connector capabilities and support
You need to assess to what extent a prebuilt connector really covers the integration between the two systems. Services like Salesforce take pride in their customizability and extensibility via 3rd party extensions. In most cases the connector will follow a one-size-fits-all approach that only satisfies the most common and simple integration needs. It's all fun and games until something has to change. It is not possible to know in advance what you could need in the future, but think about it - would you be able to count on having your customisations and extensions covered by the prebuilt connector in case you decide to integrate them as well?
Another point you must consider is support - what happens if one of those companies decides to suddenly announce that they will stop supporting future integration via the prebuilt connectors you are already using? You should check to see if there are any guarantees for you.
Tight Coupling and Service provider lock-in
Using point-to-point connectors will couple the systems to each other, so you'll be severely limiting your options to switch between platforms if you need to at some point. It might seem a fairly simple integration scenario now, but adding more systems to the mix over time generally makes things even worse, since you are going to have dependencies here and there, and not every new system will have an out-of-the-box connector to integrate easily with all the others you're already using. Having a middleware gives you the precious ability to map and transform data if needed (see the small sketch below), and maybe even apply some business logic that makes your life much easier (and cheaper). Also you'd be able to replace a system without having to replace others depending on it.
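Just to make the mapping point concrete, here is a toy sketch (Python; the field names on both sides are invented for the example) of what the middleware layer buys you:

    # Toy illustration of mapping/transforming in the middleware; neither
    # endpoint needs to know the other's vocabulary.
    CRM_TO_ERP = {
        "AccountName": "customer_name",
        "BillingStreet": "address_line1",
    }

    def to_erp(crm_record, mapping=CRM_TO_ERP):
        # Swap this dict out and the CRM side never notices an ERP change.
        return {erp: crm_record.get(crm) for crm, erp in mapping.items()}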
Consider your scenario: if you decide to change the billing system, you will have to find one that’s being properly supported by both the CRM and ERP providers. Thus, you could potentially remain locked into using exactly these three, even though for example they don’t fit your needs anymore or there is something else on the market that would have given you great competitive advantage if only you could integrate with it.
Orchestration and future investments
An important note about the p-2-p scenario is that you will not be able to implement process services that span across all the systems if needed. The added flexibility and benefits of using even simple forms of orchestration (I'm not even talking about achieving what can be achieved with a full-featured business process management suite) will be out of reach for your business. When the market changes and time to market is the deciding factor, you may not be prepared.
Thoughts on choosing iPaaS
Using an iPaaS platform looks like a much better decision in the long run. Yet, you still have to make sure that the platform does not just give you some set of predefined connectors and drag&drop beauties (they all do), but also the ability to easily implement your very own integrations from scratch while supporting industry standards. I think that it is absolutely crucial to have this kind of flexibility when talking about an ESB solution, be it in the cloud or on premises.
The potential cons of the iPaaS approach would be:
you come to depend on yet another service provider and you will have more costs because the service is not free;
your data travels to another service provider, so there is additional risk in terms of security, no matter what the service providers may try to tell you;
more upfront effort spent on design and implementation;
additional burden, related to having to maintain integration and accommodate potential changes (however rare they might be) if a new version comes out.
Conclusion
It’s all really a tradeoff between desired flexibility and the investment that you’re willing to make. Your decision will heavily depend on the current state of your business and your growth expectations going forward, rather than the purely technical side of things.
I hope my thoughts gave you some perspective. Please update the question with your decision and reasoning when the time comes. Good luck!

Architecture for a machine database [closed]

This might be more of a serverfault.com question but a) it doesn't exist yet and b) I need more rep for when it does :~)
My employer has a few hundred servers (all *NIX) spread across several locations. As I suspect is common, we don't really know how many servers we have: more than once I've been surprised to find a server that's been up for 5 years, apparently doing nothing but elevating the earth's temperature slightly. We have a number of databases that store bits of server information -- Puppet, Cobbler, Nagios, Cacti, our load balancers, DNS, various internal spreadsheets and so on -- but it's all very disparate, incomplete and overlapping. Maintaining this mess costs time and money.
So, I'd like to come up with a single database which holds details of what each server is (hardware specs, role, etc.) and replaces (or at least supplies data for) the databases mentioned above. The database and web interface are likely to be a Rails app, as this is what I have most experience with. I'm more of a sysadmin than a coder.
Has this problem already been solved? I can't find any open source software that really fits the bill and I'm generally not too keen on bloaty, GUI vendor-supplied solutions.
How should I implement the device information collection bit? For instance, it'd be great to have the database update device records when disks are added or removed, or when the server serial number changes because HP replaces the board. This information comes from many different sources: dmidecode, command-line disk tools, SNMP against the server or its onboard lights-out card, and so on. I could expose all this through custom scripts and net-snmp, or I could run a local poller that reports the information back to the central DB (maybe via a RESTful interface or something). It must be easily extensible.
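For what it's worth, the local-poller variant could start as small as this sketch (Python; the inventory URL and payload shape are invented for illustration, and dmidecode needs root):

    import json
    import socket
    import subprocess
    import urllib.request

    # Sketch of a local poller: gather a few facts about this host and POST
    # them to a central inventory service.
    INVENTORY_URL = "http://inventory.example.com/api/machines"  # invented

    def collect():
        facts = {"hostname": socket.getfqdn()}
        # dmidecode needs root; -s prints a single clean value per keyword
        for key in ("system-serial-number", "system-product-name"):
            try:
                out = subprocess.run(["dmidecode", "-s", key],
                                     capture_output=True, text=True, check=True)
                facts[key] = out.stdout.strip()
            except (OSError, subprocess.CalledProcessError):
                facts[key] = None   # tool missing or no permission; report the gap
        return facts

    def report(facts):
        req = urllib.request.Request(
            INVENTORY_URL, data=json.dumps(facts).encode(),
            headers={"Content-Type": "application/json"}, method="POST")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status

    if __name__ == "__main__":
        report(collect())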
Have you done this? How? Tell me your experiences, discoveries, mistakes and recommendations!
This sounds like a great LDAP problem looking for a solution. LDAP is designed for this kind of thing: a catalog of items that is optimized for data searches and retrieval (but not necessarily writes). There are many LDAP servers to choose from (OpenLDAP, Sun's OpenDS, Microsoft Active Directory, just to name a few...), and I've seen LDAP used to catalog servers before. LDAP is very standardized, and a "database" of information that is usually searched or read, but not frequently updated, is the strong suit of LDAP.
My team have been dumping all our systems into RDF for a month or two now. We have the systems implementation people create the initial data in Excel, which is then transformed to N3 (RDF) using Perl.
We view the data in Gruff (http://www.franz.com/downloads.lhtml) and keep the resulting RDF in Allegro (a triple store from the same guys that do Gruff)
It's incredibly simple and flexible - no schema means we simply augment the data on the fly, and with a wide variety of RDF viewers and reasoning engines the presentation options are endless.
The best part for me? No coding, just create triples and throw them in the store, then view them as graphs.
The collection of detailed machine information is a very frustrating problem (many vendors want to keep it this way). Even if you can spend a large amount of money, you probably will not find a simple solution to this problem. IBM and HP offer products that achieve what you are seeking, but they are very, very expensive, and will leave a bad taste in your mouth once you realize that probably all you needed was 40-50% of the functionality they offer.

You say that you need to monitor *NIX servers... most (if not all) unices support RFC 1514 (Windows also supports this RFC as of Windows 2000). The Host MIB support defined by RFC 1514 has its drawbacks, however. Since it is SNMP based, it requires that SNMP be enabled on the machine, which is typically not the default for unix and Windows machines. The reason for this is that SNMP was created before the entire world was using the Internet, and thus the old, crusty nature of its security is of concern. In many environs this may not be acceptable for security reasons. However, if you are only dealing with machines behind the firewall, this might not be an issue (I suspect this is true in your case).

Several years ago, I was working on a product that monitored hundreds of unix and Windows machines. At the time, I did extensive research into the mechanics of how to acquire detailed information from each machine (disk info, running processes, installed software, up-time, memory pressure, CPU and IO load, including network) without running a custom client on each machine. This info can be collected in a centralized fashion. As of three or four years ago, the RFC 1514 Host MIB spec was the only "standard" for acquiring detailed real-time machine info without resorting to OS-specific software. Sun and Microsoft announced a WebService-based initiative many years ago to address some of this, but I suspect it never received any traction since I cannot at the moment even remember its marketing name.
I should mention that RFC 1514 is certainly no panacea. You are at the mercy of the OS-provided SNMP service, unless you have the luxury of deploying a custom info-collecting client to each machine. The RFC 1514 spec dictates that several parameters are optional, and if your target OS does not implement them, then you are back to custom code to provide the information.
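If you want to quickly see what RFC 1514 exposes on a given box, something like the following works, assuming the net-snmp command-line tools are installed and the target runs an SNMP agent with a readable "public" community (both assumptions):

    import subprocess

    # Walk the Host Resources MIB (RFC 1514) storage table on a target.
    # Requires the net-snmp CLI tools and an SNMP-enabled target host.
    def host_resources(host, community="public"):
        out = subprocess.run(
            ["snmpwalk", "-v2c", "-c", community, host,
             "HOST-RESOURCES-MIB::hrStorageTable"],
            capture_output=True, text=True, check=True)
        return out.stdout

    if __name__ == "__main__":
        print(host_resources("192.0.2.10"))  # example address only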
I'm contemplating how to go about this myself, and I think this is one of the key pieces of infrastructure that not having around keeps us in the dark ages. Hopefully this will be a popular question on serverfault.com. :)
It's not just a matter of installing a single tool to collect this data, because that isn't cheaply possible; ideally you want everything from the hardware up to the applications on the network feeding into this thing.
I think the only approach that makes sense is a modular one. The range of devices and types of information is too disparate to come under a single tool. Also the collection of data needs to be as passive and asynchronous as possible - the reality of running infrastructure means that there will be interruptions and you can't rely on being able to get the data at all times.
I think the tools you've pointed out form something of an ecosystem that could work together - Cobbler can install from bare-metal and hand over to Puppet, which has support for generating Nagios configs, and storing configs in a database; for me only Cacti is a bit opaque in terms of programmatically inserting new devices, templates etc. but I know this is possible.
Ultimately you have to sit down and work out which pieces of information are important for the business you work for, and design a db schema around that. Then, work out how to get the information you need into the db, whether it's from Facter, Nagios, Cacti, or direct snmp calls.
Since you asked about collection of data: I think that if you have quite disparate kit (Dell, HP etc.) then it makes sense to create a library to abstract away as much as possible of the differences between them, so your scripts just make standard calls such as "checkdiskhealth" (sketched below). When you add new hardware you can add to the library rather than having to write a completely new script.
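A rough shape for such a library might look like this (Python); the per-vendor tool invocations are assumptions made to keep the idea concrete, not tested recipes:

    import subprocess

    # Sketch of the vendor-abstraction idea: one interface, one driver per
    # hardware line.
    class HardwareDriver:
        def check_disk_health(self):
            raise NotImplementedError

    class DellDriver(HardwareDriver):
        def check_disk_health(self):
            # e.g. wrap Dell's OpenManage CLI (assumed invocation)
            return subprocess.run(["omreport", "storage", "pdisk"],
                                  capture_output=True, text=True).stdout

    class HPDriver(HardwareDriver):
        def check_disk_health(self):
            # e.g. wrap HP's array CLI (assumed invocation)
            return subprocess.run(["hpacucli", "ctrl", "all", "show", "status"],
                                  capture_output=True, text=True).stdout

    def driver_for(vendor):
        return {"dell": DellDriver, "hp": HPDriver}[vendor.lower()]()

    # Callers never see the vendor details:
    #   health = driver_for(detected_vendor).check_disk_health()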
Sounds like a common problem that larger organizations would have. I know our (50 person company) sysadmin has a little Access database of information about every server, license, and piece of hardware installed. He's very meticulous, but when it comes time to replace or repair hardware, he knows everything about it from his little db.
You and your organization could sponsor an open source project to get you what you need, and give back to the community so that additional features (that you may not need now) can be developed at no cost to you.
Maybe a simple web service? Just something that accepts a machine name or IP address. When the service gets input, it sticks it in a queue and kicks off a task to collect the data from the machine that notified it. The nature of the task (SNMP interrogation, remote call to a Perl script, whatever) could be stored as part of the machine information in the database. If the task fails, the machine ID stays in the queue and the machine is periodically re-polled until the information is collected. Of course, you also have to have some kind of monitor running on your servers to notice that something has changed and send the notification; hopefully this is easily accomplished with whatever server monitoring software you've already got in place.
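A minimal sketch of the queue-and-retry part might look like this (Python, stdlib only; the port and the actual collection step are placeholders):

    import queue
    import threading
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Machines POST their name/IP to notify; a worker drains the queue and
    # re-queues failures for a later attempt. Collection itself is stubbed.
    pending = queue.Queue()

    class NotifyHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            pending.put(self.rfile.read(length).decode().strip())
            self.send_response(202)  # accepted for later processing
            self.end_headers()

    def collect(host):
        """Stub: look up the machine's collection method (SNMP, remote
        script, ...) in the database and run it; raise on failure."""

    def worker():
        while True:
            host = pending.get()
            try:
                collect(host)
            except Exception:
                time.sleep(60)       # crude backoff before re-polling
                pending.put(host)

    threading.Thread(target=worker, daemon=True).start()
    HTTPServer(("", 8080), NotifyHandler).serve_forever()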
There are some solutions from the big vendors for managing monstrous sets of machines - such as some of the Tivoli stuff from IBM. That is probably, however, overkill for mere hundreds of machines.
There are some free software server database solutions, but I do not know if they provide hooks to update information automatically from the machines with dmidecode or SNMP. One I have heard about (but have no personal experience with, sorry) is GLPI.
I believe you are looking for Zabbix. It's open source, easy to install and use.
I installed it for a client a few years ago, and if I remember right it has a client application that connects to the Zabbix server to update it with the requested information.
I really recommend it: http://www.zabbix.com
Check out Machdb. It's an open source solution to the problem you are describing.

Advice on platforms/frameworks/languages/etc for a new project

I know this is not a programming question per se, but I wanted to get as much input from the SO community as possible on a new project I hope to get started. The project is being started from scratch, and thus every decision for programming languages, databases, frameworks, platforms and what not is up in the air. I'm hoping to get your opinion on the matter: what you feel are the strengths and weaknesses of each option.
Database:
Currently I have the option of using MSSQL or MySQL. I am leaning towards MySQL because it is free and most probably has all the features I need. However, there is the possibility of having a lot of hierarchical data, and the new hierarchical data type in MSSQL is quite appealing. Does it really simplify matters that much? Also, MSSQL supports many more advanced SQL functions that may or may not be useful in the long run. While I can get access to Server 2008 for development, are the costs justified for multiple licenses as the development team grows, and for production?
Programming Languages:
The project will have a web based front end UI and a server based component that will do some heavy lifting.
For the web based UI, I was thinking of maybe doing Apache/IIS with PHP, or IIS with ASP.Net in C#. I'd like to use a good framework that properly utilizes good design patterns, to structure the code and development of the app as well as make modifications easy to implement in the long run. I also want the GUI to look good, and I don't like the idea of buying .Net controls from component vendors. Instead I prefer the idea of using good CSS, open source libraries like YUI, and JavaScript to make the UI sleek.
For the server based component, I was thinking of using C#. I have no real development experience in C++, and C# has good libraries and is fast enough for my purposes. However, while the web based UI and server based component are loosely coupled, there may be instances where the UI needs to communicate (call methods and what not) with the server based component, and I want to pick languages/frameworks that will play nice with each other.
All suggestions on frameworks to incorporate are welcome.
Version Control:
I have had good experiences with SVN and pretty bad experiences with TFS. I've never worked with Git. Which do you think is better in terms of features as well as general developer familiarity? I want to pick something that other developers will know and not have trouble with.
I apologize if the questions are a bit redundant or I'm not providing enough information or using bad terminology. I plan to edit and improve the question as I get feedback. Thanks!
EDIT:
Who: This would most probably be a startup formed of college students or junior developers. I want the project to utilize technologies that most people are familiar with or are easy to pick up.
What: I'd need hours and days to explain the solution. But in the end, when you break it down, it's a web based UI (think standard web app to just manage database data) that would be used by knowledgeable clients. The server based component would be very separate, except for the fact that it should be able to communicate with the web app.
I can provide more information as required but I would appreciate an opportunity for users to answer and provide their ideas before you hastily close the question.
Obviously it depends a lot on specific requirements, but then again, even with those I probably wouldn't be able to tell for sure!
I've been working on a from-scratch project myself for a couple of months, and have generally found:
Choosing Microsoft for all the layers just goes down much easier (my subjective opinion). For example I would use C# for the UI, the back end, and use MSSQL for the database. Nothing at all wrong with non-Microsoft vendors, I'm no Microsoft fan-boy, I just struggle to get productive with unfamiliar tools. Depends where your experience lies though.
Database: In particular I've found that .NET and MSSQL go easily together. When I started the project I was using PostgreSQL (because it's free, fully featured and has open-source warm fuzzies). However I abandoned it in favour of MSSQL simply because it was taking me too long to get database work done in an unfamiliar language with unfamiliar tools. Also, I'm not sure MSSQL is so expensive anymore; for example, for a web application, MSSQL 2008 Web Edition is pretty damn cheap per-processor I think (only on SPLA licensing though). If you're concerned about database features in a free implementation though, personally I think PostgreSQL has a very full feature set, nicely standardised, and rapidly growing.
UI: I'm pretty inexperienced, but ASP.NET MVC looks far less painful to me than ASP.NET Web Forms. I like PHP too, but again I'd match the UI language with the back-end language, so would recommend .NET.
On frameworks, I'm immersed in DALs at the moment. I like Subsonic for lightweight data, NHibernate for heavy-weight.
I still have a long way to go with my project so perhaps I can only see the short-term benefits and drawbacks at the moment. But in general I would say: use the technologies that you're most comfortable using, as you'll be way more productive and the end result will probably be about the same anyway. If you want to learn new technologies though, and who doesn't? - go ahead, just expect it to take a lot longer.
Didn't want to answer 'cause it's so open ended. But a few points:
Money
First, check out BizSpark. That should take care of any money aspect for 3 years. For a service company, that means not only free VS Team Suite and Office and so on, but free Windows, SQL, etc. If your startup can't afford to spend a bit on MS tech in 3 years, it's probably a bad business. So that takes out licensing.
On a similar note, Sun has Startup Essentials. Could be interesting on the hardware side of things, but I haven't actually competitively priced them versus Dell/HP.
Software
It doesn't sound like you have hard enough requirements to say "oh, this slightly-less-popular software X is perfect for my domain Y and is gonna give me a very big boost". In fact, your project might not be like that at all. Maybe it, technically, is going to be a relatively plain application just pushing data around or whatever. You didn't specify.
For a small startup, personal productivity is probably going to trump any other argument. If your people are excellent in X, then that's one of your top arguments right there.
If you really don't have any particular system you're most comfortable with, be conservative. Stick with .NET or Java, as they'll give you the widest range of useful possibilities.
As far as things like OS and Database, I'm biased, but I think Microsoft will give you platforms that are easier to take advantage of than you'll find elsewhere. For instance, setting up load balancing, clustering, centralized authentication, managing servers (updates, events, etc.) is going to be easier to get going on Windows than it would be on another platform, assuming you're not an expert in either. Configuring SQL Server, even the advanced features, is a piece of cake. (Go time someone who knows neither: Setup a DB mirror in MSSQL and MySQL -- which is going to take more work?) Again, this is all predicated on you not having experts in a particular set of technology.
Don't mix -- whatever you do, stick with the platform. If you go .NET, MSSQL is going to work better with the data providers (or things like Linq-to-SQL). If you decide to do PHP, then use MySQL as everyone else uses it and you'll encounter less resistance. If you're not inventing stuff on the technical side, don't become an edge case.
You should pick the platform first, then the language that is best for that platform (if there is any choice).
One thing you should consider is the labor pool, and labor pool cost, for specific platforms and languages. Human Resources can often get cost metrics, if you don't have ideas already.
In my town, for example, .NET platform is much more expensive per Software Engineer than open source, because the .NET developers have a higher rate (40% roughly). C# is a little higher rate than VB.NET, but also tends to bring more well rounded candidates.
Just to throw in something totally different: How about weblocks as a web framework? It uses Hunchentoot as a server, which can run either standalone or with Apache. This is all done in Common Lisp. Weblocks can use cl-sql as a backend store, which can connect to many different RDBMs (MySQL, PostgreSQL, Oracle, ODBC, SQLite).

Does anyone have database, programming language/framework suggestions for a GUI point of sale system?

Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expense on our customers due to multi-user license fees for database connections etc.
After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things:
Create the system with a nice GUI front end (it is currently CHUI and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and gui...shudder).
Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them.
Use proper design patterns to make the product easy to add to or change any part at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is compiled directly against the database; small changes in the database require lots of code recompiling. (A small sketch of the kind of separation I mean follows below.)
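To illustrate that separation (not to prescribe a stack), here is a minimal repository-pattern sketch in Python; all names and the schema are hypothetical:

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    # Business logic talks to an abstract repository, so swapping the
    # database (or the front end) never touches the domain code.

    @dataclass
    class Sale:
        sku: str
        quantity: int
        unit_price: float

    class SaleRepository(ABC):
        @abstractmethod
        def save(self, sale: Sale) -> None: ...

    class SqliteSaleRepository(SaleRepository):
        def __init__(self, conn):
            self.conn = conn
        def save(self, sale):
            self.conn.execute(
                "INSERT INTO sales(sku, quantity, unit_price) VALUES (?, ?, ?)",
                (sale.sku, sale.quantity, sale.unit_price))

    def ring_up(repo: SaleRepository, sku, quantity, unit_price):
        """Pure business logic: depends only on the interface."""
        sale = Sale(sku, quantity, unit_price)
        repo.save(sale)
        return sale.quantity * sale.unit_price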
Our new system will be Linux based, with a possibility of a client application providing functionality from one or more windows boxes.
So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise level DB), but that would limit us to Windows (as far as I know, anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySQL based implementation.
To summarize we are looking to do the following:
Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
Deliver a solution that is easily maintainable and supportable.
A solution that has a component capable of running on "old" hardware through a CHUI front end. (Some of our customers have 40+ terminals, which would cost a ton of cash to convert over to PCs.)
Suggestions would be appreciated.
Thanks
[UPDATE]
I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into and include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).
[UPDATE]
For anyone who is interested, we ended up going with Microsoft Dynamics NAV, LS Retail (a plugin for the point of sale and various other things) and then did some (and are currently working on) customization work on top of that. This setup gave us the added benefit of having a fully integrated g/l system, which our current system lacked.
Java for language (or Scala if you want to be "bleeding edge"; depending on how you plan to support it and what your developers are like, it might be better, but also worse)
H2 for database
Swing for GUI
Reason: Free, portable and pretty standard.
Update: Missed the part where the system should be a client-server setup. My assumption was that the database and client should run on the same machine.
I suggest you first research your constraints a bit more - you made a passing reference to a client using a particular type of terminal - this may limit your options, unless the client agrees to upgrade.
You need to do a lot more legwork on this. It's great to get opinions from web forums, but we can't possibly know your environment as well as you do.
My broad strokes advice would be to aim for technology that is widely used. This way, expertise on the platform is cheaper than "niche" technologies, and it will be easier to get help if you hit a brick wall. Of course, following this advice may not be possible if you have non-negotiable technology already in place at customers.
My second suggestion would be to complete a full project plan, with detailed specs and proper cost estimates, before going with the "rewrite from scratch" option. Right now, you're saying that it would be cheaper to rewrite the system than maintain it, and you don't really know how much it would cost to re-write.
I suggest you use a browser for the UI.
Organize your application as a web application.
There are tons of options for the back-end. You can use Java + MySQL. A Java backend will save you from the Windows/Linux debate, as it will run on both platforms, and you won't have any licensing cost for either Java or MySQL. (Edit: there are definitely a lot of other languages that have run-times for both Linux & Windows, including PHP, Ruby, Python etc.)
If you go this route, you may also want to consider Google Web Toolkit (GWT) for creating the browser based front-end in a modular fashion.
One word of caution though: browsers can be pesky when it comes to memory management. In our experience, this was the most significant challenge in doing a browser-based POS. You may want to check out Adobe Flex, which runs in the browser but might be more civil in its memory management.
What is CHUI? Character-UI, as in VT terminals? Or even 3270 style?
It sounds like you need a 3-tier system - the database backend, a middle-layer that runs the bulk of the back-end business processes, and a front-end layer for the CHUI / GUI / data-gateway.
All three layers can reside on one machine; or you can distribute the tiers out to various servers. The front-end layer would control the actual terminals, whether they are VT-terminals, or a web-browser, or a custom-written 'client' application.
Make sure you have considered the hardware needs here -- are you going to have barcode scanners, cash drawers, POS debit/credit terminals, et cetera? If you are using a standard browser, it might be hard to reliably integrate those items. (At the very least, you're likely going to have to write special applets to handle them.)
Finally, consider the possibility of a thin-client technology on Windows. It greatly simplifies system management, since you only have to upgrade the software centrally. Thin-client PC's are cheap -- sub $200.
Golden Code Development (see www.goldencode.com) has a technology that does automated conversion of Progress 4GL (the schema and code... the entire application) to a Java application with a relational database backend (e.g. PostgreSQL). They currently support a very complete CHUI environment, and they do refactor the code. For example, the conversion separates the UI, the data model and the business logic into separate Java classes.

The entire result is a drop-in replacement that is compatible with the original (users don't need retraining, processes don't need to be modified, the data is migrated too). This is possible because they provide an application server and a set of runtime classes that provide that compatibility. The result of the automated conversion is not something that needs further editing before you can compile and run it.

True terminal support is included, so hardware terminals still work (it requires a small JNI library to access NCURSES from Java). All the rest of the code in the runtime is pure Java. No Progress Software Corp technology is used in the resulting system, and it runs on Linux.
At least one converted system is already in production, running a 24 by 7 mission critical environment. It is a converted ERP system that their mid-sized pilot customer uses to run their entire business.
