How to apply the 12 Factor App methodology to Linux driver development? [closed]

Closed 2 years ago. This question is opinion-based and is not currently accepting answers.
I'm an engineer currently developing Linux kernel-mode drivers and user-mode drivers. When I came across the theory of the 12 Factor App, a strong voice echoed around my brain: "THIS IS THE FUTURE OF DEVELOPMENT!"
I keep wondering how to apply this method to Linux KMD and UMD design and development, since the theory is heavily web-app oriented (I'm a part-time open-source web developer).
Current development language: C
Current test automation: custom Python testing framework (progress-based, no unit tests)
Please give me some suggestions. Thanks in advance.

As with most development guidelines, there is a gap between the guideline and the enforcement.
For example, in your "12 factor app" methodology, one of the factors is:
Codebase - One codebase tracked in revision control, many deploys
Which sounds great and would really simplify things, until you get to utility libraries. When you find you are reusing code across multiple projects, you probably want:
Independent build and release chains for the multiple projects.
This could mean two codebases, but the factor above says one codebase. One per project, or one per company? One per company is easy to see as non-ideal, because you would have commits unrelated to a project in that project's commit history. One per project is more sensible; but what if projects need to share code, like the libraries that format their communications and control the send/receive protocols? We could create a third "protocol library" so that we have versioning around the protocol; but that violates "one codebase (per project)", because now two codebases comprise a single releasable item.
The decisions here are not simple. The other approach is to copy the protocol code into both projects and keep them in sync by some other means.
Dependencies - Explicitly declare and isolate dependencies
It's a great idea, and one that makes development easier in many ways. Again, just to illustrate how a great idea can suffer without clear guidelines on how to implement it: what do you do when you are using a library that doesn't attempt to isolate the dependencies it uses? Many of the more complex libraries depend on other libraries, and generally they declare their dependencies clearly, as do the libraries those libraries use; however, sometimes the base, core libraries used by multiple projects (logging, configuration, etc.) wind up being used at different release versions. The isolation occurred on a per-library basis, but not on a per-project basis. You could fix it, provided you wanted to (and could) fork and clone the libraries, restructuring them to properly isolate their dependencies and coordinate version numbers overall; but generally you will lack the time to work on other people's projects.
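To make enforcement concrete, here is a minimal Python sketch (Python chosen only because the asker already uses it for test automation) of checking a pinned dependency manifest at startup. The package names and pins are hypothetical examples, not anything the methodology prescribes:

    # Sketch: enforce a pinned dependency manifest at interpreter startup.
    # The package names and pins below are hypothetical examples.
    from importlib.metadata import version, PackageNotFoundError

    PINNED = {
        "requests": "2.31.0",
        "pyyaml": "6.0.1",
    }

    def check_dependencies(pinned=PINNED):
        problems = []
        for name, wanted in pinned.items():
            try:
                found = version(name)
            except PackageNotFoundError:
                problems.append(f"{name}: not installed (want {wanted})")
                continue
            if found != wanted:
                problems.append(f"{name}: found {found}, want {wanted}")
        return problems

    if __name__ == "__main__":
        for problem in check_dependencies():
            print("dependency conflict:", problem)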
In general, the advice in the "12 factor app" methodology is good; but it leaves the work of translating the guidelines into development protocols up to you. Enforcement then becomes a matter of interpretation, and the means of enforcement (as well as the interpretation) fall on you to implement.
And some of the guidelines look dangerously over-simplified:
Concurrency - Scale out via the process model
While this is an easier way to go, it's not how any high-performance web server works. They all use threading, thread pools, and other more complex constructs to avoid process switching, constructs (admittedly harder to use) that were created specifically because of the limitations of the traditional process model. After all, it's not common to launch a process per web request, nor would you generally "tune a program for better performance" by starting a second copy on the same machine. Certainly there are architectures where this could work; but so far these architectures haven't outperformed their competition.
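As a toy illustration of the two models, the following Python sketch runs the same workload under a thread pool and under a process pool; a real server's constructs are far more elaborate:

    # Sketch: the same workload under the two concurrency models discussed
    # above. A thread pool shares memory and switches cheaply; a process
    # pool is "scale out via the process model" on a single machine.
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def handle_request(n: int) -> int:
        # Stand-in for per-request work.
        return sum(i * i for i in range(n))

    def serve(executor_cls, requests):
        with executor_cls(max_workers=4) as pool:
            return list(pool.map(handle_request, requests))

    if __name__ == "__main__":
        reqs = [100_000] * 16
        serve(ThreadPoolExecutor, reqs)   # threads: shared memory, cheap switching
        serve(ProcessPoolExecutor, reqs)  # processes: isolated, heavier per unit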
Between machines, I wholeheartedly agree: process scaling is the only way to go in a distributed environment. But there's not much in this methodology that talks about distributed algorithms, or even distributed computing approaches; so, again, it's another thing left up to the implementor.
Finally, their process commentary seems really out of place for writing a command-line tool. The push to daemonize things works really well for microservices; however, you can't microservice away the clients. Eventually you'll have to write something that isn't "managed by systemd", something that starts executing and stops executing without being an always-on service.
So it's a good framework, excellent for many things even if it doesn't work for some; but in my opinion, the tooling to enforce it would have to be built by the organization using it, because one organization's interpretations may differ from another's.

Related

Consensus algorithm for Node.js

I'm trying to implement a collaborative canvas on which many people can draw freehand or with specific shape tools.
The server has been developed in Node.js and the client with AngularJS 1 (and I am pretty new to both).
I need a consensus algorithm so that it always shows the same content to all users.
I'm seriously in trouble, since I cannot find a proper tutorial on using one. I have been looking at and studying Paxos implementations, but it seems that Raft is much more used in practice.
Any suggestions? I would really appreciate it.
Writing a distributed system is not an easy task [1], so I'd recommend using an existing strongly consistent store instead of implementing one from scratch. The usual suspects are ZooKeeper, Consul, etcd, and Atomix/Copycat. Some of them offer Node.js clients:
https://github.com/alexguan/node-zookeeper-client
https://www.npmjs.com/package/consul
https://github.com/stianeikeland/node-etcd
I've personally never used any of them with Node.js, though, so I won't comment on the maturity of the clients.
If you insist on implementing consensus on your own, then Raft should be easier to understand; the paper is surprisingly accessible: https://raft.github.io/raft.pdf. There are also some Node.js implementations, but again, I haven't used them, so it is hard to recommend any particular one. The Gaggle README contains an example, and Skiff has an integration test that documents its usage.
Taking a step back, I'm not sure distributed consensus is what you need here. It seems you have multiple clients and a single server, so you can probably use a centralized data store. The problem domain is not really that distributed either: shapes can be overlaid one on top of the other in the order the server receives them, FIFO (imagine multiple people writing on the same whiteboard; the last one wins). The challenge is concurrent modification of existing shapes, but maybe you can fall back to last-change-wins or first-change-wins.
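A minimal sketch of that centralized ordering scheme (in Python for brevity; the idea transfers directly to a Node.js server). The CanvasLog name and the operation format are made up for illustration:

    # Sketch: a single server assigns each draw operation a monotonically
    # increasing sequence number; clients render operations in sequence
    # order, so every client converges on the same canvas.
    import itertools
    import threading

    class CanvasLog:
        def __init__(self):
            self._seq = itertools.count(1)
            self._lock = threading.Lock()
            self.ops = []  # the authoritative, totally ordered history

        def submit(self, op: dict) -> dict:
            with self._lock:
                stamped = dict(op, seq=next(self._seq))
                self.ops.append(stamped)
                return stamped  # broadcast to all clients in seq order

    log = CanvasLog()
    log.submit({"type": "rect", "x": 10, "y": 10, "w": 5, "h": 5})
    log.submit({"type": "freehand", "points": [(0, 0), (1, 2)]})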
Another interesting avenue to explore here would be Conflict-free Replicated Data Types (CRDTs). Folks at GitHub used them to implement collaborative "pair" programming in Atom. See the Atom Teletype blog post; their implementation may also be useful, as collaborative editing seems to be exactly the problem you are trying to solve.
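For a taste of the idea, here is the simplest possible CRDT, a grow-only set, sketched in Python; real collaborative-editing CRDTs such as Teletype's are far more sophisticated:

    # Sketch: a grow-only set (G-Set), the simplest CRDT. Replicas add
    # shapes locally and merge by set union; union is commutative,
    # associative, and idempotent, so replicas converge regardless of
    # message order or duplication.
    class GSet:
        def __init__(self):
            self.items = set()

        def add(self, item):
            self.items.add(item)

        def merge(self, other: "GSet"):
            self.items |= other.items

    a, b = GSet(), GSet()
    a.add("shape-1")
    b.add("shape-2")
    a.merge(b)
    b.merge(a)
    assert a.items == b.items == {"shape-1", "shape-2"}

An add-only stream of shapes fits a G-Set naturally; editing or deleting existing shapes needs richer CRDTs.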
Hope this helps.
[1] Take a look at the Jepsen series (https://jepsen.io/analyses), where Kyle Kingsbury tests various failure conditions of distributed data stores.
Try reading Understanding Paxos. It's geared towards software developers rather than an academic audience. For this particular application you may also be interested in the Multi-Paxos Example Application referenced by the article. It's intended to help illustrate the concepts behind the consensus algorithm, and it sounds like it's almost exactly what you need for this application. Raft and most Multi-Paxos designs tend to get bogged down in an overabundance of accumulated history, which generates a new set of problems beyond simple consistency. An initial prototype could easily send the full state of the drawing on each update and ignore the history issue entirely, which is what the example application does. Later optimizations could reduce network overhead.

Steps to take when planning and executing a new project (say mobile app)

I want to build a free app to become familiar with what is required, but I was always confused about the steps one needs to take to START a software project.
What are the steps required in order to develop a mobile app?
I will list some of the things I think should be done but don't necessarily know how to do. Any advice, details, and technologies you have for accomplishing these steps would be awesome.
Decide which platform you want to develop for. What are some of the pros and cons of Android vs iOS vs Windows 8?
How do you test the app? Can you get free hardware to test with if you have a well-detailed app plan? An emulator?
Detail what you want the app to do and which functionalities you want.
Research whether this app already exists. What are some areas of concern in terms of not breaking the law, such as patent infringement?
Set up a source repository such as Git (google a guide, I guess?).
Look at guides to familiarize yourself with APIs and write sample code to learn what you need?
Start the development and keep doing the above as needed.
Starting a software project can be as easy as starting to write code. Most programmers have an intuition as to what needs to be done and how it could be done. The other extreme is to start by talking to a client (or looking at the world) and figuring out what the problem is. I find that a thorough understanding of the problem you are trying to address gets you a long way toward getting the project done painlessly, and it gives you a good understanding of what is required for you to call your project done.
So I guess point number one becomes: know what the problem is you're solving. Knowing this will also tell you if any existing app solves the same problem to a satisfying standard.
NOTE: I am not that familiar with the Windows 8 platform so my answer mostly talks about iOS and Android. The issues raised however are broad enough to cover large parts of the Windows platform.
Platform
Selecting a deployment platform is an important part of launching a product, and a lot of other decisions depend on the platform. We are in the unfortunate state that two major mobile OSes exist that are entirely separate in terms of code development and reuse. When selecting your deployment platform, think about the audience, and the (potential) subset of the audience that is willing to pay for your application. Android might have the most devices out there, but iPhone makes the most money (for developers too). However, remember that there are lots of apps out there, and most developers never make any (or enough) money from their apps.
Getting into app development with the aim of getting rich is going to leave you disappointed. It's not very likely; then again, someone always wins the lottery. It is, though, a good way to get employed and make some money that way.
Then there is the question of programming language (Java, Objective-C, or C#). This is largely decided by what you are already familiar with; if you aren't familiar with any, refer back to the previous point.
Testing
Testing the product is a tricky thing. You'll have to start off with the emulator (usually provided with the development kit). Sooner or later, however, you'll have to test the app on hardware. I doubt you'll get your hands on free hardware, but borrowing from friends and relatives is always an option. There may also be businesses that rent out test hardware to developers; if there aren't, I suppose that's one business idea to work on.
The platform choice will affect this also. Android is running on a much wider range of hardware than iOS.
Patent infringement
I don't know much about patent issues, other than that software patents are nasty. As a single developer I wouldn't be too worried about infringing on patents; their main purpose is to keep competitors at bay. What usually happens is that big companies kill off competition with patent lawsuits, or buy a smaller company that holds a nice collection of patents.
If you want to be on the safe side (meaning you own a company and are really doing this to make money) then talk to patent lawyers.
Code repository
A code hosting service like GitHub is fantastic in that it not only provides a place to host your code, but also provides issue trackers for keeping notes on functionality that is still missing or bugs that have crept into your software.
The best places to start learning about Git are git-scm.com and the GitHub help pages.
Software development plan
Your last point explodes into a thing called software engineering. There has been lots of research into different ways of managing software development projects. Software development tends to extend over long periods of time, the requirements of the project change during the project (as you learn more), and the project can involve anything from one to hundreds of developers. Some way of coordinating work between those developers (and all other parties involved, like customers) has to be formalised; enter software engineering. The aim is to define a methodology and project structure that guides the development process and makes it more likely that the requirements are met at the end of the project.
Some models worth looking into include Test Driven Development and other agile methods.
Finally, I would add the following to the list of things that need to be done:
Research libraries; note that this comes before familiarising yourself with the APIs of those libraries.
Find out what software already exists that does part of what you want to achieve. This goes partly back to the question of which platform to use. Apple has put a lot of attention into developing easy-to-use frameworks to support iOS app development. I am not that familiar with Android's or Windows 8's, but the less code you have to write, the faster the product will be done.
There is only one step needed: Just start that project!
You are going to develop a free application, so it should be fun to do that. Choose whatever you like and keep going:
Make sure you are productive enough -- 10 Laws of Productivity
Avoid complexity -- Occam's razor, KISS principle
Let the CI system do the boring stuff -- Machines should work; people should think.
Read books and improve yourself.
Please also avoid blind decisions. If you simply try several of the available options, you'll eventually find the best way to achieve your goals: do a proof of concept (PoC) for each candidate and then decide. Nowadays 1-2 hours should be enough to get started with any mature technology, which is a fair test of maturity. You have your own goals, so it is better to avoid immature solutions.
Happy coding.
CPlayer, I came to this forum with the same question, since I am new to mobile app design and want to make my own app. I realize it is important to take certain steps in the correct order so that wasted time is minimized or eliminated. I did some research and came across two online sources that, I believe, put together as one, make one better source. The links are:
http://answers.oreilly.com/topic/2311-a-mobile-app-development-checklist/
http://mobiledevices.about.com/od/kindattentiondevelopers/ht/How-To-Create-An-App-For-The-Iphone.htm
Good Luck,
laroice

How to do Agile development and testing of high-volume processing systems? Tools? Methodologies? Recommendations? [closed]

Closed 6 years ago. This question needs to be more focused and is not currently accepting answers.
I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields computed over millions of records, processing of huge files of transactions, etc.
I am well aware of unit testing methodologies, and if my code is in an ordinary language such as C#, I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or some SAS process.
What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), run them automatically overnight, and check the results. But that only covers the T-SQL. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially when you need a lot of data initialization)? I have some approaches derived over the years, but maybe I am just not reading enough articles.
So, banking, telecom, and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct as you develop it? How do you achieve continuous integration in such an environment (personally, I never got there)? How do you test your map-reduce jobs, for example (I don't use Hadoop, but it is quite similar)? I hope this is not too open-ended a question.
luke
First, build logging, monitoring & double-entry systems into what you're building.
Ensure that performance is acceptable even with these systems switched on: benchmark and profile them, and ensure the hardware is appropriate for the entire system.
Split each system into sub-systems that can be tested independently, and try to ensure the systems are designed to be loosely coupled.
Also ensure each sub-system validates its inputs before processing further; this stops erroneous data before it becomes a bigger problem.
By using logging, you can test a variety of systems in a similar way.
For any system which doesn't have a unit test framework available, use logging, and then test the logs generated.
This should allow you to test SSIS processes, workflows, or assemblies.
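A minimal sketch of that log-testing idea in Python (the asker's own test-automation language); the wrapper script name and the log format are hypothetical:

    # Sketch: run a step that has no unit-test framework, capture its log
    # output, and assert on the events you expect. The wrapper script name
    # and the log format are hypothetical.
    import re
    import subprocess

    def run_step_and_capture(cmd):
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    def test_load_step():
        log = run_step_and_capture(["./run_load_step.sh"])  # hypothetical wrapper
        assert "ERROR" not in log
        read = re.search(r"rows read:\s*(\d+)", log)
        written = re.search(r"rows written:\s*(\d+)", log)
        assert read and written, "expected row-count log lines are missing"
        assert read.group(1) == written.group(1), "read/write row counts differ"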
Monitoring & double-entry systems will flag up errors & process problems, so you can identify and ideally resolve them in a timely fashion.
Finally, when systems go live, don't switch logging off entirely.
If necessary, reduce its verbosity, but ensure it can be switched back on to debug processes, as problems you need to resolve will still occur in the live environment.
Ensure you use live data, and edge cases, for automated testing.
Use code reviews or pair programming to ensure the code is optimal.
Ensure you use expert QA staff to think of use cases you won't think of.
Ensure you have an excellent project manager who can manage you, your team, the related teams, the end users, and your bosses, and ensure everyone is communicating appropriately.
You won't be able to achieve well tested processes without a well run team.
Using some of the above has allowed us to develop well-tested processes that handle billions of pounds' worth of transactions annually, so we must be doing something right.
Automated regression testing, not unit testing. Custom tools to compare input and expected output. Performance over everything: performance tests, testing on pre-loaded systems, trying x64, x86, etc. Custom tools to synthesise data based on business cases. Modular dtsx. One dev per dtsx. The list goes on.
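A sketch of what such a compare tool might look like in Python; the directory layout and the runner script are hypothetical assumptions:

    # Sketch: for each regression case, run the processor on the stored
    # input and diff the output against a stored baseline. The directory
    # layout and runner script are hypothetical.
    import difflib
    import pathlib
    import subprocess

    CASES = pathlib.Path("regression/cases")

    def run_case(case):
        actual = subprocess.run(
            ["./process_file.sh", str(case / "input.csv")],  # hypothetical runner
            capture_output=True, text=True, check=True,
        ).stdout.splitlines(keepends=True)
        expected = (case / "expected.csv").read_text().splitlines(keepends=True)
        return list(difflib.unified_diff(expected, actual, "expected", "actual"))

    if __name__ == "__main__":
        for case in sorted(p for p in CASES.iterdir() if p.is_dir()):
            diff = run_case(case)
            if diff:
                print(f"FAIL {case.name}:\n" + "".join(diff))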

Architecture for a machine database [closed]

Closed 4 years ago. This question is opinion-based and is not currently accepting answers.
This might be more of a serverfault.com question but a) it doesn't exist yet and b) I need more rep for when it does :~)
My employer has a few hundred servers (all *NIX) spread across several locations. As I suspect is common, we don't really know how many servers we have: more than once I've been surprised to find a server that's been up for 5 years, apparently doing nothing but elevating the earth's temperature slightly. We have a number of databases that store bits of server information (Puppet, Cobbler, Nagios, Cacti, our load balancers, DNS, various internal spreadsheets, and so on), but it's all very disparate, incomplete, and overlapping. Maintaining this mess costs time and money.
So, I'd like to come up with a single database which holds details of what each server is (hardware specs, role, etc.) and replaces (or at least supplies data for) the databases mentioned above. The database and web interface are likely to be a Rails app, as this is what I have most experience with; I'm more of a sysadmin than a coder.
Has this problem already been solved? I can't find any open source software that really fits the bill and I'm generally not too keen on bloaty, GUI vendor-supplied solutions.
How should I implement the device-information collection bit? For instance, it'd be great to have the database update device records when disks are added or removed, or when the server's serial number changes because HP replaces the board. This information comes from many different sources: dmidecode, command-line disk tools, SNMP against the server or its onboard lights-out card, and so on. I could expose all this through custom scripts and net-snmp, or I could run a local poller that reports the information back to the central DB (maybe via a RESTful interface or something). It must be easily extensible.
Have you done this? How? Tell me your experiences, discoveries, mistakes and recommendations!
This sounds like a great LDAP problem looking for a solution. LDAP is designed for exactly this kind of thing: a catalog of items that is optimized for searching and retrieval (but not necessarily writes). There are many LDAP servers to choose from (OpenLDAP, Sun's OpenDS, Microsoft Active Directory, to name a few), and I've seen LDAP used to catalog servers before. LDAP is very standardized, and a "database" of information that is usually searched or read, but not frequently updated, is LDAP's strong suit.
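A sketch of what the catalog could look like from Python, assuming the third-party ldap3 library; the directory layout (ou=servers), object classes, and attributes are illustrative, not a recommended schema:

    # Sketch: cataloging and querying servers over LDAP with the ldap3
    # library (an assumption). The DIT layout, object classes, and
    # attributes are illustrative only.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap://ldap.example.com", get_info=ALL)
    conn = Connection(server, "cn=admin,dc=example,dc=com", "secret", auto_bind=True)

    # Register a machine as a catalog entry.
    conn.add(
        "cn=web01,ou=servers,dc=example,dc=com",
        ["top", "device"],
        {"cn": "web01", "description": "frontend; 2 disks; serial ABC123"},
    )

    # The read-optimized part: find every cataloged server.
    conn.search("ou=servers,dc=example,dc=com", "(objectClass=device)",
                attributes=["cn", "description"])
    for entry in conn.entries:
        print(entry.cn, entry.description)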
My team has been dumping all our systems into RDF for a month or two now. We have the systems implementation people create the initial data in Excel, which is then transformed to N3 (RDF) using Perl.
We view the data in Gruff (http://www.franz.com/downloads.lhtml) and keep the resulting RDF in Allegro (a triple store from the same guys that do Gruff)
It's incredibly simple and flexible: no schema means we simply augment the data on the fly, and with a wide variety of RDF viewers and reasoning engines, the presentation options are endless.
The best part for me? No coding: just create triples, throw them in the store, then view them as graphs.
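If you do end up scripting it, the same triples-only workflow is a few lines in Python with the rdflib library (an assumption; the team above uses Perl). The namespace and properties are made up for illustration:

    # Sketch: assert facts about machines as triples, serialize to N3,
    # and query on the fly. Namespace and properties are made up.
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.com/infra#")
    g = Graph()
    g.bind("ex", EX)

    # No schema: just assert facts about machines as triples.
    g.add((EX.web01, EX.role, Literal("frontend")))
    g.add((EX.web01, EX.location, Literal("london-dc1")))
    g.add((EX.db01, EX.role, Literal("database")))

    print(g.serialize(format="n3"))  # hand this N3 to any triple store

    # Augmenting the data never requires a migration.
    for machine in g.subjects(EX.role, Literal("frontend")):
        print("frontend machine:", machine)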
The collection of detailed machine information is a very frustrating problem (many vendors want to keep it this way). Even if you can spend a large amount of money, you probably will not find a simple solution. IBM and HP offer products that achieve what you are seeking, but they are very, very expensive, and will leave a bad taste in your mouth once you realize that all you probably needed was 40-50% of the functionality they offer.

You say that you need to monitor *NIX servers. Most (if not all) Unices support RFC 1514 (Windows also supports this RFC as of Windows 2000). The Host MIB support defined by RFC 1514 has its drawbacks, however. Since it is SNMP-based, it requires that SNMP be enabled on the machine, which is typically not the default for Unix and Windows machines. The reason is that SNMP was created before the entire world was using the Internet, and thus the old, crusty nature of its security is a concern. In many environments this may not be acceptable; however, if you are only dealing with machines behind the firewall, it might not be an issue (I suspect this is true in your case).

Several years ago, I was working on a product that monitored hundreds of Unix and Windows machines. At the time, I did extensive research into the mechanics of acquiring detailed information from each machine (disk info, running processes, installed software, uptime, memory pressure, CPU and I/O load, including network) without running a custom client on each machine; this info can be collected in a centralized fashion. As of three or four years ago, the RFC 1514 Host MIB spec was the only "standard" for acquiring detailed real-time machine info without resorting to OS-specific software. Sun and Microsoft announced a web-service-based initiative many years ago to address some of this, but I suspect it never gained any traction, since I cannot at the moment even remember its marketing name.
I should mention that RFC 1514 is certainly no panacea. You are at the mercy of the OS-provided SNMP service, unless you have the luxury of deploying a custom info-collecting client to each machine. The RFC 1514 spec makes several parameters optional, and if your target OS does not implement them, you are back to custom code to provide the information.
I'm contemplating how to go about this myself, and I think this is one of the key pieces of infrastructure that not having around keeps us in the dark ages. Hopefully this will be a popular question on serverfault.com. :)
It's not just a matter of installing a single tool to collect this data, because that's not possible cheaply; ideally you want everything from the hardware up to the applications on the network feeding into this thing.
I think the only approach that makes sense is a modular one. The range of devices and types of information is too disparate to come under a single tool. Also, the collection of data needs to be as passive and asynchronous as possible: the reality of running infrastructure means that there will be interruptions, and you can't rely on being able to get the data at all times.
I think the tools you've pointed out form something of an ecosystem that could work together: Cobbler can install from bare metal and hand over to Puppet, which has support for generating Nagios configs and for storing configs in a database. For me, only Cacti is a bit opaque in terms of programmatically inserting new devices, templates, etc., but I know this is possible.
Ultimately you have to sit down and work out which pieces of information are important for the business you work for, and design a db schema around that. Then, work out how to get the information you need into the db, whether it's from Facter, Nagios, Cacti, or direct snmp calls.
Since you asked about collection of data: if you have quite disparate kit (Dell, HP, etc.), then it makes sense to create a library to abstract away the differences between them as much as possible, so your scripts just make standard calls such as "checkdiskhealth". When you add new hardware, you can add to the library rather than having to write a completely new script.
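A sketch of that abstraction layer in Python; the vendor tool invocations are examples and would need checking against your actual kit:

    # Sketch: hide vendor differences behind one interface so scripts just
    # call check_disk_health(). The vendor tool invocations are examples.
    import subprocess

    def _run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    class DellCollector:
        def check_disk_health(self):
            return _run(["omreport", "storage", "pdisk"])  # Dell OpenManage

    class HpCollector:
        def check_disk_health(self):
            return _run(["hpssacli", "ctrl", "all", "show", "status"])  # HP tools

    COLLECTORS = {"dell": DellCollector(), "hp": HpCollector()}

    def check_disk_health(vendor):
        return COLLECTORS[vendor].check_disk_health()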
Sounds like a common problem that larger organizations would have. I know our (50-person company) sysadmin has a little Access database of information about every server, license, and piece of hardware installed. He's very meticulous, but when it comes time to replace or repair hardware, he knows everything about it from his little DB.
You and your organization could sponsor an open source project to get you what you need, and give back to the community so that additional features (that you may not need now) can be developed at no cost to you.
Maybe a simple web service? Just something that accepts a machine name or IP address. When the service gets input, it sticks it in a queue and kicks off a task to collect the data from the machine that notified it. The nature of the task (SNMP interrogation, remote call to a Perl script, whatever) could be stored as part of the machine information in the database. If the task fails, the machine ID stays in the queue and the machine is periodically re-polled until the information is collected. Of course, you also have to have some kind of monitor running on your servers to notice that something has changed and send the notification; hopefully this is easily accomplished with whatever server monitoring software you've already got in place.
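A sketch of that notify-and-poll service in Python, assuming Flask for the HTTP part; the endpoint, queue handling, and collection stub are all illustrative:

    # Sketch: machines POST their name/IP to /notify; a worker pulls from
    # the queue and runs the collection task, re-queueing on failure.
    import queue
    import threading
    import time

    from flask import Flask, request

    app = Flask(__name__)
    pending = queue.Queue()

    @app.route("/notify", methods=["POST"])
    def notify():
        pending.put(request.form["host"])  # machine name or IP address
        return "queued", 202

    def collect(host):
        # Stub: look up this host's collection method (SNMP, remote
        # script, ...) in the database and run it; False means retry.
        print("collecting from", host)
        return True

    def worker():
        while True:
            host = pending.get()
            if not collect(host):
                time.sleep(30)      # crude retry backoff for the sketch
                pending.put(host)

    threading.Thread(target=worker, daemon=True).start()
    app.run(port=8080)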
There are some solutions from the big vendors for managing monstrous sets of machines - such as some of the Tivoli stuff from IBM. That is probably, however, overkill for mere hundreds of machines.
There are some free software server database solutions, but I do not know if they provide hooks to update information automatically from the machines with dmidecode or SNMP. One I have heard about (but have no personal experience with, sorry) is GLPI.
I believe you are looking for Zabbix. It's open source, easy to install and use.
I installed it for a client a few years ago, and if I remember right, it has a client application that connects to the Zabbix server to update it with the requested information.
I really recommend it: http://www.zabbix.com
Check out Machdb. It's an open source solution to the problem you are describing.

Does anyone have database, programming language/framework suggestions for a GUI point of sale system?

Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expense for our customers due to multi-user license fees for database connections, etc.
After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things:
Create the system with a nice GUI front end (it is currently CHUI, and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and GUI... shudder).
Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them.
Use proper design patterns to make the product easy to add to or change at any time (i.e., change the database or change the front end without needing to rewrite the application, or most of it). This is a problem today because Progress 4GL code is compiled directly against the database; small changes in the database require lots of code recompiling.
Our new system will be Linux based, with the possibility of a client application providing functionality from one or more Windows boxes.
So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone with experience in this field might be able to point us in the right direction, or even have ideas about what to avoid. We have considered .NET and SQL Express (we don't need an enterprise-level DB), but that would limit us to Windows (as far as I know, anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySQL based implementation.
To summarize we are looking to do the following:
Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
Deliver a solution that is easily maintainable and supportable.
A solution that has a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would cost a ton of cash to convert over to PCs).
Suggestions would be appreciated.
Thanks
[UPDATE]
I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).
[UPDATE]
For anyone who is interested, we ended up going with Microsoft Dynamics NAV, LS Retail (a plugin for the point of sale and various other things) and then did some (and are currently working on) customization work on top of that. This setup gave us the added benefit of having a fully integrated g/l system, which our current system lacked.
Java for the language (or Scala if you want to be "bleeding edge"; depending on how you plan to support it and what your developers are like, it might be better, but also worse)
H2 for database
Swing for GUI
Reason: Free, portable and pretty standard.
Update: Missed the part where the system should be a client-server setup. My assumption was that the database and client would run on the same machine.
I suggest you first research your constraints a bit more - you made a passing reference to a client using a particular type of terminal - this may limit your options, unless the client agrees to upgrade.
You need to do a lot more legwork on this. It's great to get opinions from web forums, but we can't possibly know your environment as well as you do.
My broad strokes advice would be to aim for technology that is widely used. This way, expertise on the platform is cheaper than "niche" technologies, and it will be easier to get help if you hit a brick wall. Of course, following this advice may not be possible if you have non-negotiable technology already in place at customers.
My second suggestion would be to complete a full project plan, with detailed specs and proper cost estimates, before going with the "rewrite from scratch" option. Right now, you're saying that it would be cheaper to rewrite the system than maintain it, but you don't really know how much the rewrite would cost.
I suggest you use the browser for the UI.
Organize your application as a web application.
There are tons of options for the back end. You could use Java + MySQL. A Java backend will save you from the Windows/Linux debate, as it runs on both platforms, and you won't have any licensing costs for either Java or MySQL. (Edit: there are certainly plenty of other languages with runtimes for both Linux and Windows, including PHP, Ruby, Python, etc.)
If you go this route, you may also want to consider Google Web Toolkit (GWT) for creating the browser based front-end in a modular fashion.
One word of caution, though: browsers can be pesky when it comes to memory management. In our experience, this was the most significant challenge in doing a browser-based POS. You may want to check out Adobe Flex, which runs in the browser but might be more civil in its memory management.
What is CHUI? Character-UI, as in VT terminals? Or even 3270 style?
It sounds like you need a 3-tier system - the database backend, a middle-layer that runs the bulk of the back-end business processes, and a front-end layer for the CHUI / GUI / data-gateway.
All three layers can reside on one machine, or you can distribute the tiers out to various servers. The front-end layer would control the actual terminals, whether they are VT terminals, a web browser, or a custom-written 'client' application.
Make sure you have considered the hardware needs here: are you going to have barcode scanners, cash drawers, POS debit/credit terminals, et cetera? If you are using a standard browser, it might be hard to integrate those items reliably. (At the very least, you're likely going to have to write special applets to handle them.)
Finally, consider the possibility of thin-client technology on Windows. It greatly simplifies system management, since you only have to upgrade the software centrally. Thin-client PCs are cheap: sub-$200.
Golden Code Development (see www.goldencode.com) has a technology that does automated conversion of Progress 4GL (the schema and code... the entire application) to a Java application with a relational database backend (e.g., PostgreSQL). They currently support a very complete CHUI environment, and they do refactor the code: for example, the conversion separates the UI, the data model, and the business logic into separate Java classes.

The entire result is a drop-in replacement that is compatible with the original (users don't need retraining, processes don't need to be modified, and the data is migrated too). This is possible because they provide an application server and a set of runtime classes that provide that compatibility. The result of the automated conversion is not something that needs further editing before you can compile and run it. True terminal support is included, so hardware terminals still work (it requires a small JNI library to access NCURSES from Java). All the rest of the runtime code is pure Java. No Progress Software Corp technology is used in the resulting system, and it runs on Linux.
At least one converted system is already in production, running in a 24/7 mission-critical environment. It is a converted ERP system that their mid-sized pilot customer uses to run their entire business.
