Architecture for a machine database [closed] - database

This might be more of a serverfault.com question but a) it doesn't exist yet and b) I need more rep for when it does :~)
My employer has a few hundred servers (all *NIX) spread across several locations. As I suspect is common we don't really know how many servers we have: more than once I've been surprised to find a server that's been up for 5 years, apparently doing nothing but elevating the earth's temperature slightly. We have a number of databases that store bits of server information -- Puppet, Cobbler, Nagios, Cacti, our load balancers, DNS, various internal spreadsheets and so on but it's all very disparate, incomplete and overlapping. Maintaining this mess costs time and money.
So, I'd like to come up with a single database which holds details of what each server is (hardware specs, role, etc.) and replaces (or at least supplies data for) the databases mentioned above. The database and web interface are likely to be a Rails app, as this is what I have most experience with. I'm more of a sysadmin than a coder.
Has this problem already been solved? I can't find any open source software that really fits the bill and I'm generally not too keen on bloaty, GUI vendor-supplied solutions.
How should I implement the device information collection bit? For instance, it'd be great to have the database update device records when disks are added or removed, or when the server serial number changes because HP replaces the board. This information comes from many different sources: dmidecode, command-line disk tools, SNMP against the server or its onboard lights-out card, and so on. I could expose all this through custom scripts and net-snmp, or I could run a local poller that reported the information back to the central DB (maybe via a RESTful interface or something). It must be easily extensible.
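To make the local-poller idea concrete, here's a rough sketch in Python (the inventory endpoint URL and field names are made up; dmidecode and lsblk are the kind of tools it would wrap):

#!/usr/bin/env python3
# Minimal sketch of a local poller: gather a few facts and POST them to a
# central inventory service. The endpoint URL and field names are invented.
import json
import socket
import subprocess
import urllib.request

def run(cmd):
    # Run a command and return stdout, or None if it isn't available.
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None

facts = {
    "hostname": socket.getfqdn(),
    # dmidecode needs root; serial/product come back empty otherwise.
    "serial": (run(["dmidecode", "-s", "system-serial-number"]) or "").strip(),
    "product": (run(["dmidecode", "-s", "system-product-name"]) or "").strip(),
    "disks": run(["lsblk", "-J", "-o", "NAME,SIZE,MODEL"]),  # JSON disk listing
}

req = urllib.request.Request(
    "https://inventory.example.com/api/servers",   # hypothetical endpoint
    data=json.dumps(facts).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req, timeout=10)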
Have you done this? How? Tell me your experiences, discoveries, mistakes and recommendations!

This sounds like a great LDAP problem looking for a solution. LDAP is designed for this kind of thing: a catalog of items that is optimized for data searches and retrieval (but not necessarily writes). There are many LDAP servers to choose from (OpenLDAP, Sun's OpenDS, Microsoft Active Directory, just to name a few ...), and I've seen LDAP used to catalog servers before. LDAP is very standardized and a "database" of information that is usually searched or read, but not frequently updated, is the strong-suit of LDAP.
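As a rough illustration of querying such a catalog (using the Python ldap3 library; the base DN, credentials and attributes below are invented):

# Rough sketch: look up servers in an LDAP directory with the ldap3 library.
# The base DN, bind credentials and attribute names here are invented.
from ldap3 import Server, Connection, ALL

server = Server("ldap://ldap.example.com", get_info=ALL)
conn = Connection(server, "cn=admin,dc=example,dc=com", "secret", auto_bind=True)

# Find every entry modelled as a "device" and pull a few attributes.
conn.search(
    "ou=servers,dc=example,dc=com",
    "(objectClass=device)",
    attributes=["cn", "serialNumber", "description"],
)
for entry in conn.entries:
    print(entry.cn, entry.serialNumber, entry.description)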

My team has been dumping all our systems into RDF for a month or two now. We have the systems implementation people create the initial data in Excel, which is then transformed to N3 (RDF) using Perl.
We view the data in Gruff (http://www.franz.com/downloads.lhtml) and keep the resulting RDF in Allegro (a triple store from the same guys that do Gruff).
It's incredibly simple and flexible: no schema means we simply augment the data on the fly, and with a wide variety of RDF viewers and reasoning engines the presentation options are endless.
The best part for me? No coding; just create triples, throw them in the store, and then view them as graphs.
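For a flavour of what building such triples looks like programmatically, here is a tiny sketch using Python's rdflib (the namespace and property names are invented):

# Tiny sketch of producing N3/RDF triples for a server with rdflib.
# The namespace and property names are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.com/infra#")
g = Graph()

srv = EX["web01"]
g.add((srv, RDF.type, EX.Server))
g.add((srv, EX.location, Literal("London")))
g.add((srv, EX.serialNumber, Literal("CZJ1234ABC")))

print(g.serialize(format="n3"))  # no schema needed; just keep adding triples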

The collection of detailed machine information is a very frustrating problem (many vendors want to keep it this way). Even if you can spend a large amount of money, you probably will not find a simple solution to this problem. IBM and HP offer products that achieve what you are seeking, but they are very, very expensive, and will leave a bad taste in your mouth once you realize that probably all you needed was 40-50% of the functionality they offer.
You say that you need to monitor *NIX servers... most (if not all) Unices support RFC 1514 (Windows also supports this RFC as of Windows 2000). The Host MIB support defined by RFC 1514 has its drawbacks, however. Since it is SNMP based, it requires that SNMP be enabled on the machine, which is typically not the default for Unix and Windows machines. The reason for this is that SNMP was created before the entire world was using the Internet, and thus the old, crusty nature of its security is of concern. In many environments, this may not be acceptable for security reasons. However, if you are only dealing with machines behind the firewall, this might not be an issue (I suspect this is true in your case).
Several years ago, I was working on a product that monitored hundreds of Unix and Windows machines. At the time, I did extensive research into the mechanics of how to acquire detailed information from each machine, such as disk info, running processes, installed software, up-time, memory pressure, CPU and IO load (including network), without running a custom client on each machine. This info can be collected in a centralized fashion. As of three or four years ago, the RFC 1514 Host MIB spec was the only "standard" for acquiring detailed real-time machine info without resorting to OS-specific software. Sun and Microsoft announced a WebService-based initiative many years ago to address some of this, but I suspect it never received any traction, since I cannot at the moment even remember its marketing name.
I should mention that RFC 1514 is certainly no panacea. You are at the mercy of the OS-provided SNMP service, unless you have the luxury of deploying a custom info-collecting client to each machine. The RFC-1514 spec dictates that several parameters are optional, and if your target OS does not implement it, then you are back to custom code to provide the information.
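To make that concrete, this is roughly what pulling Host Resources MIB data looks like if you shell out to net-snmp's snmpwalk; it assumes SNMP v2c is enabled on the target and the community string is "public":

# Rough sketch: pull storage and device tables from the Host Resources MIB
# (RFC 1514, later updated by RFC 2790) by shelling out to net-snmp's snmpwalk.
# Assumes SNMP v2c is enabled on the target with community string "public".
import subprocess

def walk(host, oid, community="public"):
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", community, host, oid],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

for line in walk("server01.example.com", "HOST-RESOURCES-MIB::hrStorageTable"):
    print(line)   # disk and memory capacity/usage rows

for line in walk("server01.example.com", "HOST-RESOURCES-MIB::hrDeviceDescr"):
    print(line)   # CPUs, disks, network interfaces, etc.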

I'm contemplating how to go about this myself, and I think this is one of the key pieces of infrastructure that not having around keeps us in the dark ages. Hopefully this will be a popular question on serverfault.com. :)
It's not simply a case of installing a single tool to collect this data (that isn't possible cheaply); ideally you want everything from the hardware up to the applications on the network feeding into this thing.
I think the only approach that makes sense is a modular one. The range of devices and types of information is too disparate to come under a single tool. Also the collection of data needs to be as passive and asynchronous as possible - the reality of running infrastructure means that there will be interruptions and you can't rely on being able to get the data at all times.
I think the tools you've pointed out form something of an ecosystem that could work together - Cobbler can install from bare-metal and hand over to Puppet, which has support for generating Nagios configs, and storing configs in a database; for me only Cacti is a bit opaque in terms of programmatically inserting new devices, templates etc. but I know this is possible.
Ultimately you have to sit down and work out which pieces of information are important for the business you work for, and design a db schema around that. Then, work out how to get the information you need into the db, whether it's from Facter, Nagios, Cacti, or direct snmp calls.
Since you asked about collection of data, I think if you have quite disparate kit (Dell, HP etc.) then it makes sense to create a library to abstract away as much as possible the differences between them, so your scripts just make standard calls such as "checkdiskhealth". When you add new hardware you can add to the library rather than having to write a completely new script.
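A rough sketch of what that abstraction layer could look like (class and method names are invented; hpacucli and omreport are the usual HP/Dell CLI tools, though your versions may differ):

# Sketch of a vendor-abstraction layer so collection scripts can call
# standard methods like check_disk_health() regardless of the hardware.
# Class and method names are invented; hpacucli/omreport are the usual
# vendor CLI tools, but your kit may need different commands.
import subprocess

class Hardware:
    def check_disk_health(self):
        raise NotImplementedError

class HPHardware(Hardware):
    def check_disk_health(self):
        out = subprocess.run(["hpacucli", "ctrl", "all", "show", "config"],
                             capture_output=True, text=True)
        return "Failed" not in out.stdout

class DellHardware(Hardware):
    def check_disk_health(self):
        out = subprocess.run(["omreport", "storage", "pdisk", "controller=0"],
                             capture_output=True, text=True)
        return "Critical" not in out.stdout

def detect_hardware(vendor):
    # Pick the right implementation; add a class here when you buy new kit.
    return {"hp": HPHardware, "dell": DellHardware}.get(vendor.lower(), Hardware)()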

Sounds like a common problem that larger organizations would have. I know our (50-person company) sysadmin has a little Access database of information about every server, license, and piece of hardware installed. He's very meticulous, but when it comes time to replace or repair hardware, he knows everything about it from his little db.
You and your organization could sponsor an open source project to get you what you need, and give back to the community so that additional features (that you may not need now) can be developed at no cost to you.

Maybe a simple web service? Just something that accepts a machine name or IP address. When the service gets input, it sticks it in a queue and kicks off a task to collect the data from the machine that notified it. The nature of the task (SNMP interrogation, remote call to a Perl script, whatever) could be stored as part of the machine information in the database. If the task fails, the machine ID stays in the queue and the machine is periodically re-polled until the information is collected. Of course, you also have to have some kind of monitor running on your servers to notice that something has changed and send the notification; hopefully this is easily accomplished with whatever server monitoring software you've already got in place.
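A bare-bones sketch of that notification service, using Flask purely for illustration, with the actual collection task stubbed out:

# Bare-bones sketch of the notify-and-queue idea: a machine (or whatever
# monitors it) POSTs its name, we queue it, and a worker re-polls until the
# collection task succeeds. Flask and the endpoint name are illustrative only.
import queue
import threading
import time
from flask import Flask, request

app = Flask(__name__)
pending = queue.Queue()

@app.route("/notify", methods=["POST"])
def notify():
    pending.put(request.json["machine"])   # e.g. {"machine": "web01"}
    return "queued", 202

def collect(machine):
    # Look up how this machine is interrogated (SNMP, remote script, ...)
    # in the database and run it; stubbed out here.
    pass

def worker():
    while True:
        machine = pending.get()
        try:
            collect(machine)
        except Exception:
            time.sleep(60)          # failed: wait, then re-queue for another poll
            pending.put(machine)
        finally:
            pending.task_done()

threading.Thread(target=worker, daemon=True).start()
app.run()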

There are some solutions from the big vendors for managing monstrous sets of machines - such as some of the Tivoli stuff from IBM. That is probably, however, overkill for mere hundreds of machines.

There are some free software server database solutions, but I do not know if they provide hooks to update information automatically from the machines via dmidecode or SNMP. One I have heard about (but have no personal experience with, sorry) is GLPI.

I believe you are looking for Zabbix. It's open source, easy to install and use.
I installed it for a client a few years ago, and if I remember right it has a client application that connects to the Zabbix server to update it with the requested information.
I really recommend it: http://www.zabbix.com

Check out Machdb. It's an open source solution to the problem you are describing.

Related

What is the role of 'server' in an embedded server dbms? - Confusion over the many 'server'-related terms that pop up when looking for a suitable RDBMS

I'll give you a bit of background because I don't think my question is clear without it. Aside from that, I don't know much about servers but I think it'll become clear what I'm actually asking because of the background information.
I was/am building a small C++ program to be used by just me (a homework manager, which needed to keep track of tasks, so it relied on tasks and subtasks and needed multiple tables, etc.), so I figured I needed a database. I quickly stumbled upon SQLite, which was perfect for my case in many ways:
it's free,
it only uses .db files, which can be interpreted by any software,
it can be embedded,
it's simple (in terms of documentation and libraries),
and most importantly: it is what SQLite.org describes as 'serverless'.
However, I found SQLite's dynamic type system extremely annoying ('why' is beside the point; I might make separate posts asking questions about this) and I decided to look for an RDBMS that has all the pros I mentioned above but also has static typing.
While going down this rabbit hole of looking for an RDBMS to fit my needs, I came across many terms related to how the RDBMS is implemented with respect to 'servers' and the like. All the terms are very vague, and the same word does not mean the same thing in every instance.
I noticed all of these keywords and contrasts popping up during my search:
Stand-alone vs server/client
Embedded vs... 'not embedded', I guess(?)
Classic serverless vs neo-serverless
Serverless... but in reality cloud-based (I thought clouds were servers(?))
Server vs service
Service vs application
User vs client
As far as I know, a server is a process that runs in the background and is not used by the user directly. But other than that, all these server-related terms are throwing me off.
I want an RDBMS that has this 'serverless'ness SQLite.org speaks of. I saw many professional free SQL RDBMS providers which spoke of the ability to have 'embedded servers', which does contain the word 'embedded' but also contains 'server'. So my question is: when a provider speaks of these 'embedded servers', what does it really mean?
Does it mean that there is one application, and when it runs it opens another application which functions as a server? And if so, is that server a service or just another normal application-like process? Or does it work exactly the same as the serverlessness SQLite mentions, that being: the libraries inside the compiled project already handle everything needed to work with only .db files? Does it need any files other than the database and the executable? Does the communication between the application and the database file come directly from the code, or is another process used?
(PS: as a side-question: could you help me clear up what all the terms in the list above precisely mean?)
I realise my question might be all over the place, but so is the vocabulary I've come across in this journey. I hope you can understand where my confusion is coming from and can help me clear these points up. Thanks in advance.
By "serverless" SQLLite just means that it's a library, and doesn't run in a seperate process. In this it's like Access/Jet and other older DMBS programs that read and write files directly. The more common term for this is "embedded database".
The more common meaning of "serverless" these days is a cloud-based capability that doesn't require you to install or manage a "server" or VM. As in "We use Azure Functions for serverless compute".
The other DBMS systems are typically called "client-server DBMS", where the DBMS runs in a separate process and the client program communicates with it over a network or some RPC mechanism. Client-server RDBMS systems can be "bundled" or "embedded" with an application, and may not be running on a separate computer, but they would still be running in a separate process.

Is there anything "incorrect" about running multiple database platforms at once?

Let's say I have an application that works with different kinds of data. Each kind of data is unique, and the different kinds are only tangentially related to each other. It may be very difficult to get one kind to map to an RDBMS, and another kind to a graph database. Is there any real harm in running multiple database platforms to address the various needs of the application?
There are a lot of application specific issues. Will the data stores be talking to each other directly? Or, will they be communicating through the application?
The major issues have to do with maintenance over time. If the different databases are running on the same server, then you have maintenance issues. A new server OS patch comes in, but only some of the software works with the new patch. Do you install it or not? What if you need it for some other reason?
Similarly, if all the software is running on a single server, then you might need a larger server because you have more software. You will be paying more for products that are priced by the size of the server, even though the extra compute power is going somewhere else.
In some cases, you might need to use different products. However, it is often more practical to use one product a bit sub-optimally, rather than maintain a system with a profusion of best-for-a-single-purpose software components.
I think there's no major issue with it as long as you watch for injections and other security issues (I don't know how you pass your data to the DB).
It's technically possible and would work; however, the person or team building the system would need all the skills to develop/test/deploy/maintain/back up each database, with its own mechanisms, procedures, and future development and upgrade paths (duplicated for each type of data/database). I would certainly try to minimise those mechanisms and procedures.

How to connect mainframe data to ClearCase?

I joined a new company using mainframes (z/OS) for development, documentation, design and testing. Now we are planning to migrate the whole system to ClearCase for version control.
My question is: are there any connectors I can use to connect the mainframe to ClearCase, so that users can check in/check out straight into their mainframe environment?
I am looking for a couple of solutions to do this, like tools I can use and so on.
If you're talking about storing the stuff in Clearcase on a distributed platform and having it accessible from z/OS, I'm not aware of any way to do that but, since z/OS comes with a full blown UNIX environment built in with access to both the UNIX file systems and z/OS data sets, it may well be doable.
What is doable, since we've done it (and, in fact, we developed large chunks of it) is to access data on the mainframe from a distributed platform.
RD/z (Rational Developer for System z, although it may have changed names by now) has a plug-in which allows Eclipse to tie in to the mainframe quite easily. It uses a started task to communicate with SCLM as its primary library manager but, from memory, you can directly access the non-SCLM data sets as well.
If you combined that with a ClearCase Eclipse plug-in, you could quite easily put together a system which, while it kept everything in a ClearCase repository, could push it up to the mainframe and build it automatically, pulling down the built artefacts if you also wanted them stored in ClearCase.
All from one environment. Of course, the dinosaurs will question your sanity at storing their 'precious' in a flaky distributed system - the fact that both products come out of IBM in no way alleviates the rivalry between mainframers and lesser mortals :-)
If that's not ideal and the company has a mainframe (and it appears they do), they will also have a support contract with IBM. Best bet would be to raise a Q&A with IBM to see if there are any solutions already out there - please don't raise a PMR for this, it's not a bug and that just annoys support/development and makes the monthly targets harder to meet :-) Alternatively, ask the people that would know. IBM developers monitor these forums and should be able to help you out more than I.
On the off-chance that you don't have a developerWorks account, keep an eye on this question that I started on your behalf. I'm actually interested in the results myself, since part of my day job is the marrying of the mainframe and distributed worlds. Although, unfortunately, it sometimes seems like it's about as successful as most Hollywood or other celebrity marriages :-)

What Are the Pros and Cons of Filemaker? [closed]

A potential customer has asked me to look at some promotional flyers for a couple of apps which fall into the contact management / scheduler category. Both use Filemaker as their backend. It looks like these two apps are sold as web apps. At any rate I had not heard of Filemaker in about ten years, so it was surprising to see it pop up twice in the same sitting. I think it started out as a Mac platform db system.
I am more partial to SQL Server, MySQL, etc., but before making any comments on FileMaker I'd like to know some of the pros and cons of the system. It must be more than Access for Macs, but I have never run across it as a player in the client/server or web app arena.
Many thanks
Mike Thomas
Calling FileMaker Pro "Access for the Mac" is kind of like saying Mac OS X is "Windows for the Mac". They're both in the same category of software: integrated programming environments. It's like you have MySQL, PHP, HTML and your editor put together in a GUI. Comparing the two, they both have pros and cons. Here are the pros and cons of using FileMaker Pro vs PHP/MySQL/HTML in my experience.
Pros:
Easy to get started
Easy to deploy locally, turn on sharing and connect from another client
Cross-platform (Mac OS X, Windows, iOS)
There are many plugins available to extend functionality
Includes starter solutions
Anyone with access can edit the program
For the most part, drag and drop programming
Changing field/database/script names after the fact is free
Has some neat built in tricks like built in graphs, tab controls, web viewers
Built-in support for importing/exporting Excel, CSV and tab-delimited files
Cons:
Inflexible: it does what it does well, but if you need more you're out of luck for the most part
Expensive compared to the free alternative: It costs about $100 per year for a local user, $150 per developer, if you are using it as a website you need specialized hosting, which tends to cost more. In addition the server part of the software is about $300-$800 a year
The plugins required to extend functionality can be expensive as well
Pretty much only drag and drop programming, you can only use predefined script steps, relationships are made by making a graph
Source control is a problem
Lack of scalability
Unable to copy and paste/import or export some items from solutions
Requires the mouse to access functionality
Layout design is fairly static and dated (this is improving with FileMaker 12 and above)
In general I would say that if you're developing exclusively for the web or for a large organization, FileMaker Pro probably isn't the best fit. It's difficult to have multiple people developing on the same solution. On the other hand, for a smaller organization in need of a customizable in-house database it could be a great boon. You can build rather complicated applications very quickly with it if you're willing to deal with its deficiencies.
Pros:
It's cheap
Cons:
It's cheap(ly made)
It's non-standard (it's easy to find MySQL/Oracle/MSSQL/Access experts, but nobody knows FileMaker)
Using subpar and/or nonstandard technologies only creates technical debt. I've never found a respectable dev who actually enjoyed using (or wanted to use) this niche product.
In my opinion this product exists because it is Access for Macs, and it gained enough of a userbase and existing applications that enough people bought each upgrade to keep it in business. There are many products on the market that still exist because their users are locked in, not because they're a good choice.
I'll admit to bias on this subject -- I work with one of the larger FileMaker development shops out there, and have written the odd book on the subject. We actually employ many respectable developers who love using FMP. I'll try to keep it brief. :-)
FileMaker Pro is a rapid app development tool. It's primarily client-server, though it has some very respectable web publishing capabilities which work well for many applications. It is not SQL-based, but does have ODBC and JDBC interfaces, as well as an XML/HTTP interface.
As far as lock-in, FileMaker Inc has grown sales steadily, with very significant growth in new users who are attracted to the platform's solidity and ease of use.
I think Matt Haughton nailed it -- for the right applications, FMP is simply the best choice going. That said, your customer is looking at apps written in FMP Pro, and you need to evaluate those apps on their own merit. They may be good instances of FMP development, or they may not.
To know more about FMP's fitness for the task, we'd need to hear more about the proposed application and user base. Are these indeed web apps, or client-server? How many users will be using it? Do they work at one or two sites, or are they spread across the Internet?
Happy to elaborate further if there's more interest.
FileMaker is designed to integrate very simply with other databases and client applications. If you are looking at building a complicated distributed system, look elsewhere.
FileMaker is NOT good to use as a front-end to another datasource due to the design goals of the External SQL Data Sources (ESS) feature set, and it is NOT good to use as a back-end to anything other than the FM client due to slow and buggy ODBC drivers. The nature of FileMaker's architecture means it doesn't scale very well with complicated solutions, regardless of how well it can integrate with other systems.
Here's a developer's perspective on some limitations I've found when teaming FileMaker with other back-ends and ODBC clients:
The ODBC driver is limited, slow, and leaks memory on the client side. The xdbc_listener.exe has similar memory-leaking issues on the server side and will eventually crash when it uses a certain amount of RAM. We have a scheduled script to restart it each night.
FileMaker needs to load all related databases into memory before it can connect to a database. If it's a complicated database, opening and closing a connection can be quite slow (1-2 seconds) depending on how it is structured, and more so if the database references tables in other FM databases, because they need to be loaded as well. I get around this by creating persistent connections that stay open for the lifetime of the application. Although we try to minimize the number of open connections, we have yet to see a performance hit on the server.
The ODBC driver interprets queries in strange ways. For example I ran a query on 76k rows to UPDATE table_1 SET field_1 = 1 and it took 5 mins to perform the query because I think it split the one query into 46k update queries, one for each row. I know this because I watched it update the rows one-by-one in the FM client. So I don't trust the ODBC driver at all.
Here's another example of 3 different queries and how long they took searching on two date fields:
SELECT id FROM table
WHERE datefield1 = {d '2014-03-26'}
.5 seconds
SELECT id FROM table
WHERE datefield2 = {d '2014-03-26'}
.5 seconds
SELECT id FROM table
WHERE datefield1 = {d '2014-03-26'} OR datefield2 = {d '2014-03-26'}
1 minute 13 seconds!
We had problems with how FileMaker cached data from an SQL Express database. We tried to run the command to clear the cache, but it didn't always work (spent a lot of time investigating this).
FileMaker uses pessimistic locking of records; before editing (from the client or as part of an odbc transaction) FileMaker attempts to lock the row first.
The FileMaker Server service "prefers" being stopped using the Admin Console (though the Admin Console may sometimes be unable to stop it either). If the FileMaker Server service stops any other way (including power loss, via the management console, or even a normal system shutdown) then some of your databases may become corrupt. Same if a client crashes during an operation, or if the network connection is lost suddenly. The solution for a power loss is to write a batch script to try and automate the shutdown, and then buy a UPS and program it to execute your script before the juice runs out. And hope it works. Otherwise backup hourly using the built-in scheduler. Aside: SQL server doesn't have this problem because it can roll back uncommitted transactions.
Performing backups with the built-in scheduler actually suspends operations to the database during the backup process. I.e., if it's a large database, it might take a minute to back up, and users will notice the pause because they won't be able to edit/insert, etc.
If you're using the FileMaker PHP API, take note that you can't use AND and OR together in the same request.
Running an intensive query using the ODBC driver might be fast on its own, but run the same query simultaneously (as in a multi-user environment) and it will slow down by about 300% exponentially. You will run into speed issues if you’re expecting a large volume of intensive queries to hit the database at the same time.
We have found that when the FileMaker ODBC driver says it has finished an update/insert operation, it still does not guarantee the transaction is committed; it appears that FileMaker will continue to hold the changes in the server cache until the auto-enter calculated fields are evaluated/indexed and then it saves to disc, meaning there may be more of a delay until the record is actually committed. So really the ODBC write operations are not always immediate writes, but rather eventual writes. This delay will be especially evident in complicated tables with many calculated fields and triggers.
Calculated fields may slow down execution and reading via the ODBC driver, depending on what is being evaluated. Try to read stored values whenever possible.
Using BLOB containers: Not Recommended. Storing documents such as PDFs in a container field will inflate your database file size, take longer to backup and complicate the retrieval and editing of those files via ODBC. It’s much easier to store files on a network share and write to the file on disk.
If you must use FM as a front-end solution to another database, make sure to carefully read FileMaker's Introduction to External SQL Sources.
Also refer to the appropriate version of the FileMaker ODBC Guide found on their website.
Just a few comments on the subject
FileMaker is certainly cheaper than some enterprise solutions in licensing costs. However, the real cost benefit is in development time. The development life-cycle is typically orders of magnitude lower than other enterprise platforms (whatever the licensing costs of those platforms). By this I mean days instead of weeks, or weeks rather than months to develop some feature.
There is a strong argument that FileMaker is Access for the Mac. While this was a valid argument a few years ago, FileMaker has come into its own in recent years. It's worth noting that FileMaker is cross platform and used extensively on Windows as well as Mac. That being said there are still huge similarities and differences between FileMaker and Access, the truth is none of them have any bearing on your situation.
While FileMaker is non-standard it does support live connection to MySQL, MS SQL Server and Oracle.
Also, there are numerous FileMaker developers; not as many as for more standard platforms, but they are definitely around. If you let me know where you are, I can put you in touch with a selection of developers in your area.
The important point I want to make is that in the correct context FileMaker is the best thing in the world at what it does; if you try to do something that it's not meant to do, you'll get stuck. However, it could support offices in 4 locations; it can be and is being done.
Before you go and rewrite your system in some other platform you should get in touch with a FileMaker expert and see what they have to say about what you've currently got, writing more details on this site and having non-experts answer positively or negatively won't help you. In the end it has to be a business choice of costs vs. benefits.
No need to list any more "Cons", but here is a significant "Pro": FileMaker Go. Once you have your database set up, download an iPad/iPhone app (free for FM12) and run it from a mobile device. The database can be stored locally on the iPad/iPhone or synced back to a host PC.
I'm sure this mobile solution is possible elsewhere, but the fundamental point is that an entry-level user (and I mean NO previous database experience) can create an impressive solution within a few weeks.
Personal experience: main database running FM 11 hosted on a PC under my desk, 4 researchers scattered across the city collecting data on iPads, all syncing back to my PC. The previous solution was using paper and entering the data by hand.
FileMaker is an interesting app :) It started as an end-user tool and it is still one of the very few database apps that a non-programmer can actually use. But somehow FileMaker developers managed to make it very scalable. There's no other platform where one can start with a useful tool and end up with a client-server app for the whole company. In the old days they used to have a splash screen that captured this very idea (I only found an imperfect version):
I.e. something as simple as a file cabinet that can grow quite big.
All FileMaker pros and cons come from its origin. As an end-user tool it's very much unlike other DBMS apps. No SQL. No real programming: scripts are basically macros that repeat user actions in a slightly more general way, with variables and some logic. Lots of limitations; e.g. a list view cannot have a sidebar; a dynamic value list is always sorted alphabetically; to open a Save As dialog and read back the file name you'll need a plug-in; and so on. For a programmer this can be very frustrating, because most of his assumptions will be wrong. And existing apps written by non-programmers are not exactly paragons of clarity and solid design.
But if you manage to overcome the obstacles you'll find a rather good RAD for client-server, single-user, web, and mobile apps, that stays rather usable over WAN, with such niceties as runtime and kiosk mode.
Having said that, I'm not quite sure about generic contact management and scheduling apps in FileMaker. If this is what they are, then they should be unlocked, so the customer can make changes; or they have to be niche apps that do for the customer what nothing else does.
FileMaker is enormously powerful and versatile, with excellent multi-user support. You can create wonderful solutions in FileMaker with document management, a web interface, an iPhone interface, automated publishing support, scheduled scripts, PDF/Excel/HTML reports, XML support, caller ID record lookup, and integration of web data (UPS & FedEx linked to order records, for example). It's extensible with plugins. It's like being in the Home Depot of data. Don't try to build Amazon; other than that, what can't you build with it, and with faster app dev than almost anywhere else?
It has been more than a year now since I came across FM and started using it to develop solutions for various clients. The following is my FM experience:
the learning curve is much shorter than with hard-coded industry-standard technology;
it fits in well with industry-standard platforms because of its ODBC and JDBC connectivity: your data is not locked into FM, and other data formats can get into FM;
it works well for both front-end and back-end solutions;
FM can match an enterprise platform given the right database design and deployment, i.e. workgroup- or department-oriented solutions that keep data with its workgroup owner and make it available to other workgroups or departments;
FM fits well with rapid application development that employs prototyping;
FM has many more capabilities besides...
I suggest you try it yourself and I'm sure you'll love the stuff FM can offer!
Happy computing...
A little research has made me think that FileMaker is indeed Access for Mac, but perhaps a little more robust. I worked with Access for years, never really liked it, and am glad to be away from it (I always held a grudge for MSFT killing FoxPro, which I did like).
It is hard for me to imagine it as a good solution for a web based app used by offices in four locations around the country, plus many others logging on from home, etc.
Using it does not make much sense when MySQL, SQL Server, etc are available for the data storage and ASP.NET, PHP, Ruby etc are there for the programming.
Mike Thomas
While the comparison to "Access for Mac" is inevitable, there are some important distinctions that have to be made.
FileMaker databases can be shared out to more than one person provided 1 of 2 things happen. One, a person on your network opens the DB and shares it from their computer, acting as the host. Two, you buy and install FileMaker server which hosts the DBs.
Also it's been my experience that while FileMaker developers LOVE FM, they're having to learn other technologies because more and more government agencies (my primary employer the past 10 years) are moving off of FM and into SQL Server, Oracle and to some extent Access and open source. FileMaker skills are becoming less and less in demand in the public sector, so getting support for these applications is harder and consequently, more expensive.
That being said, we have a FM server and FM 5.5 clients running an application that has been rock solid for the past 5 years.
I've been using FM for more than a year now. I've been building and providing solutions for SMBs using standard SQL for several years. I love that SQL stuff, but just a year ago I ran across FM Pro 9 and gave it a try. Amazingly, I got all I wanted in a short time. In my experience as a developer, FM Pro impressed me with the way it does things.
True enough, FM is not an industry database standard, but a good number of its features can compensate for whatever a "standard" is required for. FM Pro has live connectivity to MySQL, MS SQL Server and Oracle. For me, it doesn't make sense to talk about standards if you can move your data from FM to other platforms and vice versa.
Well, this note can't be all that convincing on its own. It's best to try it for yourself, especially now that FM has its new version 10. Believe me, you'll love it.
Happy computing.
Two points seem to dominate this discussion and need consideration:
Non-Standard and what Government Agencies are doing.
Let's consider the small business owner or the single user, both of whom are creating databases to meet their own needs.
Now it doesn't matter what the government is doing, this is your database for your employees. Do what you want (as long as its legal, of course).
Non-standard: well, often this is the best idea, since what you want to do works for you. Name your fields and tables as you like, and later on rename them as you prefer. Don't try this with dbf or SQL... Anyone remember those 'standard' file names, bks1999.dbf, bks2000.dbf? Keep in mind that 'standards' exist because someone else wrote them before you arrived, not because they are the best possible idea.
And yes, there are a lot of 'bad' FileMaker solutions, but they are working and supporting hundreds of thousands of people. But try to improve one of these bad solutions and compare that effort to improving a similarly bad dbf solution. A renamed field filters effortlessly through thousands of scripts, and through scripts in related FileMaker files. In a dbf solution it can become a nightmare, as each instance has to be manually retyped.
One real test would be to compare how easily Filemaker can work with SQL, etc. as compared to other applications. That might be interesting. I've never done that but I bet I could create a working file in very little time that works with such data.
I have always said that every developer should use and be familiar with all of the tools.
25 years with Filemaker Pro, 3 years with FoxPro, 2 with 4D, etc.
Lots of comments about FileMaker being non-standard. But what is "standard"? By "standard", many people mean that a database supports Structured Query Language (SQL) (ISO Standard 9075) and FileMaker has and continues to support SQL. How every database engine supports SQL is proprietary to every database. Now it might be open source such as MySQL, but SQL is a standard to support, not the underlying language of how it is accomplished.
When most people talk about databases, they are only talking about the backend tables and schema. The front end user interface is frequently something else. And most of them now render those results as html pages via open standards like PHP. Again, FileMaker fully supports PHP calls and Apache or IIS (depending on which OS platform you are on).
So I would disagree with people saying FileMaker is non-standard.
What is unique about FileMaker is its tight integration between the schema and the User Interface. This is similar to Apple's tight integration between hardware and the Operating system, which has some nice benefits. Interestingly, FileMaker is owned by Apple, but I guess that is another topic.
Generally, FileMaker's User Interface is considerably easier to use than most open standards and most people stick to FileMaker's client User Interface instead of web interfaces. There are still a number of things supported only in FileMaker User Interface that can't be duplicated in a web browser.
FileMaker really makes rapid application development much easier with its close integration of schema and user interface. This makes development cost a whole lot less in most cases.
FileMaker's database services can be spread among up to 3 machines giving it primitive load balancing abilities with web services. While FileMaker easily supports hundreds of users, if you go into thousands of simultaneous users, many SQL only databases (eg Oracle, MS SQL Server, MySQL, Postgres) are designed to better spread out the load across more machines. Basically, if you have high simultaneous transactions, FileMaker is not your solution. For example, a company with many point of sale terminals from all over the county hitting it at the same time.
While FileMaker supports SQL and PHP, using it only that way is a waste of the money spent on the license for the FileMaker User Interface. It would not be a cost effective solution to develop a web front end and pay the full FileMaker license cost for only a backend. So, FileMaker's support of PHP and SQL is best combined with companies that have an in-house solution for staff, but also want to integrate that with their web development team for outside customers.
One last note is that FileMaker's tight integration of schema and User Interface makes security much easier. Obviously you have to set up the groups and users and I usually integrate FileMaker with Active Directory (or Open Directory). But when you use the FileMaker Client and Server connections, turning on encryption security is a single checkbox on the server. FileMaker handles all of the certificates and uses an AES 256bit cipher (at least since version 11, maybe before then too). Currently, the US Government considers that approved for up to and including the first level of Top Secret communications. In typical SQL systems, there is a lot of work to configure security on the database end as well as the user interface end of things and it is much more work than a single checkbox.
FileMaker's target audience has been small to medium sized companies, usually with 5 to 200 users, and it is a well priced product for rapid application development of databases for companies of that size.
And I can't end this comment without commenting on how easy it is to create and deploy a mobile solution on iOS devices like iPads and iPhones. FileMaker Go is a free app for use on these mobile devices and they fully support the same user interface and security. In fact, I am aware of one company that uses FileMaker as a front end interface for their Oracle database simply for access on iPhones. Expect a lot more in the mobile market in the future and FileMaker is clearly targeting mobile users.
Just to add my 2¢ to the already given answers: Everything everyone has written in the voted answers is true about Filemaker. The product is robust enough to warrant both positive and negative opinions.
I'm not a pro enough to speak to your concerns but there are a number of large complex applications written in FMP that you may want to look at. Jungle Software is a good place to start.
The downside to FMP for me, as a user of some of those apps, is that they come with a stack of files. The runtime of an FMP application isn't packaged as a bundle, so it can look a bit complex with a large app. We did some tests a long time back because FMP had a reputation for being slow. At that time (12 years ago) FMP needed to index the db or it was slow, but once it was indexed it was as fast as anything else we tested. Its big upside for semi-pros is that it is very easy to do basic stuff and end up with a working tool. My experience with Access was extremely negative, so I wouldn't compare it at all with FMP.
In the end it doesn't really matter what it was written in; if the software does what you want and is stable, buy it. If it doesn't, don't. It is very easy to get data in and out of FMP, so the proprietary nature of the db format doesn't really enter into it.

Does anyone have database, programming language/framework suggestions for a GUI point of sale system?

Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expense on our customers due to multi-user license fees for database connections, etc.
After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things:
Create the system with a nice GUI front end (it is currently CHUI and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and gui...shudder).
Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them.
Use proper design patterns to make the product easy to add or change any part at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is directly compiled against the database. Small changes in the database requires lots of code recompiling.
Our new system will be Linux based, with a possibility of a client application providing functionality from one or more windows boxes.
So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise level DB), but that would limit us to windows (as far as I know anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySql based implementation.
To summarize we are looking to do the following:
Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
Deliver a solution that is easily maintainable and supportable.
A solution that has a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would cost a ton of cash to convert over to PCs).
Suggestions would be appreciated.
Thanks
[UPDATE]
I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into and include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).
[UPDATE]
For anyone who is interested, we ended up going with Microsoft Dynamics NAV, LS Retail (a plugin for the point of sale and various other things) and then did some (and are currently working on) customization work on top of that. This setup gave us the added benefit of having a fully integrated g/l system, which our current system lacked.
Java for the language (or Scala if you want to be "bleeding edge"; depending on how you plan to support it and what your developers are like, it might be better, but also worse)
H2 for database
Swing for GUI
Reason: Free, portable and pretty standard.
Update: Missed the part where the system should be a client-server setup. My assumption was that the database and client should run on the same machine.
I suggest you first research your constraints a bit more - you made a passing reference to a client using a particular type of terminal - this may limit your options, unless the client agrees to upgrade.
You need to do a lot more legwork on this. It's great to get opinions from web forums, but we can't possibly know your environment as well as you do.
My broad strokes advice would be to aim for technology that is widely used. This way, expertise on the platform is cheaper than "niche" technologies, and it will be easier to get help if you hit a brick wall. Of course, following this advice may not be possible if you have non-negotiable technology already in place at customers.
My second suggestion would be to complete a full project plan, with detailed specs and proper cost estimates, before going with the "rewrite from scratch" option. Right now, you're saying that it would be cheaper to rewrite the system than maintain it, and you don't really know how much it would cost to re-write.
I suggest you use browser for the UI.
Organize your application as a web application.
There are tons of options for the back-end. You can use Java + MySQL. A Java backend will save you from the Windows/Linux debate, as it will run on both platforms. You won't have any licensing cost for either Java or MySQL. (Edit: there are definitely a lot of other languages with run-times for both Linux and Windows, including PHP, Ruby, Python, etc.)
If you go this route, you may also want to consider Google Web Toolkit (GWT) for creating the browser based front-end in a modular fashion.
One word of caution though: browsers can be pesky when it comes to memory management. In our experience, this was the most significant challenge in doing a browser-based POS. You may want to check out Adobe Flex, which runs in the browser but might be more civil in its memory management.
What is CHUI? Character-UI, as in VT terminals? Or even 3270 style?
It sounds like you need a 3-tier system - the database backend, a middle-layer that runs the bulk of the back-end business processes, and a front-end layer for the CHUI / GUI / data-gateway.
All three layers can reside on one machine; or you can distribute the tiers out to various servers. The front-end layer would control the actual terminals, whether they are VT-terminals, or a web-browser, or a custom-written 'client' application.
Make sure you have considered the hardware needs here: are you going to have barcode scanners, cash drawers, POS debit/credit terminals, et cetera? If you are using a standard browser, it might be hard to reliably integrate those items. (At the very least, you're likely going to have to write special applets to handle them.)
Finally, consider the possibility of a thin-client technology on Windows. It greatly simplifies system management, since you only have to upgrade the software centrally. Thin-client PC's are cheap -- sub $200.
Golden Code Development (see www.goldencode.com) has a technology that does automated conversion of Progress 4GL (the schema and code... the entire application) to a Java application with a relational database backend (e.g. PostgreSQL). They currently support a very complete CHUI environment and they do refactor the code. For example, the conversion separates the UI, the data model and the business logic into separate Java classes. The entire result is a drop-in replacement that is compatible with the original (users don't need retraining, processes don't need to be modified, the data is migrated too). This is possible because they provide an application server and a set of runtime classes that provide that compatibility. The result of the automated conversion is not something that needs further editing before you can compile and run it. True terminal support is included so hardware terminals still work (it requires a small JNI library to access NCURSES from Java). All the rest of the code in the runtime is pure Java. No Progress Software Corp technology is used in the resulting system and it runs on Linux.
At least one converted system is already in production, running a 24 by 7 mission critical environment. It is a converted ERP system that their mid-sized pilot customer uses to run their entire business.
