Real World Experience of db4o and/or Eloquera Database

I am evaluating two object databases, db4o (http://www.db4o.com) and Eloquera Database (http://eloquera.com), for an upcoming project. I have to choose one. My basic requirements are scalability, multi-user support, and easy type evolution for RAD.
Please share your real world experience.
If you have both, can you compare these two? Which do you prefer?

For the last 2 years I've been using DB4O, and I'm now switching to Eloquera.
My reasons, in order:
I'm building a commercial product, and the royalty-based licensing on DB4O is WAY too high; DB4O said we could "talk about it", but I'm a very small development shop, and giving away a huge chunk of each sale I make just doesn't make any sense when there's a perfectly good alternative.
I'm using Db4oTool.exe to modify my assemblies in a post-build step, and it really slows down the build process. Eloquera doesn't need to modify my assemblies.
I found a bug in the DB4O code, and it took many months before the fix was integrated into their codebase. I have found bugs in Eloquera and they fixed them in a day or two.
DB4O is not yet on .NET 4 (although they finally have an early beta). DB4O is the ONLY thing holding me back from using VS2010 (and .NET 4). I tried migrating to VS2010, but VS2010 automatically converts all unit tests to .NET 4, so all of my persistence-related unit tests immediately failed.
DB4O is not really designed to be thread-safe.
DB4O has many API features that are obviously ported from Java.
Robert

Eloquera (www.eloquera.com) was originally designed and developed for use in the web environment, and it was built as a native .NET application in C#.
Eloquera wasn't ported from Java, as many other object databases were.
Eloquera natively supports, as part of its architecture:
Simultaneous user access
Security settings
A genuine client/server architecture, with a desktop mode also available
A maximum database size of 1 TB+; at large data scales Eloquera maintains fast query response. It has patent-pending technologies including a virtual file system, indexing, and an adaptive cache. Eloquera has state-of-the-art reflection written in MSIL that allows it to outperform many databases that use Microsoft's standard reflection.
An in-memory database for fast data processing
Since most users on the web come from the relational database world, it was natural for Eloquera to support SQL and LINQ
Entity Framework support is due next month
Unlike some databases, Eloquera does not blindly put objects in the database: if you change a field's type from int to long, it will not keep returning wrong query results because it still sees an int; it will notify the user to update the definition
Eloquera provides native indexing for properties and fields. Most databases do not provide property indexing.
I might argue with Carl about DB4O being the easiest database on the market, since Eloquera can do the same things from an API perspective.
Eloquera is younger than Versant and still has some enterprise features coming.
Last month Eloquera's R&D department started work on Eloquera Parallel Server to provide horizontal scaling that will arguably be an order of magnitude cheaper than Versant's VOD.
Some of the distinguishing points:
Eloquera is FREE for commercial use. You are not required to pay any royalties. All of the features above you get for FREE.
Eloquera has commercial support available.
Eloquera is designed for the modern world with a modern architecture; it did not have to be adapted to market needs over time, because these capabilities are a natural part of its architecture.

If you are interested in hearing user experiences with db4o, I suggest you also ask in our db4o user forums.
While db4o was originally developed for embedded use in applications with limited resources (and now runs very well on constrained platforms like Android, Compact Framework and Silverlight), I know that we do have many users who are happily using db4o for web applications.
Indeed there is some truth to the db4o-bashing post by leatrop: the db4o server core currently only allows one thread at a time to enter for storing and querying tasks in a particular database.
However there are a couple of ways to make db4o applications scale very well:
Since the setup cost for a db4o database is very low (one single API call), it is possible to work with multiple databases. You can use the db4o Replication System (dRS) to distribute objects between multiple databases. It is also possible to create backups of db4o databases while they are running and to replicate these backups to multiple machines. The approach of using multiple databases (for timeslices of data or for different use cases in your application) can be very nice for backup and debugging purposes. You don't need to copy the entire database if you want to test only some aspects of your live app.
If you still find that db4o does not scale well enough for your concurrent users or database sizes, you can later switch to our high-end object database, Versant VOD. It was built to run in the cloud and it has a proven track record of working for thousands of concurrent users with multi-terabyte databases. VOD for .NET also comes with a LINQ provider, so the interfaces of db4o and VOD are compatible.
My recommendation: Start with db4o. It is the easiest object database to get started with and to develop with. Just store any object with one line of code, without setting up schemas or mapping files. Use LINQ to query (or native queries, if you work with Java).
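As an illustration of the one-line store and LINQ querying from .NET, here is a minimal C# sketch based on the db4o embedded API; the Pilot class and file name are my own examples, and the LINQ provider ships in a separate db4o assembly.

using System;
using System.Linq;
using Db4objects.Db4o;
using Db4objects.Db4o.Linq;   // db4o's LINQ provider (separate assembly)

public class Pilot
{
    public string Name { get; set; }
    public int Points { get; set; }
}

public static class Db4oExample
{
    public static void Main()
    {
        // Opening (or creating) a database file is a single call - no schema, no mapping files.
        using (IObjectContainer db = Db4oEmbedded.OpenFile("pilots.db4o"))
        {
            // Storing an object is one line.
            db.Store(new Pilot { Name = "Michael", Points = 100 });

            // Querying with LINQ through the db4o LINQ provider.
            var fastPilots = from Pilot p in db
                             where p.Points > 50
                             select p;

            foreach (var pilot in fastPilots)
            {
                Console.WriteLine(pilot.Name);
            }
        }
    }
}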
db4o is open source and it's free (under the GPL).

I'm creating a 2nd-generation social media platform completely based on JavaFX and Db4o. We are able to do things with db4o that would be impossible with any other database. Semantic OWL ontologies, complex relationships between objects, and our user-definable canvas make Db4o an amazing fit for us. We have no worries about scaling either and have found several solutions. Carl is one of the most intelligent people in software. This fact is obvious when you learn about his product.
Mike Tallent
CEO
Objectwheel

Related

Suggest: Non RDBMS database for a noob

For a new application based on Erlang and Python, we are thinking of trying out a non-RDBMS database (just for the sake of it). Some of the databases I've researched are MongoDB, CouchDB, Cassandra, Redis, Riak, and Scalaris. Here is a list of simple requirements.
Ease of development - I need to make a quick proof-of-concept demo. So the database needs to have good adapters for Erlang and Python.
I'm working on a new application where we have lots of "connected" data. Somebody recommended Neo4j for graph-like data. Any ideas on that?
Scalable - We are looking at a distributed architecture, hence scalability is important.
For the moment performance (in any form) isn't exactly at the top of my list, and I don't think we'll be hitting the limitations of any of the above-mentioned databases anytime soon.
I'm just looking for a starting point for non-RDBMS database. Any recommendations?
We have used Mnesia in building an enterprise application. Mnesia performs at its best when its tables are fragmented, because it then avoids table size limits. Mnesia has performed well for the last year and is still running. We have around 15 million records per table on average and around 24 tables in a given database schema.
I recommend the Mnesia database, especially the one that ships with Erlang R14B03 from the Erlang.org website. We have used CouchDB and Membase Server (http://www.couchbase.com) for some parts of the system, but Mnesia is the main data storage (primary storage). Backups have been automated very well, and the system scales well against an increasing size of data even with tables running under many checkpoints. Its distribution, auto-replication and complex data model enabled us to build the application very quickly without worrying about replication, scalability and fail-over / take-over of systems.
Mnesia scales well, and its schema can be configured and changed while the database is running. Tables can be moved, copied, altered, etc. while the system is live. Generally, it has all the features of powerful systems built on top of Erlang/OTP. When you Google the Mnesia DBMS, you will find a number of books and papers that will tell you more.
Most importantly, our application is web based, powered by the Yaws web server (yaws.hyber.org), and we are impressed with Mnesia's performance. Its record lookup speeds are very good, and the system feels light yet renders a lot of data. Do give Mnesia a try and you will not regret it.
EDIT: To quickly use it in your application, look at the answer given here
Ease of development - I need to make a quick proof-of-concept demo. So the database needs to have good adapters for Erlang and Python.
Riak is written in Erlang => speaks Erlang natively
I'm working on a new application where we have lots of "connected" data. Somebody recommended Neo4j for graph-like data. Any ideas on that?
Neo4j is great for "connected" data. It has Python bindings, and there are some Erlang adapters (see "How to Use Neo4j From Erlang"). One thing to note: Neo4j is not as easy to scale out, at least for free. But it is fully transactional (even JTA), it persists things to disk, and it is baked into Spring Data.
Scalable - We are looking at a distributed architecture, hence scalability is important.
For the moment performance (in any form) isn't exactly at the top of my list, and I don't think we'll be hitting the limitations of any of the above-mentioned databases anytime soon.
I believe given your input, Riak would be the best choice for you:
Written in Erlang
Naturally Distributed
Very easy to develop for/with
Lots of features (secondary indices, virtual nodes, fully modular, pluggable persistence [LevelDB, Bitcask, InnoDB, flat file, etc.], extremely reliable, built-in full-text search, etc.)
Has an extremely passionate and helpful community with Basho backing it up

Implementing Transparent Persistence

Transparent persistence allows you to use regular objects instead of a database. The objects are automatically read from and written to disk. Examples of such systems are Gemstone and Rucksack (for Common Lisp).
Simplified version of what they do: if you access foo.bar and bar is not in memory, it gets loaded from disk. If you do foo.bar = baz then the foo object gets updated on disk. Most systems also have some form of transactions, and they may have support for sharing objects across programs and even across a network.
My question is: what are the different techniques for implementing these kinds of systems, and what are the trade-offs between these implementation approaches?
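To make that mechanism concrete, here is a rough C# sketch of the kind of interception involved; it is not taken from Gemstone, ObjectStore or Rucksack (those do this transparently, typically by rewriting field access or trapping page faults), and all names are made up for illustration.

using System;

// Hypothetical illustration: a lazily loaded, write-through reference.
public class Persistent<T> where T : class
{
    private readonly long _oid;      // object id on disk
    private readonly IStore _store;  // some storage backend
    private T _value;                // null until first access

    public Persistent(long oid, IStore store) { _oid = oid; _store = store; }

    public T Value
    {
        get
        {
            if (_value == null)                 // "bar is not in memory"
                _value = _store.Load<T>(_oid);  // -> load it from disk
            return _value;
        }
        set
        {
            _value = value;
            _store.Save(_oid, value);           // "foo.bar = baz" -> write through to disk
        }
    }
}

public interface IStore
{
    T Load<T>(long oid) where T : class;
    void Save<T>(long oid, T value) where T : class;
}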
I've used such a system (ObjectStore) on several projects, most notably a commercial credit risk system, and a system for optimising flow in oil pipeline networks. The question about implementation is too complex to discuss here, but as for trade-offs between such systems and relational databases:
Object DB advantages:
very very fast - for some queries they can be 100 to 1000 times faster than a relational database. In fact the risk system I designed could not (according to Sybase themselves) be implemented on a SQL database.
very easy to integrate with C++ code - no impedance matching layers needed.
Object DB disadvantages:
limited number of GUI libraries available for bread-and-butter CRUD apps
Relational advantages:
ad hoc queries much, much easier and faster than for Object DBs.
about a million tools to manage the database
very easy to create GUI apps
lots of people have RDBMS experience
But of course, as with all tools, you don't have to choose just one. The risk app I wrote imported data from a Sybase database, and the pipeline app imported from Oracle.

Database independence

We are in the early stages of design of a big business application that will have multiple modules. One of the requirements is that the application should be database independent, it should support SQL Server, Oracle, MySQL and DB2.
From what I have read on the web, database independence is a very bad idea: it would result in hard-to-maintain code, a database design limited to the features common to all supported DBMSs, bad performance and bad scalability. My personal gut feeling is that the complexity of this feature, more than any other feature, could increase the development cost and time exponentially. The code will be dreadful.
But I cannot persuade anybody to ignore this feature. The problem is that most of the material on this issue is anecdotal, lacking numbers to support the case. If anyone can share any numbers-supported data on the issue, I would appreciate it.
One of the possible design options is to use Entity Framework for the database tier, with a provider for each DBMS. My personal feeling is that writing SQL statements manually without any ORM would be a "must", since you have no control over the SQL generated by Entity Framework, and a database-independent scenario will need some SQL tweaking based on the DBMS the code is targeting. I also think that third-party Entity Framework providers will have a significant number of bugs that only appear in the complex scenarios this application will involve. I would like to hear from anyone who has experience using Entity Framework in a database-independent scenario.
Also, one of the possibilities discussed by the team is to support one DBMS (SQL Server, for example) in the first iteration and then add support for other DBMSs in successive iterations. I think that since we will need a database design with only the least common features, this development strategy is bad: we need to know all the features of all the databases before we start writing code for the first DBMS. I need to hear from you about this possibility, too.
Have you looked at Comparison of different SQL implementations?
This is an interesting comparison, I believe it is reasonably current.
Designing a good relational data model for your application should be database agnostic, for the simple reason that all RDBMSs are designed to support the features of relational data models.
On the other hand, implementation of the model is normally influenced by the personal preferences of the people specifying the implementation. Everybody has their own slant on doing things, for instance you mention autoincremented identity in a comment above. These personal preferences for implementation are the hurdles that can limit portability.
Reading between the lines, the requirement for database independence has been handed down from above, with the instruction to make it so. It also seems likely that the application is intended for sale rather than in-house use. In that context, the database preference of potential clients is unknown at this stage.
Given such requirements, then the practical questions include:
who will champion each specific database for design and development? This is important, inasmuch as the personal preferences for implementation of each of these people need to be reconciled to achieve a database-neutral solution. If a specific database has no champion, chances are that implementing the application on this database will be poorly done, if at all.
who has the depth of database experience to act as moderator for the champions? This person will have to make some hard decisions at times, but horse-trading is part of the fun.
will the programming team be productive without all of their personal favourite features? Stored procedures, triggers, etc. are the least portable features between RDBMSs.
The specification of the application itself will also need to include a clear distinction between database-agnostic and database specific design elements/chapters/modules/whatever. Amongst other things, this allows implementation with one DBMS first, with a defined effort required to implement for each subsequent DBMS.
Database-agnostic parts should include all of the DML, or ORM if you use one.
Database-specific parts should be more-or-less limited to installation and drivers.
Believe it or not, vanilla-flavoured SQL is still a very powerful programming language, and personally I find it unlikely that you could not create a performant application without database-specific features, if you wish to.
In summary, designing database-agnostic applications is an extension of a simple precept:
Encapsulate what varies
I work with Hibernate, which gives me the benefits of an ORM plus database independence. Database-specific features are out of the question, and this usually improves my design. Everything (the domain model, business logic and data access methods) is testable, so development is not painful.
Hello, Muhammed!
Database independence is neither "good" nor "bad". It is a design decision; it is a trade-off.
Let's talk about the choices:
It would result in hard-to-maintain code
This is the choice of your programmers. If you make your code database-independent, then you should use a layer between your code and the database. The best kind of layer is one that someone else has written.
...Database design with the least common features in all supported DBMSs
This is, by definition, true. Luckily, the common features in all supported databases are fairly broad; they should all implement the SQL-99 standard.
...bad performance and bad scalability
This should not be true. The layer should add minimal cost to the database.
...the complexity of this feature, more than any other feature, could increase the development cost and time exponentially. The code will be dreadful.
Again, I recommend that you use a layer between your code and the database.
You didn't specify which language or platform you're writing for. Luckily, many languages have already abstracted out databases:
Java has JDBC drivers
Python has the Python Database API
.NET has ADO.NET
Good luck.
Database independence is an overrated application feature. In reality, it is very rare for a large business application to be moved onto a new database platform after it's built and deployed. You can also miss out on DBMS-specific features and optimisations.
That said, if you really want to include database independence, you might be best to write all your database access code against interfaces or abstract classes, like those used in the .NET System.Data.Common namespace (DbConnection, DbCommand, etc.) or use an O/RM library that supports multiple databases like NHibernate.
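For illustration, here is a small sketch of that approach using the provider-factory pattern from System.Data.Common; the provider names, connection string and SQL below are placeholders, not part of the original answer.

using System;
using System.Data.Common;

// Minimal sketch of database-independent data access: the calling code only
// touches the abstract classes in System.Data.Common, and the concrete
// provider is chosen by name at runtime (e.g. from a config file).
public static class CustomerQueries
{
    public static int CountCustomers(string providerName, string connectionString)
    {
        // providerName might be "System.Data.SqlClient", "MySql.Data.MySqlClient",
        // "Oracle.ManagedDataAccess.Client", etc. - these values are illustrative.
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);

        using (DbConnection connection = factory.CreateConnection())
        {
            connection.ConnectionString = connectionString;
            connection.Open();

            using (DbCommand command = connection.CreateCommand())
            {
                // Stick to the SQL subset all target DBMSs share.
                command.CommandText = "SELECT COUNT(*) FROM Customers";
                return Convert.ToInt32(command.ExecuteScalar());
            }
        }
    }
}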

What is the production ready NonSQL database?

With the rise of NoSQL database usage on high-traffic websites, I'm interested in using one for my project. I've heard several names like Voldemort, MongoDB and CouchDB. But which of these NoSQL databases are production-ready? I've looked at the download pages and it seems that none of them is production-ready, because none has reached version 1.0 yet. Are there any names other than these three that are recommendable for use in production?
What do you mean by production ready? As far as I know, all of them are being used on live systems.
You should make your choice based on how the features they provide fit your needs.
You can also add Tokyo Cabinet to the list as well as the mnesia database provided by the Erlang VM.
I think you need to start out from your project requirements to see what kind of database you really need. There are many non-relational DBMSs out there, and they differ a lot in what kinds of problems they are good at solving. I think the article Should you go Beyond Relational Databases? by Martin Kleppmann is a good starting point for finding out what you need. There are also a lot of Stack Overflow threads on similar topics; these are my favorites:
The Next-gen Databases
Non-Relational Database Design
When shouldn't you use a relational database?
Good reasons NOT to use a relational database?
When you have narrowed down what you actually need, you can take a deeper look into the alternatives to see which DBMSs are production-ready for your use case. Production readiness isn't a yes/no thing: people may successfully deploy a solution that, for example, lacks tool support; in another project this could be a no-go.
As for version numbers, different projects have a different take on this, so you can't just compare the version numbers. I'm involved in the graph database project Neo4j, and even though it has been in production use for 5+ years by now, we still haven't released a final version 1.0 yet.
I'm tempted to answer "use SIRA_PRISE".
It's definitely non-SQL.
And its current version is 1.2, meaning that someone like you must definitely assume it's "production-ready".
But perhaps I shouldn't be answering at all.
Here is a nice article comparing RDBMSs with 'next gen' databases and listing some providers:
Is the Relational Database Doomed?
http://readwrite.com/2009/02/12/is-the-relational-database-doomed
I suggest you use ArangoDB.
ArangoDB is a multi-model mostly-memory database with a flexible data model for documents and graphs. It is designed as a “general purpose database”, offering all the features you typically need for modern web applications.
ArangoDB is supposed to grow with the application: the project may start as a simple single-server prototype, nothing you couldn't do with a relational database equally well. After some time, some geo-location features are needed and a shopping cart requires transactions. ArangoDB's graph data model is useful for the recommendation system. The smartphone app needs a lean API to the back end; this is where Foxx, ArangoDB's integrated JavaScript application framework, comes into play.
Another unique feature is ArangoDB's query language, AQL. It makes querying powerful and convenient. AQL enables you to describe complex filter conditions and joins in a readable format, much in the same way as SQL.
You can model your data in several ways:
in key/value pairs
as collections of documents
as graphs with nodes, edges, and properties for both
You can access data in ArangoDB:
using the general HTTP REST API via curl/wget, or your browser (a small sketch follows this list)
via the ArangoDB shell (“arangosh”)
using a programming language specific client library
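As a rough illustration of the HTTP REST route mentioned above, here is a hedged C# sketch that posts an AQL query to the /_api/cursor endpoint; the server address, database, collection name and query are assumptions on my part, and authentication is omitted.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch: run an AQL query over ArangoDB's HTTP API.
// Assumes a local server on the default port 8529, the _system database,
// a collection named "users", and no authentication (a real server will
// usually require credentials).
public static class ArangoRestExample
{
    public static async Task Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:8529") })
        {
            var body = "{ \"query\": \"FOR u IN users FILTER u.age > 21 RETURN u.name\" }";
            var content = new StringContent(body, Encoding.UTF8, "application/json");

            // POST /_api/cursor executes the AQL query and returns the first batch of results as JSON.
            HttpResponseMessage response = await client.PostAsync("/_db/_system/_api/cursor", content);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}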
Server requirements for ArangoDB:
ArangoDB runs on Linux, OS X and Microsoft Windows.
It runs on 32bit and 64bit systems, though using a 32bit system will limit you to using only approximately 2 to 3 GB of data with ArangoDB.

What Are the Pros and Cons of Filemaker? [closed]

A potential customer has asked me to look at some promotional flyers for a couple of apps which fall into the contact management / scheduler category. Both use Filemaker as their backend. It looks like these two apps are sold as web apps. At any rate I had not heard of Filemaker in about ten years, so it was surprising to see it pop up twice in the same sitting. I think it started out as a Mac platform db system.
I am more partial to SQL Server, MySQL, etc., but before making any comments on FileMaker, I'd like to know some of the pros and cons of the system. It must be more than Access for Macs, but I have never run across it as a player in the client/server or web app arena.
Many thanks
Mike Thomas
Calling FileMaker Pro "Access for the Mac" is kind of like calling Mac OS X "Windows for the Mac". They're both in the same category of software: integrated programming environments. It's like having MySQL, PHP, HTML and your editor put together in a GUI. Comparing the two, they both have pros and cons. Here are the pros and cons of using FileMaker Pro vs. PHP/MySQL/HTML, in my experience.
Pros:
Easy to get started
Easy to deploy locally, turn on sharing and connect from another client
Cross-platform (Mac OS X, Windows, iOS)
There are many plugins available to extend functionality
Includes starter solutions
Anyone with access can edit the program
For the most part, drag and drop programming
Changing field/database/script names after the fact is free
Has some neat built-in tricks like graphs, tab controls and web viewers
Built-in support for importing and exporting Excel, CSV and tab-formatted files
Cons:
Inflexible: it does what it does well, but if you need more, you're out of luck for the most part
Expensive compared to the free alternative: it costs about $100 per year for a local user and $150 per developer; if you are using it for a website you need specialized hosting, which tends to cost more. In addition, the server part of the software is about $300-$800 a year
The plugins required to extend functionality can be expensive as well
Pretty much only drag-and-drop programming: you can only use predefined script steps, and relationships are made by building a graph
Source control is a problem
Lack of scalability
Unable to copy and paste/import or export some items from solutions
Requires the mouse to access functionality
Layout design is fairly static and dated (this is improving with FileMaker 12 and above)
In general I would say that if you're developing exclusively for the web, or for a large organization, FileMaker Pro probably isn't the best fit. It's difficult to have multiple people developing on the same solution. On the other hand, for a smaller organization in need of a customizable in-house database it could be a great boon. You can build rather complicated applications very quickly with it if you're willing to deal with its deficiencies.
Pros:
It's cheap
Cons:
It's cheap(ly made)
It's non-standard (easy to find MySQL/Oracle/MSSQL/Access experts, but nobody knows FileMaker)
Using subpar and/or nonstandard technologies only creates technology debt. I've never found a respectable dev who actually enjoyed using (or wanted to use) this niche product.
In my opinion this product exists because it is Access for Macs, and it gained enough of a user base and existing applications that enough people bought each upgrade to keep it in business. There are many products on the market that still exist because their users are locked in, not because they're a good choice.
I'll admit to bias on this subject -- I work with one of the larger FileMaker development shops out there, and have written the odd book on the subject. We actually employ many respectable developers who love using FMP. I'll try to keep it brief. :-)
FileMaker Pro is a rapid app development tool. It's primarily client-server, though it has some very respectable web publishing capabilities which work well for many applications. It is not SQL-based, but does have ODBC and JDBC interfaces, as well as an XML/HTTP interface.
As far as lock-in, FileMaker Inc has grown sales steadily, with very significant growth in new users who are attracted to the platform's solidity and ease of use.
I think Matt Haughton nailed it -- for the right applications, FMP is simply the best choice going. That said, your customer is looking at apps written in FMP Pro, and you need to evaluate those apps on their own merit. They may be good instances of FMP development, or they may not.
To know more about FMP's fitness for the task, we'd need to hear more about the proposed application and user base. Are these indeed web apps, or client-server? How many users will be using it? Do they work at one or two sites, or are they spread across the Internet?
Happy to elaborate further if there's more interest.
FileMaker is designed to integrate very simply with other databases and client applications. If you are looking at building a complicated distributed system, look elsewhere.
FileMaker is NOT good to use as a front-end to another datasource due to the design goals of the External SQL Data Sources (ESS) feature set, and it is NOT good to use as a back-end to anything other that the FM client due to slow and buggy ODBC drivers. The nature of FileMaker's architecture means it doesn't scale very well with complicated solutions regardless of how well it can integrate with other systems.
Here's a developer's perspective on some limitations I've found when teaming FileMaker with other back-ends and ODBC clients:
The ODBC driver is limited, slow, and leaks memory on the client side. The xdbc_listener.exe has similar memory-leaking issues on the server side and will eventually crash when it uses a certain amount of RAM. We have a scheduled script to restart it each night.
FileMaker needs to load all related databases into memory before it can connect to a database. If it's a complicated database, opening and closing a connection can be quite slow (1-2 seconds) depending on how it is structured, and more so if the database references tables in other FM databases, because they need to be loaded as well. I get around this by creating persistent connections that stay open for the lifetime of the application. Although we try to minimize the number of open connections, we have yet to see a performance hit on the server.
The ODBC driver interprets queries in strange ways. For example, I ran a query on 76k rows to UPDATE table_1 SET field_1 = 1 and it took 5 minutes to perform the query, because I think it split the one query into 46k update queries, one for each row. I know this because I watched it update the rows one by one in the FM client. So I don't trust the ODBC driver at all.
Here's another example of 3 different queries, and how long they took searching on two date fields:
SELECT id FROM table WHERE datefield1 = {d '2014-03-26'}
0.5 seconds
SELECT id FROM table WHERE datefield2 = {d '2014-03-26'}
0.5 seconds
SELECT id FROM table WHERE datefield1 = {d '2014-03-26'} OR datefield2 = {d '2014-03-26'}
1 minute 13 seconds!
We had problems with how FileMaker cached data from an SQL Express database. We tried to run the command to clear the cache, but it didn't always work (spent a lot of time investigating this).
FileMaker uses pessimistic locking of records; before editing (from the client or as part of an ODBC transaction) FileMaker attempts to lock the row first.
The FileMaker Server service "prefers" being stopped using the Admin Console (though the Admin Console may sometimes be unable to stop it either). If the FileMaker Server service stops any other way (including power loss, via the management console, or even a normal system shutdown) then some of your databases may become corrupt. The same applies if a client crashes during an operation, or if the network connection is lost suddenly. The solution for a power loss is to write a batch script to try to automate the shutdown, and then buy a UPS and program it to execute your script before the juice runs out. And hope it works. Otherwise back up hourly using the built-in scheduler. Aside: SQL Server doesn't have this problem because it can roll back uncommitted transactions.
Performing backups with the built-in scheduler actually suspends operations to the database during the backup process; i.e., if it's a large database, it might take a minute to back up, and users will notice the pause because they won't be able to edit/insert, etc.
If you're using the FileMaker PHP API, take note that you can't use AND and OR together in the same request.
Running an intensive query using the ODBC driver might be fast on its own, but run the same query simultaneously (as in a multi-user environment) and it will slow down by about 300% exponentially. You will run into speed issues if you’re expecting a large volume of intensive queries to hit the database at the same time.
We have found that when the FileMaker ODBC driver says it has finished an update/insert operation, it still does not guarantee the transaction is committed; it appears that FileMaker will continue to hold the changes in the server cache until the auto-enter calculated fields are evaluated/indexed and then it saves to disc, meaning there may be more of a delay until the record is actually committed. So really the ODBC write operations are not always immediate writes, but rather eventual writes. This delay will be especially evident in complicated tables with many calculated fields and triggers.
Calculated fields may slow down execution and reading via the ODBC driver, depending on what is being evaluated. Try to read stored values whenever possible.
Using BLOB containers: not recommended. Storing documents such as PDFs in a container field will inflate your database file size, take longer to back up, and complicate the retrieval and editing of those files via ODBC. It's much easier to store files on a network share and write to the file on disk.
If you must use FM as a front-end solution to another database, make sure to carefully read FileMaker's Introduction to External SQL Sources.
Also refer to the appropriate version of the FileMaker ODBC Guide found on their website.
Just a few comments on the subject
FileMaker is certainly cheaper than some enterprise solutions in licensing costs. However, the real cost benefit is in development time. The development life cycle is typically orders of magnitude shorter than on other enterprise platforms (whatever the licensing costs of those platforms). By this I mean days instead of weeks, or weeks rather than months, to develop some feature.
There is a strong argument that FileMaker is Access for the Mac. While this was a valid argument a few years ago, FileMaker has come into its own in recent years. It's worth noting that FileMaker is cross-platform and used extensively on Windows as well as Mac. That being said, there are still huge similarities and differences between FileMaker and Access; the truth is, none of them has any bearing on your situation.
While FileMaker is non-standard it does support live connection to MySQL, MS SQL Server and Oracle.
Also, there are numerous FileMaker developers; not as many as for more standard platforms, but they are definitely around. If you let me know where you are, I can put you in touch with a selection of developers in your area.
The important point I want to make is that in the correct context FileMaker is the best thing in the world at what it does; if you try to do something that it's not meant to do, you'll get stuck. However, it can support offices in 4 locations; it can be done, and it is being done.
Before you go and rewrite your system in some other platform you should get in touch with a FileMaker expert and see what they have to say about what you've currently got, writing more details on this site and having non-experts answer positively or negatively won't help you. In the end it has to be a business choice of costs vs. benefits.
No need to list any more "Cons", but here is a significant "Pro": FileMaker Go. Once you have your database set up, download an iPad/iPhone app (free for FM12) and run it from a mobile device. The database can be stored locally on the iPad/iPhone or synced back to a host PC.
I'm sure this mobile solution is possible elsewhere - but the fundamental point is that an entry-level user (and I mean NO previous database experience) can create an impressive solution within a few weeks.
Personal experience: the main database runs FM 11 hosted on a PC under my desk; 4 researchers scattered across the city collect data on iPads, all syncing back to my PC. The previous solution was using paper and entering the data by hand.
FileMaker is an interesting app :) It started as an end-user tool, and it is still one of very few database apps that a non-programmer can actually use. But somehow FileMaker developers managed to make it very scalable. There's no other platform where one can start with a useful tool and end up with a client-server app for the whole company. In the old days they used to have a splash screen that captured this very idea (I only found an imperfect version):
I.e. something as simple as a file cabinet that can grow quite big.
All FileMaker pros and cons come from its origin. As an end-user tool it's very much unlike other DBMS apps. No SQL. No real programming: scripts are basically macros that repeat user actions in a slightly more general way, with variables and some logic. Lots of limitations; e.g. a list view cannot have a sidebar, a dynamic value list is always sorted alphabetically, and to open a Save As dialog and read back the file name you'll need a plug-in; and so on. For a programmer this can be very frustrating, because most of his assumptions will be wrong. And existing apps written by non-programmers are not exactly paragons of clarity and solid design.
But if you manage to overcome the obstacles, you'll find a rather good RAD tool for client-server, single-user, web, and mobile apps that stays rather usable over a WAN, with such niceties as a runtime and kiosk mode.
Having said that, I'm not quite sure about generic contact management and scheduling apps in FileMaker. If this is what they are, then they should be unlocked, so the customer can make changes; or they have to be niche apps that do for the customer what nothing else does.
FileMaker is enormously powerful and versatile, with excellent multi-user support. You can create wonderful solutions in FileMaker with document management, a web interface, an iPhone interface, automated publishing support, scheduled scripts, PDF/Excel/HTML reports, XML support, caller ID record lookup, and integration of web data (UPS and FedEx linked to an order record, for example). It is extensible with plugins. It's like being in the Home Depot of data. Don't try to build Amazon; other than that, what can't you build with it, and with faster app development than almost anywhere else?
It has been more than a year now since I came across FM and started using it to develop solutions for various clients. The following is my FM experience:
the learning curve is much shorter than with hard-coded, industry-standard technology;
it fits well with industry-standard platforms because of its ODBC and JDBC connectivity; your data is not locked into FM, and data in other formats can get into FM;
it fits well for both front-end and back-end solutions;
FM can match enterprise platforms given the right database design and deployment, i.e. workgroup- or department-oriented solutions. This keeps data with its workgroup owner and makes it available to other workgroups or departments;
FM fits well with rapid application development that employs prototyping;
FM has many more capabilities for you to discover...
I suggest you try it yourself and I'm sure you'll love the stuff FM can offer!
Happy computing...
A little research has made me think that FileMaker is indeed Access for the Mac, but perhaps a little more robust. I worked with Access for years, never really liked it, and am glad to be away from it (I always held a grudge against MSFT for killing FoxPro, which I did like).
It is hard for me to imagine it as a good solution for a web based app used by offices in four locations around the country, plus many others logging on from home, etc.
Using it does not make much sense when MySQL, SQL Server, etc are available for the data storage and ASP.NET, PHP, Ruby etc are there for the programming.
Mike Thomas
While the comparison to "Access for the Mac" is inevitable, there are some important distinctions that have to be made.
FileMaker databases can be shared with more than one person provided one of two things happens. One, a person on your network opens the DB and shares it from their computer, acting as the host. Two, you buy and install FileMaker Server, which hosts the DBs.
Also it's been my experience that while FileMaker developers LOVE FM, they're having to learn other technologies because more and more government agencies (my primary employer the past 10 years) are moving off of FM and into SQL Server, Oracle and to some extent Access and open source. FileMaker skills are becoming less and less in demand in the public sector, so getting support for these applications is harder and consequently, more expensive.
That being said, we have a FM server and FM 5.5 clients running an application that has been rock solid for the past 5 years.
I've been using FM for more than a year now. I had been building and providing solutions for SMBs using standard SQL for several years. I love the SQL stuff, but just a year ago I ran across FM Pro 9 and gave it a try. Amazingly, I got all I wanted in just a short time. In my experience as a developer, FM Pro impressed me with the way it does things.
True enough, FM is not an industry database standard, but a good number of its features can compensate for what "standard" is required for. FM Pro has live connectivity to MySQL, MS SQL Server and Oracle. For me, it doesn't make sense to talk about standards if you can move your data from FM to other platforms and vice versa.
Well, this note alone may not be all that convincing. It's better to try it for yourself, especially now that FM has its new version 10. Believe me, you'll love it.
Happy computing.
Two points seem to dominate this discussion and need consideration:
Non-Standard and what Government Agencies are doing.
Let's consider the small business owner or the single user, both of whom are creating databases to meet their own needs.
Now it doesn't matter what the government is doing: this is your database for your employees. Do what you want (as long as it's legal, of course).
Non-standard? Well, often this is the best idea, since what you want to do works for you. Name your fields and tables as you like, and later on rename them as you prefer. Don't try this with dbf or SQL... Anyone remember those 'standard' file names, bks1999.dbf and bks2000.dbf? Keep in mind that 'standards' exist because someone else wrote them before you arrived, not because they are the best possible idea.
And yes, there are a lot of 'bad' FileMaker solutions, but they are working and supporting hundreds of thousands of people. But try to improve one of these bad solutions and compare that effort to improving a similarly bad dbf solution. A renamed field propagates effortlessly through thousands of scripts, including scripts in related FileMaker files. In a dbf solution it can become a nightmare, as each instance has to be manually retyped.
One real test would be to compare how easily FileMaker can work with SQL data, etc. compared to other applications. That might be interesting. I've never done that, but I bet I could create a working file that works with such data in very little time.
I have always said that every developer should use and be familiar with all of the tools.
25 years with Filemaker Pro, 3 years with FoxPro, 2 with 4D, etc.
Lots of comments mention FileMaker being non-standard. But what is "standard"? By "standard", many people mean that a database supports Structured Query Language (SQL) (ISO Standard 9075), and FileMaker has supported and continues to support SQL. How each database engine supports SQL is proprietary to that database. It might be open source, such as MySQL, but SQL is a standard to support, not the underlying language of how it is accomplished.
When most people talk about databases, they are only talking about the back-end tables and schema. The front-end user interface is frequently something else. And most of them now render those results as HTML pages via open standards like PHP. Again, FileMaker fully supports PHP calls and Apache or IIS (depending on which OS platform you are on).
So I would disagree with people saying FileMaker is non-standard.
What is unique about FileMaker is its tight integration between the schema and the User Interface. This is similar to Apple's tight integration between hardware and the Operating system, which has some nice benefits. Interestingly, FileMaker is owned by Apple, but I guess that is another topic.
Generally, FileMaker's User Interface is considerably easier to use than most open standards and most people stick to FileMaker's client User Interface instead of web interfaces. There are still a number of things supported only in FileMaker User Interface that can't be duplicated in a web browser.
FileMaker really makes rapid application development much easier with its close integration of schema and user interface. This makes development cost a whole lot less in most cases.
FileMaker's database services can be spread among up to 3 machines, giving it primitive load-balancing abilities for web services. While FileMaker easily supports hundreds of users, if you go into thousands of simultaneous users, many SQL-only databases (e.g. Oracle, MS SQL Server, MySQL, Postgres) are designed to better spread out the load across more machines. Basically, if you have a high rate of simultaneous transactions, FileMaker is not your solution. For example, a company with many point-of-sale terminals from all over the country hitting it at the same time.
While FileMaker supports SQL and PHP, using it only that way is a waste of the money spent on the license for the FileMaker User Interface. It would not be a cost effective solution to develop a web front end and pay the full FileMaker license cost for only a backend. So, FileMaker's support of PHP and SQL is best combined with companies that have an in-house solution for staff, but also want to integrate that with their web development team for outside customers.
One last note is that FileMaker's tight integration of schema and user interface makes security much easier. Obviously you have to set up the groups and users, and I usually integrate FileMaker with Active Directory (or Open Directory). But when you use the FileMaker client and server connections, turning on encryption security is a single checkbox on the server. FileMaker handles all of the certificates and uses an AES 256-bit cipher (at least since version 11, maybe before then too). Currently, the US Government considers that approved for up to and including the first level of Top Secret communications. In typical SQL systems, there is a lot of work to configure security on the database end as well as the user interface end of things, and it is much more work than a single checkbox.
FileMaker's target audience has been small to medium sized companies, usually with 5 to 200 users, and it is a well priced product for rapid application development of databases for companies of that size.
And I can't end this comment without commenting on how easy it is to create and deploy a mobile solution on iOS devices like iPads and iPhones. FileMaker Go is a free app for use on these mobile devices, and it fully supports the same user interface and security. In fact, I am aware of one company that uses FileMaker as a front-end interface for their Oracle database simply for access on iPhones. Expect a lot more in the mobile market in the future, and FileMaker is clearly targeting mobile users.
Just to add my 2¢ to the already given answers: Everything everyone has written in the voted answers is true about Filemaker. The product is robust enough to warrant both positive and negative opinions.
I'm not a pro enough to speak to your concerns but there are a number of large complex applications written in FMP that you may want to look at. Jungle Software is a good place to start.
The downside to FMP for me, as a user of some of those apps, is that they come with a stack of files. The runtime of an FMP application isn't packaged as a bundle, so it can look a bit complex with a large app. We did some tests a long time back because FMP had a reputation for being slow. At that time (12 years ago) FMP needed to index the db or it was slow, but once it was indexed it was as fast as anything else we tested. Its big upside for semi-pros is that it is very easy to do basic stuff and end up with a working tool. My experience with Access was extremely negative, so I wouldn't compare it at all with FMP.
In the end it doesn't really matter what it was written in; if the software does what you want and is stable, buy it. If it doesn't, don't. It is very easy to get data in and out of FMP, so the proprietary nature of the db format doesn't really enter into it.

Resources