Is an MS Access 2007 database more reliable than SQLite?

Not long ago I developed a Windows Forms application that uses SQLite. Now there is a need to have the database shared on a network. We don't want to use a client/server database (MS SQL Server or MySQL) because we want to keep installation of the application as simple as possible.
The challenge is SQLite's performance when the database file is shared over a network. Its file-locking system is not reliable on network file systems, so we can end up with data conflicts.
I am ruling out SQLite for this purpose and considering an MS Access database file (.accdb) instead. I am wondering whether it can handle concurrent transactions, say users updating a record at the same time. Is it much better than SQLite?
Maximum number of users should be between 5 and 10.

5-10 users should be no problem. You should split it into a front end and a back end if you are not using the legacy application as the front end.

Related

SQLite database remote access

I have a SQLite database on my local machine and my web services running on the same machine access it using SQLAlchemy like this:
engine = create_engine('sqlite:///{}'.format('mydatabase.db'), echo=True)
We are planning to host our web services on a separate machine from the one where the database is hosted. How can we make 'mydatabase.db' accessible to our web services remotely? Thanks.
From the SQLite "When To Use" docs:
Situations Where A Client/Server RDBMS May Work Better
Client/Server Applications
If there are many client programs sending SQL to the same database over a network, then use a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, file locking logic is buggy in many network filesystem implementations (on both Unix and Windows). If file locking does not work correctly, two or more clients might try to modify the same part of the same database at the same time, resulting in corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is to avoid using SQLite in situations where the same database will be accessed directly (without an intervening application server) and simultaneously from many computers over a network.
SQLite works well for embedded systems, or at least when you use it on the same computer as the application. IMHO you'll have to migrate to one of the larger SQL solutions such as PostgreSQL, MariaDB or MySQL. If you've generated all your queries through the ORM (SQLAlchemy), then there will be no problem migrating to another RDBMS. Even if you wrote raw SQL queries too, there should not be many problems, because all these RDBMSs use very similar dialects (unlike Microsoft's T-SQL). And since SQLite is "lite", it supports only a subset of what the other RDBMSs support, so there should not be a problem.
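For what it's worth, here is a minimal sketch of what that migration looks like with SQLAlchemy; only the connection URL changes, and the host, credentials and database name below are placeholders:

from sqlalchemy import create_engine, text

# Today: SQLite file on the same machine as the web services
engine = create_engine('sqlite:///mydatabase.db', echo=True)

# After moving to a client/server RDBMS, only the URL changes, e.g. PostgreSQL
# (host, user, password and database name here are placeholders):
# engine = create_engine(
#     'postgresql+psycopg2://app_user:app_password@db.example.com:5432/mydatabase',
#     echo=True,
# )

# Code issued through the engine/ORM stays the same either way
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())

If all of your queries go through the ORM, nothing else in the service code should need to change.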

Does sharing database structure make my application significantly more vulnerable?

I have a piece of software that comes in two versions. One version is a desktop application with its own database (MS SQL CE) that resides on the user's machine. The other is a client-server application where the database resides on a server. The database structure is nearly identical in both cases. The reason we have this setup is that we work in developing countries where the internet connection is unreliable and client-server or web-based applications are not always possible. Some uses of the software also don't require data sharing, and it's easier for the database to reside on the machine.
Someone with the desktop version can simply open up the SQL CE database and look at all of the tables and fields. Does this knowledge make my client-server application significantly more vulnerable to being hacked? If yes, what steps can I take to decrease the risk?
The database structure by itself will not make your web application vulnerable. However, if there is a SQL injection or other vulnerability somewhere in the client-server application, it will be much easier (it will take less time) for an attacker to work with your database.
In other words, your application is not less secure with the database structure exposed, especially if that setup benefits performance for some of your users.
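As a concrete mitigation, the single most effective step is to make sure every query that touches user input is parameterized rather than built by string concatenation. A minimal sketch, using Python's standard sqlite3 module as a stand-in for whichever driver the client-server version actually uses (the students table and its columns are made up):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO students (name) VALUES ('Alice')")

def find_student(conn, name):
    # Unsafe (do not do this): concatenating user input into the SQL text, e.g.
    #   "SELECT * FROM students WHERE name = '" + name + "'"
    # Safe: pass user input as a bound parameter; the driver never treats it as SQL
    return conn.execute("SELECT * FROM students WHERE name = ?", (name,)).fetchall()

print(find_student(conn, "Alice"))
print(find_student(conn, "Alice'; DROP TABLE students; --"))  # returns [], no injection

The same idea applies with any database driver; only the placeholder syntax varies (? for ODBC/SQLite, @name for SQL Server, %s for psycopg2).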
Knowledge of the database structure can make your application less secure.
You can develop a front end for your database so that its structure is not directly exposed to the users.

Database synchronization

Recently my clients have asked me if they can use their application remotely, disconnected from the local network and the company server.
One solution is to place the database in the cloud, but then a connection to the database, and therefore an internet connection, must always be available.
That is not always the case.
So my question is: is there any database sync system, or a synchronization library, so that I can work disconnected against a local database and, when I reconnect, synchronize the changes I have made and receive the changes others have made?
Update:
The application runs on Windows (7/XP) (for now)
It's written in Delphi 2007 (Win32)
All clients need to have read/write access
All clients have an internet connection, but it is not always on
Security is not critical, but the sync service should encrypt the communication
When on the company network, the system should sync with and use the server database, not the local one.
There are a host of issues to think about with such a solution. First, there are several possible approaches, such as:
Using database replication within a database, to mimic every update (like a "hot" backup)
Building an application to copy the database periodically (every night)
Using a third-party tool (which is what you are asking, I think)
With replication services, the connection does not have to be up all the time. Changes to the database are logged while the connection is unavailable and then applied when they can be sent.
However, there are lots of other issues when you leave a corporate network. What about security of the data and access rights? Do you have other options, such as making it easier to access the database from within the network? Do the users need only read access to the database, or read-write access? Would both versions need to be accessed at the same time? Would there be updates to both at the same time?
You may have other options that are more secure than just moving a database to the cloud.
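To make the "changes are logged and then applied when they can be sent" idea concrete, here is a toy sketch of a change-log (outbox) table, written in Python with SQLite purely for illustration; the table and column names are invented, and a real replication product also handles conflicts, deletes and schema changes for you:

import sqlite3, time

local = sqlite3.connect(':memory:')
local.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, updated_at REAL)")
# Every local write is also recorded in a change log while offline
local.execute("CREATE TABLE change_log (id INTEGER PRIMARY KEY AUTOINCREMENT, "
              "table_name TEXT, row_id INTEGER, op TEXT, payload TEXT, created_at REAL)")

def write_customer(conn, row_id, name):
    now = time.time()
    conn.execute("INSERT OR REPLACE INTO customers (id, name, updated_at) VALUES (?, ?, ?)",
                 (row_id, name, now))
    conn.execute("INSERT INTO change_log (table_name, row_id, op, payload, created_at) "
                 "VALUES ('customers', ?, 'upsert', ?, ?)", (row_id, name, now))

def push_changes(conn, send):
    # When the connection comes back, replay the log in order, then clear it
    for change in conn.execute("SELECT * FROM change_log ORDER BY id"):
        send(change)  # e.g. an encrypted call to the company server
    conn.execute("DELETE FROM change_log")

write_customer(local, 1, "ACME Ltd")
push_changes(local, send=print)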
I believe RemObjects DataAbstract allows an offline mode and synchronization by using what they call Briefcases. All your other requirements (security, encrypted connections, etc.) are also covered.
This is not a drop-in replacement, though, and may need an extensive rewrite/refactoring of your application. There are lots of upsides, though: business rules can/should be enforced on the server (real security), scriptable business rules, a multi-platform architecture, etc.
There are some products available in the Java world, such as SymmetricDS (LGPL license). Apart from actually being a working system, it documents how it achieves synchronization, and it connects to any database with JDBC support. There is a pro version, but the user guide (a downloadable PDF) gives you the database schema plus the rules for push/pull syncing. Useful if you want to build your own.
By the way, there is a data-replication tag that would help.
One possibility that is free is the Microsoft Sync Framework: http://msdn.microsoft.com/en-us/sync/bb736753.aspx
It may be possible for you to use it, but you would need to provide some more detail about your application and operating environment to be sure.
Is it possible to share a database like an .mdb and have it work fine? I tried, but sometimes the file where the database lives changes from DB to DB1. I use Delphi XE4 and Google Drive.
Thanks.

Is it possible to have an Access back-end database available for multiple users on the same network?

I am developing a Visual Basic .NET application to be used by the staff of a small training centre nearby. The front-end (UI, menus, etc.) will all be in VB .NET, and there will be a back-end database for storing all of the required data, such as student records and meeting information.
What I would like to know is if it's possible to use a Microsoft Access database for this purpose, and have it accessible by all the staff in the centre (on the same network) at the same time. For example, would I be able to put the database in a shared network folder, and have a copy of the VB application on each PC that would all be able to read/edit/add to the database?
Advice would be appreciated as to how I should proceed. (Note: I would really prefer a method of doing this with MS Access as opposed to suggestions to switch to SQL, as Access was the requested platform)
Thanks in advance.
Yes, it can be done, and from a programming standpoint it isn't (much) different than using SQL Server. I think the biggest considerations you have to think about are:
How many simultaneous users do you expect to have using the application?
How secure does the application need to be? Is Access security enough?
How big do I expect the database to become in the next 1 to 5 years?
I think those are your biggest considerations when using Access as a data store, and if your answers fall within the specs of Access's capabilities then go for it. You can always migrate to SQL Server at a later time if you run into the limits of Access.
You did not mention the version of Access that you are using, but a quick Google/Bing search should return the specs for every version available.
Yes, but it's probably not advisable. Despite the disclaimer in your post, you should try to convince the powers that be to look at SQL Server Express instead; it's free.
But if Access is the database, all you need to do is have the database reside in a shared directory with full read-write permissions for all the users. Hopefully when you say "staff of a small training centre", you mean it.
Install the VB.NET program on the client computers and set up the connection string with the path to the database.
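Purely as an illustration of that connection string, here is a sketch that opens a shared back-end through the Access ODBC driver (shown in Python with pyodbc for brevity; from VB.NET you would point the OLE DB/ACE or ODBC connection string at the same UNC path, which is a placeholder here, as is the Students table):

import pyodbc

# UNC path to the shared back-end database (placeholder path)
DB_PATH = r'\\fileserver\shared\training_centre.accdb'

conn_str = (
    r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};'
    r'DBQ=' + DB_PATH + ';'
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Parameterized queries work the same as with any ODBC driver
    cursor.execute("SELECT COUNT(*) FROM Students")
    print(cursor.fetchone()[0])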
Someone else with more recent Microsoft Access experience can probably give better hints on how to reduce the corruption factor. My own approach was to stay away from queries stored in Access: keep the Access database only for tables and do all of your querying with SQL statements in your client code. My corrupted databases dropped dramatically when I did that, but that was 10-15 years ago.
Back up the database religiously.
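One simple way to do that is a nightly file copy run by a scheduled task; a sketch, assuming placeholder paths and that nobody has the file open for writing while the copy runs:

import shutil, datetime, pathlib

# Placeholder paths; run nightly via Task Scheduler
SOURCE = pathlib.Path(r'\\fileserver\shared\training_centre.accdb')
BACKUP_DIR = pathlib.Path(r'\\fileserver\backups')

stamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
shutil.copy2(SOURCE, BACKUP_DIR / f'training_centre_{stamp}.accdb')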
Yes, just make sure you change the extension of your back-end Access database to your_database_name.be_accdb, and it will start logging once users start writing to it. But I recommend SQL Server.

Which is a better sharding strategy with respect to performance? Hashing a GUID or contacting a scale-out manager?

We're building a relatively high-profile site that is expected to have 100 million hits in its first days after launch. My predecessor had argued for a scale-up SQL strategy using a single server with 1 TB of RAM and 32 cores. We have been advised that this is not a feasible solution.
In response, I have shifted to a scale-out strategy with multiple SQL servers and horizontal partitioning. My question revolves around how I will direct the DAL to the appropriate database server. There will be many reads and writes for each user of the application. My first thought was to use a single scale-out server that stored the profile ID (GUID) of each user and would return the connection string for that user's shard. This seemed like a lot of overhead and created a single point of failure.
My second strategy was to route to the database by GUID so I could code this directly into the DAL. GUIDs aren't random, though, so I'm thinking I'd need to hash them to get a relatively even distribution across my database shards. Every user, including anonymous users, has a GUID, so this is really the only property I have available that I can use for sharding.
So the question is whether I'm going to kill performance with the hashing that will have to occur. I'm pretty confident that the hash will be less of a bottleneck than a database read, but I'd really like some feedback on this or any other thoughts the community would like to share about my strategy.
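For reference, the hash-and-modulo routing described above is only a few lines, and the hashing cost is negligible next to a single network round trip. A sketch (the shard connection strings and shard count are placeholders):

import hashlib
import uuid

# Placeholder connection strings, one per shard
SHARDS = [
    'Server=sqlshard0;Database=app;Integrated Security=SSPI;',
    'Server=sqlshard1;Database=app;Integrated Security=SSPI;',
    'Server=sqlshard2;Database=app;Integrated Security=SSPI;',
]

def shard_for(profile_guid: uuid.UUID) -> str:
    # Hash the GUID bytes so non-random GUIDs still spread evenly across shards
    digest = hashlib.md5(profile_guid.bytes).digest()
    index = int.from_bytes(digest[:8], 'big') % len(SHARDS)
    return SHARDS[index]

print(shard_for(uuid.uuid4()))

The downside of plain modulo is that changing the number of shards remaps most keys, which is part of what the answer below is getting at.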
Some specifics:
We're using SQL Server 2008 R2 Enterprise on the database servers. Each will have 64 GB of RAM and 8 cores. The databases will be on shared storage, and vMotion will be used if a server goes down. There will be a slew of web servers at launch (30-40?), but the exact number will be dictated by performance testing. The application is built on .NET 4.0 with Enterprise Library v5. Web server load balancing will be handled by a Cisco ACE. We have requested that each of the database servers be on a separate vSphere instance.
Thanks!
Does any user profile need to interact with another user profile? A typical example would be a Wall of some sort, e.g. where you see the status updates from all your friends. See Scale out SQL Server by using Reliable Messaging to understand why I'm asking this.
As for hashing, a good hashing scheme can accommodate a change in the distribution (e.g. a segment owned by machine A is split into two new segments now owned by A and B) without changes to the application. Hashing in the app layer does not solve this problem; having a dedicated 'scale-out manager' is better, as long as you design the scale-out manager to be highly available and limit the number of requests it has to answer (e.g. less than one per HTTP request on the front-end layer).
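To illustrate the point about accommodating redistribution, here is a toy consistent-hash ring (Python, entirely illustrative; a real scale-out manager also needs replica placement, health checks and a plan for migrating the rows that move):

import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: adding a node only remaps the keys that fall
    into the segment the new node takes over, not the whole key space."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted list of (position, node)
        for node in nodes:
            self.add_node(node, vnodes)

    def _pos(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str, vnodes=64):
        for i in range(vnodes):
            self._ring.append((self._pos(f'{node}#{i}'), node))
        self._ring.sort()

    def node_for(self, key: str) -> str:
        positions = [p for p, _ in self._ring]
        idx = bisect.bisect(positions, self._pos(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(['shardA', 'shardB'])
print(ring.node_for('some-user-guid'))
ring.add_node('shardC')  # splits existing segments; most keys stay where they were
print(ring.node_for('some-user-guid'))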
