Opening database connections in parallel

If a MySQL database connection is already open from one script (say, PHP), is it advisable for another script (say, Python) to access the same database table at the same time?
If not, what should be the alternative?

Database systems like MySQL are designed to accommodate this kind of multi-user access. They do this by employing locking mechanisms, most of which work in the background. The locking mechanisms prevent one user from trying to read a record while someone else is writing it, or two users from writing the same record at once.
See http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
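As a concrete sketch of what this looks like in practice (a minimal example, assuming the mysql-connector-python package; the users table and credentials are hypothetical placeholders), two scripts can simply open their own connections and let the storage engine's locking do the coordination:

```python
# Minimal sketch: two independent connections to the same MySQL table.
# The `users` table and the credentials are hypothetical placeholders.
import mysql.connector

def open_conn():
    return mysql.connector.connect(
        host="localhost", user="app", password="secret", database="mydb"
    )

# Connection 1 -- in real life this could just as well be a PHP script.
writer = open_conn()
cur_w = writer.cursor()
cur_w.execute("INSERT INTO users (name) VALUES (%s)", ("alice",))
writer.commit()  # InnoDB releases its row locks at commit

# Connection 2, opened while the first connection is still alive.
reader = open_conn()
cur_r = reader.cursor()
cur_r.execute("SELECT name FROM users")
print(cur_r.fetchall())  # sees the committed row; in-flight writes are isolated

reader.close()
writer.close()
```

The same applies regardless of language: a PHP script holding one connection does not prevent a Python script from opening another against the same table.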

Related

Could not update; currently locked by another user - MS Access 2000

I have a problem with an MS Access database file that is shared on a network. Sometimes when I try to update the data, it says the record is locked by 'PC 1'. I have already tried checking the same option as in the picture below, but it still doesn't solve the problem. Sometimes the file also locks by itself and cannot be opened.
[Screenshot of the option settings]
Note: Windows XP
I also have an MS Access db that is on a shared network. It is not uncommon for one user to be entering data while another is updating or deleting information within the tables. If I understand your question correctly, you would like to be able to update the information regardless of whether or not someone else's machine is in the database.
Solution:
Create a db shell that contains tables linked to the functional back-end database. An advantage of this technique is that, by having users interact with the shell, you can prevent the database from being locked by a single machine. In addition, it allows multiple users to access and edit the information within the database without running into record-locking issues.
Hope this helps.

Merge multiple Access databases into one big database

I have multiple ~50 MB Access 2000-2003 databases (MDB files) that only contain tables with data. These data-databases are located on a server in my enterprise that can take ~1-2 seconds to respond (and about 10 seconds to actually open a 50 MB MDB file manually while browsing in the file explorer). I have other databases that only contain forms. Most of those forms-databases (still MDB files) are copied from the server to the client with a batch file before execution (after some testing, the execution looks smoother that way). Most of those forms-databases use table links to fetch the data from the data-databases.
Now, my question is: is there any advantage/disadvantage to merging the data from all my ~50 MB databases into one big database (let's say 500 MB)? Will it be slower? It would help to clean up my code if I didn't have to connect to all those different databases, and I don't think 500 MB is a lot, but I don't pretend to be really experienced with Access, which is why I'm asking. If Access needs to read the whole MDB file to get the data from a specific table, then it would be slower. That wouldn't really be surprising from Microsoft, but I've been pleased with MS Access performance so far.
There will never be more than ~50 people connected to the database at the same time (most likely, this number won't in fact be more than 10, but I prefer being a little bit conservative here just to be sure).
The db engine does not read the entire MDB file to get information from a specific table. It must read information from the system tables (hidden tables whose names start with MSys) to determine where the data you need is stored. Furthermore, if you're using a query to retrieve information from the table, and the db engine can use an index to determine which rows satisfy the query's WHERE clause, it may read only those rows from the table.
However, you have issues with your network's performance. When those lead to dropped connections, you risk corrupting the MDB. That is why Access is not well suited for use in wide area networks or with wireless connections. And even on a wired LAN, you can suffer such problems when the network is flaky.
So while reducing the amount of data you pull across the network is a good thing, it is not the best remedy for Access on a flaky network. Instead you should migrate the data to a client-server db so it can be kept safe in spite of dropped connections.
You are walking on thin ice here.
Access will handle your scenario, but is not really meant to allow so many concurrent connections.
Merging everything into one big database (500 MB) is not a wise move.
Have you tried opening it from a network location?
My suggestion would be to use a SQL Server Express back end and merge all the tables into a single, real client-server database.
The changes required in the client MDB front end should not be very pervasive.
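To give a flavor of what such a migration might involve, here is a hedged Python sketch (the table name, file paths, and connection strings are all hypothetical; it assumes pyodbc plus the Microsoft Access and SQL Server ODBC drivers on Windows) that copies one table out of an MDB file into a SQL Server Express back end:

```python
# Hypothetical sketch: copy one table from an Access MDB file into a
# SQL Server Express back end via ODBC. Assumes pyodbc plus the
# Microsoft Access and SQL Server ODBC drivers; names/paths are made up.
import pyodbc

src = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=\\server\share\data1.mdb"
)
dst = pyodbc.connect(
    r"DRIVER={ODBC Driver 17 for SQL Server};"
    r"SERVER=.\SQLEXPRESS;DATABASE=Merged;Trusted_Connection=yes"
)

rows = src.cursor().execute("SELECT id, name, amount FROM Orders").fetchall()

cur = dst.cursor()
cur.fast_executemany = True  # send the inserts in batches, not row by row
cur.executemany(
    "INSERT INTO Orders (id, name, amount) VALUES (?, ?, ?)",
    [tuple(r) for r in rows],
)
dst.commit()

src.close()
dst.close()
```

Once the data lives in the server database, the Access front end can keep its table links, now pointing at ODBC links instead of MDB files.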

In-Memory Database as a Backup for Database Failures

Is an in-memory database a viable backup option for performing read operations in case of database failures? One could insert data into the in-memory database periodically, and in case the database server/web server goes down (a rare occurrence), one could still access the data present in the in-memory database outside of the web server.
If you're going to hold your entire database in memory, you might just as well perform all operations there and hold your backup on disk.
No, since a power outage means your database is gone, and the same happens if the DB process dies and the OS deallocates all the memory it was using.
I'd recommend a second hard drive, external or internal, and dump the data to that hard drive.
It obviously depends on your database usage. For instance, it would be hard for me to imagine Stack Overflow doing this.
On the other hand, not every application is SO. If your database usage is limited, you could take a cue from mobile applications, which accept the fact that a server may not always be available, and treat your web application as though it were a mobile client. See Architecting Disconnected Mobile Applications Using a Service Oriented Architecture.
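As a sketch of the read-fallback idea discussed above (everything here is an assumption for illustration: the schema, the refresh schedule, and using Python's built-in sqlite3 module as the in-memory store):

```python
# Hypothetical sketch of the read-fallback idea: keep an in-memory SQLite
# mirror of the data you must still be able to read, refresh it while the
# primary database is healthy, and serve reads from it during an outage.
import sqlite3

mirror = sqlite3.connect(":memory:")
mirror.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")

def refresh_mirror(primary_rows):
    """Called on a schedule with rows freshly fetched from the primary DB."""
    with mirror:  # one transaction: the mirror is never seen half-refreshed
        mirror.execute("DELETE FROM products")
        mirror.executemany("INSERT INTO products VALUES (?, ?)", primary_rows)

def read_products(fetch_from_primary):
    """fetch_from_primary: a callable that queries the real database."""
    try:
        return fetch_from_primary()  # normal path while the primary is up
    except Exception:
        # Primary unreachable: fall back to the (possibly stale) mirror.
        return mirror.execute("SELECT id, name FROM products").fetchall()
```

Note that the caveat from the answers above still applies: the mirror lives in process memory, so it only papers over outages of the primary database, not of the machine running this code.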

Has open source ever created a single-file database that auto-handles transactions?

Has open source ever created a single-file database that has better performance when handling large sets of SQL queries that aren't delivered in formal SQL transaction sets? I work with a .NET server that does some heavy replication of thousands of rows of data from another server, and it does so in a 1-by-1 fashion without formal SQL transactions. Therefore, I cannot use SQLite, FirebirdDB, or JavaDB, because none of them automatically batch the transactions, so the performance is dismal: each insert waits for the success of the previous one, and so on. So I am forced to use a heavier database like SQL Server, MySQL, Postgres, or Oracle.
Does anyone know of a flat-file database (with a JDBC driver) that supports auto-batching transactions and would solve my problem?
The main thing I don't like about the heavier databases is the lack of the ability to see inside the database with a one-mouse-click operation, like you can with SQLite.
I tried creating a SQLite database and then set PRAGMA read_uncommitted=TRUE; and it didn't result in any performance improvement.
I think that Firebird can work for this.
Firebird has a good .NET provider and many solutions for replication.
Maybe you can read this article about Firebird transactions.
Try HSQLDB (Hypersonic DB) - http://hsqldb.org/doc/guide/ch02.html#N104FC
If you want your transactions to be durable (i.e. survive a power failure), then the database HAS to write to the disk after each transaction (this is usually a log of some sort).
If your transactions are very small, this will result in a huge number of writes and very poor performance, even with a battery-backed RAID controller or an SSD, and worse performance still on consumer-grade hardware.
The only way of avoiding this is to somehow disable the flush at transaction commit (which of course breaks durability). I have no idea which databases support this, but it should be easy to find out.
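To make the trade-off concrete, here is a minimal Python sketch using the built-in sqlite3 module (the table and row count are made up): batching many inserts into one explicit transaction collapses thousands of log flushes into one, and SQLite's PRAGMA synchronous = OFF is one example of the "disable the flush at commit" option described above:

```python
# Sketch of the durability trade-off using Python's built-in sqlite3 module.
# The table and the row count are made up for illustration.
import sqlite3

conn = sqlite3.connect("replica.db")
conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, payload TEXT)")
rows = [(i, "data") for i in range(10000)]

# Slow: one transaction per row, so the engine flushes its log to disk
# once per insert to guarantee durability.
# for r in rows:
#     conn.execute("INSERT INTO t VALUES (?, ?)", r)
#     conn.commit()

# Fast: one explicit transaction around the whole batch -> one flush.
with conn:  # opens a transaction, commits (and flushes) once on success
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

# The "disable the flush at commit" option: SQLite stops calling fsync,
# which is fast but can lose recent transactions if the machine crashes.
conn.execute("PRAGMA synchronous = OFF")
conn.close()
```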

Detecting a Pessimistic Lock in VB6

I have a database system developed in VB6, and we have a scenario where more than one user may register at the same time, triggering an insert into the database. I have used a normal connection and Recordset to make the insert, and I initialize it with a pessimistic lock. Now, how can I check in my application, before inserting a record, whether the table has been locked? That way, if the table being inserted into is currently locked, I can alert the user that the table is in use, or store the data temporarily and insert it once the lock is released. The underlying database is Access, and the application runs across multiple systems with the database on a server.
You might want to read through Locking Shared Data by Using Recordset Objects in VBA. Most of it applies to VB6 as well as VBA.
It isn't really "normal" to lock a whole table, and you can't even do it via ADO and the Jet OLE DB Provider. Your question doesn't provide enough information to suggest any specific course of action.
You don't "check before inserting" either. Applications should be designed to stumble over locks relatively rarely. When they do, you deal with this as an exception. This is reflected in both the DAO and ADO APIs.
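The question is VB6-specific, but the "treat a lock as an exception rather than pre-checking" pattern sketches easily in any language. Here is a hypothetical Python version (the table name, retry count, delay, and the pyodbc-over-ODBC setup are all assumptions, not the asker's actual code):

```python
# Hypothetical sketch of "deal with the lock as an exception": attempt the
# insert, and on a locking error back off and retry instead of trying to
# detect the lock up front. Table name and timings are assumptions.
import time
import pyodbc

def insert_with_retry(conn, name, retries=5, delay=0.5):
    for _ in range(retries):
        try:
            conn.execute("INSERT INTO Users (name) VALUES (?)", name)
            conn.commit()
            return True
        except pyodbc.Error:  # e.g. record/page locked by another user
            time.sleep(delay)  # back off, then try again
    return False  # still locked after retrying: now alert the user
```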
