My scenario is this. I have a single entity composed of several rows in several tables. Conceptually this can be seen as a single document. When a user opens a "document", all associated rows must be locked, much like Windows locks a file when it is opened. As the "document" may be open until the user chooses to close it, I don't think transactions are a viable solution. The only solution I have come up with is to have a boolean 'Locked' field in every table and to set this to True on relevant rows when a "document" is opened. I'm using SQL Server 2008 R2.
Any ideas?
You can use application locking.
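Presumably this means SQL Server's sp_getapplock/sp_releaseapplock. A minimal sketch (the resource name is made up; with @LockOwner = 'Session' the lock is held until released or the session ends):
DECLARE @result INT;
EXEC @result = sp_getapplock
    @Resource = 'document_12345',
    @LockMode = 'Exclusive',
    @LockOwner = 'Session',   -- held until released or the session ends
    @LockTimeout = 0;         -- fail immediately if already locked
IF @result < 0
    PRINT 'Document is already locked by another session.';

-- Later, when the user closes the document:
EXEC sp_releaseapplock @Resource = 'document_12345', @LockOwner = 'Session';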
Assuming each document has a unique ID, a portable solution is to use a single table that knows about every locked document:
CREATE TABLE DocumentLocked (
    doc_id INT PRIMARY KEY,
    session_id VARCHAR(64),  -- use whatever type fits your session identifiers; VARCHAR(64) is just an example
    lock_acquired DATETIME
);
When you want to lock a document, try to insert the id of the document in question along with some session ID identifying the owning session and the time you locked it. If the insert fails with a primary-key violation, the document is already locked. The session_id and lock_acquired columns play no role in the locking itself; they just help display useful information to the user, like, "This document was locked by Rubio at 9:43 am."
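A minimal sketch of the acquisition (the id and session string are made up):
INSERT INTO DocumentLocked (doc_id, session_id, lock_acquired)
VALUES (12345, 'rubio-session-01', CURRENT_TIMESTAMP);
-- a primary-key violation here means someone else already holds the lock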
One problem with this approach is that crashed clients can leave documents locked forever, so you need some application-level mechanism to clobber stale locks. The lock_acquired column can serve as a timeout mechanism: require the application to refresh the time every so often, and treat any lock whose timestamp is too old as stale.
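A minimal sketch of the heartbeat and the cleanup (the document id and the 5-minute timeout are made up; DATEADD is SQL Server syntax):
-- Run periodically by the client while the document is open:
UPDATE DocumentLocked SET lock_acquired = CURRENT_TIMESTAMP WHERE doc_id = 12345;

-- Run by any client (or a scheduled job) to clobber stale locks:
DELETE FROM DocumentLocked WHERE lock_acquired < DATEADD(MINUTE, -5, CURRENT_TIMESTAMP);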
If you don't care about portability, go with #demas's answer.
In your case, "locking" is probably part of your business process, and the low-level transaction mechanism should not be used for that purpose. So you're right: you need a 'Locked' field, either in all participating tables or a single one in a special lock table.
Related
I have a question about how to optimize a database. We have a SQL Server and the main data is stored vertically. So a record has these columns:
ID, version, fieldindex, fieldvalue
and (ID, version, fieldindex) is the primary key.
So if you want to load a logical record set, you have to load all lines of an ID + version. One "record" can consist of around 60 lines. I'm afraid this causes some performance problems in the database.
There are around 10 users working in parallel in the application, and we are getting deadlocks very often. We even get deadlocks when inserting new lines, although normally a record that hasn't been persisted yet shouldn't be lockable.
So my question is: how does SQL Server lock records? Is it possible that a record is locked even if I am not selecting that particular record?
I would be glad if someone could explain how the database handles this.
You've got EAV (entity-attribute-value), which is generally considered bad.
To make EAV work acceptably, you'll need the right indexing structure and possibly some care with lock hints and transactions.
Generally you'll want your clustered index to be (EntityID, AttributeID), so all the attributes for an entity are co-located. But to avoid deadlocks you may need to X lock the main Entity row when modifying the attributes: SQL Server will otherwise use row locking on the AttributeValue table, which can lead to deadlocks and logical inconsistencies. You can X lock it by modifying the row as the first operation in your transaction, or by reading it with an XLOCK hint (a sketch follows below).
Depending on the role of "version" in your system, it will be somewhere in the clustered index too. If attributes are individually versioned, then at the end. If individual entities are versioned, then in the middle. And if a version contains multiple entities, then as the leading column.
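For example, a minimal sketch assuming entity-level versioning (table, column, and @variable names are placeholders for your own):
CREATE CLUSTERED INDEX IX_AttributeValue
    ON AttributeValue (EntityID, Version, AttributeID);

BEGIN TRANSACTION;
    -- Read the parent row with XLOCK so concurrent writers serialize here
    SELECT EntityID FROM Entity WITH (XLOCK, ROWLOCK)
    WHERE EntityID = @EntityID;

    UPDATE AttributeValue
    SET FieldValue = @NewValue
    WHERE EntityID = @EntityID AND Version = @Version AND AttributeID = @AttributeID;
COMMIT;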
I am new to database locking, so please bear with me.
I have my SELECT statements in one file, and my UPDATE statements in another. Both make a connection at the start of the file, then disconnect at the end. What I am trying to do is lock the table row in the select statement, and if the user cancels or updates, release that lock.
I have tried advisory locks, but they unlock when I disconnect. All other locks that I tried do the same.
Is there a way I can achieve what I want with my current DB structure, or do I need to rewrite the whole thing or brute-force it with boolean columns like row_locked in my DB?
Thanks in advance for the help :)
If you need to persist that information across transactions and sessions, then yes, you need a column that stores that information.
I wouldn't use a boolean though, but a nullable timestamp column so that you can see when the row was locked (null means "not locked", not null means "locked"). This is very useful for cleaning up "abandoned locks".
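A minimal sketch of that approach (the table name, column name, and id are made up):
ALTER TABLE items ADD COLUMN locked_at TIMESTAMP NULL;

-- Claim the lock only if nobody else holds it;
-- an affected-row count of 0 means the row was already locked:
UPDATE items SET locked_at = CURRENT_TIMESTAMP
WHERE id = 42 AND locked_at IS NULL;

-- Release the lock when the user saves or cancels:
UPDATE items SET locked_at = NULL WHERE id = 42;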
We want to know what rows in a certain table is used frequently, and which are never used. We could add an extra column for this, but then we'd get an UPDATE for every SELECT, which sounds expensive? (The table contains 80k+ rows, some of which are used very often.)
Is there a better and perhaps faster way to do this? We're using some old version of Microsoft's SQL Server.
This kind of logging/tracking is classically the application server's job. If you want to build your own tracking architecture, do it in your own layer.
In any case you will need an application server for this. You are not going to update the tracking field in the same transaction as the SELECT, are you? What about rollbacks? So you need some manager that first runs the SELECT and then writes the tracking information. And what is the point of sending the tracking information back to the DB to store it alongside the entity data? Save it to a file on the application server.
You could update the column in the table as you suggested, but if it were me I'd log the event to another table: the id of the record, the datetime, the user id (maybe IP address, browser version, etc.), and just about anything else I could capture that might possibly be relevant. (For example, six months from now your manager decides s/he wants to know not only which records were used the most, but also which users are using the most records, or what time of day usage peaks, etc.)
This type of information can be useful for things you've never even thought of down the road, and if the table starts to grow large you can always roll it up and prune it to a smaller one if performance becomes an issue. When possible, I log everything I can. You may never use some of this information, but you'll never wish you didn't have it available, and it will be impossible to re-create historically.
In terms of making sure the application doesn't slow down, you may want to SELECT the data from within a stored procedure that also issues the logging command, so that the client is not doing two round trips (one for the select, one for the update/insert).
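A minimal T-SQL sketch of that idea (procedure, table, and column names are made up):
CREATE PROCEDURE GetRecordWithLogging @id INT
AS
BEGIN
    -- Log the access first...
    INSERT INTO AccessLog (record_id, accessed_at)
    VALUES (@id, GETDATE());

    -- ...then return the data in the same round trip
    SELECT * FROM Records WHERE id = @id;
END;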
Alternatively, if this is a web application, you could use an async AJAX call to issue the logging action, which wouldn't slow down the user's experience at all.
Adding a new column to track SELECTs is not good practice, because it may hurt database performance, and performance is one of the most critical concerns in database server administration.
Instead you can use a database feature called auditing, which is easy to set up and puts less stress on the database.
Search for "database auditing for SELECT statements" for more information.
Use another table as a key/value pair with two columns (e.g. id_selected, times) to store the ids of the records you select in your standard table, and increment the times value by 1 every time a record is selected.
To do this you'd have to do a mass insert/update of the selected ids from your select query in the counting table. E.g. as a quick example:
SELECT id, stuff1, stuff2 FROM myTable WHERE stuff1 = 'somevalue';

INSERT INTO countTable (id_selected, times)
SELECT id, 1 FROM myTable mt WHERE mt.stuff1 = 'somevalue'  -- or just build a list of ids as values from your last result
ON DUPLICATE KEY UPDATE times = times + 1;
The ON DUPLICATE KEY syntax is off the top of my head and is MySQL-specific. For conditionally inserting or updating in MSSQL you would need to use MERGE instead, along these lines.
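A minimal T-SQL sketch of the MERGE equivalent (names match the MySQL example above; note that MERGE requires SQL Server 2008 or later):
MERGE countTable AS t
USING (SELECT id FROM myTable WHERE stuff1 = 'somevalue') AS s
    ON t.id_selected = s.id
WHEN MATCHED THEN
    UPDATE SET t.times = t.times + 1
WHEN NOT MATCHED THEN
    INSERT (id_selected, times) VALUES (s.id, 1);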
Is it possible to effectively tail a database table such that when a new row is added an application is immediately notified with the new row? Any database can be used.
Use an ON INSERT trigger.
You will need to check the specifics of how to call an external application with the values contained in the inserted record, or you can write your "application" as a SQL procedure and have it run inside the database.
It sounds like you will want to brush up on databases in general before you paint yourself into a corner with your command-line approaches.
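As a minimal sketch of the trigger idea in T-SQL (table and column names are made up; the application would poll or drain the queue table):
CREATE TRIGGER trg_mytable_insert ON mytable
AFTER INSERT
AS
BEGIN
    -- Copy each newly inserted row's key into a queue table for the application
    INSERT INTO new_row_queue (row_id, queued_at)
    SELECT id, GETDATE() FROM inserted;
END;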
Yes, if the database is a flat text file and appends are done at the end.
Yes, if the database supports this feature in some other way; check the relevant manual.
Otherwise, no. Databases tend to be binary files.
I am not sure, but this might work for primitive flat-file databases. As far as I understand (and I could be wrong), modern database files are opaque binary formats (and possibly encrypted), so reading a newly added row with that command would not work.
I would imagine most databases allow for write triggers, and you could have a script that triggers on write that tells you some of what happened. I don't know what information would be available, as it would depend on the individual database.
There are a few options here, some of which others have noted:
Periodically poll for new rows. With the way MVCC works, though, it's possible to miss a row if an INSERT was still mid-transaction when you last queried.
Define a trigger function that will do some work for you on each insert. (In Postgres you can call a NOTIFY command that other processes can LISTEN to; a sketch follows below.) You could combine a trigger with writes to an unpublished_row_ids table to ensure that your tailing process doesn't miss anything. (The tailing process would then delete IDs from the unpublished_row_ids table as it processed them.)
Hook into the database's replication functionality, if it provides any. This should have a means of guaranteeing that rows aren't missed.
I've blogged in more detail about how to do all these options with Postgres at http://btubbs.com/streaming-updates-from-postgres.html.
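A minimal sketch of the Postgres trigger + NOTIFY approach (the table, function, and channel names are made up):
CREATE OR REPLACE FUNCTION notify_new_row() RETURNS trigger AS $$
BEGIN
    -- Publish the new row's id on the 'new_rows' channel
    PERFORM pg_notify('new_rows', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_notify
AFTER INSERT ON mytable
FOR EACH ROW EXECUTE PROCEDURE notify_new_row();

-- In the tailing client's session:
LISTEN new_rows;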
tail on Linux appears to be using inotify to tell when a file changes - it probably uses similar filesystem notifications frameworks on other operating systems. Therefore it does detect file modifications.
That said, tail performs an fstat() call after each detected change and will not output anything unless the size of the file increases. Modern DB systems use random file access and reuse DB pages, so it's very possible that an inserted row will not cause the backing file size to change.
You're better off using inotify (or similar) directly, and even better off if you use DB triggers or whatever mechanism your DBMS offers to watch for DB updates, since not all file updates are necessarily row insertions.
I was just in the middle of posting the same exact response as glowcoder, plus another idea:
The low-tech way to do it is to have a timestamp field, and have a program run a query every n minutes looking for records where the timestamp is greater than that of the last run. The same concept can be done by storing the last key seen if you use a sequence, or even adding a boolean field "processed".
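A minimal sketch of the polling query (assumes a created_at timestamp column; :last_run_time is a bind parameter holding the previous run's timestamp):
SELECT * FROM myTable
WHERE created_at > :last_run_time
ORDER BY created_at;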
With Oracle you can select a pseudo-column called ROWID that gives a unique identifier for each row in the table, and rowids are ordinal: new rows get assigned rowids greater than any existing rowid.
So, first:
SELECT MAX(ROWID) FROM table_name;
I assume that one cause for the raised question is that there are many, many rows in the table, so this first step will tax the db a little and take some time.
Then:
SELECT * FROM table_name WHERE ROWID > 'whatever_that_rowid_string_was';
You still have to run the query periodically, but it is now just a quick and inexpensive query.
Is there any way to force-drop a table in Sybase that is currently in use?
Or any way to get rid of the lock.
Force drop will be a better option. But does a force drop exist ?
Sybase is fully online and multi-user; there is no need for single-user mode.
If you have enough privilege you can perform various actions. None of these actions break data, database, or referential integrity that is already defined in the DDL:
If the problem is that the table (not pages) is locked, and you want to eliminate the table lock that is preventing other users from accessing the table, kill the spid. sp_lock will identify the server process id.
If you actually want to drop the table, but it is locked, first kill the spid; then drop the table. A sketch follows below.
(There is a "force drop" command, but it is undocumented and unsupported; more importantly, it is for special cases and not necessary for yours.)
No, you can't, because if Sybase allowed that it would break the integrity of the database.
Imagine: one user is reading data from a table while, at the same time, another is destroying that same table!
If you want to force it, you have to switch the database into single-user mode; after that, no one but you can connect to the database, and you can do what you want.
For switching to single-user mode, try http://www.tek-tips.com/viewthread.cfm?qid=220392&page=49