I have a Tcl application where multiple child processes read and write through a single DB connection to a SQLite database. Since some of the child processes can take longer than others, I encounter a "database locked" error thrown by Tcl.
I know the Tcl API for SQLite has a busy callback method, but it does not seem to be called even when the DB is locked. I just want all children to work properly: any child should wait for the lock and retry.
Any advice/examples much appreciated.
Thanks
Looking through the documentation, I see that:
dbconn timeout 2000
will set the lock-acquisition timeout to 2 seconds. Or you can tinker with the busy method. How long the timeout should be depends on how much contention you've got going on (a factor that has to be tuned to your hardware and code deployment), and also on whether SQLite was compiled with support for short sleeps (if HAVE_USLEEP wasn't 1 at configure time, SQLite can only sleep in whole-second increments while waiting for a lock). If you've got such a dodgy build in play, I strongly recommend fixing that, because you don't know what else is misconfigured.
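For what it's worth, the Tcl-level timeout method is a wrapper around the C-level sqlite3_busy_timeout(). Here is a minimal C sketch of the same idea; the file name and table are made up for illustration:

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;

    /* "app.db" is a made-up file name for this sketch. */
    if (sqlite3_open("app.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* Wait up to 2000 ms for a lock before giving up with SQLITE_BUSY;
       this is what "dbconn timeout 2000" does at the Tcl level. */
    sqlite3_busy_timeout(db, 2000);

    char *err = NULL;
    /* "jobs" is a hypothetical table. */
    if (sqlite3_exec(db, "UPDATE jobs SET done = 1;", NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "update failed: %s\n", err ? err : sqlite3_errmsg(db));
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}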
We have a development server on which no jobs/transactions are running, but the lazy writer and checkpoint are consuming more CPU. What could be the reason behind this? Please refer to the screenshot below; can anybody advise?
lazy writer and checkpoint are consuming more CPU.
No. Not "consuming", "consumed", at some point in the past. These sessions are currently just waiting around for any work to do, not consuming any resources.
Next time you modify some tables in the database they will wake up and write the changed data into the database files. When they do they will consume some CPU and perform some IO before going back to sleep, and you'll see that those values increased a little bit.
You should NOT use compatibility views like sys.sysprocesses unless you are on SQL Server 2000. The column you're looking at isn't even well documented, and it's not clear whether it reports CPU ticks or something else.
You should use this code:
select session_id, cpu_time
from sys.dm_exec_sessions
where session_id in (4, 15);
for the SPIDs mentioned above, and you'll see that your system processes are doing fine.
People have blogged about which wait types are harmless and can be safely excluded from performance monitoring. For example, some wait types indicate that a thread pool is polling for work, which is not a sign of anything bad. Here is an article I came across a while back; it is from around 2012, but it still has pertinent information:
http://thomaslarock.com/2012/05/are-you-using-the-right-sql-server-performance-metrics/
I am using sqlite3 version 3.6.23.1 on Fedora Linux. I have two threads running that access the database. There is a chance that both threads will try to perform a write operation on the same table.
The table gets locked while the first thread is performing its write operation. How do I handle this case?
Is there any C sqlite3 API mechanism to wait until the other thread has completed its write operation and then write to the table?
Thanks & Regards.
-praveen
There is a "shared cache" mode which can be set via the C API as described here.
Sqlite3 does a good job of maximizing concurrency and it is also thread-safe. See the File Locking and Concurrency document for more details.
The database does indeed get locked for the duration of a write operation (the lock is on the whole file, not just the table), but sqlite3 can handle this condition by waiting for the lock to be released and then granting it to the second process/thread waiting to write (or read). The timeout for waiting on a lock can be configured in your SQLite connection code. Here is the syntax for Python 2.7:
sqlite3.connect(database[, timeout, detect_types, isolation_level,
check_same_thread, factory, cached_statements])
The default timeout is 5.0 seconds, so it would take a fairly bulky SELECT or COMMIT transaction to hold a lock for that long. Based on your use case you could, however, tweak the timeout or include code to catch timeout exceptions.
A final option would be to incorporate some kind of flagging mechanism in your code that requires competing threads to wait until a flag clears before attempting to access the DB, but this duplicates the effort of the sqlite3 developers who have catered for concurrency scenarios as a major part of their job.
Here is an interesting article outlining a problem whereby older versions of sqlite may sleep for a whole second when unable to acquire a lock.
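If a flat timeout isn't flexible enough, the C API also lets you install a custom busy handler with sqlite3_busy_handler(), so you can implement your own wait-and-retry policy. A rough sketch follows; the retry cap, sleep interval, file name, and table are arbitrary choices for illustration:

#include <stdio.h>
#include <unistd.h>
#include <sqlite3.h>

/* Invoked whenever a lock attempt fails; `tries` counts prior calls
   for this particular lock. Return nonzero to retry, zero to give up
   (the pending call then returns SQLITE_BUSY). */
static int busy_cb(void *unused, int tries) {
    (void)unused;
    if (tries >= 50)        /* arbitrary cap: roughly 5 seconds total */
        return 0;
    usleep(100 * 1000);     /* arbitrary 100 ms back-off before retrying */
    return 1;
}

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("test.db", &db) != SQLITE_OK)   /* made-up file name */
        return 1;

    sqlite3_busy_handler(db, busy_cb, NULL);

    /* A writer that loses the race now spins in busy_cb instead of
       failing immediately with "database is locked". */
    char *err = NULL;
    if (sqlite3_exec(db, "INSERT INTO t VALUES (1);", NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "insert failed: %s\n", err ? err : "unknown");
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}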
I am implementing a small database, like MySQL. It's part of a larger project.
Right now I have designed the core database, by which I mean I have implemented a parser and I can now execute some basic SQL queries against my database. It can store, update, delete, and retrieve data from files. As of now that's fine; however, I want to make it work over a network.
I want more than one user to be able to access my database server and execute queries on it at the same time. I am working under Linux, so portability is not an issue right now.
I know I need to use sockets, which is fine. I also know that I need something like a thread pool, where I create a maximum number of threads initially and then, for each client request, wake up a thread and assign it to that client.
What I am unable to figure out is how all of this is actually going to be bundled together. Where should I implement multithreading: on the client side or the server side? And how is my parser going to be configured to take input from each of the clients separately (mostly via files, I think)?
If anyone has an idea about how I can implement this, please do tell me, because I am stuck at this point in the project.
Thanks. :)
If you haven't already, take a look at Beej's Guide to Network Programming to get your hands dirty in some socket programming.
Next I would take his example of a stream client and server and just use that as a single-threaded query system. Once you have that down, you'll need to choose whether to actually use threads or use select(). My gut says your on-disk database doesn't yet support parallel writes (maybe reads), so a single server thread servicing requests is likely your best bet for starters!
In the multiple client model, you could use a simple per-socket hashtable of client information and return any results immediately when you process their query. Once you get into threading with the networking and db queries, it can get pretty complicated. So work up from the single client, add polling for multiple clients, and then start reading up on and tackling threaded (probably with pthreads) client-server models.
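To make the progression concrete, here is a bare-bones sketch of the select()-based single-threaded server described above. The port number is arbitrary, and the query handling is just an echo where your parser would plug in:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/select.h>

int main(void) {
    /* 7000 is an arbitrary port for this sketch. */
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7000);
    if (bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(listener, 10) < 0) {
        perror("bind/listen");
        return 1;
    }

    fd_set master, readable;
    FD_ZERO(&master);
    FD_SET(listener, &master);
    int fdmax = listener;

    for (;;) {
        readable = master;                      /* select() mutates its sets */
        if (select(fdmax + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        for (int fd = 0; fd <= fdmax; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;

            if (fd == listener) {               /* new client connecting */
                int client = accept(listener, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &master);
                    if (client > fdmax)
                        fdmax = client;
                }
            } else {                            /* a client sent a query */
                char buf[1024];
                ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
                if (n <= 0) {                   /* client disconnected */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    buf[n] = '\0';
                    /* Hand buf to your parser here and send back its
                       result; this sketch just echoes the query. */
                    send(fd, buf, (size_t)n, 0);
                }
            }
        }
    }
    return 0;
}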
Server side, as the server is the only party that can interpret the information. You need to design locks, or come up with your own model, to make sure that modifications/edits don't affect clients currently being served.
As an alternative to multithreading, you might consider an event-based, single-threaded approach (e.g. using poll or epoll). An example of a very fast (non-SQL) database which uses exactly this approach is redis.
This design has two obvious disadvantages: you only ever use a single CPU core, and a lengthy query will block other clients for a noticeable time. However, if queries are reasonably fast, nobody will notice.
On the other hand, the single-threaded design has the advantage of automatically serializing requests. There are no ambiguities and no need for locking. No write can come in between a read (or another write); it just can't happen.
If you don't have something like a robust, working MVCC built into your database (or are at least working on it), knowing that you need not worry can be a huge advantage. Concurrent reads are not so much an issue, but concurrent reads and writes are.
Alternatively, you might consider doing the input/output and syntax checking in one thread and running the actual queries in another (queries passed via a queue). That, too, will remove the synchronisation woes, and it will at least offer some latency hiding and some use of multiple cores.
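A rough sketch of that split, with a single pthread worker draining a queue of query strings; the printf is a stand-in for the actual engine call, and the queries in main are made up:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A tiny linked-list queue of SQL strings, protected by a mutex. */
typedef struct node { char *sql; struct node *next; } node;

static node *head = NULL, *tail = NULL;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;

/* Called by the I/O + parsing thread after syntax checking. */
static void enqueue(const char *sql) {
    node *n = malloc(sizeof *n);
    n->sql = strdup(sql);
    n->next = NULL;
    pthread_mutex_lock(&lock);
    if (tail) tail->next = n; else head = n;
    tail = n;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
}

/* The single executor thread. Because it is the only consumer, queries
   run strictly one at a time: the engine itself needs no locking. */
static void *executor(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == NULL)
            pthread_cond_wait(&ready, &lock);
        node *n = head;
        head = n->next;
        if (head == NULL) tail = NULL;
        pthread_mutex_unlock(&lock);

        printf("running: %s\n", n->sql);   /* stand-in for the real engine call */
        free(n->sql);
        free(n);
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, executor, NULL);

    /* In the real server, the networking thread would call enqueue()
       for every query it receives and validates. */
    enqueue("INSERT INTO t VALUES (1);");
    enqueue("SELECT * FROM t;");

    pthread_join(tid, NULL);   /* the executor loops forever in this sketch */
    return 0;
}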
In an environment with a SQL Server failover cluster or mirror, how do you prefer to handle errors? It seems like there are two options:
Fail the entire current client request, and let the user retry
Catch the error in your DAL, and retry there
Each approach has its pros and cons. Most shops I've worked with do #1, but many of them also don't follow strict transactional boundaries, and seem to me to be leaving themselves open for trouble in the event of failure. Even so, I'm having trouble talking them into #2, which should also result in a better user experience (one catch is the potentially long delay while the failover happens).
Any arguments one way or the other would be appreciated. If you use the second approach, do you have a standard wrapper that helps simplify implementation? Either way, how do you structure your code to avoid issues such as those related to the lack of idempotency in the command that failed?
Number 2 could be an infinite loop. What if it's network-related, or the local PC needs to be rebooted, or whatever?
Number 1 is annoying to users, of course.
If you only allow access via a web site, then you'll never see the error anyway unless the failover happens mid-call. For us, this is unlikely and we have failed over without end users realising.
In real life you may not have a nice, clean DAL on a web server. You may have an Excel sheet connecting (most financials do) or WinForms apps where the connection is kept open, so you only have the one option.
Failover should only take a few seconds anyway. If the DB recovery takes longer than that, you have bigger issues. And if it happens often enough that you have to think about handling it, well...
In summary, it will happen so rarely that you'll want to know about it, so number 1 would be better. IMHO.
I tried SQLite:
with multiple threads, only one thread can update the DB at a time.
I need multiple threads updating the DB at the same time.
Is there any DB that can do the job?
PS: I use Delphi 6.
I found that SQLite can support multithreading, but in my test of asgsqlite, when one thread is inserting, the others fail to insert.
I'm still testing.
SQLite can be used in multi-threaded environments.
Check out this link.
Firebird can be used in an embedded version, but it's no problem to use the standard (server) installation locally as well. Very small, easy to deploy, with concurrent access. It works well with Delphi; you should look into it as an option.
See also the StackOverflow question "Which embedded database to use in a Delphi application?"
SQLite locks the entire database when updating (unless this has changed since I last used it). A second thread cannot update the database at the same time (even in entirely separate tables). However, there is a timeout parameter that tells the second thread to retry for x milliseconds before failing. I think ASqlite surfaces this parameter in the database component (I think I actually wrote that bit of code, all 3 lines of it, but it was a couple of years ago).
Setting the timeout to a value larger than 0 will allow multiple threads to update the database, though there may be performance implications.
Since version 3.3.1, SQLite's threading requirements have been greatly relaxed. In most cases, that means it simply works. If you really need more concurrency than that, it might be better to use a DB server.
SQL Server 2008 Express supports concurrency, as well as most other features of SQL Server. And it's free.
Why do you need multiple threads to update it at the same time? I'm sure sqlite will ensure that the updates get done correctly, even if that means one thread waiting for the other one to finish; this is transparent to the application.
Indeed, having several threads updating concurrently would, in all likelihood, not be beneficial to performance. That is to say, it might LOOK like several threads are updating concurrently, but in reality the updates would complete more slowly than if they were serialized (because the threads need to hold many page locks, etc., to avoid problems).
DBISAM from ElevateSoft works very nicely in multi-threaded mode, and has auto-session naming to make this easy. Be sure to follow the page in the help on how to make it all safe, and job done.
I'm actually doing performance testing at the moment with a multi-threaded Java process on Sybase ASE. The process parses a 1 GB file and does inserts into a table.
I was afraid at first, because many of the senior programmers warned me about "table locking" and how dangerous it is to do concurrent access to DB. But I went ahead and did testing (because I wanted to find out for myself).
I created and compared a single-threaded process to a process using 4 threads, and I only got a 20% reduction in total execution time. I reran the process with different thread counts and batch insert sizes; the maximum improvement I could squeeze out was 20%.
We are going to be switching to Oracle soon, so I'll share how Oracle handles concurrent inserts when that happens.