Here is a description of my problem:
I have two threads in my program: the main thread, and another that I create using pthread_create.
The main thread performs various functions on an sqlite3 database. Each function opens the database to perform the required actions and closes it when done.
The other thread simply reads from the database at a set interval and uploads the data to a server. This thread also opens and closes the database to perform its operation.
The problem occurs when both threads happen to have the database open. If one finishes first, it closes the database, causing the other to crash and making the application unusable.
Main requires the database for every operation.
Is there a way I can prevent this from happening? A mutex is one option, but if I use a mutex it will block my main thread, which must remain functional at all times while the other thread runs in the background.
Any advice to make this work would be great.
I did not provide snippets at first as this problem is a bit too vast for that, but if anything about the problem is unclear, please let me know.
EDIT:
static sqlite3 *db = NULL;
Code snippet for opening database
int open_database(char* DB_dir) // argument is the db path
{
    int rc = sqlite3_open(DB_dir, &db);
    if (rc != SQLITE_OK)
    {
        //failed to open message
        sqlite3_close(db);
        db = NULL;
        return SDK_SQL_ERR;
    }
    else
    {
        //success message
    }
    return SDK_OK;
}
And to close db
int close_database()
{
    if (db != NULL)
    {
        sqlite3_close(db);
        db = NULL;
        //success message
    }
    return 1;
}
EDIT: I forgot to add that the background thread also performs a single write operation, updating one field of the table for each row it uploads to the server.
Have your threads each use their own database connection. There's no reason for the background thread to affect the main thread's connection.
Generally, I would want to be using connection pooling, so that I don't open and close database connections very frequently; connection opening is an expensive operation.
In application servers we very often have many threads, and we find that a connection pool of a few tens of connections is sufficient to service requests on behalf of many hundreds of users.
SQLite also has locking mechanisms built in: BEGIN EXCLUSIVE takes the write lock up front, and you can register a sleep callback so that a blocked thread can back off and retry while the other finishes; see sqlite3_busy_handler().
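As a minimal sketch of the per-thread-connection approach (reusing the SDK_OK/SDK_SQL_ERR codes from the question, and using sqlite3_busy_timeout(), the convenience wrapper around sqlite3_busy_handler()), something like this would let each thread hold its own handle:

#include <sqlite3.h>

/* Each thread keeps its own handle instead of sharing the global db. */
int open_database(const char *db_path, sqlite3 **conn)
{
    int rc = sqlite3_open(db_path, conn);
    if (rc != SQLITE_OK) {
        sqlite3_close(*conn);   /* safe to call even after a failed open */
        *conn = NULL;
        return SDK_SQL_ERR;
    }
    /* If the other connection holds a lock, retry for up to 5 seconds
       instead of failing immediately with SQLITE_BUSY. */
    sqlite3_busy_timeout(*conn, 5000);
    return SDK_OK;
}

With one connection per thread, closing the background thread's connection can no longer invalidate the main thread's.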
The tpinit and tpterm Tuxedo functions are taking time. They are basically used in every request by the client to join and leave the application. We observed heavy slowness when the number of requests from a multi-threaded client process is high.
We tried increasing the number of virtual cores on the machine but still face the same problem.
TPINIT * tpinitbuf;
if((tpinitbuf = (TPINIT *)tpalloc("TPINIT",(char *)NULL,TPINITNEED(16))) == (TPINIT *)NULL)
{
printf("ERROR IS:: %s\n", tpstrerror(tperrno));
return NULL;
}
tpinitbuf->flags = TPMULTICONTEXTS;
tpinit(tpinitbuf); //this function is taking time.
tpgetctxt(&ctxt, 0);
tpfree ((char *) tpinitbuf) ;
retVal=tpcall("MY_SERVICE",(char *)buf1,0,(char **) &buf2,&size,0L);
tpterm(); // this function is taking time.
Ideally tpinit and tpterm should take around 50 milliseconds, but when the number of requests is high they take around 1.3 seconds.
Why do you do that? Do tpinit() once per thread and do tpterm() only when the thread terminates. If you create new short-lived threads all the time, then switch to using a thread pool.
Think of "joining the Tuxedo application" as "connecting to a database" - and connecting/disconnecting on every request does not seem such a great idea anymore.
There are a number of things tpinit() has to do: register itself in the shared memory (taking semaphores to prevent concurrent updates), create a reply queue and register it in the shared memory (so the BBL can clean up after crashed processes), look up service-to-queue mappings, load plug-ins, etc. Tuxedo could be faster at all of that, but if you do it too often, it's your own fault, not Tuxedo's.
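A rough sketch of that pattern, with hypothetical more_requests()/handle_one_request() helpers standing in for your real request plumbing - join once when the worker starts, reuse the context for every tpcall(), and leave only at thread exit:

#include <atmi.h>

/* Hypothetical helpers standing in for the real request loop. */
int  more_requests(void);
void handle_one_request(void);

void *worker_thread(void *arg)
{
    (void)arg;
    TPINIT *tpinitbuf = (TPINIT *)tpalloc("TPINIT", (char *)NULL, TPINITNEED(16));
    if (tpinitbuf == (TPINIT *)NULL)
        return NULL;
    tpinitbuf->flags = TPMULTICONTEXTS;

    /* Pay the tpinit() cost once per thread, not once per request. */
    if (tpinit(tpinitbuf) == -1) {
        tpfree((char *)tpinitbuf);
        return NULL;
    }
    tpfree((char *)tpinitbuf);

    while (more_requests())
        handle_one_request();   /* tpcall(...) here, reusing the joined context */

    tpterm();                   /* leave only when the thread terminates */
    return NULL;
}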
I want to create two functions. The first one connects to the DB; the second one performs a complete reconnection if the first fails.
In my experiment I turn off the DB at the start so that the connect block fails and the reconnect block is called. After that I turn the DB back on, expecting the connect block to succeed, but I am getting an exception.
Here is my code:
bool connect()
{
if(connection is null)
{
scope(failure) reconnect(); // call reconnect if fail
this.connection = mydb.lockConnection();
writeln("connection done");
return true;
}
else
return false;
}
void reconnect()
{
writeln("reconnection block");
if(connection is null)
{
while(!connect) // keep trying until the connection is established
{
Thread.sleep(3.seconds);
connectionsAttempts++;
logError("Connection to DB is not active...");
logError("Reconnection to DB attempt: %s", connectionsAttempts);
connect();
}
if(connection !is null)
{
logWarn("Reconnection to DB server done");
}
}
}
The log (turning on DB after few seconds):
reconnection block
reconnection block
connection done
Reconnection to DB server done
object.Exception#C:\Users\Dima\AppData\Roaming\dub\packages\vibe-d-0.7.30\vibe-d\source\vibe\core\drivers\libevent2.d(326): Failed to connect to host 194.87.235.42:3306: Connection timed out [WSAETIMEDOUT ]
I can't understand why I am getting the exception after: Reconnection to DB server done
There are two main problems here.
First of all, there shouldn't be any need for automatic retry attempts at all. If it didn't work the first time, and you don't change anything, there's no reason doing the same exact thing should suddenly work the second time. If your network is that unreliable, then you have much bigger problems.
Secondly, if you are going to automatically retry anyway, that code's not going to work:
For one thing, reconnect is calling connect TWICE on every failure: Once at the end of the loop body and then immediately again in the loop condition regardless of whether the connection succeeded. That's probably not what you intended.
But more importantly, you have a potentially-infinite recursion going on there: connect calls reconnect if it fails. Then reconnect calls connect repeatedly, and each of those calls invokes reconnect AGAIN on failure, looping forever until the connection configuration that didn't work somehow magically starts working (or, perhaps more likely, until you blow the stack and crash).
Honestly, I'd recommend simply throwing that all away: Just call lockConnection (if you're using vibe.d) or new Connection(...) (if you're not using vibe.d) and be done with it. If your connection settings are wrong, then trying the same connection settings again isn't going to fix them.
lockConnection -- Is there supposed to be a matching "unlock"? – Rick James
No, the connection pool in question comes from vibe.d. When the fiber which locked the connection exits (usually meaning "when your server is done processing a request"), any connections the fiber locked automatically get returned to the pool.
I am trying to implement a simple echo server with concurrency support on Linux.
The following approach is used:
Use pthread functions to create a pool of threads, maintained in a linked list. The pool is created on process start and destroyed on process termination.
The main thread accepts requests and uses a POSIX message queue to store the accepted socket file descriptors.
Threads in the pool loop to read from the message queue and handle each request they get; when there is no request, they block.
The program seems working now.
The questions are:
Is it suitable to use a message queue in the middle, and is it efficient enough?
What is the general approach to building a thread pool that handles concurrent requests from multiple clients?
If it is not proper to have the pool threads loop and block to retrieve messages from the message queue, how else should requests be delivered to the threads?
This seems unnecessarily complicated to me. The usual approach for a multithreaded server is:
Create a listen socket in the main process
Accept the client connections in a thread
For each accepted client connection, create a new thread, which receives the corresponding file descriptor and does the work
The worker thread closes the client connection when it is fully handled
I do not see much benefit in prepopulating a thread-pool here.
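A bare-bones sketch of that thread-per-connection flow for an echo server (assuming the listen socket is already bound and listening) might look like this:

#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static void *handle_client(void *arg)
{
    int fd = (int)(intptr_t)arg;      /* client fd smuggled through the void* */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(fd, buf, n);            /* echo the data back */
    close(fd);                        /* worker closes when fully handled */
    return NULL;
}

void accept_loop(int listen_fd)
{
    for (;;) {
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0)
            continue;
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)client_fd);
        pthread_detach(tid);          /* no join needed; the thread cleans up itself */
    }
}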
If you really want a threadpool:
I would just use a linked list for accepted connections and a pthread_mutex to synchronize access to it:
The listener enqueues client fds at the tail of the list.
The workers dequeue them at the head.
If the queue is empty, the worker threads wait on a condition variable (pthread_cond_wait) and are notified by the listener (pthread_cond_signal) when connections become available.
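A minimal sketch of that queue, using a fixed-size ring buffer instead of a linked list for brevity:

#include <pthread.h>

#define QUEUE_CAP 64

static int queue[QUEUE_CAP];
static int head = 0, count = 0;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;

/* Listener: enqueue an accepted client fd and wake one worker. */
void enqueue_fd(int fd)
{
    pthread_mutex_lock(&q_lock);
    queue[(head + count) % QUEUE_CAP] = fd;  /* overwrites if full; real code must handle that */
    count++;
    pthread_cond_signal(&q_nonempty);
    pthread_mutex_unlock(&q_lock);
}

/* Worker: block until a client fd is available, then dequeue it. */
int dequeue_fd(void)
{
    pthread_mutex_lock(&q_lock);
    while (count == 0)                       /* loop guards against spurious wakeups */
        pthread_cond_wait(&q_nonempty, &q_lock);
    int fd = queue[head];
    head = (head + 1) % QUEUE_CAP;
    count--;
    pthread_mutex_unlock(&q_lock);
    return fd;
}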
Another alternative
Depending on the complexity of handling requests, it might be an option to make the server single-threaded, i.e. handle all connections in one thread. This eliminates context-switches altogether and can thus be very performant.
One drawback is, that only one CPU-core is used. To improve that, a hybrid-model can be used:
Create one worker-thread per core.
Each thread handles n connections simultaneously.
You would, however, have to implement mechanisms to distribute the work fairly amongst the workers. A per-worker event loop is the usual building block, as sketched below.
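A rough sketch of such a worker using epoll (the error handling and echo logic are reduced to the minimum; how the listener picks a worker - round-robin, least-loaded, etc. - is left open):

#include <sys/epoll.h>
#include <unistd.h>

#define MAX_EVENTS 64

/* Listener side: hand a newly accepted client to a worker by adding it
   to that worker's epoll instance. */
void add_client_fd(int epfd, int client_fd)
{
    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = client_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, client_fd, &ev);
}

/* Worker side: one event loop multiplexing all of its connections.
   The worker's epoll fd is passed in through the pthread argument. */
void *worker_loop(void *arg)
{
    int epfd = *(int *)arg;
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            char buf[4096];
            ssize_t len = read(fd, buf, sizeof(buf));
            if (len <= 0) {                       /* peer closed or error */
                epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                close(fd);
            } else {
                write(fd, buf, len);              /* echo it back */
            }
        }
    }
    return NULL;
}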
In addition to a pthread_mutex, you will want to use a pthread_cond_t (pthread condition variable); this allows the threads in the thread pool to sleep while they are not actually doing work. Otherwise, you waste compute cycles as they sit in a loop polling the work queue.
I would definitely consider using C++ instead of pure C. The reason I suggest it is that C++ lets you use templates. Using a pure virtual base class (let's call it "vtask"), you can create templated derived classes that accept arguments and insert those arguments when the overloaded operator() is called, allowing for much, much more functionality in your tasks:
//============================================================================//
void* thread_pool::execute_thread()
{
vtask* task = NULL;
while(true)
{
//--------------------------------------------------------------------//
// Try to pick a task
m_task_lock.lock();
//--------------------------------------------------------------------//
// We need to put condition.wait() in a loop for two reasons:
// 1. There can be spurious wake-ups (due to signal/EINTR)
// 2. When the mutex is released for waiting, another thread can be woken up
// from a signal/broadcast and that thread can mess up the condition.
// So when the current thread wakes up the condition may no longer be
// actually true!
while ((m_pool_state != state::STOPPED) && (m_main_tasks.empty()))
{
// Wait until there is a task in the queue
// Unlock mutex while wait, then lock it back when signaled
m_task_cond.wait(m_task_lock.base_mutex_ptr());
}
// If the thread was woken to notify process shutdown, return from here
if (m_pool_state == state::STOPPED)
{
//m_has_exited.
m_task_lock.unlock();
//----------------------------------------------------------------//
if(mad::details::allocator_list_tl::get_allocator_list_if_exists() &&
tids.find(CORETHREADSELF()) != tids.end())
mad::details::allocator_list_tl::get_allocator_list()
->Destroy(tids.find(CORETHREADSELF())->second, 1);
//----------------------------------------------------------------//
CORETHREADEXIT(NULL);
}
task = m_main_tasks.front();
m_main_tasks.pop_front();
//--------------------------------------------------------------------//
//run(task);
// Unlock
m_task_lock.unlock();
//--------------------------------------------------------------------//
// execute the task
run(task);
m_task_count -= 1;
m_join_lock.lock();
m_join_cond.signal();
m_join_lock.unlock();
//--------------------------------------------------------------------//
}
return NULL;
}
//============================================================================//
int thread_pool::add_task(vtask* task)
{
#ifndef ENABLE_THREADING
run(task);
return 0;
#endif
if(!is_alive_flag)
{
run(task);
return 0;
}
// do outside of lock because is thread-safe and needs to be updated as
// soon as possible
m_task_count += 1;
m_task_lock.lock();
// if the thread pool hasn't been initialize, initialize it
if(m_pool_state == state::NONINIT)
initialize_threadpool();
// TODO: put a limit on how many tasks can be added at most
m_main_tasks.push_back(task);
// wake up one thread that is waiting for a task to be available
m_task_cond.signal();
m_task_lock.unlock();
return 0;
}
//============================================================================//
void thread_pool::run(vtask*& task)
{
(*task)();
if(task->force_delete())
{
delete task;
task = 0;
} else {
if(task->get() && !task->is_stored_elsewhere())
save_task(task);
else if(!task->is_stored_elsewhere())
{
delete task;
task = 0;
}
}
}
In the above, each created thread runs execute_thread() until m_pool_state is set to state::STOPPED. You lock m_task_lock, and if the state is not STOPPED and the list is empty, you pass m_task_lock to the condition variable, which puts the thread to sleep and releases the lock. You create the tasks (not shown) and add them with add_task() (m_task_count is an atomic, by the way, which is why updating it is thread-safe). During add_task(), the condition is signaled to wake up one thread, which then resumes from the m_task_cond.wait(m_task_lock.base_mutex_ptr()) line of execute_thread() once m_task_lock has been re-acquired and locked.
NOTE: this is a highly customized implementation that wraps most of the pthread functions/objects into C++ classes so copy-and-pasting will not work whatsoever... Sorry. And w.r.t. the thread_pool::run(), unless you are worrying about return values, the (*task)() line is all you need.
I hope this helps.
EDIT: the m_join_* references are for checking whether all the tasks have been completed. The main thread sits in a similar condition wait that checks whether all the tasks have been completed, since the applications I use this implementation in need to know that before proceeding.
I'm trying to create an application which only allows a single instance across all Windows users.
I'm currently doing it by opening a file to write and leaving it open. Is this method safe? Do you know of an alternative method using C?
The standard solution is to create a global mutex during application startup. The first time that the app is started, this will succeed. On subsequent attempts, it will fail, and that is your clue to halt and fail to load the second instance.
You create mutexes in Windows by calling the CreateMutex function. As the linked documentation indicates, prefixing the name of the mutex with Global\ ensures that it will be visible for all terminal server sessions, which is what you want. By contrast, the Local\ prefix would make it visible only for the user session in which it was created.
int WINAPI _tWinMain(...)
{
const TCHAR szMutexName[] = TEXT("Global\\UNIQUE_NAME_FOR_YOUR_APP");
HANDLE hMutex = CreateMutex(NULL, /* use default security attributes */
TRUE, /* create an owned mutex */
szMutexName /* name of the mutex */);
if (GetLastError() == ERROR_ALREADY_EXISTS)
{
// The mutex already exists, meaning an instance of the app is already running,
// either in this user session or another session on the same machine.
//
// Here is where you show an instructive error message to the user,
// and then bow out gracefully.
MessageBox(NULL, /* owner HWND; there is no window yet */
TEXT("Another instance of this application is already running."),
TEXT("Fatal Error"),
MB_OK | MB_ICONERROR);
CloseHandle(hMutex);
return 1;
}
else
{
assert(hMutex != NULL);
// Otherwise, you're the first instance, so you're good to go.
// Continue loading the application here.
}
}
Although some may argue it is optional, since the OS will handle it for you, I always advocate explicitly cleaning up after yourself and calling ReleaseMutex and CloseHandle when your application is exiting. This doesn't handle the case where you crash and don't have a chance to run your cleanup code, but like I mentioned, the OS will clean up any dangling mutexes after the owning process terminates.
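For example, the success path's shutdown code (at the end of _tWinMain, or wherever your app tears down) would be roughly:

/* Normal-shutdown cleanup for the first instance. */
ReleaseMutex(hMutex);   /* give up the ownership taken at creation */
CloseHandle(hMutex);    /* the OS deletes the mutex once its last handle closes */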
I'm just beginning to understand how an Apache server works, and the other day I ran into a problem while programming a very simple webpage that displays a hit count for the page:
/* The simplest HelloWorld module */
#include <httpd.h>
#include <http_protocol.h>
#include <http_config.h>
static int noOfViews = 0;
static int helloworld_handler(request_rec *r)
{
if (!r->handler || strcmp(r->handler, "helloworld")) {
return DECLINED;
}
if (r->method_number != M_GET) {
return HTTP_METHOD_NOT_ALLOWED;
}
noOfViews++;
ap_set_content_type(r, "text/html;charset=ascii");
ap_rputs("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\">\n",
r);
ap_rputs("<html><head><title>Apache HelloWorld "
"Module</title></head>", r);
ap_rputs("<body><h1>Hello World!</h1>", r);
ap_rputs("<p>This is the Apache HelloWorld module!</p>", r);
ap_rprintf(r, "<p>Views: %d</p>", noOfViews);
ap_rputs("</body></html>", r);
return OK;
}
static void helloworld_hooks(apr_pool_t *pool)
{
ap_hook_handler(helloworld_handler, NULL, NULL, APR_HOOK_MIDDLE);
}
module AP_MODULE_DECLARE_DATA helloworld_module = {
STANDARD20_MODULE_STUFF,
NULL,
NULL,
NULL,
NULL,
NULL,
helloworld_hooks
};
What basically happened is when I would refresh the page, the hit counter would go up, but sometimes it would randomly drop in number. Someone told me that it was because of the way the Apache Prefork MPM worked. After reading this:
http://httpd.apache.org/docs/2.0/mod/prefork.html
I understand the problem better, but I'm still not 100% sure what's going on. The prefork MPM creates a bunch of child processes, some of them idle, and waits for clients to connect; so when I'm refreshing the page, I'm actually connecting to different child processes the server is running. However, the MPM keeps only a limited number of child processes alive at the same time, so sometimes when it kills a process my counter goes down. I'm not entirely sure if this explanation is correct or why exactly the counter drops.
All advice is appreciated.
Yes, it's either that, or one of the other Apache processes served your request when the counter went down.
You could try to configure Apache so that it only spawns exactly one child process that lives forever, but by doing that you limit Apache's capabilities.
I recommend that you try to keep your module completely stateless. If you want that hit counter, save the state in a file or a database and retrieve it from there when you need it. You could even talk to another process that keeps the hit counter in a static variable, just like your module does at the moment.
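For illustration, here is a rough sketch of a file-backed counter that survives requests landing on different children. COUNTER_PATH is a made-up location, an fcntl() write lock serializes the processes, and in a real module Apache's apr_file_* APIs would be the more idiomatic choice:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define COUNTER_PATH "/var/tmp/helloworld_hits"

static long increment_hit_count(void)
{
    int fd = open(COUNTER_PATH, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;          /* exclusive lock over the whole file */
    fl.l_whence = SEEK_SET;
    fcntl(fd, F_SETLKW, &fl);     /* block until this process owns the lock */

    char buf[32] = "0";
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n >= 0)
        buf[n] = '\0';
    long hits = strtol(buf, NULL, 10) + 1;

    /* Rewrite the file with the incremented value. */
    lseek(fd, 0, SEEK_SET);
    if (ftruncate(fd, 0) == 0)
        dprintf(fd, "%ld", hits);

    close(fd);                    /* closing the fd releases the fcntl lock */
    return hits;
}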
You are storing your hit count in the noOfViews variable, which means in the memory of a single process.
Whether under worker or prefork MPM, httpd typically spawns multiple child processes. Each will have its own memory storage for noOfViews, so you are only counting the number of hits for that process. When your request is randomly given to a different process, it has a different counter.
You will notice this more with prefork than with worker because each prefork process handles only one request at a time, while worker is threaded and may handle several; so there are many more processes under prefork than under worker. But the same thing will occur under either MPM when your requests are directed to different processes.
Also note that restarting httpd, or just killing individual processes, will lose the counter. New processes will start at a count of 0. So, this is not a good approach if your goal is to count hits globally.