Exception after connection to DB with mysql-native driver - database

I want to create two functions. The first one connects to the DB; the second one does a full reconnection if the first one fails.
In my experiment I turn the DB off at start, so that the connect block fails and the reconnect block is called. After that I turn the DB back on and expect the connect block to succeed, but instead I get an exception.
Here is my code:
bool connect()
{
    if(connection is null)
    {
        scope(failure) reconnect(); // call reconnect if fail
        this.connection = mydb.lockConnection();
        writeln("connection done");
        return true;
    }
    else
        return false;
}
void reconnect()
{
    writeln("reconnection block");
    if(connection is null)
    {
        while(!connect) // keep looping until the connection is established
        {
            Thread.sleep(3.seconds);
            connectionsAttempts++;
            logError("Connection to DB is not active...");
            logError("Reconnection to DB attempt: %s", connectionsAttempts);
            connect();
        }
        if(connection !is null)
        {
            logWarn("Reconnection to DB server done");
        }
    }
}
The log (turning the DB on after a few seconds):
reconnection block
reconnection block
connection done
Reconnection to DB server done
object.Exception#C:\Users\Dima\AppData\Roaming\dub\packages\vibe-d-0.7.30\vibe-d\source\vibe\core\drivers\libevent2.d(326): Failed to connect to host 194.87.235.42:3306: Connection timed out [WSAETIMEDOUT ]
I can't understand why I am getting an exception after: Reconnection to DB server done

There are two main problems here.
First of all, there shouldn't be any need for automatic retry attempts at all. If it didn't work the first time, and you don't change anything, there's no reason the exact same thing should suddenly work the second time. If your network is that unreliable, then you have much bigger problems.
Secondly, if you are going to automatically retry anyway, that code's not going to work:
For one thing, reconnect is calling connect TWICE on every failure: Once at the end of the loop body and then immediately again in the loop condition regardless of whether the connection succeeded. That's probably not what you intended.
But more importantly, you have a potentially-infinite recursion going on there: connect calls reconnect if it fails. Then reconnect calls connect repeatedly, and each of those calls to connect calls reconnect AGAIN on failure, looping forever until the connection configuration that didn't work somehow magically starts working (or, perhaps more likely, until you blow the stack and crash).
Honestly, I'd recommend simply throwing that all away: Just call lockConnection (if you're using vibe.d) or new Connection(...) (if you're not using vibe.d) and be done with it. If your connection settings are wrong, then trying the same connection settings again isn't going to fix them.
lockConnection -- Is there supposed to be a matching "unlock"? – Rick James
No, the connection pool in question comes from vibe.d. When the fiber which locked the connection exits (usually meaning "when your server is done processing a request"), any connections the fiber locked automatically get returned to the pool.
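For illustration, here is a minimal hedged sketch of that simpler approach, reusing the question's mydb pool; the connection-string values in the non-vibe.d variant are placeholders, and the exact constructor form should be checked against the mysql-native docs:
// With vibe.d: just lock a connection where you need it; the fiber returns it
// to the pool automatically when it exits. If the settings are wrong this
// throws, and retrying the same settings won't change that.
auto conn = mydb.lockConnection();

// Without vibe.d: construct the connection directly (placeholder settings).
auto conn2 = new Connection("host=localhost;port=3306;user=me;pwd=secret;db=mydb");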


How to send measurements to a server and receive data from sensor at the same time?

I'm trying to process data from my sensors and, simultaneously, upload data to the server (ThingSpeak).
The problem is that whenever the server connection (over WiFi) ends (and I couldn't find a way to extend my session to prevent a timeout), reconnecting takes time, and during that time I can't process the data from the sensors, resulting in occasional holes in my data.
I heard there's some way of resolving this problem by using callback functions: somehow making the core wait for the response from the server each time I try to connect, while at the same time processing the data I'm getting from the sensor.
My code right now is like this
loop
{
    while(now == prev)
    {
        processdata;
    }
    prev = now;
    count++;
    if(count == 15)
    {
        count = 0;
        senddata();
    }
}
senddata()
{
    if(!serverconnected)
    {
        if(!send connect request()) error message;         // after this function is called,
        if(!receive connection confirmed()) error message; // it takes too long until this function finishes executing.
    }
    send data.
}
The actual function names for the commented parts are
client.connect(host, port)
client.verify(fingerprint, host)
which are functions from
WiFiClientSecure.h
Is there any way to use callback methods to fix this issue?
While searching for the solution, I found the following header file
espconn.h
which seems to have callback functions that I could use... but I'm not sure whether it uses a different method of establishing the WiFi connection to the server, nor how to use the functions themselves.
As long as you use the REST API you will not be able to comfortably keep the session alive. So you are better off with a WebSocket- or MQTT-like protocol, where the session is handled for you and you are only responsible for pushing the data to the server whenever you need to.
This link describes how to make an MQTT client connection to ThingSpeak and push data to it.
Some code snippets from the link:
#include <PubSubClient.h>
WiFiClient client;
PubSubClient mqttClient(client);
const char* server = "mqtt.thingspeak.com";
mqttClient.setServer(server, 1883);
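Continuing from the snippet above, here is a hedged sketch of how connecting and publishing might look; the client ID, channel ID, write API key and the topic format ("channels/<channelID>/publish/<writeAPIKey>") are assumptions to verify against the ThingSpeak MQTT documentation:
// Placeholders: substitute your own ThingSpeak channel ID and write API key.
const char* channelID   = "000000";
const char* writeAPIKey = "XXXXXXXXXXXXXXXX";

void publishReading(float value)
{
    if (!mqttClient.connected())
    {
        // Try to (re)connect; this returns false on failure, so the main loop
        // can keep processing sensor data instead of waiting on the server.
        mqttClient.connect("myClientID");
    }
    if (mqttClient.connected())
    {
        // Assumed ThingSpeak publish topic: channels/<channelID>/publish/<writeAPIKey>
        String topic   = String("channels/") + channelID + "/publish/" + writeAPIKey;
        String payload = String("field1=") + String(value);
        mqttClient.publish(topic.c_str(), payload.c_str());
    }
    mqttClient.loop(); // lets PubSubClient service MQTT traffic between publishes
}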

Winsock 15 connections at the same time becomes unstable

My server is listening on port 1234 for incoming connections. I made an array of sockets, and I loop through that array looking for an already free socket (Closed = 0), growing the array to hold a new incoming socket if no free socket is available. This index is later used to identify each connection on its own. I'm not sure if that is a good approach, but it works fine and is stable until I stress my server with about 15 clients opened at the same time. The problem I am facing is that some of the client apps get a connection time-out, and the server becomes unstable while handling those multiple connections at the same time. I am not sending much data, only 20 bytes on each connect event received from the server. I have tried to increase the Backlog value on the listen call, but that didn't help either.
I have to wait longer for those clients to connect, but they connect eventually, and the server still responds to new connections later on (if I close all those client apps, for example, or open a new client app, it will connect immediately). Also, these connections stay open and I do not close the socket from the client side.
I am using the CSocketPlus class.
Connection request event
Private Sub Sockets_ConnectionRequest(ByVal Index As Variant, ByVal requestID As Long)
    Dim sckIndex As Integer
    sckIndex = GetFreeSocketIndex(Sockets)
    Sockets.Accept sckIndex, requestID
End Sub
Function GetFreeSocketIndex(Sockets As CSocketPlus) As Integer
    Dim i As Integer, blnFound As Boolean
    If Sockets.ArrayCount = 1 Then 'First we check if we have not loaded any arrays yet (First connector)
        Sockets.ArrayAdd 1
        GetFreeSocketIndex = 1
        ReDim UserInfo(1)
        Exit Function
    Else
        'Else we loop through all arrays and find a free (Not connected) one
        For i = 1 To Sockets.ArrayCount - 1
            If Sockets.State(i) <> sckConnected Then
                'Found one, use it
                blnFound = True
                Sockets.CloseSck i
                GetFreeSocketIndex = i
                If UBound(UserInfo) < i Then
                    ReDim Preserve UserInfo(i)
                End If
                Exit Function
            End If
        Next i
        'Didn't find any, load one and use it
        If blnFound = False Then
            Sockets.ArrayAdd i
            Sockets.CloseSck i
            GetFreeSocketIndex = i
            ReDim Preserve UserInfo(i)
        End If
    End If
End Function
Does anyone know why there are performance issues when multiple connections occur at the same time? Why does the server become slow to respond?
What would make the server accept connections faster?
I have also tried not accepting the connection, i.e. not calling Accept in the connection request event, but it is still the same issue.
EDIT: I added a debug variable to print the socket number in the FD_ACCEPT event in the CSocket class, and it seems that WndProc is delayed in posting the messages when there are a lot of connections.
OK, the problem seems to be my connection. I moved my server to my RDP machine, which has 250 Mbps download and 200 Mbps upload speed, and it seems to work very well there. I tested it with 100 clients making connections and every one of them connected immediately! I wonder why I have such issues when my home connection is 40/40 Mbps... hmm. Does anyone know why that happens?
Edit: It seems to be a router option!
Enable Firewall
Enable DoS protection
I disabled the firewall (just for testing purposes) and everything works flawlessly!
So basically the router thinks there is some kind of DoS attack and slows down the traffic.

C Multithreading - Sqlite3 database access by 2 threads crash

Here is a description of my problem:
I have 2 threads in my program: the main thread, and another one that I create using pthread_create.
The main thread performs various functions on an sqlite3 database. Each function opens the database to perform the required actions and closes it when done.
The other thread simply reads from the database after a set interval of time and uploads it onto a server. The thread also opens and closes the database to perform its operation.
The problem occurs when both threads happen to have the database open. If one finishes first, it closes the database, causing the other to crash and making the application unusable.
Main requires the database for every operation.
Is there a way I can prevent this from happening? A mutex is one way, but if I use a mutex it will make my main thread useless. The main thread must remain functional at all times, and the other thread runs in the background.
Any advice to make this work would be great.
I did not provide snippets, as this problem is a bit too broad for that, but if anything about the problem is unclear, please do let me know.
EDIT:
static sqlite3 *db = NULL;
Code snippet for opening the database:
int open_database(char* DB_dir) // argument is the db path
{
    int rc = sqlite3_open(DB_dir, &db);
    if( rc )
    {
        //failed to open message
        sqlite3_close(db);
        db = NULL;
        return SDK_SQL_ERR;
    }
    else
    {
        //success message
    }
    return SDK_OK;
}
And to close the db:
int close_database()
{
    if(db != NULL)
    {
        sqlite3_close(db);
        db = NULL;
        //success message
    }
    return 1;
}
EDIT: I forgot to add that the background thread performs one single write operation, updating one field of the table for each row it uploads to the server.
Have your threads each use their own database connection. There's no reason for the background thread to affect the main thread's connection.
Generally, I would want to be using connection pooling, so that I don't open and close database connections very frequently; connection opening is an expensive operation.
In application servers we very often have many threads, and we find that a connection pool of a few tens of connections is sufficient to service requests on behalf of many hundreds of users.
Basically, sqlite3 has mechanisms built in to provide locking... BEGIN EXCLUSIVE, and you can also register a sleep callback so that the other thread can do other things...
see sqlite3_busy_handler()
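As a hedged sketch of the per-thread-connection suggestion (the readings table, uploaded column and 5-second timeout below are made up for illustration): each thread opens its own handle and sets a busy timeout, so a write that hits the other connection's lock waits briefly instead of failing:
#include <sqlite3.h>
#include <stdio.h>

/* Each thread calls this and keeps its own handle; no shared static db pointer. */
static sqlite3 *open_own_connection(const char *db_path)
{
    sqlite3 *handle = NULL;
    if (sqlite3_open(db_path, &handle) != SQLITE_OK)
    {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(handle));
        sqlite3_close(handle);
        return NULL;
    }
    /* If the other thread holds the write lock, retry for up to 5 seconds
       instead of getting SQLITE_BUSY back immediately. */
    sqlite3_busy_timeout(handle, 5000);
    return handle;
}

/* Background thread: mark uploaded rows using its own connection. */
static void mark_uploaded(sqlite3 *handle)
{
    sqlite3_exec(handle,
                 "UPDATE readings SET uploaded = 1 WHERE uploaded = 0;",
                 NULL, NULL, NULL);
}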

java.sql.SQLRecoverableException: Connection is already in use

In my Java code I am processing a huge amount of data, so I moved the code, as a servlet, to an App Engine cron job. Some days it works fine. Once the amount of data increases, the cron job stops working and shows the following error message.
2012-09-26 04:18:40.627
'ServletName' 'MethodName': Inside SQLExceptionjava.sql.SQLRecoverableException:
Connection is already in use.
I 2012-09-26 04:18:40.741
This request caused a new process to be started for your application, and thus caused
your application code to be loaded for the first time. This request may thus take
longer and use more CPU than a typical request for your application.
W 2012-09-26 04:18:40.741
A problem was encountered with the process that handled this request, causing it to
exit. This is likely to cause a new process to be used for the next request to your
application. If you see this message frequently, you may be throwing exceptions during
the initialization of your application. (Error code 104)
How do I handle this problem?
This exception is typical when a single connection is shared between multiple threads. This will in turn happen when your code does not follow the standard JDBC idiom of acquiring and closing the DB resources in the shortest possible scope in the very same try-finally block like so:
public Entity find(Long id) throws SQLException {
    Connection connection = null;
    // ...
    try {
        connection = dataSource.getConnection();
        // ...
    } finally {
        // ...
        if (connection != null) try { connection.close(); } catch (SQLException ignore) {}
    }
    return entity;
}
Your comment on the question,
#TejasArjun i used connection pooling with servlet Init() method.
doesn't give me the impression that you're doing it the right way. It suggests that you're obtaining a DB connection in the servlet's init() method and reusing the same one across all HTTP requests in all HTTP sessions. That is absolutely not right. A servlet instance is created/initialized only once during the webapp's startup and is reused for the entire remainder of the application's lifetime. This at least explains the exception you're facing.
Just rewrite your JDBC code according to the standard try-finally idiom as demonstrated above and you should be all set.
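On Java 7 and newer, the same acquire-and-release-in-the-shortest-scope idiom can also be written with try-with-resources. This is a minimal sketch assuming a dataSource field as in the snippet above; the SQL, table/column names and Entity constructor are made up for illustration:
public Entity find(Long id) throws SQLException {
    // The connection is acquired per call and closed automatically,
    // even when an exception is thrown, so it is never shared between requests.
    String sql = "SELECT id, name FROM entity WHERE id = ?"; // hypothetical table/columns
    try (Connection connection = dataSource.getConnection();
         PreparedStatement statement = connection.prepareStatement(sql)) {
        statement.setLong(1, id);
        try (ResultSet resultSet = statement.executeQuery()) {
            return resultSet.next()
                ? new Entity(resultSet.getLong("id"), resultSet.getString("name"))
                : null;
        }
    }
}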
See also:
Is it safe to use a static java.sql.Connection instance in a multithreaded system?

Why is my using statement not closing connection? [duplicate]

Possible Duplicate:
Entity framework 4 not closing connection in sql server 2005 profiler
Well, lots of developers on Stack Overflow are saying that I should not worry about closing my connection: my using statement will close the connection for me, here and here and all over the site. Unfortunately, I do not see that happening. Here is my code:
[Test, Explicit]
public void ReconnectTest()
{
    const string connString = "Initial Catalog=MyDb;Data Source=MyServer;Integrated Security=SSPI;Application Name=ReconnectTest;";
    for (int i = 0; i < 2000; i++)
    {
        try
        {
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                using (var command = conn.CreateCommand())
                {
                    command.CommandText = "SELECT 1 as a";
                    command.CommandType = CommandType.Text;
                    command.ExecuteNonQuery();
                }
                //conn.Close();
                // optional breakpoint 1 below
            }
        }
        catch (SqlException e)
        {
            // breakpoint 2 below
            Console.WriteLine(e);
        }
        // breakpoint 3 below
    }
}
When I enable all breakpoints and start my test, the first iteration succeeds, and I hit breakpoint 3. At this point, the connection is still open: I see it in the Profiler, and sp_who2 outputs it too.
Let's suppose that at this point I am out for lunch, and my connection is idle. As such, our production server kills it. To imitate that, I kill the connection from SSMS.
So, when I hit F5 and run the second iteration, my connection is killed. Unfortunately, it does not reopen automatically, so ExecuteNonQuery throws the following exception: "transport-level error has occurred". When I run the third iteration, my connection actually opens: I see it as an event in Profiler, and sp_who2 outputs it as well.
Even after I uncomment my conn.Close() call, the connection still does not close, and when I kill it from SSMS, the next iteration still blows up.
What am I missing? Why doesn't the using statement close my connection? And why doesn't Open() actually open it the first time, yet succeed the next time?
This question originated from my previous one.
When you call SqlConnection.Dispose(), which you do because of the using block, the connection is not closed per se. It is released back to the connection pool.
In order to avoid constantly building/tearing down connections, the connection pool will keep connections open for your application to use. So it makes perfect sense that the connection would still show as being open.
What's happening after that, I can't explain offhand - I know that keeping a random connection open would not cause that, though, because your application can certainly make more than a single concurrent connection.
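One hedged suggestion of my own, not part of the answer above: if the pooled physical connection was killed on the server, you can clear the pool when you catch the resulting SqlException, so the next Open() creates a fresh connection instead of handing back the dead one. For example, the catch block in the test above could become:
catch (SqlException e)
{
    // breakpoint 2 below
    Console.WriteLine(e);
    // Discard the pooled physical connections associated with this connection
    // string; the next conn.Open() will then create a new one.
    SqlConnection.ClearPool(new SqlConnection(connString));
}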
