Check whether a MySQL connection is open or not in Visual C++ - winforms

Sorry if this is a boring question. I have searched on several search engines but could not find any result. Anyway, I am working on an app whose database is MySQL. I have created a database wrapper class and want to check whether the connection is already open. Could you help me?
String^ constring = L"datasource=localhost;port=3306;username=root;password=pass;database=eps;";
String^ my_query = L"select id from eps_users where usr = '" + this->user_name->Text + "' and psw = md5('" + this->pass_word->Text + "');";
MySqlConnection^ conDatabase = gcnew MySqlConnection(constring);
MySqlCommand^ cmd = gcnew MySqlCommand(my_query, conDatabase);
MySqlDataReader^ myreader;
try
{
    conDatabase->Open();
    myreader = cmd->ExecuteReader();
    int count = 0;
    while (myreader->Read())
    {
        count = count + 1;
    }
    if (count == 1)
    {
        MessageBox::Show("Username And Password is correct.", "Success", MessageBoxButtons::OK,
            MessageBoxIcon::Information);
        this->Hide();
        Form2^ f2 = gcnew Form2(constring);
        f2->ShowDialog();
    }
    else
    {
        MessageBox::Show("Username And Password is not correct.", "Error", MessageBoxButtons::OK,
            MessageBoxIcon::Error);
        // removed:
        // this->Hide();
        // Form2^ f2 = gcnew Form2(constring);
        // f2->ShowDialog();
    }
}
catch (Exception^ ex)
{
    MessageBox::Show(ex->Message);
}
conDatabase->Close();
I need to check if( conDatabase->HasBeenOpened()) { conDatabase->Open();}

The MySqlConnection type implements a feature called connection pooling, which recycles the underlying connections to your database. Because of this, the best practice for connection objects is to create a brand new object for most calls to the database and dispose of it when you're done, so the pool can recycle the old connections. The process goes like this:
Create a new connection
Open the connection
Use the connection for one query/transaction
Dispose the connection
Where all four steps live within a single try/catch/finally block. (Also, the dispose step needs to happen inside the finally block!) Because you generally start with a brand new connection object, there's not typically a need to check if it's open first: you know it's closed. You also don't need to check the state after calling Open(): the method will block until it's finished, and throw an exception if it fails.
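To make that concrete, here is a minimal sketch of those four steps, in C# for brevity (the C++/CLI version has the same shape, and a using block is just the compiler-generated try/finally that disposes for you). The helper name is made up; the query and table come from your code:

using MySql.Data.MySqlClient;

int CountMatchingUsers(string constring, string userId)
{
    // 1. Create a brand new connection for this call
    using (var conDatabase = new MySqlConnection(constring))
    using (var cmd = new MySqlCommand(
        "select id from eps_users where usr = @userID;", conDatabase))
    {
        cmd.Parameters.AddWithValue("@userID", userId);

        // 2. Open it (Open blocks until it succeeds or throws)
        conDatabase.Open();

        // 3. Use it for this one query
        int count = 0;
        using (var myreader = cmd.ExecuteReader())
            while (myreader.Read())
                count++;

        return count;
    } // 4. Dispose runs here, even if an exception was thrown
}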
However, if you really are in one of the (rare!) situations where it's a good idea to preserve the connection for an extended period, you can check the state like this:
if( conDatabase->State == ConnectionState::Open)
Now, there is one other issue in that code I'd like to talk about. The issue comes down to this: what do you think will happen if I put the following into your username text box:
';DROP Table eps_users;--
If you think that it will try to execute that DROP statement in your database, you're right: it will! More subtle and damaging queries are possible, as well. This is a huge issue: there are bots that run full time crawling web sites looking for ways to abuse this, and even corporate internal desktop apps get caught from time to time. To fix this, you need to use parameterized queries everywhere you include user-provided data as part of your SQL statement.
A quick example might look like this:
String^ my_query = L"select id from eps_users where usr = @userID;";
MySqlCommand^ cmd = gcnew MySqlCommand(my_query, conDatabase);
cmd->Parameters->AddWithValue(L"@userID", this->user_name->Text);

Related

Executing a non-query requires a transaction

I migrated my code from Web API 2 to .NET 5 and now I have a problem when executing a non-query. In the old code I had:
public void CallSp()
{
    var connection = dataContext.GetDatabase().Connection;
    var initialState = connection.State;
    try
    {
        if (initialState == ConnectionState.Closed)
            connection.Open();

        connection.Execute("mysp", commandType: CommandType.StoredProcedure);
    }
    catch
    {
        throw;
    }
    finally
    {
        if (initialState == ConnectionState.Closed)
            connection.Close();
    }
}
This was working fine. After I migrated the code, I'm getting the following exception:
BeginExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
So, just before calling Execute I added:
var ct = dataContext.GetDatabase().CurrentTransaction;
var tr = ct.UnderlyingTransaction;
And passed the transaction to Execute. Alas, CurrentTransaction is null, so the above change can't be used.
So then I tried to create a new transaction by doing:
using var tr = dataContext.GetDatabase().BeginTransaction();
And this second change throws a different exception complaining that SqlConnection cannot use parallel transactions.
So now I'm in a situation where I originally had no problem at all, yet I neither have an existing transaction to pass along nor can I create a new one.
How can I make Dapper happy again?
Dapper has no opinion here whatsoever; what is unhappy is your data provider. It sounds like somewhere, somehow, your dataContext has an ADO.NET transaction active on the connection. I can't tell you where, how, or why. But: while a transaction is active on a connection, ADO.NET providers tend to be pretty fussy about having that same transaction explicitly specified on all commands that are executed on the connection. This could be because you are somehow sharing the same connection between multiple threads, or it could simply be that something with the dataContext has an incomplete transaction somewhere.
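If it turns out there really is a live transaction on that connection, the way to keep the provider happy with Dapper is to pass that transaction explicitly to Execute via its transaction parameter. A rough sketch, reusing the accessors from your own code (whether CurrentTransaction is actually populated depends on how your dataContext manages transactions):

var db = dataContext.GetDatabase();                     // accessor from the question
var tx = db.CurrentTransaction?.UnderlyingTransaction;  // null if nothing is being tracked

db.Connection.Execute(
    "mysp",
    transaction: tx,                                    // Dapper attaches this to the command
    commandType: CommandType.StoredProcedure);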

How can you run a report from the ReportServer database without building subscriptions?

I'd like to build a back-end system that allows me to run each report every night and then query the execution log to see if anything failed. I know you can build out subscriptions for these reports and define parameters, etc., but is there a way to execute each report from the ReportServer database using T-SQL without building out each subscription?
I understand that your overall goal is that you want to automate this and not have to write a subscription for every report. You say you want to do it in T-SQL, but is that required to meet your overall goal?
If you can accept, say .Net, then you can use the System.Data.SqlClient.SqlConnection and related classes to query your report server catalog and fetch a listing of all your reports.
Then you can use System.Net.WebClient or similar tool to attempt to download a pdf of your report. From there you can either read your execution log, or catch the error in the .Net Code.
EDIT
Well, since you accepted the answer, and it seems you may go this route, I'll mention that if you're not familiar with .NET, it may be a long path for you. Here are a few things to get you started.
Below is a C# function utilizing .NET that will query the report catalog. If safeImmediate is set to true, it will only capture reports that can be run immediately, as in there are no parameters or the defaults cover the parameters.
IEnumerable<string> GetReportPaths(
    string conStr,
    bool safeImmediate // as in, you can execute the report right away without parameters
) {
    using (var con = new SqlConnection(conStr))
    using (var cmd = new SqlCommand()) {
        cmd.Connection = con;
        cmd.CommandText = @"select path from catalog where type=2";
        con.Open();

        if (safeImmediate)
            cmd.CommandText = @"
                select path
                from catalog
                cross apply (select
                    params = convert(xml, Parameter).value('count(Parameters/Parameter)', 'int'),
                    defaults = convert(xml, Parameter).value('count(Parameters/Parameter/DefaultValues/Value)', 'int')
                ) counts
                where type = 2
                and params = defaults
                and path not like '%subreport%' -- this is not standard. Just works for my conventions
            ";

        using (var rdr = cmd.ExecuteReader())
            while (rdr.Read())
                yield return rdr["path"].ToString();
    }
}
The next function will download a report given proper paths passed to it:
byte[] DownloadReport (
    WebClient wc,
    string coreUrl,
    string fullReportPath,
    string parameters = "" // you won't use this but may come in handy for other uses
) {
    var pathToViewer = "ReportServer/Pages/ReportViewer.aspx"; // for typical ssrs installs
    var renderOptions = "&rs:Format=pdf&rs:Command=Render"; // return as pdf
    var url = $@"{coreUrl}/{pathToViewer}?{fullReportPath}{parameters}{renderOptions}";
    url = Uri.EscapeUriString(url); // URLs don't like certain characters, fix it
    return wc.DownloadData(url);
}
And this utilizes the functions above to find what's succeeding and what's not:
var sqlCon = "Server=yourReportServer; Database=ReportServer; Integrated Security=yes"; // or whatever
var ssrsSite = "http://www.yourSite.org";

using (var wc = new WebClient()) {
    wc.UseDefaultCredentials = true; // or whatever
    int loops = 3; // get rid of this when you're ready for prime-time
    foreach (var path in GetReportPaths(sqlCon, true)) {
        try {
            DownloadReport(wc, ssrsSite, path);
            Debug.WriteLine($"Success with: {path}");
        }
        catch (Exception ex) { // you might want to get more specific
            Debug.WriteLine($"Failed with: {path}");
        }
        if (loops-- == 0)
            break;
    }
}
Lots to learn, but it can be very beneficial. Good luck.

SQLite database connection is making changes in memory but not saving to the file - AS3 AIR

I'm trying to write to a local SQLite database using the flash.data.* classes in AIR. I'm opening a synchronous connection in CREATE mode and using the begin() and commit() methods to execute the queries. Everything seems to be executing as expected: the statement's execute() and the connection's commit() success handlers are being called, the connection object's totalChanges property is being incremented, and everything looks good except that the database file is not being written to. Any ideas what I could be doing wrong?
I don't think it's related to...
the query itself, since that was throwing errors whenever something didn't match up.
the file mode, for the same reason.
file permissions - currently set to 777.
Simplified version of the code:
var database:File = new File(File.applicationDirectory.nativePath + "//" + PATH_TO_DB);
var connection:SQLConnection = new SQLConnection();
connection.open(database, SQLMode.CREATE);
connection.begin();

var statement:SQLStatement = new SQLStatement();
statement.sqlConnection = connection;
statement.addEventListener(SQLEvent.RESULT, onQueryResult);
statement.addEventListener(SQLErrorEvent.ERROR, onQueryError);
statement.text = "INSERT INTO myCrazyTable (foo) VALUES ('bar')";
statement.execute();

connection.commit(new Responder(onCommitComplete));

function onQueryResult(event:SQLEvent):void {
    trace("Query successful"); // this is getting called
}

function onQueryError(event:SQLErrorEvent):void {
    trace("Error in query: " + event.error.message);
}

function onCommitComplete(event:SQLEvent):void {
    trace("Commit Success"); // this is getting called
    connection.close();
}

// Database isn't getting touched.
Did you check the options defined by PRAGMA? For example, http://www.sqlite.org/pragma.html#pragma_synchronous can cause this behavior.
The first thing I would ask is, how do you know it isn't being touched? Timestamp? Or are you querying for the item and not finding it?
The main reason I ask is, I'm suspicious of this code:
var database:File = new File(File.applicationDirectory.nativePath + "//" + PATH_TO_DB );
I think the // is incorrect, and I think possibly your DB is ending up somewhere other than where you think it is, most likely at the root filesystem. If my theory is correct, you are writing data to a different db than you think you are, not just "in memory".
I fixed the issue. I ended up rewriting the code for accessing the database. I'm not sure exactly where the problem was, something I must have been overlooking. Thanks for everyone's help.

SubSonic - Need to manually force connections closed?

When using Enterprise Library, there was an issue with having to manually close DB connections, because the GC, when scanning the heap, only looks for items that are out of scope.
A pooled connection that is still in use, but whose state is broken or fetching even though you have already received your results, will be kept open, and the connection handles in the pool will run out.
Thus, adding manual connection checking and forcibly closing the connections is good form.
Now, take SubSonic. With an EntLib base, I am doing the following in a finally block:
public static bool GetISOCountryCodes(out DataSet dsISOCountryCodes, out Response dbResponse)
{
    dbResponse = new Response();
    dsISOCountryCodes = new DataSet();
    StoredProcedure sp = null;
    try
    {
        sp = SPs.GetISOCountryCodes(null);
        dsISOCountryCodes = sp.GetDataSet();
        // set the response object properties
        dbResponse = new Response((int)sp.OutputValues[0]);
        return dbResponse.IsValid;
    }
    catch (System.Exception ex)
    {
        return dbResponse.IsValid;
    }
    finally
    {
        if (sp.Command != null && sp.Command.ToDbCommand().Connection != null &&
            sp.Command.ToDbCommand().Connection.State == ConnectionState.Open)
            sp.Command.ToDbCommand().Connection.Close();
    }
}
I know it's been said that you don't have to do this manually, as SubSonic will do it for you. However, I'd like to know if anyone has run into issues with SubSonic not closing connections (once again, as it uses EntLib at the root), and whether there are better ways of accomplishing this.
Obviously, in all my data caller methods, I would reference a single method, say, ConnectionCloser().
Thanks.
This post was more of a notification for discussion. However, I'm not sure if the issue has actually been resolved with v5. So essentially the answer is to continue checking in the finally block.
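For what it's worth, the check in the finally block can be pulled out into the single ConnectionCloser() helper mentioned above. A rough sketch against the SubSonic types used in the example (the helper itself is made up):

static void ConnectionCloser(StoredProcedure sp)
{
    // Close the underlying connection only if it is still open.
    if (sp == null || sp.Command == null)
        return;

    var cmd = sp.Command.ToDbCommand();
    if (cmd.Connection != null && cmd.Connection.State == ConnectionState.Open)
        cmd.Connection.Close();
}

With that in place, the finally block in each data caller reduces to finally { ConnectionCloser(sp); }.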

The file 'C:\....\.....\.....\bin\debug\128849991926295643' already exists

I'm using Visual C# 2008 Express Edition and an Express SQL database. Every time I build my solution, I get an error like the one above. Obviously the file name changes. A new file is also created every time I hit a debug point.
I have a stored proc that gets every row from a database table; it gets these rows every time the main form initialises and adds them to a generic list. Without inserting or deleting from the table, it retrieves a different number of rows each time I start my Windows application. The error started happening at the same time as the weird data retrieval issue. Any ideas at all about what can cause this?
Thanks
Jose,
Sure, here's my C# method. It retrieves every row in my table; each row has an int and an Image....
private List<ImageNumber> GetListOfKnownImagesAndNumbers()
{
    //ImageNumber imNum = new ImageNumber();
    SqlCommand sqlCommand = new SqlCommand();
    sqlCommand.Connection = _conn;
    try
    {
        MemoryStream ms = new MemoryStream();
        sqlCommand.CommandText = "usp_GetKnownImagesAndValues";
        _conn.Open();
        using (IDataReader dr = sqlCommand.ExecuteReader())
        {
            while (dr.Read())
            {
                ImageNumber imNum = new ImageNumber();
                imNum.Value = dr.IsDBNull(dr.GetOrdinal("ImageValue")) ? 0 : Convert.ToInt32(dr["ImageValue"]);
                //Turn the bitmap into a byte array
                byte[] barrImg = (byte[])dr["ImageCaptured"];
                string strfn = Convert.ToString(DateTime.Now.ToFileTime());
                FileStream fs = new FileStream(strfn,
                    FileMode.CreateNew, FileAccess.Write);
                fs.Write(barrImg, 0, barrImg.Length);
                fs.Flush();
                fs.Close();
                imNum.Image = (Bitmap)Image.FromFile(strfn);
                _listOfNumbers.Add(imNum);
            }
            dr.Close();
            _conn.Close();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        _conn.Close();
    }
    return _listOfNumbers;
}
And here's my stored proc....
ALTER PROCEDURE dbo.usp_GetKnownImagesAndValues
AS
BEGIN
    select ImageCaptured, ImageValue
    from CapturedImages
END
Thanks for looking at this. The answer in the end was to put a Thread.Sleep inside the while loop, and it started working perfectly. There may be something better I could do; I am obviously waiting for something to complete, which is why allowing more time helps here. If I knew what needed to complete, and how to detect when it had completed, then I could check for that instead of simply waiting for a short time.
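For what it's worth, the symptoms fit two well-known behaviours: DateTime.Now only advances every 10-15 ms, so a tight loop can produce the same ToFileTime() file name twice (and FileMode.CreateNew then throws the "already exists" error), and Image.FromFile keeps the file locked until the Image is disposed. Here is a sketch of the inside of the while loop that sidesteps both by building the Bitmap straight from the byte array, with no temp file at all (column names are the ones from your reader):

// Turn the image column into a Bitmap directly, without a temp file.
byte[] barrImg = (byte[])dr["ImageCaptured"];
using (var ms = new MemoryStream(barrImg))
using (var temp = Image.FromStream(ms))
{
    // Copy into a fresh Bitmap so it no longer depends on the stream.
    imNum.Image = new Bitmap(temp);
}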
