We are using the Snowflake JDBC driver to execute unload queries on Snowflake. Because unload queries can take a long time to complete, we periodically check whether execution has finished by looking up the query ID. This works well as long as the query execution and the status check happen in the same session. However, if the service restarts for any reason and creates a new connection, it can no longer get the execution status of a query that was started before the restart.
Creating the Snowflake connection:
private Properties getProperties() {
    Properties properties = new Properties();
    properties.put("user", snowflakeConfig.getUsername());
    properties.put("password", snowflakeConfig.getPassword());
    properties.put("CLIENT_SESSION_KEEP_ALIVE", "true");
    return properties;
}

private Connection getConnection() throws SQLException {
    return DriverManager.getConnection(snowflakeConfig.getUrl(), getProperties());
}
Executing the unload query:
resultSet = statement.unwrap(SnowflakeStatement.class).executeAsyncQuery(sql_command);
queryID = resultSet.unwrap(SnowflakeResultSet.class).getQueryID();
Getting the query status with the query ID returned in the step above:
resultSet = connection.unwrap(SnowflakeConnection.class).createResultSet(queryID);
queryStatus = resultSet.unwrap(SnowflakeResultSet.class).getStatus();
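A session-independent fallback would be to ask Snowflake itself through the INFORMATION_SCHEMA.QUERY_HISTORY table function rather than relying on driver session state. Here is a sketch (assuming the connecting role can read the query history and the query still falls inside the function's default window):
// Sketch: look the query up by ID from any new connection.
// Note: QUERY_HISTORY() caps its output (RESULT_LIMIT defaults to 100),
// so a busy account may need a higher limit or a BY_USER/BY_WAREHOUSE variant.
private String getQueryStatus(Connection connection, String queryID) throws SQLException {
    String sql = "SELECT execution_status FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) WHERE query_id = ?";
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setString(1, queryID);
        try (ResultSet rs = ps.executeQuery()) {
            // e.g. RUNNING, SUCCESS, FAILED_WITH_ERROR
            return rs.next() ? rs.getString(1) : null;
        }
    }
}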
Can someone help me do this the right way?
The database tool I'm writing investigates blocked queries: if the main query gets delayed, it runs a parallel query against sys.dm_exec_requests to find the cause of the delay.
That works fine if the investigating connection has the VIEW SERVER STATE permission. If not, however, sys.dm_exec_requests only contains entries for the connection it runs on - which is somewhat pointless for connections where only one query can run at a time.
Enter MARS (Multiple Active Result Sets): the first time I've thought this arcane feature might actually be useful for something.
With MARS enabled, I can run the investigating query on the same connection as the delayed query we're investigating.
However, a simple test shows that if the first MARS query is blocked, the second one apparently is as well, even if it has no reason to be.
I'm running this test code in LINQPad (with Dapper for a tighter code sample, but I get the same effect in my app, which doesn't use Dapper):
var csb = new SqlConnectionStringBuilder();
csb.TrustServerCertificate = true;
csb.DataSource = @".\";
csb.InitialCatalog = "...";
csb.IntegratedSecurity = true;
using var c0 = new SqlConnection(csb.ConnectionString);
csb.MultipleActiveResultSets = true;
using var c1 = new SqlConnection(csb.ConnectionString);
using var c2 = new SqlConnection(csb.ConnectionString);
// Begin the blocking transaction on connection #0
await c0.QueryAsync(@"
begin transaction
select * from mytable with (tablockx, holdlock)
");
// This query on connection #1 is blocked by connection #0
var blockedTask = c1.QuerySingleAsync<int>("select count(*) from mytable");
// Strangely, this second query is blocked as well
var requests = await c1.QueryAsync(@"
select session_id, cpu_time, reads, logical_reads
from sys.dm_exec_requests r
");
// We don't get here unless you swap `c1` for `c2` in the last query, making
// it run on its own connection, thus requiring VIEW SERVER STATE to be useful
requests.Dump();
await blockedTask;
You just need a database with any random table to reproduce this.
MARS allows interleaved execution of multiple requests on the same connection, not concurrent execution.
In the case of a blocked SELECT query, other queries on the same connection cannot execute until the SELECT completes or yields by returning results.
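To see the interleaving rule in action, here is a minimal sketch in the same LINQPad style (plain SqlClient commands instead of Dapper; it assumes c1 is open, MARS-enabled, and that mytable is not locked at this point): a second command can jump in once the first one yields at a row boundary, which a lock-blocked query never reaches.
// Sketch: MARS interleaves at yield points such as row boundaries.
using var big = new SqlCommand("select * from mytable", c1);
using var rdr = await big.ExecuteReaderAsync();
await rdr.ReadAsync();                    // the first request has yielded a row...

using var probe = new SqlCommand("select count(*) from sys.dm_exec_requests", c1);
var requestCount = await probe.ExecuteScalarAsync();   // ...so this one can interleave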
As per the docs, SqlCommand.CommandTimeout is:
This property is the cumulative time-out (for all network packets that
are read during the invocation of a method) for all network reads
during command execution or processing of the results. A time-out can
still occur after the first row is returned, and does not include user
processing time, only network read time.
For example, with a 30 second time out, if Read requires two network
packets, then it has 30 seconds to read both network packets. If you
call Read again, it will have another 30 seconds to read any data that
it requires.
I have the code below, which executes the stored procedure and then reads the data row by row using SqlDataReader.
public static async Task<IEnumerable<AvailableWorkDTO>> prcGetAvailableWork(this MyDBContext dbContext, int userID)
{
    var timeout = 120;
    var result = new List<AvailableWorkDTO>();
    using (var cmd = dbContext.Database.GetDbConnection().CreateCommand())
    {
        var p1 = new SqlParameter("@UserID", SqlDbType.Int)
        {
            Value = userID
        };
        cmd.CommandText = "dbo.prcGetAvailableWork";
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(p1);
        cmd.CommandTimeout = timeout;
        await dbContext.Database.OpenConnectionAsync().ConfigureAwait(false);
        using (var reader = await cmd.ExecuteReaderAsync().ConfigureAwait(false))
        {
            while (await reader.ReadAsync().ConfigureAwait(false))
            {
                var item = new AvailableWorkDTO();
                item.ID = reader.GetInt32(0);
                item.Name = reader.GetString(1);
                item.Title = reader.GetString(2);
                item.Count = reader.GetInt32(3);
                result.Add(item);
            }
        }
    }
    return result;
}
In SQL Profiler I see only one call to the stored procedure, as expected. So I am guessing the stored proc executes and returns the entire result set.
Questions
1. If SqlDataReader reads one record at a time, where is the entire result set stored while the reader is reading? Is it temporarily stored in SQL Server memory or application server memory?
2. Using EF Core, is there any way to read the entire result set at once?
The result set isn't stored anywhere; it's streamed directly to the client. As the server reads rows from disk or memory, they are fed through the query plan and out across the network. This is why you always need to make sure you read as fast as possible and dispose the reader and connection: the query is running the whole time.
To "read the entire result set at once", you just do what you are doing now: loop the reader and add it to a List. Alternatively, you could use DataTable.Load, however I do not advise this, and it is also not async.
The reader is just an object that is capable of returning individual rows from a command. What you see in the profiler is a single execution of a command. If you also monitor the SQL:BatchCompleted event, you will see that it only fires when the reader is finished.
You can use a stored procedure with EF instead of ADO.NET, but I am not sure it will be faster.
Create a special class to receive the data from the stored procedure, or use the existing AvailableWorkDTO. This class should have a property for every column in the stored procedure's select clause. You don't need to select everything in the stored procedure; just select the columns that AvailableWorkDTO has, and add the NotMapped attribute:
[NotMapped]
public class AvailableWorkDTO
{
    .....
}
After this, add a DbSet for this class to your DbContext:
public virtual DbSet<AvailableWorkDTO> AvailableWorkDTOs { get; set; }
And this is a sample function showing how to get data using the stored procedure:
public async Task<IEnumerable<AvailableWorkDTO>> prcGetAvailableWork(MyDBContext dbContext, int userID)
{
    var pId = new SqlParameter("@UserID", userID);
    return await dbContext.Set<AvailableWorkDTO>()
        .FromSqlRaw("Execute dbo.prcGetAvailableWork @UserID", pId)
        .ToArrayAsync();
}
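One caveat, depending on the EF Core version: [NotMapped] may exclude the type from the model entirely, in which case FromSqlRaw cannot materialize it. A sketch of the keyless-entity alternative (assuming EF Core 3.0 or later):
// Sketch, assuming EF Core 3.0+: map the DTO as a keyless entity type,
// so it can be materialized by FromSqlRaw without a primary key.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<AvailableWorkDTO>(b =>
    {
        b.HasNoKey();     // result-set-only type; no key needed
        b.ToView(null);   // not backed by a table or view; keeps migrations clean
    });
}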
I'm running a big dependency scan on a legacy db and see that some objects have obsolete reference links. If you run the code below in SSMS for a view that points to a non-existent table, as in my case, you get your output on the Results tab AND the error info in the Messages tab, as shown below.
I tried checking all the environment settings I know of, as well as the output of this stored procedure, but didn't see any indication of the error.
How can I capture this event, given that I'm running this in a looped dynamic SQL script and capturing the output in my table for further processing?
Updated:
It is just text in the Messages tab; on error, you still get output on the
Results tab.
This is a stored procedure that loops through the object list I took from sys.objects and runs this string (my sample below) against each object to get all dependencies and load them into a table. This call to
sys.dm_sql_referenced_entities is the only way to get inter-database
dependencies at column level, so I need to stick with it 100%.
--
Select *
From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')
--
----update------
This behavior was fixed in SQL Server 2014 SP3 and SQL Server 2016 SP2:
Starting from Microsoft SQL Server 2012, errors raised by
sys.dm_sql_referenced_entities (such as when an object has undergone a
schema change) cannot be caught in a TRY...CATCH Transact-SQL block.
While this behavior is expected in SQL Server 2012 and above, this
improvement introduces a new column that's called is_incomplete to the
Dynamic Management View (DMV).
KB4038418 - Update adds a new column to DMV sys.dm_sql_referenced_entities in SQL Server 2014 and 2016
----update-------
The tldr is that you can't capture these on the server side, and must use a client program in C#, PowerShell or some other client that can process info messages.
That DMV is doing something strange that I don't fully understand. It's generating errors (which a normal UDF is not allowed to do), and those errors do not trigger a TRY/CATCH block or set @@ERROR. For example:
create table tempdb.dbo.foo(id int)
go
create view dbo.v_View_Obs_Table
as
select * from tempdb.dbo.foo
go
drop table tempdb.dbo.foo
go
begin try
    Select * From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')
end try
begin catch
    select ERROR_MESSAGE(); --<-- not hit
end catch
However these are real errors, as you can see running this from client code:
using System;
using System.Data.SqlClient;

namespace ConsoleApp6
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var con = new SqlConnection("Server=.;database=AdventureWorks;integrated security=true"))
            {
                con.Open();
                con.FireInfoMessageEventOnUserErrors = true;
                con.InfoMessage += (s, a) =>
                {
                    Console.WriteLine($"{a.Message}");
                    foreach (SqlError e in a.Errors)
                    {
                        Console.WriteLine($"{e.Message} Number:{e.Number} Class:{e.Class} State:{e.State} at {e.Procedure}:{e.LineNumber}");
                    }
                };
                var cmd = con.CreateCommand();
                cmd.CommandText = "Select * From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')";
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read() || (rdr.NextResult() && rdr.Read()))
                    {
                        Console.WriteLine(rdr[0]);
                    }
                }
                Console.ReadKey();
            }
        }
    }
}
This outputs:
Invalid object name 'tempdb.dbo.foo'.
Invalid object name 'tempdb.dbo.foo'. Number:208 Class:16 State:3 at v_View_Obs_Table:4
0
The dependencies reported for entity "dbo.v_View_Obs_Table" might not include references to all columns. This is either because the entity references an object that does not exist or because of an error in one or more statements in the entity. Before rerunning the query, ensure that there are no errors in the entity and that all objects referenced by the entity exist.
The dependencies reported for entity "dbo.v_View_Obs_Table" might not include references to all columns. This is either because the entity references an object that does not exist or because of an error in one or more statements in the entity. Before rerunning the query, ensure that there are no errors in the entity and that all objects referenced by the entity exist. Number:2020 Class:16 State:1 at :1
How do you start a transaction in ODBC? Specifically, I happen to be dealing with SQL Server, but the question applies to any data source.
In native T-SQL, you issue the command:
BEGIN TRANSACTION
--...
COMMIT TRANSACTION
--or ROLLBACK TRANSACTION
In ADO.NET, you call:
DbConnection conn = new SqlConnection();
DbTransaction tx = conn.BeginTransaction();
//...
tx.Commit();
//or tx.Rollback();
In OLE DB you call:
IDBInitialize init = new MSDASQL();
IDBCreateSession session = (init as IDBCreateSession).CreateSession();
(session as ITransactionLocal).StartTransaction(ISOLATIONLEVEL_READCOMMITTED, 0, null, null);
//...
(session as ITransactionLocal).Commit();
//or (session as ITransactionLocal).Rollback();
In ADO you call:
Connection conn = new Connection();
conn.BeginTrans();
//...
conn.CommitTrans();
//or conn.RollbackTrans();
What about ODBC?
For ODBC, Microsoft gives a hint on their page Transactions in ODBC:
An application calls SQLSetConnectAttr to switch between the two ODBC modes of managing transactions:
Manual-commit mode
All executed statements are included in the same transaction until it is specifically stopped by calling SQLEndTran.
Which means I just need to know what parameters to pass to SQLSetConnectAttr:
HENV environment;
SQLAllocEnv(&environment);
HDBC conn;
SQLAllocConnect(environment, &conn);
SQLSetConnectAttr(conn, {attribute}, {value}, {stringLength});
//...
SQLEndTran(SQL_HANDLE_ENV, environment, SQL_COMMIT);
//or SQLEndTran(SQL_HANDLE_ENV, environment, SQL_ROLLBACK);
But the page doesn't really give any hint about which parameter will start a transaction. It might be:
SQL_COPT_SS_ENLIST_IN_XA
To begin an XA transaction with an XA-compliant Transaction Processor (TP), the client calls the Open Group tx_begin function. The application then calls SQLSetConnectAttr with a SQL_COPT_SS_ENLIST_IN_XA parameter of TRUE to associate the XA transaction with the ODBC connection. All related database activity will be performed under the protection of the XA transaction. To end an XA association with an ODBC connection, the client must call SQLSetConnectAttr with a SQL_COPT_SS_ENLIST_IN_XA parameter of FALSE. For more information, see the Microsoft Distributed Transaction Coordinator documentation.
But since I've never heard of XA, nor do I need MSDTC to be running, I don't think that's it.
maruo answered it. But to clarify:
HENV environment;
SQLAllocEnv(&environment);
HDBC conn;
SQLAllocConnect(environment, &conn);
SQLSetConnectAttr(conn, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, SQL_IS_UINTEGER);
//...
SQLEndTran(SQL_HANDLE_ENV, environment, SQL_COMMIT);
//or SQLEndTran(SQL_HANDLE_ENV, environment, SQL_ROLLBACK);
SQLSetConnectAttr(conn, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_ON, SQL_IS_UINTEGER);
ODBC can operate in two modes: AUTOCOMMIT_ON and AUTOCOMMIT_OFF. The default is AUTOCOMMIT_ON. When autocommit is ON, each command you run on a statement handle associated with that connection is auto-committed.
Let's see how "manual commit" (alias AUTOCOMMIT_OFF) works.
First, you switch autocommit off using something like this:
if (!SQL_SUCCEEDED(Or=SQLSetConnectAttr(Oc, SQL_ATTR_AUTOCOMMIT,
(SQLPOINTER)SQL_AUTOCOMMIT_OFF,
SQL_IS_UINTEGER))) {
// error handling here
}
Where "Oc" is the connection handle.
Second, you run all commands as usual: prepare/execute statements, bind parameters, and so on. There is NO specific command to "START" a transaction; all commands issued after you switch autocommit off are part of the transaction.
Third, you commit:
if (!SQL_SUCCEEDED(Or=SQLEndTran(SQL_HANDLE_DBC, Oc, SQL_COMMIT))) {
// Error handling
}
And, again, all new commands from now on are automatically part of a new transaction that you will have to commit using another SQLEndTran call, as shown above.
Finally... to switch autocommit back ON again:
if (!SQL_SUCCEEDED(Or=SQLSetConnectAttr(Oc, SQL_ATTR_AUTOCOMMIT,
(SQLPOINTER)SQL_AUTOCOMMIT_ON, SQL_IS_UINTEGER))) {
// Error Handling
}
I am using Hibernate to access my database. I would like to delete a set of rows based on a criterion. My database is PostgreSQL and my Java code is:
public void deleteAttr(String parameter) {
    Configuration cfg = new Configuration();
    cfg.configure(resource.getString("hibernate_config_file"));
    SessionFactory sessionFactory = cfg.buildSessionFactory();
    session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    tx.begin();
    String sql = "delete from attribute where timestamp > to_date('" + parameter + "','YYYY-MM-DD')";
    session.createSQLQuery(sql);
    tx.commit();
}
The method runs, but it doesn't delete any data from the database. I have also checked the SQL statement in pgAdmin and it works there, but not in the code. Why? Can someone help me?
Thanks in advance!
It's because you're creating the query but never executing it:
String sql = "delete from attribute where timestamp > to_date('" + parameter + "','YYYY-MM-DD')";
Query query = session.createSQLQuery(sql);
query.executeUpdate();
You should really use bound named parameters rather than string concatenation to pass parameters into your query: it's usually more efficient and much more robust, but above all, it doesn't open the door to SQL injection attacks.
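A sketch of the bound-parameter version (same session and tx as above; the parameter name cutoff is only illustrative):
// Sketch: bind the date as a named parameter instead of concatenating it.
String sql = "delete from attribute where timestamp > to_date(:cutoff, 'YYYY-MM-DD')";
int deleted = session.createSQLQuery(sql)
        .setParameter("cutoff", parameter)
        .executeUpdate();   // returns the number of rows deleted
tx.commit();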