Getting "Failed to start system task System Task" - Airflow DAG/SQL Server - sql-server

After I triggered the DAG task and refreshed, it went from running to delayed to failed. The Airflow error log told me to check the error on the SQL Server side, and when I checked the logs on my SQL Server Docker container I found "Failed to start system task System Task". I'm not sure whether I need to specify a schema, but the rest of the connection params are correct.
[entrypoint.sh]
"${AIRFLOW_CONN_MY_SRC_DB:=mssql+pyodbc://SA:P#SSW0RD#mssqlcsc380:1433/?driver=ODBC+Driver+17+for+SQL+Server}"
[dag.py]
import datetime as dt

from airflow import DAG
from airflow.utils.dates import days_ago
# In older Airflow versions this operator lives at airflow.operators.mssql_operator
from airflow.providers.microsoft.mssql.operators.mssql import MsSqlOperator

with DAG(
    'mssql_380_dag',
    start_date=days_ago(1),
    schedule_interval=None,
    catchup=False,
    default_args={
        'owner': 'me',
        'retries': 1,
        'retry_delay': dt.timedelta(minutes=5)
    }
) as dag:
    get_requests = MsSqlOperator(
        task_id='get_requests',
        mssql_conn_id='my_src_db',
        sql='select * from Request',
        dag=dag
    )

The issue was just that it couldn't find the table, so I specified the database in the query, which fixed it, even though I thought the database should have been recognized since I had passed it in the connection string.
sql = 'use csc380db; select * from Request',
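Worth noting: in the connection URI above, the path segment between the port and ?driver=... is empty, so no default database was actually set on the connection, which is why the table could not be resolved without the USE prefix. Two untested alternatives to prefixing every statement, sketched below (csc380db and the placeholder password are assumptions based on the post; MsSqlOperator's database argument also appears in the stored-procedure question further down):

# Alternative 1: put the database name in the URI path segment, e.g.
# mssql+pyodbc://SA:<password>@mssqlcsc380:1433/csc380db?driver=ODBC+Driver+17+for+SQL+Server

# Alternative 2: pass the database explicitly to the operator
get_requests = MsSqlOperator(
    task_id='get_requests',
    mssql_conn_id='my_src_db',
    sql='select * from Request',
    database='csc380db',
)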

Related

Snowflake Query returns Http status: Unprocessable Entity

I'm able to successfully connect to the Snowflake database through my .NET app, but I'm unable to run a SQL command due to the following error from Snowflake:
Message: Http status: UnprocessableEntity
ResponseContent:
{
  "code" : "391920",
  "message" : "Unable to run the command. You must specify the warehouse to use by either setting the warehouse field in the body of the request or by setting the DEFAULT_NAMESPACE property for the current user.",
  "sqlState" : "57P03",
  "statementHandle" : "01a8
Here is the code I'm using:
public async Task<QueryResult> QuerySnowflake(string statement, string database, string schema)
{
    // Request body for the Snowflake SQL API (api/v2/statements).
    var content = new
    {
        statement,
        database,
        schema
    };

    return await _httpClient.SnowflakePost<QueryResult>(
        $"https://{_accountId}.snowflakecomputing.com/api/v2/statements",
        content,
        await GetHeaders(),
        _cancellationToken);
}
statement = SELECT * FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
database = SNOWFLAKE_SAMPLE_DATA
schema = TPCH_SF1
I have already tried the following:
ALTER USER my_username SET DEFAULT_NAMESPACE = SNOWFLAKE_SAMPLE_DATA.TPCH_SF1
GRANT SELECT ON ALL TABLES IN SCHEMA "TPCH_SF1" TO ROLE sysadmin
ALTER USER my_username SET DEFAULT_ROLE = sysadmin
None of these changed the error response.
I don't think it needs a code change, as it works with other Snowflake accounts (I'm using a new trial account). I believe something is wrong with my account (e.g. a missing role, missing warehouse, or missing permission).
Any help would be very much appreciated.
The user does not have a default warehouse, and none is specified in the connection request or via a USE command in the session. You can try sending this command before running your SELECT:
use warehouse MY_WAREHOUSE;
You can also specify it in the connection, or specify a default for the user:
ALTER USER MY_USER SET DEFAULT_WAREHOUSE = MY_WAREHOUSE;
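Since the error text itself says the warehouse can be set "in the body of the request", the code-side fix is simply to send a warehouse field alongside statement, database, and schema. A minimal Python sketch of the same SQL API call (MY_WAREHOUSE, the account id, and the bearer token are placeholders; the C# version would just add a warehouse property to the anonymous content object):

import requests

ACCOUNT_ID = "myaccount"   # placeholder: your Snowflake account identifier
TOKEN = "..."              # placeholder: a valid OAuth or key-pair JWT token

def query_snowflake(statement, database, schema, warehouse):
    # "warehouse" is an optional field of the SQL API request body;
    # sending it avoids error 391920 when the user has no default warehouse.
    body = {
        "statement": statement,
        "database": database,
        "schema": schema,
        "warehouse": warehouse,
    }
    resp = requests.post(
        f"https://{ACCOUNT_ID}.snowflakecomputing.com/api/v2/statements",
        json=body,
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()

result = query_snowflake(
    "SELECT * FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER",
    "SNOWFLAKE_SAMPLE_DATA",
    "TPCH_SF1",
    "MY_WAREHOUSE",
)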

Run Stored Procedure in Airflow

I'm trying to run my stored procedure in Airflow. I simply imported the mssql operator and tried to execute the following:
sql_command = """ EXEC [spAirflowTest] """
t3 = MsSqlOperator( task_id = 'run_test_proc',
mssql_conn_id = 'FIConnection',
sql = sql_command,
dag = dag,
database = 'RDW')
The task completes as successful; however, the stored procedure is never actually executed. Because I get no error from the system, I cannot identify the problem either. To check whether the command reached my Microsoft SQL Server, I traced it with a data profiling tool: the server receives the command but does not execute it. Indeed, I can see the SQL command in the data profiling tool.
When I run a read-only command, like:
select *
from sys.tables
it returns successfully, with results. How can I solve this problem? Has anyone else encountered this issue?
sql_command = """ EXEC [spAirflowTest] """
t3 = MsSqlOperator( task_id = 'run_test_proc',
mssql_conn_id = 'FIConnection',
sql = sql_command,
dag = dag,
database = 'RDW',
autocommit = True)
adding autocommit as above solved the issue
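The likely mechanism behind this (general DB-API behaviour, not spelled out in the original answer): with autocommit off, the driver wraps the EXEC in an implicit transaction, and when the task's connection is closed without an explicit commit the procedure's writes are rolled back; a plain SELECT still returns rows because it has nothing to commit. A minimal sketch of the same effect with plain pyodbc (server name and credentials are placeholders):

import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver,1433;DATABASE=RDW;"
    "UID=user;PWD=secret"  # placeholders
)

# With autocommit=False (the default) the EXEC runs inside an implicit
# transaction that is rolled back if the connection closes uncommitted.
# With autocommit=True every statement is committed as it executes.
conn = pyodbc.connect(conn_str, autocommit=True)
conn.execute("EXEC [spAirflowTest]")
conn.close()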

Stored Procedure cannot run successfully in SQL Server via SQLAlchemy

What I am using
Ubuntu 16.04
Python 3.6
FreeTDS, TDS Version 7.3
SQLAlchemy 1.2.5
Windows server 2012
SQL Server 2008 Enterprise
My purpose
I am writing Python code on an Ubuntu machine to insert data and execute a stored procedure on MS SQL Server 2008. I create an order for a customer. An order may have many main ingredients and toppings. When the order is finished, I run a stored procedure to process the data into user_order and employee_order.
The stored procedure
In the stored procedure, while selecting data from the source tables and processing it, the transaction is rolled back if any error occurs.
My code snippet
import time

from sqlalchemy.orm import sessionmaker

def process():
    engine = get_engine()  # my method: gets an engine from a connection string
    session_maker = sessionmaker(
        bind=engine.execution_options(isolation_level='SERIALIZABLE'))
    session = session_maker()
    ref = 'REF0000001'
    try:
        # Create order
        order = Order(id=1, ref=ref)
        # Add main ingredients
        main1 = Main(order=1, name='coffee')
        main2 = Main(order=1, name='milk')
        # Toppings
        topup1 = TopUp(order=1, name='cookies')
        topup2 = TopUp(order=1, name='chocolate')
        session.add(order)
        session.flush()
        session.add_all([main1, main2])
        session.flush()
        session.add_all([topup1, topup2])
        session.flush()
        session.commit()
    except:
        session.rollback()
        raise  # was "reraise" in the original, which is a NameError
    finally:
        session.close()
        del session

    time.sleep(1)
    session = session_maker()
    session.execute('EXEC finish_order %a' % ref)
    session.commit()
    session.close()
    del session
And the result is:
There is no error, but there is no data in user_order and employee_order, even though the stored procedure finish_order is run.
However, if I run the stored procedure again as a simple query in a terminal or in SQL Server Management Studio, the data is imported into the destination tables.
Doubts
Is there any chance that the data has not finished inserting into the origin tables by the time the stored procedure is called?
Please help me with this case.
Thank you!
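This question has no answer in the thread, but given the autocommit fix in the previous question, one thing worth trying (an assumption on my part, not from the original post) is running the procedure on a connection with driver-level autocommit, so the procedure's own COMMIT/ROLLBACK logic is not nested inside an outer transaction the driver never commits. A sketch against SQLAlchemy 1.2:

from sqlalchemy import text

engine = get_engine()  # as in the original snippet

# Execute the procedure outside the ORM session with autocommit enabled,
# so its inserts into user_order / employee_order survive the connection
# being returned to the pool.
with engine.connect().execution_options(autocommit=True) as conn:
    conn.execute(text('EXEC finish_order :ref'), {'ref': 'REF0000001'})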

Connecting Oracle to SQL server using dg4odbc

Apologies if this question has already been asked; I just couldn't find an answer for my case.
I'm trying to make a connection (database link) between Oracle 11g and a MS SQL Server database, which are on two different servers. I've followed the instructions at this link:
http://www.dba-oracle.com/t_heterogeneous_database_connections_sql_server.htm
The only difference is that in the listener on my SQL Server machine I'm using DG4ODBC rather than hsodbc.
I've listed the steps below, but I can't figure out how to resolve the error.
1) Installed the Oracle client on my SQL Server machine.
2) Created a 64-bit ODBC data source on the SQL Server machine, called dg4odbc, pointing to my SQL Server target database.
3) Created a file called initdg4odbc.ora in D:\app\user\product\11.2.0\client_2\hs\admin with the below content:
# HS init parameters
#
HS_FDS_CONNECT_INFO = dg4odbc
HS_FDS_TRACE_LEVEL = on
4) Updated my listener to be as below:
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = tcp)(HOST = sqlserver)(PORT = 1521))
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = dg4odbc)
      (ORACLE_HOME = D:\app\user\product\11.2.0\client_2)
      (PROGRAM = dg4odbc)
    )
  )
#CONNECT_TIMEOUT_LISTENER = 0
5) When I stop and start the listener, I get the below message:
Instance "dg4odbc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
6) On my Oracle database server, I updated the tnsnames.ora file to include:
dg4odbc.world =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = sqlserver)
      (PORT = 1521)
    )
    (CONNECT_DATA = (SID = dg4odbc))
    (HS = OK)
  )
7) When I try to ping the TNS entry using tnsping dg4odbc, I get the following error:
Used parameter files:
E:\oracle\product\11.2.0.4\dbhome_1\network\admin\sqlnet.ora
TNS-03505: Failed to resolve name
Could you please tell me where I am going wrong? I'm getting a bit desperate to get this connection working.
I really appreciate your help on this.
Thanks

SQL Server Service Broker POISON_MESSAGE_HANDLING(STATUS = ON)

We are hosting one of our solutions, originally built on SQL Server 2008 R2, on a SQL Server 2008 instance (not R2). The database was created fine, but for some reason the Service Broker queues were created with:
POISON_MESSAGE_HANDLING(STATUS = OFF)
I have tried setting this to ON, but with no luck. We have always declared the queue like this:
CREATE QUEUE QueueName WITH STATUS=ON, ACTIVATION
(STATUS = ON, MAX_QUEUE_READERS = 1,
PROCEDURE_NAME = QueueProcedureName, EXECUTE AS OWNER);
Is there a way to create the queue as above with the defaults of R2?
EDIT - More Info:
This is the error message, which makes no sense, as it works fine on 2008 R2.
GO
ALTER QUEUE [Store].[UpdateStoredPublishingSegmentUsersSendQueue]
WITH POISON_MESSAGE_HANDLING(STATUS = ON);
Msg 102, Level 15, State 1, Line 2
Incorrect syntax near 'POISON_MESSAGE_HANDLING'.
This is an issue with the version of SQL Server. POISON_MESSAGE_HANDLING is not supported in versions earlier than 2008 R2. Hope this helps!
DISCLAIMER: I haven't tried the following commands, as I am running on 2005, which doesn't support the POISON_MESSAGE_HANDLING option.
Have you tried the ALTER QUEUE command after executing the CREATE?
ALTER QUEUE <queue name> WITH
    POISON_MESSAGE_HANDLING ( STATUS = ON );
Alternatively, try modifying your CREATE command like this:
CREATE QUEUE <queue name> WITH
    STATUS = ON,
    ACTIVATION (
        STATUS = ON,
        MAX_QUEUE_READERS = 1,
        PROCEDURE_NAME = <activated sproc name>,
        EXECUTE AS OWNER
    ),
    POISON_MESSAGE_HANDLING ( STATUS = ON );
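To confirm the version theory, check the instance's product version: SQL Server 2008 reports 10.0.x, while 2008 R2 (the first version with the POISON_MESSAGE_HANDLING clause) reports 10.50.x. A quick sketch using pyodbc, with placeholder connection details:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=master;UID=user;PWD=secret"  # placeholders
)
row = conn.cursor().execute(
    "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('Edition')"
).fetchone()
print(row)  # 10.0.x = SQL Server 2008 (no POISON_MESSAGE_HANDLING); 10.50.x = 2008 R2
conn.close()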
