I am trying to use the Oracle SQL Access Advisor utility to get recommendations on performance tuning and partitioning of tables. But when I try to use it, it gives me no results and says there are no results for the task.
I am using the code below to generate the recommendations. This is an example from the SCOTT schema, which also gives me the same error:
declare
  v_sql         varchar2(2000) := 'select * from emp where empno in (7369,7499)';
  v_tuning_task varchar2(200)  := 'tune_task_advisor_view7';
  v_tune_result clob;
begin
  dbms_advisor.quick_tune(dbms_advisor.sqlaccess_advisor, v_tuning_task, v_sql);
  dbms_advisor.reset_task(v_tuning_task);
  dbms_advisor.set_task_parameter(v_tuning_task, 'ANALYSIS_SCOPE', 'ALL');
  dbms_advisor.set_task_parameter(v_tuning_task, 'STORAGE_CHANGE', '10000000');
  dbms_advisor.set_task_parameter(v_tuning_task, 'MODE', 'COMPREHENSIVE');
  dbms_output.put_line('Quick Tune Completed');
end;
/
SELECT DBMS_ADVISOR.get_task_script ('tune_task_advisor_view7') AS script FROM dual;
The Error --
SELECT DBMS_ADVISOR.get_task_script ('tune_task_advisor_view7') AS script FROM dual
Error report -
SQL Error: ORA-13631: The most recent execution of task tune_task_advisor_view7 contains no results.
ORA-06512: at "SYS.PRVT_ADVISOR", line 3350
ORA-06512: at "SYS.DBMS_ADVISOR", line 641
ORA-06512: at line 1
13631. 00000 - "The most recent execution of task %s contains no results."
*Cause: The user attempted to create a report or script from a task that
has not successfully completed an execution.
*Action: Execute the task and then retry the operation
One more thing: when I comment out all the SET_TASK_PARAMETER calls, it runs, but it does not provide any recommendations. The output looks like this:
SCRIPT
--------------------------------------------------------------------------------
Rem SQL Access Advisor: Version 11.2.0.3.0 - Production
Rem
Rem Username:
Is there any parameter I have missed defining here?
As for RESET_TASK: if I do not call it, I get an error for that as well, so I have used it here.
Thanks in advance.
When I get this error, it typically means that DBMS_ADVISOR does not have any advice for the profile based on its diagnostics, so it doesn't return a report. I typically don't try to produce an ADDM advisor report, but rather look for all the hash plans I can find for the query, run an explain plan on those hash values, and then profile the one with the best plan.
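For example, a minimal sketch of pulling those plans, assuming the statement is still in the cursor cache or captured in AWR (the sql_id shown is the one used in the steps further down; substitute your own):
-- Plan(s) currently in the cursor cache for the sql_id:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('3mx8whn1c5jbb', NULL, 'TYPICAL'));
-- All plans captured in AWR for the sql_id (NULL plan_hash_value lists them all):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('3mx8whn1c5jbb', NULL, NULL, 'TYPICAL'));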
I forgot to execute the task!
exec dbms_advisor.execute_task('tune_task_advisor_view7');
and then
SELECT DBMS_ADVISOR.get_task_script ('tune_task_advisor_view7') AS script FROM dual;
The output will be:
SCRIPT
--------------------------------------------------------------------------------
Rem SQL Access Advisor: Version 12.1.0.2.0 - Production
Rem
Rem Username: CHRIS
Rem Task: tune_task_advisor_view7
Rem Execution date:
Rem
/* RETAIN INDEX "SCOTT"."PK_EMP" */
Find the address and hash value for the query's SQL_ID:
select address, hash_value, plan_hash_value from v$sqlarea where sql_id = '3mx8whn1c5jbb';
Purge the plan out of the cursor cache (only if it’s determined that it’s the wrong plan). Statistics should be up to date prior to purging out the plan.
exec sys.dbms_shared_pool.purge('<address>,<hash_value>','C');  -- template; concrete call below
exec sys.dbms_shared_pool.purge('00000005DF37F740,46318955','C');
Create the tuning task if one does not exist:
variable stmt_task VARCHAR2(64);
SET SERVEROUTPUT ON LINESIZE 200 PAGESIZE 20000 LONG 9999
EXEC :stmt_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '3mx8whn1c5jbb',task_name => '3mx8whn1c5jbb_AWR_tuning_task');
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('3mx8whn1c5jbb_AWR_tuning_task', 'TEXT', 'TYPICAL', 'FINDINGS') FROM DUAL;
If a tuning plan is reported, accept the new profile:
DECLARE
sqlprofile_name VARCHAR2(30);
BEGIN
sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE (
task_name => '3mx8whn1c5jbb_AWR_tuning_task'
, name => 'sql_profile_1'
, force_match => true
);
END;
/
If creating a profile doesn’t work or if the plan does not exist in the cursor cache, check the AWR repository to see if a better plan exists.
If you have an AWR repository with the correct plan, you can rerun the queries above and just use the option to get the hash and sql_id from the AWR repository.
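A minimal sketch of that check, assuming the AWR retention window still covers the period when the good plan ran (the sql_id is the one from the steps above):
select sql_id, plan_hash_value,
       sum(executions_delta) as execs,
       round(sum(elapsed_time_delta) / nullif(sum(executions_delta), 0) / 1000, 1) as avg_elapsed_ms
from dba_hist_sqlstat
where sql_id = '3mx8whn1c5jbb'
group by sql_id, plan_hash_value
order by avg_elapsed_ms;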
If that doesn't work, then I use ADDM (as you did above).
I'm testing Snowflake. To do this, I created an instance of Snowflake on GCP.
One of the tests is the daily load of data from a STORAGE INTEGRATION.
To do that, I generated the STORAGE INTEGRATION and the stage.
I tested the copy:
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*'
and everything works fine.
Now it's time to test the daily scheduling with the task statement.
I created this task:
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*';
And I enabled it:
alter task schedule_regioni resume;
I got no errors, but the task doesn't load any data.
To resolve the issue, I had to put the copy into a stored procedure and have the task call the stored procedure instead of running the copy directly:
DROP TASK schedule_regioni;
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
call sp_upload_c19_regioni();
The question is: is this desired behavior, or is it an issue (as I suppose)?
Can someone give me some information about this?
I've just tried this (but with the storage integration and stage on AWS S3) and it works fine for me, even using the copy command directly in the SQL part of the task, without calling a stored procedure.
In order to start investigating the issue, I would check the following info. (For debugging, I might first alter the task to run every few minutes instead of waiting for the daily schedule.)
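A minimal sketch of that, reusing the task name from the question:
-- A task must be suspended before its schedule can be altered:
alter task schedule_regioni suspend;
alter task schedule_regioni set schedule = '5 MINUTE';
alter task schedule_regioni resume;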
Check task_history and verify the executions:
select *
from table(information_schema.task_history(
scheduled_time_range_start=>dateadd('hour',-1,current_timestamp()),
result_limit => 100,
task_name=>'YOUR_TASK_NAME'));
If the previous step is successful, check copy_history and verify that the input file name, target table, and number of records/errors are the expected ones:
SELECT *
FROM TABLE (information_schema.copy_history(TABLE_NAME => 'YOUR_TABLE_NAME',
start_time=> dateadd(hours, -1, current_timestamp())))
ORDER BY 3 DESC;
Check whether the results are the same as those you get when the task with the stored procedure call is executed.
Please also confirm that you are loading new files that have not yet been loaded into your table with the COPY command (otherwise you need to specify the FORCE = TRUE parameter in the copy command, or remove the load metadata by truncating your target table, in order to reload the same files).
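For instance, a hedged example of forcing a reload of files already recorded in the load metadata, reusing the stage and pattern from the question:
copy into DEMO_DB.PUBLIC.DATA_BY_REGION
from @sg_gcs_covid
pattern = '.*data_by_region.*'
force = TRUE;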
I have two tables in SQL Server, one of which is the source for a MERGE operation into the other.
The source table has 30 million records.
The target table has 180 million records. Both tables have 227 columns.
I do have SSIS, but I'm told that in this case a MERGE statement is the better option. Below is a shortened version of it:
;WITH MySource as (
SELECT * FROM [STAGE].[dbo].[STAGE_TABLE]
)
MERGE [EDW].[dbo].[TARGET_TABLE] AS MyTarget
USING MySource
ON MySource.[ID_FIELD] = MyTarget.[ID_FIELD]
AND MySource.[LoadDate] >= MyTarget.[LoadDate]
WHEN MATCHED THEN UPDATE SET
<<Target Column>> = MySource.<<Source Columns>> --227 columns
WHEN NOT MATCHED THEN INSERT
(
[ID_FIELD],
[LoadDate],
<<225 Other Columns>>
)
VALUES (
MySource.[ID_FIELD],
MySource.[LoadDate],
MySource.<<225 other columns>>
);
The only change I made to the script above is truncating the list of columns to keep the code block here short.
My problem is that the execution hangs. The profiler screen shows a CXPACKET suspension with the error: cwaitpipenewrow, node=2.
How do I troubleshoot this? Thank you.
CXPACKET with a suspended state generally means that some parallel threads have completed their work and are waiting on other threads that have not completed yet.
Please check the links below. The query needs to update up to a billion values in the table, hence it will be a slow-running query.
https://dba.stackexchange.com/questions/96346/cxpacket-suspended-and-null-wait-type
https://www.sqlshack.com/troubleshooting-the-cxpacket-wait-type-in-sql-server/
Hope these articles help you debug.
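As a concrete starting point, here is a hedged sketch (the session id 123 is a placeholder for the SPID running the MERGE) showing what each parallel thread is waiting on, plus a query hint you can use to test the MERGE without parallelism:
-- What each thread of the session is waiting on right now:
SELECT session_id, exec_context_id, wait_type, wait_duration_ms,
       blocking_session_id, resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id = 123; -- placeholder: use the SPID of the MERGE session

-- As a test, run the MERGE serially by appending a hint before its final semicolon:
-- ... ) OPTION (MAXDOP 1);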
I am trying to use GO to get R to pull a multipart query from a SQL Server database, but R keeps erroring out on me when I try this. Does anyone know a workaround to get RODBC to run multipart queries?
Example query:
query2 = "IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL
DROP TABLE #ATTTempTable
GO
SELECT
* INTO #ATTTempTable
FROM ETL.ATT.fact_responses fr
WHERE fr.ResponseDateTime > '2015-07-06'
"
channel <- odbcConnect("<host name>", uid="<uid>", pwd="<pwd>")
raw = sqlQuery(channel, query2)
close(channel)
and the result:
> raw
[1] "42000 102 [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Incorrect syntax near 'GO'."
[2] "[RODBC] ERROR: Could not SQLExecDirect 'IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL\n DROP TABLE #ATTTempTable\n\nGO\n\nSELECT\n\t* INTO #ATTTempTable\nFROM ETL.ATT.fact_responses fr\nWHERE fr.ResponseDateTime > '2015-07-06'\n'"
>
Because your query contains multiple statements with conditional logic, it resembles a stored procedure.
Simply save that stored procedure in SQL Server:
CREATE PROCEDURE sqlServerSp @ResponseDateTime nvarchar(10)
AS
BEGIN
    -- suppresses affected-rows messages so RODBC returns a dataset
    SET NOCOUNT ON;

    IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL
        DROP TABLE #ATTTempTable;

    -- runs the make-table action query
    SELECT * INTO #ATTTempTable
    FROM ETL.ATT.fact_responses fr
    WHERE fr.ResponseDateTime > @ResponseDateTime;

    -- return the rows so sqlQuery() receives a result set
    SELECT * FROM #ATTTempTable;
END
GO
And then run the stored procedure in R. You can even pass parameters like the date:
channel <- odbcConnect("<host name>", uid="<uid>", pwd="<pwd>")
raw = sqlQuery(channel, "EXEC sqlServerSp #ResponseDateTime='2015-07-06'")
close(channel)
You can't. See https://msdn.microsoft.com/en-us/library/ms188037.aspx
You have to divide your query into two statements and run them separately.
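For example, a minimal sketch of the split. GO is a client-side batch separator, not T-SQL, so each batch below must be sent as its own sqlQuery() call; running both on the same connection keeps the #temp table alive between calls:
-- Batch 1: sent as its own sqlQuery() call
IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL
    DROP TABLE #ATTTempTable;

-- Batch 2: sent as a separate sqlQuery() call on the same connection
SELECT * INTO #ATTTempTable
FROM ETL.ATT.fact_responses fr
WHERE fr.ResponseDateTime > '2015-07-06';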
I am building a SQL publish script that will be used to generate a database on our internal servers and then used externally by our client.
The problem I have is that our internal script automates quite a few things for us, whereas in the actual production environment these steps must be completed manually.
For example, internally we would use the following script:
-- Global variables
:setvar EnvironmentName 'Local'
-- Script.PostDeployment.sql
:r .\PopulateDefaultValues.sql
IF ($(EnvironmentName) = 'Test')
BEGIN
:r .\GivePermissionsToDevelopmentTeam.sql
:r .\PopulateTestData.sql
:r .\RunETL.sql
END
ELSE IF ($(EnvironmentName) = 'Client_Dev')
BEGIN
:r .\GivePermissionsToDevWebsite.sql
END
This would generate a script like this:
-- (Ignore syntax correctness, it's just the process I'm after)
IF($(EnvironmentName) = 'Test')
BEGIN
CREATE LOGIN [Developer1] AS USER [MyDomain\Developer1] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer2] AS USER [MyDomain\Developer2] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer3] AS USER [MyDomain\Developer3] WITH DEFAULT SCHEMA=[dbo];
-- Populate entire database (10000's of rows over 100 tables)
INSERT INTO Products ( Name, Description, Price ) VALUES
( 'Cheese Balls', 'Cheesy Balls ... mm mm mmmm', 1.00),
( 'Cheese Balls +', 'Cheesy Balls with a caffeine kick', 2.00),
( 'Cheese Squares', 'Cheesy squares with a hint of ginger', 2.50);
EXEC spRunETL 'AUTO-DEPLOY';
END
ELSE IF($(EnvironmentName) = 'Client_Dev')
BEGIN
CREATE LOGIN [WebLogin] AS USER [FABRIKAM\AppPoolUser];
END
END IF
This works fine for us. But when the script is taken on site, it fails because it cannot authenticate the users from our internal environment.
One thought regarding permissions was to just give our internal team sysadmin privileges, but that still leaves the test data filling up the script. When going on site, all of this test data just bloats the published script and isn't used anyway.
Is there any way to exclude a section entirely from a published file, so that all of the test data and extraneous inserts are removed, without any manual intervention in the published file?
Unfortunately, there is currently no way to remove the contents of a referenced script from the generated file entirely.
The only way to achieve this is to post-process the generated script (Powershell/Ruby/scripting language of choice) to find and remove the parts you care about using some form of string and file manipulation.
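One hedged convention for that (assuming you control the post-deployment script's contents) is to wrap the internal-only section in sentinel comments that the post-processing step searches for and cuts:
-- BEGIN INTERNAL-ONLY (stripped by post-processing before client delivery)
:r .\GivePermissionsToDevelopmentTeam.sql
:r .\PopulateTestData.sql
:r .\RunETL.sql
-- END INTERNAL-ONLY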
This is based on my experience doing this exact same thing to remove a development-environment-only script which was sizable and bloated the production deployment script with a lot of 'noise', making it harder for DBAs to review the script sensibly.
I'm trying to run the following UPDATE query from a python script (note I've removed the database info):
print 'Connecting to db for update query...'
db = pyodbc.connect('DRIVER={FreeTDS};SERVER=<removed>;DATABASE=<removed>;UID=<removed>;PWD=<removed>')
cursor = db.cursor()
print ' Executing SQL queries...'
for i in range(len(data)):
sql = '''
UPDATE product.sanction
SET action_summary = '{action_summary}'
WHERE sanction_id = {sanction_id};
'''.format(sanction_id=data[i][0], action_summary=data[i][1])
cursor.execute(sql)
cursor.close()
db.commit()
db.close()
However, it hangs indefinitely with no error.
I'm new to pyodbc, but it should be set up correctly considering I'm having no problems performing SELECT queries. I did have to call CAST in the SELECT queries (I cast sanction_id AS INT [an int identity on the database] and action_summary AS TEXT [an nvarchar on the database]) to properly populate data, so perhaps the problem lies somewhere there, but I don't know where to start debugging. Converting the text to NVARCHAR didn't do anything either.
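For reference, a hedged reconstruction of the working SELECT described above (the table and column names come from the question; the CASTs are the ones the question says were needed):
SELECT CAST(sanction_id AS INT)      AS sanction_id,
       CAST(action_summary AS TEXT)  AS action_summary
FROM product.sanction;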
Here's an example of one of the rows in data:
(2861357, 'Exclusion Program: NonProcurement; Excluding Agency: HHS; CT Code: Z; Exclusion Type: Prohibition/Restriction; SAM Number: S4MR3Q9FL;')
I was unable to find my issue, but I ended up using QuerySets rather than running an UPDATE query.