ORA-14108: illegal partition-extended table name syntax

I have a requirement where I need to run an update script over multiple partitions of a table. I have written a script for it (below), but it gives:
ORA-14108: illegal partition-extended table name syntax
Cause: Partition to be accessed may only be specified using its name. User attempted to use a partition number or a bind variable.
Action: Modify statement to refer to a partition using its name
Any idea how I can circumvent this error?
DECLARE
  TYPE partition_names IS varray(1) OF varchar2(20);
  curr_partition partition_names;
  LENGTH integer;
BEGIN
  curr_partition := partition_names('SM_20090731');
  LENGTH := curr_partition.count;
  FOR i IN 1 .. LENGTH LOOP
    dbms_output.put_line('Current Partition name is: ' || curr_partition(i));
    UPDATE TABLE_Y PARTITION (curr_partition(i))
    SET PARTITION_KEY = TO_DATE('2017-08-21','YYYY-MM-DD')
    WHERE ORDER_ID IN
      (SELECT ORDER_ID
       FROM TABLE_X);
  END LOOP;
END;
/

You will have to concatenate the partition name into the statement and use dynamic SQL, i.e.:
EXECUTE IMMEDIATE
  'UPDATE TABLE_Y PARTITION (' || curr_partition(i) || ')
   SET PARTITION_KEY = TO_DATE(''2017-08-21'',''YYYY-MM-DD'')
   WHERE ORDER_ID IN
     (SELECT ORDER_ID
      FROM TABLE_X)';
Whenever you run a SQL SELECT query or an INSERT, UPDATE, or DELETE statement from PL/SQL, bind variables are used to pass the values of any PL/SQL expressions into the SQL engine; in particular, a bind variable would be used for curr_partition(i). However, the PARTITION clause of such queries and statements doesn't support bind variables. I guess this is because Oracle builds the execution plan for a query or statement before it has the bind variable values, and if the statement names a partition, that information is a critical part of the plan and hence cannot be supplied in a bind variable.
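Putting that together, the whole loop can be driven through EXECUTE IMMEDIATE. A minimal sketch of the corrected block follows; the DBMS_ASSERT.SIMPLE_SQL_NAME call is my addition as a safeguard, since the partition name is concatenated into the SQL text rather than bound:
DECLARE
  TYPE partition_names IS VARRAY(10) OF VARCHAR2(30);
  curr_partition partition_names := partition_names('SM_20090731');
BEGIN
  FOR i IN 1 .. curr_partition.COUNT LOOP
    dbms_output.put_line('Current Partition name is: ' || curr_partition(i));
    -- The partition name must be part of the SQL text itself; supplying it
    -- as a bind variable is what raises ORA-14108.
    EXECUTE IMMEDIATE
      'UPDATE TABLE_Y PARTITION ('
        || DBMS_ASSERT.SIMPLE_SQL_NAME(curr_partition(i)) || ')
       SET PARTITION_KEY = TO_DATE(''2017-08-21'',''YYYY-MM-DD'')
       WHERE ORDER_ID IN (SELECT ORDER_ID FROM TABLE_X)';
  END LOOP;
END;
/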

Related

Use Sybase triggers to write dynamic statements using all old and new values, creating your own replication transaction statement log?

PROBLEM SUMMARY
I have to write I/U/D-statement-generating triggers for a bucardo/SymmetricDS-inspired homemade bidirectional replication system between groups of Sybase ADS and PostgreSQL 11 nodes. The idea is BEFORE triggers on each PostgreSQL and Sybase DB that create INSERT/UPDATE/DELETE commands based on the command entered against a replicating source table. For example, an INSERT INTO PERSON (first_name, last_name, gender, age, ethnicity) VALUES ('John', 'Doe', 'M', 42, 'C') would be turned into a corresponding INSERT statement; an UPDATE would use the OLD and NEW values to dynamically build an UPDATE statement; and the OLD values alone would be used to build a DELETE command. Each captured command is then run against a destination at some interval.
I know this is difficult and that nobody does this, but it is for a job, I have no other options, and I can't push back or offer a different solution. I have no teammates or other human resources to help outside of SO and services like Codementor, which was not so helpful. My idea/strategy is to copy the parts of bucardo/SymmetricDS that capture OLD and NEW values for generating a statement/command to run on the destination. Right now I am snapshotting the whole table to a CSV rather than working command by command, but capturing each command and looping through a table that generates and saves the commands would make the job much easier.
One big issue is that the source tables come from Sybase ADS with a mixed key/index structure (many tables have NO PK), and that structure is mirrored in PostgreSQL, so I am trying to write PK-less, all-column statements to get around the no-PK tables. Also, only certain columns of certain tables will be replicated, so I have a column in a metadata table where the column names are stored delimited by ';'; I split that out into an array and link the column names to the values, so that each generated I/U/D command is complete. Hopefully. I am open to other strategies, but this is a big solo project and I have gone at it many ways with much difficulty.
I mostly come from a DBA background and have some programming experience with the fundamentals, so I am mostly pseudocoding each major sequence, googling for syntax piece by piece, and adjusting as I go or when I hit something the language can't do. I am thankful for any help given, as I am getting a bit desperate and discouraged.
WHAT I HAVE TRIED
I have to do this for both Sybase ADS and PostgreSQL, but this question is initially about ADS since it's more challenging and older.
The goal on both platforms is to have one "Log" table which tracks row changes for each of the replicating tables and ultimately dynamically generates a command. I am trying to write trigger statements like:
CREATE TRIGGER PERSON_INSERT
ON PERSON
BEFORE INSERT
BEGIN
  INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, NewValues)
  SELECT ID, 'INSERT', 'READY', NOW(), ''first_name';'last_name';'gender';'age';'ethnicity''
  FROM __new;
END;

CREATE TRIGGER PERSON_UPDATE
ON PERSON
BEFORE UPDATE
BEGIN
  INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, NewValues)
  SELECT ID, 'UPDATE', 'READY', NOW(), ''first_name';'last_name';'gender';'age';'ethnicity''
  FROM __new;
  UPDATE Backlog
  SET OldValues = (SELECT ''first_name';'last_name';'gender';'age';'ethnicity'' FROM __old)
  WHERE SourceTableID = (SELECT ID FROM __old);
END;

CREATE TRIGGER PERSON_DELETE
ON PERSON
BEFORE DELETE
BEGIN
  INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, OldValues)
  SELECT ID, 'DELETE', 'READY', NOW(), ''first_name';'last_name';'gender';'age';'ethnicity''
  FROM __old;
END;
but I would like the ''first_name';'last_name';'gender';'age';'ethnicity'' part to come from another table as a value, to make it dynamic, since multiple tables will write their value and statement info to the single log table. Then it can be put into a variable and split, to link each column name to its corresponding value, so the I/U/D statements can be built and executed on the destination one at a time.
ATTEMPTED INCOMPLETE SAMPLE TRIGGER CODE
CREATE TRIGGER PERSON_INSERT
ON PERSON
BEFORE INSERT
BEGIN
  -- Declare #Columns string
  -- #Columns = select Columns from metatable where tablename = 'PERSON'
  -- String Split(#Columns, ';') into array to correspond to new and old VALUES
  -- #NewValues = #['#Columns=' + NEW.#Columns + '']
  INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, NewValues)
  SELECT ID, 'INSERT', 'READY', NOW(), ''first_name';'last_name';'gender';'age';'ethnicity''
  FROM __new;
END;

CREATE TRIGGER PERSON_UPDATE
ON PERSON
BEFORE UPDATE
BEGIN
  -- Declare #Columns string
  -- #Columns = select Columns from metatable where tablename = 'PERSON'
  -- String Split(#Columns, ';') into array to correspond to new and old VALUES
  -- #NewValues = #['#Columns=' + NEW.#Columns + '']
  -- #OldValues = #['#Columns=' + OLD.#Columns + '']
  INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, NewValues)
  SELECT ID, 'UPDATE', 'READY', NOW(), ''first_name';'last_name';'gender';'age';'ethnicity''
  FROM __new;
  UPDATE Backlog
  SET OldValues = (SELECT ''first_name';'last_name';'gender';'age';'ethnicity'' FROM __old)
  WHERE SourceTableID = (SELECT ID FROM __old);
END;

CREATE TRIGGER PERSON_DELETE
ON PERSON
BEFORE DELETE
BEGIN
  -- Declare #Columns string
  -- #Columns = select Columns from metatable where tablename = 'PERSON'
  -- String Split(#Columns, ';') into array to correspond to new and old VALUES
  -- #OldValues = #['#Columns=' + OLD.#Columns + '']
  INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, OldValues)
  SELECT ID, 'DELETE', 'READY', NOW(), ''first_name';'last_name';'gender';'age';'ethnicity''
  FROM __old;
END;
CONCLUSION
For each row inserted, updated, or deleted, I am trying to generate a corresponding 'INSERT INTO PERSON (' + #Columns + ') VALUES (' + #NewValues + ')'-type statement (or an UPDATE or DELETE) in a COMMAND column in the log table. A Foreach service will then run each command value ordered by create time, acting as the main replication service.
To be clear, I am trying to make my sample trigger write all old and new values to a column dynamically, without hardcoding the columns in each trigger (since it will be used for multiple tables), writing the values into a single column delimited by a comma or semicolon.
An even bigger wish or goal behind this is to find a way to save/script each I/U/D command and then be able to run them on subscriber servers/DBs on the PostgreSQL and Sybase platforms, thereby making my own replication from a log.
It is a complex but solvable problem that will take time and careful planning. I think what you are looking for is the EXECUTE IMMEDIATE command in ADS SQL syntax. With this command you can build a statement dynamically and then execute it once construction of the SQL string is complete. Save each desired column value to a temp table by carefully constructing the statement as a string and then executing it with EXECUTE IMMEDIATE. For example:
DECLARE TableColumns Cursor;
DECLARE FldName Char(100);
...
OPEN TableColumns AS
  SELECT *
  FROM system.columns
  WHERE parent = #cTableName
    AND field_type < 21   //ADS_ROWVERSION
    AND field_type <> 6   //ADS_BINARY
    AND field_type <> 7;  //ADS_IMAGE
WHILE FETCH TableColumns DO
  FldName = Trim(TableColumns.Name);
  StrSql = 'SELECT New.[' + Trim(FldName) + '] newVal ' +
           'INTO #myTmpTable FROM __new n';
After constructing the statement as a string, it can then be executed like this:
EXECUTE IMMEDIATE STRSQL ;
You can pick up old and new values from the __old and __new temp tables that are always available to triggers. Insert values into the temp table myTmpTable and then use it to update the target. Remember to drop myTmpTable at the end.
Furthermore, I would think you could create a function in the data dictionary (DD) that is called from each trigger on the tables you want to track, instead of writing a long trigger for each table; cTableName can be a parameter passed to the function. That would make maintenance a little easier.
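Since the question covers PostgreSQL 11 as well, it is worth noting that rows can be serialized there without hardcoding column names at all. Here is a minimal sketch of the same logging idea on the PostgreSQL side, assuming a Backlog table shaped like the one in the question and that each replicated table has an ID column (as in the examples); the function and trigger names are illustrative:
-- One generic trigger function shared by every replicated table.
CREATE OR REPLACE FUNCTION log_row_change() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, NewValues)
    SELECT NEW.ID, 'INSERT', 'READY', now(), row_to_json(NEW)::text;
    RETURN NEW;
  ELSIF TG_OP = 'UPDATE' THEN
    INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, OldValues, NewValues)
    SELECT NEW.ID, 'UPDATE', 'READY', now(), row_to_json(OLD)::text, row_to_json(NEW)::text;
    RETURN NEW;
  ELSE  -- DELETE
    INSERT INTO Backlog (SourceTableID, TriggerType, Status, CreateTimeDate, OldValues)
    SELECT OLD.ID, 'DELETE', 'READY', now(), row_to_json(OLD)::text;
    RETURN OLD;
  END IF;
END;
$$ LANGUAGE plpgsql;

-- Each table then only needs a one-line trigger pointing at the shared function:
CREATE TRIGGER person_log
BEFORE INSERT OR UPDATE OR DELETE ON PERSON
FOR EACH ROW EXECUTE FUNCTION log_row_change();
Because row_to_json keeps the column-name-to-value pairing intact, this sidesteps the delimited-string splitting entirely on the PostgreSQL side.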

SSIS Pass variable to Execute SQL Update [duplicate]

I have an SSIS package in which I take values from a flat file and insert them into a table.
I have an Execute SQL Task that creates a temp table:
CREATE TABLE [tempdb].dbo.##temptable
(
date datetime,
companyname nvarchar(50),
price decimal(10,0),
PortfolioId int,
stype nvarchar(50)
)
Insert into [tempdb].dbo.##temptable (date,companyname,price,PortfolioId,stype)
SELECT date,companyname,price,PortfolioId,stype
FROM ProgressNAV
WHERE (Date = '2011-09-30') AND (PortfolioId = 5) AND (stype in ('Index'))
ORDER BY CompanyName
Now, in the above query, I need to pass the three filter values, (Date = '2011-09-30') AND (PortfolioId = 5) AND (stype IN ('Index')), as parameters using variable names. I have created the variables in the package so that the query becomes dynamic.
In your Execute SQL Task, make sure SQLSourceType is set to Direct Input, then set your SQL Statement to the name of the stored proc, with question marks for each parameter of the proc, like so:
Click Parameter Mapping in the left column and add each parameter from your stored proc, mapping it to your SSIS variable:
Now when this task runs it will pass the SSIS variables to the stored proc.
The EXCEL and OLE DB connection managers use the parameter names 0 and 1.
I was using an OLE DB connection and wasted a couple of hours trying to figure out why the query was not working or taking the parameters; the explanation above helped a lot.
Along with @PaulStock's answer: depending on your connection type, your parameter names and SQLStatement/SQLStatementSource change.
https://learn.microsoft.com/en-us/sql/integration-services/control-flow/execute-sql-task
SELECT, INSERT, UPDATE, and DELETE commands frequently include WHERE clauses to specify filters that define the conditions each row in the source tables must meet to qualify for an SQL command. Parameters provide the filter values in the WHERE clauses.
You can use parameter markers to dynamically provide parameter values. The rules for which parameter markers and parameter names can be used in the SQL statement depend on the type of connection manager that the Execute SQL Task uses.
The following table lists examples of the SELECT command by connection manager type. The INSERT, UPDATE, and DELETE statements are similar. The examples use SELECT to return products from the Product table in AdventureWorks2012 that have a ProductID greater than and less than the values specified by two parameters.
EXCEL, ODBC, and OLEDB:
SELECT * FROM Production.Product WHERE ProductId > ? AND ProductID < ?
ADO:
SELECT * FROM Production.Product WHERE ProductId > ? AND ProductID < ?
ADO.NET:
SELECT * FROM Production.Product WHERE ProductId > @parmMinProductID AND ProductID < @parmMaxProductID
The examples would require parameters that have the following names:
The EXCEL and OLE DB connection managers use the parameter names 0 and 1. The ODBC connection type uses 1 and 2.
The ADO connection type could use any two parameter names, such as Param1 and Param2, but the parameters must be mapped by their ordinal position in the parameter list.
The ADO.NET connection type uses the parameter names @parmMinProductID and @parmMaxProductID.
A little late to the party, but this is how I did it for an insert:
DECLARE @ManagerID AS VARCHAR(25) = 'NA'
DECLARE @ManagerEmail AS VARCHAR(50) = 'NA'
DECLARE @RecordCount AS INT = 0

SET @ManagerID = ?
SET @ManagerEmail = ?
SET @RecordCount = ?
INSERT INTO...

SSIS SQL TASK MAX(DATE) to Variable in DATA FLOW

OK, this seems like it should be insanely easy, but I cannot figure it out. Everywhere I look online says to create temp tables and VB scripts, and I cannot believe I have to do that. My goal is to insert into the destination table all the records from the source table with a date later than the max date in that destination table.
UPDATE: The two tables are in two different, non-linked SQL databases.
So:
Select #[User::Dated] = MAX(Dateof) from Table2
Insert into Table2
Select *
From Table1
Where DateOf > #[User::Dated]
I am trying to do this in SSIS. I declared a variable, and the SQL execution step looks like it is assigning the single-row output to it. But when I go into the data flow, it gives me no parameters to choose, and when I force the known parameter that is in the project scope, it says no parameter exists.
Create two OLE DB data sources, each pointing at one of your two databases.
Create a variable called max_date and make its data type String.
Place an Execute SQL Task on the Control Flow, change its connection type to OLE DB and for the connection select the name of the data source that contains Table2. Set the ResultSet to Single Row. Add the following for the SQLStatement:
SELECT CAST(MAX(Dateof) AS VARCHAR) AS max_date FROM Table2
Go to the Result Set pane, click Add and enter the following:
Result Name: max_date
Variable Name: User::max_date
You can now use the max_date variable in an expression to create a SQL statement, for example you could use it in another Execute SQL Task which would use the second Data Connection like so:
"INSERT INTO Table2
SELECT *
FROM Table1
WHERE DateOf > '" + #[User::max_date] + "'"
Or in an OLE DB Source in a data flow like so:
"SELECT *
FROM Table1
WHERE DateOf > '" + #[User::max_date] + "'"
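One caveat worth adding here (my note, not part of the original answer): CAST(MAX(Dateof) AS VARCHAR) renders the date in the session's default format, so the literal built by the expressions above may not parse identically on the second server. An ISO 8601 conversion is unambiguous, assuming the same Table2:
SELECT CONVERT(VARCHAR(23), MAX(Dateof), 126) AS max_date FROM Table2
-- Style 126 yields yyyy-mm-ddThh:mi:ss.mmm, which SQL Server parses the same
-- way regardless of language or DATEFORMAT settings.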
You can do this in a single SQL Task if you want:
Insert into Table2
Select *
From Table1
Where DateOf > (Select MAX(Dateof) from Table2)
If you want to use multiple Execute SQL Task items in the control flow, or want to make use of the parameter in a data flow instead, you have to change the General > Result Set option for your MAX() query to Single Row, then move from General to Result Set and Add a new variable for your result set to occupy.
To use that variable in your INSERT INTO.... query via Execute SQL Task, you'll construct your query with a ? for each parameter and map them in the parameter mapping section. If a variable is used multiple times in a query it's easiest to use a stored procedure, so you can simply pass the relevant parameters in SSIS.
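For illustration, such a wrapper procedure might look like the following sketch, reusing the question's Table1/Table2 (the procedure name is hypothetical); the Execute SQL Task statement then reduces to EXEC dbo.LoadNewRows ? with a single mapped parameter:
CREATE PROCEDURE dbo.LoadNewRows
    @MaxDate DATETIME
AS
BEGIN
    -- @MaxDate can be referenced as many times as needed inside the proc,
    -- while SSIS only has to map the single ? parameter once.
    INSERT INTO Table2
    SELECT *
    FROM Table1
    WHERE DateOf > @MaxDate;
END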

Setting multiple variables from a Execute SQL Task result object with a single row

I have the following SQL in an Execute SQL Task:
SELECT [CnxnStrValue1] as INT_Support_CnxnStr
,[CnxnStrValue2] as Lawson_CnxnStr
,[CnxnStrValue3] as Lawson_HRIS_CnxnStr
FROM [dbo].[InterfaceDBCnxn]
WHERE InterfaceName = ?
The result set is set to an object variable. I also have three string variables to hold the values, and typically I would map them in a Foreach Loop Container. But in this case the result set will only ever be one row, because InterfaceName is the primary key of the table.
What is the best way to set the variables without using a Foreach Loop Container?
Change your result set from Full to Single Row. I use this pattern for my DW loads to get the surrogate key value for my unknown members.
ResultSet set to Single row
Map your parameters as needed. Here, I have 8 variables that get mapped:
Given that your table is Table and the columns are Column_name and Column_name_two, you can do something like this:
SELECT @yourVar = Column_name,
       @yourSecondVar = Column_name_two
FROM Table
WHERE Table_id = 1
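If you would rather skip the result set entirely, this same assignment pattern can live in a stored procedure with OUTPUT parameters; a hypothetical sketch against the question's table (procedure and parameter names are made up):
CREATE PROCEDURE dbo.GetCnxnStrings
    @InterfaceName NVARCHAR(100),
    @IntSupport    NVARCHAR(200) OUTPUT,
    @Lawson        NVARCHAR(200) OUTPUT,
    @LawsonHris    NVARCHAR(200) OUTPUT
AS
BEGIN
    -- Assign all three connection strings in one pass over the keyed row.
    SELECT @IntSupport = CnxnStrValue1,
           @Lawson     = CnxnStrValue2,
           @LawsonHris = CnxnStrValue3
    FROM dbo.InterfaceDBCnxn
    WHERE InterfaceName = @InterfaceName;
END
In the Execute SQL Task the statement becomes EXEC dbo.GetCnxnStrings ?, ? OUTPUT, ? OUTPUT, ? OUTPUT, with the first parameter mapped as Input and the other three mapped as Output to the string variables.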

Verify the columns (name and amount) returned by a SQL query

I have a third-party plugin in my program that executes SQL queries (mostly SELECTs). These queries must return a default column order and count, such as:
PACKAGEID (guid), REFDATE (datetime), MODIFYDATE (datetime), PROG (int)
Sometimes a query omits one of the columns specified above. In order to avoid further errors in the program, I would like to run a sort of check, just to be sure that each executed query returns the default columns.
I've already used the SQL syntax SET NOEXEC ON and SET NOEXEC OFF, which might be useful in this case too. I'm currently using SQL Server 2008.
Any hints?
If you're able to put the result set into a temporary table, you can easily count the number of columns of that table by using something like:
SELECT *
FROM tempdb.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '%#temptable%'
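A minimal end-to-end sketch of that idea on SQL Server 2008; the inner SELECT is a stand-in for whatever query the plugin actually executes, and the #result name is illustrative:
-- Materialize the plugin's result set, then verify the required columns exist.
SELECT *
INTO #result
FROM (SELECT NEWID()   AS PACKAGEID,
             GETDATE() AS REFDATE,
             GETDATE() AS MODIFYDATE,
             1         AS PROG) AS plugin_query;

IF (SELECT COUNT(*)
    FROM tempdb.INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME LIKE '#result%'   -- temp table names get a unique suffix
      AND COLUMN_NAME IN ('PACKAGEID', 'REFDATE', 'MODIFYDATE', 'PROG')) <> 4
    RAISERROR('Result set is missing a required column.', 16, 1);

DROP TABLE #result;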
