As per the code below, I want to insert a record into another database over a database link. It works when I replace c1 with a string literal in the insert query, but when I use the c1 CLOB variable it throws the error in the title.
Please note: it works fine against the local database (without the database link) using the c1 variable; I've tried other solutions to no avail. Can someone help me with a script for this dbms_lob case? The insert goes from one remote database to another remote database.
set define off;
Declare
c1 clob default ' ';
begin
dbms_lob.append(c1, chr(10));
dbms_lob.append(c1, q'[DOCTYPE]');
Insert into SYSTEM_CONFIG@rtfqa (CONFIG_ID,NAME,VALUE_OLD,TYPE,SUB_TYPE,FROM_SYSTEM,TO_SYSTEM,VALUE) values ((select max(config_id)+1 from SYSTEM_CONFIG@rtfqa),'CONTRACT_HASHTAG_EN',null,'system','system',null,null,c1);
end;
/
ERROR
Insert into SYSTEM_CONFIG@rtfqa (CONFIG_ID,NAME,VALUE_OLD,TYPE,SUB_TYPE,FROM_SYSTEM,TO_SYSTEM,VALUE) values ((select max(config_id)+1 from SYSTEM_CONFIG@rtfqa),'CONTRACT_HASHTAG_EN',null,'system','system',null,null,c1);
end;
>Error report -
>ORA-06550: line 7, column 148:
>PL/SQL: ORA-22992: cannot use LOB locators selected from remote tables
>ORA-06550: line 7, column 1:
>PL/SQL: SQL Statement ignored
>06550. 00000 - "line %s, column %s:\n%s"
>*Cause: Usually a PL/SQL compilation error.
>*Action:
SQL Developer 18.2.0
DB = Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
There are various restrictions when using distributed LOBs. One way around this is to operate on a local LOB in a global temporary table (GTT), as follows:
Create a global temporary table.
Load the row and manipulate the LOB in this GTT.
Use insert into table@remote ... select ... from gtt to copy the data over.
Which looks like:
create table t (
c1 int, c2 clob
);
create global temporary table gtt as
select * from t
where 1 = 0;
declare
v2 clob default ' ';
begin
dbms_lob.append (
v2,
chr (10)
);
dbms_lob.append (
v2,
q'[DOCTYPE]'
);
insert into gtt (
c1, c2
) values (
1, v2
);
insert into t@loopback
select * from gtt;
end;
/
select * from t;
C1  C2
--  -------
 1
    DOCTYPE
Loopback is a database link pointing back to the same database to simulate a remote DB.
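For reference, such a loopback link can be created with something like this (a sketch; the user, password, and connect string are placeholders for your own environment):
-- points back at the local database so remote-insert behaviour can be tested
create database link loopback
  connect to test_user identified by test_password
  using 'localhost:1521/orclpdb';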
If anyone still has the same problem, they should run the database link command through the SYS user.
For example, connect to the database using system as sysdba.
Related
I need to transfer data from 10.5.15-MariaDB-0+deb11u1 to Microsoft SQL Server 2019 (RTM-CU16) (KB5011644) - 15.0.4223.1 (X64). I have the CONNECT engine installed in MariaDB plus the unixODBC driver for MS SQL, and I have created these tables (for remote access to MS SQL):
CREATE TABLE `Tabxxx`
ENGINE = CONNECT
TABLE_TYPE = ODBC
BLOCK_SIZE = 10000
CONNECTION='DSN=MSSQL;UID=xxx;PWD=xxx'
TABNAME='DTB_INTERCHANGE.dbo.Tabxxx'
CHARSET = utf8
OPTION_LIST = 'Memory=2'
DATA_CHARSET = utf8;
CREATE TABLE `ExternCommand` (
`cmd` VARCHAR(21000) NOT NULL flag=0,
`number` INT(5) NOT NULL flag=1,
`message` VARCHAR(255) flag=2)
ENGINE = CONNECT
TABLE_TYPE = ODBC
CONNECTION='DSN=MSSQL;UID=xxx;PWD=xxx'
CHARSET = utf8
BLOCK_SIZE = 1
OPTION_LIST='Execsrc=1';
If I insert data with this command (Tabyyy is a local table in MariaDB):
INSERT INTO `Tabxxx` SELECT * FROM `Tabyyy` WHERE ID < 100;
the command above executes much more slowly than:
SELECT * FROM `ExternCommand` WHERE `cmd` = " ... "
where ... represents text of the form INSERT INTO Tabxxx VALUES(...),(...)..., containing the same data as in table Tabyyy for ID < 100.
Can someone explain why this is? How can I speed up the transfer of approx. 500,000 rows from MariaDB to MS SQL?
The fastest method is probably to use a linked server. You can set this up in SSMS.
Then just do
INSERT YourTable WITH (TABLOCKX)
(columns)
SELECT columns
FROM linkedservername.database.schema.table;
If you can also add a WITH (TABLOCK) hint to the linked-server table, that will further increase performance, at the cost of locking that whole table against writes.
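If the linked server does not exist yet, it can also be created in T-SQL instead of clicking through SSMS. The sketch below is one possible setup over the MariaDB ODBC DSN; the linked-server name, DSN, and credentials are placeholders, and it assumes the DSN is configured on the SQL Server host:
-- create a linked server that points at MariaDB through the ODBC provider
EXEC master.dbo.sp_addlinkedserver
    @server     = N'MARIADB_SRC',
    @srvproduct = N'',
    @provider   = N'MSDASQL',
    @datasrc    = N'MariaDB_DSN';
-- map a remote login for that linked server
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname  = N'MARIADB_SRC',
    @useself     = N'False',
    @rmtuser     = N'xxx',
    @rmtpassword = N'xxx';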
I have a scenario where I have several SQL Server data source tables. I have read-only access to these sources. I cannot create permanent tables in the SQL Server environment; I can, however, create temporary tables.
I thought of creating a global temporary table out of the scenario 1 result set and referencing that in scenario 2 (again creating a second global temp table in scenario 2), and a third global temp table out of the third SQL code. I have included generic SQL below.
Finally, create a SAS data set out of each of these global temp tables. (We want to ensure all joins and data transformations happen in SQL Server rather than in SAS.)
Scenario 1:
select * from table1 join table2
on table1.id=table2.id
where table1.product='Apple'
Scenario 2:
The above result is then used in another query as
select * from table3 m
left join above _result_table t
on m.id=t.id
and the above result is again referenced further.
I tried researching online to find a similar implementation and could not find one.
I have tried the code below, but this creates a SAS data set. I want to instead create a global temporary table so that another query can then reference it. How do I accomplish that?
proc sql;
  connect to odbc (dsn=abc user=123 pw=****** connection=shared);
  create table xyz as
    select * from connection to odbc
    (
      select * from table1 join table2
      on table1.id=table2.id
      where table1.product='Apple'
    );
  disconnect from odbc;
quit;
Your help is greatly appreciated.
Thanks,
Upon further research, I think I have part of the solution; I am still working on creating a SAS data set out of the temporary SQL Server table.
I created a temp table structure, let's say ##abc, then proceeded with the following steps:
PROC SQL;
  CONNECT TO ODBC AS T1 (DSN=SRC INSERTBUFF=32767 USER=UID PW="PD." CONNECTION=SHARED);
  EXECUTE(
    CREATE TABLE ##abc
    (id int,
     name varchar(50)
    )
  ) BY T1;
  EXECUTE(
    INSERT ##abc
    SELECT * FROM SqlServerTable
  ) BY T1;
  SELECT * FROM CONNECTION TO T1 (SELECT * FROM ##abc);
QUIT;
I got the
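For the remaining piece (getting a SAS data set out of the temp table), a sketch along these lines may work, reusing the same pass-through alias; the data set name work.abc is illustrative, and ##abc only exists for as long as the same shared connection stays open:
PROC SQL;
  CONNECT TO ODBC AS T1 (DSN=SRC USER=UID PW="PD." CONNECTION=SHARED);
  /* EXECUTE(...) BY T1 blocks that create and load ##abc go here, as above */
  CREATE TABLE work.abc AS
  SELECT * FROM CONNECTION TO T1
  (SELECT * FROM ##abc);
  DISCONNECT FROM T1;
QUIT;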
I need to compare data across two different DB2 database instances. We're not permitted to set up federation. I've found references saying how to specify data loads from remote databases, and also references on how to specify a database connection including database name, username etc. Ideally I would be able to execute a query against one database, then compare that to the second database either one-by-one (using SQL PL loops etc.), or as a single large join. I've gotten to the point where the SQL PL script can connect to each in turn (and it prompts me for the password to both), but it only recognizes the second one when I attempt to query the table.
What we've tried:
Adding two different CONNECT statements at the beginning.
Declaring a cursor and specifying the database name (this seems to only work when doing loads from one database to another, which we're trying to avoid).
set serveroutput on#
set sqlcompat DB2#
connect to first user myname#
connect to second user myname#
-- run command: db2 -td# -vf test3.sql
begin
declare loop_counter int;
call dbms_output.enable(100000);
set loop_counter = 0;
FIRSTLOOP: for o as ord1 cursor for
select field1, field2 from first.firstschema.firsttable fetch first 10 rows only with ur
do
set loop_counter = loop_counter + 1;
call dbms_output.put_line('Field: '||field1||', other '||field2);
end for;
call dbms_output.put_line('End first program: ');
SECONDLOOP: for p as ord2 cursor for
select field1, field2 from second.secondschema.secondtable fetch first 10 rows only with ur
do
set loop_counter = loop_counter + 1;
call dbms_output.put_line('Field: '||field1||', other '||field2);
end for;
call dbms_output.put_line('After second call');
end#
Ideally, each of the two cursor loops would print 10 rows. In reality, whichever CONNECT was done second is the one that works. For example, if I have the connect to SECOND followed by the connect to FIRST, the first loop works and the second says "..... is an undefined name". If I do the connect to FIRST then the connect to SECOND, the first loop throws the error and I get no output.
SQL PL can connect only to one database at a time - that is the design.
In your script example, the second connect will close any current connection first.
Federation lets you access remote tables as if they were local.
If you are prevented from using federation, your options include these:
Materialise the remote table locally and copy the data (this can be done via a load from a remote cursor; see the sketch after this list). You can then use SQL to compare rows, as both tables are then in the same database. This is only feasible if you have sufficient capacity to fit both tables in the same database, although compression will help here.
Not using SQL, but instead using another tool. For example, depending on data volumes and data types, you could export the source/target tables to flat files and compare the files (diff etc.). You could also export to a pipe and do in-memory comparisons. Or you could use Python, Perl, or any scripting language and do the comparison in memory in chunks (in all cases each thread can only connect to a single database at one time).
Use third-party tools for data comparison.
If you use embedded SQL, a type-2 connect offers another possibility.
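To illustrate the first option, a load from a remote cursor run from the DB2 command line might look roughly like this (a sketch; the database alias, credentials, and table names are placeholders):
-- declare a cursor against the remote database, then load its rows into a local table
DECLARE remote_cur CURSOR DATABASE second USER myname USING mypassword
  FOR SELECT field1, field2 FROM secondschema.secondtable;
LOAD FROM remote_cur OF CURSOR INSERT INTO localschema.secondtable_copy;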
On Db2 for IBM i, federation is only available via a Db2 for LUW box...
However, the following works in Db2 for IBM i...
create or replace function myschema.myudtf ()
returns table (SERVER VARCHAR(18)
, as_of timestamp
, ORDINAL_POSITION INTEGER
, JOB_NAME VARCHAR(28)
, SUBSYSTEM VARCHAR(10)
, AUTHORIZATION_NAME VARCHAR(10)
, JOB_TYPE VARCHAR(3)
)
modifies SQL data
external action
not deterministic
language SQL
specific CHKAWSJOBS
begin
declare insertStmt varchar(1500);
declare global temporary table
GLOBAL_TEMP_MY_JOBS (
SERVER VARCHAR(18)
, as_of timestamp
, ORDINAL_POSITION INTEGER
, JOB_NAME VARCHAR(28)
, SUBSYSTEM VARCHAR(10)
, AUTHORIZATION_NAME VARCHAR(10)
, JOB_TYPE VARCHAR(3)
) with replace;
for systemLoop as systemsCursor cursor for
select * from table( values ('mysys1'),('mysys2'),('mysys3'))
as systems (server_Name)
do
set insertStmt =
' insert into GLOBAL_TEMP_MY_JOBS
select
current_server as server, current_timestamp as as_of
, ordinal_position, job_name, subsystem, authorization_name, job_type
from table(QSYS2.ACTIVE_JOB_INFO(
SUBSYSTEM_LIST_FILTER => ''MYSBS'')) X
where exists (select 1 from ' concat server_name concat '.sysibm.sysdummy1)';
execute immediate InsertStmt;
end for;
return select * from GLOBAL_TEMP_MY_JOBS;
end;
The example above is more complex than your use case; I'm pulling data from a UDTF on the remote system. The trick is the use of a three-part name in the WHERE clause, which forces the DB to run the entire SELECT statement on the remote machine, with the insert going into the table on the local machine.
You should be able to build a dynamic insert that's just
set insertStmt = 'insert into lcltable
select field1, field2
from ' concat server_name concat table_name
concat ' fetch first 10 rows only with ur';
Don't know for sure that this will work on Db2 LUW, but there's a good chance.
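For what it's worth, a minimal compound statement built around that dynamic insert could look like this (a sketch; lcltable, RMTSYS, and rmtschema.rmttable are placeholders, and RMTSYS would have to be a configured remote database directory entry):
begin
  declare insertStmt varchar(1500);
  declare server_name varchar(18) default 'RMTSYS';
  -- the 3-part name makes the select run on the remote system; the insert stays local
  set insertStmt = 'insert into lcltable
                      select field1, field2
                      from ' concat server_name concat '.rmtschema.rmttable
                      fetch first 10 rows only with ur';
  execute immediate insertStmt;
end;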
I have one table in SQL Server and 5 tables in Teradata. I want to join those 5 Teradata tables with the SQL Server table and store the result in a Teradata table.
I have the SQL Server name, but I don't know how to run a query simultaneously on both SQL Server and Teradata.
I want to do this:
SQL Server table query:
Select distinct store
from store_Desc
Teradata tables:
select cmp_id,state,sde
from xyz
where store in (
select distinct store
from sql server table)
You can create a table (or a volatile table if you do not have write privileges) to do this. Export the result from SQL Server as text or into the language of your choice.
CREATE VOLATILE TABLE store_table (
column_1 datatype_1,
column_2 datatype_2,
...
column_n datatype_n);
You may need to add ON COMMIT PRESERVE ROWS before the ; in the statement above, depending on your transaction settings.
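For example, with that clause the definition would end like this (same placeholder columns as above):
CREATE VOLATILE TABLE store_table (
    column_1 datatype_1,
    column_2 datatype_2,
    ...
    column_n datatype_n)
ON COMMIT PRESERVE ROWS;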
From a language you can loop the below or do an execute many.
INSERT INTO store_table VALUES(value_1, value_2, ..., value_n);
Or you can import from a text file using Teradata SQL Assistant by going to File and selecting Import. Then execute the below and navigate to your file.
INSERT INTO store_table VALUES(?, ?, ..., n);
Once you have inserted your data you can query it by simply referencing the table name.
SELECT cmp_id,state,sde
FROM xyz
WHERE store IN(
SELECT store
FROM store_table)
The DISTINCT is most easily applied on export from SQL Server to minimize the rows you need to upload.
EDIT:
If you are doing this many times you can do this with a script, here is a very simple example in Python:
import pyodbc
con_ss = pyodbc.connect('sql_server_odbc_connection_string...')
crs_ss = con_ss.cursor()
con_td = pyodbc.connect('teradata_odbc_connection_string...')
crs_td = con_td.cursor()
# pull data from sql server
data_ss = crs_ss.execute('''
SELECT distinct store AS store
from store_Desc
''').fetchall()
# create table in teradata
crs_td.execute('''
CREATE VOLATILE TABLE store_table (
store DEC(4, 0)
) PRIMARY INDEX (store)
ON COMMIT PRESERVE ROWS;''')
con_td.commit()
# insert values; you can also use an execute many, but this is easier to read...
for row in data_ss:
    crs_td.execute('''INSERT INTO store_table VALUES(?)''', row)
con_td.commit()
# get final data
data_td = crs_td.execute('''SELECT cmp_id,state,sde
FROM xyz
WHERE store IN(
SELECT store
FROM store_table);''').fetchall()
# from here write to file or whatever you would like.
Is fetching data from SQL Server through ODBC an option?
The best option may be to use Teradata Parallel Transporter (TPT) to fetch data from SQL Server using its ODBC operator (as the producer) combined with the Load or Update operator as the consumer to insert it into an intermediate table on Teradata. You must then perform the rest of the operations on Teradata; for those, you can use BTEQ/SQLA to store the results in the final Teradata table. You can also put the same SQL in TPT's DDL operator instead of BTEQ/SQLA and get it all done in a single job script.
To allow tables residing on separate DB environments (in your case SQL Server and Teradata) to be used in a single SELECT statement, Teradata has recently released Teradata QueryGrid. But I'm not sure about the exact level of support for SQL Server, and it will involve licensing hassle and quite a learning curve for this simple job.
I have a Linked Server set up on my host Server: "MyLinkedServer"
I have an SP on my server "HostServer".
I am calling a stored proc on HostServer that updates a table in DatabaseA on MyLinkedServer with values from a table in DatabaseB on MyLinkedServer.
I have other SPs that run fine in the same scenario, but they are doing inserts and deletes. This SP, however, fails to update the table in DatabaseA (no error returned, just no changed data), and if I change connections and actually run the SP on "MyLinkedServer", it works without a problem.
UPDATE MyLinkedServer.DataBaseA.dbo.MyTable
SET Column1 = db2.Column1
FROM MyLinkedServer.DataBaseA.dbo.MyTable db1
INNER JOIN
(
SELECT TOP 1 Column1
FROM MyLinkedServer.DataBaseB.dbo.MyTable db2
WHERE db2.Id = 2
) AS db2 ON db2.Id = 2
WHERE db1.Id = 1
I believe you'll need to reference the alias you give the target table in the FROM clause. Does changing
UPDATE MyLinkedServer.DataBaseA.dbo.MyTable
into
UPDATE db1
fix your issue?
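Putting that together, the statement would look something like this (a sketch based on the query above; note the derived table also needs to expose Id for its ON clause to compile):
UPDATE db1
SET Column1 = db2.Column1
FROM MyLinkedServer.DataBaseA.dbo.MyTable AS db1
INNER JOIN
(
    SELECT TOP 1 Id, Column1
    FROM MyLinkedServer.DataBaseB.dbo.MyTable
    WHERE Id = 2
) AS db2 ON db2.Id = 2
WHERE db1.Id = 1;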