Can anyone help me with creating a replica in EXASOL? That is, I need to copy all the tables, including views, functions and scripts, from one schema to another schema on the same server.
For example: I want all the data from schema A to be copied, not moved, to schema B.
Many thanks.
Thank you, wildraid, for your suggestion :)
In order to copy the DDL of all the tables in a schema, I've got a simple query that generates the DDL for every table:
select t1.CREATE_STATEMENT || t2.PK || ');' from
(select C.COLUMN_TABLE,
        'CREATE TABLE ' || C.COLUMN_TABLE || '(' ||
        group_concat(
            '"' || C.COLUMN_NAME || '"' || ' ' || COLUMN_TYPE ||
            case when (C.COLUMN_DEFAULT is not null and C.COLUMN_IS_NULLABLE = 'true')
                   or (C.COLUMN_DEFAULT <> 'NULL' and C.COLUMN_IS_NULLABLE = 'false')
                 then ' DEFAULT ' || C.COLUMN_DEFAULT end ||
            case when C.COLUMN_IS_NULLABLE = 'false' then ' NOT NULL ' end
            order by COLUMN_ORDINAL_POSITION) CREATE_STATEMENT
 from EXA_ALL_COLUMNS C
 where upper(C.COLUMN_SCHEMA) = upper('Source_Schema')
   and COLUMN_OBJECT_TYPE = 'TABLE'
 group by C.COLUMN_SCHEMA, C.COLUMN_TABLE
 order by C.COLUMN_TABLE) t1
left join
(select CONSTRAINT_TABLE,
        ', PRIMARY KEY (' || group_concat('"' || COLUMN_NAME || '"' order by ORDINAL_POSITION) || ')' PK
 from EXA_ALL_CONSTRAINT_COLUMNS
 where CONSTRAINT_TYPE = 'PRIMARY KEY'
   and upper(CONSTRAINT_SCHEMA) = upper('Source_Schema')
 group by CONSTRAINT_TABLE) t2
on t1.COLUMN_TABLE = t2.CONSTRAINT_TABLE
order by 1;
Replace Source_Schema with your schema name, and it will generate the CREATE statements that you can run in EXAplus.
For copying the data, I used the same approach you mentioned in step 2.
Ok, this question consists of two smaller problems.
1) How to copy the DDL of all objects in a schema
If you need to copy only a small number of schemas, the fastest way is to use the ExaPlus client. Right-click on the schema name and select "CREATE DDL". It will provide you with the SQL to create all objects. You may simply run this SQL in the context of the new schema.
If you have to automate it, you may take a look at this official script: https://www.exasol.com/support/browse/SOL-231
It creates the DDL for all schemas, but it can be adapted to a single schema.
2) How to copy data
This is easier. Just run the following SQL to generate INSERT ... SELECT statements for every table:
SELECT 'INSERT INTO <new_schema>.' || table_name || ' SELECT * FROM <old_schema>.' || table_name || ';'
FROM EXA_ALL_TABLES
WHERE table_schema='<old_schema>';
Copy-paste the result and run it to perform the actual copy.
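For example, with a source schema A and a target schema B, the generated statements would look like this (table names are just illustrative):
INSERT INTO B.CUSTOMERS SELECT * FROM A.CUSTOMERS;
INSERT INTO B.ORDERS SELECT * FROM A.ORDERS;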
I am very new to Snowflake. Until now I have used Teradata for writing complex SQL queries.
In Snowflake, I need to create and call macros (similar to Teradata), where I have to pass a date as a parameter, and within the function I have to append records to a table. Something along these lines:
CREATE TABLE SFAAP.WS_DIRBNK_DPST.PV_HIGH_RISK_FI_LIST
(
APP_DT DATE
,FI_NAME VARCHAR(50)
);
CREATE OR REPLACE FUNCTION SFAAP.INSERT_FI (DT DATE, CRED CHAR(5))
--RETURNS NULL
--COMMENT='Create list of high risk FI by date'
AS
'
INSERT INTO SFAAP.WS_DIRBNK_DPST.PV_HIGH_RISK_FI_LIST
TO_DATE(DD) --------------Passed Parameter
,FI_NAME
FROM
(
SELECT
FINANCIAL_INSTITUTION AS FI_NAME
,COUNT(DISTINCT CASE WHEN IND_FPFA_FRAUD = 1 THEN APP_ID ELSE NULL END) AS TOT_FPFA_APPS
,COUNT(DISTINCT APP_ID) AS TOT_APPS
,CAST(TOT_FPFA_APPS AS DECIMAL(38,2))/TOT_APPS AS FRAUD_RATE
FROM
(
SELECT
A.*
,C.FINANCIAL_INSTITUTION
FROM BASE_05 A
LEFT JOIN
(
SELECT
BNK_ACCT_NBR_TOK
,BNK_TRAN_TYP_CDE
,ALT_DR_CR_CDE
,TRAN_1_DSC_TOK
,TRAN_DT
,TRAN_AMT
FROM "SFAAP"."V_SOT_DIRBNK_CLB_FRD_CRD"."BNK_DPS_TRAN_RLT_INFO"
WHERE TRAN_DT BETWEEN DATEADD(Day,-90,TO_DATE(DD)) AND TO_DATE(DD) --------------Passed Parameter, does calculation in the 90 days window from the passed date
AND ALT_DR_CR_CDE = TO_CHAR(CRED) --------------Passed Parameter
AND BNK_TRAN_TYP_CDE IN (22901,56003,56002,56302,56303,56102,70302)
AND TRAN_AMT>=5
QUALIFY ROW_NUMBER() OVER(PARTITION BY BNK_ACCT_NBR_TOK, TRAN_DT, TRAN_AMT, BNK_TRAN_TYP_CDE ORDER BY TRAN_DT ASC, TRAN_AMT DESC)=1
) B
ON A.BNK_ACCT_NBR = B.BNK_ACCT_NBR_TOK
LEFT JOIN SFAAP.WS_DIRBNK_DPST.PV_FRAUD_METRICS_03 C
ON B.TRAN_1_DSC_TOK = C.TOKEN_NAME
)SUB_A
GROUP BY 1
)SUB_B
WHERE FINANCIAL_INSTITUTION IS NOT NULL
AND TOT_APPS>=3
AND FRAUD_RATE>=0.20
'
;
I took some guidance from this answer here, but I am still not there yet. Here's the error I am getting:
Due to my lack of experience writing Snowflake user-defined functions, I think I am messing up the syntax somewhere (it could be the way I am passing those two parameters). Comments/suggestions are most welcome.
Thanks in advance.
It looks like SFAAP is your database name. Please include your schema name if you are going to use fully qualified names, or change your session context to use a database and schema and then create the function without the database and schema name.
example:
CREATE OR REPLACE FUNCTION SFAAP.WS_DIRBNK_DPST.INSERT_FI (
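And a minimal sketch of the session-context alternative (the function body here is a trivial placeholder, not your logic):
USE DATABASE SFAAP;
USE SCHEMA WS_DIRBNK_DPST;
-- created without qualification; resolves to SFAAP.WS_DIRBNK_DPST.ADD_DAYS
CREATE OR REPLACE FUNCTION ADD_DAYS (DT DATE, N NUMBER)
RETURNS DATE
AS 'DATEADD(DAY, N, DT)';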
Good morning all. I'm trying to group the polygons that touch each other into one polygon.
I use the following query:
drop table if exists filtre4;
create table filtre4 as
(
select st_unaryunion(unnest(st_clusterintersecting(geom))) as geom
from data
)
It works perfectly when I have fewer than 6,000,000 items.
For example, I get this normal message, with the number of entities created:
https://zupimages.net/viewer.php?id=20/15/ielc.png
But if I exceed 6,000,000 entities, the query finishes but nothing is created in the table. This message is displayed instead, but nothing is returned to me:
https://zupimages.net/viewer.php?id=20/15/o41z.png
I do not understand.
Thank you.
So, I think you were using pgAdmin to run the queries. Oddly enough, sometimes you will not be notified even when there is a memory error or another runtime error (the same happened to me while testing). In such a case I would recommend saving the query as a .sql file and running it with psql, to ensure you receive an error message:
psql -U #your_username -d #your_database -f "#your_sqlfile.sql"
So first, I would adjust work_mem in postgresql.conf. The default is 4MB; given your specs you can probably handle more memory per operation. I would suggest starting with 64MB, per the following article:
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
Make sure to restart your server after doing so.
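If you prefer not to edit postgresql.conf by hand, the same change can be made from SQL (a sketch; it requires superuser rights, and the value is persisted to postgresql.auto.conf):
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf(); -- work_mem is picked up on reload; a restart also works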
So, I used a comparable data set and ran into memory issues. Adjusting work_mem was the first part. Geohashing applies to your data set because you want smaller groups of clusters in order to fit the processing in memory, and geohashing lets you order your geometry elements spatially, which reduces the amount of sorting necessary while running ST_CLUSTERINTERSECTING (from what I understand, you don't have attributes to group on). Here is what the following example does:
Creates the output table, or truncates it if it exists; creates a sequence, or resets it if it exists.
"ordered": pulls the geometry from the input table and orders it by its geohash (*the geometry must be in degree units, like EPSG 4326, in order to geohash).
"grouped": uses the sequence to put the data into x groups. I divide by 10,000 here, but the idea is that your total number of entities divided by x gives you y groups. Try making the groups small enough to fit in memory but large enough to be performant. Each group then goes through ST_CLUSTERINTERSECTING, an unnest, and finally ST_UNARYUNION.
Inserts the result with ST_COLLECT and another ST_UNARYUNION of the geometries.
Here's the code:
DO $$
DECLARE
input_table VARCHAR(50) := 'valid_geom';
input_geometry VARCHAR(50) := 'geom_good';
output_table VARCHAR(50) := 'unary_output';
sequence_name VARCHAR(50) := 'bseq';
BEGIN
IF NOT EXISTS (SELECT 0 FROM pg_class where relname = format('%s', output_table))
THEN
EXECUTE '
CREATE TABLE ' || quote_ident(output_table) || '(
geom geometry NOT NULL)';
ELSE
EXECUTE '
TRUNCATE TABLE ' || quote_ident(output_table);
END IF;
IF EXISTS (SELECT 0 FROM pg_class where relname = format('%s', sequence_name))
THEN
EXECUTE '
ALTER SEQUENCE ' || quote_ident(sequence_name) || ' RESTART';
ELSE
EXECUTE '
CREATE SEQUENCE ' || quote_ident(sequence_name);
END IF;
EXECUTE '
WITH ordered AS (
SELECT ' || quote_ident(input_geometry) || ' as geom
FROM ' || quote_ident(input_table) || '
ORDER BY ST_GeoHash(geom_good)
),
grouped AS (
SELECT nextval(' || quote_literal(sequence_name) || ') / 10000 AS id,
ST_UNARYUNION(unnest(ST_CLUSTERINTERSECTING(geom))) AS geom
FROM ordered
GROUP BY id
)
INSERT INTO ' || quote_ident(output_table) || '
SELECT ST_UNARYUNION(ST_COLLECT(geom)) as geom FROM grouped';
END;
$$;
Caveats:
Change the DECLARE variables to your needs.
Since your input geometry is already named 'geom', the as geom alias will fail, so I would change SELECT ' || quote_ident(input_geometry) || ' as geom to SELECT ' || quote_ident(input_geometry).
Make sure all of your input geometries are valid or ST_UNARYUNION will fail. Check out ST_ISVALID and ST_MAKEVALID.
As said before, geohashing requires the projection to be in degree units. Check out ST_TRANSFORM (I transformed my geometry data to 4326).
Let me know if you have any more questions.
I'm learning DB2 and I ran into a problem while testing some options in my database.
I have 2 tables like this:
Country
=========
IdCountry -- PK
Name
State
=========
IdState -- PK
IdCountry -- FK to Country.IdCountry
Name
Code
And I am using queries like:
SELECT IdState, Name
FROM Tables.State
WHERE IdCountry = ?
Where ? is any working IdCountry, and everything worked fine.
Then I used SET INTEGRITY in the DB2 Control Center, accepting the default options, and the process was successful, but now my query isn't returning any results.
I tried using:
SELECT *
FROM Tables.State
Where IdCountry = ?
and it gives me back results.
While running tests on the table, I tried adding new states; they appear in the query that uses column names instead of *, but the old records are still missing.
I have no clue what's happening; does anyone have an idea?
Thanks in advance, and sorry for my poor English.
I'm assuming here that you're on Linux/Unix/Windows DB2, since z/OS doesn't have a SET INTEGRITY command, and I couldn't find anything about it with a quick search on the iSeries Info Center.
It's possible that your table is still in "set integrity pending" state (previously known as CHECK PENDING). You could test this theory by checking SYSCAT.TABLES using this query:
SELECT TRIM(TABSCHEMA) || '.' || TRIM(TABNAME) AS tbl,
CASE STATUS
WHEN 'C' THEN 'Integrity Check Pending'
WHEN 'N' THEN 'Normal'
WHEN 'X' THEN 'Inoperative'
END As TblStatus
FROM SYSCAT.TABLES
WHERE TABSCHEMA NOT LIKE 'SYS%'
If your table shows up, you will need to use the SET INTEGRITY command to bring your table into checked status:
SET INTEGRITY FOR Tables.State IMMEDIATE CHECKED
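If the check then fails because existing rows violate the constraint, one option (a sketch; Tables.State_Exc is a hypothetical exception table you would create beforehand with a compatible structure) is to route the offending rows there:
SET INTEGRITY FOR Tables.State IMMEDIATE CHECKED
    FOR EXCEPTION IN Tables.State USE Tables.State_Exc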
Using the stored procedure sp_msforeachtable, it's possible to execute a script for all tables in a database.
However, there are system tables which I'd like to exclude from that. Instinctively, I would check the properties IsSystemTable or IsMSShipped. These don't work as I expect; I have, for example, a table called __RefactorLog:
But when I query whether this is a system or MS-shipped table, SQL Server reports that none of my tables are system tables:
exec (N'EXEC Database..sp_msforeachtable "PRINT ''? = '' + CAST(ObjectProperty(Object_ID(''?''), ''IsSystemTable'') AS VARCHAR(MAX))"') AS LOGIN = 'MyETLUser'
-- Results of IsSystemTable:
[dbo].[__RefactorLog] = 0
[schema].[myUserTable] = 0
and
exec (N'EXEC Database..sp_msforeachtable "PRINT ''? = '' + CAST(ObjectProperty(Object_ID(''?''), ''IsMSShipped'') AS VARCHAR(MAX))"') AS LOGIN = 'MyETLUser'
-- Results of IsMSShipped:
[dbo].[__RefactorLog] = 0
[schema].[myUserTable] = 0
When I look at the properties of the table (inside SSMS), the table is marked as a system object. An object property like IsSystemObject doesn't exist, though (AFAIK).
How do I check whether a table is a system object, apart from the object property? How does SSMS check whether a table is a system object?
Management Studio 2008 seems to run some quite ugly code when opening the "System Tables" folder in Object Explorer; the key bit seems to be:
CAST(
case
when tbl.is_ms_shipped = 1 then 1
when (
select
major_id
from
sys.extended_properties
where
major_id = tbl.object_id and
minor_id = 0 and
class = 1 and
name = N'microsoft_database_tools_support')
is not null then 1
else 0
end
AS bit) AS [IsSystemObject]
(Where tbl is an alias for sys.tables)
So it seems that it's a combination - either is_ms_shipped from sys.tables being 1, or having a particular extended property set.
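A standalone version of that check you could reuse (a sketch distilled from the SSMS code above):
SELECT s.name AS [schema], t.name AS [table],
       CAST(CASE WHEN t.is_ms_shipped = 1
                   OR EXISTS (SELECT 1 FROM sys.extended_properties ep
                              WHERE ep.major_id = t.object_id
                                AND ep.minor_id = 0
                                AND ep.class = 1
                                AND ep.name = N'microsoft_database_tools_support')
                 THEN 1 ELSE 0 END AS bit) AS IsSystemObject
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;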
__refactorlog is, in contrast to what SSMS suggests, a user table. It is used during deployment to track schema changes that cannot be deduced from the current database state, for example renaming a table.
If all your other user tables are in a custom (non-dbo) schema, you can use a combination of the IsMSShipped/IsSystemTable attributes and the schema name to decide whether a table is 'in scope' for your script, for example:
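A sketch (the schema name is illustrative):
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.is_ms_shipped = 0
  AND s.name = N'myschema'; -- only tables in your custom schema are 'in scope'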
In the past I've worked on the assumption that, in the sys.objects table, column is_ms_shipped indicates whether an object is or is not a system object. (This column gets inherited by other system tables, such as sys.tables.)
This flag can be set by the procedure sp_ms_markSystemObject. This, however, is an undocumented procedure and is not supported by Microsoft; I don't think we're supposed to know about it, so I didn't tell you about it.
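For completeness, the call looks like this (unsupported and undocumented, so treat it as a curiosity rather than a recommendation):
-- flips the flag SSMS inspects; use at your own risk
EXEC sp_ms_markSystemObject N'dbo.__RefactorLog';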
Am I missing something?
However, there are system tables which I'd like to exclude from that
At least on SQL Server 2008, sp_MSforeachtable already excludes system tables, as this excerpt from it shows:
+ N' where OBJECTPROPERTY(o.id, N''IsUserTable'') = 1 ' + N' and o.category & ' + @mscat + N' = 0 '
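Incidentally, if you need to exclude further tables (such as __RefactorLog), sp_MSforeachtable also accepts a @whereand parameter that gets appended to that internal WHERE clause (again undocumented, so a sketch):
EXEC sp_MSforeachtable
     @command1 = N'PRINT ''?''',
     @whereand = N' AND o.name NOT LIKE ''[_][_]%'''; -- skip names starting with two underscores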
For kicks I'm writing a "schema documentation" tool that generates a description of the tables and relationships in a database. I'm currently shimming it to work with SQLite.
I've managed to extract the names of all the tables in a SQLite database via a query on the sqlite_master table. For each table name, I then fire off a simple
select * from <table name>
query, then use the sqlite3_column_count() and sqlite3_column_name() APIs to collect the column names, which I further feed to sqlite3_table_column_metadata() to get additional info. Simple enough, right?
The problem is that this only works for tables that are not empty. That is, the sqlite3_column_*() APIs are only valid after sqlite3_step() has returned SQLITE_ROW, which never happens for empty tables.
So the question is, how can I discover column names for empty tables? Or, more generally, is there a better way to get this kind of schema info in SQLite?
I feel like there must be another hidden sqlite_xxx table lurking somewhere that contains this info, but so far I have not been able to find it.
sqlite> .header on
sqlite> .mode column
sqlite> create table ABC(A TEXT, B VARCHAR);
sqlite> pragma table_info(ABC);
cid name type notnull dflt_value pk
---------- ---------- ---------- ---------- ---------- ----------
0 A TEXT 0 0
1 B VARCHAR 0 0
Execute the query:
PRAGMA table_info( your_table_name );
Documentation
PRAGMA table_info( your_table_name ); doesn't work in HTML5 SQLite.
Here is a small HTML5 SQLite JavaScript snippet which gets the column names from your_table_name even if it's empty. Hope it's helpful.
tx.executeSql('SELECT name, sql FROM sqlite_master WHERE type="table" AND name = "your_table_name";', [], function (tx, results) {
var columnParts = results.rows.item(0).sql.replace(/^[^\(]+\(([^\)]+)\)/g, '$1').split(',');
var columnNames = [];
for(var i in columnParts) {
if(typeof columnParts[i] === 'string')
columnNames.push(columnParts[i].split(" ")[0]);
}
console.log(columnNames);
///// Your code which uses the columnNames;
});
Execute this query; the LEFT JOIN from a one-row subquery always returns exactly one row, with NULLs in the table's columns, so sqlite3_step() returns SQLITE_ROW and the column APIs work even when the table is empty:
select * from (select "") left join my_table_to_test b on -1 = b.rowid;
You can try it in an online SQLite engine.
The PRAGMA statement suggested by @pragmanatu works fine through any programmatic interface, too. Alternatively, the sql column of sqlite_master holds the CREATE TABLE statement that describes the table (but you'd have to parse that, so I think PRAGMA table_info is more... pragmatic ;-).
If you are using SQLite 3.8.3 or later (which supports the WITH clause), this recursive query should work for basic tables. On CTAS (CREATE TABLE ... AS SELECT), YMMV.
WITH
Recordify(tbl_name, Ordinal, Clause, Sql)
AS
(
SELECT
tbl_name,
0,
'',
Sql
FROM
(
SELECT
tbl_name,
substr
(
Sql,
instr(Sql, '(') + 1,
length(Sql) - instr(Sql, '(') - 1
) || ',' Sql
FROM
sqlite_master
WHERE
type = 'table'
)
UNION ALL
SELECT
tbl_name,
Ordinal + 1,
trim(substr(Sql, 1, instr(Sql, ',') - 1)),
substr(Sql, instr(Sql, ',') + 1)
FROM
Recordify
WHERE
Sql > ''
AND lower(trim(Sql)) NOT LIKE 'check%'
AND lower(trim(Sql)) NOT LIKE 'unique%'
AND lower(trim(Sql)) NOT LIKE 'primary%'
AND lower(trim(Sql)) NOT LIKE 'foreign%'
AND lower(trim(Sql)) NOT LIKE 'constraint%'
),
-- Added to make querying a subset easier.
Listing(tbl_name, Ordinal, Name, Constraints)
AS
(
SELECT
tbl_name,
Ordinal,
substr(Clause, 1, instr(Clause, ' ') - 1),
trim(substr(Clause, instr(Clause, ' ') + 1))
FROM
Recordify
WHERE
Ordinal > 0
)
SELECT
tbl_name,
Ordinal,
Name,
Constraints
FROM
Listing
ORDER BY
tbl_name,
lower(Name);
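If you only need a single table, replace the final SELECT above with a filtered one (a sketch; ABC is the example table from earlier in this thread):
SELECT tbl_name, Ordinal, Name, Constraints
FROM Listing
WHERE tbl_name = 'ABC'
ORDER BY Ordinal;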