I created a MicroStrategy multi-source project with parameterized queries enabled:
Teradata: lookup tables
Snowflake: fact tables (primary database)
When I run a report, MSTR creates tables on Snowflake and inserts data from Teradata.
MSTR is doing a Multi-row Insert Using Explicitly-specified Values, which is fine for me, but I would like to know how to change the number of values used when inserting data (it's not the same number for all tables).
Are there any parameters to change to control the number of rows inserted?
Thanks
Regards,
Yassine
I have a MSSQL database whose structure is replicated over a Postgres database.
I've enabled CDC in MSSQL and I've used the SQL Server CDC Client in StreamSets Data Collector to listen for changes in that db's tables.
But I can't find a way to write to the same tables in Postgres.
For example, I have 3 tables in MSSQL:
tableA, tableB, tableC. The same tables exist in Postgres.
I insert data into tableA and tableC, and I want those changes replicated over to Postgres.
In StreamSets DC, in order to write to Postgres, I'm using JDBC Producer, and in the Table Name field I've specified: ${record:attribute('jdbc.tables')}.
Doing this, the data will be read from tableA_CT, tableB_CT and tableC_CT, the change tables created by MSSQL when you enable the CDC option, so those are the table names I end up with in ${record:attribute('jdbc.tables')}.
Is there a way to write to Postgres in the same tables as in MSSQL ?
You can cut the _CT suffix off the jdbc.tables attribute by using an Expression Evaluator with a Header Attribute Expression of:
${str:isNullOrEmpty(record:attribute('jdbc.tables')) ? '' :
str:substring(record:attribute('jdbc.tables'), 0,
str:length(record:attribute('jdbc.tables')) - 3)}
Note - the str:isNullOrEmpty test is a workaround for SDC-9269.
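For illustration, that expression just drops the last three characters (the _CT suffix) after guarding against an empty attribute. The same logic in Python (the function name here is made up):

```python
def source_table(cdc_table: str) -> str:
    """Map a CDC change-table name (e.g. 'tableA_CT') back to its source table."""
    if not cdc_table:        # mirrors the str:isNullOrEmpty guard
        return ''
    return cdc_table[:-3]    # drop the trailing '_CT' (3 characters)

print(source_table('tableA_CT'))  # tableA
```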
I have to execute CSV records batch by batch against SQL Server. In the macro, I am framing the SQL query based on the foreign key in each record, so I cannot use BULK INSERT here.
I wrote code to iterate 100 rows at a time and build them into a string like below:
INSERT INTO [table] (name, address, region) VALUES ('Test', 'xxx-test-address', 1);
INSERT INTO [table] (name, address, region) VALUES ('Test', 'xxx-test-address', 1);
INSERT INTO [table] (name, address, region) VALUES ('Test', 'xxx-test-address', 1);
INSERT INTO [table] (name, address, region) VALUES ('Test', 'xxx-test-address', 1);
Now, I am executing the 100 SQL inserts against SQL Server as one batch. If an error is thrown at, say, the 50th row, the remaining rows are not executed and the error surfaces in VBA.
My question is: how do I find which row is throwing the error?
If that cannot be achieved with this approach, please suggest one that can. Since the CSV will contain 10,000 or more records, I cannot execute the records one at a time in a loop; that would hit the database too many times.
Thanks in advance!
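One common way to answer the "which row failed?" question while still batching is to execute the whole batch first and, only if it fails, replay that same batch row by row to locate the offenders. The original code is VBA against SQL Server, so this is only a sketch of the logic, using Python with sqlite3 as a stand-in (table and column names are invented):

```python
import sqlite3

def insert_batch(conn, rows):
    """Try the whole batch first; if it fails, replay the rows one by one
    to find out which ones are bad. Returns a list of (index, error)."""
    sql = "INSERT INTO people (name, address, region) VALUES (?, ?, ?)"
    try:
        with conn:                     # one transaction for the whole batch
            conn.executemany(sql, rows)
        return []                      # whole batch succeeded, nothing failed
    except sqlite3.Error:
        pass                           # batch rolled back; fall through
    failures = []
    for i, row in enumerate(rows):     # replay row by row
        try:
            with conn:
                conn.execute(sql, row)
        except sqlite3.Error as exc:
            failures.append((i, str(exc)))
    return failures

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, address TEXT, region INTEGER NOT NULL)")
rows = [("Test", "xxx-test-address", 1),
        ("Bad", "xxx-test-address", None),   # violates NOT NULL
        ("Ok", "xxx-test-address", 2)]
failures = insert_batch(conn, rows)
print([i for i, _ in failures])   # the offending row index: [1]
```

The good rows still land on the replay pass, and the failing indices can be mapped back to CSV line numbers, so a bad batch costs roughly two round trips instead of one per row.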
OK, I have a database with a table LOOKUP (1st), and the same database on another server, also with LOOKUP (2nd).
Is there a way I can insert into the 1st database from the 2nd, skipping duplicates, so that every value present in the 2nd also ends up in the 1st? Basically I want the two databases to be exactly the same!
The thing that confuses me is that they are on different servers.
Could I export one to something like Excel and import it again to replace my database?
You will have to use 2 MERGE queries if you want to make both databases identical. This is because the first MERGE will only insert into DB2 the records that are available in DB1; DB1 will still not contain the records present in DB2 but not in DB1.
I would suggest you do this task using SSIS.
You can use 2 sources DB1 and DB2 and a LOOKUP transformation on each source (LKP1 and LKP2).
Then you can insert the No Match output of LKP1 into DB2 as destination and No Match output of LKP2 into DB1 as destination.
This will solve the multi-server issue as well because you can create connection to any server in SSIS.
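The core of both suggestions (the two MERGEs, or the two No Match lookup outputs) is the same two-way "insert whatever the other side is missing" pass. As an illustration of that logic only, here it is in Python with two sqlite3 in-memory databases standing in for the two servers (the LOOKUP columns are invented):

```python
import sqlite3

def sync_missing(src, dst):
    """Insert into dst every LOOKUP row whose key is absent there
    (the 'No Match' branch of a lookup)."""
    dst_keys = {k for (k,) in dst.execute("SELECT id FROM LOOKUP")}
    for row in src.execute("SELECT id, value FROM LOOKUP"):
        if row[0] not in dst_keys:   # duplicate keys are skipped
            dst.execute("INSERT INTO LOOKUP (id, value) VALUES (?, ?)", row)
    dst.commit()

db1 = sqlite3.connect(":memory:")
db2 = sqlite3.connect(":memory:")
for db in (db1, db2):
    db.execute("CREATE TABLE LOOKUP (id INTEGER PRIMARY KEY, value TEXT)")
db1.execute("INSERT INTO LOOKUP VALUES (1, 'only-in-db1')")
db2.execute("INSERT INTO LOOKUP VALUES (2, 'only-in-db2')")

sync_missing(db1, db2)   # first pass: rows only in db1 go to db2
sync_missing(db2, db1)   # second pass: rows only in db2 go to db1
print(sorted(db1.execute("SELECT id FROM LOOKUP")))  # [(1,), (2,)]
```

After both passes, each database holds the union of the two row sets, which is exactly what the two No Match outputs achieve in SSIS.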
I have an XML column in a table, and it is defined by a schema. I am trying to insert values into this table using Insert into tbl1 Select * from tbl for xml, but this fails due to a schema validation failure on one of the records. I want to at least insert the records that passed validation, and I can capture the others later. Can someone help me with this?
SQL Server validates the whole dataset, not single rows. If you want to validate row by row using SQL Server tools, the methods are:
SQLCLR (fastest) link
SSIS (easy to create) - using a FOREACH loop, you try to insert each row into the table; all failed rows are redirected to another table.
TSQL TRY/CATCH block - insert the XML from a single row into a schema-validated variable. Slowest one.
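The row-by-row idea behind the TRY/CATCH option can be sketched as follows. This is a Python illustration only: the standard library cannot do XSD validation, so a well-formedness check via xml.etree stands in for the schema check, and a tbl1 with a single payload column is assumed:

```python
import sqlite3
import xml.etree.ElementTree as ET

def insert_valid_rows(conn, xml_rows):
    """TRY/CATCH per row: insert rows whose XML parses, divert the rest
    to a reject list so they can be captured later."""
    rejected = []
    for i, xml_text in enumerate(xml_rows):
        try:
            ET.fromstring(xml_text)      # stand-in for schema validation
            conn.execute("INSERT INTO tbl1 (payload) VALUES (?)", (xml_text,))
        except ET.ParseError as exc:
            rejected.append((i, str(exc)))
    conn.commit()
    return rejected

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (payload TEXT)")
rejected = insert_valid_rows(conn, ["<a>1</a>", "<broken>", "<b/>"])
print([i for i, _ in rejected])   # [1]
```

The passing rows are committed and the rejects carry their index and error text, which matches the "insert what validates, capture the rest later" requirement.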
I have one table named "Staff" in Access and also have this table (same name) in SQL Server 2008.
Both tables have thousands of records. I want to merge records from the Access table into the SQL table without affecting the existing records in SQL. Normally, I just export using the ODBC driver, and that works fine if the table doesn't already exist in SQL Server. Please advise. Thanks.
A simple append query from the local access table to the linked sql server table should work just fine in this case.
So, just drop in the first (from) table into the query builder. Then change the query type to append, and you are prompted for the append table name.
From that point on, just drop in the columns you want (do not drop in the PK column, as it need not be used nor transferred in this case).
You can also type in the sql directly in the query builder. Either way, you will wind up with something like:
INSERT INTO dbo_custsql
( ADMINID, Amount, Notes, Status )
SELECT ADMINID, Amount, Notes, Status
FROM custsql1;
This may help: http://www.red-gate.com/products/sql-development/sql-compare/
Or you could write a simple program to read from each data set and do the comparison, adding, updating, and deleting, etc.
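Such a comparison program boils down to diffing the two row sets by primary key and deriving the adds, updates, and deletes. A rough sketch in Python, with the two data sets already loaded as id-to-row dictionaries (all names invented):

```python
def diff_by_key(source, target):
    """Return what it would take to make target identical to source:
    keys to add, keys to update, and keys to delete."""
    adds    = [k for k in source if k not in target]
    deletes = [k for k in target if k not in source]
    updates = [k for k in source if k in target and source[k] != target[k]]
    return adds, updates, deletes

source = {1: "alice", 2: "bob", 3: "carol"}   # e.g. rows from server A
target = {2: "bobby", 4: "dave"}              # e.g. rows from server B
print(diff_by_key(source, target))   # ([1, 3], [2], [4])
```

Applying the three lists as INSERT, UPDATE, and DELETE statements against the target would then bring the two tables in line, which is essentially what a schema/data comparison tool automates.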