We have two Snowflake environments with different user logins: one for Development and one for Testing. Both have the same schemas and tables. I want to compare the row COUNT of all tables between DEV and TEST. Please help with viable options!
I can list the counts from one environment by querying information_schema.tables, but I need help connecting both environments and listing the counts side by side — like a DB link in Oracle.
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE, ROW_COUNT, CREATED, LAST_ALTERED FROM information_schema.tables;
There is no equivalent of DB links in Snowflake. The easiest solution is to run the counts in both environments and then compare them externally, e.g. in Excel.
Assuming the accounts are in the same cloud/region, you could share the tables from one to the other; they would then all be visible in a single account where you could run the comparison SQL.
The closest thing to a DB link in Snowflake is Data Sharing: share the selected tables from one account to the other, and you can compare the data there.
If your accounts are in different regions or on different cloud providers, you need Database Replication instead: set up replication between the accounts and synchronize the data whenever you want to verify it.
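Once the TEST tables are shared into the DEV account, the comparison could look something like the sketch below. DEV_DB and TEST_SHARED_DB are hypothetical names for the local database and the database created from the inbound share:

```sql
-- Hypothetical names: DEV_DB is the local database, TEST_SHARED_DB is the
-- database created from the inbound share.
SELECT COALESCE(d.table_schema, t.table_schema) AS table_schema,
       COALESCE(d.table_name,  t.table_name)   AS table_name,
       d.row_count AS dev_row_count,
       t.row_count AS test_row_count
FROM DEV_DB.information_schema.tables d
FULL OUTER JOIN TEST_SHARED_DB.information_schema.tables t
  ON  d.table_schema = t.table_schema
  AND d.table_name   = t.table_name
WHERE COALESCE(d.table_type, t.table_type) = 'BASE TABLE'
  AND d.row_count IS DISTINCT FROM t.row_count;
```

The FULL OUTER JOIN also surfaces tables that exist in only one environment, which a plain inner join on table names would silently drop.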
I have multiple databases (>100) with an identical structure.
For business monitoring, I have about 80 queries that check information in the databases.
Now I want to execute each of these queries on each of these databases and load the results into Splunk.
In Splunk, is it possible to define the >100 database connections once and the 80 queries once, and then have some "magic" step execute each statement on each database?
I don't want to create a new connection for each combination of database and query.
The only way Splunk has to connect to a database "itself" is via DB Connect (docs).
From Splunk's perspective, there is no way to connect to 100 databases without having unique connections to each.
So far as I know, there is no tool that will connect to more than one database without unique connections - that's something database servers enforce in transactional models.
That being said, if you have a way to enumerate all the databases you want to connect to, and a place to save the queries you want to run, you could build either a
scripted-input add-on that could use your language of choice (whatever's available on the Splunk server(s)/endpoint(s) it's running on) to iterate through each database, run each query, and ship the results back to Splunk, or
in similar fashion to the scripted-input option, write a script (or set of scripts) that would execute the queries in question against the databases you're targeting, and submit results to the HTTP Event Collector (HEC) (HL has a great write-up on HEC over here, and here's George Starcher's Python class for HEC)
I have two companies using the same application running on an Oracle Database. The companies are now merging into a single company. The databases are huge, with an approximate size of 10 TB.
We want to merge the two application databases, or to have a single application pointing at both databases, with minimal work.
Help is highly appreciated.
Regards
Bjm
Use the DB Links feature in Oracle.
For more information, see the Oracle documentation on database links:
https://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm#ADMIN12083
It would enable you to build a SQL statement that references tables from the two different databases.
If you want to access the data in instance B from instance A, you can use the query below, editing the details accordingly:
CREATE DATABASE LINK dblink_example
CONNECT TO xxusernamexx IDENTIFIED BY xxpasswordxx
USING '(DESCRIPTION=
          (ADDRESS=
            (PROTOCOL=TCP)
            (HOST=xxipaddrxx / xxhostxx)
            (PORT=xxportxx))
          (CONNECT_DATA=
            (SID=xxsidxx)))';
Now you can execute the below query to access the table:
SELECT * FROM tablename@dblink_example;
You can run SELECT queries and DML over a DB link; note that DDL cannot be executed directly across a database link.
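As a sketch, once the link exists you can join local and remote tables in a single statement. The table and column names here are made up for illustration:

```sql
-- Hypothetical tables: CUSTOMERS exists locally, ORDERS lives in the remote
-- database reached through the dblink_example link.
SELECT c.customer_id, c.name, o.order_total
FROM customers c
JOIN orders@dblink_example o
  ON o.customer_id = c.customer_id;
```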
I have two databases on the same Azure SQL server. I want the two databases to interact with each other using a trigger, i.e. if a record is inserted into the Customer table of the first database, a trigger fires and the record is inserted into the other database.
We had/have the same problem with insert-update-delete triggers: we write a record to Database-1, which holds the primary table, but also update Database-2, where we keep "archive" versions of the tables.
The only solution we have identified and are testing is to bring all of the tables into a single database and separate the different tables under separate database schemas in the one database.
Analysis so far of this approach looks promising.
I think what you're trying to do is not allowed in SQL Azure. In my experience, it is bad practice on-premises as well (think backup/restore and availability scenarios).
You should move the dependency in the application and have the application update both databases, as appropriate.
Anyway, if you want to continue with this approach, take a look at the Elastic Query feature: https://learn.microsoft.com/en-in/azure/sql-database/sql-database-elastic-query-overview
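A minimal Elastic Query sketch, assuming a read-only cross-database query is enough. All object names, the server address, and the credentials are placeholders; note that external tables are read-only, so a trigger could not insert through them — this only lets the first database query the second:

```sql
-- Run in the first database; all names and secrets are placeholders.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteCred
WITH IDENTITY = '<remote user>', SECRET = '<remote password>';

CREATE EXTERNAL DATA SOURCE SecondDbSource
WITH (TYPE = RDBMS,
      LOCATION = '<server>.database.windows.net',
      DATABASE_NAME = 'SecondDb',
      CREDENTIAL = RemoteCred);

-- Mirrors the Customer table in the second database; read-only.
CREATE EXTERNAL TABLE dbo.RemoteCustomer (
    CustomerId INT,
    Name NVARCHAR(100)
)
WITH (DATA_SOURCE = SecondDbSource,
      SCHEMA_NAME = 'dbo',
      OBJECT_NAME = 'Customer');

SELECT * FROM dbo.RemoteCustomer;
```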
Please let me know if I can help with something
I have the following scenario
I have 4 different (SQL Server) databases (legacy), one for each geo (NA, AP, LA, EMEA). The schema is the same in all of them.
I am in the process of creating a front end which will go across the 4 databases based on the user's selection. I am thinking of using Entity Framework. The databases are on different servers. What is the best way to create the entities? Should I create 4 different EDMX files? There will be scenarios where a user's results need to come from more than one database.
Thanks,
Nagendra
If the databases are exactly the same, you can create the EDMX file for just one of them (the mapping will be the same for all DBs) and use 4 ObjectContext instances with different connection strings. The problem is your second requirement: querying multiple DBs means you have to query each one separately and merge/union the results in memory on the application server. So this approach is not well suited to advanced scenarios where you need to run complex queries across all the databases at once.
Here's my problem: I have a website that uses two different, non-identical SQL Server databases: one stores information about the website users, the other stores information about my online retail store. But my hosting plan only lets me deploy a single database, so I want to combine my two databases into one.
Is there an easy way to combine the two databases into one, instead of creating every single table separately? The two databases do not share any data/columns/tables in common.
Can someone please let me know how to get through this? I would really appreciate any help.
Thanks!
You can script the objects and import the data, ensuring that any dependencies are created in the right order.
If you need to maintain any sort of logical separation, you can also use SCHEMAs within the database (starting with SQL Server 2005) to organize them into two distinct areas - this would most likely require an application change, however.
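For the schema-based separation mentioned above, a sketch (the schema and table names are illustrative):

```sql
-- Create a schema for the retail-store objects and move an imported table into it.
CREATE SCHEMA store AUTHORIZATION dbo;
GO
ALTER SCHEMA store TRANSFER dbo.Orders;
-- The table is now referenced as store.Orders instead of dbo.Orders,
-- which is the application change the answer warns about.
```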
If tables are all you want to move, I suggest you use the Import and Export Data tool to import all the tables of database A into database B.
If you have views, stored procedures, etc., I suggest you generate scripts for all of them and run those scripts on the destination database once you have transferred the tables.
Use the Import Data tool to move the one database's tables and data from the second database into the first one.