In Snowflake, is there a way to re-share, like a symbolic link? - snowflake-cloud-data-platform

I currently have data sources that are being consumed by a lot of data consumers. I have now been given a new set of data sources to replace the existing ones, but I want the new sources to keep the names of the previous data sources while pointing at the new data, like a symbolic link in Linux. Is there such a feature in Snowflake, like a symbolic table or a DB link?
I know I can wrap the table in a view, but there is a performance issue when using a view, especially when it's a billion records.
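For reference, here is a minimal sketch of the view-as-alias approach mentioned in the question, using the snowflake-connector-python package. The account, credentials, database, and table names (OLD_SALES, NEW_SALES) are hypothetical placeholders, not anything from the original post.

```python
# Sketch: expose the new table under the old table's name via a view.
# All object names and credentials below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Consumers keep querying OLD_SALES; it now resolves to NEW_SALES.
    cur.execute("""
        CREATE OR REPLACE VIEW ANALYTICS.PUBLIC.OLD_SALES AS
        SELECT * FROM ANALYTICS.PUBLIC.NEW_SALES
    """)
finally:
    conn.close()
```

A plain SELECT * pass-through view like this is essentially just a name indirection; whether it actually hurts performance at the billion-row scale mentioned above is worth verifying in your own environment.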

Related

AzerothCore - Don't see data in table areatable_dbc

I wanted to change some things regarding zones in the DB, but I see no data in the tables. Is it possible that this is protected from viewing/editing? Or is this data just not used by the core anymore, and I need to look somewhere else?
Table areatable_dbc
I'm new to this stuff so any advice will be helpful. Thank you
That table can be used to override data from the area.dbc file
From this topic:
Those tables must be there to allow people to create their own set of data
or just customize the DBC. The cases are essentially the following:
tables empty, DBC files installed (classic way)
tables with some custom data + DBC files installed (to easily customize DBCs that do not also need to be installed in the client)
no DBC files installed and personal data inside those tables (you're using AzerothCore for your MMO project)
There is a file attached to the release 3.0.0-dev containing the default data for the DBC tables if you need to customize them.
Note: keep in mind that certain DBC changes also require client modification; very few DBCs are exempt from the client mod.
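If it helps to tell which of the cases above applies to your install, here is a small sketch that simply checks whether areatable_dbc holds any rows. The host, credentials, and the world database name acore_world are assumptions, not values from the post.

```python
# Sketch: check whether areatable_dbc is populated or empty (the "classic" case above).
# Host, credentials, and the database name "acore_world" are assumptions.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="acore", password="acore", database="acore_world")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM areatable_dbc")
        (count,) = cur.fetchone()
        if count == 0:
            print("areatable_dbc is empty: the core is reading area data from the DBC files.")
        else:
            print(f"areatable_dbc holds {count} override rows.")
finally:
    conn.close()
```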

Talend: how to copy one DB to another

I need to copy a lot of tables from one DB to another, and I want to do it quickly. What is the fastest way to do that? I'm new to Talend; I know it is possible to do something like tOracleInput -> tMap -> tOracleOutput, but it would take a lot of time to do it for 40 tables.
If you want to transfer all the tables, you can use tTransferDatabase;
refer to the documentation pages for more details.
You can download the tTransferDatabase component and install it in TOS.
Once that is done, the component will be displayed in the palette. Drag and drop it onto the Job Designer and configure it as follows.
It will ask for a source connection and a target connection.
Create a tOracleConnection for the source and the target and provide them to the tTransferDatabase component.
Select the migration type "schema & data".
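Outside Talend, the same "loop over table names instead of building 40 jobs" idea can be sketched in a few lines. The sketch below uses pandas with SQLAlchemy Oracle connections and is only an illustration of the looping approach, not the tTransferDatabase component; all connection strings and table names are hypothetical.

```python
# Sketch of a generic table-copy loop (not Talend): read each table from the
# source Oracle DB and append it to the target. Connection strings are placeholders.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("oracle+cx_oracle://src_user:src_pwd@src-host:1521/?service_name=SRC")
target = create_engine("oracle+cx_oracle://tgt_user:tgt_pwd@tgt-host:1521/?service_name=TGT")

tables = ["customers", "orders", "order_items"]  # hypothetical list of the ~40 tables

for table in tables:
    # chunksize keeps memory bounded for large tables
    for chunk in pd.read_sql_table(table, source, chunksize=50_000):
        chunk.to_sql(table, target, if_exists="append", index=False)
    print(f"copied {table}")
```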

Generating several similar SSIS packages (file data source to DB)

Is there a way to automatically generate SSIS packages? I need to create a lot of SSIS packages that just erase data from one table and import data from a text file. The file name matches the table name, and the column headers are in the first line of the file.
For more detailed information:
I am working on a project in which I have to separate two systems that are currently coupled (one system has direct access to the other's database). After the modifications, one system will provide data through txt files to be loaded in the other database.
We have to use SSIS to load data into the database from the text files.
The text files will be provided in CSV format with column headers in the first line.
The tables from both databases have matching column names, and all we need to do is clear the table and load data from the files.
I have more than one hundred tables with different numbers of columns. Do I need to create each package manually?
I'm familiar with 2 free options.
EzAPI might be a good place to start if you're a .NET-heavy shop or just really want to geek out with the API. This approach allows you to control pretty much the entire package generation, but at the cost of coding time. I find EzAPI generally easier than working with the base COM/.NET libraries for SSIS.
Biml is an interesting beast. Varigence will be happy to sell you a license to Mist, but it's not needed. All you would need is BIDSHelper; then browse through BimlScript and look for a recipe that approximates your needs. Once you have that, click the context-sensitive menu button in BIDSHelper and, whoosh, it generates packages.
I did this just using VB. I passed in the table names as a command parameter and used VB to generate the insert and clear; it worked a charm... I can try and dig it out tomorrow when I'm back in the office, but it was pretty simple. There didn't seem to be any other way to say "just get x and export it" or "just take y and import it into z", so VB it had to be. In fact, come to think of it, I think I actually used a small XML file to pass the table info for export and then determined the table name for import from the CSV file name. To be clear, this was only one package, but it could dynamically choose the number of imports/exports it did. Further clarification: this was VB within SSIS as a processing step.
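To illustrate the metadata-driven idea behind all three answers (derive the table name from the file name, read the headers from the first line, generate the clear-and-load step per table), here is a hedged Python sketch that emits plain T-SQL TRUNCATE + BULK INSERT statements rather than actual SSIS packages. The drop-folder path and file naming are hypothetical.

```python
# Sketch: generate a clear-and-load script per CSV file, using the convention
# from the question (file name = table name, headers on the first line).
# The folder path is a placeholder; this emits T-SQL, not SSIS packages.
import csv
from pathlib import Path

DROP_FOLDER = Path(r"C:\feeds\incoming")   # hypothetical location of the csv/txt files

for csv_file in sorted(DROP_FOLDER.glob("*.csv")):
    table = csv_file.stem                   # file name matches the table name
    with csv_file.open(newline="") as fh:
        headers = next(csv.reader(fh))      # column names from the first line
    print(f"-- {table}: columns {', '.join(headers)}")
    print(f"TRUNCATE TABLE dbo.[{table}];")
    print(
        f"BULK INSERT dbo.[{table}] FROM '{csv_file}' "
        "WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n');"
    )
```

An EzAPI or BimlScript solution automates the same loop over metadata, only emitting package XML instead of SQL.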

How can I import Landslide CRM data into Salesforce?

I'm importing some customer information from Landslide CRM into Salesforce.
Does anyone have advice on the best methodology for doing the import? It seems like the Apex Data Loader is the best way to go, but I don't know if there are any issues with handling the objects in question, or if there might be a specific tool or script to perform this migration. Any experience with this import specifically, or with importing data into Salesforce in general, would be appreciated.
Importing data to Salesforce can be achieved in multiple ways depending on the type of data and the requirements you have.
The first thing to do is get your data into CSV files, so you'll need to find a way to export the data first. For UTF-8 encoded data, don't use Excel; use something like OpenOffice (only required if you have UTF-8 characters).
If it's account and contact data, for example, there is an import wizard available in Setup > Administration Setup > Import Business Accounts/Contacts.
The next option is, as you say, to use the Apex Data Loader. This is probably the best approach.
The first thing, and this is critical for big migrations, is to create a field on your Account object which will be a unique field for reference purposes. When creating this field, set it as an External ID field and populate it with a unique reference for your accounts; the same goes for anything else which will be a parent (you'll see why shortly).
Next, use the Insert option in the Data Loader to load the data, mapping all the fields, especially the External ID.
Now, when you upload child objects, use the Upsert option and map your Account ID via the External ID created earlier. This will match the accounts using your unique ID instead of you having to use the Salesforce ID, which saves a lot of time.
Repeat the same for other objects and you should be good to go.
Apologies for the lack of structure here... I'm doing this while at work and don't have a lot of time, but I hope this helps.
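As an illustration of the external-ID pattern described above (outside the Apex Data Loader itself), here is a hedged Python sketch using the simple-salesforce library. The credentials, record values, and field names Landslide_ID__c and Landslide_Contact_ID__c are hypothetical.

```python
# Sketch: upsert an account keyed by an external ID, then upsert a child contact
# that references its parent account via that external ID (no Salesforce IDs needed).
# Credentials and the Landslide_ID__c / Landslide_Contact_ID__c field names are assumptions.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="secret", security_token="token")

# Parent: upsert the account against its external ID (insert-or-update).
sf.Account.upsert("Landslide_ID__c/L-1001", {"Name": "Acme Corp"})

# Child: reference the parent account by external ID instead of its Salesforce ID.
sf.Contact.upsert(
    "Landslide_Contact_ID__c/LC-2001",
    {
        "FirstName": "Jane",
        "LastName": "Doe",
        "Account": {"Landslide_ID__c": "L-1001"},  # relationship resolved via external ID
    },
)
```

This only shows the external-ID matching idea record by record; for the volumes the answer describes, the Data Loader (or a bulk API) is the practical route.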
The data loader works great for most types of imports. The one suggestion I would give you is to create a new custom field on your target objects (presumably Account and Contact) called "Landslide ID" or similar, identify it as an external ID field, and then import the primary keys from your source system into this field (along with the "real" data).
Doing this achieves a couple of things. First, you have an easy unique link back to the source data for troubleshooting or tracing back to the source system. Second, if you find yourself in a situation where you need to import more fields or related data from the original source system, you'll be able to do so in an easy and correct way. It's just a good standard practice to adopt when doing data migrations; it's almost no additional effort and can save you many hours in the future.

storing database values in source control

We have a table in our database that stores XSLs and XSDs that are applied to XML documents created in our application. This table is versioned in the sense that each time a change is made, a new row is created.
I'm trying to propose that we store the XSL's and XSD's as files in our Source control system instead of relying on the database to track the history. Each time a file is updated, we would deploy the new version to the database.
I don't seem to be getting much agreement on the issue. Can anyone help me out with the pros and cons of this approach? Perhaps I'm missing something.
XSL and XSD files are part of the application and so ought to be kept under source control. That's just obvious. Even if somebody wanted to categorise them as data, they would be reference data and so, in my book at least, would need to be kept under source control. This is because reference data is part of the application and so part of its configuration. For instance, applications which use the database to store values for drop-downs or to implement business rules need to be certain that it holds the right version of the data.
The only argument for keeping multiple versions of the files in the database would be if you might need to process older versions of the XML files. This depends on the nature of your application. Certainly I have worked on systems where XML files / messages came from external (third-party) systems, where we really had no control over the format of the messages sent. So for a variety of reasons we needed to be able to handle incoming XML regardless of whether its structure was current or historical. But this is in addition to storing the files in a source control repository, not instead of it.
