Can I get DDL for a database clone in snowflake? - snowflake-cloud-data-platform

We have different clients and each client has its own database. For each client, our roles are named something like write_role_clientID. For any new client I want to clone an older DB, but the roles should use that client's ID. I know I can get the DDL to create a DB, but that doesn't include any privileges/grants. If I could somehow get the DDL for a clone (since in a clone the privileges are also inherited by the objects), I might be able to manipulate the SQL using code.

SELECT GET_DDL('database', 'your_cloned_database') should work fine on a cloned database. However, I'd be cautious about using clones for new clients, as the new client will have access to the old client's data unless you go through and TRUNCATE every table (and be careful around Time Travel!).
It's probably easier to do something like SELECT REPLACE(GET_DDL('database', 'original_database'), 'old_client_id', 'new_client_id') (you could probably even just EXECUTE that, but YMMV).
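As a rough sketch of that approach (the database names and client IDs below are placeholders, not anything from the question):

-- DDL for the cloned (or original) database, including its child objects
SELECT GET_DDL('database', 'client_0001_db');

-- Same DDL with the old client id swapped for the new one. GET_DDL returns a
-- multi-statement script, so review it and run the statements individually (or
-- from a deployment script) rather than expecting a single EXECUTE to do it all.
SELECT REPLACE(
         GET_DDL('database', 'client_0001_db'),
         'client_0001',
         'client_0002'
       ) AS new_client_ddl;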

Related

Insert Data From Access Database to SQL Server

Desired result and why:
I have a lot of old Access databases that we are trying to get to SQL Server, and I'm essentially trying to make the Access DB the "middleman" so our old programs can still read/write to them but the information will also be saved in SQL Server. We need the middleman because of how interconnected these tables are through various programs we are rewriting in modern languages. Once we rewrite all of them we will cut the cord and live in SQL Server, but this will take a lot of time.
What I've tried:
We tried creating a linked table to SQL Server and renaming it so it would take the place of the original table. After doing this the table stopped receiving data so we quickly reverted back.
In order to investigate this I created Table B which is just another linked table to SQL Server, and then tried using the After Insert macro on Table A to send any new rows to the linked table but nothing happens. If I manually add a record to Table B it carries over to SQL Server just fine, but I can't get Table A to send data to Table B. I created Table C that is just a local access table and if I manually add a record to Table A it does show up in Table C. No errors at all, it just doesn't do what I need it to do.
I'm lost on how to accomplish this and open to any help or suggestions on how to move forward. One thing to note, though, is that most of the Access databases I have are not using forms at all, which is why I'm trying to take the macro route instead of any VBA. I need these to trigger without any interaction from the user.
You should use the tool dedicated to this task:
SQL Server Migration Assistant for Access (AccessToSQL)
OK, from the comments there are some new and significant moving parts here.
For example, data is to be migrated to SQL Server. As noted, even in Access land, each and every table needs and should have a PK for "basic" database operations. While it is possible to do some work, say some importing of data, without one, the instant you want forms and VBA code and start building a working application, all tables should have a PK.
And of course, if you move the data to SQL Server, it does not make a lot of sense to have OTHER applications attempt to modify the linked tables in Access, since the data is not in Access anymore. Those other sources should in theory also hit SQL Server, and not attempt to use what amounts to a link on a linked table.
However, it does depend. For example, if you use VB.NET code to open an Access database, that code CAN in fact open an Access table, and it can even be a linked table. (However, it would make a WHOLE lot more sense for the VB.NET code to hit SQL Server directly - introducing a link on a link is going to be problematic.)
However, in testing I have found that VB.NET can open an Access table, and even if it is a link, Access will translate through the Jet engine (the Access data engine), so this does work.
However, data macros and table triggers on existing Access tables? They might work against linked tables, but you of course need to ensure that the linked table allows edits and inserts. Only AFTER you have verified that you can open a linked table to SQL Server, edit a row, and add a row should you mess around with data macros and triggers on, say, local tables.
It also depends on what new software tools and platform are being used here.
But from a basic database point of view - and general data management?
All code and designs should be built around the assumption that each row of data has a PK. Exceptions are possible, but they are a rare use case.
Practical data management - table design, workflow design, and the developer's point of view - should assume the concept of a PK row id. Without that assumption you are not in the software industry anymore but in a hack field, one that will cause great difficulty later when you attempt to build workflows and general information systems.
So, with the above in mind: your Table B has to work as a valid SQL Server table.
The SQL Server table(s) need a PK. After linking to SQL Server, open the linked table in Access: test that edits work, test that adding works, and perhaps even test that deleting works. Only after that should you start testing any code or other operations from the Access client side.
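For what it's worth, a minimal sketch of the kind of SQL Server table that tends to link cleanly into Access (the table and column names are made up; the points that matter are the IDENTITY primary key and, for Access concurrency, a rowversion column):

CREATE TABLE dbo.TableB (
    ID       INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_TableB PRIMARY KEY,
    SomeText NVARCHAR(255) NULL,
    RowVer   ROWVERSION    -- helps Access detect concurrent changes on linked tables
);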
Using a linked table from another application? That is a foggy area, but I can confirm, for example, that the .NET OLE DB provider can open an Access database and consume even linked tables.
You also don't mention whether you are using a SQL logon or Windows auth for the SQL Server linked tables. If you are using SQL logons, then when linking a table you are offered a "Save password" check box - make sure you select it when linking the table(s) in question.
Note that you ONLY get this prompt when the table link is first created - later use of the Linked Table Manager (such as refreshing links) does not offer it. If you don't select the save password option, you will often see a SQL logon prompt when you attempt to open a linked table in Access.

SSDT implementation: Alter table instead of Create

We are just trying to implement SSDT in our project.
We have lots of clients for one of our products which is built on a single DB (DBDB) with tables and stored procedures only.
We created one SSDT project for database DBDB (using VS 2012 > SQL Server object Browser > right click on project > New Project).
Once we build that project it creates one .sql file.
Problem: if we run that file on the client's DBDB, it creates all the tables again and deletes all the records in them [this brings the schema up to date but wipes the existing records :-( ]
What we need: only the changes that are not already present on the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database to compare it with our latest DBDB. We can only send them some magic script file which will update their DBDB to the latest state.
The only way to update the client's DB is to compare the DB schemas and then apply the delta. However you do it, you will need some way to get hold of the schema that's running at the client:
IF you ship a versioned product, it is easiest to deploy version N-1 of that to your development server and compare that to the version N you are going to ship. This way, SSDT can generate the migration script you need to ship to the client to pull that DB up to the current schema.
IF you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema on site (maybe using SSDT there) and then let SSDT create the delta.
Option: you can skip the compare feature of SSDT altogether, but then you need to write your migration script yourself. For each modification to the schema, write the DDL statements yourself and wrap them in IF clauses that check for the old state, so the changes are only made once and only if the old state exists. This way it doesn't really matter from which state to which state you are going, as the script determines at each step whether and what to do.
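For example, a hedged sketch of one such re-runnable step (the table and column names are invented for illustration):

-- Only add the column if it is not there yet, so the script is safe to run
-- against any client database, whatever state it is currently in
IF NOT EXISTS (SELECT 1
               FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Customer')
                 AND name = N'Email')
BEGIN
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
END;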
The last option is the most flexible, but it requires thorough testing of its own, and of course it should have been started well before the situation you are in now, where you no longer know what the changes were. But it can help for next time.
This only applies to schema changes on the tables, because you can always fall back to just drop and recreate ALL stored procedures since there is nothing lost in dropping them.
It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project.
Give them the dacpac and have them use SQLPackage to update their own database.
Generate an update script against your customer's "current" version and give that to them.
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
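As a rough illustration of the two options above (server, database, and file names are placeholders), SqlPackage can either publish the dacpac directly or just generate an update script for the client to review and run:

SqlPackage.exe /Action:Publish /SourceFile:"DBDB.dacpac" /TargetServerName:"ClientServer" /TargetDatabaseName:"DBDB" /p:BlockOnPossibleDataLoss=True
SqlPackage.exe /Action:Script /SourceFile:"DBDB.dacpac" /TargetServerName:"ClientServer" /TargetDatabaseName:"DBDB" /OutputPath:"Update_DBDB.sql"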

Creating a New Database from Within a Stored Procedure

Due to an employee quitting, I've been given a project that is outside my area of expertise.
I have a product where each customer will have their own copy of a database. The UI for creating the database (licensing, basic info collection, etc) is being outsourced, so I was hoping to just have a single stored procedure they can call, providing a few parameters, and have the SP create the database. I have a script for creating the database, but I'm not sure the best way to actually execute the script.
From what I've found, this seems to be outside the scope of what a SP easily can do. Is there any sort of "best practice" for handling this sort of program flow?
Generally speaking, SQL scripts - both DML and DDL - are what you use for database creation and population. SQL Server has a command line interface called SQLCMD that these scripts can be run through - here's a link to the MSDN tutorial.
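For what it's worth, a typical invocation looks something like this (the server name, script file, and scripting variable are just examples; inside CreateCustomerDb.sql the variable would be referenced as $(DbName), e.g. CREATE DATABASE [$(DbName)];):

sqlcmd -S YourServer -i CreateCustomerDb.sql -v DbName="Customer0001"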
Assuming there's no customization to the tables or columns involved, you could get away with using either detach/attach or backup/restore. Both require that a baseline database exist - no customer data - which you then capture with either method. Backup/restore is preferable because detach/attach requires taking the database offline. In either case, database users need to be re-synced with server logins before anyone can access the new database.
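If a plain stored procedure is still the goal, here is a very rough sketch of the backup/restore route wrapped in one (the procedure name, file paths, and logical file names are all assumptions, and the parameter should be validated before being concatenated into dynamic SQL):

CREATE PROCEDURE dbo.CreateCustomerDatabase
    @DbName SYSNAME
AS
BEGIN
    -- Restore the baseline (empty) database under the new customer's name
    DECLARE @sql NVARCHAR(MAX) =
        N'RESTORE DATABASE ' + QUOTENAME(@DbName) +
        N' FROM DISK = N''C:\Baseline\BaselineDb.bak''
           WITH MOVE N''BaselineDb''     TO N''C:\Data\' + @DbName + N'.mdf'',
                MOVE N''BaselineDb_log'' TO N''C:\Data\' + @DbName + N'.ldf'';';
    EXEC sys.sp_executesql @sql;
END;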
If you already have the script to create the database, it is easy for them to use it within their program. If you have any specific prerequisites for creating the database and setting permissions, you can wrap all of those scripts up into one script file to execute.

Tools to update tables in SQL server 2000/2005

Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another column. Then I write a formula in Excel to create the 'update' statement. Is there any way to simplify the updating task?
I believe the approach in SQL server 2000 and 2005 would be different, so could we discuss them both? Thanks.
In addition, these updates are usually requested by "non-programmers" (meaning they don't understand SQL, so it may not be feasible to let them run queries). Is there any tool that can let them update the table directly without having DBAs do this task? The tool would also need to limit privileges to only certain tables, and ideally have a way to roll back changes.
Create a DTS package that will import a csv file, make the updates, and then archive the file. The user can drop the file in a specific folder designated for the task, or this can be done by an ops person. Schedule the DTS package to run every hour, day, etc.
In case your users would insist that they keep using Excel, you've got several different possibilities of getting the data transferred to SQL Server. My preferred one would be to use DTS/SSIS, as mentioned by buckbova.
However, another method is by using OPENROWSET(), which makes it possible to query your Excel file as if it was a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
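On SQL Server 2000/2005 that looks roughly like the statement below (it requires the Jet OLE DB provider and, on 2005, the 'Ad Hoc Distributed Queries' option to be enabled; the file path, sheet name, and column names are just examples):

UPDATE t
SET    t.SomeValue = x.NewValue
FROM   dbo.TargetTable AS t
JOIN   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'Excel 8.0;Database=C:\Updates\Updates.xls;HDR=YES',
                  'SELECT * FROM [Sheet1$]') AS x
       ON t.KeyColumn = x.KeyColumn;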
Another approach that hasn't been mentioned yet (I'm not a big fan of letting regular users edit data directly in the DB): any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data accessible to users who are allowed to do updates, and set up triggers on the view to perform the actual updates on the underlying data. Restrict change to only the columns they should be changing.
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
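A minimal sketch of that pattern, with made-up table and column names and only one column exposed for editing:

CREATE VIEW dbo.ProductMaintenance
AS
SELECT ProductID, ProductName, Price
FROM dbo.Product;
GO

CREATE TRIGGER dbo.trg_ProductMaintenance_Update
ON dbo.ProductMaintenance
INSTEAD OF UPDATE
AS
BEGIN
    -- Write back only the column users are allowed to change
    UPDATE p
    SET    p.Price = i.Price
    FROM   dbo.Product AS p
    JOIN   inserted AS i ON i.ProductID = p.ProductID;
END;
GO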
You could create some accounts in SQL Server for these users and limit their access to only certain tables and columns, along with only select/update/insert privileges. Then you could create an Access database with linked tables to these.

How to partially migrate a database to a new system over time?

We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance that would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are within the same sql server instance so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is we can't seem to create the relationships between the table-based-entities and the view-based-entities. No relationship can exist in the database between a view and a table, as far as I know, so the edmx diagram can't infer it. And I cannot seem to create the relationship manually without getting errors. It thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for the Entity Framework to see it. We only need read access of the old system data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
Depending on how up-to-date the data in each db needs to be & if one data source can remain read-only you can:
Use the Database Copy Wizard to create an SSIS package that you can run periodically as a SQL Agent task
Use snapshot replication
Create a custom BCP in/out process to get the data to the other db
Use transactional replication, which can be near-realtime.
If data needs to be read-write in both database then you can use:
transactional replication with update subscriptions
merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Using linked server queries will work best if it's the right fit for what you're trying to achieve.
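For the linked-server route, the basic shape is something like this (server, database, and table names are placeholders; since your two databases are on the same instance you can skip the linked server entirely and just use the three-part name):

-- One-time setup on the server that needs to read the remote data
EXEC sp_addlinkedserver @server = N'OLDSERVER';

-- Querying across the link with a four-part name
SELECT c.CustomerID, c.Name
FROM OLDSERVER.MyOldDatabase.dbo.Customers AS c;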
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. It looks like it's an Entity Framework issue and not a DB issue.
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the other project, prefix the name, and set up the relationships, and it works... like I said, this was in LinqToSql, though I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DB's then Replication or something that emulates it is the only thing that will work.
For performance purposes you probably want either Merge replication or Transactional with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
One thing I realized needs to happen before we can do this: we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY() so the triggers don't mess up the IDs in the old system.
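A hedged sketch of one such trigger (the database, table, and column names are invented). It also illustrates why the @@IDENTITY replacement matters: if the new table has an identity column, the trigger's own INSERT changes what @@IDENTITY returns to the old code, while SCOPE_IDENTITY() is unaffected:

CREATE TRIGGER dbo.trg_Orders_CopyToNew
ON dbo.Orders              -- table in the old database
AFTER INSERT
AS
BEGIN
    -- Mirror newly inserted rows into the new database's table
    INSERT INTO NewDb.dbo.Orders (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM inserted;
END;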
