Updating Column Name of a Table together with its dependent UDFs/USPs - sql-server

I don't know if this already has an answer as I wasn't able to find it, but if so please provide the link.
Column names are referenced in user-defined stored procedures and functions whenever they access certain columns of a table. But what if I want to rename a column in a table when that column is referenced many times across multiple USPs/UDFs?
I am currently using SQL Server, and it will not simply let me rename the column, since it is referenced in multiple existing USPs/UDFs. Is there a way to rename a table column together with its occurrences in those USPs/UDFs?
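A minimal sketch of one approach, assuming a hypothetical table dbo.MyTable with a column OldColumn: sp_rename changes the column itself, but it does not rewrite the text of dependent modules, so those still have to be found and edited separately.

    -- Rename the column; dependent modules are NOT rewritten automatically.
    EXEC sp_rename 'dbo.MyTable.OldColumn', 'NewColumn', 'COLUMN';

    -- List every module that references the table (and may use the old name):
    SELECT referencing_schema_name, referencing_entity_name
    FROM sys.dm_sql_referencing_entities('dbo.MyTable', 'OBJECT');

    -- Or search module definitions directly for the old column name:
    SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
           OBJECT_NAME(object_id)        AS module_name
    FROM sys.sql_modules
    WHERE definition LIKE '%OldColumn%';

Each hit then has to be altered by hand (or by a generated script); there is no supported single command that rewrites the module text for you.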

Related

Data Factory - dynamically copy a subset of columns from one database table to another

I have a database on SQL Server on premises and need to regularly copy the data from 80 different tables to an Azure SQL Database. The columns I need to select and map are different for each table; for example, for TableA I need columns 1, 2 and 5, while for TableB I need just column 1. The tables are named the same in the source and target, but the column names are different.
I could create multiple Copy data pipelines and select the source and target data sets and map to the target table structures, but that seems like a lot of work for what is ultimately the same process repeated.
I've so far created a meta table, which lists all the tables and the column mapping information. This table holds the following data:
SourceSchema, SourceTableName, SourceColumnName, TargetSchema, TargetTableName, TargetColumnName.
For each table, data is held in this table to map the source tables to the target tables.
I have then created a Lookup activity which selects each table from the mapping table, followed by a ForEach loop containing another lookup that gets the source and target column data for the table in the current iteration.
From this information, I'm able to map the Source table and the Sink table in a Copy Data activity created within the foreach loop, but I'm not sure how I can dynamically map the columns, or dynamically select only the columns I require from each source table.
I have the "activity('LookupColumns').output" from the column lookup, but would be grateful if someone could suggest how I can use this to then map the source columns to the target columns for the copy activity. Thanks.
In your case, you can use an expression in the mapping setting.
You need to provide an expression whose value looks like this:

    {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "Id" },               "sink": { "name": "CustomerID" } },
        { "source": { "name": "Name" },             "sink": { "name": "LastName" } },
        { "source": { "name": "LastModifiedDate" }, "sink": { "name": "ModifiedDate" } }
      ]
    }

So add a column named Translator to your meta table and store JSON like the above in it for each table. Then use this expression to do the mapping: #item().Translator
Reference: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping#parameterize-mapping
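If the meta table already holds the column pairs, the Translator JSON can be generated instead of typed by hand. A hedged sketch, assuming SQL Server 2016+ for FOR JSON and a mapping table dbo.ColumnMapping shaped as described in the question (the table name is a placeholder):

    -- Generate one Translator JSON document per source table, ready to store
    -- in the meta table's Translator column.
    SELECT m.SourceTableName,
           (SELECT 'TabularTranslator' AS [type],
                   JSON_QUERY((SELECT c.SourceColumnName AS [source.name],
                                      c.TargetColumnName AS [sink.name]
                               FROM dbo.ColumnMapping AS c
                               WHERE c.SourceTableName = m.SourceTableName
                               FOR JSON PATH)) AS [mappings]
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) AS Translator
    FROM (SELECT DISTINCT SourceTableName FROM dbo.ColumnMapping) AS m;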

Creating Array from column data using Dynamic SQL

I'm working on creating some stored procedures to automatically mirror data from multiple servers/databases (40+) into one central server. I have created a reference table that holds the column names from the databases I am referencing.
What I want to do is essentially grab the COUNT(ColumnID) based on the @TableID table variable that I declare. The central server will already have corresponding columns for each of these sets of columns, and the reference table also lists the central database's column name for each TableID/ColumnID pair. I want to pull these column names into an array and/or string so that I can EXEC a dynamic query such as EXEC('SELECT ' + @ColumnsString + ' FROM [LinkedServer].' + @TableName);
I already have the dynamic linked servers set up and working, but I'm looking for a way to store the multiple column names in a single array that I can use both to build a string variable and to reference in UPDATE queries, i.e. SET @ColumnString = @arrayvalue[1] + ',' + @arrayvalue[2] + ',' + @arrayvalue[n];
Is there a function available in SQL Server that could accomplish this?
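T-SQL has no array type, but the comma-separated list can be built directly. A hedged sketch, assuming SQL Server 2017+ for STRING_AGG and a reference table dbo.ColumnReference(TableID, ColumnName); all names and the TableID value are placeholders:

    DECLARE @TableName sysname = N'dbo.SomeTable';   -- hypothetical target table
    DECLARE @ColumnsString nvarchar(max);

    -- Collapse the column names for one table into a single string.
    SELECT @ColumnsString =
           STRING_AGG(CONVERT(nvarchar(max), QUOTENAME(ColumnName)), N', ')
    FROM dbo.ColumnReference
    WHERE TableID = 1;                               -- hypothetical table id

    DECLARE @sql nvarchar(max) =
        N'SELECT ' + @ColumnsString + N' FROM [LinkedServer].' + @TableName + N';';
    EXEC sys.sp_executesql @sql;

On older versions the same list can be built with the FOR XML PATH('') concatenation trick.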

Generate script to create only a single row

I was referencing this question: How to export all data from table to an insertable sql format?
while looking for a way to create an INSERT statement for a single row from a table, without having to write it manually since the table has many columns. In my case I simply followed the steps listed, then did a Ctrl-F search in the resulting script for the record I wanted, and copied that single line to another query window; but this would be terrible if I had hundreds of millions of rows. Is there a way to get the same functionality but tell the script generator I only want rows where id = value? Is there a better way to do this using only the out-of-the-box Microsoft tools?
There is no built-in way to filter the rows the script generator emits, but you can work around it with a temporary table:
Create a new table with INSERT INTO ... SELECT (or SELECT ... INTO), containing only the records you want.
Generate the script from that table, then change the table name back using find and replace.
Finally, drop the temporary table.
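A minimal sketch of that workaround, assuming a hypothetical table dbo.MyTable keyed by id:

    -- Stage only the row(s) to be scripted.
    SELECT *
    INTO dbo.MyTable_Export
    FROM dbo.MyTable
    WHERE id = 42;

    -- Now run SSMS: Tasks > Generate Scripts against dbo.MyTable_Export with
    -- "Types of data to script" set to "Data only", then find/replace
    -- dbo.MyTable_Export back to dbo.MyTable in the generated INSERTs.

    DROP TABLE dbo.MyTable_Export;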

SQL Server + triggers: How to get information about the inserted/deleted tables

Is there any way to get information about the inserted/deleted tables in a trigger, meaning:
which columns are in the inserted/deleted tables?
which data types do they have?
The background for this question is: I would like to create a 'generic' trigger which can be used without needing to be adapted for the table in question.
Dummy code:

    foreach column in inserted table
        get column name and data type (e.g. to exclude text columns or columns with a special name)
        if the value has changed, do some logging into another table
Unfortunately I have not been able to find the needed information so far; I have to admit that I don't know the proper keywords to search for.
It would even be helpful to get a list of functions (e.g. UPDATE()) which can be used in triggers to query information about the inserted/deleted tables.
The T-SQL code should work on MS SQL Server >= 2008.
TIA
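A hedged sketch of one approach that works on SQL Server 2008+: inside a trigger, @@PROCID identifies the trigger itself, the parent table's columns and types come from the catalog views, and COLUMNS_UPDATED() reports which columns the statement touched. All object names below are placeholders.

    CREATE TRIGGER dbo.trg_MyTable_Audit
    ON dbo.MyTable
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Column names and data types of the table the trigger fired on.
        -- @@PROCID is the trigger's own object id; sys.objects gives its parent table.
        SELECT c.name,
               t.name AS type_name,
               -- Bit test against COLUMNS_UPDATED(): was this column in the SET list?
               -- Caveat: the bitmap is positional, so the math can drift on tables
               -- with dropped columns.
               CASE WHEN SUBSTRING(COLUMNS_UPDATED(), (c.column_id - 1) / 8 + 1, 1)
                         & POWER(2, (c.column_id - 1) % 8) > 0
                    THEN 1 ELSE 0 END AS was_updated
        FROM sys.columns AS c
        JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
        WHERE c.object_id = (SELECT parent_object_id
                             FROM sys.objects
                             WHERE object_id = @@PROCID);
    END

Comparing old and new values generically still requires dynamic SQL, and a common pattern copies inserted and deleted into temp tables first, because dynamic SQL executed from a trigger cannot see the pseudo-tables directly; the catalog query above supplies the metadata to drive that.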

Is using multiple tables an advisable solution to dealing with user defined fields?

I am looking at a problem which would involve users uploading lists of records with various field structures into an application. The 2nd part of this would be to also allow the users to specify fields to capture information.
This is a step beyond anything I've done up to this point, where I would have designed a static RDBMS structure myself. In some respects all records will be treated the same, so there will be some common fields required for each, and almost all queries will be run on these common fields.
My first thought was to dynamically generate a new table for each import and another for each data capture field spec, then have a master table with a GUID for every record in the application, along with the common fields, plus fields naming the table the data was imported to and the table holding the data capture fields.
Further information (metadata?) about the fields in the dynamically generated tables could be stored in xml or in a 'property' table.
This would mean that as users log into the application I would be dynamically choosing which table of data to present to them, and there would be a large number of tables in the database if the application were not only multi-user but multi-tenant.
My question is: are there other methods of solving this kind of variable-field issue, or am I going down an ill-advised path here?
I believe that EAV would require me to have a table defining the fields for each import / data capture spec and then another table holding the import-field-value data, and that seems impractical.
I hate storing XML in the database, but this is a perfect example of when it makes sense. Store the user imports in XML initially. As your data schema matures, you can later decide which tables to persist for your larger clients. When the users pick which fields they want to query, that's when you come back and build a solid schema.
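A minimal sketch of that starting point, using SQL Server's xml type; the table and element names are invented for illustration:

    CREATE TABLE dbo.UserImport (
        ImportID   int IDENTITY PRIMARY KEY,
        CommonName nvarchar(100) NOT NULL,   -- a common, always-present field
        ExtraData  xml NULL                  -- user-defined fields, schema-free
    );

    INSERT INTO dbo.UserImport (CommonName, ExtraData)
    VALUES (N'Record 1', N'<fields><colour>red</colour><size>42</size></fields>');

    -- Querying a user-defined field once users decide they need it:
    SELECT ImportID,
           ExtraData.value('(/fields/size)[1]', 'int') AS size
    FROM dbo.UserImport;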
What kind is each field? Could the type of field be different for each record?
I am working on a program now that does this, sort of, and the way we handle it is basically a record table which points to a recordfield table. The recordfield table contains all of the fields along with the name of the actual field in the database (the column name). We then have a recorddata table which is where all the data goes for each record, and it also stores a record_id telling it which record the data belongs to.
Done this way, if each column for the record is the same type we don't need to add new columns to the table, and if a record has more fields, or fields of a different type, we add columns to the data table as appropriate.
I think this is what you are talking about; correct me if I'm wrong.
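A hedged sketch of the structure described; the table names follow the answer, while the exact column choices are guesses:

    -- record: one row per logical record.
    CREATE TABLE record (record_id int IDENTITY PRIMARY KEY);

    -- recordfield: maps each logical field to a physical column of recorddata.
    CREATE TABLE recordfield (
        field_id    int IDENTITY PRIMARY KEY,
        field_name  nvarchar(100) NOT NULL,  -- the name the user sees
        column_name sysname       NOT NULL   -- actual column in recorddata
    );

    -- recorddata: holds the values; new columns are added as new field types
    -- appear (string_1, int_1, ... are invented examples).
    CREATE TABLE recorddata (
        record_id int NOT NULL REFERENCES record (record_id),
        string_1  nvarchar(max) NULL,
        int_1     int           NULL
    );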
I think that one additional table per type of user-defined field, for each table the user can add fields to, is a good way to go; a sketch follows below.
Say you load your records into user_records(id); that table's id column then acts as a foreign key in the user-defined field tables.
User-defined string fields would go in user_records_string(id, name), where id is a foreign key to user_records(id) and name is a string (or a foreign key into a list of user-defined string field names).
Searching on them requires joining them into the base table, probably with a sub-select that filters down to one field based on the user metadata, so that the right field can be added to the query.
To simulate the user creating multiple tables, you can add a foreign key in the user_records table that points at a table list, and filter on it when querying a single 'table'.
This would allow your schema to stay static while letting the user arbitrarily add fields and tables.
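A hedged sketch of this layout; user_records and user_records_string follow the answer, while user_tables and the value column are assumptions filled in for completeness:

    -- The "table list" the answer mentions (assumed shape).
    CREATE TABLE user_tables (
        table_id int IDENTITY PRIMARY KEY,
        name     nvarchar(100) NOT NULL
    );

    -- Base table: one row per record, pointing at its user-defined "table".
    CREATE TABLE user_records (
        id       int IDENTITY PRIMARY KEY,
        table_id int NOT NULL REFERENCES user_tables (table_id)
    );

    -- One side table per field type; strings shown here.
    CREATE TABLE user_records_string (
        id    int NOT NULL REFERENCES user_records (id),
        name  nvarchar(100) NOT NULL,   -- the user-defined field's name
        value nvarchar(max) NULL        -- assumed; the answer leaves this implicit
    );

    -- Searching on a user-defined string field means joining it back in:
    SELECT r.id
    FROM user_records AS r
    JOIN user_records_string AS s
      ON s.id = r.id AND s.name = N'colour'
    WHERE r.table_id = 1
      AND s.value = N'red';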
