Finding all pseudo related data within an SQL Server database - sql-server

I have a requirement to change a "broken" computed column in a table to an identity column and, as part of this work, update some of the field values. This column is a pseudo primary key, so it doesn't have any constraints defined against it. I therefore need to determine whether any other tables in the database contain a pseudo foreign key back to this column.
Before writing something myself, I'd like to know if there is an existing script/tool that, given a value (not a column name), can search across the data in all of the tables within an SQL Server database and show where that value exists.
Thanks in advance.

A quick Google search found this page/script:
http://vyaskn.tripod.com/search_all_columns_in_all_tables.htm
I don't personally know of a polished GUI utility that will do it.
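If you'd rather roll your own, the approach is the same as that script: build one probe query per column from INFORMATION_SCHEMA.COLUMNS and collect the hits. Here is a minimal sketch, assuming the value you are chasing is an integer key; @SearchValue and #Hits are illustrative names, and you would adjust the DATA_TYPE filter if you are searching character data instead.
DECLARE @SearchValue int = 12345;
DECLARE @sql nvarchar(max) = N'';

CREATE TABLE #Hits (TableName sysname, ColumnName sysname, MatchCount int);

-- Build one probe query per integer column in every base table.
SELECT @sql = @sql + N'
INSERT #Hits
SELECT ''' + c.TABLE_SCHEMA + N'.' + c.TABLE_NAME + N''', ''' + c.COLUMN_NAME + N''', COUNT(*)
FROM ' + QUOTENAME(c.TABLE_SCHEMA) + N'.' + QUOTENAME(c.TABLE_NAME) + N'
WHERE ' + QUOTENAME(c.COLUMN_NAME) + N' = @val HAVING COUNT(*) > 0;'
FROM INFORMATION_SCHEMA.COLUMNS c
JOIN INFORMATION_SCHEMA.TABLES t
  ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.DATA_TYPE IN ('int', 'bigint', 'smallint');

EXEC sp_executesql @sql, N'@val int', @val = @SearchValue;

SELECT * FROM #Hits ORDER BY TableName, ColumnName;
Bear in mind this scans every table, so run it against a restored copy rather than a busy production database if the data is large.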

Related

Add unique constraint and deduplicate column data

I am working on a Spring project which uses PostgreSQL and Liquibase. I need to add a unique constraint to a specific column in a table. The table already has a lot of entries and some of them violate the new unique constraint.
Since the application is in production, dropping the table is not an option. I need to apply some sort of modification to the data in the column so that duplicates become distinguishable (e.g. if we have two entries with the value 'foo', after the operation these entries should look something like 'foo' and 'foo2').
So far I've only implemented the change which adds the unique constraint, but I have yet to implement this modification. Is there any functionality in either PostgreSQL or Liquibase which might address this issue?
You need to write an SQL UPDATE query (or queries) that implements the deduplication logic and assigns unique values to the duplicate rows.
Then use the sql change type in Liquibase to run that query in a changeSet that executes before the one adding the constraint.
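A minimal sketch of such an update, assuming the table is called my_table with an id primary key and the constrained column is name (all illustrative names); the first occurrence of each value is left untouched and later duplicates get their row number appended, which matches the 'foo' / 'foo2' example above.
WITH ranked AS (
    SELECT id,
           ROW_NUMBER() OVER (PARTITION BY name ORDER BY id) AS rn
    FROM my_table
)
UPDATE my_table t
SET name = t.name || ranked.rn
FROM ranked
WHERE t.id = ranked.id
  AND ranked.rn > 1;
Wrap that statement in a Liquibase changeSet using the sql change type (or reference an .sql file via sqlFile) and order it before the changeSet that adds the unique constraint.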

Find foreign keys based on data

I am looking at a database which has almost no foreign keys defined.
Is there a tool that can perform some data analysis/heuristics and "guess" the relations based on the data? I am looking for some kind of report which can be used as a manual guide/checklist.
I had a similar problem: every table had an Object_ID column, but also had secondary IDs, all of a weird GUID-ish form.
I ended up writing a brute-force scanner (using dynamic SQL driven by information_schema.columns) - see the sketch below.
Of course this approach relied on the values being globally unique; if you have a bunch of int identity columns and no way to connect the tables, then you are in a bit of trouble!
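A rough sketch of that kind of scanner, assuming the candidate key lives in dbo.Parent.Object_ID and the related columns are uniqueidentifier (all names illustrative): it probes every GUID column outside the parent table and reports how many of its values also appear in the candidate key column.
DECLARE @sql nvarchar(max) = N'';

CREATE TABLE #CandidateFKs (ColumnPath nvarchar(400), MatchingRows int);

-- One probe per uniqueidentifier column outside the parent table.
SELECT @sql = @sql + N'
INSERT #CandidateFKs
SELECT ''' + c.TABLE_SCHEMA + N'.' + c.TABLE_NAME + N'.' + c.COLUMN_NAME + N''', COUNT(*)
FROM ' + QUOTENAME(c.TABLE_SCHEMA) + N'.' + QUOTENAME(c.TABLE_NAME) + N' t
WHERE EXISTS (SELECT 1 FROM dbo.Parent p WHERE p.Object_ID = t.' + QUOTENAME(c.COLUMN_NAME) + N')
HAVING COUNT(*) > 0;'
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE c.DATA_TYPE = 'uniqueidentifier'
  AND NOT (c.TABLE_SCHEMA = 'dbo' AND c.TABLE_NAME = 'Parent');

EXEC sp_executesql @sql;

SELECT * FROM #CandidateFKs ORDER BY MatchingRows DESC;
Columns with a high match count are the candidates worth checking by hand.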
Perhaps there is a timestamp column or a DateTime defaulting to GetDate() - you could use this to identify records in different tables that were created at approximately the same time.
A lot depends on your schema...

Concurrency error with MS-Access Linked Tables

I am linking tables to a SQL 2008R2 DB via MS Access Linked Tables.
I am getting this warning when I want to change the data in an Access linked table where the underlying SQL table has more than one bit field in it:
"The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made."
I don't have any problems when there is only one bit field in the table. It's really a strange error imho. Has anyone else encountered this before and found a workaround for it by any chance?
I've seen this sort of issue when working with linked SQL tables in general. I'm not sure why you're seeing it specifically with bit fields. Try adding a 'ts' column with the timestamp (rowversion) data type to the table and relinking it in Access.
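A minimal sketch, assuming the linked table is dbo.MyTable (the table and column names are illustrative); when a rowversion column exists, Access uses it to detect whether the row changed instead of comparing every column value.
-- Add a rowversion column for Access to use in its concurrency check.
ALTER TABLE dbo.MyTable ADD ts rowversion;
After adding the column, refresh the link (Linked Table Manager in Access) so the new column is picked up.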
I know this is an old question, but maybe my answer will benefit others, since I struggled with the same and other similar issues.
I had a similar error and was mostly able to get around it. One thing that may help is to run SQL Profiler against the database and watch the SQL commands Access issues while you are trying to add a new row.
A few things to check:
1) Verify that you have an ID column in the table set as the primary key and AutoNumber (identity).
2) If this involves a master/child relationship with another table, specify the relationship and the join type between these tables in Access under Database Tools > "Relationships".
3) If the query joins tables, play around with which primary key and foreign key columns are exposed in the query.
Using SQL Profiler, I could see it trying to find the row to update based on other columns besides the primary key, e.g.
UPDATE table
SET ...
WHERE id = 5 AND data1 = 'somevalue' AND data2 = 'othervalue'
In that situation I would sometimes get the same error, since I may have edited other values in the new row and the complex WHERE clause would then fail to match. What you want is for the update to rely entirely on the primary key.

How do I manage identities with ETL?

I need help figuring out a workflow and I'm not sure how to go about it... Let's say I'm transforming (ETL?) data from Table A to Table B. Table A has a composite primary key A.a+A.b+A.c, while Table B has just an automatically populated identity column. How can I map the composite keys from A back to the identities created when inserting into B?
Preferably I would like to not have any columns in table B related to A's composite key because there are many other tables that need to undergo the same operation but don't have the same composite key structure.
If I understand you correctly, you can't relate records from table B back to the records of table A after the transformation unless you somehow capture a mapping between A's composite key and B's identifier during the transformation.
You could add a column to A and pre-compute the identifiers to be used when inserting into B. Then you would have a mapping. This could also be done using a separate mapping table, if you don't want to add a column to A.
If you don't want to override the default assignment of identifiers, then you will have to capture them during the load. Oracle provides the RETURNING clause for INSERT in PL/SQL for this purpose; I'm not sure about SQL Server. It may also be possible to accomplish this with a trigger on B that inserts into a separate mapping table or updates a column in A, though that's likely to slow down your load considerably.
If nothing else, you could create additional columns in B to hold the keys of A during the load, query out the mappings into a separate table afterwards, and then drop the extra columns.
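In SQL Server, the OUTPUT clause of INSERT can capture the generated identities during the load. A minimal sketch of the "extra columns" idea under assumed names (TableA with composite key a, b, c; TableB with identity column id and payload columns col1, col2 - all illustrative):
-- Temporarily carry A's key through the load.
ALTER TABLE dbo.TableB ADD src_a int, src_b int, src_c int;

CREATE TABLE dbo.AB_KeyMap (b_id int, a int, b int, c int);

INSERT INTO dbo.TableB (col1, col2, src_a, src_b, src_c)
OUTPUT inserted.id, inserted.src_a, inserted.src_b, inserted.src_c
INTO dbo.AB_KeyMap (b_id, a, b, c)
SELECT col1, col2, a, b, c
FROM dbo.TableA;

-- The mapping now lives in AB_KeyMap, so the helper columns can go.
ALTER TABLE dbo.TableB DROP COLUMN src_a, src_b, src_c;
The helper columns are needed because OUTPUT on an INSERT can only reference the inserted rows, not the source table, so A's keys have to ride along in B just long enough to be captured.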
I hope that helps.
Ask yourself exactly what you need the original keys for. The answer may vary depending on the source system. This may lead you to maintain a "source system" column and an "original source keys" column. The latter may need to be a comma-delimited list of the original keys.
Or, you may find that you never actually need to map back, so don't need to keep anything.

How to add a constant column when replicating a database?

I am using SQL Server 2000 and I have two databases that both replicate (transactional push subscription) to a single database. I need to know which database the records came from.
So I want to add a fixed column specified in the publication to my table so I can tell which database the row originated from.
How do I go about doing this?
I would like to avoid altering the main databases, mostly because there are many tables I would need to do this to. I was hoping for some built-in feature of replication that would do this for me somewhere. Other than that, I would go with the view idea.
You could use a computed column. Run the following on the two databases:
ALTER TABLE TableName ADD
MyColumn AS 'Server1'
Then just define the column in the single "master" database as a VARCHAR (or whatever you want) that gets filled from the computed column's value.
You can create a view, which adds the "constant" column, and use it as a replication source.
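A minimal sketch of such a view, assuming the published table is dbo.MyTable and SourceDb is the illustrative name of the constant column:
CREATE VIEW dbo.MyTable_WithSource
AS
SELECT t.*, 'Server1' AS SourceDb
FROM dbo.MyTable AS t;
Each source database would define the view with its own literal, so the subscriber can tell the rows apart.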
So the solution for me was to set up the replication publications to allow transformations and create a DTS package for each site that appends the site ID to the tables, keeping the IDs unique, since I can't use GUIDs.
