SQL Server table is allowing multiple columns with the same name

I have inherited a table in which one column appears multiple times. I have tried to drop and re-create the column, but it simply comes back as 3 columns. Is this a consequence of the geography type?
What I am trying to do is create a documentation feature, so that if anyone in my company adds or changes a table, this list will be updated. However, if I try to set a key on (TableName, ColumnName), unsurprisingly I get an error.
Of the 200+ tables in the database, this is the only one with this error.
What can I do to resolve it?
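One way to check whether the duplication is real or an artifact of the documentation query is to ask the catalog views directly; a minimal sketch, with dbo.MyTable standing in for the problem table:

-- list each column exactly as the engine records it
SELECT c.name, c.column_id, t.name AS type_name
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.MyTable')
ORDER BY c.column_id;

sys.columns cannot hold two columns with the same name for a single object, so if each column appears only once here, the duplicates are being introduced by a join in the documentation query; a join against sys.index_columns, for example, returns one row per index the column participates in, and a spatial index on a geography column adds such rows.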

Related

Using Dynamic SQL in a trigger to identify changes

I'm in the process of building a brand-new database, and want every table I create to have a corresponding audit table which would track any data changes.
In order to avoid having to hard-code every table column, what I would like to do is use Dynamic SQL to review each column in the table (with the exception of the Identity column) and work out whether or not the column has been changed, using the Inserted and Deleted tables to do so. By doing that, I could then theoretically add columns to the tables without having to re-create the triggers associated with the tables.
Is such a thing possible or am I running down a blind alley?
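It is possible, with one catch: the inserted and deleted pseudo-tables are not visible inside dynamic SQL, so they have to be snapshotted into temp tables first. Below is a minimal sketch of the idea for a hypothetical table dbo.Customer with identity key CustomerID and a matching audit table dbo.Customer_Audit; it illustrates the technique rather than being a hardened implementation.

CREATE TRIGGER trg_Customer_Audit
on dbo.Customer
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- inserted/deleted are out of scope inside dynamic SQL,
    -- so copy them into temp tables the dynamic batch can see
    SELECT * INTO #ins FROM inserted;
    SELECT * INTO #del FROM deleted;

    -- build "d.Col <> i.Col OR ..." over every non-identity column
    DECLARE @cmp nvarchar(max) =
        STUFF((SELECT N' OR d.' + QUOTENAME(name) + N' <> i.' + QUOTENAME(name)
               FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Customer')
                 AND is_identity = 0
               FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
              1, 4, N'');

    DECLARE @sql nvarchar(max) = N'
        INSERT INTO dbo.Customer_Audit
        SELECT d.*
        FROM #del AS d
        JOIN #ins AS i ON i.CustomerID = d.CustomerID
        WHERE ' + @cmp + N';';

    EXEC sp_executesql @sql;
END

Note that d.Col <> i.Col is never true when either side is NULL, so a production version would need NULL-aware comparisons, and the audit table still has to be kept in step with the base table's schema.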

Adding new dimensions to data warehouse (adding new columns to fact table)

I am building an OLAP database and am running into some difficulty. I have already set up a fact table that includes columns for sales data, like quantity, sales, cost, profit, etc. The current dimensions I have are Date, Location, and Product. This means I have the foreign key columns for these dimension tables included in the fact table as well. I have loaded the fact table with this data.
I am now trying to add a dimension for salesperson. I have created the dimension, which has the salesperson's ID and their name and location. However, I can't edit the fact table to add the new column that will act as a foreign key to the salesperson dimension.
I want to use SSIS to do this, using a lookup against the sales database that the fact table is based on, keyed on the salesperson ID, but I first need to add the Salesperson column to my fact table. When I try to do that, I get an error saying that it can't create the new column because it will be populated with NULLs.
I'm going to take a guess as to the problem you're having, but this is just a guess: your question is a little difficult to understand.
I'm going to make the assumption that you have created a Fact table with x columns, including links to the Date, Location, and Product dimensions. You have then loaded that fact table with data.
You are now trying to add a new column, SalesPerson_SK (or ID), to that table. You do not wish to allow NULL values in the database, so you clear the 'allow NULL' checkbox. However, when you attempt to save your work, the table errors out with the objection that it cannot insert NULL into the SalesPerson_SK column.
There are a few ways around this limitation. One, which is probably the best if you are still in the development stage, is to issue the following command:
TRUNCATE TABLE dbo.FactMyFact
which will remove all data from the table, allowing you to make your changes and reload the table with the new column included.
If, for some reason, you cannot do so, you can alter the table to add the column together with a default constraint that will put a default value into your fact table, essentially a dummy record that says, "I don't know what this is":
ALTER TABLE FactMyFact
ADD Salesperson_SK INT NOT NULL
CONSTRAINT DF_FactMyFact_SalesPersonSK DEFAULT 0
If you do not wish to put a default value into the table, simply create the column and allow NULL values, either by checking the box on the design page or by issuing the following command:
ALTER TABLE FactMyFact
ADD Salesperson_SK INT NULL
This answer has been given based on what I think your problem is: let me know if it helps.
Alternatively, inner join the dimension with the fact table, get the values from the dimensions, and insert them into the fact table, or else model it as a factless fact table.
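A hedged sketch of that backfill route, picking up after the nullable column from the previous answer has been added; the names SourceSales, SalesOrderID, DimSalesperson, and SalespersonID are hypothetical stand-ins:

-- populate the new key from the source system via the dimension
UPDATE f
SET f.Salesperson_SK = d.Salesperson_SK
FROM dbo.FactMyFact AS f
JOIN dbo.SourceSales AS s ON s.SalesOrderID = f.SalesOrderID
JOIN dbo.DimSalesperson AS d ON d.SalespersonID = s.SalespersonID;

-- once every row has a value, disallow NULLs
ALTER TABLE dbo.FactMyFact
ALTER COLUMN Salesperson_SK INT NOT NULL;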

Concurrency error with MS-Access Linked Tables

I am linking tables to a SQL 2008R2 DB via MS Access Linked Tables.
I am getting this warning when I want to change the data in an Access linked table where the underlying SQL table has more than one bit field in it:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made.
I don't have any problems when there is only one bit field in the table. It's really a strange error, imho. Has anyone else encountered this before and found a workaround for it, by any chance?
I've seen this sort of issue when working with linked tables against SQL Server in general. I'm not sure why you're seeing it specifically with bit fields. Try adding a 'ts' column with the datatype timestamp (rowversion) to the table and relinking it in Access.
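For a hypothetical table dbo.MyTable, that change is a one-liner:

-- rowversion (the current name for the timestamp datatype) gives Access
-- an unambiguous row-change marker for its optimistic-concurrency check
ALTER TABLE dbo.MyTable ADD ts rowversion;

After adding the column, refresh or recreate the linked table in Access so the new column is picked up.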
I know this is an old question, but maybe my answer will benefit others, since I struggled with the same and other similar issues.
I had a similar error and was mostly able to get around it. One thing that may help is to run SQL Profiler against the database and watch the SQL commands Access issues while you are trying to add a new row.
A few things to check:
1) Verify that you have an ID column in the table set as the Primary key and AutoNumber
2) If this involves a master/child relationship with another table, specify the relationship and the join type between those tables in the Access Database Tools "Relationships" window.
3) If a join between tables, then play around with the primary column and foreign column being exposed in the query.
Using SQL Profiler, I could see where it would try to find the row to update based on other columns besides the primary key, e.g.
update table
set ...
where id = 5 and data1 = somevalue and data2 = othervalue
When doing this, I would sometimes get the same error, since I may have edited other values in the new row, so the complex where clause would fail. What you want is for the update to rely entirely on the primary key.

NDbUnit does not set a primary key field specified in the XML when the column is an Identity column

I am using NDbUnit to unit test my data access layer.
Everything has been working fine when constructing the XSD and associated XML files that are used to populate various tables with rows of data. However I have just noticed that I am unable to set the PK directly through XML for an integer PK column when it is an identity column.
i.e. When the database automatically handles incrementing and setting the PK on a row insertion, NDbUnit isn't able to override this and set it itself (as far as I can see).
Is there some way for NDbUnit to override this identity column value and set it directly from the XML or am I stuck with the auto-incremented value that SQL Server creates for the inserted row? Or is there another pattern that I should use to insert a row with an identity column and then subsequently use that value as the FK on another table's row?
Update:
I discovered that when you perform the NDbUnit database operation you need to set the parameter to InsertIdentity, not just Insert:
INDbUnitTest database = new NDbUnit.Core.SqlClient.SqlDbUnitTest(connectionString);
database.PerformDbOperation(DbOperationFlag.InsertIdentity);
However even after I made this change, I now get the following error as the rows are being inserted:
Cannot insert explicit value for identity column in table 'myTable' when IDENTITY_INSERT is set to OFF.
This would lead me to believe that the InsertIdentity flag being set on the NDbUnit method is not actually setting IDENTITY_INSERT to ON for the specific table within SQL Server.
Any suggestions on why this wouldn't be happening?
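For reference, an explicit identity insert has to be wrapped like the following on the same connection (dbo.myTable and its columns are placeholders); this is what the InsertIdentity flag should be arranging behind the scenes:

SET IDENTITY_INSERT dbo.myTable ON;

-- an explicit column list is mandatory when supplying identity values
INSERT INTO dbo.myTable (Id, Name)
VALUES (42, N'example row');

SET IDENTITY_INSERT dbo.myTable OFF;

Only one table per session can have IDENTITY_INSERT set to ON at a time, so a tool inserting into several tables has to toggle it table by table.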
Better late than never, hopefully, but I experienced the same problem today.
On your XSD, highlight the identity column and hit F4. When the Properties pane comes up you'll probably see that AutoIncrement is set to False. Change this to True and it should start to work. Not sure why this happens on random occasions but hopefully this will help you out.

SQL Server trigger - copy row before updating

I'd like to copy a table's row before updating and I'm trying to do it like this:
CREATE TRIGGER first_trigger_test
on Triggertest
FOR UPDATE
AS
insert into Triggertest select * from Inserted
Unfortunately, I get the error message
Msg 8101, Level 16, State 1, Procedure first_trigger_test, Line 6
An explicit value for the identity column in table 'Triggertest' can only be specified when a column list is used and IDENTITY_INSERT is ON.
I assume it's because of the id-column; can't I do something like 'except' id? I do not want to list all the columns in the trigger as it should be as dynamic as possible...
You can't, basically. You'll either have to specify the columns, or use a separate table:
CREATE TRIGGER first_trigger_test
on Triggertest
FOR UPDATE
AS
insert into Triggertest_audit select * from deleted
(where Triggertest_audit is a second table that looks like Triggertest, but without the primary key/identity/etc - commonly multiple rows per logical source row; note I assumed you actually wanted to copy the old values, not the new ones)
The problem happens because you are trying to set an identity column in Triggertest.
Is that your plan?
If you want to copy the new identity values from INSERTED into Triggertest, then define the column in Triggertest without IDENTITY.
If Triggertest has its own IDENTITY column, use this:
insert into Triggertest (col1, col2, col3) select col1, col2, col3 from Inserted
After comment:
No, you can't without dynamic SQL that detects the table and finds all of its non-identity columns.
However, if you then add or remove columns you'll have a mismatch between the trigger's table and Triggertest, and you'll get a different error.
If you really want it that dynamic, you'd have to concatenate all the columns into one, or use XML to ignore the schema.
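A sketch of the XML route, assuming a hypothetical dbo.AuditLog table; because the whole deleted batch is stored as a single XML document, the trigger keeps working when columns are added to or dropped from Triggertest:

-- assumes: CREATE TABLE dbo.AuditLog
--          (TableName sysname, ChangedAt datetime, OldRows xml);
CREATE TRIGGER audit_Triggertest
on Triggertest
FOR UPDATE
AS
INSERT INTO dbo.AuditLog (TableName, ChangedAt, OldRows)
SELECT 'Triggertest',
       GETDATE(),
       (SELECT * FROM deleted FOR XML RAW('row'), ROOT('rows'), TYPE);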
Finally:
Do all your tables have exactly the same number of columns, datatypes, and nullability as Triggertest? Because that is the assumption here...
If you want the table to be built each time the trigger runs, then you have no choice but to use the system tables to find the columns and create a table with those column definitions. Of course, your first step will have to be to drop the existing table, or the trigger won't work the second time someone updates a record.
However, I think you need to rethink this process. Dropping a table and then creating a new one every time you change a record is a seriously bad idea. How is this table in any way useful when it may get wiped out and rebuilt every second or so?
What you might consider doing instead is creating a dynamic process that generates the CREATE TRIGGER scripts, each with the correct information for its table but not itself dynamic; a sketch follows. Then your configuration people need to run this process every time table changes are made.
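A sketch of such a generator, with every name in it hypothetical: it reads the catalog views and prints one static audit trigger per table, assuming a matching <table>_audit table exists with the same non-identity columns:

SELECT 'CREATE TRIGGER trg_' + t.name + '_audit on ' + QUOTENAME(t.name) + CHAR(10)
     + 'FOR UPDATE' + CHAR(10)
     + 'AS' + CHAR(10)
     + 'insert into ' + QUOTENAME(t.name + '_audit') + ' (' + c.cols + ')'
     + ' select ' + c.cols + ' from deleted'
FROM sys.tables AS t
CROSS APPLY (
    -- comma-separated list of the table's non-identity columns
    SELECT STUFF((SELECT ', ' + QUOTENAME(name)
                  FROM sys.columns
                  WHERE object_id = t.object_id
                    AND is_identity = 0
                  FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
                 1, 2, '') AS cols
) AS c;

Each generated script is a plain, static trigger, so the dynamic work happens once at generation time rather than on every update.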
Remember, it is critical for triggers to do two things: run as fast as humanly possible, and account for processing all the records in the batch (triggers should never do row-by-row processing or other slow processes, or assume only one row will be in the inserted or deleted tables). Dynamic SQL in a trigger is probably also a bad idea, as you can't test out all the possibilities beforehand, and it can bring your whole production server to a screaming halt when something unexpected happens.
