I am in the process of converting an Access database to SQL Server 2005. I have successfully migrated the data and original schema using SSMA and am now in the process of normalizing the database, which requires me to add a few unique identifiers.
Some of the columns were previously created with the AutoNumber data type, which is fine. However, I need to create meaningless but unique identifiers for other data, so I am using the int data type with the Identity Specification property. I am seeding at 101 to keep these values above the range already used by the data that has unique identifiers, since both sets will eventually reside in the same table.
My problem is that when I create a new int with Identity Specification with a seed value of '101' and an increment of '1', the numbers start at '1'. I have attempted to reseed with:
USE dbMyDatabase;
DBCC CHECKIDENT ('tblMyTable', RESEED, 101);
to no avail. Any suggestions would be greatly appreciated. Thanks in advance!
The solution was to create the column manually with a SQL query. Adding it through the "New Column..." option produced incorrect results every time. After adding it with
USE dbMyDatabase;
ALTER TABLE tblMyTable
ADD fldID INT IDENTITY(101, 1);
it works just fine.
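To confirm the seed took effect, one quick sanity check (fldID and tblMyTable are the names from the example above) is to compare the lowest assigned value with the current identity value:

SELECT MIN(fldID) AS FirstId,
       IDENT_CURRENT('tblMyTable') AS CurrentIdentity
FROM tblMyTable;

If the table already contained rows when the column was added, FirstId should come back as 101.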
I am new to using SSIS and am on my third package. We are taking data from Oracle into SQL Server. On my Oracle table, the unique key is called recnum and is numeric(12,0). In this particular package, I am trying to take each record from Oracle, look it up in a SQL Server table to see if that unique key is found, and if not, add the record to the SQL Server table. My issue is that the lookup wouldn't find a match. After much testing, I came up with the following method that works, but I don't understand why I had to do this.
How I currently have it working:
I get the data from Oracle. In the next step, I added a derived column that uses the Oracle column (the expression is just that field, no other formatting). Then in the lookup I use the derived column instead of the column from Oracle.
We had already done this on another table where the unique key was numeric(8,0) and it worked ok without needing a derived column.
SSIS is very fussy about data types; lookups only work reliably if the data types match exactly.
Double-click the data path lines between Data Flow objects to check the data types. I use Data Conversion tasks or CAST statements to force matching data types when I use lookups.
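For example, if the lookup's reference table holds the key as INT but the Oracle side arrives as a 12-digit numeric, a CAST in the lookup's SQL query can align the two (dbo.MyTargetTable is a hypothetical name; recnum is the key from the question):

SELECT CAST(recnum AS NUMERIC(12, 0)) AS recnum
FROM dbo.MyTargetTable;

In SSIS terms, NUMERIC(12,0) surfaces as DT_NUMERIC with precision 12, so the lookup input and reference columns then carry the same data type.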
Hope this helps.
I have been using ISQL (SQLAnywhere 12) to import data from CSVs into existing tables using INPUT INTO and never ran into a problem. Today I needed to import data into a table containing an auto-increment column, however, and thought I just needed to leave that column blank, so I tried it with a file containing only 1 row of data (to be safe). Turns out it imported with a 0 in the auto-increment field instead of the next integer value.
Looking at the Sybase documentation, it seems like I should be using LOAD TABLE instead, but the examples look a bit complex.
My questions are the following:
1. The documentation says the CSV file needs to be on the database server and not the client. I do not have access to the database server itself - can I load the file from within ISQL remotely?
2. How do I define the columns of the table I'm loading into? What if I am only loading data into a few columns and leaving the rest as NULL?
3. To confirm: will this leave existing data in the table as-is and simply add to it whatever is in the CSV?
Many thanks in advance.
1. Yes. Check out the online documentation for LOAD TABLE - you can use the USING CLIENT FILE clause.
2. You can specify the column names in parentheses after the table name, i.e. LOAD TABLE mytable (col1, col2, col3) USING CLIENT FILE 'mylocalfile.txt'. Any columns not listed will be set to NULL if the column is nullable, or to the equivalent of an empty string if it is not - this is why your auto-increment column was set to 0. You can use the DEFAULTS ON clause to get what you want.
3. Yes, existing data in the table is not affected.
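Putting those together, a minimal sketch (the file name, path, and delimiter are assumptions - adjust to your CSV):

LOAD TABLE mytable (col1, col2, col3)
USING CLIENT FILE 'c:\temp\mylocalfile.csv'
DELIMITED BY ','
DEFAULTS ON;

With DEFAULTS ON, columns not in the list - including the auto-increment one - receive their default values instead of 0, and the loaded rows are simply appended to the existing data.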
I have a regular table in SQL Server 2012 and I want to sort the data in a certain query by creation date of the records.
The problem is I don't have a column that holds this data for each record.
Is there a way of doing that without the designated column?
Maybe there is some kind of built-in creation date information that exists in the database and I can access it somehow...
If you don't have a date column, you cannot sort by created date. There is no built-in create date per row that I know of. You have some other options though. If you have an identity column (auto increment), you can order by that to find which row was added first.
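A minimal sketch, assuming a hypothetical identity column named Id:

SELECT *
FROM dbo.MyTable
ORDER BY Id;

Bear in mind that identity order only approximates insertion order - reseeding or SET IDENTITY_INSERT can break the correlation.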
You could perhaps use the location of the row in data pages like this answer mentions: Equivalent of Oracle's RowID in SQL Server
There is no built-in creation date information for rows.
There are some (commercial) tools that can sometimes extract that information from the transaction logs. That's a capability used in emergencies, not during normal operations.
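As a rough illustration of the kind of information those tools work from, the undocumented fn_dblog function exposes the active portion of the transaction log (unsupported; do not rely on it in production):

SELECT [Current LSN], Operation, [Transaction ID]
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_INSERT_ROWS';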
I am working on a project creating a log table of sorts for failed jobs. Since the step_id can change depending on step order, I wanted to use the step_uid as a unique identifier for a step. With that in mind, it seems that the step_uid in msdb's sysjobsteps table is a nullable column, and so far I am relying on that column NOT being null.
Does anyone know why or when that column would ever be null? No examples exist on my current server.
By looking at the source code of the sp_add_jobstep_internal stored procedure, we can see that step_uid will always be filled in by this procedure.
Moreover, the sp_write_sysjobstep_log stored procedure assumes that step_uid cannot be null (it copies its value into the sysjobstepslogs table, where step_uid is defined as NOT NULL).
I think the step_uid column was defined as nullable only because it did not exist in SQL Server 2000. Since SQL Server 2005, however, it seems to always be filled in.
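To check whether any rows on a given server actually have a NULL step_uid, a simple query against msdb will do:

SELECT job_id, step_id, step_name
FROM msdb.dbo.sysjobsteps
WHERE step_uid IS NULL;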
Does anyone know how the SchemaCompare in Visual Studio (using 2010 currently) determines how to handle [SQL Server 2008R2] database table updates (column data type, optionality, etc.)?
The options are to:
Use separate ALTER TABLE statements
Create a new table, copy the old data into it, and rename (or drop) the old table so the new one can be renamed to assume the proper name
I'm asking because we have a situation involving a TIMESTAMP column (for optimistic locking). If SchemaCompare uses the new table approach, the TIMESTAMP column values will change & cause problems for anyone with the old TIMESTAMP values.
I believe Schema Compare employs the same CREATE-COPY-DROP-RENAME (CCDR) strategy as VSTSDB described here: link
Should be able to confirm this by running a compare and scripting out the deploy, no?