I am working on a project creating a log table of sorts for failed jobs. Since the step_id can change depending on step order, I wanted to use the step_uid as a unique identifier for a step. With that in mind, it seems that the step_uid column in msdb's sysjobsteps table is nullable, and I am relying on that column not being null.
Does anyone know why or when that column would ever be null? No examples exist on my current server.
Looking at the source code of the sp_add_jobstep_internal stored procedure, you can see that step_uid is always filled in by this procedure.
Moreover, the sp_write_sysjobstep_log stored procedure assumes that step_uid cannot be null (it copies its value into the sysjobstepslogs table, where step_uid is defined as NOT NULL).
I think the step_uid column was defined as nullable only because it did not exist in SQL Server 2000. Since SQL Server 2005, however, it seems to always be filled in.
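If you want to double-check this assumption on your own instance, a quick query against the standard msdb schema shows whether any rows violate it:

```sql
-- Count job steps whose step_uid is NULL;
-- on SQL Server 2005+ this is expected to return 0.
SELECT COUNT(*) AS null_step_uids
FROM msdb.dbo.sysjobsteps
WHERE step_uid IS NULL;
```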
We need to extract data from a SQL Server source into ODI for our Oracle database.
In this source there is a difference between a NULL and an empty string, and we need to preserve that difference in ODI. Something like nvl(attribute, 'XXX'), so that a NULL becomes 'XXX' and stays distinguishable from an empty string in Oracle.
But in the physical mapping, coming from SQL Server, ODI always uses a temporary C$ table (which is already an Oracle table). My 'nvl' only gets applied after that C$ table, and in Oracle a NULL and an empty string are treated as the same value.
Does anyone know how to handle this issue?
Thanks!
In the logical mapping you can apply the ANSI SQL function coalesce(attribute, 'XXX') to the target column; this is valid SQL Server syntax.
If you set the Execute on Hint parameter to Source, the function will be applied in the SELECT statement on the source, before the data is inserted into the C$ table.
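This works because SQL Server, unlike Oracle, keeps NULL and the empty string distinct, so the marker is applied only to true NULLs before the data reaches the C$ table. A quick sketch (using the attribute and 'XXX' placeholders from the question):

```sql
-- Run on the SQL Server source:
SELECT COALESCE(NULL, 'XXX');  -- returns 'XXX' (true NULL gets the marker)
SELECT COALESCE('', 'XXX');    -- returns ''    (empty string passes through unchanged)
```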
I am new to SSIS and am on my third package. We are taking data from Oracle into SQL Server. In the Oracle table, the unique key is called recnum and is numeric(12,0). In this package I am trying to take each record from Oracle, look it up in a SQL Server table to see if the unique key is found, and if not, add the record to the SQL Server table. My issue was that the lookup wouldn't find a match. After much testing, I came up with the following method that works, but I don't understand why I had to do this.
How I currently have it working:
I get the data from Oracle. In the next step, I added a derived column that simply copies the Oracle column (the expression is just that field, with no other formatting). Then in the lookup I use the derived column instead of the column from Oracle.
We had already done this on another table where the unique key was numeric(8,0) and it worked ok without needing a derived column.
SSIS is very fussy about data types; lookups only work nicely if the data types match.
Double-click the data path lines between Data Flow objects to check the data types. I use Data Conversion tasks or CAST statements to force matching data types when I use lookups.
Hope this helps.
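The derived-column trick works because it hands SSIS a column with an explicitly declared type. A more direct version of the same idea (a sketch; recnum is the column name from the question, and DT_NUMERIC(12,0) is assumed to be the type of the SQL Server lookup column) is to cast explicitly in the Derived Column expression so both sides of the lookup match exactly:

```
(DT_NUMERIC,12,0)recnum
```

You can confirm what types each side actually carries by double-clicking the data path, as described above.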
It seems that SSDT does not publish a column COLLATION change, even though it detects the change during the comparison process.
The issue is that if you change the COLLATION of a specific column in a table and try to publish the change, SSDT ignores it when creating the publish script.
Here is a similar issue described on the MSDN forums, reported long ago, that can still be reproduced.
I have been using SSDT version 14.0.60629.0.
Does SSDT still have this issue, or is there a valid workaround?
Update
This issue is only for the columns which are using a User-Defined Data Type.
Update
(added steps to reproduce, and corrected the question text):
Steps to reproduce:
1. Start with a database and note the collations (this is the one I have, a DB on my Dev server):
Current COLLATION setup is:
Server: SQL_Latin1_General_CP1_CI_AS
Database: SQL_Latin1_General_CP1_CI_AS
Table: SQL_Latin1_General_CP1_CI_AS
User-Defined Data Type (dt_Source AS varchar(20)): SQL_Latin1_General_CP1_CI_AS
Column (Source AS dt_Source): SQL_Latin1_General_CP1_CI_AS
2. Then change the database collation:
USE master;
ALTER DATABASE [<db_name>] COLLATE SQL_Latin1_General_CP1250_CS_AS
New COLLATION setup will be:
Server: SQL_Latin1_General_CP1_CI_AS
Database: SQL_Latin1_General_CP1250_CS_AS
Table: SQL_Latin1_General_CP1250_CS_AS
User-Defined Data Type (dt_Source AS varchar(20)): SQL_Latin1_General_CP1250_CS_AS
Column (Source AS dt_Source): SQL_Latin1_General_CP1_CI_AS
The previous column collation (SQL_Latin1_General_CP1_CI_AS) remains, and the SSDT compare mechanism is not able to detect the change.
This leads to an error message if I try to create a foreign key constraint on this column, referencing another, newly populated column in another table, because the publish script from the comparison was built without knowing the true collation.
For example, this produces an error because the column collations are different:
ALTER TABLE [FCT].[Inventory] WITH NOCHECK
ADD CONSTRAINT [FK_Inventory_Source] FOREIGN KEY ([Source]) REFERENCES [DIM].[Source] ([SourceCode]);
Make sure you ENABLE "script database collation" in the publish settings (tab: General).
Source: https://dba.stackexchange.com/questions/128002/ssdt-publish-window-what-does-checkbox-enable-mean
It might then take multiple publish runs: the first applies the collation at the database level, later runs apply it at the table/column level.
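To verify what each publish run actually changed, a query against the catalog views (a sketch; FCT.Inventory is the table from the question) lists the collation of each column next to the database default:

```sql
-- Compare each column's actual collation with the database default
SELECT c.name           AS column_name,
       c.collation_name AS column_collation,
       DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS database_collation
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'FCT.Inventory');
```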
Is there a nice way, before I alter a table (e.g. remove a column), to see if this will break any stored procedures?
I am trying to do this in MS SQL Server.
Use the query here to search all stored procedures for the table and column name. You will probably still want to look at the code of each one you find to verify whether it will break.
You can use the following query to search for the table name in any stored procedure:
SELECT name
FROM sys.procedures
WHERE OBJECT_DEFINITION(object_id) LIKE '%Your_Table_Name%';
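The LIKE search can return false positives (matches in comments, or similarly named tables). On SQL Server 2008 and later, the dependency views give a more precise answer; a sketch using sys.dm_sql_referencing_entities (substitute your schema and table name):

```sql
-- Objects that reference the table, per the tracked dependency metadata.
-- Note: references hidden inside dynamic SQL are not detected.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.Your_Table_Name', N'OBJECT');
```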
I suggest you:
1. Make sure you have a separate environment (DEV).
2. Use the sample code from here to create a proc that confirms all objects in the database can be recompiled:
How to Check all stored procedure is ok in sql server?
3. Use it - I can guarantee you will already have failing objects before you remove your column.
4. Remove your column and use it again to see if more things broke.
The more mature approach is to put your database into a database project and build that, but you can't do this until your database is valid.
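The linked approach boils down to re-validating every module and catching the failures. A minimal sketch of the idea (error handling trimmed; sp_refreshsqlmodule re-parses and re-binds a module, so it fails for any procedure that no longer compiles, though it cannot be used on schema-bound modules):

```sql
-- Try to refresh each stored procedure's metadata;
-- procedures that reference dropped columns will raise an error.
DECLARE @name nvarchar(517);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.procedures;
OPEN cur;
FETCH NEXT FROM cur INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sys.sp_refreshsqlmodule @name;
    END TRY
    BEGIN CATCH
        PRINT @name + N' failed: ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM cur INTO @name;
END;
CLOSE cur;
DEALLOCATE cur;
```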
I am in the process of converting an Access database to SQL Server 2005. I have successfully migrated the data and original schema using SSMA and am now in the process of normalizing the database, which requires me to add a few unique identifiers.
Some of the columns we had were previously created using an AutoNumber data type, which is fine. However, I need to create meaningless but unique identifiers for other data, so I am using the int data type with the Identity Specification property. I am seeding at '101' to keep this data above the range that currently exists for data that already has unique identifiers, as they will eventually reside in the same table.
My problem is that when I create a new int with Identity Specification with a seed value of '101' and an increment of '1', the numbers start at '1'. I have attempted to reseed with:
USE dbMyDatabase;
DBCC CHECKIDENT ('tblMyTable', RESEED, 101);
to no avail. Any suggestions would be greatly appreciated. Thanks in advance!
The solution was to create the column manually with a SQL query. Adding it through the "New Column..." option produced incorrect results every time. Now that I added it with
USE dbMyDatabase
ALTER TABLE tblMyTable
ADD fldID INT IDENTITY(101,1)
it works just fine.
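One detail worth knowing for future reseeds (a sketch; behavior on a completely empty table varies by SQL Server version): RESEED sets the current identity value, so on a table that already contains rows the next inserted row receives the reseed value plus the increment.

```sql
-- Check the current identity value without changing it
DBCC CHECKIDENT ('tblMyTable', NORESEED);

-- On a table that already has rows, reseed to 100
-- so the next inserted row receives 101
DBCC CHECKIDENT ('tblMyTable', RESEED, 100);
```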