Error on job that retrieves data from an external FTP server

I started in a large company as an IT consultant. One of my tasks is to manage an application that has a SQL database.
I have very limited knowledge of SSISDB and SQL Server Management Studio - but I am willing to learn.
The SQL database is updated with external data. This can be done by users directly in the application, but it also happens through a scheduled job. The job runs from SQL Server Agent and has only two steps, one of which executes a dtsx package.
The dtsx package is set up to retrieve data from an external FTP server and merge the data into the database. The job was made by my predecessor and has run flawlessly for a very long time.
Now we are in the situation where the FTP server supplying the data has been changed.
I have therefore gone into the Connection Managers and pointed the connection to the new FTP server.
When running the job, however, we still get the following error message:
Failed to configure a connection property that has the following path: \Package.Connections[FTPConnection].Properties[ChunkSize]. An error occurred while setting the value of property “ChunkSize”. The error returned is 0x80020009 “The ValidateDates has been migrated. The package must be saved to retain migration changes.”
I have checked the Connection Managers, and ChunkSize is unchanged from when the job was working correctly. ChunkSize is set to 1000, both in the Connection Manager and in the dtsx package itself.
When I searched for the problem, it was mentioned that it may have something to do with the connection to the FTP server. So I have checked the connection to the FTP server from the server where the SQL database is located - and there is a connection. In addition, I have made sure that there is a firewall rule that allows traffic between the two servers, across protocols and ports 20-22.
When the job itself is run, however, no traffic leaves the server. So I believe the problem is with the job itself.
Edit: after validating the package, I got the following.
Failed to configure a connection property that has the following path: \Package.Connections[FTPConnection].Properties[ChunkSize]. An error occurred while setting the value of property “ChunkSize”. The error returned is 0x80020009 “The ValidateDates has been migrated. The package must be saved to retain migration changes.”.
at Microsoft.SqlServer.IntegrationServices.Server.ISServerExec.ConnectionParametersManager.ConfigureProperties(String parameterName, Object parameterValue)
at Microsoft.SqlServer.IntegrationServices.Server.ISServerExec.ConnectionParametersManager.ConfigureProperties()
at Microsoft.SqlServer.IntegrationServices.Server.ISServerExec.ProjectOperator.ValidatePackageWithReference(Int64 validationId, Int64 infoId, Int64 projectId, String packageName, Int64 versionId, Nullable`1 referenceId, Project isserverProject, Boolean OfflineMode)
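For reference, the raw validation messages can also be read from the SSIS catalog. A query along these lines against SSISDB should show the most recent operation messages (a sketch based on the standard catalog views, not something specific to our setup):

-- Most recent messages from all SSISDB operations (executions, validations, deployments).
SELECT TOP (50)
       om.operation_id,
       om.message_time,
       om.message_type,
       om.message
FROM SSISDB.catalog.operation_messages AS om
ORDER BY om.message_time DESC;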
I hope my description is comprehensive enough - otherwise please do write follow-up questions.
PS: English is not my first language - sorry if something didn't come out quite right.

Related

SQL Server Agent Job stops SSIS Step with "unexpected error" and without any error information

I am dealing with this problem on several Windows Server 2019 (Core) machines, each running one SQL Server 2019 CU4 instance.
What we are trying to do
We are currently building a data warehouse with distributed databases. The individual layers of the DWH each live on their own database server. Data exchange between the layers/servers takes place via SSIS ETL packages, which use linked servers to reach the other layers and pull data across. Each layer also has its own SSIS service instance and executes the corresponding SSIS packages.
The SSIS packages are called by SQL Server Agent jobs. We have a job that executes the SSIS packages (#1), which in turn calls another job (#2) as its last step; after a short wait, job #2 re-executes the calling job (#1). Controlled by schedules, this creates a loop in which data is continuously transferred by the ETLs.
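A minimal sketch of that last step, with placeholder job names (not our real ones):

-- Final step of job #2: wait briefly, then restart the calling job (#1).
WAITFOR DELAY '00:05:00';
EXEC msdb.dbo.sp_start_job @job_name = N'DWH_ETL_Load';  -- placeholder name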
I hope this was not too much unnecessary background.
The error
Basically the job runs, and there are numerous successful executions. However, we are observing interruptions in job #1 without any helpful information about the error: the job history refers to the SSIS log, which in turn only contains an "unexpected termination". In the SSIS log we only see behavior indicating that the ETL package active at that time stopped after validation. Depending on the log level, nothing is logged at all, not even the execution of individual packages of the project. The package where the error occurs varies; it is not limited to a specific one.
What I have already tried
Re-create the jobs and SSIS environments by hand (scripted before)
Use the 32-bit runtime
Upgrade the SSIS project/package version to 2019
Increase the log level to "verbose"
Patch the SQL Server to CU4
Save SSIS dump files (couldn't find them, or they weren't created)
Search the Windows and SQL Server log files
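One avenue we have not fully exhausted is querying the SSIS catalog directly. Assuming the project is deployed to SSISDB, a query along these lines should surface error and failure events (a sketch over the standard catalog views, not something we have confirmed on our instances):

-- Most recent error/failure events logged by the SSIS catalog.
SELECT TOP (50)
       em.message_time,
       em.package_name,
       em.event_name,
       em.message_source_name,
       em.message
FROM SSISDB.catalog.event_messages AS em
WHERE em.event_name IN (N'OnError', N'OnTaskFailed')
ORDER BY em.message_time DESC;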
Does anyone have suggestions or ideas on how to get more specific error information?
Thank you very much and take care :)
UPDATE: We have an error message (OLE DB 0xC0202009 and 0x80004005)!
To rule out the use of environments as a cause, I manually set the parameters in the SSIS job step instead of overriding them by selecting an environment.
Long story short: Today it turns out that the parameter for an OLE DB Connection String is not passed correctly.
The following is specified as a parameter in the job step:
[screenshot of the job-step parameter omitted]
However, the following connection string is specified in the context of the error message:
[screenshot of the reported connection string omitted]
Please note that some arguments are added to the parameter twice (marked in red in the screenshot).
What could have caused that?

SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER - have tried everything

I have created an SSIS package and I am trying to run it locally. We use package configurations that point to SQL tables and an XML config file. The package ran successfully for about a week, even when deployed to a SQL Server Agent job in our STAGE environment.
Now, the only way I can get the package to run is by not using the package configurations and choosing EncryptSensitiveWithPassword. If I change the package to DontSaveSensitive, I continuously get the error below:
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80040E4D Description: "Login failed for user 'Test_User'.".
Error: 0xC020801C at AgentCompany, Lookup [37]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Test" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
It is so strange that about a week ago this package ran fine with the configurations and the DontSaveSensitive option.
I have updated the config file to ensure that it establishes the connection string to the appropriate database. I also tested the connectivity on the connection managers, and they all test successfully.
I also double-checked the SQL database the user is trying to connect to, to ensure that it has permissions there, and it does.
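For what it is worth, that permission check can be made explicit by impersonating the login on the SQL Server (a sketch; the table name is invented for illustration):

-- Impersonate the failing login and confirm it can connect and read.
EXECUTE AS LOGIN = N'Test_User';
SELECT DB_NAME() AS current_db, SUSER_SNAME() AS login_name;
SELECT TOP (1) * FROM dbo.SomeLookupTable;  -- hypothetical table the Lookup reads
REVERT;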
I am very confused. Please help!
Other things I have tried:
Updating the dtsconfig file
Re-creating the connection managers
Setting DelayValidation to true on some DFT tasks
Changing the runtime to 32-bit
EncryptSensitiveWithPassword with package configs removed - this works, but it is not the standard at my company and it is not how I developed and tested the package before
When you open/run a package, an OnInformation event is fired that says something like
The package is attempting to configure from the XML file "c:\ssisdata\so_56776576.dtsconfig".
When Visual Studio/SSDT opens/runs a package that says it uses configurations but, for whatever reason, cannot get them, you should then see messages like
Warning loading so_56776576.dtsx: Failure importing configuration file: "c:\ssisdata\so_56776576.dtsconfig"
and
Warning loading so_56776576.dtsx: The configuration file "c:\ssisdata\so_56776576.dtsconfig" cannot be found. Check the directory and file name.
and
Warning loading so_56776576.dtsx: Failed to load at least one of the configuration entries for the package. Check configuration entries for "Configuration 1" and previous warnings to see descriptions of which configuration failed.
If someone has manually edited the config file and broken the XML, you'd see a warning like
Cannot load the XML configuration file. The XML configuration file may be malformed or not valid
The important thing to note with regard to configuration: if a configuration cannot be found, SSIS will continue along with the design-time values. That is why it is crucial to check the warnings emitted when your package runs. If you are running manually, ensure that you have /rep ew specified so that Errors and Warnings are reported.
Guesses as to root cause
The package has the protection level EncryptSensitiveWithUserKey, which means the AD credentials of the package creator are used to encrypt anything that might contain sensitive information. I could be using AD authentication in my connection string and specify that the connection should be trusted, but that entire block is still going to get encrypted against my Active Directory account. When you come along and attempt to maintain the package, it's not going to be able to decrypt the sensitive data, as you are not me.
The two ways around that are to use a shared key (EncryptSensitiveWithPassword/EncryptPackageWithPassword), which is cumbersome to deal with, plus it goes against the whole spirit of secrecy since everyone knows the secret. The other approach, as you've identified, is DontSaveSensitive, and that's my go-to for all of this.
The problem to overcome with DontSaveSensitive is that every time you save, SSIS wipes out any knowledge of user name and password from places that might be holding on to it - like a connection manager. The 2005/2008 strategy to hedge against this was to use configuration, or explicit overrides at run time, to supply the user name and password. My typical approach was to use configuration based on a table instead of XML, as I was better at securing sensitive data in a table than I was at mucking with ACLs on the file system. The other challenge we had with multiple developers and file-based configuration was that either everyone had to set their file systems up the same way (and we developers are unique rainbow snowflakes, so that's unlikely), or we needed to use a network share, which is great until someone adds their own values to it, breaks it, removes your changes, or causes any of a host of other problems.
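For reference, table-based configuration uses the table the SSIS package configuration wizard generates, which looks like this (recalled from memory, so verify against an instance before relying on it):

-- Default table created by the SSIS package configuration wizard.
CREATE TABLE dbo.[SSIS Configurations]
(
    ConfigurationFilter NVARCHAR(255) NOT NULL,
    ConfiguredValue     NVARCHAR(255) NULL,
    PackagePath         NVARCHAR(255) NOT NULL,
    ConfiguredValueType NVARCHAR(20)  NOT NULL
);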

SQL Server 2014, SSDT : deploying .dtsx packages to live, problems with connection managers & variables

Summary: A "master" database houses a set of connection strings. A .dtsConfig XML file is used to point the packages to this database. Dynamic connection assignment is accomplished by using package variables and expressions on the connections. It works flawlessly in the development environment, but once deployed to live it falls over.
I'm currently running into issues when deploying .dtsx packages to a production environment. The issue is specifically related to the connection manager when the jobs run. The history log reports a ... network ... error with a Login timeout error as the reason.
(For reference, I'm using Visual Studio 2013 with SQL Server Data Tools)
1. I have a table in a master database that holds the connection strings that it needs to process.
2. The packages check for a configuration file that points them to the database stipulated in (1).
3. The connections are retrieved and placed in an object variable.
4. The variable is mapped to a foreach loop container, where a set of connection string variables are mapped to the relevant columns.
5. The packages then progress as normal.
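To make steps (1) and (3) concrete, the master table and the query that feeds the object variable (presumably via an Execute SQL Task) look roughly like this (names are illustrative, not the real ones):

-- Illustrative shape of the master connection-strings table.
CREATE TABLE dbo.ClientConnections
(
    ClientName       NVARCHAR(128)  NOT NULL,
    ConnectionString NVARCHAR(4000) NOT NULL
);

-- Full result set stored in the object variable, then iterated by the foreach loop.
SELECT ClientName, ConnectionString
FROM dbo.ClientConnections;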
Some notes:
When I did the development, I provided default values on my network for the connection strings.
I have checked the connection string parameters and formatting inside the database, and they conform to Microsoft's specification.
Our implementers installed SSDT on a client's QA server, where I altered the connection variables to point to their network. This solved the problem, but it is not sustainable (in my mind at least).
So my question is: how do I get my production deployments to work correctly with dynamic connection management assignment without having to alter the connection string variables inside each package on a per client basis?
Any help will be appreciated.
After doing more research by posing plenty of questions to DuckDuckGo, I finally got my answer here.
A quote from the above blog:
I had a set of SSIS packages running for my client using the third option listed above. The packages worked fine for ages until one fine day when they failed. The logs showed the packages had failed validation and I discovered that all the packages had their connection managers’ DelayValidation property set to False. The variable used to set the connection string had a default value pointing to the DEV server. These packages in production were actually trying to validate against DEV database though the connection string was dynamically set via a variable to point to PROD. This was dangerous as the jobs will not run if DEV server was down, which is exactly what had occurred.

What state is my SQL Server database in when msdeploy fails on user creation?

I am using msdeploy (version 2) to transfer a database from machine A to machine B.
In the database on machine A there are some users that do not exist on machine B, so the transfer (partially) fails with the message:
Error Code: ERROR_SQL_EXECUTION_FAILURE
More Information: An error occurred during execution of the database script.
The error occurred between the following lines of the script: "3" and "5".
The verbose log might have more information about the error.
The command started with the following: "CREATE USER [someDomain\someUser] FOR LOGIN [someDomain"
Windows NT user or group 'someDomain\someUser' not found.
Check the name again. http://go.microsoft.com/fwlink/?LinkId=178587
The database seems to be transferred, except for the user creation. Does anyone know what state the database is in after this failure?
Is there any way I can transfer the database without the users (or better without specific users) using msdeploy?
Web Deploy uses SMO (SQL Management Objects) to script out and apply the scripts for SQL databases, and it exposes most of the SMO settings via the dbfullsql provider (so, most of these options: http://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.transfer_properties.aspx). If you want to skip the users due to this kind of login-not-exists or user-not-found error, you should be able to do so by adding the scripting option copyAllUsers=false to the source of the sync. For example:
msdeploy.exe -verb:sync -source:dbfullsql="Data Source=.\SQLExpress;Initial Catalog=MySourceDb;User Id=localUser;Password=LocalPass",copyAllUsers=false -dest:dbfullsql="Data Source=RemoteSQLServer;Initial Catalog=MyDestDb;User Id=remoteUser;Password=RemotePass"
Incidentally, I am surprised you note the db appears to have been sync'd - I would expect this is not actually the case. If you have the permissions for it, Web Deploy will create the database if it did not already exist when it initially tries to make the connection, but your failure occurred very early in the script execution, and I believe Web Deploy dbfullsql syncs are transacted by default (the db creation is separate from the script execution and is not transacted). Thus the db may exist where it did not pre-sync, but I wouldn't expect the data to be present in it.
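To see what actually arrived on the destination, a quick check like the following should show whether the database and any of its users exist (reusing the MyDestDb name from the example above):

-- Does the destination database exist, and which users made it across?
SELECT name, create_date
FROM sys.databases
WHERE name = N'MyDestDb';

SELECT name, type_desc
FROM MyDestDb.sys.database_principals
WHERE type IN ('S', 'U', 'G');  -- SQL users, Windows users, Windows groups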

SQL Agent Job - Connection may not be configured correctly or you may not have the right permissions on this connection?

I'm getting this error when running an SSIS package through SQL Agent
Failed to acquire connection "ORACLE ADO.NET". Connection may not be configured correctly or you may not have the right permissions on this connection.
When I log on as the SQL Agent User and run the ssis package directly it is fine. When I then execute it through the SQL agent job, it fails.
I've read around extensively on this topic, and it seems a lot of the advice concerns how you are logged in, the configuring of proxy accounts, etc., none of which has been helpful.
I am logging onto an Oracle database with an ADO.NET connection. The connection string is as follows (data source, user id and password have been changed):
Data Source=DATASOURCE;User ID=userid;Password=password;Persist Security Info=True;Unicode=True;
I'm loading this from a registry setting using package configuration. To check that I am getting the correct string, I am writing it into a temporary log table. I am definitely getting the string I need from the correct registry setting.
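For context, the temporary logging is nothing more elaborate than this (table name invented for this post; the parameter marker syntax depends on the logging connection type - "?" for OLE DB, "@ConnStr" for ADO.NET):

-- Hypothetical debug-log table populated from an Execute SQL Task.
CREATE TABLE dbo.PackageDebugLog
(
    LoggedAt DATETIME2      NOT NULL DEFAULT SYSDATETIME(),
    ConnStr  NVARCHAR(4000) NOT NULL
);

-- Parameter mapped to the package variable holding the registry-sourced string.
INSERT INTO dbo.PackageDebugLog (ConnStr) VALUES (?);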
I've tested the Oracle login credentials through PL/SQL Developer, and it lets me log in just fine.
As far as I can tell, since I'm using an explicit user name and password for the Oracle connection, it just shouldn't matter who the SSIS package runs as. The only point of failure I can see would be the reading of the information from the registry, but that seems fine.
I'm really quite baffled, I must confess, and would appreciate any help some of the splendid experts here can offer.
Many thanks,
James
Ok, tracked this one down after quite a lot of pain.
It was working fine in one environment but not another, so I fired up Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) and ran a package through the SQL Agent job, comparing which system entities were hit in each environment.
In the failing environment, at the point of the bulk transfer operation, the package attempted to load the Oracle 11 client DLL and then hung.
I knew that this was installed and, moreover, that the DLL path was a system environment setting. Further investigation revealed that the server had not been rebooted since the Oracle client install, and the SQL Server Agent process had not been recycled.
Yes, can you believe it, the old helpdesk fix "Can you reboot your computer?" worked.
Sigh!
We had issues at a client with packages connecting to Oracle that were stored on our SQL Server instance. The workaround we found was to change the package protection level to DontSaveSensitive and, for security purposes, to encrypt the user name and password in the package configuration, decrypted by a UDF in SQL Server. Of course, before you try the whole encryption part, I would recommend putting the user name and password in the package configuration without encrypting the values, to see whether changing the protection level setting solves your specific problem. I hope this helps.
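If you do try the encrypted variant, the SQL Server side can be as simple as passphrase-based encryption (a sketch, not the exact UDF we used; table and key names are invented):

-- One-time: store the Oracle password encrypted (hypothetical table).
INSERT INTO dbo.EncryptedConfig (ConfigKey, EncryptedValue)
VALUES (N'OraclePassword', ENCRYPTBYPASSPHRASE(N'MyPassphrase', N'secret'));

-- At run time: decrypt it back to plain text for the connection string.
SELECT CONVERT(NVARCHAR(256),
       DECRYPTBYPASSPHRASE(N'MyPassphrase', EncryptedValue)) AS OraclePassword
FROM dbo.EncryptedConfig
WHERE ConfigKey = N'OraclePassword';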
I was getting this error when the tnsnames.ora file did not have a valid entry for the environment.
