One parameter file for different repositories in Informatica PowerCenter

I have a parameter file that assigns DB connections for one repository, which stands for test. It refers to the folder where the workflow and session are located, like the following:
[ORANGE_REMIGRATION.WF:wf_m_remigration_payments_test.ST:s_m_remigration_payments_test]
I would like to know whether I can use one parameter file to assign DB connections for different environments, e.g. when the repository is PROD, the workflow should write to that environment. I need to know whether we can use repository names in the parameter file, e.g.
[MDM_TEST.ORANGE_REMIGRATION.WF:wf_m_remigration_payments_test.ST:s_m_remigration_payments_test]
where MDM_TEST would be the repository, and then list the DB connections, followed by another list in the same parameter file for MDM_PROD. Is this possible, or is there another way to do this?

Here's the description of Parameter File sections: a repository name is not allowed as part of the section heading.
However, if you migrate workflows between environments and use a different Integration Service, there are different values for $$ParamFileDir (or $$PMRootDir in general). So if you refer to your file using the variable, the migrated workflow will use the parameter file for the given environment. Hence the DEV workflow would use DEV connections and the PROD one would use PROD connections. No actions needed. More can be found here.
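For example, a minimal sketch of that setup, using the built-in $PMRootDir service process variable (the directory values below are hypothetical):

Parameter Filename in the session properties:
    $PMRootDir/ParamFiles/wf_m_remigration_payments.par

DEV Integration Service:  $PMRootDir = /opt/infa/dev  -> the DEV copy of the file is read
PROD Integration Service: $PMRootDir = /opt/infa/prod -> the PROD copy of the file is read

Each environment keeps its own copy of the file under its own root, listing its own DB connections.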

I assume you have distinct physical servers for DEV & PROD. If you define folder structures that mirror each other, like
/share/param/Paramfile1 ===> DEV server
/share/param/Paramfile1 ===> PROD server
you can use the same param file path in both environments. These param files can be configured via the respective workflow properties.
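A sketch of what that looks like (the connection parameter names are hypothetical): the file lives at the same path on both servers, but each copy lists that environment's connections.

/share/param/Paramfile1 on the DEV server:
[ORANGE_REMIGRATION.WF:wf_m_remigration_payments_test.ST:s_m_remigration_payments_test]
$DBConnection_Target=DEV_PAYMENTS_DB

/share/param/Paramfile1 on the PROD server:
[ORANGE_REMIGRATION.WF:wf_m_remigration_payments_test.ST:s_m_remigration_payments_test]
$DBConnection_Target=PROD_PAYMENTS_DB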

Related

Azure Data Factory - File System to Oracle Cloud Storage

Is it possible to copy files from an on-prem file system to Oracle Cloud Storage? Note that we are not concerned with the data inside the files.
In simple terms it's as if copying files from one folder to another.
Here is what I have tried:
1. Created a Self-Hosted Integration Runtime for the file system (testing on my local machine)
2. Created a Linked Service for the File System
3. Created a Linked Service for Oracle Cloud Storage (OCS)
4. Created a Dataset for the File System
5. Created a Dataset for Oracle (OCS)
However, when the connection is tested I get an error saying that my C:\ path cannot be resolved in step 2, and in step 5 it says it is not able to sink because that is not supported for OCS. At this point it seems like it is not possible to copy files into OCS?
I tried different configurations to see if OCS can be used as a drop container for files.

Make a custom SSIS package to reuse

I have created an SSIS package which is used to connect to the FTP server and retrieve all the XML files from that server. I have mostly used variables like sftp_server, username, etc. How can I make it a customisable package?
In other words, how can I plug it into another project, pass all the required fields, and do the same thing there?
This is my package
Since your screen shot looks like it's from SSIS 2012+, you can make your variables into parameters. Then you can call the same package from multiple jobs, and in the job step where you call the package, you can go into the Configuration tab and set the values of the package parameters for that job.
So different jobs can all call the same package and all pass their own values for FTP Server, Credentials, Local Paths, etc.
This assumes you are using the Project Deployment Model that became available in 2012. Otherwise you can do the same thing using Config files instead.
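As a concrete illustration, here is a minimal T-SQL sketch of running such a parameterised package through the SSISDB catalog; the folder, project, package, and parameter names are hypothetical:

DECLARE @exec_id BIGINT;
-- Create an execution for the deployed package
EXEC SSISDB.catalog.create_execution
    @folder_name = N'ETL',
    @project_name = N'FtpProject',
    @package_name = N'GetXmlFiles.dtsx',
    @execution_id = @exec_id OUTPUT;
-- @object_type = 30 means "package parameter"
EXEC SSISDB.catalog.set_execution_parameter_value @exec_id,
    @object_type = 30, @parameter_name = N'sftp_server',
    @parameter_value = N'ftp.example.com';
EXEC SSISDB.catalog.set_execution_parameter_value @exec_id,
    @object_type = 30, @parameter_name = N'username',
    @parameter_value = N'svc_ftp';
EXEC SSISDB.catalog.start_execution @exec_id;

A job step that uses the Configuration tab is doing the same thing through the UI.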

Database Project Over Multiple Environments?

How can you have environment specific table values in your database project and make sure that they only deploy to the environment you are deploying to with Release Management? We have been using Release Management for some time now, but only for .NET code. We are somewhat new to the DACPAC realm, but have found it easy to set up and use via release management. However, now we want to extend this capability to a table that has configuration variables per environment. How do we make this part of our database project and make sure that each environment has its own unique version of data?
Use SSDT for publishing the database schema and reference data; don't use it to manage environment settings.
Personally, I would (and have) run a secondary script post-deployment that configured environment-specific values. This is no different than putting the correct values in the web.config file of a web application post-deployment. It's something you manage within your deployment tool.
Ignoring the release management part of the question (because it depends on which mode you use, whether you store configuration variables in RM, etc.), you can certainly pass environment-specific values into your dacpac execution (for use in 'post-deploy' data scripts) using sqlcmd variables defined in a tokenised publish file.
Broadly the process is:
Use standard sqlcmd variable syntax in your post-deploy script, e.g. insert into some_table values ('$(my_env_var)') (see the sketch after this list)
update the database project properties (SQLCMD Variables tab) to include your new variable, which ensures your dacpac expects a value when executed
Generate a publish.xml file (which should now include a SqlCmdVariable node)
create a publish.release.xml file which contains transform instructions to update the value of your SqlCmdVariable node to introduce a token, e.g. ##my_env_var##
update your database project file (.sqlproj) to include instructions to transform publish.xml on build using the contents of publish.release.xml
It's quite long-winded, but what you get out of the above is a dacpac plus a tokenised publish file in your build output, ready to be detokenised and executed by your deployment process, be that RM or any other tool.
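To make the first step concrete, a minimal sketch of a post-deployment script; the table, column, and variable names are hypothetical, and the value is supplied at publish time through the publish profile or the SqlPackage /v: switch (e.g. SqlPackage /Action:Publish /SourceFile:MyDb.dacpac /Profile:publish.xml /v:my_env_var=https://prod.example.com):

-- Post-Deployment script: runs after the schema has been deployed.
-- $(my_env_var) is substituted by sqlcmd before execution.
DELETE FROM dbo.EnvironmentConfig WHERE ConfigKey = 'ServiceUrl';
INSERT INTO dbo.EnvironmentConfig (ConfigKey, ConfigValue)
VALUES ('ServiceUrl', '$(my_env_var)');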

How to deploy SSRS reports on a production server

I have created a few SSRS reports in the development environment. Now I need to deploy those reports to the production environment. The production server name & database names are different from the development environment.
Kindly let me know the proper procedure for deployment.
Create two different project configurations in Configuration Manager - name them e.g. "Development" and "Production". Then, in the project properties, set up the server names and report folders for both the prod and dev configurations as necessary. By doing this you will be able to choose the right configuration when deploying, and your reports will be deployed to the correct server.
It is not that straightforward for different databases, though. What I would suggest here is to create shared data sources in both the development and production environments with the same name, configure the connection strings properly, and use the shared data sources in your reports. You will need to create the data sources only once, and all the subsequent deployments will still refer to those data sources.
Hope it helps.
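For example, the two configurations might hold values like these in the project properties (the server and folder names are hypothetical):

Development configuration:
    TargetServerURL:    http://devserver/ReportServer
    TargetReportFolder: /Payments/Dev

Production configuration:
    TargetServerURL:    http://prodserver/ReportServer
    TargetReportFolder: /Payments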
To deploy the report, go to Project > Project Properties and the following window will open.
Fill in the server name in TargetServerURL (marked 1 in red). Your server name should be http://{servername}/ReportServer.
After this you can use a custom folder where you want to deploy the reports (this is marked as 2). Keep the rest the same.
Since you have different DB names as well you can do a number of things to address this issue.
You can create a shared Data Source like @Alexey pointed out.
Press Ctrl+Alt+D to open the Report Data pane, create a new Data Source, copy your query there, and redesign.
Hope this helps.

How do I use a different database connection for package configuration?

I have an SSIS package that sets some variable data from a SQL Server package configuration table (selecting the "Specify configuration settings directly" option).
This works well when I'm using the database connection that I specified when developing the package. However, when I run it on a server (64-bit) in the testing environment (either as an Agent job or running the package directly) and I specify the new connection string in the Connection Managers, the package still reads the settings from the DB server that I specified in development.
All the other connections pick up the correct connection strings; it only seems to be the package configuration that reads from the wrong place.
Any ideas or am I doing something really wrong?
The only way I was able to do this was to use Windows Environment Variables. You can specify things like connection strings and user preferences in environment variables, and then pick up those environment variables from your SSIS Task.
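A minimal sketch of that approach; the variable name and connection strings are hypothetical. Define a machine-level environment variable on each server, then use an environment-variable configuration in the package so it picks up whichever value the local server holds:

REM On the DEV server (cmd, run as administrator):
setx SSIS_CONFIG_DB "Data Source=DEVSQL01;Initial Catalog=SSISConfig;Integrated Security=SSPI;" /M

REM On the PROD server:
setx SSIS_CONFIG_DB "Data Source=PRODSQL01;Initial Catalog=SSISConfig;Integrated Security=SSPI;" /M

As noted further down, the SQL Server Agent service may need a restart before it sees a newly created variable.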
I prefer to use Server Aliases in the SQL Client Configuration. That way, when you decide to point the package to another SQL Server it is as simple as editing the alias to point to the new server, no editing necessary in the SSIS package. When moving the package to a live server, you need to add the aliases, and it works.
This also helps when you have a real painful naming convention for servers, the alias can be a more descriptive name than the actual machine name.
I didn't completely understand your question, but I store my connection settings in configuration files, usually one for each environment like dev, production, etc. The packages read the connection settings from the config files when they are run.
When you're creating a job to call the SSIS package and you're setting up the step, there is a tabbed area. The default tab is where you set the package name, and the next tab over is where you can set the configuration file. Have a config file for each package, and change it per server (dev, test, prod). The config files can be put directly on the dev, test, and prod servers, and then you point to them when setting up the job.
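For reference, such a configuration file (.dtsConfig) is plain XML; a minimal sketch, with a hypothetical connection manager name and connection string:

<?xml version="1.0"?>
<DTSConfiguration>
  <Configuration ConfiguredType="Property"
                 Path="\Package.Connections[SourceDB].Properties[ConnectionString]"
                 ValueType="String">
    <ConfiguredValue>Data Source=DEVSQL01;Initial Catalog=Staging;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>

The dev, test, and prod copies differ only in the ConfiguredValue.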
If you are using SQL Server package configuration, then all the properties of the packages will come from the SQL Server table - please check that.
SSIS security the way it stands is terrible. No one will be able to support things when I am out of the office. The job never reads from the configuration file... I give up. It only works when I edit the string in the Data Sources tab. However, the password gets lost if you happen to go into the job a second time. Terrible design, absolutely horrible. You would think that when you specify an XML file in the job step it would read the connection string that is defined there, but it does not. Does this really work for anyone else?
Go to the package properties and set deployment to True. This should work for what you have done.
I had the identical question, and got the same answer, i.e. you cannot edit the connection string used for package configurations hosted in SQL Server, except if you specify that the SQL Server connection string should be in an environment variable.
This unfortunately does not work in my dev setup, where two environments are hosted on the same machine. I ended up following Scott Coleman's approach as detailed on SQL Server Central [Free sign-up and a good site]. The trick is that you create a view to store your configuration settings on one central server, and then use the machine that connects to it to determine which environment is active.
I used that approach, but also used the User connecting to the environment to make a determination, because my test and dev setups run on the same SSIS instance, but as different user names. Scott suggests in the comments that the application name should be set, but this cannot be changed in the package execution job step, so it was not an option.
One other caveat that I found was that I had to add "Instead of" triggers to my view to do the inserts, updates and deletes for configuration variables.
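A minimal sketch of that view idea (the table, host, and login names are hypothetical; the column list matches the standard SSIS configuration table):

-- All environments' settings live in one central table with an extra Environment column
CREATE VIEW dbo.[SSIS Configurations] AS
SELECT ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType
FROM dbo.AllPackageConfigurations
WHERE Environment = CASE
        WHEN HOST_NAME() = 'DEVBOX01' THEN 'DEV'            -- connecting machine
        WHEN SUSER_SNAME() = 'DOMAIN\svc_test' THEN 'TEST'  -- connecting login
        ELSE 'PROD'
      END;

As noted above, INSTEAD OF triggers are needed on the view if packages or designers write configuration values back through it.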
We want to keep our package configs in a database table; we know it gets backed up with our other data and we know where to find it. Just a preference.
I have found that to get this to work I can use an environment variable configuration to set the connection string of the connection manager that I am reading my package config from. (Although I had to restart the SQL Server Agent before it could find the new environment variable - not ideal when I deploy this to production.)
Looks like when you run an SSIS package as a step in a scheduled job, it works in this order:
1. Load each of the package configs in the order they appear in the Package Configurations Organizer
2. Set the connection strings from the Data Sources tab in the job step properties of the scheduled job
3. Start running the package.
I would have expected the first two to be the other way around, so that I could set the data source for my package config from the scheduled job. That is where I would expect other people to look for it when maintaining the package.
