I have created a few SSRS reports in the development environment. Now I need to deploy those reports to the production environment. The production environment's server name and database names are different from the development environment's.
Kindly let me know the proper procedure for deployment.
Create two different project configurations in Configuration Manager; name them, e.g., "Development" and "Production". Then, in the project properties, set up the server names and report folders for both the prod and dev configurations as necessary. By doing this you will be able to choose the right configuration when deploying, and your reports will be deployed to the correct server.
It is not that straightforward for different databases, though. What I would suggest here is to create shared data sources with the same name in both the development and production environments, configure the connection strings appropriately, and use the shared data source in your reports. You will need to create the data sources only once, and all subsequent deployments will still refer to those data sources.
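For illustration, a shared data source is just an .rds file; a minimal sketch (the data source name and connection string here are placeholders) looks something like this:

<?xml version="1.0" encoding="utf-8"?>
<RptDataSource Name="WarehouseDS">
  <ConnectionProperties>
    <Extension>SQL</Extension>
    <ConnectString>Data Source=PRODSERVER;Initial Catalog=ProdDb</ConnectString>
    <IntegratedSecurity>true</IntegratedSecurity>
  </ConnectionProperties>
</RptDataSource>

The point is that both environments end up with a data source of the same name ("WarehouseDS" here); only the ConnectString differs between them.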
Hope it helps.
To deploy the report, go to Project > Project Properties to open the project's property pages.
Fill in the server name in TargetServerURL. It should be of the form http://{servername}/ReportServer.
After this, you can set TargetReportFolder to the custom folder where you want to deploy the reports. Keep the rest the same.
Since you have different DB names as well, there are a couple of things you can do to address this issue.
You can create a shared data source, as @Alexey pointed out.
Or press Ctrl+Alt+D to open the Report Data pane, create a new data source there, copy your query over, and redesign as needed.
Hope this helps.
I have a SQL Server database project (.sqlproj) which I am using as part of a CI/CD pipeline to deploy database changes. I would like to deploy the same code to two databases (Dev and Production) but each with a slightly different configuration:
In Dev, I have an Azure AD group Database-Dev-Developers:
CREATE USER [Database-Dev-Developers] FOR EXTERNAL PROVIDER;
In Production, I have an Azure AD group Database-Prod-Developers:
CREATE USER [Database-Prod-Developers] FOR EXTERNAL PROVIDER;
I can find no way to alter which scripts are built/published based on the configuration. Ideally I'd like to be able to specify the project configuration at build time (Debug/Release) and have it change the output.
I have tried adding conditional expressions for the relevant files in the .sqlproj file, but this has no effect:
Condition=" '$(Configuration)' == 'Debug' "
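In full, the attempted include looked something like this (the Security\*.sql paths are just illustrative):

<ItemGroup Condition=" '$(Configuration)' == 'Debug' ">
  <Build Include="Security\Database-Dev-Developers.sql" />
</ItemGroup>
<ItemGroup Condition=" '$(Configuration)' == 'Release' ">
  <Build Include="Security\Database-Prod-Developers.sql" />
</ItemGroup>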
You should look into using a Token Replacement step in your pipeline. You can add different variable values for Dev vs Prod to replace the tokens with. Then you just need one tokenized configuration file that can be used for both Dev and Prod.
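As a sketch of the idea (the #{...}# token syntax here follows one popular "Replace Tokens" pipeline extension; your tooling may differ), the script would contain a token instead of the group name:

CREATE USER [#{DevelopersAdGroup}#] FOR EXTERNAL PROVIDER;

and the pipeline would define the variable DevelopersAdGroup as Database-Dev-Developers for Dev and Database-Prod-Developers for Production.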
I'm not exactly sure how kosher it is to use tokens in a .sqlproj file, it depends on what configurations you're trying to replace. But I've seen it used very successfully on ...config.json files in modern .NET Core based projects.
Another thing you can look into is File Transformations. I don't have any experience using these though.
I have found a partial solution to this problem. One can create a publish profile, which contains instructions to ignore certain object types. See this helpful blog post which details the process, summarised below:
In Visual Studio, right-click SSDT project
Publish -> Advanced
Select the 'drop' tab, and check 'Drop objects in target but not in source'
Check 'Do not drop...' next to the object types you wish to ignore. For me this was 'Do not drop users' and 'Do not drop roles'
Save the publish profile
Extra step: for the Azure DevOps "Azure SQL Database deployment" task, specify the generated publish profile XML file in the 'Publish Profile' setting.
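For reference, the relevant portion of the generated publish profile looks roughly like this (a sketch; the element names follow the SqlPackage publish properties, and the exact set your SSDT version writes may differ):

<PropertyGroup>
  <DropObjectsNotInSource>True</DropObjectsNotInSource>
  <DoNotDropObjectTypes>Users;RoleMembership</DoNotDropObjectTypes>
</PropertyGroup>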
This has the drawback that non-sensitive security settings (such as role membership) cannot be deployed, but this was a trade-off I was willing to make in my situation.
I have successfully created my first SSRS project in Visual Studio. The deployment process requires setting up the TargetServerURL and the TargetServer Version. These are the only two items that I know are correct. The tutorial I have been watching does not go into the other items and does not clarify what they are and what they are used for. What are the following items referring to?
TargetDatasetFolder
TargetDataSourceFolder
TargetReportFolder
TargetReportPartFolder
The default settings for OverwriteDatasets and OverwriteDataSources were False, and this is probably why my deployment attempt threw a nondescript error. So perhaps if I try again, my deployment will create these folders on the server by force, but I would rather not do this because the database manager has already given me the names of the folders where I should deploy. So, how are these folders arranged? Please advise.
TargetDataSourceFolder: The name of the folder in which to store the published shared data sources. If you do not specify a folder, the data source is published to the same folder as the report. If the folder does not exist on the report server, Report Designer creates the folder when the reports are published.
TargetDataSetFolder: the same, but for the shared datasets you want to publish.
TargetReportFolder: The name of the folder in which to store the published reports. By default, this is the name of the report project. If the folder does not exist on the report server, Report Designer creates the folder when the reports are published.
You can specify a path (finance/dept1/...); in that case, your reports (or datasets or data sources) will be deployed along this path.
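Here is an example with Microsoft's defaults, sketched as the raw .rptproj properties (a sketch; element names and defaults may vary by SSDT version):

<TargetServerURL>http://localhost/ReportServer</TargetServerURL>
<TargetReportFolder>Report Project1</TargetReportFolder>
<TargetDataSourceFolder>Data Sources</TargetDataSourceFolder>
<TargetDatasetFolder>Datasets</TargetDatasetFolder>
<OverwriteDataSources>false</OverwriteDataSources>
<OverwriteDatasets>false</OverwriteDatasets>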
As for OverwriteDatasets and OverwriteDataSources (these apply to shared datasets and shared data sources), the right choice depends on the architecture you chose (or on what has already been created) on the server.
I think the best way is to leave them as False. If the items don't exist, the deployment will create them. If they do exist, you'll just get a warning (if I remember correctly), and the deployment will link your report to the datasets and data sources already created. Furthermore, you probably have other reports linked to those shared data sources/datasets, and if you overwrite them, you'll probably cause issues when those other reports run. Set them to True only when you want to modify the shared dataset/data source.
I have a Visual Studio 2010 database project, from which I want to generate a script that simply deploys this database to another machine. The problem is that I can't find a solution for this.
When I started the project, I imported the schema from a database on my development PC. The schema objects were generated, and all tables and scripts landed under 'Schema Objects -> Schemas -> dbo'. Over time, some things changed and some were added, and by using right-click -> Deploy, the changes were applied to my local database successfully.
But now I want to deploy to another machine. The problem is that the release folder of the project contains only an XML .dbschema file listing all the tables and scripts, which I can't import with SQL Server Management Studio (or I just can't find out how), and a deployment script which is nothing more than some checks followed by the pre- and post-deployment scripts, without any tables or scripts in it.
So please, how do I export the database from Visual Studio so I can easily put it up on another machine?
Marks--
You likely have already resolved this, but I thought I should answer your questions for the benefit of others.
Yes, you can deploy from Visual Studio to different machines. You can also do it from the command line, using VSDBCMD. And you can create a WIX project to give a wizard for others to install it with.
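A sketch of the command-line route (the manifest name, server, and database are placeholders; check the options for your VSDBCMD version):

vsdbcmd /a:Deploy /dd:+ /manifest:"MyDatabase.deploymanifest" /cs:"Data Source=TARGETSERVER;Integrated Security=True" /p:TargetDatabase=MyDatabase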
If you can connect to the target database from your dev PC, you can deploy to it. To do this:
Select another Configuration from the Solution Configuration drop down. Normally, the Project will come with "Debug" and "Release" baked in. You can add another configuration to allow you to deploy to various targets by clicking "Configuration Manager."
Right-click your Project and select 'Properties', or simply double-click Properties under the project.
Click the Deploy tab. Notice that the Configuration: drop-down shows the same selected configuration as "active."
Change the Deploy Action to "Create a deployment script (.sql) and deploy to the database."
Next to Target Connection String, click "Edit" and use the dialog to create your deployment connection to the target database.
Fill in the Target database name, if different.
For each Deployment Configuration (e.g., Debug, Release, etc.), you will probably want a separate Deployment configuration file. If you click "New," you can create one for the current configuration. The new file will open, and you can check and uncheck important things about the deployment.
Note: If you check Always re-create the database, the script will DROP and CREATE your database. You will lose all your data on the target! Be careful what you select here. Most people leave that unchecked for a Production target. I check it for Development or Local because I want a fresh copy there.
Save your changes to the file and to Properties.
To deploy to the target, be sure to select the correct Configuration. Click Build/Deploy [My Database Name]. You probably should experiment with this so you are familiar with how it works before trying it on a live environment.
Good practices: build a similar environment to production ("Staging") and deploy there first, to test the deployment, and always back up the database before deploying, in case something goes wrong.
For more info, please see:
Working with Database Projects
Walkthrough: Put an Existing Database Schema Under Version Control
Visual Studio 2010 SQL Server Database Projects
Is it possible to point Visual Studio at your new target database? In the Properties of your database project, on the Deploy tab, set the fields under Target Database Settings.
Now when you generate a deploy script, the resulting SQL file will contain the various CREATE/ALTER/DROP statements needed to align the target database with your schema.
You could always create an empty database and then do a schema compare in Visual Studio between your database project and the new empty database. You can amend the generated schema update script to also create the database (since the script otherwise assumes it is updating an existing, empty database).
We are creating several SSIS packages to migrate a large database as part of a release cycle.
We may end up with about 5-10 SSIS packages.
As we have four environments (dev, QA, staging, and production), is there an efficient way to change the destination server for each SSIS package as it moves through the different environments? Ideally, there would be a script that takes the required server as a parameter.
You could use a configuration file to store the connection strings for the servers. Then, as you move from environment to environment, you simply change the config file. To create a config file, on the control surface of your package:
1) Right-click and choose Package Configurations from the context menu.
2) Check the box Enable package configurations if it is not already selected.
3) Click the Add... button.
4) Click Next on the dialog.
5) Enter a Configuration file name and click Next.
6) In the Objects view, under Connection Managers, expand your connection, then expand Properties and check the box next to ConnectionString.
7) Click Next.
8) Click Finish.
You now have an XML file with the name you gave in step 5 above. You can edit this file with a text editor and change the connection string to point to whichever server you need before each run.
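The generated file looks roughly like this (trimmed; the connection manager name and connection string are examples):

<?xml version="1.0"?>
<DTSConfiguration>
  <Configuration ConfiguredType="Property" Path="\Package.Connections[DestinationServer].Properties[ConnectionString]" ValueType="String">
    <ConfiguredValue>Data Source=QASERVER;Initial Catalog=MyDb;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>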
Once created you can share the config file between multiple packages as long as the objects referenced are named the same between the packages.
This is a rudimentary tutorial on configurations; there are many ways of saving configurations, of which this is only one. For more information on configurations, consult your favorite SSIS book.
We use a config table that stores the configurations for the server, but config files work well too. We like the table because we report on SSIS package metadata, and it's easier to grab this data (along with a lot of other data we store as well) when it is stored in a table.
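For reference, the table SSIS generates for SQL Server configurations has this shape (you can also create it yourself up front):

CREATE TABLE [dbo].[SSIS Configurations] (
    ConfigurationFilter NVARCHAR(255) NOT NULL,
    ConfiguredValue     NVARCHAR(255) NULL,
    PackagePath         NVARCHAR(255) NOT NULL,
    ConfiguredValueType NVARCHAR(20)  NOT NULL
);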
William Todd Salzman's answer covers most points. I have a couple more to add:
Make sure the package ProtectionLevel property is DontSaveSensitive.
If you are working with different shipping environments, then a SQL Server table as the source for the package configurations may not be for you, as it would require one central database containing all the connection strings for all the servers.
Having worked with package configurations retrieved from the registry, you will need to be aware that these settings are retrieved from the HKEY_CURRENT_USER hive. This has implications for when the package is run through a SQL Agent Job.
I have an SSIS package that sets some variable data from a SQL Server package configuration table (selecting the "Specify configuration settings directly" option).
This works well when I'm using the database connection that I specified when developing the package. However, when I run it on a 64-bit server in the testing environment (either as an Agent job or by running the package directly) and specify the new connection string in the connection managers, the package still reads the settings from the DB server that I specified during development.
All the other Connections take up the correct connection strings, it only seems to be the Package Configuration that reads from the wrong place.
Any ideas or am I doing something really wrong?
The only way I was able to do this was to use Windows Environment Variables. You can specify things like connection strings and user preferences in environment variables, and then pick up those environment variables from your SSIS Task.
I prefer to use Server Aliases in the SQL Client Configuration. That way, when you decide to point the package to another SQL Server it is as simple as editing the alias to point to the new server, no editing necessary in the SSIS package. When moving the package to a live server, you need to add the aliases, and it works.
This also helps when you have a real painful naming convention for servers, the alias can be a more descriptive name than the actual machine name.
I didn't completely understand your question, but I store my connection settings in configuration files, usually one for each environment, like dev, production, etc. The packages read the connection settings from the config files when they are run.
When you're creating a job to call the SSIS package, and you're setting up the step, there is a tabbed area. The default tab is where you set the package name, and the next tab over is where you can set the configuration file. Have a config file for each package, and change for the server (dev, test, prod). The config file can be put directly on the dev, test, and prod servers, and then point to them when setting up that job.
If you are using a SQL Server package configuration, then all the properties of the packages will come from the SQL Server table. Please check that.
SSIS security the way it stands is terrible. No one will be able to support things when I am out of the office. The job never reads from the configuration file... I give up. It only works when I edit the string in the Data Sources tab. However, the password gets lost if you happen to go into the job a second time. Terrible design, absolutely horrible. You would think that when you specify an XML file in the job step it would read the connection string defined there, but it does not. Does this really work for anyone else?
Go to the package properties and set the deployment property to True. This should work for what you have done.
I had the identical question and got the same answer, i.e. you cannot edit the connection string used for package configurations hosted in SQL Server, unless you specify that the SQL Server connection string should come from an environment variable.
This unfortunately does not work in my dev setup, where two environments are hosted on the same machine. I ended up following Scott Coleman's approach as detailed on SQL Server Central [Free sign-up and a good site]. The trick is that you create a view to store your configuration settings on one central server, and then use the machine that connects to it to determine which environment is active.
I used that approach, but also used the user connecting to the environment to make the determination, because my test and dev setups run on the same SSIS instance, just under different user names. Scott suggests in the comments that the application name should be set, but this cannot be changed in the package execution job step, so it was not an option.
One other caveat I found was that I had to add INSTEAD OF triggers to my view to handle the inserts, updates, and deletes for configuration variables.
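A minimal sketch of such a view (the underlying table, its Environment column, and the names used to pick an environment are all hypothetical):

CREATE VIEW dbo.[SSIS Configurations]
AS
SELECT ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType
FROM   dbo.AllEnvironmentConfigurations
WHERE  Environment = CASE
           WHEN SUSER_SNAME() = 'DOMAIN\testuser' THEN 'TEST'
           WHEN HOST_NAME()   = 'PRODSERVER'      THEN 'PROD'
           ELSE 'DEV'
       END;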
We want to keep our package configs in a database table; we know it gets backed up with our other data, and we know where to find it. Just a preference.
I have found that to get this to work, I can use an environment variable configuration to set the connection string of the connection manager from which I read my package config. (Although I had to restart the SQL Server Agent before it could find the new environment variable; not ideal when I deploy this to production.)
It looks like when you run an SSIS package as a step in a scheduled job, it works in this order:
Load each of the package configs in the order they appear in the Package Configurations Organizer
Set the Connection Strings from the Data sources tab in the Job Step properties of the Scheduled Job
Start running package.
I would have expected the first 2 to be the other way around so that I can set the data source for my package config from the scheduled job. That is where I would expect other people to look for it when maintaining the package.