I'm trying to set up my BIRT reports, and the iServer they sit on, so that the database each Data Source connects to is determined by the environment. Our current setup is that there is just one iServer instance and many environments running a Tomcat webapp that hits it (this may be the problem...).
Essentially, the ideal is that the report connects differently in each of these places:
Local development: a local Tomcat instance of the application talking to the iPortal/iServer. Local database by default, but it should be easy to switch to other databases for debugging etc.
QA deploy: QA database.
Production deploy: production database.
I've seen two options for how to fix this:
The first option is to bind the Data Source to a configuration file in resources somewhere. The problem here is that with only one iServer, its resources are local to the server it runs on, not to wherever the webapp runs. So, if I understand it correctly, this does not provide the flexibility I'm looking for.
The second option is to pass all the connection info in as report parameters and have the application determine the correct parameters to send, pulling them from a local configuration file. This option would work, but I'm wary of the security (or lack thereof) in passing around connection info/credentials.
Does anyone have a better option? Or do people just run local iServer instances for development? I can see that running an iServer for each environment might simplify this issue and would allow reports released to production to be updated and tested in a QA environment without disrupting production, so maybe that is the solution.
One possible approach would be to set each of the connection properties conditionally in the Property Binding section of the Edit Data Source dialog, based on the value of a hidden parameter indicating which environment is to be accessed.
An example of this approach can be found here.
You mention that you are looking for an option for development, including the possibility of a local iServer. I think this would be overkill. Do your dev and initial testing in BIRT; you do not need an iServer to run the report. If you need resources on the iServer to run and test the report, you can reference those through the Server Explorer in BIRT Pro. Once you are ready to deploy, I would follow Mark's strategy above, using property bindings on the data source itself. That is as close to a best practice as exists in BIRT for this migration requirement.
I am trying to understand the best way to migrate code when working with Snowflake. There are two scenarios: one where we have only one Snowflake account that houses all environments (dev, test, prod), and another with two accounts (non-prod, prod). With the second option, I was planning to create a separate script that sets the correct database name based on the environment (dev_ent_dw, prod_ent_dw), and then reference these variables when creating objects. Example -
set env = 'dev';
set db = $env || '_ent_dw.';
set tbl = $db || 'dw.customer_dim';  -- the 'dw' schema and table name here are only illustrative
create table if not exists identifier($tbl) (id integer);
Right now we're running everything manually, so the DevOps team will run these up front before running the DDL scripts. We may do something similar for the single-account scenario, but I am wondering if folks can share best practices for dealing with this, as I am sure it is a common topic at large enterprises.
We have different accounts for each environment (dev, qa, prod). We use Azure DevOps for change management within our team, including the Git repos and Azure Pipelines for deploying scripts via schemachange.
We do NOT append an environment to objects, as that is handled by the account.
Developers write migration scripts and check them into source control. We then create Version folders, move/rename the migration scripts for deployment, and run a pipeline to execute the changes. The only thing we need to change is the URL to deploy against, and that is handled within the pipeline itself. The nice thing here is that we do not need to tweak anything when promoting changes across branches in source control.
We have not been doing much with creating clones for development work, as we usually only have one developer working on changes to a set of objects at a time. We are exploring ways to improve our process, but what we have works fairly well for our current needs.
At the end of last week our central IT department introduced SCCM and applied it to a bunch of clients in our division. My colleagues and I work as so-called "IT partners", providing first-level support for a few hundred colleagues. We're now facing some problems with our new SCCM system (installed packages do not work, etc.), and we'd like to "reset" applications so the SCCM agent will reinstall them. I've read something about detection methods, but unfortunately I do not really know how they work, nor do I know where those methods are stored. I want to analyse those methods so that I know which file to modify or delete to make the agent reinstall the application.
By the way, how much time does SCCM take from "assigning" a package to applying it on the client?
Assuming you only have the client and no access to the SCCM console, the detection methods can be found using WMI. They are stored in the root\ccm\CIModels namespace, in the class Local_Detect_Synclet.
The format is XML in one column. It is designed so that all kinds of detection methods can be represented in the same style, so it's not very readable, but you should be able to get a basic understanding of the detection method used.
Keep in mind this is only true if the software was deployed in the "new" application format (introduced in SCCM 2012) and not in the "old" package/program format.
If you want more detail: I once tried to automate the process of triggering a reinstall for any given application, but ultimately failed due to problems with the cache/distribution point. I posted all my findings here.
From an application point of view: when you deploy an app, a detection method is set up in SCCM to determine whether or not the application installed successfully. This detection method can be configured in a variety of ways. For example, it could check whether the MSI product code is installed, check the .exe and compare it to a specific version, or check a registry key for existence. In order to change or modify these detection methods you need to be an SCCM admin and be able to log in to the console. From there you would select the specific application or package you want to analyze and click through the properties of the deployment.
I'm working on an OpenCart project. (Note: this is my first time dealing with it.)
I want to somehow implement my usual workflow of:
working on localhost, experimenting, etc,
deploying the changes to the production server (sometimes to a staging server before that),
adding the database changes.
Now, how should I achieve this?
What I've already done with Git is create an automated deployment flow, which consists of the following:
building a deployment version (checking out master/HEAD's upload/ directory and removing the upload/install directory),
copying the upload/ directory's contents to the target server.
This works fine, but it won't solve the database migration issue.
I think it's not even as simple as updating certain tables in the target server's database from my local database, since, for example, the "settings" table contains data that's specific to the environment.
So I can't just overwrite the settings table with my local version.
It seems to me that the easiest (and ugliest) solution would be to develop on the prod server in parallel with the localhost changes. So, for example, if I install a module that causes changes in the database, I would need to replicate every step I took in the local environment when installing and configuring that module. The same goes for every admin setup step I take (meta changes, etc.).
This sounds awfully painful to me, so I hope there's a better solution out there other than doing every database-related change twice...
Thanks in advance!
Twist to the standard “SQL database change workflow best practices”
Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source-controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install -patch) from a Release folder that the build packages up. Patches have metadata that helps determine when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in this three-year-old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations.
New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document describing which application configurations need to be set up. Note that some of these may also come in phases over time. We need to get these new configurations into the system for the developer and for client UAT.
A developer works on a feature request or bug fix. A new configuration change comes out of that work. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
The build goes to UAT. The client performs acceptance testing but finds they really want to change another, unassociated configuration and have it promoted with the changes. In other words, they find they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD, they may tweak application settings. These configurations need to make it back into the system for future development and testing.
The general issue is making sure we account for all the configurations and do not accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We have had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure they get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that for the items that made it into the patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but it had problems: not all configurations made it into the XML document (which just needs to be fixed by a developer); some configurations did not have a UI in the application, so for those you still had to go to the database manually; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues); the XML was not very human-readable; and finally the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option b, but over time, for a new client, we just started tracking configs manually and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human error.
So we have been looking at solutions. Eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach, focusing on process and documentation, and using Red Gate data compare in addition to what we had been doing with comparisons of config.xml. With Red Gate we have to create views, though, and there is no way to create update scripts from that approach except to update the scripts manually. It does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe -patch -config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we need a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to enhance clarity and reduce missed configurations during promotion. Unfortunately it doesn't improve speed of delivery.
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager; it's a deployment tool that deploys applications, databases and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and set these per environment/server.
Substituting the web.config file per environment.
Custom PowerShell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.
We are working on a project where the database requirement is not clear, so we are building a database-agnostic application.
See my previous question here: Database Agnostic Application
Now I want to test my Spring application DAO against multiple databases. I've written a number of test cases using TestNG and DBUnit.
When I run these tests in a CI environment, I want them to test the application against all the configured databases. I've installed the databases on the 'test server'.
e.g. I want something like this:
for ( each database configured ) {
    run each dao test
}
I'm not sure what the best way of doing this is. Any help is welcome.
Thanks,
Adi
If you want to be database independent, you have to test against every single database system you want to support. There are very subtle differences that leak through Hibernate.
What I did in the past was to make the tests retrieve their database configuration through a system property, typically by using hibernate_xxx.property instead of the default hibernate.property. Then set up CI jobs which set the property to different values, and provide one hibernate_xxx.property file for every database to test against. I did this using JUnit Rules, to keep the logic in one place. I don't know the appropriate tool for TestNG.
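For TestNG, one possible equivalent (a minimal sketch, not from the original answer; the test.databases property and the default database names are purely illustrative) is a DataProvider that feeds each configured database into the test:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DaoDatabaseTest {

    // One row per database configuration to run against; the default list is illustrative.
    @DataProvider(name = "databases")
    public Object[][] databases() {
        String[] names = System.getProperty("test.databases", "h2,oracle,db2").split(",");
        Object[][] rows = new Object[names.length][1];
        for (int i = 0; i < names.length; i++) {
            rows[i][0] = names[i].trim();
        }
        return rows;
    }

    @Test(dataProvider = "databases")
    public void daoRoundTrip(String dbName) {
        // Load hibernate_<dbName>.property as described above, build the DAO,
        // and run the same assertions against each configured database.
    }
}

Running the suite with -Dtest.databases=oracle would then restrict it to a single database.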
I'm not too fond of the loop construct you are hinting at, because it might make it difficult to run a test suite against a single specific database.
I'm also not too fond of DBUnit, because it seems to make maintaining test data rather painful. In most cases I prefer a handcrafted DSL. Have a look at some articles I wrote about it:
http://blog.schauderhaft.de/2011/03/13/testing-databases-with-junit-and-hibernate-part-1-one-to-rule-them/
http://blog.schauderhaft.de/2011/03/20/testing-databases-with-junit-and-hibernate-part-2-the-mother-of-all-things/
http://blog.schauderhaft.de/2011/03/27/testing-databases-with-junit-and-hibernate-part-3-cleaning-up-and-further-ideas/
If you're building a database-agnostic application and not using any of the inherent features of a specific database vendor, then the scope of your test cases should be to test the setup, manipulation, and retrieval of data through the DAO objects, and less to test the actual database backend. Hibernate 3.5 has dialects available for both Oracle 11g and DB2, so if you were writing test cases that tested the integration of the database-agnostic application with a specific database vendor, what you would really be doing is testing that the Hibernate dialects do what they say they do (which I'm sure is covered by test cases in the Hibernate project).
In other words, in your case I would think that the testing should focus more on the DAO retrieving the data that you think it will retrieve after you've set that data up, and in-memory databases are fine for that.
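To illustrate that set-up-then-retrieve pattern with an in-memory database (a sketch only; H2 and the table/column names are assumptions, not something from the original answer):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryRoundTrip {
    public static void main(String[] args) throws Exception {
        // In-memory H2 database; it exists only for the lifetime of this connection.
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:daotest", "sa", "")) {
            try (Statement s = c.createStatement()) {
                s.execute("create table customer (id int primary key, name varchar(100))");
                s.execute("insert into customer values (1, 'test')");
                try (ResultSet rs = s.executeQuery("select name from customer where id = 1")) {
                    rs.next();
                    System.out.println(rs.getString(1)); // prints: test
                }
            }
        }
    }
}

In a real test the insert and the query would of course go through the DAO under test rather than raw JDBC.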
Now all that said, both DB2 and Oracle have very good documentation related to setup. Indeed, both of them have "wizards" to do that. If you still think that it's prudent to test adding data to the database and retrieving it from the physical, non-in-memory database, then I would recommend setting up a "test database" environment and pointing your datasource to that during your continuous integration tests. If you're using Hudson or Jenkins for CI, you can set it up to run a script after the build completes that will truncate the database tables so that the next round of tests work from a blank slate.
EDIT:
I just saw the updates that you posted to your question, so let me address them. Since you already have the databases set up and configured, what you really want to do is dynamically select which database to use. One way to do this would be to set up your datasource using system properties that can be inherited from a properties file, and run your tests in a "DB2-test" environment and an "Oracle-test" environment. Using this method, you'll have to set up the datasource programmatically and have it read system environment variables to determine which database it connects to. This would essentially require you to change your CI script to run the DB2-test environment first, then the Oracle-test environment after that; your test suites will run twice.
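A minimal sketch of that programmatic setup (it uses JVM system properties rather than environment variables, and the test.db.* property names are illustrative, not something the post defines):

import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class CiDataSourceFactory {

    // Builds the test DataSource from properties supplied by the CI job
    // (e.g. the DB2-test run or the Oracle-test run).
    public static DriverManagerDataSource fromSystemProperties() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName(System.getProperty("test.db.driver"));
        ds.setUrl(System.getProperty("test.db.url"));
        ds.setUsername(System.getProperty("test.db.user"));
        ds.setPassword(System.getProperty("test.db.password"));
        return ds;
    }
}

Each CI job would then pass different -Dtest.db.* values, and the test code itself stays the same.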
Hope this helps!
JUnit 4.9 has a new feature: TestRule.
You should be able to write a rule that repeats a test for different databases (a sketch follows at the end of this answer).
There is this Stack Overflow question: How to Re-run failed JUnit tests immediately?
It is a slightly different question, but the solution should be the same technique.
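A minimal sketch of such a rule (again only a sketch; the test.databases and test.current.db property names are illustrative, and the test or its datasource factory would read test.current.db to decide which configuration to load):

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class MultiDatabaseRule implements TestRule {

    // Databases to repeat each test against; the default list is illustrative.
    private final String[] databases =
            System.getProperty("test.databases", "h2,oracle").split(",");

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                for (String db : databases) {
                    // The test (or its datasource factory) reads this to pick the configuration.
                    System.setProperty("test.current.db", db.trim());
                    base.evaluate();
                }
            }
        };
    }
}

Annotating a test class with @Rule public MultiDatabaseRule rule = new MultiDatabaseRule(); then runs each test once per configured database.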