How to input source and target database parameters in IBM Data Replication 11.4.0.4? - cdc

Though IBM states it supports Oracle, there is no clear-cut documentation on how to provide input parameters like the TNS names inside either the IIDR configuration tool or the Management Console.
[Screenshot: IIDR configuration tool not accepting Oracle parameters]
[Screenshot: Management Console not accepting Oracle either]

The linked screenshot shows the configuration parameters of the IBM CDC agent for Db2, not for Oracle Database.
Current CDC versions (11.4+) ship a unified installer for all agent types on a given operating system; however, each source and target type needs to be installed separately, and the installer prompts for the type of agent to install.
IBM CDC documentation lists the necessary configuration steps for Oracle: https://www.ibm.com/docs/en/idr/11.4.0?topic=databases-configuring-cdc-replication-engine-oracle
Please note the "CDC for XStream" agent configuration as well, which is needed for newer Oracle Database versions using features like long object names and encryption: https://www.ibm.com/docs/en/idr/11.4.0?topic=xstream-configuring-cdc-replication-engine-oracle
There are also videos with step-by-step configuration, for example: https://www.youtube.com/watch?v=K6veQyQMz94
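To give a flavor of what those documents cover (a sketch only; verify the exact steps against the linked docs for your version): the Oracle CDC engine reads the redo logs, so the source database is expected to run in ARCHIVELOG mode with supplemental logging enabled. In SQL*Plus, as SYSDBA:

-- Enable archive logging (requires a restart of the database).
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Enable minimal supplemental logging so the redo stream carries
-- enough information for the log reader.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Verify both settings.
SELECT log_mode, supplemental_log_data_min FROM v$database;

The TNS names and connection credentials themselves are supplied when you create the CDC instance with the configuration tool, as the linked docs describe.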

Related

SSIS operational configuration of server instance, database, and schema?

In order to enable operational management of data integration processes developed in SSIS, I am seeking to be able to externally configure:
server (data source)
database (catalog)
schema
From what I have seen, all of these are typically hardcoded into SSIS packages through the Connection Manager and in SQL statements. This hardcoding prevents the DBA from allocating resources differently and, if there is ever a change, requires every package to be modified if Package Deployment is being used.
It appears that Project Deployment would reduce this somewhat, but not eliminate it.
Target environment is SQL Server 2016 and VS 2017.
How can the server, database, and schema be externalized from the package?
SSIS has a robust facility for configuring packages per environment. You can configure any property in the package externally. This can be done in SQL Agent and even from the command line at runtime. Configurations can be stored in config files, system environment variables, a SQL table, etc. However, the modern way of configuring packages is through the project deployment model.
Here is the gist of how it works:
Add a parameter at the package or project level
Reference that parameter in an expression which configures the property you want to set, e.g. the server name or initial catalog
Deploy the project to an instance of SSIS
In SSIS, add an environment and configure its variables. These can even be passwords, which are stored securely
Add a reference to that environment from the project, and finally reference which environment you want to use at runtime
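If you'd rather script those last steps than click through SSMS, the SSISDB catalog exposes stored procedures for all of it. A minimal sketch, with hypothetical folder, project, parameter, and environment names (not from the original answer):

USE SSISDB;
-- Create an environment and a variable inside it.
EXEC catalog.create_environment @folder_name = N'ETL', @environment_name = N'Prod';
EXEC catalog.create_environment_variable
    @folder_name = N'ETL', @environment_name = N'Prod',
    @variable_name = N'ServerName', @data_type = N'String',
    @sensitive = 0, @value = N'PRODSQL01', @description = N'Target server';
-- Reference the environment from the project.
DECLARE @ref bigint;
EXEC catalog.create_environment_reference
    @folder_name = N'ETL', @project_name = N'MyProject',
    @environment_name = N'Prod', @reference_type = 'R',
    @reference_id = @ref OUTPUT;
-- Bind a project parameter to the environment variable ('R' = referenced).
EXEC catalog.set_object_parameter_value
    @object_type = 20, @folder_name = N'ETL', @project_name = N'MyProject',
    @parameter_name = N'ServerName', @parameter_value = N'ServerName',
    @value_type = 'R';

At run time you then pick (or script) which environment reference the execution uses.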
The first link below shows a dialog created for configuring connection managers with parameters. Note that the package will store the default values, but when you create an environment as noted above, you can easily override them at runtime.
As for configuring a schema, this is possible as well using parameters, but you would need expressions for your SQL queries and for setting the destination. I would avoid making schemas variable across environments; it adds a lot of effort and complexity for very little flexibility in return. Please read up on these links and good luck!
How to parameterize connection managers
All about parameters in SSIS

Quickly changing SSIS packages' data source parameters for easy migration

I need to migrate a SQL database from Sybase to MS SQL Server. Before doing the actual migration on the production server, I first created an SSIS package with SQL Server Management Studio's Import/Export Wizard for testing with other databases. The test migration was successful, and I would now like to deploy my SSIS package to the real servers.
However, it seems I cannot simply run the package in Management Studio choosing different data sources for it - it only runs against the databases for which it was created. It can be edited in something called SQL Server Business Intelligence Development Studio (BIDS for short; I am using the SQL Server 2008 version), but going through every data flow task and changing the destination manually for each of the ~150 tables I am moving is inefficient and also introduces the possibility of error.
Is there a way to quickly change what data source is to be used for ALL destinations in ALL the data flow tasks of an SSIS package? If not, what simple method is there for testing the migration with test databases first and simply changing the data sources when deploying?
I am using ODBC data sources, but for some of them the package shows OLE DB sources in BIDS instead.
I hope I was clear enough. If you have additional questions, please ask! Thank you!
I would use a variable for the ConnectionString property of the connection manager. A package-level configuration can be very useful for accomplishing this task. There are several ways to do this; I prefer to use a table in SQL Server that holds all the configurations for all packages. This can be especially effective if you have multiple packages and need to dynamically change a set of connection managers across them.
The basic steps are:
Right-click on your SSIS design surface and select "Package Configurations..."
Create a package level configuration of Configuration Type "SQL Server"
Store your connection in a Configuration table in SQL Server
Alter your Connection Manager to use a variable for the ConnectionString Property
Populate that variable from the Configuration table via your package level configuration
When it comes time to switch from Test to Production, simply update the connection string in your configuration table
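For what it's worth, the table the SQL Server configuration type generates is by default named [SSIS Configurations], so that final switch is a single UPDATE. A rough sketch (the filter name, package path, and connection string are made up for illustration):

-- Default table created by the package configuration wizard:
-- ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType
UPDATE dbo.[SSIS Configurations]
SET ConfiguredValue = N'Data Source=PRODSQL01;Initial Catalog=TargetDB;Provider=SQLNCLI10.1;Integrated Security=SSPI;'
WHERE ConfigurationFilter = N'MigrationPackage'
  AND PackagePath = N'\Package.Variables[User::ConnString].Properties[Value]';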
This is part of a larger package management framework that I implemented using this book:
Microsoft SQL Server 2008 Integration Services: Problem, Design, Solution
I highly recommend it. It should take less than a day to set up, and the book has step-by-step instructions.
This question and its associated answers are also helpful.

Redshift with SSIS/SSDT

Has anyone been successful using Amazon Redshift as a source or destination ODBC component in SQL Server Data Tools 2012?
I've installed the PostgreSQL drivers provided by Amazon and have successfully tested a connection in the Windows ODBC driver administrator but keep running into arcane error messages when I choose my saved DSN and try to pull a table listing.
Redshift is based on quite an old version of Postgres (8.0). Postgres has changed quite a bit since then and the Postgres tools have changed with it. When downloading any tools to use with Redshift you will probably need to use previous versions from several years ago.
The table listing problem is particularly annoying but I have yet to find a version of psql that can properly list Redshift tables. As an alternative you can use the INFORMATION_SCHEMA tables to find this kind of info, and in my opinion this is what SSIS/SSDT should be doing by default.
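For example, this plain INFORMATION_SCHEMA query lists user tables on Redshift while filtering out the system schemas:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name;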
I would not expect SSIS to be able to load data into Redshift reliably, i.e. create a Redshift destination. This is because Redshift does not really support INSERT INTO as a way to load data. If you use INSERT INTO you will only be able to load ~10 rows per second. Redshift can only load data quickly from S3 or DynamoDB using the COPY command.
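For reference, the COPY command takes roughly this shape (the bucket, table, and credentials below are placeholders, not anything from the question):

-- Bulk-load delimited, gzipped files staged in S3 into a table.
COPY myschema.mytable
FROM 's3://my-bucket/load/part'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
DELIMITER '|'
GZIP;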
It's a similar story for all the other ETL tools I've tried, notably the open source tools Pentaho PDI (aka Kettle) and Talend Open Studio. This is particularly annoying in Talend's case, as they have Redshift components but they actually try to use INSERT INTO for loading. Even Amazon's own ETL tool, Data Pipeline, does not yet have support for Redshift as a 'node'.
I have been successful. Try installing both the 32-bit and 64-bit versions of the PostgreSQL ODBC drivers.
Also, in your Project Properties under 'Configuration Properties' > 'Debugging', set 'Run64BitRuntime' to False.
You can also try specifying the connection string in Connection Manager. For example:
Driver={PostgreSQL ANSI};
server=redshiftdb.d113klxjd4ac.us-west-2.redshift.amazonaws.com;
uid=;
database=;
port=5432

Azure Database Installation Error "Invalid Object name 'Categories' "

I am completely ignorant in relation to databases and servers etc. Please bear with me.
I am trying to install a program called RealProspect 2009 which allows both local and remote SQL database installation. Both types are done using the program's installation .exe.
I have an Azure account on which I have set up a server and a database. During the program installation I am asked to provide the SQL server address, SQL server name, SQL username, and SQL password. Using the information provided in the Azure online tools, I input all of this information into the fields, and the program commences installing the database on the remote location. If I use incorrect information in these fields, the installation returns an error and tells me it cannot log in, or the IP is not allowed, etc., so I know it's actually attempting to connect and verifying the connection credentials.
When I use the correct server and login information the program proceeds. It spends several minutes "Creating the Tables". When it finishes doing that it attempts to begin "Installing Default Data (Categories)". At this point the program stops and I get the error in the subject line of this post "Invalid Object name 'Categories' "
I don't know enough to tell you what I don't know about this process.
I just signed up for Azure specifically because hosting the database with Azure is like $5-10 per month and I want myself and several other participants to be able to use the software with a common database. I created the server and database using the GUI "tools/how to" from within the online Azure portal, and I have never written a script or accessed the server/database using anything other than the online GUI.
Thank you in advance for any help you may be able to provide. I hope I'm not too much of a speed bump in your day.
P.S. - For what it's worth you can download a free trial of the software from realinvestorsoftware.com and see if you could install it on a remote server. Maybe you can better see what I see and tell me how to do it on my own?
SQL Azure is VERY similar to SQL Server but there are a few features that SQL Azure doesn't support. That said, I'd be surprised if the app's installer is using any of the features that are unsupported by SQL Azure. My guess is that there's a bug in their installation scripts that might fail on more modern versions of SQL Server (note, their app installs on SQL Express 2005 which is no longer in mainstream support).
Just a couple of other thoughts for you: You get keys to install the app on two machines but:
"If you would like to install on more than two computers, then after you order your copy of RealProspect you can login to your customer account on this website and order additional activation keys for only $97 each."
Because you're going to be paying several hundred dollars anyway, and because (as you yourself admit) you're not a database expert, it may cost less in money, stress, and hard work to use their $27-per-month database hosting service. That way you can concentrate on building your business while they take care of the technology.
[Update: 3/27/2013 @ 23:05]
Another option Chris presented was to install the app and database locally and then migrate the database to Azure.
While this is potentially feasible, it requires some finesse to execute.
Microsoft provides a DB migration guide presenting several (pretty manual) options.
You might also want to read this thread which discusses how to migrate your DB via a DACPAC.
Another option is to download and use the SQL Azure Migration Wizard which should do most of the heavy-lifting for you and make your DB migration simpler.
However, note that it is possible that the DB the app uses may use features of SQL Server that are not supported on SQL Azure. Hopefully this isn't the case, but be aware that this may be an issue.
Good luck :)
Chris,
I think SQL Database Migration Wizard v3.9.10 & v4.0.13 will solve your problem. I have used this tool several times to migrate databases from a local machine to SQL Azure. The beauty of this tool is that it also highlights the errors or SQL that couldn't be migrated to Azure, so you can easily find alternate syntax for such queries.

SSIS Logging in SQL 2012

In the 2008 R2 version I was using SSIS logging to a sysssislog table in a defined database. 2012 now brings the concept of Integration Services catalogs that have their own SSISDB database. Is it still necessary to log to the sysssislog tables, or does the information that ended up there now live somewhere in SSISDB (which I would expect, since all the reporting for SSIS execution is based on this db as well)?
The logging you are familiar with remains available to you with the 2012 release of SQL Server. That said, database logging no longer has to be explicitly defined in your package if you are using the Project Deployment model (SSISDB).
Out of the box, you'll get the Basic logging level when you run a package. The other options are None, Performance, and Verbose. Read more about how to set these and other execution parameters via MVP Phil Brammer. Matt Masson of the actual SSIS team points out what events those levels correspond to in his post on What Events are Included in the SSIS Catalog Log Levels.
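If you want to pick the level for a specific run from T-SQL rather than the SSMS execution dialog, the catalog exposes it as the LOGGING_LEVEL system parameter (0 = none, 1 = basic, 2 = performance, 3 = verbose); a sketch with hypothetical folder/project names:

DECLARE @execution_id bigint;
EXEC catalog.create_execution
    @folder_name = N'ETL', @project_name = N'MyProject',
    @package_name = N'Load.dtsx', @execution_id = @execution_id OUTPUT;
-- object_type 50 = system parameter; 3 = verbose.
EXEC catalog.set_execution_parameter_value @execution_id,
    @object_type = 50, @parameter_name = N'LOGGING_LEVEL', @parameter_value = 3;
EXEC catalog.start_execution @execution_id;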
Finally, SSIS Reporting Pack is an open source project from MVP Jamie Thomson that provides different insight into the basic data being captured in the new integration services catalog.
My thoughts: necessary, no. But if you already have a framework built out culling data from that log (we use it for an alerting system), you can keep using it; it's still supported. If you run Integration Services packages from multiple servers, there is no built-in functionality to combine the logging from all those disparate SSISDB catalogs to provide insight into your entire universe. You can get that if all the packages log to a centralized server using the classic technique.
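And if you do want to cull data from the new catalog yourself (say, to feed an existing alerting framework), the SSISDB views expose it; a minimal sketch:

USE SSISDB;
-- Messages for failed executions (status 4 = failed).
SELECT e.execution_id, e.folder_name, e.project_name, e.package_name,
       m.message_time, m.message
FROM catalog.executions AS e
JOIN catalog.event_messages AS m ON m.operation_id = e.execution_id
WHERE e.status = 4
ORDER BY m.message_time DESC;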
