We previously used the Microsoft OLE DB Jet provider in our SSIS package. After a recent update from Microsoft, the package started failing, so we have decided to export the data to Excel using Open XML. What is the best approach to implementation, given that we are still using the 1997-2003 (xls) format?
Note: We have already tried the Microsoft Access Database Engine 2010 Redistributable.
From my point of view, you have the following options (all of them, unfortunately, involve the Script Task):
Call a REST API and create the document there (using the Open XML SDK). It's easy to develop, support, and deploy.
Use the Open XML SDK directly in the Script Task.
I would recommend the first approach, but it all depends on your system.
UPDATE:
Following the first option, you have to develop a small Web API service. Here is the link with an example in C#.
Per the second option, in order to use external DLLs such as Open XML, you have to register them in the GAC (if the installer doesn't). Here is the link with an example of using external libraries.
If you are going to follow this option, I would recommend developing a DLL that works with Open XML directly and exposes a simple API for calling it from the SSIS Script Task. You register your DLL in the GAC and reference it in the Script Task. This will help you avoid a number of debugging issues.
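For illustration, here is a minimal sketch of creating a workbook with the Open XML SDK (the DocumentFormat.OpenXml package); the file path, class name, and sheet name are placeholders. One caveat: the Open XML SDK produces the newer .xlsx format, not the binary 1997-2003 .xls format, so the consumers of the file need to be able to read .xlsx.

using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

public static class ExcelExporter
{
    // Creates a workbook with a single sheet and one inline-string cell.
    public static void Export(string path)
    {
        using (var doc = SpreadsheetDocument.Create(path, SpreadsheetDocumentType.Workbook))
        {
            var workbookPart = doc.AddWorkbookPart();
            workbookPart.Workbook = new Workbook();

            var worksheetPart = workbookPart.AddNewPart<WorksheetPart>();
            var sheetData = new SheetData();
            worksheetPart.Worksheet = new Worksheet(sheetData);

            var sheets = workbookPart.Workbook.AppendChild(new Sheets());
            sheets.Append(new Sheet
            {
                Id = workbookPart.GetIdOfPart(worksheetPart),
                SheetId = 1,
                Name = "Export"
            });

            var row = new Row();
            row.Append(new Cell
            {
                DataType = CellValues.InlineString,
                InlineString = new InlineString(new Text("Hello from SSIS"))
            });
            sheetData.Append(row);

            workbookPart.Workbook.Save();
        }
    }
}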
Perhaps I am just too new to SSIS or have not really understood the basic concept. But I am a programmer who likes to reuse as much as possible.
We have several SSIS projects that have many things in common. For example, we have programmed a flow to handle errors in a specific way. Today we copy and paste this flow into each new project. It would be more convenient to refer to an external project/package so we can enhance the error handler centrally instead of copying it into each and every project. We are thinking of something like the good old DLL concept.
The only ways we have found so far are to exchange data via DB tables or to use real external libraries. We would prefer to use as much built-in functionality as possible.
I have not found any literature or tutorials on modularizing SSIS projects. It would be great to see the best practice here.
There is no way to reuse C#/VB code written in the Script Tasks of an SSIS package, even inside the same package.
You can put the code into a DLL (assembly) and install that into the GAC.
Then you can add it as a reference inside a Script Task and use your classes.
This is the only way to reuse code in SSIS.
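As a sketch, assuming a hypothetical shared assembly named Company.Ssis.Common.dll: the assembly must be strong-named (signed) before it can go into the GAC (for example with gacutil /i), and is then referenced from each Script Task.

using System;

namespace Company.Ssis.Common
{
    // Centralized error formatting, reused across packages via the GAC.
    public static class ErrorHandler
    {
        public static string Format(string packageName, Exception ex)
        {
            return string.Format("[{0}] {1:u} - {2}", packageName, DateTime.UtcNow, ex.Message);
        }
    }
}

Inside a Script Task you then add a reference to Company.Ssis.Common.dll and call ErrorHandler.Format(...) wherever the copy-pasted flow used to live.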
On modularizing SSIS packages, you have several possibilities:
Pack defined actions into an SSIS Custom Task or Custom Transformation, which is a specific DLL, and then use (reference) it in your SSIS packages. Once you have changed your custom component and installed the updated version, subsequent package runs will use the updated logic. Complexity: high; you have to implement specific interfaces and develop a UI for use inside the Visual Studio designer.
Code-driven SSIS package generation. You define the SSIS package logic in code and then use that code to generate SSIS packages. This can be done with the C# ManagedDTS classes (see the sketch after this list), with the EzAPI classes, which provide some abstraction over ManagedDTS, or with the BIML scripting language.
Complexity: medium/high; you either generate the SSIS package in C# after modelling it in Visual Studio, or write it in the BIML scripting language.
Package generation is practical if you have a multitude of similar packages. Otherwise it does not pay off; a single package can be created in Visual Studio.
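A minimal sketch of the ManagedDTS route, referencing Microsoft.SqlServer.ManagedDTS.dll; the task added is the stock Execute SQL Task, and the package name and output path are placeholders.

using Microsoft.SqlServer.Dts.Runtime;

public static class PackageGenerator
{
    public static void Generate()
    {
        var package = new Package { Name = "GeneratedPackage" };

        // Add a stock Execute SQL Task by its moniker and name it.
        Executable exec = package.Executables.Add("STOCK:SQLTask");
        var taskHost = (TaskHost)exec;
        taskHost.Name = "Shared error-handling step";

        // Persist the generated package as a .dtsx file.
        var app = new Application();
        app.SaveToXml(@"C:\Packages\GeneratedPackage.dtsx", package, null);
    }
}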
Why is SSIS not modular? My two cents: it was designed for non-programmers who can draw data flows in a designer. This can be done on an ad-hoc basis, quickly, and without highly specialized staff. Besides, SSIS was created in the early 2000s.
Nowadays the approach has changed, and we talk about CI/CD and so on, but the SSIS concept stays the same.
I am working on Adobe CQ. I created 2-3 versions (1.1, 1.2, 1.3) of a particular page on my author instance. Then I packaged my content page and installed it on another instance, but I cannot see the versions of the page on that instance.
Can anyone help me out with this? I want to migrate my content pages along with their versions from one CQ instance to another.
We are in the same situation. You can extract prior version details using the packaging approach, but you will be precluded from reloading them due to the new Oak security model. The next issue is that you would need to extract and transform the data and then reinsert it, because the node IDs can differ, especially if you are extracting partial data sets.
Where we have gotten to, and are proving now, is using the new migration tool to move content from instance to instance, which purportedly has a version extraction capability. I will update the details here when we get our results back.
UPDATE:
We have tested the CRX2OAK migration tool, and it does indeed move versions across. Using the tool, you can specify filters to migrate only a subset of content, which will then bring the version details across as well.
This approach seems to work quite well for both single-tenancy and multi-tenancy setups, much as using a package for content used to.
Unfortunately, it can't be used as a portable backup system, as it is an instance-to-instance solution. It does, however, work well for blue/green deployment strategies.
Versions are stored under the path '/jcr:system/jcr:versionStorage' in AEM.
To transfer pages with their versions, just create a package with filters for the content you want to move as well as for the version storage path, then download the package and install it on the other AEM instance.
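For illustration, a package filter definition (META-INF/vault/filter.xml) along these lines should work; the content path is a placeholder, and note that including all of /jcr:system/jcr:versionStorage will also pull in versions of unrelated content.

<workspaceFilter version="1.0">
    <!-- the pages to move (illustrative path) -->
    <filter root="/content/mysite/mypage"/>
    <!-- the version storage, where page versions live -->
    <filter root="/jcr:system/jcr:versionStorage"/>
</workspaceFilter>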
If anyone comes across this question as I did, here is the summarised answer:
You can use the crx2oak utility, available from the link below, to migrate pages and page versions across instances:
https://repo.adobe.com/nexus/content/groups/public/com/adobe/granite/crx2oak/
This is a powerful utility with multiple uses (especially in upgrades), as documented in the links below:
https://docs.adobe.com/docs/en/aem/6-2/deploy/upgrade/using-crx2oak.html
https://jackrabbit.apache.org/oak/docs/migration.html
The source and destination repositories need to be offline while running this utility, so it is best to plan ahead for this type of migration.
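For illustration, an invocation looks roughly like this; the paths are placeholders, and the exact flags vary by crx2oak version, so check java -jar crx2oak.jar --help first.

# Copy a subset of content, including its versions, between offline repositories.
java -Xmx4g -jar crx2oak.jar \
    /path/to/source/crx-quickstart/repository \
    /path/to/target/crx-quickstart/repository \
    --include-paths=/content/mysite \
    --copy-versions=true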
HTH
I have created a Web Content Management library for use in WebSphere Portal. At the moment I'm using import-wcm-data to import the library; then I need to add some additional properties to 2-3 files on the server under Resource Environment Providers and restart particular services so those changes are detected.
Can anyone explain the benefits of using a PAA over writing a simple bash (or similar) script to automate this process?
I don't understand whether I get any advantages from using a PAA, or whether a PAA is even capable of updating properties files and restarting services.
I have been working intensively with PAA files, and I must say that it is a very stable way of deploying an app that requires multiple deployment steps and components.
It does need a startup process, but it is well worth it in a multi-server environment.
You can do all the tasks you can do in an Ant file, as well as use the wsadmin scripting interface. I only update resource environment settings and the like in WAS and do not touch any properties files, for the reason that all settings are stored in WAS.
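For illustration, setting a Resource Environment Provider custom property through wsadmin (Jython) looks roughly like this; the provider and property names here are placeholders.

# Set a custom property on a Resource Environment Provider instead of
# editing properties files; remember to synchronize nodes afterwards.
provider = AdminConfig.getid('/ResourceEnvironmentProvider:WP ConfigService/')
propSet = AdminConfig.showAttribute(provider, 'propertySet')
AdminConfig.create('J2EEResourceProperty', propSet,
                   [['name', 'my.custom.property'], ['value', 'someValue']])
AdminConfig.save()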
In my experience, a PAA is not a good method if you're merely importing a content library.
I don't understand why you are doing the import manually rather than syndicating, but even if there's a good reason not to syndicate, the PAA process was too involved and required too many precursor actions (deleting libraries, removing the PAA, deploying the PAA, and then activating the portlets) to be a viable option for something as simple as importing a WCM library.
Since activating the portlets I was importing with the PAA was an extra step, I don't believe it can restart applications either.
I'm using the GDR release of VSTS Database Edition to source-control the DB and generate deployment scripts. It works pretty well, but the problem is that it only seems to handle scripting and deploying the schema. It stops short of handling scripting and deployment of the actual data itself (i.e. the lookup and standing data that is also deployed with the DB).
I know it's easy enough to write the deployment scripts by hand, but is this what everyone does? Is there a recommended way of deploying data with the VSTS deployment engine? Is there some tooling that helps with this? I don't mean a full product like SQLCompare, just something that fills the gap in VSTS DB.
Thanks in advance.
Kaneda
The VSTS: DB best practices blog advocates using post-deployment scripts to insert reference data into temporary tables, then updating the target tables based on the delta (i.e. UPDATE x INNER JOIN temp ... WHERE x.something <> temp.something).
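A sketch of that pattern as a post-deployment script, with illustrative table and column names:

-- 1. Load the reference data into a temp table.
CREATE TABLE #RefStatus (StatusId INT PRIMARY KEY, Name VARCHAR(50));
INSERT INTO #RefStatus (StatusId, Name) VALUES (1, 'Open'), (2, 'Closed');

-- 2. Update target rows whose values differ from the reference set.
UPDATE t
SET    t.Name = r.Name
FROM   dbo.Status AS t
INNER JOIN #RefStatus AS r ON r.StatusId = t.StatusId
WHERE  t.Name <> r.Name;

-- 3. Insert reference rows that are missing from the target.
INSERT INTO dbo.Status (StatusId, Name)
SELECT r.StatusId, r.Name
FROM   #RefStatus AS r
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Status AS t WHERE t.StatusId = r.StatusId);

DROP TABLE #RefStatus;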
There are some suggestions floating around that this might become a power tool, and at least one MVP has written a tool to generate those scripts.
(NB: I haven't tried this - I only just found out about it myself.)
Personally, I would still stick with RedGate if I had any choice in the matter.
GDR comes with a data comparison engine, but as far as I've been able to tell so far, a data comparison can't even be stored in a project (let alone be properly supported by one), so it's pretty ad hoc. Unlike a Schema Compare, there is no File \ Save As.
The comparison engine can be automated via DDE, but that's automation within the Visual Studio IDE and not really suitable for any kind of scripted installation process. As much as anything, there's no way I could see to specify which tables to include in the comparison (since all you get to do via DDE is open the wizard for the user to make selections).
Alternatively, all the functionality appears to reside in Microsoft.VisualStudio.TeamSystem.DataPackage.dll, but since the API documentation hasn't been written yet (the help documentation that comes with GDR is full of errors as it is), it's going to be a bit of a hit-and-miss adventure to work out where to start.
As someone who has used RedGate's SqlCompare, SqlDataCompare, and their respective APIs to do this before, much of the GDR functionality seems a bit half-baked to me.
What I will probably do this time round is sync the data with an SSIS package (export to CSV at build time / import from CSV at install time), but I'd far rather be using the SqlDataCompare API (or SqlPackager) right now.
Sorry if this is overly simplistic.
I've decided that I want to use an SQLite database instead of a MySQL database. I'm trying to wrap my head around how simple SQLite is, and I would like a simple, one-answer tutorial on how to use SQLite with the Zend Framework: where to put my SQLite database in my directory structure, how to create the database, and so on.
#tuinstoel is correct: attaching to an SQLite database implicitly creates it if it does not exist.
SQLite also supports a command-line client that is more or less like MySQL's command shell, allowing you to issue ad hoc commands or run SQL scripts. See documentation here: http://www.sqlite.org/sqlite.html
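For example (the file names here are placeholders):

# Run a script against the database, implicitly creating the file if needed:
sqlite3 mydatabase.db < schema.sql
# Or open an interactive shell for ad hoc commands:
sqlite3 mydatabase.db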
Of course you need to change the Zend_Db adapter in your ZF application. ZF supports SQLite only via an adapter for the PDO SQLite extension. SQLite doesn't support user/password credentials, and since SQLite is an embedded database rather than client/server, the "host" parameter is meaningless.
$db = Zend_Db::factory("pdo_sqlite", array("dbname"=>"/path/to/mydatabase.db"));
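A quick usage example with the adapter above; the "users" table is hypothetical.

// Fetch rows as associative arrays via the adapter.
$rows = $db->fetchAll('SELECT id, name FROM users');
foreach ($rows as $row) {
    echo $row['name'], PHP_EOL;
}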
One more caveat: when you get query results in associative-array format, some versions of SQLite insist on using "tablename.columnname" as the keys in the array, whereas other brands of database return keys as simply "columnname". There's an outstanding bug in ZF about this, to try to compensate and make SQLite behave consistently with the other adapters, but the bug is unresolved.
If you make a connection to a database that does not exist, the database is created on the fly. (You can turn this behaviour off.)
This is now covered in the Zend Framework quickstart tutorial (version 1.9.5 as of this writing). Just make a new project (with the zf command-line tool; look here for a great tutorial on setting it up), add these lines to your application.ini file, and you're good to go:
; application/configs/application.ini
[production]
resources.db.adapter = "PDO_SQLITE"
resources.db.params.dbname = APPLICATION_PATH "/../data/db/databaseName.db"
Now when you ask for your default database adapter, it will use this one. I would also recommend downloading the quickstart tutorial source code and making use of the load.sqlite.php script. You can create a schema and data file and load the database with those tables/columns/values. It's very helpful! Just check out the tutorial; it's all in there.