I want to create a crosstab function for PostgreSQL version 8.0.1. Because it was implemented after PostgreSQL 8.4.0, I am not able to take advantage of the same function.
Is there a way around that?
PostgreSQL 8.0.1?!?
PostgreSQL 8.0.x is unsupported, but if you're still running 8.0.1 you apparently don't mind about that, because you haven't applied any patches either. You are running a release with multiple known data-loss bugs. You need to upgrade to, at minimum, the latest 8.0.x release, then start planning your move to 9.0 or 9.1 fairly urgently.
That'll solve your current problem as a handy side-effect.
Related
I have a project which required migrating all the stored procedure from SQL Server to Hadoop ecosystem.
So my main concern is whether HPL/SQL has been discontinued or is no longer up to date, since http://www.hplsql.org/new shows the latest released features as HPL/SQL 0.3.31 - September 2017.
Has anyone been using this open source tool? Based on your experience, is this kind of migration feasible? Your sharing would be highly appreciated.
I am facing the same issue of migrating a huge amount of stored procedures from traditional RDBMS to Hadoop.
Firstly, this project is still active. It has been included in Apache Hive since version 2.0, which is why individual releases stopped after September 2017. You can download the latest version of HPL/SQL from the Hive repo.
You may have a look at the git history for new features and bug fixes.
I wanted to ask if I understand the Postgres upgrade correctly. Currently version 9.6 is installed on the server. I plan to upgrade to version 12. Since there are several databases on this server and I do not want to dump each one separately, is the easiest solution to do a dumpall, then remove the old version of Postgres and install the newer one? I wanted to make sure, because I found some differing examples. Is there anything else I should do in this flow? I am asking for guidance.
It is best to install the old and the new version of PostgreSQL side by side.
Then you can either use pg_dumpall (from the newer version!) and psql to dump and restore, or you can use pg_upgrade.
The documentation covers that in detail.
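As a sketch, assuming a Debian-style layout where the old 9.6 cluster listens on port 5432 and the new 12 cluster on 5433 (all paths and ports here are assumptions; adjust them to your installation):

```shell
# Option 1: dump everything with the NEW version's pg_dumpall, restore with psql
/usr/lib/postgresql/12/bin/pg_dumpall -p 5432 -f all.sql
psql -p 5433 -d postgres -f all.sql

# Option 2: pg_upgrade (both clusters stopped, run as the postgres user)
/usr/lib/postgresql/12/bin/pg_upgrade \
    -b /usr/lib/postgresql/9.6/bin  -B /usr/lib/postgresql/12/bin \
    -d /var/lib/postgresql/9.6/main -D /var/lib/postgresql/12/main
```

On Debian/Ubuntu the pg_upgradecluster wrapper from postgresql-common can drive pg_upgrade for you, including the separately-located config files.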
We are migrating from Datastage 7.5.3 to 11.7.1. I was wondering whether we need to upgrade to an intermediate version of Datastage? Is there any conversion tool available? Any inputs from people who have experience in a similar upgrade are appreciated. Thanks
There is no option for in-place upgrade from DataStage v7 directly to Information Server v11.
You will need to install Information Server 11.7.1 (either on the same machine in a side-by-side configuration, if the machine has enough resources for both environments, or on a new server). You can then export all of your existing DataStage jobs in the v7 environment to a dsx file that you can import into the new environment.
More information on migration steps can be found here:
https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.7.0/com.ibm.swg.im.iis.productization.iisinfsv.migrate.doc/topics/top_of_map.html
Though this document does not list specific steps for DataStage v7.5, the steps for DataStage v8 are equivalent as long as you export jobs as dsx files, since istool did not exist in DataStage v7.
There have been many changes to DataStage between versions 7.5 and 11.7 that you need to be aware of when moving jobs from the old release to the new one. We have documented these changes for the DataStage 8.5, 8.7, 9.1 and 11.3 releases. Since you are jumping past all of these releases, all of the documents are relevant; I will link them below and HIGHLY recommend reviewing them, as the changes can affect the behavior of jobs and also result in errors. In some cases these technotes document environment variables that can be set to switch back to the old behavior.
Additionally, in the last few releases a number of the older enterprise database stages for various database vendors have been deprecated in favor of using newer "Connector" stages that did not exist in v7.5. For example, DB2 Enterprise stages should be upgraded to DB2 Connector, Oracle stages to Oracle connector, etc.
We have a client tool, the Connector Migration tool which can be used to create new version of job with the older stages automatically converted to connector stages (you will still need to test the jobs).
Also, when exporting jobs from v7.5, export the design only: all jobs need to be recompiled at the new release level, so exporting the executables is a waste of space in this case.
If you also need to move hash files and datasets to the new system, there are technotes on IBM.com that discuss how to do that, though I cannot guarantee that the format of datasets has not changed between 7.5 and 11.7.
You will find that in more recent releases we have tightened error checking, such that things which only received warnings in the past may now be flagged as errors, and conditions that were not reported at all may now produce warnings. Examples of this include changes to null handling, such as when a field in the source stage is nullable but the target stage/database has the field as not nullable. There are also new warnings and errors for truncation and type mismatches (some of these warnings can be turned off via properties in the new connector stages).
Here are the recommended technotes to review:
Null Handling in a transformer for Information Server DataStage Version 8.5 or higher
https://www.ibm.com/support/pages/node/433863
Information Server Version 8.7 Compatibility
https://www.ibm.com/support/pages/node/435721
InfoSphere DataStage and QualityStage, Version 9.1 Job Compatibility
https://www.ibm.com/support/pages/node/221733
InfoSphere Information Server, Version 11.3 job compatibility
https://www.ibm.com/support/pages/node/514671
DataStage Parallel framework changes may require DataStage job modifications
https://www.ibm.com/support/pages/node/414877
Product manual documentation on deprecated database stages and link to Connector Migration Tool:
https://www.ibm.com/support/knowledgecenter/en/SSZJPZ_11.7.0/com.ibm.swg.im.iis.conn.migtool.doc/topics/removal_stages_palette.html
Thanks.
I have a strange problem.
I tried to move a database from one server to another using pgAdmin III.
The database was created on a server with PostgreSQL 8.4.9 and I wanted to move it to a second server with PostgreSQL 8.2.11.
To do it, I used the "backup" option and saved the file; after that I used the "restore" option on the new database. The tables are loaded, but there aren't any functions in the new database.
Maybe it is because of the different PostgreSQL versions?
Does anyone know the reason? Any solution?
If the functions aren't around, double-check that plpgsql is available as a language. It's available by default nowadays, but making it available used to require a create language statement.
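On those older releases, enabling the language after a restore was a single statement, run as a superuser in the target database:

```sql
CREATE LANGUAGE plpgsql;
```

You can check which languages are already installed with `SELECT lanname FROM pg_language;`.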
That said, I'd echo the comments: you really should be upgrading to a 9.x Postgres version that is still supported, rather than downgrading from an unsupported version to one that is even older.
I'd recommend doing it via pg_dump from an interactive session and exporting the complete database to one or more SQL files. There you can use the -s switch to export only the schema, which includes the created functions. With such an SQL file you can also more easily backport your changes, or debug if something does not apply cleanly on the old server.
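For example (the database names mydb and newdb are placeholders):

```shell
pg_dump -s mydb > schema.sql    # schema only: tables, functions, etc.
pg_dump mydb > full_backup.sql  # schema plus data
psql -d newdb -f schema.sql     # replay the schema on the target server
```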
I'm working on an AIR application that uses a local SQLite database, and I was wondering how I could manage database schema updates when I distribute new versions of the application. I'm also considering updates that skip some versions, e.g. going from 1.0 straight to 1.5 instead of from 1.0 to 1.1.
What technique would you recommend?
In the case of SQLite, you can make use of the user_version pragma to track the version of the database. To get the version:
PRAGMA user_version
To set the version:
PRAGMA user_version = 5
I then keep each group of updates in an SQL file (that's embedded in the app) and run the updates needed to get up to the most recent version:
Select Case currentUserVersion
Case 1
// Upgrade to version 2
Case 2
// Upgrade to version 3
Case etc...
End Select
This allows the app to update itself to the most recent version regardless of the current version of the DB.
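As an illustration, here is a minimal Python/sqlite3 sketch of the same pattern (the table names and migration SQL are invented for the example):

```python
import sqlite3

# Invented example migrations: each entry upgrades the schema by one version.
MIGRATIONS = {
    0: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    1: "ALTER TABLE users ADD COLUMN email TEXT",
}

def upgrade(conn):
    # Read the current schema version stored in the SQLite file header.
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    # Apply every migration from the current version up to the latest.
    while version in MIGRATIONS:
        conn.execute(MIGRATIONS[version])
        version += 1
        conn.execute("PRAGMA user_version = %d" % version)
    return version

conn = sqlite3.connect(":memory:")
print(upgrade(conn))  # a fresh database is upgraded 0 -> 2
```

Because the loop always starts from whatever user_version currently holds, a database at any older version catches up in one pass.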
We script every DDL change to the DB, and when we make a "release" we concatenate them into a single "upgrade" script, together with any stored procedures that have changed "since last time".
We have a table that stores the version number of the latest patch applied - so upgrade tools can apply any newer patches.
Every stored procedure is in a separate file. Each starts with an "insert" statement into a logging table that stores the name of the SProc, the version, and "now". (Actually an SProc is executed to store this; it's not a raw insert statement.)
Sometimes during deployment we manually change an SProc, or rollout odds & ends from DEV, and comparing the log on client's TEST and PRODUCTION databases enables us to check that everything is at the same version.
We also have a "release" master database, to which we apply the updates, and we use a restored backup of that for new installations (this saves the time of running the scripts, which obviously accumulate over time). We update it as and when needed; if it is a bit stale, the later patch scripts can still be applied.
Our Release database also contains sanitised starter data (which is deleted, or sometimes adopted & modified, before a new installation goes live - so this is not included in any update scripts)
SQL Server has a toolbar button to script a change - so you can use the GUI tools to make all the changes, but rather than saving them generate a script instead. (actually, there is a checkbox to always generate a script, so if you forget and just press SAVE it still gives you the script it used after-the-fact, which can be saved as the patch file)
What I am considering is adding a SchemaVersion table to the database which holds a record for every version that exists. The last version of the SchemaVersion table is the current level of the database.
I am going to create (SQL) scripts that perform the initial setup of 1.0 and thereafter the upgrade from 1.0 to 1.1, 1.1 to 1.2, etc.
Even a fresh install to e.g. 1.2 will run through all these scripts. This might seem a little slow, but is only done once and on an (almost) empty database.
The big advantage of this is that a fresh install will have the same database schema as an upgraded install.
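Sketched in Python/sqlite3 terms (the SchemaVersion table layout and the scripts themselves are invented for illustration), the idea looks like this:

```python
import sqlite3

# Ordered setup/upgrade scripts; a fresh install replays all of them.
SCRIPTS = [
    ("1.0", "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    ("1.1", "ALTER TABLE customers ADD COLUMN city TEXT"),
    ("1.2", "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)"),
]

def current_version(conn):
    # The newest row in SchemaVersion is the current level of the database.
    conn.execute("CREATE TABLE IF NOT EXISTS SchemaVersion (version TEXT)")
    row = conn.execute(
        "SELECT version FROM SchemaVersion ORDER BY rowid DESC LIMIT 1"
    ).fetchone()
    return row[0] if row else None

def upgrade(conn):
    applied = current_version(conn)
    versions = [v for v, _ in SCRIPTS]
    start = versions.index(applied) + 1 if applied else 0
    for version, sql in SCRIPTS[start:]:
        conn.execute(sql)
        # One SchemaVersion row per version that has ever been applied.
        conn.execute("INSERT INTO SchemaVersion (version) VALUES (?)", (version,))
    return current_version(conn)
```

A fresh database runs all three scripts and ends at "1.2"; a database already at "1.1" runs only the last one.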
As I said: I am considering this. I will probably start implementing this tomorrow. If you're interested I can share my experiences. I will be implementing this for a c# application that uses LINQ-to-entities with SQL Server and MySQL as DBMSes.
I am interested to hear anybody else's suggestions and ideas and if somebody can point me out an open source .Net library or classes that implements something like this, that would be great.
EDIT:
In the answer to a different question here on SO I found a reference to Migrator.Net. I started using it today and it looks like it is exactly what I was looking for.
IMO the easiest thing to do is to treat an update from e.g. 1.0 to 1.5 as a succession of updates from 1.0 to 1.1, 1.1 to 1.2, and so forth. For each version change, keep a conversion script/piece of code around.
Then, keep a table with a version field in the database, and compile the required version into the app. On startup, if the version field does not match the compiled-in version, run all the required conversion scripts, one by one.
The conversion scripts should ideally start a transaction and write the new version into the database as the last statement before committing the transaction.
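A minimal Python/sqlite3 sketch of this (the db_version table and the steps mapping are invented names for the example):

```python
import sqlite3

def migrate(conn, app_version, steps):
    """Bring the database up to app_version, the version compiled into the app.

    steps maps each from-version to the list of SQL statements that
    upgrade the schema by exactly one version.
    """
    conn.execute("CREATE TABLE IF NOT EXISTS db_version (version INTEGER)")
    row = conn.execute("SELECT version FROM db_version").fetchone()
    if row is None:
        conn.execute("INSERT INTO db_version VALUES (0)")
        version = 0
    else:
        version = row[0]
    while version < app_version:
        # One transaction per conversion script; writing the new version
        # number is the last statement before the commit.
        with conn:
            for sql in steps[version]:
                conn.execute(sql)
            version += 1
            conn.execute("UPDATE db_version SET version = ?", (version,))
    return version
```

If any statement in a step fails, the `with conn:` block rolls the whole step back, so the stored version number never gets ahead of the actual schema.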