We want to run pgRouting 2.x on our test server. Additionally, we want existing applications to keep running on pgRouting 1.x.
Does anyone know if it's possible to install and run them in parallel?
Currently, we run on Postgres 9.1.9 and PostGIS 2.0.1.
No, I do not think you can do this with the same PostgreSQL version, because both versions use a shared library file, librouting.so, and this file is not compatible between the old and new versions of pgRouting. If you install PostgreSQL 9.1 and 9.2, for example, then you can install pgRouting 1.x on 9.1 and pgRouting 2.x on 9.2 without a problem. In hindsight, maybe I should have done a better job of changing the file names to avoid this, but I didn't, so it's not going to work.
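For example, on a Debian/Ubuntu box the side-by-side setup could look roughly like this; the package names, the pg_lsclusters helper, and the port numbers follow the stock Debian packaging conventions, so treat this as a sketch rather than a tested recipe:

    # Install a second PostgreSQL major version next to the existing 9.1
    sudo apt-get install postgresql-9.2
    # The Debian tooling gives each cluster its own port (9.1 on 5432, 9.2 on 5433)
    pg_lsclusters
    # Build/install pgRouting 1.x against 9.1 and pgRouting 2.x against 9.2,
    # then point legacy applications at 5432 and new ones at 5433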
Also, I do not think pgRouting 1.x will work with PostGIS 2.0.1, because PostGIS 2.0 removed a lot of functions that pgRouting 1.x uses. It might be possible to solve this problem by loading the PostGIS legacy.sql file.
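If you try the legacy.sql route, it is a single psql call; the path below is where Debian/Ubuntu typically ships the PostGIS 2.0 file, and the database name is a placeholder:

    # Re-create the deprecated PostGIS 1.x function names that pgRouting 1.x calls
    psql -d your_database -f /usr/share/postgresql/9.1/contrib/postgis-2.0/legacy.sql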
Related
How can I perform a full data migration from Elasticsearch 5.6.16 to 7.2.0? I have an application running version 5.6.16. Now I have to move some of the data to version 7.2.0. The manufacturer has written/provided its own tool for the migration, but it requires that the new installation (7.2.0) be installed on a new, separate server. That is only a last resort for me. So what's an easy and good way to do this on the same machine? Would it be a solution to install the new version (7.2.0) on the same machine on a different port, and then do the migration as if they were two servers?
Should I first back up the data and then re-import it after installing the new version? Will I get problems with the indexes? (I read something about this, namely that it could result in errors.)
You have a few questions, but I will try to answer the two important ones.
How can you run two different versions of Elasticsearch on a single machine?
Answer: It is possible, although not recommended in a production environment. As you guessed, you do it by running the two versions on different ports.
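A minimal sketch, assuming both versions were unpacked from the .tar.gz archives and the 5.6.16 node keeps the default 9200/9300 ports (all paths and port numbers here are examples):

    # Start the 7.2.0 node with its own ports, data, and log directories,
    # leaving the running 5.6.16 node untouched on 9200/9300
    cd /opt/elasticsearch-7.2.0
    ./bin/elasticsearch \
      -E http.port=9201 \
      -E transport.port=9301 \
      -E path.data=/var/lib/elasticsearch-7 \
      -E path.logs=/var/log/elasticsearch-7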
How do you migrate from ES 5.6 to 7.2?
Answer: Elasticsearch provides backward compatibility with the previous major version only, so if you are upgrading to 7.x, ES 6.x indices can be backed up and re-imported into ES 7.x, but you can't do this with 5.x indices.
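For the 5.x indices there is a commonly used workaround worth knowing about: the reindex-from-remote API, where the 7.x node pulls documents directly from the running 5.6 node. This is only a sketch; the hosts, ports, and index name are placeholders, and the old cluster must first be whitelisted on the 7.x node:

    # In the 7.2.0 node's elasticsearch.yml:
    #   reindex.remote.whitelist: "localhost:9200"
    # Then pull an index from the old 5.6 node into the new 7.2 node:
    curl -X POST "localhost:9201/_reindex" -H 'Content-Type: application/json' -d'
    {
      "source": {
        "remote": { "host": "http://localhost:9200" },
        "index": "my_index"
      },
      "dest": { "index": "my_index" }
    }'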
Note: Refer to the official Elasticsearch upgrade documentation for the detailed explanation and process.
I am new to ClickHouse. I want to upgrade ClickHouse from version 1.1.54231 to 19.6.2.11 in a production environment, and I have a few doubts.
Do I need to take a backup of my data before upgrading, or does the upgrade process take care of failovers and data corruption?
To upgrade I am using this command; please suggest a better way if one is possible:
    sudo apt-get update
    sudo apt-get install --only-upgrade clickhouse-server clickhouse-client
I am using Ubuntu 16.04 in production. Are there any precautions I should take while upgrading?
Which is the most stable 19.* version I should go with right now?
It is recommended to set up a testing environment first if you don't have one yet. Install the same ClickHouse version you currently have in production, run a realistic workload there, see if everything continues to run fine when you upgrade ClickHouse, and fix/adjust if not.
You should also check all "Backward Incompatible Change" sections of https://clickhouse.yandex/docs/en/changelog/ for all versions you are upgrading over.
Normally people use the latest version marked "stable" that works fine in their testing/preproduction environments.
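Once a release has passed your testing, it can help to pin production to that exact package version instead of upgrading to whatever is latest; roughly like this, where the version string is only an example:

    # Upgrade to the one release you actually tested, not "latest"
    sudo apt-get update
    sudo apt-get install clickhouse-server=19.6.2.11 clickhouse-client=19.6.2.11
    sudo service clickhouse-server restart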
RPM seems to be pretty good at checking dependencies and handling individual file updates, but what is the best practice for handling cumulative updates to, say, a relational database across multiple versions?
For instance, say you have product Foo with versions 1.2.1, 1.2.2, 1.2.3, and 1.3.0. In each of these, there were database schema changes that required SQL upgrade scripts. Running each upgrade script in sequence is required to get up to the current version of the schema.
Say a customer has 1.2.2 installed and wants to upgrade to 1.3.0. How can one structure the RPM package so that you have the appropriate scripts available and execute the correct upgrade scripts against the database? In this instance, you'd want to execute the upgrade scripts for 1.2.3 and 1.3.0, but not the ones for 1.2.1 or 1.2.2, since those have presumably already been executed.
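In rough shell pseudocode, what I'd like the package's %post scriptlet to end up doing is something like this (the schema_version table, database name, and psql invocation are invented purely for illustration):

    # %post sketch: apply only the upgrade scripts newer than the version
    # recorded in the database by the previous install.
    current=$(psql -t -A -d foo -c "SELECT version FROM schema_version")
    # assumes the glob's lexical order matches the version order
    for script in /usr/share/foo/upgrades/*.sql; do
      ver=$(basename "$script" .sql)   # e.g. .../1.2.3.sql -> 1.2.3
      # run the script only if its version sorts strictly after the current one
      if [ "$ver" != "$current" ] && \
         [ "$(printf '%s\n' "$ver" "$current" | sort -V | head -n1)" = "$current" ]; then
        psql -d foo -f "$script"
        psql -d foo -c "UPDATE schema_version SET version = '$ver'"
      fi
    done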
One alternative is to require upgrading to each intermediate version in sequence, forcing the user in this example to upgrade to 1.2.3 before 1.3.0. This seems less than optimal. Also, this would presumably need to be "forced" through an external process, since I don't see anything in the RPM spec file that would support it.
Are there any known techniques for handling this? A bit of Googling didn't expose any.
EDIT: By "known", I mean "tried and proven" not theoretical.
Use the right tool for the job. RPM probably isn't the right tool. Something like Liquibase would be better suited to this task.
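For what it's worth, Liquibase records every changeset it has applied in a DATABASECHANGELOG table inside the target database, so a single command applies only what is missing, whatever version the customer starts from. A rough invocation, with all connection details as placeholders:

    # Apply only the changesets not yet recorded in DATABASECHANGELOG
    # (the JDBC driver must be on Liquibase's classpath)
    liquibase \
      --changeLogFile=db/changelog.xml \
      --url=jdbc:postgresql://localhost:5432/foo \
      --username=foo \
      --password=secret \
      update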
Since I've not done this before, I am not sure whether the way I am planning to do this is okay, or whether there is a better way, like using Windows Installer, InstallShield, or the Windows Installer XML (WiX) toolset. Any help would be great, as I have no clue.
We have a product, and we ship a new version every few months. So far we've only been rolling out complete versions, i.e. either version 1.0 or version 1.5, but no upgrades from 1.0 to 1.2 to 1.3 to ... you get the picture, right? So any customer that got version 1.0 cannot upgrade to version 1.2 or 1.3 or even the latest. They have to uninstall the old version and install the latest version. This is not right, but that's all we could do until now. We'd like to change it.
My plan is to have an install file with SQL scripts for each upgrade path, check the table in the database that stores the version info, and, depending on it, run a different script to upgrade the database.
My concern is that this method may not be scalable once we have more than 5 or 6 different versions.
If you could point to any articles or books on this topic, that would help a lot too.
Also, could we use Windows Installer or InstallShield for this?
Thanks,
_UB
We've been using DBGhost for a year or so now to keep our database under source control along with our codebase, and it makes this kind of thing dead easy. It's not just well thought through, but they've been using it to roll out their own code for years, so it's dead solid.
Your problem is a pretty common one, and I've had to deal with this kind of problem at my last job. Aside from the Red Gate tool, there is another tool that may help you do what you need: DB Ghost. They explicitly address the versioning problem and have a packager as well. I would suggest doing a trial of the DB Ghost product, because they make some interesting claims concerning multiple-version upgrades. This was taken from their FAQ (http://www.innovartis.co.uk/faqs/faqs.aspx):
Q: Our problem is going to be managing data structure changes during upgrades. Our product line is shrink-wrapped, or downloadable from the website. So when a user downloads an upgrade, they can be upgrading from a very recent version, with few database structure changes, or the upgrade may be from a very old version with a multitude of structural changes. One upgrade needs to manage it all. The user would be offsite, so we can't hold their hand. We have users in Greece, Australia, Malaysia, Norway, etc. How would DB Ghost, if at all, handle updates in remote locations?
A: The DB Ghost Packager Plus product was designed specifically to address this issue, as it can dynamically handle the required updates to a target database seamlessly.
I'm just mentioning this because our company is trying to do something similar and I was doing research on this tool.
Thanks,
Eric
Do you insist on doing it yourself, or could you see yourself committing to and investing in a tool?
I really like the idea of Red-Gate's SQL Packager, which will "diff" your two database versions, and then create a SQL script, a C# project, or a stand-alone executable to upgrade from version 1 to version 2.
I'm not 100% sure how you'd be able to upgrade from 1.0, 1.1, 1.2, or 1.3 all to 2.0 - check out their website and see if they offer something for that scenario!
Otherwise, I guess it'll get quite thorny and messy...
Marc
In the Rails world they use a tool/method called migrations.
Basically, it boils down to creating a small SQL script to upgrade and downgrade each little change to the database.
When you are testing the application, you migrate your database to the version you want, and on deployment the application can check what version it needs and migrate to that version.
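For example, in Rails the day-to-day commands look roughly like this (the version number is a placeholder timestamp):

    # Apply all migrations that have not run yet
    rake db:migrate
    # Migrate up or down to one specific schema version
    rake db:migrate VERSION=20090101120000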
There are free migration toolkits for most popular languages; they might be part of some MVC framework, though.
A nice side effect of migrations is that you have database source code that is easily stored in your source control repository.
I need to upgrade my current version of DNN this week. I am currently using 2.1.1. I don't want to do everything twice, so I have several questions.
Is there an upgrade tool, or some scripts somewhere, that will help me do the upgrade?
Am I better off installing 4.9 or 5.0? It is for production.
If I go with 4.9, will I be able to upgrade to 5.0 when it releases?
I personally strongly disagree with ALassek: you can upgrade DotNetNuke. You just have to follow the steps listed, and as long as you do that it isn't a big deal at all. But there are a few key things to keep in mind as you set down the road to do your migration.
DO NOT USE 5.0 in production at this time. 5.0 is only at the RC2 stage, using it in production is NOT recommended, and an upgrade path from RC2 -> final might not be possible!
If you plan on trying to upgrade from 2.1.1, go from it to the most current version of 2, then go to 3, then to 3.3.7, then to 4.4.1, then to 4.6.2, and then to 4.9.0. Typically you are able to make it, but some sites are not able to.
Some modules, though, will need to be updated to work with DNN 4.x; depending on how many there are and on the vendors, this can be an easy process, or it can involve finding other providers for the specific functionality at hand.
As for the potential to upgrade from 4.9 to 5.0: yes, that will be 100% supported once 5.0 is in a production-ready state.
It's been my experience that DotNetNuke has a tendency to release breaking changes without documenting them (or documenting much of anything, for that matter). Without knowing exactly what you have installed in it, it's impossible to say exactly how screwed you are. But I can guarantee you the transition will likely not be easy, especially if you have a lot of modules installed.
Between 2.1.1 => 4.9, so much has changed that I can't imagine there is any automated way to upgrade. You're better off starting from scratch and seeing what still works. Most likely you will need to find newer versions of any modules you're using, or replacements for those that aren't being kept current.
To be honest, I don't know. But I see that the DNN download page very strongly states that the 5.0 release-candidates are "NOT RECOMMENDED FOR PRODUCTION USE".
There were a huge number of breaking changes between 2.x and 3.x, which will mean pretty much any custom modules you have will need to be upgraded or replaced. Other than that, Mitchel is the DNN man and I would defer to him.