Kiwi TCMS v8.0 database migration problem with test runs

I just migrated Kiwi TCMS from v7.1 on server A to v8.0 on server B (I virtualized our application server).
I use a PostgreSQL container for the DB.
For the migration I used the backup-restore method.
The DB volume on server B is brand new, and I created the schema using /Kiwi/manage.py migrate.
Then I restored the .json dump, but I ran into a schema problem because of the primary key changes in v8.0. I replaced the old names with the new ones in the JSON file and the restore then passed.
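For reference, the backup-restore sequence was roughly the following (the kiwi_web container name is an assumption taken from the default docker-compose file; adapt it to your setup):
$ # on server A (v7.1): dump the whole database to JSON
$ docker exec -i kiwi_web /Kiwi/manage.py dumpdata --all --indent 2 > database.json
$ # on server B (v8.0): create the schema, then load the dump
$ docker exec -it kiwi_web /Kiwi/manage.py migrate
$ docker cp database.json kiwi_web:/tmp/database.json
$ docker exec -it kiwi_web /Kiwi/manage.py loaddata /tmp/database.json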
The application runs fine, except that I now have a problem with test runs when I go to the "search for test runs" page:
DataTables warning: table id=resultsTable - Requested unknown parameter 'run_id' for row 0, column 0. For more information about this error, please see http://datatables.net/tn/4
I am not sure whether I made a mistake during the migration or whether there is actually a bug in the migration process.
For info: on a test container I tried migrating the database from v7.3 to v8.0. Everything went fine, but I had the same problem at the end.
Thank you in advance for your support!
EDIT 1
I solved my problem by following Alexander Todorov's advice and restarting the migration steps from scratch:
I pushed the Kiwi TCMS 7.1 image to my own Docker registry, so I migrated to server B while still on 7.1. Now I could focus on the upgrade.
I use a PostgreSQL container, so I didn't need to update the MariaDB container before migrating.
I upgraded to Kiwi TCMS 8.0 using the kiwitcms/kiwi:latest image.
I am not sure why the upgrade from v7.3 didn't work the first time, but going from v7.1 to v8.0 everything went fine.
It is possible that I backed up from 7.1 and restored onto a 7.3 instance. Everything was fine while running 7.3 at that point, but I had trouble with the migration to 8.0...
Anyway, thanks for your support!
EDIT 2
I don't know how it is possible, but I can't reproduce the exact same migration on another instance; I get the issue each time I try.
What I have now:
A preprod instance of kiwi working fine in v8.0
What I want:
A production instance in v8.0
What I tried:
Create a new instance in v8.0 and back up the preprod DB to restore it on the prod => fails with the error I had before...
DataTables warning: table id=resultsTable - Requested unknown parameter 'run_id' for row 0, column 0. For more information about this error, please see http://datatables.net/tn/4
Create a new instance in v7.1 and migrate in the exact same way as I did on my preprod => fails with the same error...
I am really clueless on this :/

I just migrated Kiwi TCMS from v7.1 on server A to v8.0 on server B (I virtualized our application server).
That is what is causing your problems. You are trying to restore one version of the DB schema and data onto another version (on the second server). Between the two versions there are quite a lot of DB migrations, and what you are trying to do will always lead to failure.
You can use server A and upgrade in place to the latest version, then dump the data, move it to server B (running the same version), restore the data into the clean DB, and decommission server A.
You may also set up server B with the older version of Kiwi TCMS, migrate the data there and then upgrade server B to v8.0.
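Roughly, the second option boils down to this (the kiwi_web container name is assumed from the default docker-compose file):
$ # 1. on server B, run the SAME Kiwi TCMS version the dump came from (7.1 here),
$ #    create the schema with manage.py migrate and restore the JSON dump as usual
$ # 2. only after the data is in, point docker-compose.yml at the 8.0 image, then:
$ docker-compose pull && docker-compose up -d
$ docker exec -it kiwi_web /Kiwi/manage.py migrate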

No idea why it worked, but I did the following and everything seems to be fixed. My symptoms were exactly like those described here (7.3 -> 8.0, MariaDB, pruned after 8.0, DataTables warning).
$ cd Kiwi
$ sudo docker exec -it kiwi_db bash
$ mysql -ukiwi -p -h 127.0.0.1    # password "kiwi", taken from the docker-compose .yml
Then, within MariaDB:
use kiwi;
show tables;
describe testruns_testrun;
select * from testruns_testrun;
It was at this point, while refreshing the page to pull out the ID value to check against, that all of a sudden my test runs loaded. I then did a
sudo docker-compose down && sudo docker-compose up -d
to verify the issue doesn't come back, and it didn't. I'm no DBA, just poking around. I had made a new run from a plan, which worked until I navigated away and tried searching for the test run; I did the above afterwards.

Related

Database issue when migrating a Trac project

I am trying to migrate a series of Trac projects originally hosted on CloudForge onto a new Bitnami virtual machine (debian with Trac stack installed).
The documentation on the Trac wiki regarding restoring from a backup is a little vague for me, but it suggests that I should be able to set up a new project
$ sudo trac-admin PROJECT_PATH initenv
stop the services from running
$ sudo /opt/bitnami/ctlscript.sh stop
copy the snapshot from the backup into the new project path and restart the services
$ sudo /opt/bitnami/ctlscript.sh start
and I should be good to go.
Having done this (and worked through quite a few issues along the way), I have now got to the point where the browser page shows
Trac Error
TracError: Unable to check for upgrade of trac.db.api.DatabaseManager: TimeoutError: Unable to get database connection within 0 seconds. (OperationalError: unable to open database file)
When I set up the new project I left the default (unedited) database string, but I have no idea what database type was used for the original CloudForge Trac project, i.e. is there an additional step needed to restore the database?
Any help would be greatly appreciated, thanks.
Edit
Just to add: CloudForge was using Trac 0.12.5, while the new VM uses Trac 1.5.1. Not sure if this will be an issue?
Edit
After more investigation I'm now pretty sure that the CloudForge snapshot is not an SQLite (or other) database file; it looks more like a dump of SQL statements, as it starts and ends with:
BEGIN TRANSACTION;
...
COMMIT;
Thanks to anyone taking the time to read this, but I think I'm sorted now.
After learning more about SQLite I discovered that the file sent by CloudForge was an SQLite dump of the database, and it was easy enough to migrate it to a new database instance using the command line:
$ sqlite3 location_of/new_database.db < dump_file.db
I think I also needed a prior step of removing the contents of the original new_database.db using the sqlite3 command line (just type sqlite3 in a terminal):
sqlite> .open location_of/new_database.db
sqlite> BEGIN TRANSACTION;
sqlite> DELETE FROM each_table_in_database;
sqlite> COMMIT;
sqlite> .exit
I then had some issues with credentials on the Bitnami VM, so I needed to retrieve these (as per the Bitnami documentation) using
$ sudo cat /home/bitnami/bitnami_credentials
and add this USER_NAME as a TRAC_ADMIN using
$ trac-admin path/to/project/ permission add USER_NAME TRAC_ADMIN
Note that before and after this operation, be sure to stop and restart the Bitnami services using
$ sudo /opt/bitnami/ctlscript.sh stop
$ sudo /opt/bitnami/ctlscript.sh start
I am the guy from Trac Users. You need to understand that the user isn't really stored in the DB. There are some tables with columns holding the username, but there is no table for a user. Looking at your post, I think your setup used htdigest, so your user info is in that credentials file. If you cat it you should see something like
username:realmname:pwhash
I think the hash is MD5, but it doesn't really matter for your problem. So if you want to make a new user you have to use
htdigest [ -c ] passwdfile realm username
Then you should use trac-admin to grant the permission, and at that point your user should be able to log in.
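For example, with a placeholder file path and realm name (use whatever realm your existing credentials file shows in its second field):
$ htdigest /path/to/trac.htdigest TracRealm new_user
Then grant the permission with trac-admin permission add as shown earlier.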
Cheers
Markus

Bitbucket and Database Development

I have a Windows server with MS SQL Server running on it.
On the SQL Server developers have created stored procedures, views, tables, triggers.
On the Windows server developers created shell scripts.
I would like to start versioning the code described above in a Bitbucket repository. I have a repository created in Bitbucket.
1. How should the branches be organized in this repository? e.g. "SQL Server\Database\...", "Windows Server\shell_script\..."
2. Can I connect Bitbucket to the SQL Server and the Windows server and specify which code needs to be versioned?
3. Are both options 1 and 2 above possible?
I just need to version control the changes to the code and have the ability to mark under which project the code change was made.
I am new to Bitbucket. I am using its web front end. I do not know how to configure command-line access, so please try not to reference Bitbucket commands. Sorry if I sound confusing.
Please help.
I know this is an old question, but anyway, in principle I'd recommend:
Put all the server shell scripts into one place and make that a Git repo linked to your Bitbucket repo
Add a server shell script to export whatever you want version controlled from the SQL DB (see the sketch after this list)
The export from the SQL DB should be to text files so they are easily 'diffable'
You might as well make the export go to a sub-directory within the shell-scripts repo so that everything is in one place and can't get out of sync
That way you only have one branch, not a separate one for the server shell scripts and the DB
Make sure people run the export script and then commit everything when they make a change
Ideally you have a test server, which means you'd want a way to push changes from the repo into the SQL DB. I presume you can do this with a script by deleting the server setup and re-creating it from the text files.
So basically, you can't connect an SQL DB to Bitbucket directly. You need scripts to read from and write to the DB from a repo.
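As a minimal sketch of the export-script idea, assuming the sqlcmd command-line tool is available on the Windows server and Windows authentication is used (server, database and object names are placeholders):
$ # script out one object's definition to a plain-text .sql file; repeat per object,
$ # or wrap in a loop over SELECT name FROM sys.objects WHERE type IN ('P','V')
$ sqlcmd -S my-sql-server -d my_database -E -h -1 -W -y 0 -Q "SET NOCOUNT ON; SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.my_procedure'))" > db_export/dbo.my_procedure.sql
$ # commit the exported files together with the shell scripts
$ git add db_export && git commit -m "Export current DB object definitions"
Microsoft's mssql-scripter tool can also script out a whole schema to files if you'd rather not hand-roll the queries.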

TeamCity Database migration

We have a TeamCity installation as well as an external MSSQL database on a Microsoft SQL server. We've had to migrate the database to a new instance and now have to configure TeamCity to point to the new database.
I've looked through this guide (https://confluence.jetbrains.com/display/TCD10/Manual+Backup+and+Restore), among others, but they all seem needlessly complicated and seem to imply a complete relocation of the entire TeamCity installation, whereas we simply want to point an existing TeamCity installation to a new database.
A simple search reveals a config file with a connection string hidden in teamcity/serverdata/config. It would seem we could simply change the config file and be done with it. Are we missing something?
We're using TeamCity Professional 2017.1 (build 46533)
If you're only migrating to the new database server, then changing the configuration in the <TeamCity Data Directory>\config\database.properties file is all you have to do.
I assume that you'll make a backup and migrate the data to the new database first, right? After that you can safely change the value in that file and restart TeamCity. It probably also makes sense to check the connection to the new database from the TeamCity server first.
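For reference, the MSSQL variant of database.properties only contains a handful of properties, roughly like this (host, database name and credentials below are placeholders):
connectionUrl=jdbc:sqlserver://new-db-host:1433;databaseName=TeamCityDB
connectionProperties.user=teamcity_user
connectionProperties.password=your_password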

Running different versions of postgresql side by side

I have postgresql 9.3 installed.
I would like to have also postgres 9.6.1 installed.
Each application uses a different DB. Most of the time I don't run both applications, so I don't need them to run concurrently.
I downloaded the installer recommended by PostgreSQL and installed 9.6.1, but now it seems that 9.3 is not able to start anymore. I'm getting an error when trying to run sudo service postgres start:
Starting PostgreSQL 9.3 database server
The PostgreSQL server failed to start. Please check the log output.
The log file, /var/log/postgresql/postgresql-9.3-main.log, is empty (not sure that's even the relevant one).
Any idea how to be able to run both instances?
You need to check the postgresql.conf config file.
If you want to run both instances at the same time then they will need to run on different ports, otherwise they will conflict. The default is 5432; change this for one of the instances.
Then make sure that the data directory and log file are unique for each instance.
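For example, the 9.6 instance's postgresql.conf might end up with something like this (paths are typical Debian defaults and only illustrative):
port = 5433                                      # the 9.3 instance keeps the default 5432
data_directory = '/var/lib/postgresql/9.6/main'  # must not be shared with the 9.3 cluster
On Debian/Ubuntu you can check which clusters exist and which port each one listens on with:
$ pg_lsclusters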

ArangoDB upgrade loses data

I feel I'm doing something wrong with the ArangoDB upgrade process. The end result from the upgrade is that my databases exist, my users exist, my collections exist, but there are no documents in my collections. Obviously this is an issue. I've had this problem occur twice, upgrading from 2.3.1 -> 2.3.4, and 2.3.4 -> 2.4 in Windows. I used the same procedure in both cases:
Stopped the ArangoDB service
Made a backup copy of my ArangoDB directory from Program Files
Installed the new version of ArangoDB
Copied the contents of the database folder from the old ArangoDB directory to the new one, excluding the system database (I feel like this is where I go wrong...)
Then I open a command prompt to the bin directory and run arangod --upgrade
The upgrade output seems right to me, it finds the old databases and upgrades them, which is evident by the fact that they exist, along with the collections. But as stated before the collections are all empty. Thankfully this has been in a dev environment, but I worry about upgrading my production environment. Am I doing something wrong or is this a bug?
I've tried to reproduce this with the 2.3.5 to 2.4.1 upgrade using the x64 ArangoDB packages.
What I did:
First, I ran arangod from the shell with its own database directory outside of the program directory:
bin\arangod.exe c:\ee --console
Created a collection, inserted data (like the js/server/tests/aql-optimizer-rule-use-index-for-sort.js setUp()-function does)
Then I installed the new version and ran
bin\arangod.exe c:\ee --upgrade
then
bin\arangod.exe c:\ee --console
AQL_EXECUTE("for u in UnitTestsAqlOptimizeruse_index_for_sort_XX return u")
This gave me back all 100 documents that I had put into the collection.
Next I tried running the arangod service, with the var\lib folder inside the Program Files folder. I connected using arangosh, inserted the documents into the collection again, and verified with
db._query("for u in UnitTestsAqlOptimizeruse_index_for_sort_XX return u").toArray();
that all the data was there.
Then I stopped the service, installed 2.4.1, stopped the service again, used Explorer to copy over the ArangoDB 2.4.1\var\lib directory, ran arangod --upgrade successfully, restarted the service, and used arangosh to revalidate the collection and its documents again.
So, as this seems similar to what you did, can you try to reproduce this with a minimal set of data and send us your var\lib directory?
As it turns out, the problem was related to replication. I would replicate data from the production DB to use during development, and then when I upgraded or stopped the ArangoDB service on the dev DB all the documents would vanish. But when I used ArangoDB backup and restore to copy the production DB data, everything worked as expected. The newest version of ArangoDB is supposed to have fixed the issue, but I haven't had time to test it.
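For reference, the dump-and-restore route that worked looks roughly like this (endpoint and database name are placeholders):
$ arangodump --server.endpoint tcp://127.0.0.1:8529 --server.database mydb --output-directory dump
$ arangorestore --server.endpoint tcp://127.0.0.1:8529 --server.database mydb --create-database true --input-directory dump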
