I have a testing script that creates and deletes testing databases. At some point today it started failing. Digging further, it looks like several of my testing databases are in an inconsistent state.
The databases appear in Fauxton with the message "This database failed to load." I am unable to view the database contents in this interface, and their names, which are usually links, are now plain text.
Issuing GET and DELETE commands with curl shows the following errors:
$ curl -s -X DELETE http://username:password@0.0.0.0:5984/dbname
{"error":"error","reason":"internal_server_error"}
$ curl -s -X GET http://username:password@0.0.0.0:5984/dbname
{"error":"internal_server_error","reason":"No DB shards could be opened.","ref":2413987899}
I have looked inside the couchdb2 data directory and I do see that shards exist for these databases.
What can I do to delete these databases? I am not sure if I can do this by manually deleting files in the couchdb2 data directory.
Have you solved your issue yet? I had this same problem, and ultimately ended up just installing a new CouchDB 2.1.0 instance and replicating to it before taking down the original. I suspect it might have had something to do with CouchDB not liking its default choice of "couchdb@localhost" as the name for a node, because it was constantly telling me that was an illegal hostname.
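For reference, the replication itself can be kicked off per database through the _replicate endpoint; a minimal sketch (hosts, credentials and database name are placeholders) would be:
$ curl -X POST http://admin:password@new-host:5984/_replicate \
    -H 'Content-Type: application/json' \
    -d '{"source":"http://admin:password@old-host:5984/dbname","target":"dbname","create_target":true}'
(This of course only helps for the databases that can still be opened.)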
I am trying to migrate a series of Trac projects originally hosted on CloudForge onto a new Bitnami virtual machine (Debian with the Trac stack installed).
The documentation on the Trac wiki about restoring from a backup is a little vague for me, but it suggests that I should be able to set up a new project:
$ sudo trac-admin PROJECT_PATH initenv
stop the services from running
$ sudo /opt/bitnami/ctlscript.sh stop
copy the snapshot from the backup into the new project path and restart the services
$ sudo /opt/bitnami/ctlscript.sh start
and should be good to go.
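The copy step itself is just a recursive copy of the snapshot into the new project path, something along the lines of the following (the source path is only illustrative):
$ sudo cp -r /path/to/cloudforge_snapshot/* PROJECT_PATH/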
Having done this (and worked through quite a few issues along the way), I have now got to the point where the browser page shows:
Trac Error
TracError: Unable to check for upgrade of trac.db.api.DatabaseManager: TimeoutError: Unable to get database connection within 0 seconds. (OperationalError: unable to open database file)
When I set up the new project I note that I left the default (unedited) database string, but I have no idea what database type was used for the original CloudForge Trac project, i.e. is there an additional step to restore the database?
Any help would be greatly appreciated, thanks.
Edit
Just to add, CloudForge was using Trac 0.12.5 and the new VM uses Trac 1.5.1. Not sure if this will be an issue?
Edit
After more investigation, I'm now pretty sure that the CloudForge snapshot is not an SQLite (or other) database file; it looks more like a set of SQL statements, as it starts and ends with:
BEGIN TRANSACTION;
...
COMMIT;
Thanks to anyone taking the time to read this, but I think I'm sorted now.
After learning more about SQLite, I discovered that the file sent by CloudForge was an SQLite dump of the database, and it was easy enough to migrate into a new database instance from the command line:
$ sqlite3 location_of/new_database.db < dump_file.db
I think I also needed a prior step of removing the contents of the original new_database.db using the sqlite3 command line (just type sqlite3 in a terminal):
sqlite> .open location_of/new_database.db
sqlite> BEGIN TRANSACTION;
sqlite> DELETE FROM each_table_in_database;
sqlite> COMMIT;
sqlite> .exit
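A simpler alternative (assuming nothing already in new_database.db needs to be kept) is probably to delete the file and let sqlite3 recreate it straight from the dump:
$ rm location_of/new_database.db
$ sqlite3 location_of/new_database.db < dump_file.db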
I then had some issues with credentials on the Bitnami VM, so I needed to retrieve these (as per the Bitnami documentation) using:
$ sudo cat /home/bitnami/bitnami_credentials
and add this USER_NAME as a TRAC_ADMIN using
$ trac-admin path/to/project/ permission add USER_NAME TRAC_ADMIN
NOTE that before and after this operation, be sure to stop and restart the Bitnami services using:
$ sudo /opt/bitnami/ctlscript.sh stop
$ sudo /opt/bitnami/ctlscript.sh start
I am the guy from Trac Users. You need to understand that the user isn't really stored in the db. You have some tables with columns holding the username, but there is no table for a user. Looking at your post, I think your setup used htdigest, so your user info is in that credentials file. If you cat it you should see something like:
username:realmname:pwhash
I think the hash is MD5, but it doesn't really matter for your problem. So if you want to make a new user you have to use:
htdigest [ -c ] passwdfile realm username
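For example (the passwdfile path and realm here are only placeholders; use whatever your Apache/Trac configuration points at):
$ htdigest /path/to/trac.htdigest TracRealm new_username
(Use -c only the first time, to create the file.)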
Then you should use trac-admin to give the permission, and at that point your user should be able to log in.
Cheers
Markus
So my task is to write an assignment in Node.js and Bootstrap with a MongoDB database.
My next task is to share this with the assignee in a way that lets them run the project in their local environment.
I can push the code to Git and share that, but how do I share the database as well?
You can see the MongoDB documentation for that:
mongodump -d <database_name> -o <directory_backup>
mongorestore -d <database_name> <directory_backup>
There is also a video on the mongoexport command.
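As a concrete run of the mongodump/mongorestore commands above (database name and paths are placeholders):
mongodump -d mydb -o ./backup
mongorestore -d mydb ./backup/mydb
Note that mongorestore is pointed at the per-database subdirectory that mongodump created.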
You could use a database-as-a-service like mLab so that the database will be the same on both machines.
Or, if you don't want an external database, you could also create a Node.js script to initialise the database the first time the program is run.
Use Studio 3T and choose your collection. Right-click on the collection and export it in your desired format. Your colleague can then import it the same way from Studio 3T.
I am very new to Google App Engine, please help me to solve my problem.
I have created an instance in Google Cloud SQL, and when I import an SQL file it shows me an error like this:
ERROR 1227 (42000) at line 1088: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
How do I add the SUPER privilege to my instance?
As stated in the Cloud SQL documentation:
The SUPER privilege is not supported.
You can take a look at this page that explains how to import data to a Cloud SQL instance.
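In practice the import usually amounts to uploading the dump to a Cloud Storage bucket and importing it from there, e.g. (bucket, instance and database names are placeholders):
gsutil cp dump.sql gs://my-bucket/dump.sql
gcloud sql import sql my-instance gs://my-bucket/dump.sql --database=my_database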
I also faced the same issue, but the problem was in the dumped SQL file. When exporting the database, use these flags:
--hex-blob --skip-triggers --set-gtid-purged=OFF
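Put together, the export command might look something like this (host, user and database name are placeholders):
mysqldump -h <host> -u <user> -p --hex-blob --skip-triggers --set-gtid-purged=OFF <database_name> > dump.sql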
Here is the complete documentation of how to do it (https://cloud.google.com/sql/docs/mysql/import-export/importing). Once the data is exported, it can be imported using the command line, the gcloud shell, or the import option in the Cloud SQL console.
I used the import feature of the Cloud SQL console and it worked for me.
I ran into the same error when backporting a gzipped dump (procured with mysqldump from a 5.1 version of MySQL) into a Google Cloud SQL instance of MySQL 5.6. The following statement in the SQL file was the problem:
DEFINER=`username`@`%`
The solution that worked for me was removing all instances of it using sed:
cat db-2018-08-30.sql | sed -e 's/DEFINER=`username`@`%`//g' > db-2018-08-30-CLEANED.sql
After removal, the backport completed with no errors. Apparently the SUPER privilege, which isn't available in Google Cloud SQL, is needed to apply a DEFINER.
Another reference: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
Good luck!
I faced the same issue. You could try giving the SUPER privilege to the user, but it isn't available in GCP Cloud SQL.
The statement
DEFINER=`username`@`%`
is an issue in your backup dump.
The workaround is to remove all such entries from the SQL dump file and then import the data from the GCP console:
cat DUMP_FILE_NAME.sql | sed -e 's/DEFINER=`<username>`@`%`//g' > NEW-CLEANED-DUMP.sql
After removing the entries from the dump, you can try reimporting; it should now complete successfully.
For the use case of copying between databases within the same instance, it seems the only way to do this is using mysqldump, to which you have to pass some special flags so that it works without SUPER privileges. This is how I copied from one database to another:
DB_HOST=... # set to 127.0.0.1 if using cloud-sql proxy
DB_USER=...
DB_PASSWORD=...
SOURCE_DB=...
DESTINATION_DB=...
mysqldump --hex-blob --skip-triggers --set-gtid-purged=OFF --column-statistics=0 -h $DB_HOST -u $DB_USER -p"$DB_PASSWORD" $SOURCE_DB \
| mysql -h $DB_HOST -u $DB_USER -p"$DB_PASSWORD" $DESTINATION_DB
Or if you just want to dump to a local file and do something else with it later:
mysqldump --hex-blob --skip-triggers --set-gtid-purged=OFF --column-statistics=0 -h $DB_HOST -u $DB_USER -p"$DB_PASSWORD" $SOURCE_DB \
> $SOURCE_DB.sql
See https://cloud.google.com/sql/docs/mysql/import-export/exporting#export-mysqldump for more info.
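To load that local dump into another database later, the matching import is just mysql with input redirection (same placeholder variables as above):
mysql -h $DB_HOST -u $DB_USER -p"$DB_PASSWORD" $DESTINATION_DB < $SOURCE_DB.sql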
This is about how the data is exported. When you export from the console, it exports the whole instance, not just the schema, which requires the SUPER privilege for the project in which it was created. To export data to another project, simply export by targeting the schema(s) in the advanced options. If you run into a "could not find storage or object" error, save the exported schema locally, then upload it to your other project's storage and select it from there.
In case somebody is searching for this in 2018 (at least August), the solution is:
Create a database. You can do this from the UI: just go to the Database menu and click "Create a database".
After you click "Import" and select your sql_dump (previously saved in a bucket), press "Show advanced options" and select your DB (not that advanced, are they?!). Otherwise, the default is the system mysql database, which of course cannot support the import.
Happy importing.
I solved this by creating a new database in the SQL instance (the default database is sys for MySQL).
Steps (non-CLI version):
1) In GCP > SQL > Databases, create a new database, e.g. newdb
2) In your SQL script, add: USE newdb;
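If you don't want to edit the dump by hand, that line can be prepended with GNU sed, e.g. (the file name is a placeholder):
sed -i '1i USE newdb;' sql_dump.sql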
Hope that helps someone
The SUPER privilege is reserved for GCP itself and is not granted to users.
For your question, you need to import the data into a database of your own, one in which you have the required permissions.
I have a question about importing an SQL file through the Postgres CLI. I may have been importing my file improperly, or I may have some user or database privilege issue; these are just my hunches. I am trying to pinpoint the cause of this message after importing an SQL file.
The message that I get is:
No relations found.
The steps I took to get into Postgres are:
I typed in:
sudo -i -u postgres
psql
Then I created a new role, altered the role's permissions,
and then created a new database as well.
I got all my commands from this site: http://blog.jasonmeridth.com/posts/postgresql-command-line-cheat-sheet/
The last step was importing an SQL file by typing:
psql -d db_name_dev -U username_dev -f /www/dbexport.sql
Now when I go into the database I created ("db_name_dev") by typing
psql db_name_dev and check for any imported content by typing \dt,
I get
No relations found.
Here is also a table and role list from my command line:
http://screencast.com/t/8ZMqBLNRb
I'm thinking my database might also have some access privilege issue.
Also, here is an additional issue I ran into; hope this helps:
http://screencast.com/t/BJy0ZjrALm6h
Thanks, any feedback would be appreciated.
OK, so after some research and reading, I found out my .sql file was empty. Here are some links I read to learn more about the pg_dump command: dbforums.com/showthread.php?1646161-Postgresql-Restores and "pg_dump vs pg_dumpall? which one to use for database backups?"
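For anyone else landing here: a dump file that actually contains the schema and data can be produced with pg_dump before importing, e.g. (the source database name and output path are placeholders):
pg_dump -U username_dev -d source_db -f /www/dbexport.sql
and then imported with psql -f as above.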
A while back I needed to parse a bunch of Serv-U FTP log files and store them in a database so people could report on them. I ended up developing a small C# app to do the following:
Look for all files in a dir that have not been loaded into the db (there is a table of previously loaded files).
Open a file and load all the lines into a list.
Loop through that list and use RegEx to identify the kind of row (CONNECT, LOGIN, DISCONNECT, UPLOAD, DOWNLOAD, etc.), parse it into a specific kind of object corresponding to the kind of row, and add that object to another List.
Loop through each of the different object lists and write each one to the associated database table.
Record that the file was successfully imported.
Wash, rinse, repeat.
It's ugly but it got the job done for the deadline we had.
The problem is that I'm in a DBA role and I'm not happy with running a compiled app as the solution to this problem. I'd prefer something more open and more DBA-oriented.
I could rewrite this in PowerShell but I'd prefer to develop an SSIS package. I couldn't find a good way to split input based on RegEx within SSIS the first time around and I wasn't familiar enough with SSIS. I'm digging into SSIS more now but still not finding what I need.
Does anybody have any suggestions about how I might approach a rewrite in SSIS?
I have to do something similar with Exchange logs. I have yet to find an easier all-SSIS solution. Having said that, here is what I do:
First I use logparser from Microsoft and the bulk copy functionality of SQL Server 2005.
I copy the log files to a directory that I can work with them in.
I created an SQL file that will parse the logs. It looks similar to this:
SELECT TO_Timestamp(REPLACE_STR(STRCAT(STRCAT(date,' '), time),' GMT',''),'yyyy-M-d h:m:s') as DateTime,
       [client-ip], [Client-hostname], [Partner-name], [Server-hostname], [server-IP],
       [Recipient-Address], [Event-ID], [MSGID], [Priority], [Recipient-Report-Status],
       [total-bytes], [Number-Recipients],
       TO_Timestamp(REPLACE_STR([Origination-time], ' GMT',''),'yyyy-M-d h:m:s') as [Origination Time],
       Encryption, [service-Version], [Linked-MSGID], [Message-Subject], [Sender-Address]
INTO '%outfile%'
FROM '%infile%'
WHERE [Event-ID] IN (1027;1028)
I then run the previous sql with logparser:
logparser.exe file:c:\exchange\info\name_of_file_goes_here.sql?infile=c:\exchange\info\logs\*.log+outfile=c:\exchange\info\logs\name_of_file_goes_here.bcp -i:W3C -o:TSV
Which outputs a bcp file.
Then I bulk copy that bcp file into a premade database table in SQL Server with this command:
bcp databasename.dbo.table in c:\exchange\info\logs\name_of_file_goes_here.bcp -c -t"\t" -T -F 2 -S server\instance -U userid -P password
Then I run queries against the table. If you can figure out how to automate this with SSIS, I'd be glad to hear what you did.