Improperly importing SQL file in Postgres CLI

I have a question about importing a SQL file through the Postgres CLI. I may have been importing my file improperly, or I may have a user or database privilege issue; those are just my hunches. I am trying to pinpoint the cause of the message I get after importing a SQL file.
The message that I get is:
No relations found.
The steps I did to get into Postgres are:
I typed in:
sudo -i -u postgres
psql
Then I created a new role, altered its permissions, and created a new database as well.
I got all my commands from this site: http://blog.jasonmeridth.com/posts/postgresql-command-line-cheat-sheet/
The last step was importing a SQL file by typing:
psql -d db_name_dev -U username_dev -f /www/dbexport.sql
Now when I connect to the database I created ("db_name_dev") by typing
psql db_name_dev and check for any imported content by typing \dt,
I get
No relations found.
Here is also a table and role list from my command line:
http://screencast.com/t/8ZMqBLNRb
I'm thinking my database might also have an access privilege issue.
Here is also an additional issue I ran into, in case it helps:
http://screencast.com/t/BJy0ZjrALm6h
Thanks, any feedback would be appreciated.

OK, so after some research and reading, I found out my .sql file was empty. Here are some links I read to learn more about the pg_dump command: dbforums.com/showthread.php?1646161-Postgresql-Restores and "pg_dump vs pg_dumpall? which one to use to database backups?"
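For anyone who hits the same thing: a quick way to avoid importing an empty dump is to regenerate it with pg_dump and inspect it before loading. The source database and user names below are placeholders; the target names are just the ones from the question:
# on the machine that still has the data, dump the source database to a file
pg_dump -U source_user -d source_db -f /www/dbexport.sql
# sanity-check that the file actually contains SQL before importing it
ls -lh /www/dbexport.sql
head -n 20 /www/dbexport.sql
# load it into the new database and verify
psql -d db_name_dev -U username_dev -f /www/dbexport.sql
psql -d db_name_dev -U username_dev -c '\dt'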

Related

Import/Export PostgreSQL db "without" pg_dump or sql file / backup, etc...?

I need to import an old db into a new Postgres server.
Is there a way to migrate an old database to a new server without using pg_dump?
I don't have the SQL file or the old server's backup file, nor the user and password, just the physical files in the "\data" folder. Is there any way to do this?
The target server is the same version as the old server.
Thanks.
Well as a test you could try:
pg_ctl start -D $DATA
Where pg_ctl comes from the target version and $DATA is the /data directory. You have not said how you came to have just a /data directory; if it came from an unclean shutdown or a corrupted drive, there is a chance the server will not start.
UPDATE
To get around the auth failure, find pg_hba.conf and create or modify the local connection entry to use the trust method. For more info see the pg_hba and trust documentation. Then you should be able to connect like:
psql -d some_db -U postgres
Once in, you can use ALTER ROLE to change the password:
ALTER ROLE <role_name> WITH PASSWORD 'new_password';
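For illustration, the trust change mentioned in the UPDATE might look like this (file location and existing entries vary by installation; this is an assumed example, not the asker's actual configuration):
# in pg_hba.conf, the local entry switched to the trust method
local   all   all   trust
# reload the configuration so the change takes effect
pg_ctl reload -D $DATA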

Unable to DELETE or GET couchdb2 databases

I have a testing script that creates and deletes testing databases. At some point today it started failing. Digging further it looks like several of my testing databases are in an inconsistent state.
The databases appear in Fauxton with the message "This database failed to load." I am unable to view the database contents in this interface. Their names, which are usually links, are now plain text.
Issuing GET and DELETE commands with curl shows the following errors:
$ curl -s -X DELETE http://username:password@0.0.0.0:5984/dbname
{"error":"error","reason":"internal_server_error"}
$ curl -s -X GET http://username:password@0.0.0.0:5984/dbname
{"error":"internal_server_error","reason":"No DB shards could be opened.","ref":2413987899}
I have looked inside the couchdb2 data directory and I do see that shards exist for these databases.
What can I do to delete these databases? I am not sure if I can do this by manually deleting files in the couchdb2 data directory.
Have you solved your issue yet? I had this same problem, and ultimately ended up just installing a new CouchDB 2.1.0 instance and replicating to it before taking down the original. I suspect it might have had something to do with CouchDB not liking its default choice of "couchdb@localhost" as the name for a node, because it was constantly telling me that was an illegal hostname.
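For reference, the replication itself can be kicked off over HTTP. This is only a sketch with placeholder hosts, credentials, and database names, and it assumes the source database is still readable:
curl -X POST http://admin:password@old-host:5984/_replicate \
  -H 'Content-Type: application/json' \
  -d '{"source": "http://admin:password@old-host:5984/dbname",
       "target": "http://admin:password@new-host:5984/dbname",
       "create_target": true}'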

Google cloud sql instance super privilege error

I am very new to Google App Engine; please help me solve my problem.
I have created an instance in Google Cloud SQL, and when I import a SQL file it shows me an error like this:
ERROR 1227 (42000) at line 1088: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
How do I add the SUPER privilege to my instance?
As stated in the Cloud SQL documentation:
The SUPER privilege is not supported.
You can take a look at this page that explains how to import data to a Cloud SQL instance.
I also faced the same issue, but the problem was in the dumped SQL file. When exporting the database, use these flags:
--hex-blob --skip-triggers --set-gtid-purged=OFF
Here is the complete documentation on how to do it: https://cloud.google.com/sql/docs/mysql/import-export/importing. Once the data is exported, it can be imported using the command line or the gcloud shell, or via the import option in the Cloud SQL console.
I used the import feature of the Cloud SQL console and it worked for me.
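For reference, a full export command using those flags might look like the following (host, user, and database names are placeholders, not values from the question):
mysqldump -h DB_HOST -u DB_USER -p \
  --hex-blob --skip-triggers --set-gtid-purged=OFF \
  SOURCE_DB > dump.sql
# upload dump.sql to a Cloud Storage bucket, then import it from the Cloud SQL console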
I ran into the same error when backporting a gzipped dump (procured with mysqldump from a 5.1 version of MySQL) into a Google Cloud SQL instance of MySQL 5.6. The following statement in the SQL file was the problem:
DEFINER=`username`@`%`
The solution that worked for me was removing all instances of it using sed:
cat db-2018-08-30.sql | sed -e 's/DEFINER=`username`@`%`//g' > db-2018-08-30-CLEANED.sql
After the removal, the backport completed with no errors. Apparently the SUPER privilege, which isn't available in Google Cloud SQL, is needed to set a DEFINER.
Another reference: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
Good luck!
I faced the same issue. You could try giving the user the SUPER privilege, but it isn't available in GCP Cloud SQL.
The statement
DEFINER=`username`@`%`
is an issue in your backup dump.
The workaround is to remove all such entries from the SQL dump file and import the data from the GCP console:
cat DUMP_FILE_NAME.sql | sed -e 's/DEFINER=`<username>`@`%`//g' > NEW-CLEANED-DUMP.sql
After removing the entries from the dump, you can try reimporting.
For the use case of copying between databases within the same instance, it seems the only way to do this is using mysqldump, to which you have to pass some special flags so that it works without SUPER privileges. This is how I copied from one database to another:
DB_HOST=... # set to 127.0.0.1 if using cloud-sql proxy
DB_USER=...
DB_PASSWORD=...
SOURCE_DB=...
DESTINATION_DB=...
mysqldump --hex-blob --skip-triggers --set-gtid-purged=OFF --column-statistics=0 -h $DB_HOST -u $DB_USER -p"$DB_PASSWORD" $SOURCE_DB \
| mysql -h $DB_HOST -u $DB_USER -p"$DB_PASSWORD" $DESTINATION_DB
Or if you just want to dump to a local file and do something else with it later:
mysqldump --hex-blob --skip-triggers --set-gtid-purged=OFF --column-statistics=0 -h $DB_HOST -u $DB_USER -p"$DB_PASSWORD" $SOURCE_DB \
> $SOURCE_DB.sql
See https://cloud.google.com/sql/docs/mysql/import-export/exporting#export-mysqldump for more info.
It's about how the data is exported. When you export from the console, it exports the whole instance, not just the schema, which requires the SUPER privilege for the project in which it was created. To export data to another project, export by targeting the schema(s) in the advanced options. If you run into "could not find storage or object", save the exported schema locally, upload it to your other project's storage, and then select it from there.
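If you prefer the command line to the console, a schema-targeted export can also be done with gcloud. This is a sketch, not steps from the answer above; the instance, bucket, and database names are placeholders:
gcloud sql export sql my-instance gs://my-bucket/dump.sql.gz \
  --database=my_database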
In case somebody is searching for this in 2018 (at least August), the solution is:
Create a database. You can do this from the UI: just go to the Databases menu and click "Create a database".
After you click "Import" and select your SQL dump (previously saved in a bucket), press "Show advanced options" and select your DB (not that advanced, is it?!). Otherwise, the default is the system mysql database, which of course cannot support the import.
Happy importing.
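For the command-line equivalent of those console steps, assuming the dump is already in a bucket and the target database has been created (instance, bucket, and database names are placeholders):
gcloud sql import sql my-instance gs://my-bucket/sql_dump.sql \
  --database=newdb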
I solved this by creating a new database in the SQL instance. (The default database is sys for MySQL.)
Steps (non-CLI version):
1) In GCP > SQL > Databases, create a new database, e.g. newdb
2) In your SQL script, add: USE newdb;
Hope that helps someone.
The SUPER privilege is exclusively reserved for GCP.
For your question, you need to import the data into your own database, in which you have permission.

Dropping a postgres database in cmdline, still seeing the database when \list

I'm trying to drop my database and create a new one through the command line.
I log in using psql postgres and then do a \list and see the two databases I created, which I now want to delete. So I tried DROP DATABASE databasename;
I don't see any error while executing that statement, but when I \list again to see if those DBs are deleted, I still see that they exist. Can someone please tell me why this could happen, and how to reliably delete those DBs?
There are a couple caveats to DROP DATABASE:
It can only be executed by the database owner.
It cannot be executed while you or anyone else are connected to the target database.
I generally use the dropdb command-line tool to do this, since it's a wrapper around DROP DATABASE which doesn't require you to explicitly connect first. It still has the caveat that there can't be any users currently connected to the database, but it's generally quicker/easier to use.
I would recommend you try issuing a command like this:
dropdb -h <host> -U <user> -p <port> <name of db to drop>
Similarly, you can use the createdb command-line tool to create a database.
More info on DROP DATABASE: http://www.postgresql.org/docs/current/static/sql-dropdatabase.html
Edit:
Also, it is worth looking in the Postgres log (likely in /var/log/postgresql by default) to see if perhaps there is anything in there that wasn't surfaced in the results.
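If the drop keeps failing because other sessions are still connected, something like the following usually clears them first (PostgreSQL 9.2 or newer; the database name is a placeholder):
# terminate any other sessions connected to the target database
psql -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'databasename' AND pid <> pg_backend_pid();"
# then drop it and, if needed, recreate it
dropdb databasename
createdb databasename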

Easy way to view postgresql dump files?

I have a ton of PostgreSQL dump files I need to peruse for data. Do I have to install PostgreSQL and "recover" each one of them into a new database one by one? Or, I'm hoping, is there a PostgreSQL client that can simply open them up so I can peek at the data, and maybe even run a simple SQL query?
The dump files are all from a Postgresql v9.1.9 server.
Or maybe there's a tool that can easily make a database "connection" to the dump files?
UPDATE: These are not text files. They are binary. They come from Heroku's backup mechanism; this is what Heroku says about how they create their backups:
PG Backups uses the native pg_dump PostgreSQL tool to create its
backup files, making it trivial to export to other PostgreSQL
installations.
This was what I was looking for:
pg_restore db.bin > db.sql
Thanks @andrewtweber
Try opening the files with a text editor - the default dump format is plain text.
If the dump is not plain text, try the pg_restore -l your_db_dump.file command. It will list all objects in the database dump (tables, indexes, ...).
Another possible way (may not work, haven't tried it) is to grep through the output of the pg_restore your_db_dump.file command. If I understood the manual correctly, the output of pg_restore is just a sequence of SQL statements that will rebuild the db.
In newer versions you need to specify the -f flag with a filename, or '-' for stdout:
pg_restore -f - dump_file.bin
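So, to just peek at the contents without restoring anything into a database, that output can be piped through a pager or grep (assuming pg_restore 12 or newer and a custom-format dump; the file name is a placeholder):
# page through the regenerated SQL
pg_restore -f - your_db_dump.file | less
# or look only for table definitions
pg_restore -f - your_db_dump.file | grep -n 'CREATE TABLE'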
I had this same problem and I ended up doing this:
Install Postgresql and PGAdmin3.
Open PGAdmin3 and create a database.
Right click the db and click restore.
Ignore file type.
Select the database dump file from Heroku.
Click Restore.
pg_restore -f - db.bin > db.sql
Dump files are usually text files, if not compressed, and you can open them with a text editor. Inside you will find all the queries that allow reconstruction of the database.
If you use pgAdmin on Windows, you can just back up the file as plain text; there is an option for that when you do a backup, instead of using pg_dump at the command line.
