I installed PostgreSQL and tried to create a new database, but the database was never created:
Server [localhost]:
Database [postgres]:
Port [5433]:
Username [postgres]:
psql (9.2.17)
WARNING: Console code page (850) differs from Windows code page (1252)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.
postgres=# createdb gps_heatmap
postgres-# \l
                                          List of databases
   Name    |  Owner   | Encoding |       Collate       |        Ctype        |   Access privileges
-----------+----------+----------+---------------------+---------------------+-----------------------
 database4 | postgres | UTF8     | English_Canada.1252 | English_Canada.1252 |
 postgres  | postgres | UTF8     | English_Canada.1252 | English_Canada.1252 |
 template0 | postgres | UTF8     | English_Canada.1252 | English_Canada.1252 | =c/postgres          +
           |          |          |                     |                     | postgres=CTc/postgres
 template1 | postgres | UTF8     | English_Canada.1252 | English_Canada.1252 | =c/postgres          +
           |          |          |                     |                     | postgres=CTc/postgres
(4 rows)
postgres-#
The database is not created. I couldn't solve this problem; any help is appreciated.
You are missing the terminating ;. That is why the prompt changed from postgres=# to postgres-#: psql considers the statement unfinished and is waiting for more input. Note also that createdb is a shell utility, not an SQL command; inside psql, use:
CREATE DATABASE gps_heatmap;
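Alternatively, run createdb from the operating-system shell (where it does work, since it is a wrapper around CREATE DATABASE):
createdb -U postgres gps_heatmap
Either way, \l inside psql should then show gps_heatmap in the list.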
Here is the console input/output, where I'm trying to access a DB but psql cannot find it. I have tried changing the capitalization, but the result is the same:
(base) username#MacBook-Pro-Ruslan ~ % psql -U username Employees
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: database "Employees" does not exist
But I do have this DB in my psql app; here is a screenshot of it.
UPD: I just checked all the databases from the command line:
postgres-# \l
                                      List of databases
      Name      |     Owner      | Encoding | Collate | Ctype |         Access privileges
----------------+----------------+----------+---------+-------+-----------------------------------
 postgres       | ruslanpilipyuk | UTF8     | C       | C     |
 ruslanpilipyuk | ruslanpilipyuk | UTF8     | C       | C     |
 template0      | ruslanpilipyuk | UTF8     | C       | C     | =c/ruslanpilipyuk                +
                |                |          |         |       | ruslanpilipyuk=CTc/ruslanpilipyuk
 template1      | ruslanpilipyuk | UTF8     | C       | C     | =c/ruslanpilipyuk                +
                |                |          |         |       | ruslanpilipyuk=CTc/ruslanpilipyuk
(4 rows)
It looks like my psql app and the console are not connected to the same server, and no databases have been transferred between them. How could I get access, from the console, to the DB that I already have in the app? The DBs shown in the console's list are the ones I created manually in the console.
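One way to check whether the app and the console are talking to the same server is to run these in both and compare the results; if the values differ, they are two separate PostgreSQL instances, which would explain the missing database:
SHOW data_directory;  -- viewing this setting may require a superuser
SHOW port;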
I have a shell script pulling data from a server into a PostgreSQL table.
df -g | awk 'BEGIN{OFS=","}NR>1{$1=$1; print}' > /data/metric.csv
psql -h localhost -d metrics -U postgres -c "copy tablename from STDIN with delimiter as ',';" < /data/metric.csv
Displays as:
 filesystem     | gb_blocks | free  | %used | iused | %iused | mounted_on
 /dev/hd2       | 16.75     | 12.60 | 25%   | 79098 | 3%     | /usr
 /dev/hd9var    | 8.00      | 6.00  | 25%   | 11965 | 1%     | /var
 /dev/hd3       | 36.75     | 18.83 | 49%   | 5614  | 1%     | /tmp
 /dev/hd1       | 3.25      | 3.11  | 5%    | 674   | 1%     | /home
 /dev/hd11admin | 0.25      | 0.25  | 1%    | 16    | 1%     | /admin
 /proc          | -         | -     | -     | -     | -      | /proc
I'm working with PostgreSQL on Ubuntu and pulling the info from an AIX server. I'd like to add a column with a timestamp for every time new data is added to the table, because right now it all blends together. I've tried to add another column for the timestamp and give it a value, but the timestamp isn't in the CSV file and I'm not sure how to add it either. I appreciate any help I can get.
Create the table with an extra timestamp column that has a default value such as current_date or now():
CREATE TABLE IF NOT EXISTS metrics
(
filesystem text ,
gb_blocks text ,
free text ,
per_used text ,
iused text ,
per_iused text ,
mounted_on text ,
load_dttm timestamp without time zone DEFAULT now()
);
Then list the columns explicitly (everything except load_dttm) while loading the data, so the default fills in the timestamp:
psql -h localhost -d metrics -U postgres -c "copy metrics(filesystem,gb_blocks,free,per_used,iused,per_iused,mounted_on) from STDIN with delimiter as ',';" < /data/metric.csv
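Since now() returns the transaction start time, every row loaded by a single COPY gets the identical timestamp, so each batch can be picked out later, for example:
-- count the rows belonging to each load
SELECT load_dttm, count(*)
FROM metrics
GROUP BY load_dttm
ORDER BY load_dttm;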
I have the following docker-compose definition file for a MariaDB server:
version: "3.8"
...
services:
database:
command: ["mysqld", "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
container_name: mariadb
environment:
MARIADB_DATABASE: sample_database
MARIADB_INITDB_SKIP_TZINFO: "true"
MARIADB_PASSWORD_FILE: /run/secrets/mariadb_user_password
MARIADB_ROOT_PASSWORD_FILE: /run/secrets/root_user_password
MARIADB_USER: mariadb
TZ: "Etc/UTC" # https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
image: docker.io/library/mariadb:10.6-focal # https://hub.docker.com/_/mariadb/
networks:
- global-network
ports:
- "3306:3306"
restart: on-failure
secrets:
- mariadb_user_password
- root_user_password
stdin_open: true
tty: true
volumes:
- "database-volume:/var/lib/mysql"
I'm not that well versed in this topic, but when I want to check the character set and collation used by the server, I usually execute these queries:
> show variables like 'char%';
+------------------------+--------------------------+
|Variable_name |Value |
+------------------------+--------------------------+
|character_set_client |utf8mb4 |
|character_set_connection|utf8mb4 |
|character_set_database |utf8mb4 |
|character_set_filesystem|binary |
|character_set_results |utf8mb4 |
|character_set_server |utf8mb4 |
|character_set_system |utf8mb3 |
|character_sets_dir |/usr/share/mysql/charsets/|
+------------------------+--------------------------+
> show variables like 'collation%';
+--------------------+------------------+
|Variable_name |Value |
+--------------------+------------------+
|collation_connection|utf8mb4_unicode_ci|
|collation_database |utf8mb4_unicode_ci|
|collation_server |utf8mb4_unicode_ci|
+--------------------+------------------+
My understanding is that the settings I have, --character-set-server=utf8mb4 and --collation-server=utf8mb4_unicode_ci, are the correct ones to set both the character set and the collation to UTF-8.
Now my question is: how do I get rid of that utf8mb3 value for character_set_system? It should be set to utf8mb4 along with the other values, or at least that's what I think ;)
Any clues how to set this correctly?
I would like to avoid using any configuration file(s) (like my.cnf) since I'm using the stock Docker image without any modifications.
character_set_system specifies the character set which will be used to store identifiers and other internal information.
It is 3-byte utf8, and you cannot change it unless you change the sources and recompile MariaDB.
Beginning with 10.6, utf8 is mapped to utf8mb3 (and will be mapped to utf8mb4 in later versions); see MDEV-8334.
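You can see the mapping in action on a 10.6 server (a quick sketch; charset_demo is an arbitrary table name):
-- "utf8" is silently treated as an alias for utf8mb3
CREATE TABLE charset_demo (c CHAR(1)) CHARACTER SET utf8;
SHOW CREATE TABLE charset_demo;
-- the output reports DEFAULT CHARSET=utf8mb3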
I was having a similar problem on MariaDB 10.6.5: I was trying to load a dump from AWS and got this error:
ERROR 1253 (42000) at line 26: COLLATION 'utf8mb3_general_ci' is not valid for CHARACTER SET 'utf8mb4'
My config looks like this:
MariaDB [(none)]> SHOW VARIABLES LIKE '%char%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8mb4 |
| character_set_connection | utf8mb4 |
| character_set_database | utf8mb4 |
| character_set_filesystem | binary |
| character_set_results | utf8mb4 |
| character_set_server | utf8mb4 |
| character_set_system | utf8mb3 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
According to this page, https://mariadb.com/kb/en/old-mode/, I had to set old-mode to empty in /etc/mysql/mariadb.conf.d/50-server.cnf:
old-mode=
After restarting the server, the variable changed from
MariaDB [(none)]> SHOW VARIABLES LIKE '%old%';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| old_mode      | UTF8_IS_UTF8MB3 |
+---------------+-----------------+
to
MariaDB [(none)]> SHOW VARIABLES LIKE '%old%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| old_mode      |       |
+---------------+-------+
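Note that old_mode is a dynamic variable, so the same change can also be applied at runtime without editing the config file (it only lasts until the next restart, so keep the 50-server.cnf entry for persistence):
SET GLOBAL old_mode = '';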
So I managed to load the SQL dump, even though character_set_system was still showing utf8mb3.
Hope this helps somebody.
I have Ubuntu 20 on DreamCompute (which is a cloud computing service).
I created a user and a database. Here is the list of databases and users (for some reason, I can't see the database under the matt username).
I went into:
nano /etc/postgresql/13/main/postgresql.conf
nano /etc/postgresql/13/main/pg_hba.conf
and did the whole '*' and '0.0.0.0/0' setup.
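Presumably those edits amount to the usual remote-access settings, along these lines (a sketch, not the poster's exact files):
# postgresql.conf: accept connections on all interfaces
listen_addresses = '*'
# pg_hba.conf: allow password authentication from any IPv4 address
host    all    all    0.0.0.0/0    md5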
postgres=# \l
                              List of databases
   Name    |  Owner   | Encoding | Collate |  Ctype  |   Access privileges
-----------+----------+----------+---------+---------+-----------------------
 postgres  | postgres | UTF8     | C.UTF-8 | C.UTF-8 |
 strapi    | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =Tc/postgres         +
           |          |          |         |         | postgres=CTc/postgres+
           |          |          |         |         | hossein=CTc/postgres
 template0 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |          |          |         |         | postgres=CTc/postgres
 template1 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |          |          |         |         | postgres=CTc/postgres
(4 rows)
As you can see, there is no Superuser attribute and no strapi database under the matt username:
                                   List of roles
 Role name |                          Attributes                          | Member of
-----------+--------------------------------------------------------------+-----------
 matt      |                                                              | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS  | {}
 ubuntu    |                                                              | {}
I'm using my DreamCompute instance's IP address as the host, together with my database name, user, and password, but I get the error message: connection attempt timed out.
Could someone please give me a pointer on why this is happening? I have been working on this for two weeks now and I can't get it to work.
The error message is a connection timeout. That usually means that the port is blocked by a firewall. Check your cloud provider's firewall settings and iptables on your Linux box, in case you have it installed.
If there were a problem with permissions, the error message would be something else.
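For example, assuming ufw or plain iptables is in use, checks like these are common:
# list firewall rules
sudo ufw status verbose
sudo iptables -L -n
# confirm PostgreSQL is actually listening on the expected address and port
sudo ss -tlnp | grep 5432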
As ex4 mentioned above, I needed to reach out to the company I was renting my cloud computer from, but I still could not connect to the database.
The way I got around it: you can SSH into your cloud computer and then connect to the database as localhost, since at that point you are on the same machine as the server.
In DBeaver there is an SSH tab where you set up the tunnel; then you go back to your postgres tab, fill in localhost, the user, the database name, and the user's password, and simply click connect.
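The equivalent tunnel from a plain terminal would look roughly like this (the host name is a placeholder):
# forward local port 5433 to PostgreSQL on the cloud instance
ssh -L 5433:localhost:5432 ubuntu@your-instance-ip
# in another terminal, connect through the tunnel
psql -h localhost -p 5433 -U matt strapi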
Sadly it took weeks to get to this. :/
I want to use the REASSIGN OWNED query to change all objects in 1 database from owner A to owner B.
Let say I have the following databases:
 postgres  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |      |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |      |             |             | postgres=CTc/postgres
 db1       | user1    | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 db2       | user1    | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 db3       | user2    | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
I want to change db1 and all the objects inside it so they are owned by user2. I run:
postgres=# \c db1
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
You are now connected to database "db1" as user "postgres".
db1=# REASSIGN OWNED BY user1 TO user2;
REASSIGN OWNED
The owner changed as it should for db1 and all of its objects. But the command also changed the owner of db2: not the objects in db2, just the database itself (like an ALTER DATABASE statement):
 postgres  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |      |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |      |             |             | postgres=CTc/postgres
 db1       | user2    | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 db2       | user2    | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 db3       | user2    | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
Is this the normal behaviour? How can I run the REASSIGN OWNED without altering other databases?
The documentation quoted in the comment by Daniel Vérité states:
old_role
The name of a role. The ownership of all the objects within the current database, and of all shared objects (databases, tablespaces), owned by this role will be reassigned to new_role.
so this is per spec. If this is not what you want, I think you need to state your use case more fully.
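If the intent is to move only db1 and its contents, one workaround (my sketch, not part of the quoted answer) is to let REASSIGN OWNED run and then hand the unintentionally moved databases back:
-- inside db1: reassigns db1's objects, but also every database and tablespace owned by user1
REASSIGN OWNED BY user1 TO user2;
-- then restore the owner of the databases that should not have changed
ALTER DATABASE db2 OWNER TO user1;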