I have installed PostgreSQL 9.4.6 in a Docker image, with Docker version 1.10.1, based on this official image:
https://github.com/docker-library/postgres/blob/443c7947d548b1c607e06f7a75ca475de7ff3284/9.4/Dockerfile
As described on the image's page, to create the initial databases I add my SQL script to "/docker-entrypoint-initdb.d":
https://hub.docker.com/_/postgres/
After some trouble I found that when the SQL script contains a query that creates a database whose name includes a '-', the container crashes (i.e. the container exits right after starting).
A query whose database name contains no '-' works fine: the container doesn't crash and I can access that database.
For example, this query runs fine:
create database 1stName_2ndName with owner vagrant;
But I tried both of these queries individually and both fail:
create database '1stName-2ndName' with owner vagrant;
or
create database 1stName-2ndName with owner vagrant;
Note: the queries above use no double quotes. The vagrant user is already created and works fine.
The database I need is named 1stName-2ndName. Can anybody help me figure out the issue?
On my local Postgres installation, the following query works without a problem:
create database "1stName-2ndName" with owner vagrant;
I'm using a PostgreSQL (9.6) database in my project, which is currently in the development stage.
For production I want to use an exact copy/mirror of the database cluster with a slightly different name.
I know I can make a backup and restore it under a different cluster name, but is there something like a mirror function in the psql client or pgAdmin (v4) that mirrors all my schemas and tables and puts them under a new cluster name?
In PostgreSQL you can use any existing database on the server as a template when creating a new database with that content (the source database needs to be idle for this to work). You can use the following SQL statement:
CREATE DATABASE newdb WITH TEMPLATE someDbName OWNER dbuser;
But you need to make sure no user is currently connected to or using that database, otherwise you will get the following error:
ERROR: source database "someDbName" is being accessed by other users
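If the source database still has open connections, one option (on PostgreSQL 9.2 and later, where pg_stat_activity exposes pid) is to terminate them first; 'someDbName' below is just the placeholder name from the statement above:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'someDbName' AND pid <> pg_backend_pid();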
Hope that helped ;)
I'm following this tutorial on Microsoft Docs. I've reached the part where I use the "Data Migration Assistant", but after selecting the target Azure database and clicking "Next", I get the following error:
An unexpected error occurred.
Current principal does not have CONTROL permission on securable AzureDatabaseName of class DATABASE.
I'm using the only user on the Azure SQL server, the server admin, which should have all permissions. I've verified that the user is in 'db_owner' by using IS_ROLEMEMBER.
Am I missing something?
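For reference, this is roughly the check I ran (it returns 1 when the current user is a member of the role):
SELECT IS_ROLEMEMBER('db_owner');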
I had the same issue. This seems to be a bug in Azure SQL databases. If you have dots in the database name it does not work. I replaced the dots with slashes and it worked for me.
You do not need to recreate the database. A rename worked fine for me:
You have to make sure that no one else is using the database!
Connect to the master database and execute the following script on the Azure SQL server:
USE master;
GO
ALTER DATABASE [my.database]
MODIFY NAME = [my-database];
GO
Here is a link on how to rename Azure SQL databases:
https://learn.microsoft.com/en-us/sql/relational-databases/databases/rename-a-database
Also make sure to create a firewall rule for your incoming connection. This error can be a bit of a red herring.
I deleted everything: the database, the SQL server, and the resource group. Then I recreated everything using the same names except the database name, which previously contained dots, and this time the migration tool worked. I guess I just encountered some bug.
If you have dots in the target database name, you have to remove or replace them.
For example: 'demo.customerdb' to 'demo-customerdb'
You can use SQL Server Management Studio to rename the database:
connect to the target database server
select the target database
press the "F2" key, or right-click the target database and select "Rename"
remove the dots (.) from the database name and that's it! :)
After that, you can try the migration process again from the start.
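If you prefer T-SQL to the Management Studio UI, the rename for the example names above would look roughly like this (the same approach as the ALTER DATABASE answer earlier):
ALTER DATABASE [demo.customerdb] MODIFY NAME = [demo-customerdb];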
I have a problem with my database. I installed PostgreSQL 9.5 on my Ubuntu server. I changed the postgresql.conf file to bind the PostgreSQL server to localhost. This allows me to run pgAdmin and connect to my database by also forwarding port 5432, which PostgreSQL runs on.
The problem I am experiencing is that I only see the default database 'postgres', but not my newly created one, 'games' (I created it by running create database games while connected to the server as the postgres user).
Here is a screenshot of the pgAdmin application with all the property values I use to connect to my server.
As you can see from the first picture, I use the same permissions as for the postgres database: the column is blank, which should grant access to everyone. I know I have to change that later and limit it to my postgres user, but for now I will leave it that way. Once I manage to see my 'games' database, I will start tightening the security.
UPDATE: I granted all access to the database 'games', as visible in the third screenshot; its access privileges are now different. This did not help, I still could not see the database when connecting to the server with pgAdmin. I saw that someone with a similar problem right-clicked the server and clicked 'New database'. This seems to have created a new database, because as you can see in pgAdmin, the application manages to find the score table I created inside pgAdmin. The reason I believe this is the case is that running the same SQL while connected to the server, postgres=# select * from score;, results in ERROR: relation "score" does not exist LINE 1: select * from score;.
I managed to find the problem. One issue was that I had (without being aware of it) installed a PostgreSQL server on my local machine; it seems I installed it together with pgAdmin. So every time I would connect to 'my server', I would actually establish a connection to the localhost server and not my remote server. So I just uninstalled the local server and installed only the pgAdmin client.
The second problem was that the file /etc/postgresql/9.5/main/pg_hba.conf had to be changed. So I ran:
sudo vi /etc/postgresql/9.5/main/pg_hba.conf
and changed the line
# Database administrative login by Unix domain socket
local all postgres peer
to
# Database administrative login by Unix domain socket
local all postgres md5
Once that was changed, I had to reload the configuration by executing:
sudo /etc/init.d/postgresql reload
I would also point out that it is important to have the postgres user exist as both a Unix user and a database user with the same password. I found all this information here.
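For example, a minimal sketch of lining up the two passwords (the password value is only a placeholder; set the database password while peer authentication still works, or from any superuser session):
sudo passwd postgres
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'postgres';"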
Try granting access privileges explicitly for your new database.
I believe a blank access privileges column means the object has DEFAULT access privileges. The default can mean no public access for tables, columns, schemas, and tablespaces. For more info: http://www.postgresql.org/docs/9.4/static/sql-grant.html
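For example, a minimal sketch of granting privileges explicitly on the 'games' database from the question (the exact privilege list is only an illustration):
GRANT CONNECT ON DATABASE games TO PUBLIC;
GRANT ALL PRIVILEGES ON DATABASE games TO postgres;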
I have to deploy a SQL Server 2008 R2 database from my development environment, hosted on a VirtualBox VM, to a brand new test server. Both servers use integrated Windows authentication.
Part of the problem was that the test server uses SQL Server 2008 (Express). I have managed to export schema creation scripts and raw data inside an Access database, but this is not the subject of the question: apparently the database was correctly imported on the target environment.
However, when I started the web site that depends on the exported database, I got some errors that do not appear when running in the development environment. After some research I found that the problem is caused by a small stored procedure.
This stored procedure creates a table on the fly that is dropped when no longer needed with a syntax like this one:
create table tmp_Codes (Code nvarchar(max))
When the test environment executes this statement, it does create the table, but the table has the username attached to it, something like:
dbo.[NT AUTHORITY\NETWORK SERVICE].tmp_Codes
The subsequent code cannot find the newly created table and fails all operation on it.
I understand that this design is somewhat broken, but I inherited this bunch of SQL scripts from a working environment and I cannot understand how this can work.
Any ideas?
One suggestion was to use a local temporary table instead, which lives in tempdb and does not depend on the user's default schema:
CREATE TABLE #tmp_Codes (Code nvarchar(max))
Sorry, it was a very trivial error: the user NT AUTHORITY\NETWORK SERVICE had the default schema set to its own name instead of dbo.
After changing the setting, the stored procedure worked as expected.
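For reference, the setting can also be changed in T-SQL; a sketch using the login from the error above:
ALTER USER [NT AUTHORITY\NETWORK SERVICE] WITH DEFAULT_SCHEMA = dbo;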
I have an application that uses EF Code First against a SQL Server 2012 database. I'm using the DropCreateDatabaseIfModelChanges initializer.
I have a database on my development machine that I want to move over to my testing machine, and to do that I'm attempting to use backup/restore. Unfortunately, having done that, I get the dreaded "Model compatibility cannot be checked because the database does not contain model metadata" error.
I don't understand why this is the case - the database works OK on my dev machine. Is it not possible to transfer the database to another machine?
Solved: the issue was that the __MigrationHistory table, while present, was not accessible to the application because of insufficient database privileges. I (temporarily) made the user a DBO on the database, and it all worked fine. (Hat tip to Jayantha).
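The kind of temporary privilege change described above can be done along these lines (SQL Server 2012 syntax; the user name is a placeholder):
ALTER ROLE db_owner ADD MEMBER [MyAppUser];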
Now the metadata table has been removed from the Code First database and the __MigrationHistory table has been added to the system tables. You can try running the Enable-Migrations command in the Package Manager Console. Here are more details.
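If you go that route, the Package Manager Console steps look roughly like this (the migration name InitialCreate is only an example):
PM> Enable-Migrations
PM> Add-Migration InitialCreate
PM> Update-Database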