SYSTEM.ADMIN(ADMIN)=> create table test ( name varchar(20), age int);
CREATE TABLE
SYSTEM.ADMIN(ADMIN)=> alter table test add column dob varchar(20) NOT NULL;
ERROR: ALTER TABLE: not null constraint for column "DOB" not allowed without default value
Do we have to specify a default value after NOT NULL even on an empty table?
SYSTEM.ADMIN(ADMIN)=> alter table test add column dob varchar(20) NOT NULL DEFAULT '0';
ALTER TABLE
Is this expected behavior?
You can create the table from scratch without specifying a default value.
create table test ( name varchar(20)
, age int
, dob varchar(20) NOT NULL );
However, when adding a column, Netezza (which is derived from PostgreSQL) requires you to specify a default value to fill any rows that would otherwise hold nulls. This is expected. The sequence to remove the default afterward is as follows:
create table test ( name varchar(20), age int);
ALTER TABLE test add column dob varchar(20) NOT NULL default 'a';
ALTER TABLE test ALTER COLUMN dob DROP DEFAULT;
This behavior is expected. When altering a table, Netezza uses a versioned table approach. If you add a column to a table, there will actually be two different table versions under the covers which are presented as a single table to the user.
The original table version (the one without the new NOT NULL DEFAULT column) is not modified until a GROOM VERSIONS collapses the versions back into a single underlying table. The upside here is that the alter is fast because it doesn't require a scan/update of the existing rows. Instead, the system knows to provide the DEFAULT value for the column that doesn't exist in the original underlying table version.
When altering a table to add a column with the NOT NULL property, the system requires a DEFAULT specification so that it knows how to represent the added column. This is required whether the table actually has any rows or not.
TESTDB.ADMIN(ADMIN)=> CREATE TABLE TEST ( NAME VARCHAR(20), AGE INT);
CREATE TABLE
TESTDB.ADMIN(ADMIN)=> insert into test values ('mine',5);
INSERT 0 1
TESTDB.ADMIN(ADMIN)=> ALTER TABLE TEST ADD COLUMN DOB VARCHAR(20) NOT NULL DEFAULT '0';
ALTER TABLE
TESTDB.ADMIN(ADMIN)=> insert into test values ('yours',50);
INSERT 0 1
TESTDB.ADMIN(ADMIN)=> select * from test;
NAME | AGE | DOB
-------+-----+-----
yours | 50 | 0
mine | 5 | 0
(2 rows)
The good news is that you can then alter the newly added column to remove that default.
TESTDB.ADMIN(ADMIN)=> ALTER TABLE TEST ALTER COLUMN DOB DROP DEFAULT;
ALTER TABLE
TESTDB.ADMIN(ADMIN)=> \d test
Table "TEST"
Attribute | Type | Modifier | Default Value
-----------+-----------------------+----------+---------------
NAME | CHARACTER VARYING(20) | |
AGE | INTEGER | |
DOB | CHARACTER VARYING(20) | NOT NULL |
Distributed on random: (round-robin)
Versions: 2
TESTDB.ADMIN(ADMIN)=> select * from test;
NAME | AGE | DOB
-------+-----+-----
yours | 50 | 0
mine | 5 | 0
(2 rows)
As a parting note, it's important to groom any versioned tables as promptly as possible in order to keep performance from degrading over time.
TESTDB.ADMIN(ADMIN)=> GROOM TABLE TEST VERSIONS;
NOTICE: Groom will not purge records deleted by transactions that started after 2015-07-27 01:32:16.
NOTICE: If this process is interrupted please either repeat GROOM VERSIONS or issue 'GENERATE STATISTICS ON "TEST"'
NOTICE: Groom processed 1 pages; purged 0 records; scan size unchanged; table size unchanged.
GROOM VERSIONS
TESTDB.ADMIN(ADMIN)=> \d test
Table "TEST"
Attribute | Type | Modifier | Default Value
-----------+-----------------------+----------+---------------
NAME | CHARACTER VARYING(20) | |
AGE | INTEGER | |
DOB | CHARACTER VARYING(20) | NOT NULL |
Distributed on random: (round-robin)
At this point the table is no longer a versioned table, and all values for the NOT NULL column are fully materialized.
Related
I'm using SQL Server 2017 and I want to add a NOT NULL column without a DEFAULT, but supply a value for any existing records (e.g. using WITH VALUES), all in a single query.
Let me explain. I understand that I cannot add a NOT NULL column to a non-empty table without supplying values. But a DEFAULT clause also sets a default value for this column for future inserts, which I don't want. I want a default value to be used only for adding this new column, and that's it.
Assume such a sequence of queries:
CREATE TABLE items (
    id INT NOT NULL PRIMARY KEY IDENTITY(1,1),
    name VARCHAR(255) NOT NULL
);
ALTER TABLE items ADD description VARCHAR(255) NOT NULL; -- No default value needed because table is empty
INSERT INTO items(name) VALUES ('test'); -- ERROR
The last query gives an error (as expected):
Error: Cannot insert the value NULL into column 'description', table 'suvibackend.dbo.items'; column does not allow nulls. INSERT fails.
That's because we didn't supply a value for the description column. It's obvious.
Now let's consider a situation where there are already some records in the items table. Adding the column without the DEFAULT and WITH VALUES clauses would (obviously) fail, so let's use them now:
CREATE TABLE items (
id INT NOT NULL PRIMARY KEY IDENTITY(1,1),
name varchar(255) NOT NULL
);
INSERT INTO items(name) VALUES ('name-test-1');
INSERT INTO items(name) VALUES ('name-test-2');
ALTER TABLE items ADD description VARCHAR(255) NOT NULL DEFAULT 'no-description' WITH VALUES;
So now our table looks like this as expected:
SELECT * FROM items;
--------------------------------------
| id | name | description |
| --- | ----------- | -------------- |
| 1 | name-test-1 | no-description |
| 2 | name-test-2 | no-description |
--------------------------------------
But from now on, it is possible to INSERT records without a description:
INSERT INTO items(name) VALUES ('name-test-3'); -- No description column
SELECT * FROM ITEMS;
--------------------------------------
| id | name | description |
| --- | ----------- | -------------- |
| 1 | name-test-1 | no-description |
| 2 | name-test-2 | no-description |
| 3 | name-test-3 | no-description |
--------------------------------------
But when we compare this to our first situation (an empty table, no DEFAULT clause), the behavior is different: I still want an error when an INSERT leaves the description column NULL.
SQL Server has created a default constraint for this column, which I don't want to have.
The solution is either to drop the constraint after adding the new column with a DEFAULT clause, or to split adding the new column into three statements:
CREATE TABLE items
(
id INT NOT NULL PRIMARY KEY IDENTITY(1,1),
name varchar(255) NOT NULL
);
INSERT INTO items(name) VALUES ('name-test-1');
INSERT INTO items(name) VALUES ('name-test-2');
ALTER TABLE items
ADD description VARCHAR(255) NULL;
UPDATE items
SET description = 'no-description';
ALTER TABLE items
ALTER COLUMN description VARCHAR(255) NOT NULL;
INSERT INTO items(name)
VALUES ('name-test-3'); -- ERROR as expected
My question:
Is there a way to achieve this in a single query, but without having a default constraint created?
It would be nice if it were possible to use a default value just for that one statement, without permanently creating a constraint.
Although you can't specify an ephemeral default constraint that's automatically dropped after adding the column (i.e. single statement operation), you can explicitly name the constraint to facilitate dropping it immediately afterward.
ALTER TABLE dbo.items
ADD description VARCHAR(255) NOT NULL
CONSTRAINT DF_items_description DEFAULT 'no-description' WITH VALUES;
ALTER TABLE dbo.items
DROP CONSTRAINT DF_items_description;
Explicit constraint names are a best practice, IMHO, as they make subsequent DDL operations easier.
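For contrast, here is a rough sketch of what dropping an auto-named default constraint involves; the generated name first has to be looked up in the system catalogs (table and column names follow the earlier example, and the variable names are illustrative):

DECLARE @name sysname, @sql nvarchar(max);

-- Find the system-generated name of the default constraint on dbo.items.description
SELECT @name = dc.name
FROM sys.default_constraints dc
JOIN sys.columns c
  ON c.object_id = dc.parent_object_id
 AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('dbo.items')
  AND c.name = 'description';

-- Drop it via dynamic SQL, since the name isn't known in advance
SET @sql = N'ALTER TABLE dbo.items DROP CONSTRAINT ' + QUOTENAME(@name);
EXEC sp_executesql @sql;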
I am using SQL Server 2014 for a Windows Forms application. I have columns like this:
id | AccountNo | Location
COLUMN DETAILS:
id ---> int, not null, auto increment
AccountNo ---> varchar(50), not null, Primary Key
Location ---> varchar(50), not null
I want to run a SQL INSERT query to enter the user input so that the result would be like this:
| id | AccountNo | Location|
| 10000 | PK10000 | PK |
I am thinking of a query like this:
string strCommand ="INSERT INTO Customer(ID, AccountNo, Location) values (#ID, CONCAT(#Location, #ID [, #AccountNo]), #Location)";
Or, if it is not possible this way, is it possible to create a trigger which would combine the two columns at insert time, since AccountNo is the PK?
Suggestions please.
You really shouldn't concatenate like this for primary key values. For example, you could get duplicate values. It's better to create an IDENTITY column to be the primary key. If you really need the extra column for display purposes, then add a computed column. For example:
CREATE TABLE MyTable
(
ID INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
Location VARCHAR(20) NOT NULL,
AccountNo AS CAST(ID AS VARCHAR) + Location
)
Now you only need to INSERT like this:
INSERT INTO MyTable (Location) VALUES ('PK')
This will give you a row with these values:
ID Location AccountNo
1 PK 1PK
You could also create a constraint on the computed column to enforce unique values for AccountNo:
ALTER TABLE MyTable
ADD CONSTRAINT UX_MyTable_AccountNo UNIQUE (AccountNo)
I created a set of tables in a database and assigned relations with foreign keys.
I am not getting the cascading effect of data in MySQL.
I created a table named ME with a MY_ID column.
I also created a table named MY_Friends with a MY_ID column and a foreign key that references ME(MY_ID).
I am not able to see the cascading effect in MySQL.
Me
My_ID | int(11) | NO | PRI | 0
My_Friends
My_ID | int(11) | YES | MUL | NULL | |
Those are the column descriptions from the two tables.
FOREIGN KEY and CASCADE will have no effect if the tables are MyISAM. Check if the tables are defined using the InnoDB engine. Give us the output from SHOW CREATE TABLE Me and SHOW CREATE TABLE My_Friends so we can verify if that's the problem.
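For example (a sketch using the table names from the question):

SHOW CREATE TABLE Me;
SHOW CREATE TABLE My_Friends;

-- If the output shows ENGINE=MyISAM, convert both tables to InnoDB so that
-- FOREIGN KEY ... ON DELETE/UPDATE CASCADE is actually enforced:
ALTER TABLE Me ENGINE=InnoDB;
ALTER TABLE My_Friends ENGINE=InnoDB;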
I use PostgreSQL but am looking for SQL answer as standard as possible.
I have the following table "docs" --
Column | Type | Modifiers
------------+------------------------+--------------------
id | character varying(32) | not null
version | integer | not null default 1
link_id | character varying(32) |
content | character varying(128) |
Indexes:
"docs_pkey" PRIMARY KEY, btree (id, version)
id and link_id are for documents that have a linkage relationship with each other, so link_id self-references id.
The problem comes with version. Now id is no longer the primary key (it won't be unique either) and can't be referenced by link_id as a foreign key --
my_db=# ALTER TABLE docs ADD FOREIGN KEY(link_id) REFERENCES docs (id) ;
ERROR: there is no unique constraint matching given keys for referenced table "docs"
I tried to search for a check constraint with something like "if exists" but didn't find anything.
Any tip will be much appreciated.
I usually do it like this:
table document (id, common, columns, current_revision)
table revision (id, doc_id, content, version)
which means that document has a one-to-many relation with its revisions, AND a one-to-one relation to the current revision.
That way, you can always select a complete document at its current revision with a simple join, and you only have one unique row per document in your document table to hang parent/child relations on, while still having versioning.
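A minimal PostgreSQL sketch of that layout (types and names are illustrative, not taken from the question; here current_revision stores the version number of the current row in revision):

CREATE TABLE document (
    id               character varying(32) PRIMARY KEY,
    link_id          character varying(32) REFERENCES document (id),
    current_revision integer
);

CREATE TABLE revision (
    id      serial PRIMARY KEY,
    doc_id  character varying(32) NOT NULL REFERENCES document (id),
    version integer NOT NULL DEFAULT 1,
    content character varying(128),
    UNIQUE (doc_id, version)
);

-- The complete document at its current revision, via a simple join:
SELECT d.id, d.link_id, r.version, r.content
FROM document d
JOIN revision r ON r.doc_id = d.id AND r.version = d.current_revision;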
Sticking as close to your model as possible, you can split your table into two, one which has 1 row per 'doc' and one with 1 row per 'version':
You would then have the following table "versions" --
Column | Type | Modifiers
------------+------------------------+--------------------
id | character varying(32) | not null
version | integer | not null default 1
content | character varying(128) |
Indexes:
"versions_pkey" PRIMARY KEY, btree (id, version)
And the following table "docs" --
Column | Type | Modifiers
------------+------------------------+--------------------
id | character varying(32) | not null
link_id | character varying(32) |
Indexes:
"docs_pkey" PRIMARY KEY, btree (id)
Now
my_db=# ALTER TABLE docs ADD FOREIGN KEY(link_id) REFERENCES docs (id) ;
is allowed, and you also want:
my_db=# ALTER TABLE versions ADD FOREIGN KEY(id) REFERENCES docs;
Of course there is nothing stopping you from getting a 'combined' view similar to your original table:
CREATE VIEW v_docs AS
SELECT id, version, link_id, content from docs join versions using(id);
Depending on whether it's what you want, you can simply create a FOREIGN KEY that includes the version field. That's the only way to point to a unique row...
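A minimal sketch of that idea (link_version is a hypothetical extra column, added purely for illustration):

-- Record which version of the linked document is referenced,
-- so the foreign key can cover the full (id, version) primary key
ALTER TABLE docs ADD COLUMN link_version integer;
ALTER TABLE docs
    ADD FOREIGN KEY (link_id, link_version) REFERENCES docs (id, version);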
If that doesn't work, you can write a TRIGGER (for all UPDATEs and INSERTs on the table) that makes the check. Note that you will also need a trigger on the docs table that restricts modifications which would break the key (such as a DELETE or UPDATE on the key value itself).
You cannot do this with a CHECK constraint, because a CHECK constraint cannot access data in another table.
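A rough sketch of the trigger approach, assuming the original single docs table (function and trigger names are illustrative):

CREATE FUNCTION docs_check_link_id() RETURNS trigger AS $$
BEGIN
    -- Reject a link_id that doesn't match any existing doc id
    IF NEW.link_id IS NOT NULL AND NOT EXISTS (
        SELECT 1 FROM docs WHERE id = NEW.link_id
    ) THEN
        RAISE EXCEPTION 'link_id % does not match any doc id', NEW.link_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER docs_link_id_check
    BEFORE INSERT OR UPDATE ON docs
    FOR EACH ROW EXECUTE PROCEDURE docs_check_link_id();

As noted above, a companion trigger on DELETE/UPDATE of id would still be needed to keep the reference from being broken from the other side.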
I am wondering if all these are exactly the same or if there is some difference.
Method 1:
CREATE TABLE testtable
(
id serial,
title character varying,
CONSTRAINT id PRIMARY KEY (id)
);
Method 2:
CREATE TABLE testtable
(
id serial PRIMARY KEY,
title character varying,
);
Method 3:
CREATE TABLE testtable
(
id integer PRIMARY KEY,
title character varying,
);
CREATE SEQUENCE testtable_id_seq
START WITH 1
INCREMENT BY 1
NO MAXVALUE
NO MINVALUE
CACHE 1;
ALTER SEQUENCE testtable_id_seq OWNED BY testtable.id;
Update: I found something on the web saying that by using a raw sequence you can pre-allocate memory for primary keys which helps if you plan on doing several thousand inserts in the next minute.
Try it and see; remove the trailing "," after "varying" on the second and third so they run, execute each of them, then do:
\d testtable
after each one and you can see what happens. Then drop the table and move on to the next one. It will look like this:
Column | Type | Modifiers
--------+-------------------+--------------------------------------------------------
id | integer | not null default nextval('testtable_id_seq'::regclass)
title | character varying |
Indexes:
"id" PRIMARY KEY, btree (id)
Column | Type | Modifiers
--------+-------------------+--------------------------------------------------------
id | integer | not null default nextval('testtable_id_seq'::regclass)
title | character varying |
Indexes:
"testtable_pkey" PRIMARY KEY, btree (id)
Column | Type | Modifiers
--------+-------------------+-----------
id | integer | not null
title | character varying |
Indexes:
"testtable_pkey" PRIMARY KEY, btree (id)
The first and second are almost identical, except that the primary key constraint is named differently. In the third, the id column no longer gets its value from the sequence when you insert into the table. You need to create the sequence first, then create the table like this:
CREATE TABLE testtable
(
id integer PRIMARY KEY DEFAULT nextval('testtable_id_seq'),
title character varying
);
That gets you something that looks the same as the second one. The only upside to doing it this way is that you can use the CACHE directive to pre-allocate some number of sequence values. It's possible for sequence allocation to become a big enough point of contention that you'd want to lower it that way, but you'd need to be doing several thousand inserts per second, not per minute, before that's likely to happen.
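If you do reach that point, the cache size can also be raised on an existing sequence, for example (the value 100 is just illustrative):

ALTER SEQUENCE testtable_id_seq CACHE 100;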
There is no semantic difference between method 1 and method 2.
Method 3 is quite similar, too - it's roughly what happens implicitly when you use serial. However, when using serial, Postgres also records a dependency of the sequence on the table. So, if you drop the table created in method 1 or 2, the sequence gets dropped as well.