SQL Server Dynamic Columns - sql-server

I want to create an app like Asana, which has dynamic columns
(the "Add Field" option in Asana).
I don't want to use NoSQL.
The idea is like MariaDB's dynamic columns:
create table assets (
  item_name varchar(32) primary key, -- A common attribute for all items
  dynamic_cols blob                  -- Dynamic columns will be stored here
);
INSERT INTO assets VALUES
  ('MariaDB T-shirt', COLUMN_CREATE('color', 'blue', 'size', 'XL'));
INSERT INTO assets VALUES
  ('Thinkpad Laptop', COLUMN_CREATE('color', 'black', 'price', 500));
SELECT item_name, COLUMN_GET(dynamic_cols, 'color' as char) AS color FROM assets;
+-----------------+-------+
| item_name       | color |
+-----------------+-------+
| MariaDB T-shirt | blue  |
| Thinkpad Laptop | black |
+-----------------+-------+
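For comparison, a rough SQL Server equivalent of that same example is sketched below (this is not part of the original question). SQL Server has no direct counterpart to MariaDB's dynamic columns, but since SQL Server 2016 the closest built-in analogue is storing the dynamic attributes as JSON in an NVARCHAR(MAX) column and reading them back with JSON_VALUE:

-- Minimal sketch, assuming SQL Server 2016+: dynamic attributes kept as a JSON document
create table assets (
  item_name varchar(32) primary key,  -- a common attribute for all items
  dynamic_cols nvarchar(max)          -- dynamic columns stored as JSON text
    constraint CK_assets_dynamic_cols_json check (isjson(dynamic_cols) = 1)
);
INSERT INTO assets VALUES
  ('MariaDB T-shirt', N'{"color": "blue", "size": "XL"}');
INSERT INTO assets VALUES
  ('Thinkpad Laptop', N'{"color": "black", "price": 500}');
SELECT item_name, JSON_VALUE(dynamic_cols, '$.color') AS color FROM assets;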
I also have another idea: a pair of dynamic tables. But I'm hesitant to implement it because I'm not that experienced with SQL Server,
and I'm afraid this design might send queries on a long trip through the database.
The idea goes like this:
create table assets_columns (
  id integer primary key,
  column_name varchar(100)
);
create table assets_values (
  id integer primary key,
  column_name varchar(100),
  column_value varchar(100)
);
What do you think about my idea?
Do you have another idea?
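For what it's worth, here is a rough sketch (mine, not from the question) of what reading attributes back out of that key/value design tends to look like. Note that the posted assets_values table has no column linking a value row back to the item it describes, so an item_name column is assumed below:

-- Hypothetical read over the key/value (EAV) design; item_name on assets_values is an assumption,
-- since the posted schema has no link from a value row back to its item.
SELECT v.item_name,
       MAX(CASE WHEN v.column_name = 'color' THEN v.column_value END) AS color,
       MAX(CASE WHEN v.column_name = 'size'  THEN v.column_value END) AS size
FROM assets_values v
GROUP BY v.item_name;

Every attribute becomes another conditional aggregate (or join), which is the usual pain point of this pattern.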

Related

Add NOT NULL column without DEFAULT but WITH VALUES

I'm using SQL Server 2017 and I want to add a NOT NULL column without a DEFAULT, but supply values for the current records (e.g. using WITH VALUES), all in a single query.
Let me explain. I understand that I cannot create a NOT NULL column without supplying values. But a DEFAULT clause sets a default value for the column for future inserts as well, which I don't want. I want a default value to be used only for adding this new column, and that's it.
Assume such a sequence of queries:
CREATE TABLE items (
  id INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  name VARCHAR(255) NOT NULL
);
ALTER TABLE items ADD description VARCHAR(255) NOT NULL; -- No default value needed because the table is empty
INSERT INTO items(name) VALUES ('test'); -- ERROR
The last query gives an error (as expected):
Error: Cannot insert the value NULL into column 'description', table 'suvibackend.dbo.items'; column does not allow nulls. INSERT fails.
That's because we didn't supply a value for the description column.
Now let's consider a situation where there are already some records in the items table. Without the DEFAULT and WITH VALUES clauses the ALTER will (obviously) fail, so let's use them this time:
CREATE TABLE items (
  id INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  name VARCHAR(255) NOT NULL
);
INSERT INTO items(name) VALUES ('name-test-1');
INSERT INTO items(name) VALUES ('name-test-2');
ALTER TABLE items ADD description VARCHAR(255) NOT NULL DEFAULT 'no-description' WITH VALUES;
So now our table looks like this as expected:
SELECT * FROM items;
--------------------------------------
| id | name        | description    |
| -- | ----------- | -------------- |
| 1  | name-test-1 | no-description |
| 2  | name-test-2 | no-description |
--------------------------------------
But from now on, it is possible to INSERT records without description:
INSERT INTO items(name) VALUES ('name-test-3'); -- No description column
SELECT * FROM ITEMS;
--------------------------------------
| id | name        | description    |
| -- | ----------- | -------------- |
| 1  | name-test-1 | no-description |
| 2  | name-test-2 | no-description |
| 3  | name-test-3 | no-description |
--------------------------------------
But compared to our first situation (the empty table without a DEFAULT clause), the behavior is now different: I still want an error about NULL in the description column.
SQL Server has created a default constraint for this column, which I don't want.
The workaround is either to drop the constraint after adding the new column with a DEFAULT clause, or to split adding the new column into three statements:
CREATE TABLE items (
  id INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  name VARCHAR(255) NOT NULL
);
INSERT INTO items(name) VALUES ('name-test-1');
INSERT INTO items(name) VALUES ('name-test-2');

ALTER TABLE items
  ADD description VARCHAR(255) NULL;

UPDATE items
  SET description = 'no description';

ALTER TABLE items
  ALTER COLUMN description VARCHAR(255) NOT NULL;

INSERT INTO items(name)
  VALUES ('name-test-3'); -- ERROR as expected
My question:
Is there a way to achieve this in a single query, without having a default constraint created?
It would be nice if it were possible to use a default value just for that one query, without permanently creating a constraint.
Although you can't specify an ephemeral default constraint that's automatically dropped after adding the column (i.e. a single-statement operation), you can explicitly name the constraint to facilitate dropping it immediately afterward.
ALTER TABLE dbo.items
  ADD description VARCHAR(255) NOT NULL
  CONSTRAINT DF_items_description DEFAULT 'no-description' WITH VALUES;

ALTER TABLE dbo.items
  DROP CONSTRAINT DF_items_description;
Explicit constraint names are a best practice, IMHO, as they make subsequent DDL operations easier.
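As a quick sanity check (not part of the original answer), an insert that omits description should now fail again, because the default constraint is gone:

INSERT INTO items(name) VALUES ('name-test-4');
-- fails: description does not allow NULLs and no default exists any more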

SQL Server - Database import from CSV/XLS

I have a basic question regarding how to solve this import problem.
I have a CSV with ca. 40 fields that need to be inserted across ca. 5 tables.
Let's say the tables look like this:
tpeople
Column Name | Datatype
GUID | uniqueidentifier
Fname | varchar
Lname | varchar
UserEnteredGUID | uniqueidentifier
tcompany
Column Name | Datatype
GUID | uniqueidentifier
CompanyTypeGUID | uniqueidentifier
PrintName | varchar
Website | varchar
tcompanyLocation
Column Name | Datatype
GUID | uniqueidentifier
CompanyGUID | uniqueidentifier
City | varchar
As you can see, the database is normalized and uses various GUIDs.
My question is: when I write, for example, a Python script to enter the data, how should I handle the GUIDs?
For example I want to add:
Fname: John
Lname: Smith
Company: IBM
Location: New York
Website: www.ibm.com
UserEntered: Admin
How do I make sure all relations/GUID are correct?
I would try:
insert into tpeople(GUID,Fname,Lname,UserEnteredGUID) values("","John","Smith",???)
Question
How do I get UserEnteredGUID? Do I have to select the GUID from the UserEntered table where the user equals "Admin"?
Or here:
insert into tcompany(GUID,CompanyTypeGUID,PrintName,Website) values("",??,"IBM","www.ibm.com")
Same thing here: how should I handle CompanyTypeGUID? Does it also mean I have to populate the CompanyType table BEFORE I add anything to the tcompany table?
This does not look right to me: it feels like working backwards, reasoning about how each table connects to the next. There has to be a way to insert records into a normalized database where this GUID/foreign-key bookkeeping is somehow automated.
I hope somebody understands my problem and can guide me towards a solution.
Thanks!
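For what it's worth, the usual pattern is to insert parent rows before child rows and to generate or look up the GUIDs first, so every foreign key value is known before it is used. A rough T-SQL sketch follows; the tcompanytype table, its TypeName column, and finding the admin user by Fname are all assumptions, not part of the posted schema:

-- Sketch only: parents before children, GUIDs generated or looked up up front
DECLARE @companyGUID uniqueidentifier = NEWID();
DECLARE @personGUID  uniqueidentifier = NEWID();

-- Look up existing rows instead of guessing their GUIDs
DECLARE @adminGUID uniqueidentifier =
    (SELECT GUID FROM tpeople WHERE Fname = 'Admin');            -- assumed lookup
DECLARE @companyTypeGUID uniqueidentifier =
    (SELECT GUID FROM tcompanytype WHERE TypeName = 'Customer'); -- assumed table/column

INSERT INTO tcompany (GUID, CompanyTypeGUID, PrintName, Website)
VALUES (@companyGUID, @companyTypeGUID, 'IBM', 'www.ibm.com');

INSERT INTO tcompanyLocation (GUID, CompanyGUID, City)
VALUES (NEWID(), @companyGUID, 'New York');

INSERT INTO tpeople (GUID, Fname, Lname, UserEnteredGUID)
VALUES (@personGUID, 'John', 'Smith', @adminGUID);

So yes, lookup tables (company types, admin users) need to exist before the rows that reference them, and the script supplies the generated GUIDs itself rather than leaving them blank.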

Alter table add column NOT NULL on an empty table in Netezza

SYSTEM.ADMIN(ADMIN)=> create table test ( name varchar(20), age int);
CREATE TABLE
SYSTEM.ADMIN(ADMIN)=> alter table test add column dob varchar(20) NOT NULL;
ERROR: ALTER TABLE: not null constraint for column "DOB" not allowed without default value
Do we have to specify a default value after NOT NULL even on an empty table?
SYSTEM.ADMIN(ADMIN)=> alter table test add column dob varchar(20) NOT NULL DEFAULT '0';
ALTER TABLE
Is this expected behavior?
You can create the table from scratch without specifying a default value.
create table test (
  name varchar(20),
  age int,
  dob varchar(20) NOT NULL
);
However, when adding a column, Netezza (being PostgreSQL-derived) requires a default value to fill the rows that would otherwise be NULL. This is expected. The sequence to remove the default afterwards is as follows:
create table test ( name varchar(20), age int);
ALTER TABLE test add column dob varchar(20) NOT NULL default 'a';
ALTER TABLE test ALTER COLUMN dob DROP DEFAULT;
See also: How can I add a column to a Postgresql database that doesn't allow nulls?
This behavior is expected. When altering a table, Netezza uses a versioned-table approach: if you add a column to a table, there will actually be two different table versions under the covers, which are presented to the user as a single table.
The original table version (the one without the new NOT NULL DEFAULT column) is not modified until a GROOM VERSIONS collapses the versions back into a single underlying table. The upside is that the ALTER is fast, because it doesn't require a scan/update of the existing rows; instead, the system knows to supply the DEFAULT value for the column that doesn't exist in the original underlying table version.
When altering a table to add a column with the NOT NULL property, the system requires a DEFAULT specification so that it knows how to represent the added column. This is required whether the table actually has any rows or not.
TESTDB.ADMIN(ADMIN)=> CREATE TABLE TEST ( NAME VARCHAR(20), AGE INT);
CREATE TABLE
TESTDB.ADMIN(ADMIN)=> insert into test values ('mine',5);
INSERT 0 1
TESTDB.ADMIN(ADMIN)=> ALTER TABLE TEST ADD COLUMN DOB VARCHAR(20) NOT NULL DEFAULT '0';
ALTER TABLE
TESTDB.ADMIN(ADMIN)=> insert into test values ('yours',50);
INSERT 0 1
TESTDB.ADMIN(ADMIN)=> select * from test;
NAME | AGE | DOB
-------+-----+-----
yours | 50 | 0
mine | 5 | 0
(2 rows)
The good news is that you can then alter the newly added column to remove that default.
TESTDB.ADMIN(ADMIN)=> ALTER TABLE TEST ALTER COLUMN DOB DROP DEFAULT;
ALTER TABLE
TESTDB.ADMIN(ADMIN)=> \d test
Table "TEST"
Attribute | Type | Modifier | Default Value
-----------+-----------------------+----------+---------------
NAME | CHARACTER VARYING(20) | |
AGE | INTEGER | |
DOB | CHARACTER VARYING(20) | NOT NULL |
Distributed on random: (round-robin)
Versions: 2
TESTDB.ADMIN(ADMIN)=> select * from test;
NAME | AGE | DOB
-------+-----+-----
yours | 50 | 0
mine | 5 | 0
(2 rows)
As a parting note, it's important to groom versioned tables as promptly as is practical, to keep performance from degrading over time.
TESTDB.ADMIN(ADMIN)=> GROOM TABLE TEST VERSIONS;
NOTICE: Groom will not purge records deleted by transactions that started after 2015-07-27 01:32:16.
NOTICE: If this process is interrupted please either repeat GROOM VERSIONS or issue 'GENERATE STATISTICS ON "TEST"'
NOTICE: Groom processed 1 pages; purged 0 records; scan size unchanged; table size unchanged.
GROOM VERSIONS
TESTDB.ADMIN(ADMIN)=> \d test
Table "TEST"
Attribute | Type | Modifier | Default Value
-----------+-----------------------+----------+---------------
NAME | CHARACTER VARYING(20) | |
AGE | INTEGER | |
DOB | CHARACTER VARYING(20) | NOT NULL |
Distributed on random: (round-robin)
At this point the table is no longer a versioned table, and all values for the NOT NULL column are fully materialized.

Combine (concatenate) two columns into a third (PK) on insert query, SQL Server 2014

I am using SQL Server 2014 for a Windows Forms application. I have columns like this:
id | AccountNo | Location
COLUMN DETAILS:
id ---> int, not null, auto increment
AccountNo ---> varchar(50), not null, Primary Key
Location ---> varchar(50), not null
I want to run an SQL INSERT query for the user input so that the resulting row looks like this:
| id    | AccountNo | Location |
| 10000 | PK10000   | PK       |
I am thinking of a query like this:
string strCommand ="INSERT INTO Customer(ID, AccountNo, Location) values (#ID, CONCAT(#Location, #ID [, #AccountNo]), #Location)";
Or, if that's not possible this way since AccountNo is the PK, is it possible to create a trigger that combines the two columns on insert?
Suggestions please.
You really shouldn't concatenate like this for primary key values. For example, you could get duplicate values. It's better to create an IDENTITY column to be the primary key. If you really need the extra column for display purposes, then add a computed column. For example:
CREATE TABLE MyTable
(
ID INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
Location VARCHAR(20) NOT NULL,
AccountNo AS CAST(ID AS VARCHAR) + Location
)
Now you only need to INSERT like this:
INSERT INTO MyTable (Location) VALUES ('PK')
This will give you a row with these values:
ID  Location  AccountNo
1   PK        1PK
You could also create a constraint on the computed column to enforce unique values for AccountNo:
ALTER TABLE MyTable
ADD CONSTRAINT UX_MyTable_AccountNo UNIQUE (AccountNo)
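If the exact format from the question (Location first, then the numeric id, e.g. PK10000) matters, the computed column can simply concatenate in the other order; a minimal variation on the definition above:

AccountNo AS Location + CAST(ID AS VARCHAR(20))  -- yields 'PK1', 'PK2', ...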

Find data in table_1 and return its primary key to table_2 - PostgreSQL database

In my PostgreSQL database I have a table "countries":
| primary key | country_name|
And I have a csv file:
| primary_key | person_name | person's country |
I want to import this CSV into my DB as a new table "people", automatically look up each person's country in the "countries" table, and store its primary key in a new fourth column of "people".
Can someone help with the script please?
UPDATE:
I'm now trying what Zeki told me:
update people set country_id = (select id from countries where countries.country_name = people.country_name)
It runs fine, but the problem is that the country_id column remains empty, even after I refresh the table.
Import the new table into Postgres, add the fourth column, and then run something like:
update people set country_id = (select id from countries where countries.country_name = people.country_name)
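Pieced together, the whole flow might look roughly like this (a sketch only; the people column names, the CSV file name, and the countries.id / countries.country_name columns are assumed from the question):

-- Assumed shape of the imported table; adjust names to match the real CSV
CREATE TABLE people (
    id           integer PRIMARY KEY,
    person_name  text,
    country_name text
);

-- psql client-side copy of the CSV into the new table (file name assumed)
\copy people FROM 'people.csv' CSV HEADER

-- The new fourth column holding the country's primary key
ALTER TABLE people ADD COLUMN country_id integer REFERENCES countries(id);

-- Same update as above, written with an explicit join
UPDATE people
SET country_id = c.id
FROM countries c
WHERE c.country_name = people.country_name;

If country_id still ends up empty, the usual cause is that the country names in the CSV don't exactly match countries.country_name (case, whitespace), or that the UPDATE was never committed.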
