SQL Server - Database import from CSV/XLS - sql-server

I have a basic question about how to solve an import problem.
I have a CSV with about 40 fields that need to be inserted across about 5 tables.
Let's say the tables look like this:
tpeople
Column Name | Datatype
GUID | uniqueidentifier
Fname | varchar
Lname | varchar
UserEnteredGUID | uniqueidentifier
tcompany
Column Name | Datatype
GUID | uniqueidentifier
CompanyTypeGUID | uniqueidentifier
PrintName | varchar
Website | varchar
tcompanyLocation
Column Name | Datatype
GUID | uniqueidentifier
CompanyGUID | uniqueidentifier
City | varchar
As you can see, the database is normalized and contains several different GUIDs.
My question is: when I write, for example, a Python script to enter the data, how should I handle the GUIDs?
For example, I want to add:
Fname: John
Lname: Smith
Company: IBM
Location: New York
Website: www.ibm.com
UserEntered: Admin
How do I make sure all the relations/GUIDs are correct?
I would try:
insert into tpeople (GUID, Fname, Lname, UserEnteredGUID) values ('', 'John', 'Smith', ???)
Question
How do I get the UserEnteredGUID? Do I have to select the GUID from the UserEntered table where the user equals 'Admin'?
Or here:
insert into tcompany (GUID, CompanyTypeGUID, PrintName, Website) values ('', ??, 'IBM', 'www.ibm.com')
The same question here: how should I handle CompanyTypeGUID? Does it also mean I have to populate the CompanyType table BEFORE I add anything to the tcompany table?
This does not look right to me; it feels like working backwards, thinking about how each table is connected to the next one. There has to be a way to insert records into a normalized database where all this GUID/foreign-key bookkeeping is somehow automated.
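What I imagine is something like this: select each foreign GUID first, then insert. A sketch, where tuser, tcompanyType, Username, TypeName, and 'Customer' are made-up names for the lookups I would need:
-- look up the FK GUIDs first; the parent rows must already exist
declare @AdminGUID uniqueidentifier =
    (select GUID from tuser where Username = 'Admin');
insert into tpeople (GUID, Fname, Lname, UserEnteredGUID)
values (newid(), 'John', 'Smith', @AdminGUID);

declare @TypeGUID uniqueidentifier =
    (select GUID from tcompanyType where TypeName = 'Customer');
insert into tcompany (GUID, CompanyTypeGUID, PrintName, Website)
values (newid(), @TypeGUID, 'IBM', 'www.ibm.com');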
I hope somebody understands my problem and can guide me towards a solution.
Thanks!

Related

SQL Server Dynamic Columns

I want to create an app like Asana, which has dynamic columns (the "Add Field" option in Asana).
I don't want to use NoSQL.
The idea is like MariaDB's dynamic columns:
create table assets (
item_name varchar(32) primary key, -- A common attribute for all items
dynamic_cols blob -- Dynamic columns will be stored here
);
INSERT INTO assets VALUES
('MariaDB T-shirt', COLUMN_CREATE('color', 'blue', 'size', 'XL'));
INSERT INTO assets VALUES
('Thinkpad Laptop', COLUMN_CREATE('color', 'black', 'price', 500));
SELECT item_name, COLUMN_GET(dynamic_cols, 'color' as char) AS color FROM assets;
+-----------------+-------+
| item_name       | color |
+-----------------+-------+
| MariaDB T-shirt | blue  |
| Thinkpad Laptop | black |
+-----------------+-------+
I also have another idea, creating the columns as dynamic tables, but I'm hesitant to implement it because I'm not that much of an expert in SQL Server.
I'm afraid this approach might lead to slow, convoluted queries in the database.
The idea goes like below
create table assets_columns (
  id integer primary key,
  column_name varchar(100)
);
create table assets_values (
  id integer primary key,
  column_name varchar(100),
  column_value varchar(100)
);
What do you think about my idea?
Do you have another idea?
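For what it's worth, SQL Server 2016 and later can approximate MariaDB's dynamic columns with its built-in JSON functions. A minimal sketch mirroring the assets example above (the JSON lives in a plain nvarchar column, guarded by an ISJSON check):
create table assets (
  item_name varchar(32) primary key,  -- a common attribute for all items
  dynamic_cols nvarchar(max)          -- dynamic columns stored as JSON text
    check (isjson(dynamic_cols) = 1)
);
insert into assets values ('MariaDB T-shirt', N'{"color": "blue", "size": "XL"}');
insert into assets values ('Thinkpad Laptop', N'{"color": "black", "price": 500}');
select item_name, json_value(dynamic_cols, '$.color') as color from assets;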

How to design these tables better?

Firstly, sorry about the title; I couldn't find a better one.
I have a database that stores some devices according to their number:
|-----------|-------------|-------------|
| device_id | device_name | device_type |
|-----------|-------------|-------------|
Each device has one of two types, 3-port or 1-port, and each port has a specific name. For example:
Device 1122 is type 3-port; the port names are (kitchen, living_room, bed_room).
Device 1123 is type 1-port; the port name is (boiler).
The design I have in mind is:
|-----------|--------------|---------|-------------|----------|
| device_id | device_name  | port_1  | port_2      | port_3   |
|-----------|--------------|---------|-------------|----------|
| 1122      | First floor  | kitchen | living_room | bed_room |
|-----------|--------------|---------|-------------|----------|
| 1123      | Second floor | boiler  | null        | null     |
|-----------|--------------|---------|-------------|----------|
But my design is not good, since if I had 100 devices of type 1-port, I would leave 200 fields empty.
Can you please help me create a better design?
I am pasting in my comment as an answer so you can mark the question answered.
You could break the ports out into a separate, normalized table with deviceId, port number, and port name. You will have one record for each device-and-port combination, with a foreign key reference back to the main devices table. This will reduce empty fields and allow more than 3 ports should the requirements change. However, this comes at the cost of an additional table and duplication of the key, so from a space perspective you may not end up much better. Then again, storage is pretty cheap, so I would not sweat it too much.
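A minimal DDL sketch of that layout, with illustrative column names:
create table devices (
  device_id   int primary key,
  device_name varchar(50)
);
create table device_ports (
  device_id   int not null references devices(device_id), -- FK back to the main table
  port_number int not null,
  port_name   varchar(50),
  primary key (device_id, port_number)  -- one row per device/port combination
);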
Zohar's answer is much more complete, so I will not have a problem if you accept his answer. However you should accept an answer to close the question.
A fully normalized schema will have a device type table (where you specify the number of ports for each device type), a device table with a unique device name and a device id, a ports table with a unique port name and a unique port id, and an intersection table with device id and port id, where the combination of these 2 columns is the primary key.
You should consider adding a check constraint on the intersection table to make sure you don't add more records than the device type allows (if your target db supports check constraints).
Here is pseudocode for this schema:
TblDeviceType
(
    DeviceType_Id int,           -- primary key
    DeviceType_Name varchar(20)  -- unique
)
TblDevice
(
    Device_Id int,               -- primary key
    Device_Name varchar(30),     -- unique
    Device_Type int              -- fk to TblDeviceType
)
TblPorts
(
    Port_Id int,                 -- primary key
    Port_Name varchar(30)        -- unique
)
TblDeviceToPort
(
    DeviceToPort_Device int,     -- fk to TblDevice
    DeviceToPort_Port int,       -- fk to TblPorts
    Primary key (DeviceToPort_Device, DeviceToPort_Port)
)

Find data in table_1 and return its primary key to table_2 - PostgreSQL database

In my PostgreSQL database I have a table "countries":
| primary_key | country_name |
And I have a CSV file:
| primary_key | person_name | person's country |
I want to import this CSV into my DB as a new table "people", and I want to automatically look up each person's country in the table "countries" and put its primary key into a fourth, new column of the table "people".
Can someone help with the script please?
UPDATE:
I'm now trying what Zeki told me:
update people set country_id = (select id from countries where countries.country_name = people.country_name)
It runs fine, but the problem is that the country_id column remains empty, even after I refresh the table.
Import the new table into postgres, add a fourth column, and then run something like:
update people set country_id = (select id from countries where countries.country_name = people.country_name)
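For reference, a sketch of the whole workflow in psql; the people column names here are assumptions, so adjust them to match the real CSV (the update uses PostgreSQL's update ... from form, equivalent to the subquery above):
-- target table for the CSV (column names are assumptions)
create table people (
    id           integer primary key,
    person_name  text,
    country_name text
);
-- load the CSV; \copy is a psql client-side meta-command
\copy people from 'people.csv' with (format csv, header true)
-- add the fourth column, then fill it from countries
alter table people add column country_id integer references countries(id);
update people
   set country_id = countries.id
  from countries
 where countries.country_name = people.country_name;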

sql-server one to one relational tables for different types of users

I have 2 kinds of users. They all have some shared information, like username, email, password, phone, ... but each kind of user has some other settings that are related to that kind of user only. I have thought of 3 ways to do it:
having 1 users table with all the columns for all types of users (each row will have several empty columns that are not related to that kind of user)
having 3 tables: users, usertype1, usertype2. The shared settings will be saved in the users table, and there will be a one-to-one relation to usertype1 or usertype2 (based on the user type)
one users table and one settings table; this is more dynamic, but it has the problem that I have to use the varchar type for all settings.
Which one is wiser to use? I'm also concerned that the number of user types might grow in the future.
Your second option is the most suitable. In addition to what you have described, I would add a column User_Type to the main users table, referencing a User_Type table with only two values:
User_Type
PK_TypeID | User_Type
1         | usertype1
2         | usertype2
Main_User
Only the columns that every user will have, like first name, last name, DOB, UserID, and any other information that every user will have.
U_ID | Column1 | Column2 | Column3 | User_Type --<-- User_Type values (1, 2) from the User_Type table
Type 1
This table is only for users of type one, with only the columns that a Type 1 user has. Make U_ID a foreign key referencing the U_ID column in the main users table.
U_ID | Column1 | Column2 | Column3
Type 2
This table is only for users of type two, with only the columns that a Type 2 user has. Make U_ID a foreign key referencing the U_ID column in the main users table.
U_ID | Column1 | Column2 | Column3
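A minimal DDL sketch of this layout; the setting columns are placeholders:
create table User_Type (
    PK_TypeID int primary key,
    User_Type varchar(20) not null
);
create table Main_User (
    U_ID      int primary key,
    FirstName varchar(50),
    LastName  varchar(50),
    User_Type int not null references User_Type(PK_TypeID)
);
create table Type1_User (
    U_ID     int primary key references Main_User(U_ID),  -- one-to-one with Main_User
    Setting1 varchar(50)                                   -- columns only type-1 users have
);
create table Type2_User (
    U_ID     int primary key references Main_User(U_ID),
    SettingA varchar(50)                                   -- columns only type-2 users have
);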
I would think that option 3 would be the best, and you wouldn't have to use varchar for all settings. What you'll need is a third table for lookup values as (Setting_ID, Setting_Description). Use the Setting_ID in your settings table along with the value for that setting.
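A sketch of that variant, with illustrative names; the lookup table defines the settings, and the settings table stores one row per user per setting:
create table Setting_Lookup (
    Setting_ID          int primary key,
    Setting_Description varchar(100) not null
);
create table User_Setting (
    U_ID          int references Main_User(U_ID),
    Setting_ID    int references Setting_Lookup(Setting_ID),
    Setting_Value varchar(100),     -- could be split into typed columns
                                    -- (Int_Value, Date_Value, ...) to avoid
                                    -- varchar for everything
    primary key (U_ID, Setting_ID)  -- one value per user per setting
);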

Insert data in two tables using one query

In PHP/MySQL, how can I enter data into two tables simultaneously, with the two tables having a primary key - foreign key relationship? Say, for example:
Table 1
id(P.K) | username | password
Table 2
id(F.K) | fname | lname | email
I have researched, and I don't want to use last_insert_id() or mysql_insert_id(). Are there any other methods?
There are many ways to do this; one of them (and the best, IMHO) is creating a trigger on your primary table that will populate your second table. Example:
DELIMITER //
CREATE TRIGGER <trigger name> AFTER INSERT ON <your_primary_table>
FOR EACH ROW
BEGIN
    SET @id = NEW.id;  -- id of the row just inserted into the primary table
    INSERT INTO <your_second_table> (id) VALUES (@id);
END//
DELIMITER ;
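A concrete version for illustration, assuming hypothetical tables table1 (id, username, password) and table2 (id, fname, lname, email) as in the question:
DELIMITER //
CREATE TRIGGER trg_after_table1_insert AFTER INSERT ON table1
FOR EACH ROW
BEGIN
    INSERT INTO table2 (id) VALUES (NEW.id);  -- creates the matching child row
END//
DELIMITER ;

-- inserting into table1 now creates the table2 row automatically;
-- fill in its remaining columns without last_insert_id():
INSERT INTO table1 (username, password) VALUES ('jsmith', 'secret');
UPDATE table2
   SET fname = 'John', lname = 'Smith', email = 'jsmith@example.com'
 WHERE id = (SELECT id FROM table1 WHERE username = 'jsmith');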
