Exporting tables from SQL Server into MS Access - sql-server

I'm trying to export tables from SQL Server into MS Access using the data import/export feature. Everything works well except for two things:
The primary key constraint is not being exported to MS Access, and neither is the identity property. Ideally I want the id_country column to be an AutoNumber / primary key column in MS Access.
The bit column is being converted to Integer in MS Access. I want it to be a Yes/No column.
Can somebody help me with this?
Here is my SQL Server code:
CREATE TABLE country
(
id_country int IDENTITY PRIMARY KEY not null,
my_tinyint tinyint,
my_single real,
my_double float,
my_bit bit,
my_char char(7),
my_longchar text
);

You cannot create an AutoNumber field directly with DDL. The best you can do is create a Primary Key field. The following DDL query works against my 2013 Access db:
CREATE TABLE country
(
id_country LONG CONSTRAINT PK_id_country PRIMARY KEY,
my_tinyint integer,
my_single single,
my_double double,
my_bit integer,
my_char text(7),
my_longchar memo
)
This would not create an incrementing field, however. You'd have to use DAO or ADOX to handle that, or do it manually in the Access interface. Here's a SO question that shows how to do that: How to create table with Autonumber field in MS Access at run time?

Related

SQL Server Memory-Optimized Tables - Change Primary Key Data Type

I'm trying to change an in-memory table column type from INT to BIGINT on SQL Server 2017. This column is actually the PK of the table.
For a regular table, the common path is to drop the PK constraint, change the data type, and recreate the PK, but when I tried that on my in-memory table I got the following error:
ALTER TABLE dbo.InMemTbl
DROP CONSTRAINT PK_InMemTbl
The memory optimized table 'InMemTbl' with DURABILITY=SCHEMA_AND_DATA
must have a primary key
I'm not an expert in memory-optimized tables, so is there any workaround that avoids recreating the whole table with the new data type?
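For reference, a minimal sketch of the regular-table path described above; the table, column, and constraint names are illustrative, not taken from the question:
-- Disk-based table: drop the PK, widen the column, put the PK back
ALTER TABLE dbo.RegularTbl DROP CONSTRAINT PK_RegularTbl;
ALTER TABLE dbo.RegularTbl ALTER COLUMN Id BIGINT NOT NULL;
ALTER TABLE dbo.RegularTbl ADD CONSTRAINT PK_RegularTbl PRIMARY KEY CLUSTERED (Id);
It is this sequence that fails for a memory-optimized table, because, as the error says, a SCHEMA_AND_DATA table must keep a primary key at all times.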

Convert VARBINARY to Int or BigInt

My question is very simple, and I understand that some old database designs are not as good as we would expect these days.
My legacy table does not have a primary key to drive a delta load, so I'm trying to use hashing to create a unique key. HASHBYTES returns VARBINARY, and I don't think I can use a VARBINARY column as a primary key (not sure about this).
Ref URL on MSDN:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/94231bb4-ccab-4626-a9fb-325264bb883f/can-varbinary700-column-be-used-as-primary-key?forum=transactsql
Hence, I'm converting it to INT or BIGINT. The problem is that the conversion yields both negative and positive values (due to the range).
My question is:
How can I convert a VARBINARY(100) value to a positive INT or BIGINT and set it as the primary key of one of my tables?
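A minimal sketch of what that conversion looks like, using a made-up input value; it shows why the signed result can be negative and why forcing it positive does not remove the collision risk:
-- Hash a natural/business key and read the first 8 bytes back as a BIGINT
DECLARE @hash VARBINARY(32) = HASHBYTES('SHA2_256', 'some-business-key');

-- The 8 bytes are interpreted as a signed integer, which is why negative values appear
SELECT CAST(SUBSTRING(@hash, 1, 8) AS BIGINT) AS signed_key;

-- ABS() forces a non-negative value, but truncating the hash increases the chance of
-- collisions, so a PRIMARY KEY built on it can still raise duplicate-key errors
SELECT ABS(CAST(SUBSTRING(@hash, 1, 8) AS BIGINT)) AS positive_key;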
Edit Note:
I tried to use VARBINARY as the primary key for the delta load in an SSIS Lookup task and got this error:
"Violation of PRIMARY KEY constraint 'PK__DMIN__607056C02FB7E7DE'. Cannot insert duplicate key in object 'dbo.DMIN_'. The duplicate key value is (0x00001195764c40525bcaf6baa922091696cd8886).".
However, when I checked the table for duplicate keys, it has none, so why is this error showing up?
Please note that the first SSIS execution worked fine; the error only appears during the second execution [during "lookup match output"].
Please help. Thanks.
In projects I've worked on before we've always used GUIDs as our primary keys, utilising the unique identifier type in SQL Server.
The main problem with this, however, is that using a uniqueidentifier as your clustered index can degrade the performance of your database over time, so recently we've taken the following approach (based on this article):
Create column: guid, uniqueidentifier, nonnull, default value newsequentialid(), PK
Create column: id, bigint, nonnull, identity(1,1)
Create a non clustered index on the guid column, unique
Create a clustered index on the id column, unique
That way when you insert into this new table, you don't have to worry about the keys or identities.
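A minimal sketch of that layout; the table name is illustrative and the data columns are yours:
CREATE TABLE dbo.MyTable
(
    guid UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_MyTable_guid DEFAULT NEWSEQUENTIALID(),
    id   BIGINT IDENTITY(1,1) NOT NULL,
    -- ...your data columns...
    CONSTRAINT PK_MyTable PRIMARY KEY NONCLUSTERED (guid),
    CONSTRAINT UQ_MyTable_id UNIQUE CLUSTERED (id)
);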
If you need some form of reference between the old database and the new one, and you CAN modify the structure of the old database, you can create a uniqueidentifier column there (or char(36) if it doesn't support uniqueidentifier), assign a GUID to each row, and THEN create an additional uniqueidentifier column in the new database to hold that reference and insert the value into it. If that makes sense.

Transfer data from one database to another database with different schema

I have a problem transferring data from one SQL Server 2008 R2 database to another SQL Server 2012 database with a different schema. Here is the scenario:
Database 1
Database 1 has tables Firm and Client, with primary keys FirmId and ClientId of type int.
FirmId (int) is used as a reference key in the Client table.
Database 2
Database 2 has the same tables Firm and Client, with primary keys FirmId and ClientId of type uniqueidentifier.
FirmId (uniqueidentifier) is used as a reference key in the Client table.
Problem
The problem is not copying the data from database 1's tables into database 2's tables; it is maintaining the Firm-to-Client reference keys while the data type changes.
I am using SQL Server 2008 R2 and SQL Server 2012.
Please help me resolve this or find a solution; I really appreciate your valuable time and effort. Thanks.
I'll take a stab at it even if I am far from an expert on SQL Server - here is a general procedure (you will have to repeat it for every table where you replace INT with a GUID, of course...).
I will use Table A to refer to the parent (Firm, if I understand your example correctly) and Table B to refer to the child (Client, I believe).
Delete the relations pointing to Table A
Remove the identity from the id column of Table A
Create a new column with Uniqueidentifier on Table A
Generate values for the Uniqueidentifier column
Add the new Uniqueidentifier column in all the child tables (Table B)
Use the OLD id column to map your child record & update the new Uniqueidentifier value from your parent table.
Drop all the id columns
Recreate the relations
Having said that, I just want to add a warning: converting to GUID keys is, according to some, a very bad idea. But if you really need to do it, you can script (and test) the procedure above.
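A rough sketch of what such a script could look like for the Firm/Client example; the constraint names and the exact ordering of the final steps are illustrative assumptions:
-- 1. Drop the existing relation pointing at Firm
ALTER TABLE dbo.Client DROP CONSTRAINT FK_Client_Firm;

-- 2-4. Add the new uniqueidentifier key on the parent and generate values for existing rows
ALTER TABLE dbo.Firm ADD FirmGuid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID();

-- 5-6. Add the matching column on the child and map it through the old int key
ALTER TABLE dbo.Client ADD FirmGuid UNIQUEIDENTIFIER NULL;

UPDATE c
SET    c.FirmGuid = f.FirmGuid
FROM   dbo.Client c
JOIN   dbo.Firm   f ON f.FirmId = c.FirmId;

-- 7-8. Drop the old key columns and recreate the primary key and the relation on the GUIDs
ALTER TABLE dbo.Client ALTER COLUMN FirmGuid UNIQUEIDENTIFIER NOT NULL;
ALTER TABLE dbo.Firm   DROP CONSTRAINT PK_Firm;   -- old int primary key
ALTER TABLE dbo.Firm   DROP COLUMN FirmId;
ALTER TABLE dbo.Client DROP COLUMN FirmId;
ALTER TABLE dbo.Firm   ADD CONSTRAINT PK_Firm PRIMARY KEY (FirmGuid);
ALTER TABLE dbo.Client ADD CONSTRAINT FK_Client_Firm
    FOREIGN KEY (FirmGuid) REFERENCES dbo.Firm (FirmGuid);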

Correct SQL to convert mySQL tables to SQL Server tables

I have a number of tables I need to convert from MySQL to SQL Server.
An example of a MySQL table is:
CREATE TABLE `required_items` (
`id` INT( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique Barcode ID',
`fk_load_id` INT( 11 ) NOT NULL COMMENT 'Load ID',
`barcode` VARCHAR( 255 ) NOT NULL COMMENT 'Barcode Value',
`description` VARCHAR( 255 ) NULL DEFAULT NULL COMMENT 'Barcode Description',
`created` TIMESTAMP NULL DEFAULT NULL COMMENT 'Creation Timestamp',
`modified` TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Modified Timestamp',
FOREIGN KEY (`fk_load_id`) REFERENCES `loads`(`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE = InnoDB CHARACTER SET ascii COLLATE ascii_general_ci COMMENT = 'Contains Required Items for the Load';
And a trigger to update the created date
CREATE TRIGGER required_items_before_insert_created_date BEFORE INSERT ON `required_items`
FOR EACH ROW
BEGIN
SET NEW.created = CURRENT_TIMESTAMP;
END
Now I need to create tables similar to this in SQL Server. There seem to be a lot of different data types available, so I am unsure which to use.
What data type should I use for the primary key column (uniqueidentifier, bigint, int)?
What should I use for the timestamps (timestamp, datetime, datetime2(7))?
How should I enforce the created and modified timestamps (currently I am using triggers)?
How can I enforce foreign key constraints?
Should I be using varchar(255) in SQL Server? (Maybe text or varchar(MAX) is better?)
I am using Visual Studio 2010 to create the tables.
First of all, you can probably use PHPMyAdmin (or something similar) to script out the table creation process to SQL Server. You can take a look at what is automatically created for you to get an idea of what you should be using. After that, you should take a look at SSMS (SQL Server Management Studio) over Visual Studio 2010. Tweaking the tables that your script will create will be easier in SSMS - in fact, most database development tasks will be easier in SSMS.
What data type should I use to the primary key column (uniqueidentifier, bigint, int)?
Depending on how many records you plan to have in your table, use int or bigint. There are problems with uniqueidentifiers that you will probably want to avoid: INT vs Unique-Identifier for ID field in database
What should I use for the timestamps (timestamp, datetime, datetime2(7))?
Timestamps are different in SQL Server than in MySQL. Despite the name, a timestamp is an incrementing number that is used as a mechanism to version rows. http://msdn.microsoft.com/en-us/library/ms182776%28v=sql.90%29.aspx . In short though, datetime is probably your best bet for compatibility purposes.
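A small sketch of the distinction, with a made-up table name; the rowversion column changes automatically on every update and cannot be read as a date:
CREATE TABLE dbo.versioned_demo
(
    id      INT IDENTITY(1,1) PRIMARY KEY,
    payload VARCHAR(50) NULL,
    rv      ROWVERSION,                                  -- SQL Server's "timestamp": a row version counter
    created DATETIME2(7) NOT NULL DEFAULT SYSDATETIME()  -- an actual point in time
);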
How should I enforce the created and modified timestamps (currently I am using triggers)?
See above. Also, the SQL Server version of a "Timestamp" is automatically updated by the DBMS. If you need a timestamp similar to your MySQL version, you can use a trigger to do that (but that is generally frowned upon...kind of dogmatic really).
How can I enforce foreign key constraints?
You should treat them as you would using InnoDB. See this article for examples of creating foreign key constraints: http://blog.sqlauthority.com/2008/09/08/sql-server-%E2%80%93-2008-creating-primary-key-foreign-key-and-default-constraint/
Should I be using Varchar(255) in SQL Server? (Maybe Text, Varchar(MAX) is better)
That depends on the data you plan to store in the field. varchar(max) can hold far more (up to about 2 GB), while a sized varchar tops out at varchar(8000); if you don't need varchar(255), you can always set it to a lower value like varchar(50). Using a field size that is too large has performance implications. One thing to note is that if you plan to support Unicode (multilingual) data in the field, use nvarchar or nchar.
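Putting the suggestions above together, one possible SQL Server translation of the required_items table could look roughly like this; the int/datetime2 choices, the constraint names, and the AFTER UPDATE trigger for the modified column are assumptions, not the only option:
CREATE TABLE dbo.required_items
(
    id          INT IDENTITY(1,1) NOT NULL PRIMARY KEY,   -- AUTO_INCREMENT -> IDENTITY
    fk_load_id  INT NOT NULL,
    barcode     VARCHAR(255) NOT NULL,
    description VARCHAR(255) NULL,
    created     DATETIME2(7) NULL
        CONSTRAINT DF_required_items_created DEFAULT SYSDATETIME(),
    modified    DATETIME2(7) NOT NULL
        CONSTRAINT DF_required_items_modified DEFAULT SYSDATETIME(),
    CONSTRAINT FK_required_items_loads
        FOREIGN KEY (fk_load_id) REFERENCES dbo.loads (id)
        ON DELETE CASCADE ON UPDATE CASCADE
);
GO

-- There is no ON UPDATE CURRENT_TIMESTAMP in SQL Server; a trigger is one way to emulate it
CREATE TRIGGER dbo.trg_required_items_modified
ON dbo.required_items
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE ri
    SET    modified = SYSDATETIME()
    FROM   dbo.required_items ri
    JOIN   inserted i ON i.id = ri.id;
END;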

PostgreSQL No Auto Increment function?

I have a test application coded in Java for creating an indexed and a non-indexed table in a MySQL, PostgreSQL, Oracle and Firebird database (amongst other things).
Is it simply the case that PostgreSQL doesn't allow the auto-increment feature? If not, what is the normal procedure for having an indexed, auto-incrementing column?
Thanks in advance
You can use SERIAL in PostgreSQL to generate an auto-increment field.
For example:
-- "user" is a reserved word in PostgreSQL, so the table is named users here
CREATE TABLE users (
    userid SERIAL PRIMARY KEY,
    username VARCHAR(16) UNIQUE NOT NULL
);
This will create userid as an auto-incrementing primary key, which is indexed.
If you don't want it as the primary key, just remove PRIMARY KEY.
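A quick usage sketch against the table above (the value is made up); the serial column is filled in automatically and can be read back with RETURNING:
-- userid is generated by the sequence behind the SERIAL column
INSERT INTO users (username) VALUES ('alice') RETURNING userid;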
Use a column of type SERIAL. It works the same way as AUTO_INCREMENT on some other DBs. (Check the docs for other features you can use with it.)
With current Postgres, you can just use SERIAL for the column type.
With older versions of Postgres, you can implement this using SEQUENCE; the relevant procedure is:
CREATE SEQUENCE mytable_myid_seq;
ALTER TABLE mytable ALTER COLUMN myid SET DEFAULT NEXTVAL('mytable_myid_seq');
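Optionally, the sequence can be tied to the column so that dropping the column also drops the sequence; a one-line sketch using the same names:
ALTER SEQUENCE mytable_myid_seq OWNED BY mytable.myid;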
A good article on this is MySQL versus PostgreSQL: Adding an Auto-Increment Column to a Table
