I have a requirement to write a SQL Server trigger that populates some of the columns of a particular table.
For example, I have table called 'mytable' and columns
"ID", "NAME", "ADDRESS", "AGE", "CREATED_AT", "CREATED_BY", "MODIFIED_AT", "MODIFIED_BY"
Now I am populating all of the columns through the application except these two:
"CREATED_AT", "CREATED_BY"
I would like to create a trigger that populates these two columns with the following values when the application commits a record to the database:
CREATED_AT - Database time-stamp UTC
CREATED_BY - Database User Id
I heard that SQL Server does not have BEFORE INSERT triggers.
Can someone help me write this trigger? Any help is greatly appreciated.
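In case it helps, a minimal sketch of the usual workaround: SQL Server has AFTER and INSTEAD OF triggers rather than BEFORE triggers, so an AFTER INSERT trigger can stamp the two columns by joining back to the `inserted` pseudo-table. Table and column names are taken from the question; the trigger name is made up.

```sql
-- Sketch: stamp CREATED_AT/CREATED_BY after each insert on mytable.
CREATE TRIGGER trg_mytable_created
ON mytable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET CREATED_AT = SYSUTCDATETIME(),  -- database UTC timestamp
        CREATED_BY = SUSER_SNAME()      -- database login name
    FROM mytable AS t
    INNER JOIN inserted AS i ON t.ID = i.ID;
END
```

Note that DEFAULT constraints (e.g. `DEFAULT SYSUTCDATETIME()` on CREATED_AT) achieve the same effect without a trigger, as long as the application omits those columns from its INSERT statements.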
I am experimenting with PostgreSQL 15 logical replication.
I have a table called "test" in database "test1":
id int (primary) | name varchar
I also have a table called "test" in database "test0":
tenant int (primary/default=1) | id int (primary) | name varchar
I have the following publication on database "test1":
CREATE PUBLICATION pb_test FOR TABLE test ("id", "name")
SELECT pg_create_logical_replication_slot('test_slot_v1', 'pgoutput');
I also have the following subscriber on database "test0":
CREATE SUBSCRIPTION sb_test CONNECTION 'dbname=test1 host=localhost port=5433 user=postgres password=*********' PUBLICATION pb_test WITH (slot_name = test_slot_v1, create_slot = false);
The result is that every time a new record is added in database "test1", the same record is inserted in database "test0" with tenant=1, the default value.
The question: is there any way to use a custom expression for this additional column "tenant" while replicating? For example, records coming from database "test1" should have tenant=1, but records coming from database "test2" should have tenant=2.
It seems that currently PostgreSQL 14 logical replication does not support adding extra columns with fixed values.
UPDATE
PostgreSQL 15 allows publishing and subscribing to a subset of a table's columns, but it still does not support a custom expression as a key or value column.
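One workaround sketch, assuming each subscriber is allowed to stamp its own fixed value: since "tenant" is not part of the publication, the subscriber fills it from the table default, so each subscriber database can simply carry a different default. A trigger works too, but logical replication apply workers skip ordinary triggers unless they are marked ENABLE ALWAYS:

```sql
-- On the subscriber that receives rows from database "test2" (the value 2 is assumed):
ALTER TABLE test ALTER COLUMN tenant SET DEFAULT 2;

-- Alternatively, a trigger that also fires for replicated rows:
CREATE FUNCTION set_tenant() RETURNS trigger AS $$
BEGIN
    NEW.tenant := 2;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_set_tenant BEFORE INSERT ON test
FOR EACH ROW EXECUTE FUNCTION set_tenant();

-- Replication apply skips ordinary triggers; enable this one explicitly:
ALTER TABLE test ENABLE ALWAYS TRIGGER trg_set_tenant;
```

Neither approach can distinguish which source database a given row came from within a single subscriber table; that would require a separate table (or partition) per subscription.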
I am migrating a large Access application to SQL Server.
One technique that I use in Access is to use a selection table in the frontend joined to a shared backend table to allow the user to select individual records. Once the user has specified the selection, I then use the selection as a criteria to update, export or report on the selected records.
The selection table simply consists of a foreign key field and a boolean field used to display a checkbox for the record selection.
Here is an example showing the equivalent of what I do in Access:
CREATE TABLE tblJob (
JobNumber int NOT NULL,
JobName nchar(255) NULL
)
ALTER TABLE tblJob ADD CONSTRAINT PrimaryKey PRIMARY KEY CLUSTERED
(
JobNumber ASC
)
GO
INSERT INTO tblJob (JobNumber, JobName) VALUES (1, 'Job 1')
INSERT INTO tblJob (JobNumber, JobName) VALUES (2, 'Job 2')
INSERT INTO tblJob (JobNumber, JobName) VALUES (3, 'Job 3')
GO
CREATE TABLE tblSelect (
SelectID int NOT NULL,
Selected int NOT NULL
)
--CREATE UNIQUE NONCLUSTERED INDEX PrimaryKey ON tblSelect (
-- SelectID ASC
--)
ALTER TABLE tblSelect ADD DEFAULT (0) FOR Selected
GO
-- Create a view joining the selection table tblSelect to the data table tblJob
CREATE OR ALTER VIEW vwJobSelected
AS
SELECT
JobNumber,
JobName,
Selected
FROM tblJob
LEFT OUTER JOIN tblSelect ON tblJob.JobNumber = tblSelect.SelectID
GO
-- The application initialises tblSelect with the Job keys
DELETE FROM tblSelect
GO
INSERT INTO tblSelect(SelectID, Selected)
SELECT JobNumber, 0 FROM tblJob
GO
-- The user selects the second record via an editing form in the application
UPDATE vwJobSelected
SET Selected=1
WHERE JobNumber = 2
GO
-- Show the selected record(s)
SELECT *
FROM vwJobSelected
WHERE Selected=1
GO
This works well in Access but will not work in SQL Server for the following reasons:
The selection table tblSelect is in the frontend database and therefore is unique to the user/workstation/running instance of the application. However, with the migrated database this would create a "heterogeneous join", which would be evaluated on the client instead of the server and therefore drag the whole data table tblJob across the network.
Access allows the query with the left outer join to be modified seamlessly. However SQL Server will not let you delete records since there are multiple base tables. This would require an INSTEAD OF trigger on the view.
So my question is: What is the recommended method for performing this user selection process in SQL Server?
I have considered:
a) Adding columns for UserName (= SUSER_NAME()) and Workstation (= HOST_NAME()) to tblSelect so that there are user/workstation-specific selections available. However I think Access will require a unique index on SelectID to keep the query updatable, and this presents problems.
b) Temporary tables. However I get the impression that these only persist for the duration of a connection to the server and the Access application will surely disconnect and reconnect to the server within the duration of running a session.
c) Asking the good folk at Stack Overflow for advice! :-)
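For reference, a sketch of what option (a) might look like, assuming a composite primary key keeps one selection row per user per job (constraint names are made up):

```sql
CREATE TABLE tblSelect (
    UserName sysname NOT NULL CONSTRAINT DF_tblSelect_User DEFAULT SUSER_SNAME(),
    SelectID int NOT NULL,
    Selected bit NOT NULL CONSTRAINT DF_tblSelect_Sel DEFAULT 0,
    CONSTRAINT PK_tblSelect PRIMARY KEY CLUSTERED (UserName, SelectID)
)
GO
-- The view filters to the current user's rows, so the join runs on the server:
CREATE OR ALTER VIEW vwJobSelected
AS
SELECT JobNumber, JobName, Selected
FROM tblJob
LEFT OUTER JOIN tblSelect
    ON tblJob.JobNumber = tblSelect.SelectID
   AND tblSelect.UserName = SUSER_SNAME()
GO
```

Whether Access keeps this view updatable still depends on the unique-index question raised in (a).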
Kind regards
Neil Sargent
I have a problem transferring data from a SQL Server 2008 R2 database to a SQL Server 2012 database with a different schema. Here is the scenario:
Database 1
Database 1 has tables Firm and Client, with primary keys FirmId and ClientId as int.
FirmId (int) is used as a foreign key in the Client table.
Database 2
Database 2 has the same tables Firm and Client, but the primary keys FirmId and ClientId are uniqueidentifier.
FirmId (uniqueidentifier) is used as a foreign key in the Client table.
Problem
The problem is not copying data from database 1's tables into database 2's tables; the problem is maintaining the foreign key from the Firm table into the Client table, because the data type changes.
I am using SQL Server 2008 R2 and SQL Server 2012.
Please help me resolve / find a solution. I really appreciate your valuable time and effort. Thanks.
I'll take a stab at it even though I am far from an expert on SQL Server - here is a general procedure (you will have to repeat it for every table where you need to replace INT with UID, of course...).
I will use Table A to refer to the parent (Firm, if I understand your example clearly) and Table B to refer to the child (Client, I believe).
1. Delete the relations pointing to Table A
2. Remove the identity from the id column of Table A
3. Create a new uniqueidentifier column on Table A
4. Generate values for the uniqueidentifier column
5. Add the new uniqueidentifier column to all the child tables (Table B)
6. Use the OLD id column to map each child record and update the new uniqueidentifier value from the parent table
7. Drop all the old id columns
8. Recreate the relations
Having said that, I just want to add a warning to you: converting to UID is, according to some, a very bad idea. But if you really need to do that, you can script (and test) the above mentioned procedure.
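As a sketch, steps 3-6 for the Firm/Client pair from the question might look like this (the FirmGuid column name is invented for illustration):

```sql
-- Steps 3-4: add a GUID column on the parent; NEWID() generates a distinct
-- value per existing row.
ALTER TABLE Firm ADD FirmGuid uniqueidentifier NOT NULL DEFAULT NEWID();

-- Step 5: add the matching column on the child.
ALTER TABLE Client ADD FirmGuid uniqueidentifier NULL;

-- Step 6: use the old int key to map each child row to its parent's GUID.
UPDATE c
SET c.FirmGuid = f.FirmGuid
FROM Client AS c
INNER JOIN Firm AS f ON c.FirmId = f.FirmId;

-- Steps 7-8 (after verifying the mapping): drop the old int columns and
-- recreate the primary key and foreign key on the GUID columns.
```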
I am new to SQL server and am wanting to make sure I am using best practices. What I am doing is creating 7 tables.
(Transaction,Customer,Business,Vehicle,Seller,Lien,Mailto)
Transaction is my main table where it creates a TransactionID. Then in the other 6 tables I will also have a TransactionID column so I can link them all together.
In the other 6 tables they each have their own ID as well.
For example
(CustomerID, BusinessID, VehicleID, SellerID, LienID, MailtoID)
My question is: in my Transaction table, do I have to list all of those IDs, or does having just the TransactionID allow them all to connect?
Transaction Table 1 Example
ID
Type
DateTime
Transaction Table 2 Example
ID
Type
CustomerID
BusinessID
VehicleID
MailtoID
SellerID
LienID
DateTime
(I want the TransactionID to be created automatically and then filled in for the other tables as those records are submitted, using foreign keys I believe.)
Any help on this would be greatly appreciated!!
Do I have to list all of those IDs? No.
Having just the TransactionID in each of the other tables is enough to connect them.
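A minimal sketch of that design, with each child table carrying a TransactionID foreign key (column types are assumed, and only one child table is shown):

```sql
CREATE TABLE [Transaction] (
    TransactionID int IDENTITY(1,1) PRIMARY KEY,
    [Type] nvarchar(50) NOT NULL,
    [DateTime] datetime2 NOT NULL
);

CREATE TABLE Customer (
    CustomerID int IDENTITY(1,1) PRIMARY KEY,
    TransactionID int NOT NULL
        REFERENCES [Transaction] (TransactionID)  -- links Customer to its transaction
);
-- Repeat the TransactionID foreign key for Business, Vehicle, Seller, Lien and Mailto.
```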
I am trying to build a Master-Detail form using TADODataSet, with TDBText for the master table and TDBGrid for the details table (something similar to an Orders form, where the master table includes the order header and the details table includes the order items).
The master primary key is an identity column (autoincrement field).
When I add a new record to the master table and then try to add records to the details table before posting the master record, I get the error "non-nullable column cannot be updated to null". This happens because the master table's primary key value is not yet known, since I didn't post the master record. If I post the master record before adding the detail records, the error does not appear.
How do I work around this problem?
I am connecting the master table with the details table using the following properties:
Both DataSets have cursor location: Client
Details Table :
DataSource : Master Table DataSource
Master Records : Id (Primary key of the master table)
IndexFieldNames : OrderId (the field in Details Table that indicates to which master record does this detail record belong to)
Lock Type : BatchOptimistic
Please help me
Thanks in advance
Yazan Al-lahham
Well,
You should do something like this (pseudo-code):
1 - start a transaction
2 - post master record
3 - get the id inserted on master
4 - pass the master id to detail dataset
5 - post detail record
6 - If it worked, commit transaction. Otherwise, rollback transaction.
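A T-SQL sketch of those steps (table and column names are invented for illustration; SCOPE_IDENTITY() returns the identity value generated by the last insert in the current scope):

```sql
BEGIN TRANSACTION;  -- step 1

-- Step 2: post the master record.
INSERT INTO Orders (CustomerName) VALUES (N'Acme');

-- Step 3: get the id inserted on the master.
DECLARE @OrderId int = SCOPE_IDENTITY();

-- Steps 4-5: pass the master id to the detail record and post it.
INSERT INTO OrderItems (OrderId, Product, Qty) VALUES (@OrderId, N'Widget', 3);

COMMIT TRANSACTION;  -- step 6 (ROLLBACK on error)
```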
Just a side note: the CTP of the new SQL Server codenamed 'Denali' will bring the SEQUENCE feature, working much like a Firebird generator does. So this task will become MUCH easier:
When you get the command from gui to start an insert, get an ID from sequence
Use it to fill the PK field of master record
Post master record
While you have detail records to insert
Fill detail(s) record
Post detail record
Commit transaction
Very niiiice...
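For completeness, a sketch of how the SEQUENCE variant might look (SQL Server 2012 syntax; object names are invented):

```sql
CREATE SEQUENCE OrderSeq AS int START WITH 1 INCREMENT BY 1;

BEGIN TRANSACTION;

-- Get an ID from the sequence before inserting, then use it for both tables.
DECLARE @OrderId int = NEXT VALUE FOR OrderSeq;

INSERT INTO Orders (OrderId, CustomerName) VALUES (@OrderId, N'Acme');
INSERT INTO OrderItems (OrderId, Product, Qty) VALUES (@OrderId, N'Widget', 3);

COMMIT TRANSACTION;
```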