Precedence of SQL queries in Linq-2-SQL Context - winforms

I have a table that holds Key (LocumID) - Value (AvailableDate) pairs. The table has its own primary key (OID). A key/value pair cannot repeat, so I added a unique composite constraint on LocumID and AvailableDate.
OID  LocumID  AvailableDate        AvailabilityStatusID
---  -------  -------------------  --------------------
1    1        2009-03-02 00:00:00  1
2    2        2009-03-04 00:00:00  1
3    1        2009-03-05 00:00:00  1
4    1        2009-03-06 00:00:00  1
5    2        2009-03-07 00:00:00  1
6    7        2009-03-09 00:00:00  1
7    1        2009-03-11 00:00:00  1
8    1        2009-03-12 00:00:00  2
9    1        2009-03-14 00:00:00  1
10   1        2009-03-16 00:00:00  1
Now, in real usage, I find that simply deleting the old value(s) for a LocumID-AvailableDate pair and inserting a new one is the best approach. However, in LINQ to SQL, I call DataContext.SubmitChanges() at the end, and that's where SQL Server throws an exception complaining that the statement conflicted with the constraint.
Now, if I check my code, I'm deleting the pre-existing pair first, then creating a new one and inserting it. Why is the constraint being violated in this transaction?
Also, if I delete the pre-existing pair first, then SubmitChanges and then proceed to create a new one, insert it, and SubmitChanges... all is well.
However, I lose the benefit of a complete transaction.
Any ideas? Regards.

SubmitChanges doesn't guarantee that the statements are executed in that order. However, instead of relying on LINQ to SQL to create a transaction, simply create one yourself:
using (TransactionScope scope = new TransactionScope())
{
    // first call
    context.SubmitChanges();
    // other work
    context.SubmitChanges();
    // mark the transaction complete, otherwise it rolls back on Dispose
    scope.Complete();
}
LINQ to SQL checks for an ambient transaction before creating one of its own.
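The same delete-then-insert idea can be sketched outside .NET. Here is a minimal illustration using Python's sqlite3 stdlib module (not from the original answers; the table and column names follow the question): when the DELETE is guaranteed to execute before the INSERT inside one transaction, the unique constraint is never violated.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Availability (
        OID INTEGER PRIMARY KEY,
        LocumID INTEGER NOT NULL,
        AvailableDate TEXT NOT NULL,
        AvailabilityStatusID INTEGER NOT NULL,
        UNIQUE (LocumID, AvailableDate)  -- the composite constraint
    )""")
conn.execute("INSERT INTO Availability (LocumID, AvailableDate, AvailabilityStatusID) "
             "VALUES (1, '2009-03-02', 1)")
conn.commit()

# Delete-then-insert inside ONE transaction: because the DELETE reaches the
# engine before the INSERT, the unique constraint never fires, and a failure
# anywhere rolls the whole transaction back.
with conn:  # commits on success, rolls back on exception
    conn.execute("DELETE FROM Availability "
                 "WHERE LocumID = 1 AND AvailableDate = '2009-03-02'")
    conn.execute("INSERT INTO Availability (LocumID, AvailableDate, AvailabilityStatusID) "
                 "VALUES (1, '2009-03-02', 2)")

print(conn.execute("SELECT AvailabilityStatusID FROM Availability").fetchone()[0])  # → 2
```

The ordering guarantee comes from issuing the statements yourself inside the transaction, which is exactly what the explicit TransactionScope buys you in the LINQ to SQL case.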

Auto Generate Unique ID separate from Primary Key

I have a relational database (which I'm new to creating) with many tables for baseball statistical tracking and analysis. The table I'm working on currently is the tbl_PitchLog table, which tracks the pitches in an at-bat.
What I'd like to do is eliminate the need for an at-bat table and just use a unique at-bat ID for a group of pitches. I have a Pitch_ID field, but I'd like SQL Server to generate a new AtBat_ID that I can then reference to get all the pitches of a particular at-bat.
For Example
Pitch_ID | Pitch_Number | Result
---------|--------------|----------
1        | 1            | Foul
2        | 2            | Foul
3        | 3            | Strike
4        | 1            | Ball
5        | 2            | Flyout
6        | 1            | Groundout
To be:
Pitch_ID | AtBat_ID | Pitch_Number | Result
---------|----------|--------------|----------
1        | 1        | 1            | Foul
2        | 1        | 2            | Foul
3        | 1        | 3            | Strike
4        | 2        | 1            | Ball
5        | 2        | 2            | Flyout
6        | 3        | 1            | Groundout
You don't specify which version of SQL Server you're using, but SQL Server 2012 introduced sequences: you can create an at-bat sequence, get the next value when the at-bat changes, and use that value for your inserts.
CREATE SEQUENCE at_bat_seq
    AS INTEGER
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE <what you want the max to be>
    NO CYCLE;

DECLARE @at_bat int;
SET @at_bat = NEXT VALUE FOR at_bat_seq;
Most of the qualifiers are self-explanatory; [NO] CYCLE specifies whether the value starts over at the minimum once it hits the maximum. As defined above, you'll get an error when you reach the max; if you want it to start over instead, specify CYCLE instead of NO CYCLE.
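For engines without CREATE SEQUENCE, the same pattern can be simulated with a counter table. Here is a minimal sketch in Python's sqlite3 module (SQLite has no sequences; the table name `at_bat_seq` mirrors the T-SQL above but is otherwise an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A one-row counter table standing in for the sequence object;
# on SQL Server 2012+ you would use NEXT VALUE FOR at_bat_seq instead.
conn.execute("CREATE TABLE at_bat_seq (val INTEGER NOT NULL)")
conn.execute("INSERT INTO at_bat_seq VALUES (0)")
conn.commit()

def next_at_bat_id(conn):
    """Increment and return the counter inside one transaction."""
    with conn:
        conn.execute("UPDATE at_bat_seq SET val = val + 1")
        return conn.execute("SELECT val FROM at_bat_seq").fetchone()[0]

print(next_at_bat_id(conn))  # → 1
print(next_at_bat_id(conn))  # → 2
```

Each call hands back the next AtBat_ID, which you then reuse for every pitch row in that at-bat.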
Alternatively, create the tbl_PitchLog table with Pitch_ID as its primary key, which is at the same time a foreign key taken from the main table.
What you're looking for is a one-to-one relationship.

SQL Server - Slowly Changing dimension join

I have a fact table and an employee "tier" table, let's say.
The fact table looks sort of like:

employee_id  call  date
-----------  ----  --------
Mark         1     1-1-2017
Mark         2     1-2-2017
John         3     1-2-2017
Then there needs to be a data structure for 'tier level' - a slowly changing dimension table. I want to keep this simple -- I can change the structure of this table to whatever, but for now I've created it as such.
employee_id  tier1_start ...  tier2_start ...  tier3_start
-----------  ---------------  ---------------  -----------
Mark         5-1-2016
John         6-1-2016         8-1-2016
Lucy         6-1-2016         10-1-2016
Two important notes: this table operates under the assumption that a promotion will only occur once (no demotions and re-promotions), and it's possible to jump from tier 1 straight to tier 3.
I was trying to come up with the best possible query for deriving a 'tier' dimension (denormalization) for the fact table.
For instance, I want to see the Tier 1 metrics for February, or the Tier 2 metrics for February. Obviously the historically-changing tier dimension must be linked.
The clumsiest way I can think of doing this for now is simply joining the fact table to the tier table on employee_id,
and then writing an even clumsier case statement:
case
    when isnull(tier3_start, '0') < date then 'T3'
    when isnull(tier2_start, '0') < date then 'T2'
    when isnull(tier1_start, '0') < date then 'T1'
    else 'other'
end as tier_level
Yes, as you can see this is very clumsy.
I'm thinking maybe I need to change the structure of this a bit.
You're probably better off splitting your tier table in two.
So have a Tier table like this:
TierID  Tier
------  ------
1       Tier 1
2       Tier 2
3       Tier 3
And an EmployeeTier table:
ID  EmpID  TierID  TierDate
--  -----  ------  ------------
1   1      1       Jun 1, 2016
2   1      3       Oct 2, 2016
3   2      1       Jul 10, 2016
4   2      2       Nov 11, 2016
Now you can query the EmployeeTier table and filter on the TierID you're looking for.
This also gives you the ability to promote/demote multiple times. You simply filter by the employee and sort by date to find the current tier.
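The "filter by employee, sort by date" lookup can be sketched concretely. A minimal example in Python's sqlite3 module (table and column names follow the answer; the ISO date strings are an assumption so that string comparison sorts correctly): the tier in effect on a given date is simply the latest TierDate at or before that date.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmployeeTier "
             "(ID INTEGER PRIMARY KEY, EmpID INT, TierID INT, TierDate TEXT)")
conn.executemany(
    "INSERT INTO EmployeeTier (EmpID, TierID, TierDate) VALUES (?, ?, ?)",
    [(1, 1, '2016-06-01'),   # employee 1 starts in Tier 1
     (1, 3, '2016-10-02'),   # ...and jumps to Tier 3
     (2, 1, '2016-07-10'),
     (2, 2, '2016-11-11')])
conn.commit()

# Tier in effect for employee 1 on 2017-02-01:
# latest TierDate at or before the as-of date wins.
row = conn.execute("""
    SELECT TierID FROM EmployeeTier
    WHERE EmpID = ? AND TierDate <= ?
    ORDER BY TierDate DESC
    LIMIT 1""", (1, '2017-02-01')).fetchone()
print(row[0])  # → 3
```

Joining this as-of lookup to the fact table by employee and date replaces the case statement over the three tierN_start columns.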

Enforce uniqueness on column based on contents from another column

I have the typical Invoice/InvoiceItems master/detail tables (the ones every book and tutorial out there uses as examples). I also have a "Proforma" table holding data similar to invoices; proformas are sometimes linked to invoices. The link is per invoice item, with a column optionally referencing a proforma, something like this:
id | id_invoice | id_proforma | amount   ... and a bunch of irrelevant stuff
---|------------|-------------|-------
1  | 1          | null        | 100
2  | 1          | null        | 40
3  | 2          | 3           | 1000
4  | 3          | 4           | 473
5  | 3          | 4           | 139
Basically, each item in an invoice can be linked to a proforma. There is also a business rule that says that each proforma can be used in only one invoice (it's OK to use it in many items within the same invoice).
Currently that rule is enforced on the application side, but this has concurrency problems: two users could take the same proforma at the same time and the system would let it pass. My intention is to have the DB validate this in addition to some front-end visual clues, but so far I've failed to come up with an approach for this particular case.
Filtered unique indexes would serve well, except that the same proforma can be used twice if it's for the same invoice. So my question is: how can I make the DB server enforce that rule?
The database engine can be SQL Server 2012 or later, any edition from Express to Enterprise.
You can create a user-defined scalar function that returns TRUE if the proforma id and invoice id combination are valid. Then put a check constraint on the table requiring the function to return true. Like this (tweak to fit your table name/needs):
-- Here's the function:
create function dbo.svfIsCombinationValid (
    @id_invoice int
    , @id_proforma int
)
returns bit
as
begin;
    declare @return bit = 1;
    if exists (
        select 1
        from dbo.YourInvoiceProformaXRefTable
        where id_proforma = @id_proforma
        and id_invoice <> @id_invoice
    )
    begin;
        set @return = 0;
    end;
    return @return;
end;
After that, you can alter the table and add the check constraint:
alter table dbo.YourInvoiceProformaXRefTable
    add constraint CK_YourInvoiceProformaXRefTable_UniqueInvoiceProforma
    check (dbo.svfIsCombinationValid(id_invoice, id_proforma) = 1);
This is fine with NULLs (multiple id_invoice rows can have NULL id_proforma values). But if both values are non-null, the combination must either be new or match the invoice of the existing rows.
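The same business rule can be sketched on an engine that doesn't allow a CHECK constraint to query other rows. Below is a minimal illustration in Python's sqlite3 module, swapping the scalar function for a trigger (the trigger name and table layout are assumptions, not from the answer): the trigger aborts any insert that reuses a proforma on a different invoice, while reuse within the same invoice passes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE InvoiceItems (
    id INTEGER PRIMARY KEY,
    id_invoice INTEGER NOT NULL,
    id_proforma INTEGER,
    amount INTEGER
);
-- SQLite CHECK constraints cannot reference other rows, so a trigger
-- stands in for the T-SQL scalar function + check constraint above.
CREATE TRIGGER trg_one_invoice_per_proforma
BEFORE INSERT ON InvoiceItems
WHEN NEW.id_proforma IS NOT NULL
BEGIN
    SELECT RAISE(ABORT, 'proforma already used by another invoice')
    WHERE EXISTS (SELECT 1 FROM InvoiceItems
                  WHERE id_proforma = NEW.id_proforma
                    AND id_invoice <> NEW.id_invoice);
END;
""")

conn.execute("INSERT INTO InvoiceItems (id_invoice, id_proforma, amount) VALUES (2, 3, 1000)")
conn.execute("INSERT INTO InvoiceItems (id_invoice, id_proforma, amount) VALUES (2, 3, 500)")  # same invoice: OK

blocked = False
try:
    # proforma 3 on a different invoice: the trigger rejects it
    conn.execute("INSERT INTO InvoiceItems (id_invoice, id_proforma, amount) VALUES (9, 3, 40)")
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # → True
```

Because the check runs inside the engine at insert time, two concurrent users can no longer both claim the same proforma.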

SQLite Row_Num/ID

I have a SQLite database that I'm trying to use data from; basically, there are multiple sensors writing to the database, and I need to join each row to the preceding row to calculate the value difference for that time period. The only catch is that the ROWID field can't be used for the join anymore, since more sensors have begun writing to the database.
In SQL Server it would be easy to use Row_Number and partition by sensor. I found this topic: How to use ROW_NUMBER in sqlite and implemented the suggestion:
select id, value,
       (select count(*) from data b where a.id >= b.id and b.value = 'yes') as cnt
from data a
where a.value = 'yes';
It works but is very slow. Is there anything simple I'm missing? I've tried joining on the time difference and creating a view. Just at wits' end! Thanks for any ideas!
Here is sample data:
ROWID  SensorID  Time      Value
-----  --------  --------  -----
1      2         1-1-2015  245
2      3         1-1-2015  4456
3      1         1-1-2015  52
4      2         2-1-2015  325
5      1         2-1-2015  76
6      3         2-1-2015  5154
I just need to join row 6 with row 2, row 3 with row 5, and so forth, based on the SensorID.
The subquery can be sped up with an index of the correct structure.
In this case, the column with the equality comparison must come first, and the one with the inequality comparison second:
CREATE INDEX xxx ON MyTable(SensorID, Time);
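Putting the pieces together, here is a minimal runnable sketch in Python's sqlite3 module (the index follows the answer; pairing each reading with the previous one per sensor via a correlated subquery is my addition, and the ISO date strings are an assumption so that string comparison sorts correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (SensorID INT, Time TEXT, Value INT)")
conn.executemany("INSERT INTO data VALUES (?, ?, ?)", [
    (2, '2015-01-01', 245), (3, '2015-01-01', 4456), (1, '2015-01-01', 52),
    (2, '2015-02-01', 325), (1, '2015-02-01', 76),   (3, '2015-02-01', 5154),
])
# The index the answer recommends: equality column first, range column second.
conn.execute("CREATE INDEX idx_sensor_time ON data (SensorID, Time)")

# Pair each reading with the previous one for the same sensor and diff them;
# the first reading per sensor has no predecessor, so its diff is NULL.
rows = conn.execute("""
    SELECT a.SensorID, a.Time, a.Value,
           a.Value - (SELECT b.Value FROM data b
                      WHERE b.SensorID = a.SensorID AND b.Time < a.Time
                      ORDER BY b.Time DESC LIMIT 1) AS diff
    FROM data a
    ORDER BY a.SensorID, a.Time""").fetchall()
for r in rows:
    print(r)
# e.g. sensor 3's second reading shows diff 5154 - 4456 = 698
```

The index lets the correlated subquery seek straight to the previous row for the sensor instead of scanning, which is what makes the count(*) version so slow. On SQLite 3.25 or newer, the LAG window function would express the same thing even more directly.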

EF or SQL Server - Computed / Continuous Column

I'm not sure how to do this or where it should be implemented.
I have a table with columns ID, TypeID, and AIndex. Both ID and TypeID are supplied when creating a new record. AIndex is a continuous (sequential) counter based on both IDs.
To illustrate, here's an example:
ID  TypeID  AIndex
--  ------  ------
1   1       1
1   1       2
1   2       1
1   3       1
2   1       1
2   1       2
The big question is: are a lot of people writing to this table at a time?
When you compute the AIndex column from existing data, with something like max(AIndex) + 1, the risk is inducing locks.
If only a few inserts are made into the table, you can write the increment code in your favorite layer, EF or the DB. But if your table has a high write ratio, you should look for an alternate technique, like a counters table or something else. You can maintain that counters table with EF if you want.
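The max(AIndex) + 1 approach can be sketched concretely. A minimal illustration in Python's sqlite3 module (table name `Items` and the helper are assumptions, not from the answer): the read and the insert happen inside one transaction, which is exactly the step that serializes writers and causes the contention mentioned above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items "
             "(ID INT, TypeID INT, AIndex INT, PRIMARY KEY (ID, TypeID, AIndex))")

def insert_item(conn, id_, type_id):
    # max(AIndex) + 1 per (ID, TypeID) pair, computed and inserted in one
    # transaction; under heavy concurrency this read-then-write is where
    # locking appears, which a dedicated counters table avoids.
    with conn:
        (nxt,) = conn.execute(
            "SELECT COALESCE(MAX(AIndex), 0) + 1 FROM Items "
            "WHERE ID = ? AND TypeID = ?", (id_, type_id)).fetchone()
        conn.execute("INSERT INTO Items VALUES (?, ?, ?)", (id_, type_id, nxt))
        return nxt

print(insert_item(conn, 1, 1))  # → 1
print(insert_item(conn, 1, 1))  # → 2
print(insert_item(conn, 1, 2))  # → 1
```

The composite primary key doubles as a safety net: if two writers ever race to the same AIndex, the loser fails with a constraint violation instead of silently duplicating a value.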
