I have a relational database (which I'm new to creating) with many tables relating to baseball statistical tracking and analysis. The table I'm currently working on is the tbl_PitchLog table, which tracks the pitches in an at-bat.
What I'd like to do is eliminate the need for an at-bat table and just use a unique AtBat_ID for a group of pitches. I have a Pitch_ID field, but I'd like SQL Server to generate a new AtBat_ID that I can then reference to get all the pitches of a particular at-bat.
For example:
Pitch_ID | Pitch_Number | Result
---------+--------------+----------
1        | 1            | Foul
2        | 2            | Foul
3        | 3            | Strike
4        | 1            | Ball
5        | 2            | Flyout
6        | 1            | Groundout
To be:
Pitch_ID | AtBat_ID | Pitch_Number | Result
---------+----------+--------------+----------
1        | 1        | 1            | Foul
2        | 1        | 2            | Foul
3        | 1        | 3            | Strike
4        | 2        | 1            | Ball
5        | 2        | 2            | Flyout
6        | 3        | 1            | Groundout
You don't specify which version of SQL Server you're using, but SQL Server 2012 introduced sequences: you can create an at-bat sequence, get the next value when the at-bat changes, and use that value for your inserts.
CREATE SEQUENCE at_bat_seq
    AS INTEGER
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE <what you want the max to be>
    NO CYCLE;

DECLARE @at_bat int;
SET @at_bat = NEXT VALUE FOR at_bat_seq;
Most of the qualifiers are self-explanatory; the [NO] CYCLE specifies whether the value will start over at the min when it hits the max. As defined above, you'll get an error when you get to the max. If you just want it to start over when it gets to the max, then specify CYCLE instead of NO CYCLE.
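For example, a minimal sketch of using the sequence for your inserts (the tbl_PitchLog column names are taken from the question; the table definition itself is assumed):

-- Grab one sequence value per at-bat and reuse it for every pitch in it.
-- Assumes tbl_PitchLog has an AtBat_ID column and Pitch_ID is an identity.
DECLARE @at_bat int = NEXT VALUE FOR at_bat_seq;

INSERT INTO tbl_PitchLog (AtBat_ID, Pitch_Number, Result)
VALUES (@at_bat, 1, 'Foul'),
       (@at_bat, 2, 'Foul'),
       (@at_bat, 3, 'Strike');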
Create the tbl_PitchLog table with Pitch_ID as its primary key, which is at the same time a foreign key referencing the main table.
What you're looking for is a one-to-one relationship.
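A minimal sketch of that shape (the main table name tbl_Pitch is an assumption):

-- Pitch_ID is both the primary key and a foreign key to the main table,
-- which enforces the one-to-one relationship.
CREATE TABLE tbl_PitchLog (
    Pitch_ID     int         NOT NULL PRIMARY KEY
                 REFERENCES tbl_Pitch (Pitch_ID),
    Pitch_Number int         NOT NULL,
    Result       varchar(20) NOT NULL
);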
Related
I am working on row-level security in my database. I have two tables. Row-based security is implemented on data_table and only returns rows that the user can see.
data_table:
data_id  name     role
-----------------------------
1        test     USER
2        another  ADMIN
3        yep      USER
type_table:
type_id  name
-----------------
1        this
2        is
3        a
4        type
EXECUTE AS USER = 'USER';
SELECT * FROM data_table;
returns rows 1 and 3 only. If you execute as ADMIN, all of the rows are returned. This is working properly in my database.
However, my issue is my bridging table.
data_type_table:
data_type_id  data_id  type_id
------------------------------
1             1        2
2             1        3
3             2        1
4             2        2
5             3        1
6             3        4
As of right now
EXECUTE AS USER = 'USER';
SELECT COUNT(data_type_id) FROM data_type_table;
returns 6 because it's looking at all 6 rows in the table. I'm trying to set it up in such a way that user USER will only see rows in data_type_table which are referencing rows where data_table.role = 'USER' (this means that the select count query would return 4). What would be the simplest way to implement something like this?
My data_table will more than likely contain hundreds of thousands of rows. Efficiency could become a problem here.
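One possible sketch, assuming SQL Server 2016+ row-level security and that the existing filter on data_table compares role to something like USER_NAME() (both are assumptions): add a second filter predicate that joins the bridging table through data_table, so data_type_table inherits the same visibility rule.

-- Rows in data_type_table are visible only if the referenced data_table
-- row is visible under the same rule.
CREATE FUNCTION dbo.fn_data_type_predicate (@data_id int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    FROM dbo.data_table d
    WHERE d.data_id = @data_id
      AND d.role = USER_NAME(); -- assumed to mirror the existing predicate
GO

CREATE SECURITY POLICY dbo.data_type_policy
    ADD FILTER PREDICATE dbo.fn_data_type_predicate(data_id)
    ON dbo.data_type_table
    WITH (STATE = ON);

With that in place, the COUNT query above would see only the four rows whose data_id maps to data_table rows with role = 'USER'.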
I have a table that looks like this:
ID  A    B  Count
-----------------
1   abc  0  1
2   abc  0  2
3   abc  1  1
4   xyz  1  1
5   xyz  1  2
6   xyz  1  3
7   abc  1  2
8   abc  0  3
The "Count" column is incremented by one in the next insertion depending on the value of fields "A" and "B". so for example, if the next record I want to insert is:
ID  A    B  Count
-----------------
    abc  0
The value of Count will be 4.
I have been trying to find documentation about this, but I'm still quite lost in the MS SQL world! There must be a way to configure the "Count" column as a sequence dependent on the other two columns. My alternative would be to select all the records with A=abc and B=0, get the maximum "Count", and add 1 to it, but I suspect there must be another way related to properly defining the Count column when creating the table.
The first question is: Why do you need this?
There is ROW_NUMBER() which will - provided the correct PARTITION BY in the OVER() clause - do this for you:
DECLARE @tbl TABLE(ID INT, A VARCHAR(10), B INT);
INSERT INTO @tbl VALUES
 (1,'abc',0)
,(2,'abc',0)
,(3,'abc',1)
,(4,'xyz',1)
,(5,'xyz',1)
,(6,'xyz',1)
,(7,'abc',1)
,(8,'abc',0);

SELECT *
      ,ROW_NUMBER() OVER(PARTITION BY A, B ORDER BY ID) AS [Count]
FROM @tbl
ORDER BY ID;
The problem is: What happens if a row is changed or deleted?
If you write these values into a persistent column and one row is physically removed, you'll have a gap. Okay, one can live with this... But if a value in A is changed from abc to xyz (the same applies to B, of course), the whole approach breaks.
If you still want to write this into a column, you can use the ROW_NUMBER() from above to fill these values initially, and a TRIGGER to set the next value with your SELECT MAX()+1 approach for new rows, as sketched below.
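A rough sketch of such a trigger (the table name dbo.tbl is an assumption, and fully concurrent inserts would still need extra locking):

-- Fill Count for newly inserted rows: MAX() over the pre-existing rows,
-- plus a ROW_NUMBER() so multi-row inserts within one (A,B) group stay unique.
CREATE TRIGGER trg_tbl_SetCount ON dbo.tbl
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.[Count] = ISNULL(x.MaxCount, 0) + i.rn
    FROM dbo.tbl t
    JOIN (SELECT ID, ROW_NUMBER() OVER (PARTITION BY A, B ORDER BY ID) AS rn
          FROM inserted) i ON i.ID = t.ID
    OUTER APPLY (SELECT MAX(e.[Count]) AS MaxCount
                 FROM dbo.tbl e
                 WHERE e.A = t.A AND e.B = t.B
                   AND e.ID NOT IN (SELECT ID FROM inserted)) x;
END;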
If the set of combinations is limited you might create a SEQUENCE (needs v2012+) for each.
But - to be honest - the whole issue smells a bit.
I'm not sure how to do this and where it should be implemented.
I have a table with columns: ID, TypeID, AIndex. Both the ID and TypeID are supplied when creating a new record. The AIndex is a continuous column based on both IDs.
To illustrate, here is an example:
ID  TypeID  AIndex
------------------
1   1       1
1   1       2
1   2       1
1   3       1
2   1       1
2   1       2
The big question is:
Are there a lot of people writing to this table at a time?
When you compute the AIndex column from previous data in the database, with something like MAX(AIndex) + 1, the risk is that you induce locking.
If only a few inserts are made to the table, you can write the increment code in your favorite layer, EF or the database. But if your table has a high write ratio, you should look for an alternative technique, like a counters table or something else. You can still maintain such a counters table with EF if you want.
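For the low-write case, a sketch of the MAX()+1 approach serialized with lock hints (the table name and variable values are assumptions):

-- Compute the next AIndex and insert in one statement; UPDLOCK/HOLDLOCK
-- stop two concurrent writers from reading the same MAX value.
DECLARE @ID int = 1, @TypeID int = 1;

BEGIN TRANSACTION;

INSERT INTO dbo.tbl (ID, TypeID, AIndex)
SELECT @ID, @TypeID, ISNULL(MAX(AIndex), 0) + 1
FROM dbo.tbl WITH (UPDLOCK, HOLDLOCK)
WHERE ID = @ID AND TypeID = @TypeID;

COMMIT;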
I have a table that holds Key (LocumID) - Value (AvailableDate) pairs. The table has its own primary key (OID). The key/value pair cannot be repeated, so I added a unique composite constraint on LocumID & AvailableDate.
OID  LocumID  AvailableDate        AvailabilityStatusID
---  -------  -------------------  --------------------
1    1        2009-03-02 00:00:00  1
2    2        2009-03-04 00:00:00  1
3    1        2009-03-05 00:00:00  1
4    1        2009-03-06 00:00:00  1
5    2        2009-03-07 00:00:00  1
6    7        2009-03-09 00:00:00  1
7    1        2009-03-11 00:00:00  1
8    1        2009-03-12 00:00:00  2
9    1        2009-03-14 00:00:00  1
10   1        2009-03-16 00:00:00  1
Now, in real usage, I find that simply eliminating the old value(s) for a LocumID-AvailableDate pair and inserting a new one is the best approach. However, in LINQ to SQL, I call DbContext.SubmitChanges() at the end, and that's where SQL Server throws an exception complaining that the statement conflicted with the constraint.
Now, if I check my code, I'm deleting the pre-existing pair first, then creating a new one and inserting it. Why is the constraint being violated in this transaction?
Also, if I delete the pre-existing pair first, then SubmitChanges, and then proceed to create a new one, insert it, and SubmitChanges again... all is well.
However, I lose the benefit of a complete transaction.
Any ideas? Regards.
SubmitChanges doesn't guarantee the calls will be made in that order. However, instead of relying on LINQ to SQL to create a transaction, simply make one yourself:
using (TransactionScope scope = new TransactionScope())
{
    // first call
    context.SubmitChanges();

    // other work
    context.SubmitChanges();

    // mark the transaction complete; without this it rolls back on Dispose
    scope.Complete();
}
LINQ to SQL will first check whether an ambient transaction exists before creating one of its own.
I need to search a table for items which have all of my desired values in a column, i.e.
I have a table:
ID    : 1 2 3 3 2 2 2 1 1 3
VALUE : 5 6 5 3 6 7 2 1 9 0
I want to give a stored procedure a list of values, for example ("6,7,2"), and have it return all IDs that have all of the given values; in this case it would return only 2.
If I wanted to search for those which have at least one of the values, I know I could use IN, but for matching all of the values I found nothing.
Thank you in advance
Afshin Arefi
In SQL Server 2008 you can use table-valued parameters.
These allow you to pass a table of values to a stored procedure and treat it like any other table (use it in sub-queries, joins, etc.).
In terms of the query: if you do use a table-valued parameter, you can query it for its size (how many rows), use IN in conjunction with a GROUP BY on the ID field, and a HAVING clause that counts the number of rows.
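A sketch of how that could look (the type, procedure, and table names are assumptions):

-- Relational division: keep only the IDs that match every value in @values.
CREATE TYPE dbo.ValueList AS TABLE (Value int PRIMARY KEY);
GO
CREATE PROCEDURE dbo.FindIdsWithAllValues
    @values dbo.ValueList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT t.ID
    FROM dbo.tbl t
    WHERE t.VALUE IN (SELECT Value FROM @values)
    GROUP BY t.ID
    HAVING COUNT(DISTINCT t.VALUE) = (SELECT COUNT(*) FROM @values);
END;
GO

-- Usage: ID 2 is the only one that has all of 6, 7 and 2.
DECLARE @v dbo.ValueList;
INSERT INTO @v VALUES (6), (7), (2);
EXEC dbo.FindIdsWithAllValues @v;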