I'm working on leave-management software, and my problem is that I need to reset the leave days to the default number of days (30) after one year. Would you please help me with that?
PS: I'm using VB.NET and SQL Server.
create table Addemployees
(
    Fname varchar(500),
    Lname varchar(500),
    ID int not null identity(1, 1) primary key,
    CIN varchar(500),
    fromD date,
    toD date,
    Email varchar(500),
    phone varchar(500),
    Leave_num int
)
This is the table that contains the column Leave_num, which holds the leave numbers entered by the user.
update Addemployees
set Leave_num = 30
As for how you trigger this logic: there are many ways you could go about it. You'll need some sort of scheduler, like a SQL Server Agent job or whatever else you have at your disposal, to run this process on a recurring, scheduled basis. The key thing is not to keep updating Leave_num if it's already been updated. You could maintain an extra column on each row indicating the last time it was reset. This is probably the simplest approach, but if it's truly an all-or-nothing type of thing, and those dates will all be the same, that's sort of a waste of space.
You could instead create a separate table that just contains information about when these once-a-year jobs run, or use something like an Extended Property (which is a little more involved to set up).
Whatever solution you choose, just save off the date (or even just the year), and then when your process runs, if the difference from the last update is greater than a year (or if the year of the last update is less than the current year), run your update, then update however you're storing that information, be it a column, a separate table, or an extended property.
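For example, here is a minimal sketch of the extra-column approach (LastResetDate is an illustrative name, not part of your current schema); a SQL Server Agent job could run the UPDATE once a day:
ALTER TABLE Addemployees ADD LastResetDate date NULL
-- Job body: reset only rows never reset, or last reset a year or more ago
UPDATE Addemployees
SET Leave_num = 30,
    LastResetDate = GETDATE()
WHERE LastResetDate IS NULL
   OR DATEADD(year, 1, LastResetDate) <= GETDATE()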
I need to create a service, but I need help with the choice of tools.
Imagine a service in which users create data that has historical value (e.g. transactions). Other users can see this data, but they need proof that the data is real and has not been falsified by the users or even by the service.
Example:
User A creates a record with the number 42
A couple of months pass
User B sees this record and wants to be sure that the service can't update this record to any other number (say, 37)
The service has a 24-hour trust window: within that window it can even change user data created that day.
Question: Which instruments can help me to achieve that?
I was thinking about publishing daily backups (or reports?) that any user can download. A hash would be calculated from each report and inserted into the next backup, thus creating a chain of hashes. If the service changes anything in the past, the hashes in this chain will no longer converge. Of course, I'll create an open-source tool for easily comparing the diff between data sets and checking whether the chain is valid.
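To illustrate the chain (a hypothetical sketch only; the table, columns, and placeholder data are mine, not an existing design):
-- Each report row stores the previous report's hash, forming the chain
CREATE TABLE REPORT_CHAIN (
    ID INT PRIMARY KEY AUTO_INCREMENT,
    REPORT_DATE DATE NOT NULL,
    PREV_HASH CHAR(64),             -- NULL only for the very first report
    REPORT_HASH CHAR(64) NOT NULL   -- SHA-256 over PREV_HASH + the day's backup
);
-- Appending today's report; rewriting any older report breaks every later hash
SET @prev = (SELECT REPORT_HASH FROM REPORT_CHAIN ORDER BY ID DESC LIMIT 1);
INSERT INTO REPORT_CHAIN (REPORT_DATE, PREV_HASH, REPORT_HASH)
VALUES (CURRENT_DATE(), @prev,
        SHA2(CONCAT(COALESCE(@prev, ''), '<serialized daily backup>'), 256));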
Point of trust: there is one thing I'm afraid of. The service could use many databases simultaneously and rewrite all backups with all hashes at once (because the first backup has no hash of a previous one). So, to cover that case too, I'm thinking of storing the hashes somewhere the service can't change anything at all, for example in one of the existing blockchains (BTC, ETH, ...) from the official wallet of the service. Or maybe a DAG with some blockchain, like IOTA?
What do you think of this point of trust?
Can I achieve my goal in some simpler way (without a blockchain)? If so, which one?
What are the bottlenecks in this logic?
There are two participating variables here:
the timestamp at which the record is created, and
the data.
Solution premises:
Tamper-proof.
The data can be changed within the same GMT calendar day without violating the tamper-proof guarantee (this can be changed to a fixed window after creation).
An RDBMS as the data store (can be changed to any NoSQL store with minor modifications; the idea remains the same).
Doesn't depend on any other mechanism that can be faulty or error-prone.
Single-query verification.
## Proposed solution
Create the data table:
CREATE TABLE TEST(
ID INT PRIMARY KEY AUTO_INCREMENT,
DATA VARCHAR(64) NOT NULL,
CREATED_AT DATETIME DEFAULT CURRENT_TIMESTAMP()
);
Create the checksum table, which monitors tampering:
CREATE TABLE SIGN(
ID INT PRIMARY KEY AUTO_INCREMENT,
DATA_ID INT NOT NULL,
SIGNATURE VARCHAR(128) NOT NULL,
CREATED_AT DATETIME DEFAULT CURRENT_TIMESTAMP(),
UPDATED_AT TIMESTAMP
);
Create a trigger on insert of data:
/** Trigger on insert */
DELIMITER //
CREATE TRIGGER sign_after_insert
AFTER INSERT
ON TEST FOR EACH ROW
BEGIN
-- INSERT VAL
INSERT INTO SIGN(DATA_ID, `SIGNATURE`) VALUES(
NEW.ID, MD5(CONCAT (NEW.DATA, DATE(NEW.CREATED_AT)))
);
END; //
DELIMITER ;
Create a trigger for update of data:
-- UPDATE TRIGGER
DELIMITER //
CREATE TRIGGER SIGN_AFTER_UPDATE
AFTER UPDATE
ON TEST FOR EACH ROW
BEGIN
-- UPDATE VALS
IF (NEW.DATA <> OLD.DATA) AND (DATE(OLD.CREATED_AT) = CURRENT_DATE() ) THEN
UPDATE SIGN SET SIGNATURE=MD5(CONCAT(NEW.DATA, DATE(NEW.CREATED_AT))) WHERE DATA_ID=OLD.ID;
END IF;
END; //
DELIMITER ;
Test
Step 1: insert the data
INSERT INTO TEST(DATA) VALUES ('DATA2');
The signature of the data and the date at which it was created will appear as the signature in the SIGN table.
Step 2: update the data
The signature will be updated only if the value has changed and it is the SAME DAY.
UPDATE TEST SET DATA='DATA' WHERE ID =1;
Step 3: validate
You can always validate the data signature as follows:
SELECT MD5(CONCAT(T.DATA, DATE(T.CREATED_AT))) AS CHECKSUM, S.SIGNATURE
FROM TEST AS T
JOIN SIGN AS S ON S.DATA_ID = T.ID
WHERE S.ID = 1;
Output
| CHECKSUM | SIGNATURE |
| -------- | --------- |
| 2bba70178abdafc5915ba0b5061597fa | 2bba70178abdafc5915ba0b5061597fa |
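As a small extension (my addition, not part of the original test script), the same checksum expression can scan the whole table for tampered rows:
SELECT T.ID, T.DATA, S.SIGNATURE
FROM TEST AS T
JOIN SIGN AS S ON S.DATA_ID = T.ID
WHERE MD5(CONCAT(T.DATA, DATE(T.CREATED_AT))) <> S.SIGNATURE;
Any row returned is one whose data no longer matches its recorded signature.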
I have a table (in SQL Server 2014) that includes multiple running totals (by different dates). It's not an ideal design, but imagine a very large number of rows and users able to pick a specified time period: we don't want to calculate SUMs from the start of time to get the running total up to that period every time.
I am looking for an elegant way to update those running totals when multiple rows are updated.
The actual scenario is an account reconciliation: the table stores money transactions, for which we have the event date (e.g. when a thing was sold), the transaction date (e.g. the invoice date) and the payment date (when the invoice was paid). For each of these there is a running total, e.g. (much simplified):
CREATE TABLE MyTransaction (
Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
EventDate DATETIME NOT NULL,
TransactionDate DATETIME,
PaymentDate DATETIME,
Amount INT, -- assume whole numbers for sake of it
RunningTotalByEventDate INT,
RunningTotalByTransactionDate INT,
RunningTotalByPaymentDate INT,
IsCancelled BIT DEFAULT (0)
)
... with indexes on dates as needed, etc., and assume for the sake of example that the date/times are unique (in practice there are uniqueifiers and other stuff).
Inserting a transaction is fine(ish). The best I have come up with is three separate queries, each updating the running total by the relevant date, or one query with logic. So, after inserting a new row (with obviously-named variables passed into a stored proc)...
UPDATE MyTransaction SET RunningTotalByEventDate += @Amount
WHERE EventDate > @EventDate
and so on for the other two running totals, or a single query like...
UPDATE MyTransaction
SET RunningTotalByEventDate += CASE WHEN EventDate > @EventDate THEN @Amount ELSE 0 END,
    RunningTotalByTransactionDate += CASE WHEN TransactionDate > @TransactionDate THEN @Amount ELSE 0 END,
    RunningTotalByPaymentDate += CASE WHEN PaymentDate > @PaymentDate THEN @Amount ELSE 0 END
WHERE EventDate > @EventDate
   OR TransactionDate > @TransactionDate
   OR PaymentDate > @PaymentDate
Now I need to cancel transactions, e.g. an invoice is written off. The requirement is to leave the row in but remove its effect: the row stays with its Amount, but the cancelled flag is set and the row has no effect on the running totals. Unfortunately an invoice may have multiple transactions (e.g. several part payments), so there could be several transaction rows to update.
My best option so far for updating the multiple running totals is to loop/cursor around the (expected to be few) updated rows and reduce the subsequent running totals much as we increased them when adding a row, so each time around the loop we have the three update queries (or one with logic) to update the three running totals.
A single UPDATE won't work, since it will only update a target row once (and if two part payments are being cancelled, we need to update it twice to take off each amount). I've played variously with windowed functions but cannot see a way to do this neatly, set-wise, with a single query.
So given a list of MyTransaction.Id values to cancel (e.g. in a table, table variable or CSV string list), what's the best way to update the various running totals?
Any ideas (and apologies for the rambling question) are very welcome.
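One set-based idea (a sketch, not a tested answer; it assumes a @ToCancel table variable holding the Ids to cancel, and that those rows are not yet flagged): aggregate the cancelled amounts per surviving row first, so a single UPDATE applies each row's combined delta exactly once.
DECLARE @ToCancel TABLE (Id int PRIMARY KEY)

UPDATE m
SET RunningTotalByEventDate       -= d.EventDelta,
    RunningTotalByTransactionDate -= d.TxnDelta,
    RunningTotalByPaymentDate     -= d.PayDelta
FROM MyTransaction AS m
CROSS APPLY (
    -- sum every cancelled amount previously added into this row's totals
    SELECT
        ISNULL(SUM(CASE WHEN c.EventDate       < m.EventDate       THEN c.Amount END), 0) AS EventDelta,
        ISNULL(SUM(CASE WHEN c.TransactionDate < m.TransactionDate THEN c.Amount END), 0) AS TxnDelta,
        ISNULL(SUM(CASE WHEN c.PaymentDate     < m.PaymentDate     THEN c.Amount END), 0) AS PayDelta
    FROM MyTransaction AS c
    JOIN @ToCancel AS tc ON tc.Id = c.Id
) AS d
WHERE d.EventDelta <> 0 OR d.TxnDelta <> 0 OR d.PayDelta <> 0

-- then flag the cancelled rows themselves
UPDATE t
SET IsCancelled = 1
FROM MyTransaction AS t
JOIN @ToCancel AS tc ON tc.Id = t.Id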
I have 2 tables, Individual (IndividualId is the primary key) and IndividualAudit. Every time an update is made on the Individual table,
a record goes to the audit table. There are many columns that can be modified, but I am interested only in picking up records where the SSN was modified.
I'm using the query below:
SELECT I.IndividualId, I.ssn FROM Individual I
INNER JOIN IndividualAudit A
ON (I.IndividualId = A.IndividualId AND A.UpdateDate = GETDATE())
WHERE I.updatedate = GETDATE() AND I.ssn <> A.ssn
GROUP BY I.IndividualId, I.ssn
Can someone please tell me whether my approach is correct?
Actually, I was searching on Google and got scared looking at the link below:
Query help when using audit table
The person who answered a similar query on that post seems to be very good with SQL, and compared with his answer my approach looks quite naive.
So I just want to know where I am wrong in my understanding.
Thanks a lot
Rather than fixing the query, I'd suggest using an update trigger aimed specifically at changes to the SSN column you're concerned about. The query you've supplied won't work because of the date comparison (as user2159471 has pointed out). But even after you get the query fixed, you'll still have to run it in order to see which SSNs have been updated.
Instead, use a SQL update trigger that, perhaps, inserts an entry into a third table each time an individual's SSN gets changed. Then you can look at that table any time you like, or run a report against it, to see whose SSN has been changed.
The trigger code looks like this:
CREATE TRIGGER MyCoolNewTrigger ON Individual
FOR UPDATE
AS
SET NOCOUNT ON
IF UPDATE(SSN)
BEGIN
    -- inserted and deleted are row sets, so use a set-based INSERT
    -- that also handles multi-row updates
    INSERT INTO IndividualUpdateLog (NewSSN, OldSSN, ChangeDate)
    SELECT i.SSN,       -- the new SSN
           d.SSN,       -- the old SSN being changed
           GETDATE()
    FROM inserted AS i
    INNER JOIN deleted AS d ON d.IndividualId = i.IndividualId
    WHERE ISNULL(i.SSN, '') <> ISNULL(d.SSN, '')
END
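For example (hypothetical values), a normal update then leaves an audit trail you can query:
UPDATE Individual
SET SSN = '987-65-4321'
WHERE IndividualId = 42

SELECT NewSSN, OldSSN, ChangeDate
FROM IndividualUpdateLog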
Say I have a table on an Informix DB:
create table password_audit (
    username CHAR(20),
    old_password CHAR(20),
    new_password CHAR(20),
    update_date DATETIME YEAR TO FRACTION
);
I need the update_date field to be in milliseconds (or seconds maybe - same question applies) because there will be multiple updates of the password on the same day.
Say, I have a nightly batch job that wants to retrieve all records from the password_audit table for today.
To increase performance, I want to put an index on the update_date column. If I do this:
CREATE INDEX pw_idx ON password_audit(update_date);
and run this SQL:
SELECT *
FROM password_audit
WHERE DATE(update_date) = mdy(?,?,?)
(where ?, ?, ? are the month, day and year passed in by my batch job)
then I don't think my index will be used - is that right?
I think I need to create an index something like this:
CREATE INDEX pw_idx ON password_audit(DATE(update_date));
- is that right?
Because you are forcing the server to convert two values to DATE, not DATETIME, it probably won't use an index.
You would do best to generate the SQL as:
SELECT *
FROM password_audit
WHERE update_date
BETWEEN DATETIME(2010-08-02 00:00:00.00000) YEAR TO FRACTION(5)
AND DATETIME(2010-08-02 23:59:59.99999) YEAR TO FRACTION(5)
That's rather verbose. Alternatively, and maybe slightly more easily:
SELECT *
FROM password_audit
WHERE update_date >= DATETIME(2010-08-02 00:00:00.00000) YEAR TO FRACTION(5)
AND update_date < DATETIME(2010-08-03 00:00:00.00000) YEAR TO FRACTION(5)
Both of these should be able to use the index on the update_date column. You can experiment with dropping some of the trailing zeroes from the literals, but I don't think you'll be able to remove them all - but see what the SET EXPLAIN ON output tells you.
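For instance, a quick way to check the plan (SET EXPLAIN is standard Informix; by default the plan is appended to sqexplain.out in the current directory):
SET EXPLAIN ON;
SELECT *
FROM password_audit
WHERE update_date >= DATETIME(2010-08-02 00:00:00.00000) YEAR TO FRACTION(5)
AND update_date < DATETIME(2010-08-03 00:00:00.00000) YEAR TO FRACTION(5);
SET EXPLAIN OFF;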
Depending on your server version, you might need to run UPDATE STATISTICS after creating the index before the optimizer uses it at all; that is more of a problem on older (say 10.00 and earlier) versions of Informix than on the current (11.10 and later) versions.
I didn't see 'date_to_accounts_ni' defined in your password_audit table.
What datatype/length is it?
Your first index on password_audit.update_date is adequate; why would you want to index
(DATE(update_date))?
I have a customer table, and my requirement is to add a new varchar column that automatically obtains a random, unique value each time a new customer is created.
I thought of writing an SP that generates a random string, then checks and re-generates if the string already exists. But integrating the SP into the customer-record creation process would require transactional SQL at the code level, which I'd like to avoid.
Help, please?
edit:
I should've emphasized: the varchar has to be 5 characters long, holding numeric values between 1000 and 99999, and if the number is less than 10000, it is padded with 0 on the left.
If it has to be varchar, you can cast a uniqueidentifier to varchar.
To get a random uniqueidentifier, call NEWID().
Here's how you cast it:
CAST(NEWID() AS varchar(36))
EDIT
As per your comment to @Brannon:
Are you saying you'll NEVER have over 99k records in the table? If so, just make your PK an identity column, seed it with 1000, and take care of the "0" left-padding in your business logic.
This question gives me the same feeling I get when users won't tell me what they want done, or why; they only want to tell me how to do it.
"Random" and "unique" are conflicting requirements unless you create a serial list and then choose randomly from it, deleting the chosen value.
But what's the problem this is intended to solve?
With your edit/update, it sounds like what you need is an auto-increment and some padding.
Below is an approach that uses a bogus table, then adds an IDENTITY column (assuming that you don't have one) which starts at 1000, and then uses a computed column to give you the padding that makes everything work out as you requested.
CREATE TABLE Customers (
CustomerName varchar(20) NOT NULL
)
GO
INSERT INTO Customers
SELECT 'Bob Thomas' UNION
SELECT 'Dave Winchel' UNION
SELECT 'Nancy Davolio' UNION
SELECT 'Saded Khan'
GO
ALTER TABLE Customers
ADD CustomerId int IDENTITY(1000,1) NOT NULL
GO
ALTER TABLE Customers
ADD SuperId AS right(replicate('0',5)+ CAST(CustomerId as varchar(5)),5)
GO
SELECT * FROM Customers
GO
DROP TABLE Customers
GO
I think Michael's answer with the auto-increment should work well: your customers will get "01000", then "01001", then "01002", and so forth.
If you want to, or have to, make it more random, I'd suggest you create a table that contains all possible values, from "01000" through "99999". When you insert a new customer, use a technique (e.g. randomization) to pick one of the existing rows from that table (your pool of still-available customer IDs), use it, and remove it from the table (see the sketch below).
Anything else will become really bad over time. Imagine you've used up 90% or 95% of your available customer IDs: trying to randomly find one of the few remaining possibilities could lead to an almost endless retry of "is this one taken? Yes -> try the next one".
Marc
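A sketch of that pool approach in T-SQL (my illustration; the table and column names are hypothetical):
-- Pre-fill the pool once with every candidate value, '01000' through '99999'
CREATE TABLE CustomerIdPool (CustomerId varchar(5) PRIMARY KEY)

;WITH n AS (
    SELECT 1000 AS v
    UNION ALL
    SELECT v + 1 FROM n WHERE v < 99999
)
INSERT INTO CustomerIdPool (CustomerId)
SELECT RIGHT('0000' + CAST(v AS varchar(5)), 5)
FROM n
OPTION (MAXRECURSION 0)

-- Claim an ID: delete one randomly chosen row and return its value
;WITH pick AS (
    SELECT TOP (1) CustomerId
    FROM CustomerIdPool
    ORDER BY NEWID()
)
DELETE FROM pick
OUTPUT deleted.CustomerId
The OUTPUT clause hands the chosen value back to the caller, and deleting through the CTE keeps the pick-and-remove a single atomic statement.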
Does the random string data need to be in a certain format? If not, why not use a uniqueidentifier?
insert into Customer ([Name], [UniqueValue]) values (@Name, NEWID())
Or use NEWID() as the default value of the column.
EDIT:
I agree with @rm: use a numeric value in your database, and handle the conversion to string (with padding, etc.) in code.
Try this:
ALTER TABLE Customer ADD AVarcharColumn varchar(50)
CONSTRAINT DF_Customer_AVarcharColumn DEFAULT CONVERT(varchar(50), GETDATE(), 109)
It returns a date and time up to milliseconds, which would be enough in most cases.
Do you really need a unique value?