I have a Statistic table with these fields: Id, UserId, DateStamp, Data.
There is also a User table in the database which has a CreditsLeft (int) field. I need to create a function (let's name it FindNewRecordsAndUpdate) which will read the Statistic table every 10 minutes from my application and decrease the CreditsLeft field by the number of new Statistic records found for the specified user.
My main concern is: when I execute the FindNewRecordsAndUpdate function the next time, how do I find the new records in the Statistic table and skip the already counted ones? I could add a Counted (bool) field to Statistic and set it to True for already "used" records, but maybe there is a better solution without adding a new field?
There are at least 3 other options:
Use a trigger, so that when rows are inserted into the Statistic table, the balance in User is automatically updated.
Just do an aggregate on demand over the Statistic table to get the SUM(Data).
Use an indexed view to "pre-calculate" the SUM in point 2.
Personally, I'd go for point 2 (and point 3 depending on query frequency) to avoid denormalised data in the User table.
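A minimal sketch of option 2, assuming a hypothetical InitialCredits column on the User table holding each user's starting balance, and that every Statistic row costs one credit:

    -- Compute the remaining credits on demand instead of storing CreditsLeft.
    -- InitialCredits is a hypothetical column for the user's starting balance.
    SELECT u.Id,
           u.InitialCredits - COUNT(s.Id) AS CreditsLeft   -- or SUM(s.Data) if Data holds a cost
    FROM   [User] u
    LEFT JOIN Statistic s ON s.UserId = u.Id
    GROUP BY u.Id, u.InitialCredits;

Because the value is derived at query time there is nothing to mark as "counted" and no scheduled FindNewRecordsAndUpdate job is needed; the indexed view in point 3 just materialises this aggregate if the query becomes too frequent.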
My use case is that one of the columns in my table has multiple categories included. For example, entry 1 may say "budget, schedule", entry 2 may say "schedule, quality", and entry 3 may say "schedule". I can also change it so that budget, schedule and quality become boolean columns.
I would like to create a drop-down option that lists each category individually, and when one of the categories is selected, I want the table to filter based on the selection in the drop-down. So when someone selects schedule, all 3 entries in this scenario will be displayed.
Do you believe this is possible to do in Data Studio?
Thank you
Create a parameter cat_select to filter the data.
Create a calculated field cat_test with
CONTAINS_TEXT(LOWER(categories), LOWER(cat_select))
Generate a filter for that field cat_test to be true.
My goal is to design a portion of a database that captures the time an activity occurs or the time a record is updated. I would also like the database to set certain field values of new records in one table based on field values from the record of another table or query.
For the first goal, there will be 4 entities: user, subject, activityLog (intermediate entity for a many-to-many between user and subject with an updatedTime field in addition to the primary keys), and a violation entity. The violation entity will also have both users and subjects as foreign keys.
Each time a user adds a new subject record into the violation table or updates an existing record in the violation table, I would like the database to programmatically select the current record's values and copy them into a new record in the activityLog (essentially duplicating the entire record, or just the field values I choose) and set the current system date/time in its updatedTime field.
For the second goal, my agency has business rules that impose penalties for violations, and the penalties are assessed based on first, second, and third offenses. For example, if a subject commits 5 violations and 2 of the 5 are of the same violation type, then the penalty for the 2nd occurrence of those 2 matching violations should be elevated to a second-offense penalty (all others will remain at 1st offense if no other violation type occurs 2 or more times).
What I'd like the database to do is select the subjectID and violationID from the activityLog table, group by subjectID, and count the number of violationIDs, something like the sketch below. After typing this out, I am realizing that is basically a query. So, the results of this query will tell me how many times an individual committed a violation, and I'd write VBA code to update a table's record that contained the queried data (this table would be permanent... I have no clue what type of query this would be... an update query, perhaps).
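A hedged sketch of that query, assuming the activityLog table and the subjectID/violationID column names described above; grouping by both subject and violation type gives the count needed for the first/second/third-offense rule:

    -- Count how many times each subject has committed each violation type.
    -- Table and column names are the ones described in the question.
    SELECT subjectID, violationID, COUNT(*) AS offenseCount
    FROM activityLog
    GROUP BY subjectID, violationID;

In Access this would simply be a saved select query; the VBA that pushes the results into the permanent table would then run an update or append query against it.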
Based on the descriptions I have provided above, how would this design be rated as far as good/bad/efficient/inefficient? Please advise.
To set this up: I have roughly 12,000 rows of data that capture sales detail activity. I have an Excel function in a column to the right of the data that identifies the invoice number based on unique qualifiers (if the transaction meets the qualifier it lists the invoice number and, if not, assigns ""). Of the 12,000 rows of data, maybe 50 will qualify. I then need to use the invoice numbers that qualify to pull sales and COGS data for all the invoices that match the qualifying invoice numbers. Another explanation is necessary at this point: an invoice that qualifies will be listed twice, once in a row that contains the information identifying whether it qualifies but holds no sales data, and a second time in a row with the sales data (I do not know why it does it this way). So, in other words, the row of data with the sales information I need to capture does not contain the data that allows me to recognize that an invoice qualifies. Each month adds the current month's sales data to the population, so 12,000 rows this month, 16,000 next month, and so on.
I was hoping to create a table that pulls only the qualifying invoice numbers from the sales data (it would basically create a table of 50 rows, in my example, while ignoring the 11,950 rows where the cell equals ""). I can do a simple pivot table, but next month I need it to recognize the new population of data and update based on the additional rows of new monthly data as well as the rows of data from prior months.
Any solutions/ideas are much appreciated!
task_set is a table with two columns (id, task):
id task
1 shout
2 bark
3 walk
4 run
Assume there is another table with two columns (employee, task_order).
task_order is an ordered set of tasks, for example (2,4,3,1).
Generally, the task_order is unchanged, but sometimes tasks may be inserted or deleted, e.g. (2,4,9,3,1) or (2,4,1).
How should such a database be designed? I mean, how do I realize the ordered set?
If, and ONLY if, you don't need to search inside the task_order column or update one of its values (i.e. change 4,2,3 to 4,2,1), keeping that column as a delimited string might be an easy solution.
However, if you ever plan on searches or updates for specific values inside the task_order, then you had better normalize that structure into a table that holds employee id, task id, and task order.
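A minimal sketch of that normalized structure, with hypothetical table and column names; the position of each task in an employee's sequence is stored explicitly instead of as a delimited string:

    -- One row per (employee, task) pair, with an explicit position column.
    CREATE TABLE employee_task (
        employee_id INT NOT NULL,
        task_id     INT NOT NULL,  -- references task_set(id)
        task_order  INT NOT NULL,  -- position of the task in this employee's list
        PRIMARY KEY (employee_id, task_order)
    );

    -- The ordered set (2,4,3,1) for a hypothetical employee 7:
    INSERT INTO employee_task (employee_id, task_id, task_order) VALUES
        (7, 2, 1), (7, 4, 2), (7, 3, 3), (7, 1, 4);

Inserting or deleting a task then becomes a normal INSERT/DELETE plus a renumbering of task_order (or you can leave gaps, e.g. 10, 20, 30, to make insertions cheaper).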
I've been asked by my client to manage update history for each column/field in one of our SQL Server tables. I've managed "batch" versions of entire rows before, but haven't done this type of backup/history before. They want to be able to keep track of changes for each column/field in a row of a table. I could use some help in the most efficient way to track this. Thanks!
The answer is triggers and history tables, but the best practice depends on what data your database holds and how often and by how much it is modified.
Essentially, every time a record in a table is updated, an update trigger (attached to the table) gets notified of what the old record looked like and what the new record will look like. You can then write the change history as new records in another table (e.g. tblSomething_History). Note: if updates to your tables are done via stored procs, you could write the history from there, but the problem with this is that if another stored procedure updates your table as well, then the history won't be written.
Depending on the number of fields / tables you want history for, you may do as suggested by #M.Al, you may embed your history directly into the base table through versioning, you may create a history table for each individual table, or you may create a generic history table such as:
| TblName | FieldName | NewValue | OldValue | User | Date Time |
Getting the modified time is easy, but it depends on your security setup to determine which user changed what. Keeping the history in a separate table means less impact on retrieving the current data; since it is already separated, you do not need to filter it out. But if you need to show history most of the time, this probably won't have the same effect.
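A minimal sketch of one such per-table trigger in T-SQL, assuming a hypothetical base table dbo.Customer (primary key Id, audited column Email) and a generic history table dbo.ChangeHistory laid out as above; a real trigger would repeat the comparison for every audited column or hand the work off to a shared stored procedure:

    -- Hypothetical audit trigger: records old/new values of one column.
    CREATE TRIGGER trg_Customer_History
    ON dbo.Customer
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO dbo.ChangeHistory (TblName, FieldName, NewValue, OldValue, [User], [DateTime])
        SELECT 'Customer', 'Email', i.Email, d.Email, SUSER_SNAME(), GETDATE()
        FROM inserted i
        JOIN deleted d ON d.Id = i.Id                      -- Id assumed to be the primary key
        WHERE ISNULL(i.Email, '') <> ISNULL(d.Email, '');  -- only log actual changes
    END;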
Unfortunately you cannot add a single trigger to all tables in a database; you need to create a separate trigger for each, but they can then call a single stored procedure to do the guts of the actual work.
Another word of warning: automatically loading all the history associated with a table can dramatically increase the load required. Depending on the type of data stored in your tables, the history may become substantially larger than the base table. I have encountered and been affected by numerous applications that became unusable because the history tables were needlessly loaded whenever the base table was, and given that the change history for a table can run into hundreds of entries per item, that's how much the load time increased.
Final note: this is a strategy which is easy to absorb if built into your application from the ground up, but be careful bolting it onto an existing solution, as it can have a dramatic impact on performance if not tailored to your requirements, and it can cost more than the client would expect it to.
I worked on a similar database not long ago where no row is ever deleted; every time a record was updated, it actually added a new row and assigned it a version number. Something like the following:
Each table will have two columns
Original_ID | Version_ID
Each time a new record is added, it gets assigned a sequential Version_ID (a unique column) and the Original_ID column remains NULL. Every subsequent change to this row actually inserts a new row into the table with a higher Version_ID, and the Version_ID that was assigned when the record was first created is stored in Original_ID.
If you have situations where records need deleting, use a BIT column Deleted and set its value to 1 when a record is supposed to be deleted; this is called soft deletion.
Also add a column, something like LastUpdate (datetime), to keep track of the time each change was made.
This way you will end up with all versions of a row, starting from when it was first inserted until it is (soft) deleted.
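A minimal sketch of what one such versioned table might look like, using a hypothetical Product table and the column conventions described above:

    -- Every change is a new row; rows are never physically deleted.
    CREATE TABLE Product (
        Version_ID  INT IDENTITY(1,1) PRIMARY KEY, -- unique, sequential version number
        Original_ID INT NULL,                      -- NULL on first insert; later rows hold
                                                   -- the Version_ID of the first version
        Name        NVARCHAR(100) NOT NULL,
        Price       DECIMAL(10,2) NOT NULL,
        Deleted     BIT NOT NULL DEFAULT 0,        -- soft-delete flag
        LastUpdate  DATETIME NOT NULL DEFAULT GETDATE()
    );

    -- Latest version of each logical record (add Deleted = 0 to hide soft-deleted rows):
    SELECT p.*
    FROM Product p
    WHERE p.Version_ID = (SELECT MAX(p2.Version_ID)
                          FROM Product p2
                          WHERE COALESCE(p2.Original_ID, p2.Version_ID) =
                                COALESCE(p.Original_ID, p.Version_ID));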