I've been struggling with this for some days now. I think I get it now, but I wanted to check with you.
For every transaction there is an SKPaymentTransaction. In a regular purchase, the originalTransaction property is nil. In a restore or auto-renewal, originalTransaction points to the SKPaymentTransaction of the original purchase.
The tricky part, in my opinion, is the receipt that is received. Every transaction in the receipt contains a transaction_id and an original_transaction_id. In a one-time purchase they are the same; in a subscription, the original_transaction_id is the transaction_id of the first transaction with which the user subscribed.
So my first question: if I want to check the validity of a purchase in the receipt, the transactionID of the SKPaymentTransaction appears in the receipt ONLY if it is not a restore or renewal. Otherwise, the SKPaymentTransaction's transactionID is NOT in the receipt. But since in these cases the SKPaymentTransaction has an originalTransaction property, originalTransaction.transactionID appears in the receipt. Correct?
And now the thing I have been struggling with, my second question: the originalTransaction property of the SKPaymentTransaction doesn't necessarily have anything to do with the original_transaction_id in the receipt, correct? I mean, for a subscription with several renewals: if I restore them, I get an SKPaymentTransaction with a transaction ID that isn't in the receipt. Then I instead take the originalTransaction.transactionID of this SKPaymentTransaction and look for it in the receipt, but NOT in the original_transaction_id field, rather in the transaction_id field of the receipt, correct?
I hope I've got it now... I really think Apple's documentation is rather confusing here.
Restoring the transactions on your device will generate new, unique transaction_ids, so the original transaction_id will not be found after this if you do it. The same happens on different devices, e.g. iPad and iPhone. The web_order_line_item_id, however, will not change for these transactions, so use it if you need a stable identifier.
Yes, in your SKPaymentTransaction there is an originalTransaction property, and you can find your original_transaction_id in the receipt. However, this is not a good way to validate the receipt, because validation should be done on a server to avoid man-in-the-middle attacks.
I would recommend validating the receipt through a server, as Apple recommends.
There are a few ready-to-go solutions, like ours - Apphud - or RevenueCat.
Also, I would recommend reading up on what receipt validation is and why it's needed: https://blog.apphud.com/receipt-validation/
I'm trying to create an SSIS package which will periodically send data to another database. I want to send only new records (I need to keep track of sent records), so I created a status column in my source table.
I want my package to update this column after successfully sending the data, but I can't simply update all rows with "unsent" status, because some rows may have been added during package execution. I also can't use transactions (I mean isolation levels that would solve my problem: I can't use Serializable because I mustn't prevent users from adding new rows, and the Sequence Container doesn't support Snapshot).
My next idea was to use a recordset and, after sending the data to the other db, use it to get the IDs of the sent rows, but I couldn't find a way to use it as a data source.
I don't think I should set the status to "to send" and then update it to "sent"; I believe it would be too costly.
Now I'm thinking about using a temporary table, but I'm not convinced that this is the right way to do it. Am I missing something?
The Recordset is a destination; you cannot use it as a source in a Data Flow task.
But since the data is saved to a variable, it is available in the Control Flow.
After completing the Data Flow, come back to the Control Flow and create a Foreach Loop container that iterates over the recordset variable.
Read each recordset value into a variable and use it to run an update query.
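As a rough sketch (the table and column names here are just placeholders), the statement that the Execute SQL Task runs on each iteration could look like:

UPDATE dbo.SourceTable
SET Status = 'Sent'
WHERE Id = ?;   -- the ? parameter is mapped to the loop variable holding the current ID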
Also, see if the Lookup Transform can be useful to you. You can generate rows that match or don't match.
I will improve the answer based on discussions
What you have here is a very typical data mirroring problem. To start with, I would not simply have a boolean that signifies that a record was "sent" to the destination (mirror) database. At the very least, I would put a LastUpdated datetime column in the source table, and have triggers on that table, on insert and update, that put the system date into that column. Then, every day I would execute an SSIS package that reads the records updated in the last week, checks to see if those records exist in the destination, splitting the datastream into records already existing and records that do not exist in the destination. For those that do exist, if the LastUpdated in the destination is less than the LastUpdated in the source, then update them with the values from the source. For those that do not exist in the destination, insert the record from the source.
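As a rough sketch of that LastUpdated bookkeeping (the table, key, and trigger names here are made up, and it assumes recursive triggers are left at their default, off):

ALTER TABLE dbo.SourceTable ADD LastUpdated datetime NOT NULL
    CONSTRAINT DF_SourceTable_LastUpdated DEFAULT (GETDATE());
GO

CREATE TRIGGER trg_SourceTable_Touch
ON dbo.SourceTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- stamp every inserted/updated row with the current time
    UPDATE s
    SET LastUpdated = GETDATE()
    FROM dbo.SourceTable s
    JOIN inserted i ON i.Id = s.Id;   -- assumes an Id primary key
END;
GO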
It gets a little more interesting if you also have to deal with record deletions.
I know it may seem wasteful to read and check a week's worth every day, but your database should hardly feel it; it provides a lot of good double-checking and saves you a lot of headaches by giving you a simple, error-tolerant algorithm. If some record does not get transferred because of a hiccup on the network, no worries - it gets picked up the next day.
I would still set up the SSIS package as a server task that sends me an email with any errors, so that I can keep track. Most days you get no errors, and when there are errors, you can wait a day or resolve the cause and let the next day's run pick up the problems.
I am doing a similar thing; in my case, I have a status on the source record.
I read in all records with a status of "New".
Then I use an OLE DB Command to execute SQL on each row, changing the status to "In Progress" (in your WHERE clause, enter a ? as the value in the Component Property tab, and you can configure it as a parameter from the table row, like an ID or some PK, on the Column Mappings tab).
Once the records are processed, you can change all "In Progress" records to "Success" or something similar using another OLE DB Command.
Depending on what you are doing, you can use the status to mark records that errored at some point, and require further attention.
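As a rough sketch (table and column names are placeholders), the per-row statement behind the first OLE DB Command could be:

UPDATE dbo.SourceTable
SET Status = 'In Progress'
WHERE Id = ?;   -- the ? is mapped to the row's ID column on the Column Mappings tab

and the follow-up step that closes the batch out:

UPDATE dbo.SourceTable
SET Status = 'Success'
WHERE Status = 'In Progress';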
We want to know which rows in a certain table are used frequently, and which are never used. We could add an extra column for this, but then we'd get an UPDATE for every SELECT, which sounds expensive? (The table contains 80k+ rows, some of which are used very often.)
Is there a better and perhaps faster way to do this? We're using some old version of Microsoft's SQL Server.
This kind of logging/tracking is classically the application server's task. If you want to implement your own tracking architecture, do it in your own layer.
And in any case you will need an application server for it. You are not going to update the tracking field in the same transaction as the SELECT, are you? What about rollbacks? So you have some manager that first runs the SELECT and then writes the tracking information. And what is the point of saving the tracking information together with the entity info by sending it back to the DB? Save it into a file on the application server.
You could update the column in the table as you suggested, but if it were me I'd log the event to another table, i.e. the id of the record, datetime, userid (maybe ip address, browser version, etc.), just about anything else I could capture that was even possibly relevant. (For example, six months from now your manager decides not only does s/he want to know which records were used the most, s/he wants to know which users are using the most records, or what time of day that usage pattern occurs, etc.)
This type of information can be useful for things you've never even thought of down the road, and if it starts to grow large you can always roll it up and prune the table to a smaller one if performance becomes an issue. When possible, I log everything I can. You may never use some of this information, but you'll never wish you didn't have it available down the road, and it will be impossible to re-create historically.
In terms of making sure the application doesn't slow down, you may want to 'select' the data from within a stored procedure that also issues the logging command, so that the client is not doing two round trips (one for the select, one for the update/insert).
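A rough sketch of that single-roundtrip idea (all object names here are made up):

CREATE PROCEDURE dbo.GetRecordWithLogging
    @RecordId int,
    @UserId int
AS
BEGIN
    SET NOCOUNT ON;

    -- log the access first...
    INSERT INTO dbo.AccessLog (RecordId, AccessedAt, UserId)
    VALUES (@RecordId, GETDATE(), @UserId);

    -- ...then return the row, all in one roundtrip
    SELECT *
    FROM dbo.MyTable
    WHERE Id = @RecordId;
END;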
Alternatively, if this is a web application, you could use an async ajax call to issue the logging action which wouldn't slow down the users experience at all.
Adding a new column to track SELECTs is not good practice, because it may affect database performance, and database performance is one of the major concerns in database server administration.
Instead, you can use a very good database feature called auditing; it is easy to set up and puts less stress on the database.
Find more info: Here or From Here
Or Search for Database Auditing For Select Statement
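For reference, a minimal sketch of what this can look like with SQL Server Audit (only available in newer/Enterprise editions; all names and paths here are made up):

USE master;
GO
CREATE SERVER AUDIT UsageAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');
GO
ALTER SERVER AUDIT UsageAudit WITH (STATE = ON);
GO
USE MyDatabase;
GO
CREATE DATABASE AUDIT SPECIFICATION MyTableSelectAudit
FOR SERVER AUDIT UsageAudit
ADD (SELECT ON dbo.MyTable BY public)
WITH (STATE = ON);
GO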
Use another table as a key/value pair with two columns (e.g. id_selected, times) for storing the IDs of the records you select from your standard table, and increment the times value by 1 every time the records are selected.
To do this you'd have to do a mass insert/update of the selected ids from your select query in the counting table. E.g. as a quick example:
SELECT id, stuff1, stuff2 FROM myTable WHERE stuff1 = 'somevalue';

INSERT INTO countTable (id_selected, times)
SELECT id, 1 FROM myTable mt WHERE mt.stuff1 = 'somevalue'   # or just build a list of ids as values from your last result
ON DUPLICATE KEY UPDATE times = times + 1;
The ON DUPLICATE KEY syntax is off the top of my head and is MySQL. For a conditional insert-or-update in MSSQL you would need to use MERGE instead.
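For reference, a rough MSSQL equivalent of the snippet above using MERGE (same made-up table names):

MERGE countTable AS tgt
USING (SELECT id FROM myTable WHERE stuff1 = 'somevalue') AS src
    ON tgt.id_selected = src.id
WHEN MATCHED THEN
    UPDATE SET times = tgt.times + 1
WHEN NOT MATCHED THEN
    INSERT (id_selected, times) VALUES (src.id, 1);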
Question title is the crux of the problem. I have an Access 2007 (2003 format) front-end with a SQL Server 2008 Express back-end. The input form has a subform linked to another table. When adding a record in the main form, the PK field of the table (set to auto increment) is skipping about four IDs (I say about because sometimes it's three, sometimes five, sometimes 4).
To illustrate, if the last ID is 1234, the ID of the new record might be 1238.
I've stepped through the code, but haven't found anything that would indicate multiple saves or deletes. This problem manifests regardless of whether any records are added to the subform.
I realize this could be anything, but I'm hoping someone might have some insight or suggestions of avenues to investigate.
It could be that some INSERTs in the table are being done within a transaction and the transaction is then rolled back - this would use up IDs, leaving gaps.
Check the Identity specification on the database to see what the Identity Increment is. It may be incrementing at an interval greater than 1, though that wouldn't explain your odd numbering. It's a good starting point.
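For example, something along these lines will show the seed/increment and the current identity value (the table name is a placeholder):

SELECT name, seed_value, increment_value, last_value
FROM sys.identity_columns
WHERE object_id = OBJECT_ID('dbo.YourTable');

DBCC CHECKIDENT ('dbo.YourTable', NORESEED);   -- reports the current identity value without changing it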
Also, you could have people starting a record and then deleting it, much like the transactions being rolled back, with the identity counter being incremented either way.
Fire up SQL Profiler and observe the RPC:Completed and SQL:StmtCompleted events to see what exactly is getting executed.
SQL Server doesn't just skip numbers for no reason. It looks like something is being inserted and rolled back, or the inserts are failing.
I'm trying to create a LINQ to SQL class that represents the "latest" version of itself.
Right now, the table that this entity represents has a single auto-incrementing ID, and I was thinking that I would add a version number to the primary key. I've never done anything like this, so I'm not sure how to proceed. I would like to be able to abstract the idea of the object's version away from whoever is using it. In other words, you have an instance of this entity that represents the most current version, and whenever any changes are submitted, a new copy of the object is stored with an incremented version number.
How should I proceed with this?
If you can avoid keeping a history, do. It's a pain.
If a complete history is unavoidable (regulated financial and medical data or the like), consider adding history tables. Use a trigger to 'version' into the history tables. That way, you're not dependent on your application to ensure a version is recorded - all inserts/updates/deletes are captured regardless of the source.
If your app needs to interact with historical data, make sure it's readonly. There's no sense capturing transaction histories if someone can simply change them.
If your concern is concurrent updates, consider using a record change timestamp. When both User A and User B view a record at noon, they fetch the record's timestamp. When User A updates the record, her timestamp matches the record's so the update goes through and the timestamp is updated as well. When User B updates the record five minutes later, his timestamp doesn't match the record's so he's warned that the record has changed since he last viewed it. Maybe it's automatically reloaded...
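A rough sketch of that check using SQL Server's rowversion type (table and column names are made up):

ALTER TABLE dbo.Record ADD RowVer rowversion;
GO

DECLARE @Id int = 1,
        @NewValue nvarchar(100) = N'new value',
        @RowVerFromRead binary(8);

-- when the user loads the record, remember its RowVer
SELECT @RowVerFromRead = RowVer FROM dbo.Record WHERE Id = @Id;

-- the update only succeeds if nobody has changed the row since it was read
UPDATE dbo.Record
SET SomeValue = @NewValue
WHERE Id = @Id
  AND RowVer = @RowVerFromRead;

IF @@ROWCOUNT = 0
    RAISERROR('The record was changed by someone else since you loaded it.', 16, 1);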
Whatever you decide, I would avoid inter-mingling current and historic data.
Trigger resources per comments:
MSDN
A SQL Team Introduction
Stackoverflow's Jon Galloway describes a general data-change logging trigger
The keys to an auditing trigger are the virtual tables 'inserted' and 'deleted'. These tables contain the rows affected by an INSERT, UPDATE, or DELETE. You can use them to audit changes. Something like:
CREATE TRIGGER tr_TheTrigger
ON [YourTable]
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    IF EXISTS (SELECT * FROM inserted)
    BEGIN
        --this is an insert or update
        --your actual action will vary but something like this
        INSERT INTO [YourTable_Audit]
        SELECT * FROM inserted
    END

    IF EXISTS (SELECT * FROM deleted) AND NOT EXISTS (SELECT * FROM inserted)
    BEGIN
        --this is a delete; mark it in [YourTable_Audit] as required, for example:
        INSERT INTO [YourTable_Audit]
        SELECT * FROM deleted
    END
END
GO
The best way to proceed is to stop and seriously rethink your approach.
If you are going to keep different versions of the "object" around, then you are better off serializing it into an xml format and storing that in an XML column with a field for the version number.
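A rough sketch of that idea (all names here are made up) - the table and the "latest version" lookup might look like:

CREATE TABLE dbo.EntityVersion
(
    EntityId int NOT NULL,
    VersionNumber int NOT NULL,
    Payload xml NOT NULL,   -- the serialized object
    CreatedAt datetime NOT NULL CONSTRAINT DF_EntityVersion_CreatedAt DEFAULT (GETDATE()),
    CONSTRAINT PK_EntityVersion PRIMARY KEY (EntityId, VersionNumber)
);

-- the "latest" version of an entity is simply the row with the highest VersionNumber
DECLARE @EntityId int = 1;
SELECT TOP (1) *
FROM dbo.EntityVersion
WHERE EntityId = @EntityId
ORDER BY VersionNumber DESC;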
There are serious considerations, revolving around application maintenance, when trying to maintain versioned data in SQL Server.
UPDATE per comment:
Those considerations include: the inability to remove a field or change the data type of a field in future "versions". New fields are required to be nullable or, at the very least, to have a default value stored in the DB for them; as such, you will not be able to use them in a unique index or as part of the primary key.
In short, the only thing your application can do is expand, provided the expansion can be ignored by previous layers of code.
This is the classic problem of backwards compatibility that desktop software makers have struggled with for years, and it is the reason you might want to stay away from it.
I have designed database tables (normalised, on an MS SQL Server) and created a standalone Windows front end for an application that will be used by a handful of users to add and edit information. We will add a web interface to allow searching across our production area at a later date.
I am concerned that if two users start editing the same record then the last to commit the update would be the 'winner' and important information may be lost. A number of solutions come to mind but I'm not sure if I am going to create a bigger headache.
Do nothing and hope that two users are never going to be editing the same record at the same time. - Might never happen, but what if it does?
The editing routine could store a copy of the original data as well as the updates, and then compare them when the user has finished editing; if they differ, show the user and confirm the update. - Would require two copies of the data to be stored.
Add a last-updated DATETIME column and check that it matches when we update; if not, show the differences. - Requires a new column in each of the relevant tables.
Create an editing table that registers when users start editing a record, which will be checked to prevent other users from editing the same record. - Would require careful thought about program flow to prevent deadlocks and records becoming locked if a user crashes out of the program.
Are there any better solutions or should I go for one of these?
If you expect infrequent collisions, Optimistic Concurrency is probably your best bet.
Scott Mitchell wrote a comprehensive tutorial on implementing that pattern:
Implementing Optimistic Concurrency
A classic approach is as follows:

add a boolean field, "locked", to each table.
set this to false by default.
when a user starts editing, you do this:
    lock the row (or the whole table if you can't lock the row)
    check the flag on the row you want to edit
    if the flag is true then
        inform the user that they cannot edit that row at the moment
    else
        set the flag to true
    release the lock
when saving the record, set the flag back to false
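In SQL Server terms, a rough sketch of those steps (table and column names are made up):

DECLARE @Id int = 1, @CanEdit bit = 0;

BEGIN TRANSACTION;

-- lock the row while we inspect the flag
SELECT @CanEdit = CASE WHEN Locked = 0 THEN 1 ELSE 0 END
FROM dbo.Record WITH (UPDLOCK, HOLDLOCK)
WHERE Id = @Id;

IF @CanEdit = 1
    UPDATE dbo.Record SET Locked = 1 WHERE Id = @Id;

COMMIT TRANSACTION;   -- releases the lock; @CanEdit = 0 means someone else is editing

-- later, when the user saves (or abandons the edit), clear the flag:
-- UPDATE dbo.Record SET Locked = 0 WHERE Id = @Id;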
@Mark Harrison: SQL Server does not support that syntax (SELECT ... FOR UPDATE).
The SQL Server equivalent is the SELECT statement hint UPDLOCK.
See SQL Server Books Online for more information.
- First, create a field (update time) to store when the record was last updated.
- When any user selects a record, save the select time.
- Compare the select time with the update time field: if (update time) > (select time), that means another user updated this record after it was selected.
SELECT FOR UPDATE and equivalents are good provided you hold the lock for a microscopic amount of time, but for a macroscopic amount (e.g. the user has the data loaded and hasn't pressed 'save') you should use optimistic concurrency as above. (Which I always think is misnamed - it's more pessimistic than 'last writer wins', which is usually the only other alternative considered.)
Another option is to test that the values in the record that you are changing are still the same as they were when you started:
SELECT
    customer_nm,
    customer_nm AS customer_nm_orig
FROM demo_customer
WHERE customer_id = @p_customer_id

(display the customer_nm field and the user changes it)

UPDATE demo_customer
SET customer_nm = @p_customer_nm_new
WHERE customer_id = @p_customer_id
AND customer_nm = @p_customer_nm_old

IF @@ROWCOUNT = 0
    RAISERROR('Update failed: Data changed', 16, 1);
You don't have to add a new column to your table (and keep it up to date), but you do have to create more verbose SQL statements and pass new and old fields to the stored procedure.
It also has the advantage that you are not locking the records - because we all know that records will end up staying locked when they should not be...
The database will do this for you. Look at "select ... for update", which is designed just for this kind of thing. It will give you a write lock on the selected rows, which you can then commit or roll back.
For me, the best way is to have a lastupdate column (timestamp datatype).
When you select and then update, just compare this value.
Another advantage of this solution is that you can use this column to track when the data last changed.
I think it is not good to just create a column like isLock to check before updating.