Will Salesforce /deleted return only hard-deleted objects?

I was searching through the Salesforce documentation, and I know queryAll and getDeleted can return deleted records. What I don't understand is: will /deleted return only hard-deleted records from the last 15 days, or will it also return soft-deleted records?

If a record is hard deleted, it is no longer on the server, so it would not be available for return. Because of that, I believe it will only return soft-deleted records.
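For reference, the /deleted resource being asked about is the REST "Get Deleted" call, which takes an explicit start/end window; a sketch of the request (the API version, object name, and dates are illustrative):

    GET /services/data/v52.0/sobjects/Account/deleted/?start=2013-05-01T00:00:00Z&end=2013-05-15T00:00:00Z

The response lists deletedRecords (each with an id and deletedDate), along with earliestDateAvailable and latestDateCovered bounds for the window.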

Related

Does DynamoDB have support for tombstone record handling?

How do we handle the case in DynamoDB where an older put request for a key arrives after a newer delete operation has already been performed on that key?
Since the newer delete operation has already removed the record, the older put request will simply write the record again, which is not correct.
Does DynamoDB save any metadata for recently deleted records?
Is there any way to handle this case in DynamoDB? Or does anyone have a suggestion for how to handle it?
I am using the high-level DynamoDBMapper instead of the low-level API.
DynamoDB doesn't keep any 'metadata' about what has been previously deleted. Given that you can't create a new attribute to keep track of deleted status, the only two ways I can think of to handle this are:
Option 1: create your own 'metadata' table
Create a separate table to keep track of everything you have deleted. You'd have a main table where you store your regular data, and a main_deleted table where you store only the primary keys that have been deleted from the main table.
Before inserting any item into the main table, check for its primary_key in the main_deleted table. If it's there, do not proceed with the insert.
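A minimal sketch of Option 1 with the high-level mapper (the Main and MainDeleted mapper classes and the insertUnlessDeleted helper are hypothetical; both tables are assumed to share the same primary_key):

    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;

    // Hypothetical helper: Main maps the main table, MainDeleted maps the
    // main_deleted table, both keyed on primary_key.
    public boolean insertUnlessDeleted(DynamoDBMapper mapper, Main item) {
        MainDeleted tombstone = mapper.load(MainDeleted.class, item.getPrimaryKey());
        if (tombstone != null) {
            return false; // this key was deleted earlier; drop the stale put
        }
        mapper.save(item);
        return true;
    }

Note that a separate load-then-save is not atomic: two concurrent writers could interleave between the check and the save, so under concurrency you would still pair this with a conditional write like the one shown in the last answer below.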
Option 2: use the range_key
If your items have a sort key, you could use it to flag items as deleted without creating a new attribute. Suppose you have this item where the range_key is a UNIX timestamp:
    {
        "primary_key": "example",
        "timestamp": 1234567890,
        "other": "stuff"
    }
Instead of deleting the item primary_key=example, strip its attributes and replace it with a tombstone item that keeps the same primary_key but has its timestamp set to a value you'd never use for regular items, such as 0. When inserting or updating items in the table, query the table first to check that the item was not deleted before (in other words, that there is no item with timestamp=0 for that key).
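A sketch of that check with the high-level mapper (Item is a hypothetical mapper class with primary_key as hash key and timestamp as range key; the tombstone value 0 follows the convention above):

    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;

    // Hypothetical helper: before writing, look for a tombstone item at
    // (primary_key, timestamp = 0) and skip the write if one exists.
    public boolean putUnlessTombstoned(DynamoDBMapper mapper, Item item) {
        Item tombstone = mapper.load(Item.class, item.getPrimaryKey(), 0L);
        if (tombstone != null) {
            return false; // the key was tombstoned; ignore the stale put
        }
        mapper.save(item);
        return true;
    }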
I'm sure there will be plenty of other ways and maybe (or probably) those two above aren't the best ones. Use your creativity! ;)
I believe you can use a ConditionExpression to check whether the data already exists before putting the item. Both UpdateItem and PutItem accept a ConditionExpression that must hold before the action is performed.
Refer to the documentation on conditional writes.
In your case, you need to check whether the hash key already exists before performing the update operation.
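With the high-level DynamoDBMapper, that conditional write can be expressed through a DynamoDBSaveExpression; a minimal sketch (the item variable and the primary_key attribute name are assumptions):

    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBSaveExpression;
    import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
    import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;

    // Save only if no item with this hash key exists yet.
    DynamoDBSaveExpression ifNotExists = new DynamoDBSaveExpression()
            .withExpectedEntry("primary_key", new ExpectedAttributeValue(false));
    try {
        mapper.save(item, ifNotExists);
    } catch (ConditionalCheckFailedException e) {
        // An item with this key already exists; skip or handle the stale put.
    }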

SQL Server transactional replication filter

I have a very large table from which I need to extract a subset of records (the last 30 days of records) and replicate them to a second database for reporting purposes. I am currently using transactional replication, with a filter added to the published article to isolate the 30-day window, in order to get a near-real-time reporting environment.
The issue I have is that the replication seems to be incremental: the most recent records are added to the replica, but the older records are never removed, so the replica keeps growing.
Also, when a record that had fallen outside the filtering criteria is updated so that it matches the criteria again, replication fails with a "duplicate primary key" error.
How can I make this work so that the replica contains only the last 30 days of data?
Is the behaviour described above something I should expect to see?
Many thanks,
Well, the simplest way is not to use MSSQL's filter. Instead, replace the stored procedures used for update and delete with custom procedures, so that you don't get errors when deleting or updating rows that are absent from the replica. This is done from the article's advanced properties. For the delete case you can just use a MERGE and apply your filter criteria there.
Also set up a job that deletes the out-of-scope rows from the replica, as sketched below.
Of course, you will need to be very careful when making structural changes, but it is doable.
Another, uglier way is to keep SQL Server's stored procedures and just ignore the errors (via the distribution agent's -SkipErrors 2601:2627:20598 switch). This will again require a job to delete old rows, and it will not bring back into scope old rows that were merely updated. All in all, the first solution should be the best one.
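The cleanup job itself can be a plain scheduled delete on the subscriber; a sketch, with table and column names as placeholders:

    -- Scheduled job step on the subscriber: purge rows older than 30 days.
    DELETE FROM dbo.ReportingTable
    WHERE CreatedDate < DATEADD(DAY, -30, GETDATE());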
Hope it helps.

Recommended order of updating a database

Granted that while updating a database there are DELETED records, UPDATED records and INSERTED (new) records, what would be the recommended order for performing the updates?
I currently do DELETE, UPDATE, INSERT. My reason is as follows:
DELETED records are not in the DB anymore (at least from the user's point of view) so they should be removed first.
UPDATED records should go next because they are modifying existing data.
INSERTED records go last to fill the DB with the newest records.
Is this sequence satisfactory or a different order would be better?
This sequence is satisfactory; you don't need to change it.
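For illustration, applying a change set in that order inside one transaction might look like this (all table names here are hypothetical staging tables):

    BEGIN TRANSACTION;

    -- 1. Remove records the user deleted.
    DELETE t
    FROM dbo.Target AS t
    INNER JOIN dbo.StagedDeletes AS d ON d.Id = t.Id;

    -- 2. Apply modifications to existing records.
    UPDATE t
    SET t.Value = u.Value
    FROM dbo.Target AS t
    INNER JOIN dbo.StagedUpdates AS u ON u.Id = t.Id;

    -- 3. Add the new records last.
    INSERT INTO dbo.Target (Id, Value)
    SELECT i.Id, i.Value
    FROM dbo.StagedInserts AS i;

    COMMIT TRANSACTION;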

Conditional associated record deletion in afterDelete()

I have the following setup:
Models:
Team
Task
Change
TasksTeam
TasksTeam is a hasMany-through join model that associates teams with tasks. Change is used to record changes in the details of tasks, including when teams are attached or detached (i.e., through records in TasksTeam).
Deletes of Task also cascade to TasksTeam: if a task is deleted, all of its team associations should be deleted as well.
When a TasksTeam is deleted, it means a team has left a task, and I'd like to record a Change for that. I'm using TasksTeam's afterDelete() to record teams leaving. In TasksTeam's beforeDelete() I save the data to $this->predelete so it will be available in afterDelete().
Here is the non-working code in TasksTeam:
    public function afterDelete() {
        $team_id      = $this->predelete['TasksTeam']['team_id'];
        $task_role_id = $this->predelete['TasksTeam']['task_role_id'];
        $task_id      = $this->predelete['TasksTeam']['task_id'];
        // Wanted: only record a change if the task isn't deleted
        if ($this->Task->exists($task_id)) {
            $this->Task->Change->removeTeamFromTask($task_id, $team_id, $task_role_id);
        }
        return true;
    }
Problem:
When a task is deleted, the delete cascades to TasksTeam correctly. However, a Change is recorded even when the Task itself is being deleted. From another answer to something similar on SO, I think the reason is that the callbacks are called before Model::del(), meaning the task hasn't yet been deleted when TasksTeam's afterDelete() runs.
Question
How can I successfully save a Change only if the task isn't deleted?
Thanks in advance.
If the callbacks are getting called before the actual delete, I would maintain an associative array of flags keyed by task ID (or a set of task IDs), with entries added when beforeDelete is called on Task. Then you could create a method on Task, such as isDeleting(), which checks that array and tells you whether the task is in the process of being deleted.
Using the suggestion from #James Dunne, I ended up adding a tinyint field called is_deleted to the Task model and simply setting it to true in Task's beforeDelete(). I then check for this flag and only save a Change if the flag is false. It seems wasteful to add a field for something that only matters just before the record is deleted, but for my purposes it works fine. I think a "real" solution would involve the CakePHP Events System, avoiding the need for chained callbacks.
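A minimal sketch of that flag-based approach (CakePHP 2.x-style code; the is_deleted field is the one described above):

    // Task model: flag the row before the cascade deletes its TasksTeam rows.
    // $this->id is already set by delete() when beforeDelete() fires.
    public function beforeDelete($cascade = true) {
        $this->saveField('is_deleted', 1);
        return true;
    }

    // TasksTeam model: only record a Change when the parent task survives.
    public function afterDelete() {
        $task = $this->Task->find('first', array(
            'conditions' => array(
                'Task.id'         => $this->predelete['TasksTeam']['task_id'],
                'Task.is_deleted' => 0,
            ),
            'recursive' => -1,
        ));
        if ($task) {
            $this->Task->Change->removeTeamFromTask(
                $this->predelete['TasksTeam']['task_id'],
                $this->predelete['TasksTeam']['team_id'],
                $this->predelete['TasksTeam']['task_role_id']
            );
        }
        return true;
    }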

SQL Server 2000: Is there a way to tell when a record was last modified?

The table doesn't have a last-updated field, and I need to know when existing data was updated, so adding a last-updated field now won't help (as far as I know).
SQL Server 2000 does not keep track of this information for you.
There may be creative or fuzzy ways to guess this date, depending on your database model. But if you are talking about one table with no relation to other data, then you are out of luck.
You can't check for changes without some sort of audit mechanism; you are looking to extract information that has not been collected. If you just need to know when a record is added or edited going forward, adding a datetime field that gets updated via a trigger when the record changes would be the simplest choice.
If you also need to track when a record has been deleted, then you'll want to use an audit table, populated from triggers with a row whenever a record is added, edited, or deleted.
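For the add/edit case, a trigger along these lines would work on SQL Server 2000 (table, key, and column names are placeholders):

    -- Assumes dbo.MyTable has an Id key and a LastModified datetime column.
    -- Safe from self-recursion as long as RECURSIVE_TRIGGERS is OFF (default).
    CREATE TRIGGER trg_MyTable_LastModified
    ON dbo.MyTable
    FOR UPDATE
    AS
        UPDATE t
        SET LastModified = GETDATE()
        FROM dbo.MyTable AS t
        INNER JOIN inserted AS i ON i.Id = t.Id

A DEFAULT of GETDATE() on the column covers the insert case.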
You might try a log viewer; this basically just lets you look at the transactions in the transaction log, so you should be able to find the statement that updated the row in question. I wouldn't recommend this as a production-level auditing strategy, but I've found it to be useful in a pinch.
Here's one I've used; it's free and works (only) with SQL Server 2000.
http://www.red-gate.com/products/SQL_Log_Rescue/index.htm
You can add a datetime field to that table and maintain its value with an update trigger.
OmniAudit is a commercial package which implements auditing across an entire database.
A free method would be to write a trigger for each table that adds entries to an audit table when fired.
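A sketch of such a trigger feeding a shared audit table (all names are placeholders):

    -- One audit row per change, populated from a trigger.
    CREATE TABLE dbo.AuditLog (
        TableName sysname  NOT NULL,
        RecordId  int      NOT NULL,
        Action    char(1)  NOT NULL,  -- 'I', 'U', or 'D'
        ChangedAt datetime NOT NULL DEFAULT GETDATE()
    )
    GO
    CREATE TRIGGER trg_MyTable_Audit
    ON dbo.MyTable
    FOR INSERT, UPDATE, DELETE
    AS
        -- Rows in inserted: 'U' if deleted also has rows, otherwise 'I'.
        INSERT INTO dbo.AuditLog (TableName, RecordId, Action)
        SELECT 'MyTable', i.Id,
               CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
        FROM inserted AS i
        UNION ALL
        -- Rows only in deleted: a pure delete.
        SELECT 'MyTable', d.Id, 'D'
        FROM deleted AS d
        WHERE NOT EXISTS (SELECT 1 FROM inserted)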
