How to check if no new opportunity has been created in the past year for an account in Salesforce?

I have to create an automation process to check that no new opportunities have been created for an account in the past 12 months and update an account field based on that.
I tried Process Builder, but it doesn't seem to work.

Tricky
A flow/workflow/process builder needs some triggering condition to fire. If an account was created 5 years ago, hasn't been updated since, and hasn't had any opportunities - it will not trigger any flows until somebody touches it.
And even if you somehow manage to make a time-based workflow, for example (to enqueue making a Task 1 year from now if there are no Opps by then) - it'll "queue" actions only from the moment it was created; it will not retroactively tag old unused accounts.
The time-based actions suck a bit. Say you made it work, and it enqueued some future tasks/field updates/whatever. Then you realise you need to exclude Accounts of a certain record type from it. You need to deactivate the workflow/flow to do that - and deactivation wipes the enqueued actions out. So you'd have to save your changes and then somehow "touch" all accounts again so they're re-checked.
Does it have to be a field on Account? Can it be just a report (which you could make a reporting snapshot of if needed)? You could embed a report on the account layout, right? A query? Worst case, some nightly Apex job that runs and tags the accounts? It would dutifully run through them all and set/clear your helper field, and it's easy to change (well, for a developer).
SELECT Id, Name
FROM Account
WHERE Id NOT IN (SELECT AccountId FROM Opportunity WHERE CreatedDate = LAST_N_DAYS:365)
The reporting way would be a "cross filter": https://salesforce.vidyard.com/watch/aQ6RWvyPmFNP44brnAp8tf, https://help.salesforce.com/s/articleView?id=sf.reports_cross_filters.htm&type=5

Related

How do I update columns via SQL when a column has been changed? Kind of like a log! Microsoft SQL Server Management Studio

Okay, just to clarify: I have a SQL table (with ID, School, Student ID, Name, Fee $, Fee Type, and Paid as the columns) that needs to be posted on a grid that will be uploaded to a website. The grid shows everything correctly and shows which fees need to be paid. The Paid column has a bit data type for 1 or 0 (basically a checklist). I am being asked to add two more columns: User and DateChanged. The reason is to log which staff member changed the "Paid" column. It would capture only the username of the staff member who changed it in the SQL table, along with the time. So to clarify even more, I need to create two columns, "User" and "DateChanged", and those columns would log when someone changed the "Paid" column.
For example: user Bob checks the Paid column for student X on 5/2/17 at 10pm.
In the same row of student X's info, "Bob" would appear under the User column. Under DateChanged it would show 2017-05-02 10pm.
What steps would I take to make this possible?
I'm currently an IT intern and all this SQL stuff is new to me. Let me know if you need more clarification. FYI: the two new columns, User and DateChanged, will not be on the grid.
The way to do this as you've described is to use a trigger. I have an example of some code below but be warned as triggers can have unexpected side-effects, depending on how the database and app interface are set up.
If it is possible for you to change the application code that sends SQL queries to the database instead, that would be much safer than using a trigger. You can still add the new fields, you would just be relying on the app to keep them updated instead of doing it all in SQL.
Things to keep in mind about this code:
If any background processes or procedures make updates to the table, it will overwrite the timestamp and username automatically, because this is triggered on any update to the row(s) in question.
If the users don't have any direct access to SQL Server (in other words, the app is the only thing connecting to the database), then it is possible that the app will only be using one database login username for everyone, and in that case you will not be able to figure out which user made the update unless you can change the application code.
If anyone changes something by accident and then changes it back, it will overwrite your timestamp and make it look like the wrong person made the update.
Triggers can potentially bog down the database system if there are a very large number of rows and/or a high number of updates being made to the table constantly, because the trigger code will be executed every time an update is made to a row in the table.
But if you don't have access to change the application code, and you want to give triggers a try, here's some example code that should do what you are needing:
-- Stamps DateChanged/UserChanged on every row touched by an UPDATE.
-- Note: USER_NAME() returns the database user, not the application user (see the caveats above).
create trigger TG_Payments_Update on Payments
after update
as
begin
update p
set DateChanged = GetDate(), UserChanged = USER_NAME()
from Payments p
inner join inserted i on p.ID = i.ID
end
The web app already knows the current user working on the system, so your update would just include that user's ID and the current system time for when the action took place.
I would not rely on SQL Server triggers since that hides what's going on within the system. Plus, as others have said, they have side effects to deal with too.
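For what it's worth, a minimal sketch of what that app-issued statement could look like, assuming the Payments table and the UserChanged/DateChanged column names from the trigger example above (@StudentRowId, @NewPaidValue and @AppUser are hypothetical parameters the web app would supply from its own session):
-- The app runs this instead of relying on a trigger; @AppUser is the
-- application's logged-in user, not the SQL login.
UPDATE Payments
SET Paid = @NewPaidValue,
    UserChanged = @AppUser,
    DateChanged = GETDATE()
WHERE ID = @StudentRowId;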

Add DATE column to store when last read

We want to know which rows in a certain table are used frequently, and which are never used. We could add an extra column for this, but then we'd get an UPDATE for every SELECT, which sounds expensive. (The table contains 80k+ rows, some of which are used very often.)
Is there a better and perhaps faster way to do this? We're using an old version of Microsoft SQL Server.
This kind of logging/tracking is classically the application server's task. If you want to build your own tracking architecture, do it in your own layer.
In any case you will need an application server for this. You are not going to update the tracking field in the same transaction as the SELECT, are you? What about rollbacks? So you need some manager component that first runs the SELECT and then writes the tracking information. And what is the point of saving the tracking information together with the entity data by sending it back to the DB? Save it to a file on the application server.
You could update the column in the table as you suggested, but if it were me I'd log the event to another table: the id of the record, datetime, userid (maybe IP address, browser version, etc.), just about anything else I could capture that was even possibly relevant. (For example, 6 months from now your manager decides not only does s/he want to know which records were used the most, s/he wants to know which users are using the most records, or what time of day that usage happens, etc.)
This type of information can be useful for things you've never even thought of down the road, and if it starts to grow large you can always roll it up and prune the table to a smaller one if performance becomes an issue. When possible, I log everything I can. You may never use some of this information, but you'll never wish you hadn't kept it, and it's impossible to re-create historically.
In terms of making sure the application doesn't slow down, you may want to 'select' the data from within a stored procedure that also issues the logging command, so that the client is not doing two round trips (one for the select, one for the update/insert).
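As a rough illustration of that single-round-trip idea (all names here - ReadLog, usp_GetRecord, myTable - are made up for the example, not taken from the question):
-- One procedure call both logs the read and returns the row.
CREATE PROCEDURE usp_GetRecord
    @Id INT,
    @UserId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- record the access in a separate log table
    INSERT INTO ReadLog (RecordId, UserId, ReadAt)
    VALUES (@Id, @UserId, GETDATE());
    -- then return the data the client asked for
    SELECT id, stuff1, stuff2
    FROM myTable
    WHERE id = @Id;
END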
Alternatively, if this is a web application, you could use an async ajax call to issue the logging action which wouldn't slow down the users experience at all.
Adding a new column to track SELECTs is not good practice, because it may affect database performance, and database performance is a major concern in database server administration.
So here you can use a very good database feature called auditing; it is easy to set up and puts less stress on the database.
For more information, search for database auditing for SELECT statements.
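As a rough sketch of what SQL Server's auditing feature looks like for SELECTs (the audit name, table name, and file path are placeholders; whether SELECT auditing is available depends on your SQL Server edition and version):
-- Server-level audit writing to a file, then a database audit specification
-- that captures SELECTs against one table.
CREATE SERVER AUDIT SelectAudit TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT SelectAudit WITH (STATE = ON);

CREATE DATABASE AUDIT SPECIFICATION SelectOnMyTable
FOR SERVER AUDIT SelectAudit
ADD (SELECT ON OBJECT::dbo.myTable BY public)
WITH (STATE = ON);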
Use another table as a key/value pair with two columns(e.g. id_selected, times) for storing the ids of the records you select in your standard table, and increment the times value by 1 every time the records are selected.
To do this you'd have to do a mass insert/update of the selected ids from your select query in the counting table. E.g. as a quick example:
SELECT id, stuff1, stuff2 FROM myTable WHERE stuff1='somevalue';
INSERT INTO countTable(id_selected, times)
SELECT id, 1 FROM myTable mt WHERE mt.stuff1='somevalue' # or just build a list of ids as values from your last result
ON DUPLICATE KEY UPDATE times = times + 1;
The ON DUPLICATE KEY is right off the top of my head in MySQL. For conditionally inserting or updating in MSSQL you would need to use MERGE instead.
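A hedged sketch of the MSSQL MERGE equivalent, reusing the countTable/myTable names from the example above:
-- Increment the counter for ids that already exist, insert a row for new ones.
MERGE countTable AS target
USING (SELECT id FROM myTable WHERE stuff1 = 'somevalue') AS src
    ON target.id_selected = src.id
WHEN MATCHED THEN
    UPDATE SET times = target.times + 1
WHEN NOT MATCHED THEN
    INSERT (id_selected, times) VALUES (src.id, 1);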

Salesforce-to-salesforce round trip field update issue

It's been a couple of releases since I've had to do an S2S integration, but I ran into an unexpected issue that hopefully someone can solve more effectively.
I have two orgs, sharing contacts over S2S.
Contacts in each org have the identical schema: standard fields plus custom fields. I've reproduced a base case with just two custom fields: checkbox field A, and Number(18,0) field B.
Org 1 publishes field A, and subscribes to field B.
Org 2 subscribes to field A, and publishes field B.
Org 1 initiates all S2S workflow by sharing contacts to Org 2 over S2S. Org 2 has auto-accept on.
Org 2 has a Contact Before Insert trigger that simply uses field A to calculate the value for field B. e.g. if field A is checked, populate field B with 2, if unchecked, 0. (This of course is a drastic over-simplification of what I really need to do, but it's the base reproducible case.)
That all works fine in Org 2 - contacts come across fine with field A, and I see the field results get calculated into field B.
The problem is that the result - field B - does not get auto-shared back to Org 1 until the next contact update. It can be as simple as me editing a non-shared field on that same contact, like "Description", in Org 2, and then I instantly see the previously calculated value of field B get pushed back to Org 1.
I'm assuming that this is because, since the calculation of field B is occurring within a Before Insert, the S2S connection assumes the current update transaction was only performed by itself (I can see how this logic would make sense to prevent infinite S2S update loops).
I first tried creating a workflow field update that forcibly updated a (new, dummy) shared field when field B changed, but that still did not cause the update to flow back, presumably because it's in the same execution context which Salesforce deems exempt from re-sharing. Also tried a workflow rule that forwarded the Lead back to the connection queue when the field is changed, and it also didn't work.
I then tried a re-update statement in an AfterUpdate trigger - if the shared field is updated, reload and re-update the shared object. That also didn't work.
I did find a solution, which is a Future method called by the AfterUpdate trigger which reloads and touches any record that had its shared field changed by the BeforeUpdate trigger. This does cause the field results to show up in near-real-time in the originating organization.
This solution works for me for now, but I feel like I MUST be missing something. It causes way more Future calls and DML to be executed than should be necessary.
Does anyone have a more elegant solution for this?
Had the same problem and an amazing Salesforce support rep unearthed this documentation, which covers Salesforce's specific guidance here: https://web.archive.org/web/20210603004222/https://help.salesforce.com/articleView?id=sf.business_network_workflows.htm&type=5
Sometimes it makes sense to use an Apex trigger instead of a workflow. Suppose that you have a workflow rule that updates a secondary field, field B, when field A is updated. Even if your Salesforce to Salesforce partner subscribed to fields A and B, updates to field B that are triggered by your workflow rule aren’t sent to your partner’s organization. This prevents a loop of updates.
If you want such secondary field updates to be sent to your Salesforce to Salesforce partners, replace the workflow with an Apex trigger that uses post-commit logic to update the secondary field.
In bi-directional connections, Salesforce to Salesforce updates are triggered back only on “after” triggers (for example, “after insert” or “after update”), not on “before” triggers.
This is what OP ended up doing, but this documentation from Salesforce at least clears up the assumptions and guesses that were made here as part of the discussion. It also helpfully points out that it's not best practice to use "before" triggers in these circumstances, for future reference.
I think there is no better workaround than what you are doing. The limits for future calls have been raised to a fairly high level, so that should not be your concern.
Maybe the other thing you can do is this (not sure if this will work, as we are still in the same context) -
Org 1 -
Field A is updated, which publishes the Contact.
Org 2 -
Before Update of the Contact in Org 2: if A has been updated, save the ID of the Contact in a new custom object.
In After Update of the new custom object, update field B for the given Contact ID. Updates to B will then be published.

Return value for correct session?

I'm working on a project in dead ASP (I know :( )
Anyway it is working with a kdb+ database which is major overkill but not my call. Therefore to do inserts etc we're having to write special functions so they can be handled.
Anyway we've hit a theoretical problem and I'm a bit unsure how it should be dealt with in this case.
So basically you register a company; when you submit, validation will occur and the page will be processed, inserting new values into the appropriate tables. Now at this stage I want to pull IDs from the tables and use them in the session for further registration screens. The user will never add a specific ID of course, so it needs to be pulled from the database.
But how can this be done? I'm particularly concerned with 2 user's simultaneously registering, how can I ensure the correct ID is passed back to the correct session?
Thank you for any help you can provide.
Instead of having the ID set at the point of insert, is it possible for you to "grab" an ID value beforehand, and then use that value throughout the process?
So:
Start the registration.
The system connects to the database, creates an ID (perhaps from an ID table; see the sketch after this list) and stores it in the ASP Session.
Company registers.
You validate and insert data into the DB (including the ID from the session).
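To make the idea concrete, here's a sketch of the "grab an ID from an ID table" step in SQL Server terms (the question's database is kdb+, so this only illustrates the pattern; KeyTable and its columns are made up):
-- Read and increment the next ID in one atomic statement.
DECLARE @NewId INT;

UPDATE KeyTable
SET @NewId = NextId,
    NextId = NextId + 1
WHERE KeyName = 'Company';

SELECT @NewId AS NewCompanyId; -- stash this in the ASP Session for the later inserts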
The things you put in the Session(...) collection are only visible to that session (i.e. the session is used only by the browser windows on one computer). The session is identified by a GUID value that is stored in a cookie on the client machine. It is "safe" to store your IDs there (other users won't be able to read them easily).
Alternatively, your id can include the date and time - so it would be, for example, id31032012200312 - but if you still think that 2 people can register at the same time then I would use recordset locks like the ones here - http://www.w3schools.com/ado/prop_rs_locktype.asp
To create ids like the above in ASP you do replace(date(),"/","") and then the same with the time, replacing ":".
Thanks

How to track data changes in a database table

What is the best way to track changes in a database table?
Imagine you have an application in which users (in the context of the application, not DB users) are able to change data which is stored in some database table. What's the best way to track a history of all changes, so that you can show which user changed which data, how, and at what time?
In general, if your application is structured into layers, have the data access tier call a stored procedure on your database server to write a log of the database changes.
In languages that support such a thing aspect-oriented programming can be a good technique to use for this kind of application. Auditing database table changes is the kind of operation that you'll typically want to log for all operations, so AOP can work very nicely.
Bear in mind that logging database changes will create lots of data and will slow the system down. It may be sensible to use a message-queue solution and a separate database to perform the audit log, depending on the size of the application.
It's also perfectly feasible to use stored procedures to handle this, although there may be a bit of work involved passing user credentials through to the database itself.
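A small sketch of that "pass the user through" approach, with hypothetical names (Customer, Customer_ChangeLog, usp_UpdateCustomerName); the data access tier supplies the application user as a parameter:
CREATE PROCEDURE usp_UpdateCustomerName
    @CustomerId INT,
    @NewName NVARCHAR(100),
    @AppUserName NVARCHAR(50) -- the application user, passed down by the data access tier
AS
BEGIN
    UPDATE Customer SET Name = @NewName WHERE CustomerId = @CustomerId;

    -- write the audit entry alongside the change
    INSERT INTO Customer_ChangeLog (CustomerId, ChangedBy, ChangedAt)
    VALUES (@CustomerId, @AppUserName, GETDATE());
END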
You've got a few issues here that don't relate well to each other.
At the basic database level you can track changes by having a separate table that gets an entry added to it via triggers on INSERT/UPDATE/DELETE statements. Thats the general way of tracking changes to a database table.
The other thing you want is to know which user made the change. Generally your triggers wouldn't know this. I'm assuming that if you want to know which user changed a piece of data then its possible that multiple users could change the same data.
There is no right way to do this, you'll probably want to have a separate table that your application code will insert a record into whenever a user updates some data in the other table, including user, timestamp and id of the changed record.
Make sure to use a transaction so you don't end up with cases where the update gets done without the insert (or, if you do it in the opposite order, the insert without the update); a sketch is below.
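A minimal sketch of that transaction, assuming hypothetical Orders and Orders_Audit tables and app-supplied parameters:
BEGIN TRANSACTION;

-- the actual data change
UPDATE Orders SET Status = 'Shipped' WHERE OrderId = @OrderId;

-- the matching audit record; both succeed or both roll back
INSERT INTO Orders_Audit (OrderId, ChangedBy, ChangedAt, ChangeDescription)
VALUES (@OrderId, @AppUserId, GETDATE(), 'Status changed to Shipped');

COMMIT TRANSACTION;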
One method I've seen quite often is to have audit tables. Then you can show just what's changed, or what's changed and what it changed from, or whatever your heart desires :) Then you could write up a trigger to do the actual logging. Not too painful if done properly...
No matter how you do it, though, it kind of depends on how your users connect to the database. Are they using a single application user via a security context within the app, are they connecting using their own accounts on the domain, or does the app just have everyone connecting with a generic sql-account?
If you aren't able to get the user info from the database connection, it's a little more of a pain. And then you might look at doing the logging within the app, so if you have a process called "CreateOrder" or whatever, you can log to the Order_Audit table or whatever.
Doing it all within the app opens yourself up a little more to changes made from outside of the app, but if you have multiple apps all using the same data and you just wanted to see what changes were made by yours, maybe that's what you wanted... <shrug>
Good luck to you, though!
--Kevin
In researching this same question, I found a discussion here very useful. It suggests having a parallel table set for tracking changes, where each change-tracking table has the same columns as what it's tracking, plus columns for who changed it, when, and whether it's been deleted. (It should be possible to generate the schema for this more or less automatically by using a regexed-up version of your pre-existing scripts.)
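A sketch of what one such parallel change-tracking table might look like (Customer and its columns are hypothetical; the real table would repeat every column of whatever it tracks):
CREATE TABLE Customer_History (
    CustomerId  INT,
    Name        NVARCHAR(100),
    -- ...the rest of the tracked table's columns repeated here...
    ChangedBy   NVARCHAR(50),  -- who made the change
    ChangedAt   DATETIME2,     -- when it was made
    IsDeleted   BIT            -- whether this change was a delete
);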
Suppose I have a Person Table with 10 columns which include PersonSid and UpdateDate. Now, I want to keep track of any updates in Person Table.
Here is the simple technique I used:
Create a person_log table
create table person_log(date datetime2, sid int);
Create a trigger on Person table that will insert a row into person_log table whenever Person table gets updated:
create trigger tr on dbo.Person
for update
as
-- "inserted" holds the new row values; UpdateDate and PersonSid are columns of the Person table above
insert into person_log(date, sid) select UpdateDate, PersonSid from inserted
After any update, query the person_log table and you will be able to see the PersonSid that got updated.
You can do the same for insert and delete; a sketch of a delete version is below.
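For example, a hedged sketch of the delete version, reusing the same person_log table (on delete the old values come from the "deleted" pseudo-table):
create trigger tr_del on dbo.Person
for delete
as
insert into person_log(date, sid) select GETDATE(), PersonSid from deleted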
The above example is for SQL Server; let me know in case of any queries, or use this link:
https://web.archive.org/web/20211020134839/https://www.4guysfromrolla.com/webtech/042507-1.shtml
A trace log in a separate table (with an ID column, possibly with timestamps)?
Are you going to want to undo the changes as well? Perhaps pre-create the undo statement (a DELETE for every INSERT, an (un-)UPDATE for every normal UPDATE) and save that in the trace.
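If you go that route, a sketch of what a trace entry with a pre-built undo statement might look like (TraceLog and its columns are hypothetical, and the undo text is purely illustrative):
-- Store a ready-to-run undo statement alongside each trace entry.
INSERT INTO TraceLog (TableName, ChangedAt, ChangedBy, UndoSql)
VALUES ('Person', GETDATE(), @AppUser,
        'UPDATE Person SET Name = ''Old Name'' WHERE PersonSid = 42');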
Let's try with this open source component:
https://tabledependency.codeplex.com/
TableDependency is a generic C# component used to receive notifications when the content of a specified database table changes.
If all changes come from PHP, you may use a class to log every INSERT/UPDATE/DELETE before the query. It would save the action, table, column, new value, old value, date, system (if needed), IP, user agent, column reference, operator reference, and value reference. All tables/columns/actions that need to be logged are configurable.
