I'm trying to query only Opportunities that have Tasks associated, with the following query:
SELECT Id, (SELECT Id, Status FROM Tasks) FROM Opportunity WHERE Id IN (SELECT WhatId FROM Task)
The WHERE clause doesn't compile - any ideas?
Thank you!
"Task is not supported for semi-join inner SELECTs"
You're welcome to upvote an idea, it's only 7 years old ;) https://success.salesforce.com/ideaView?id=08730000000J68oAAC
You can't do a rollup summary of activities either, to put a counter of Tasks for example. Messy...
you can just ignore it, SELECT all opps and filter them manually in Apex if(!opp.Tasks.isEmpty()){/*do my stuff */}
you can try splitting it into 2 queries: get a Set<Id> of Task.WhatId values and then bind that into a second, Opportunity-side query (see the sketch after this list)...
you can put some helper field on Opp that would help you identify them (and in future populate it with a cross-object workflow? process builder? Task trigger?)
you can consider using a report with "Opportunities with Tasks" cross filter and then fetch that report's results with Analytics REST API. That counts as a callout though and it's limited to 2K rows I think.
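For the two-query option above, a minimal Apex sketch (untested; variable names are just illustrative):
// Collect the Ids of Opportunities that have at least one Task
Set<Id> oppIds = new Set<Id>();
for (Task t : [SELECT WhatId FROM Task WHERE WhatId != null]) {
    // WhatId is polymorphic, so keep only the ones pointing at Opportunities
    if (t.WhatId.getSobjectType() == Opportunity.SObjectType) {
        oppIds.add(t.WhatId);
    }
}
// Bind the collected Ids into the Opportunity query
List<Opportunity> oppsWithTasks = [
    SELECT Id, (SELECT Id, Status FROM Tasks)
    FROM Opportunity
    WHERE Id IN :oppIds
];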
I have to create an automation process that checks whether any new Opportunities have been created for an Account in the past 12 months, and updates an Account field based on that.
I tried Process Builder, but it doesn't seem to work.
Tricky
A flow/workflow/process builder needs some triggering condition to fire. If an account was created 5 years ago, hasn't been updated since, and has never had any opportunities - it will not trigger any flows until somebody touches it.
And even if you somehow manage to make a time-based workflow, for example (to enqueue making a Task 1 year from now if there are no Opps by then) - it'll "queue" actions only from the moment it was created; it will not retroactively tag old unused accounts.
The time-based actions suck a bit. Say you made it work and it enqueued some future tasks/field updates/whatevers. Then you realise you need to exclude Accounts of a certain record type from it. You need to deactivate the workflow/flow to do that - and deactivation wipes the enqueued actions out. So you'd need to save your changes and somehow "touch" all accounts again so they're checked again.
Does it have to be a field on Account? Can it be just a report (which you could make a reporting snapshot of, if needed)? You could embed a report on the account layout, right? A query? Worst case, some nightly Apex job that runs and tags the accounts (a sketch follows the query below) - it would dutifully run through them all and set/clear your helper field, and it's easy to change (well, for a developer).
SELECT Id, Name
FROM Account
WHERE Id NOT IN (SELECT AccountId FROM Opportunity WHERE CreatedDate = LAST_N_DAYS:365)
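A minimal sketch of that nightly Apex job, assuming a hypothetical checkbox field No_Recent_Opps__c on Account (a large org would want this as a Batchable instead, to stay inside governor limits):
global class FlagStaleAccountsJob implements Schedulable {
    global void execute(SchedulableContext ctx) {
        // Accounts with no Opportunities created in the last 365 days
        Set<Id> staleIds = new Map<Id, Account>([
            SELECT Id FROM Account
            WHERE Id NOT IN (SELECT AccountId FROM Opportunity
                             WHERE CreatedDate = LAST_N_DAYS:365)
        ]).keySet();
        List<Account> toUpdate = new List<Account>();
        for (Account a : [SELECT Id, No_Recent_Opps__c FROM Account]) {
            Boolean stale = staleIds.contains(a.Id);
            if (a.No_Recent_Opps__c != stale) {
                a.No_Recent_Opps__c = stale; // both sets and clears the flag
                toUpdate.add(a);
            }
        }
        update toUpdate;
    }
}
// Schedule it to run nightly at 1 AM:
// System.schedule('Flag stale accounts', '0 0 1 * * ?', new FlagStaleAccountsJob());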
Reporting way would be "cross filter": https://salesforce.vidyard.com/watch/aQ6RWvyPmFNP44brnAp8tf, https://help.salesforce.com/s/articleView?id=sf.reports_cross_filters.htm&type=5
I am trying to create an alert to be sent daily. The condition is to list all Return Orders that are not completed.
I am expecting it to be sent out later today; then, in case some of those items are still not completed by tomorrow, it will be sent out again.
Is there a need to query that condition? Let's say:
SELECT SalesOrderNo, SalesOrderDetailID, CustomerNo, ItemNo, DueDate
, Completed, Qty, UDCode, CustomerPO
FROM dbo._EventAlertReturns
WHERE GETDATE() is today?
What is the best way to do that?
At the moment, your query is adequate on its own; you don't need that WHERE clause. But you cannot set up an automated process with a mere SELECT statement.
As Jon Vote said, to do something like this you need a SQL Server scheduled job. With SQL Server Agent you could create a scheduled package that executes in the evening and emails you the results of your query.
For additional granularity, you can look into SQL Server Data Tools to more easily create tasks and settings for your automated packages.
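A minimal sketch of the emailing step, assuming Database Mail is configured and a mail profile named 'AlertsProfile' exists (profile name and recipient are illustrative). Put this in a SQL Server Agent job step scheduled for the evening:
-- Email the open Return Orders as a query result attachment
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'AlertsProfile',
    @recipients   = 'team@example.com',
    @subject      = 'Open Return Orders',
    @query        = N'SELECT SalesOrderNo, SalesOrderDetailID, CustomerNo,
                             ItemNo, DueDate, Completed, Qty, UDCode, CustomerPO
                      FROM dbo._EventAlertReturns
                      WHERE Completed = 0;',
    @attach_query_result_as_file = 1;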
I currently have a report that grabs certain orders (orders with discounts) and is emailed on a daily basis. Is there a way to make it email the subscription only if there are orders with discounts?
Help would be immensely appreciated.
The workaround we use for this problem is kind of silly, but very effective.
Add a row count check at the beginning of the report's dataset query, something like:
IF (SELECT COUNT(*) FROM TABLES) = 0  -- TABLES = your report's source table
BEGIN
    -- severity must be 11 or higher for this to surface as an actual error
    RAISERROR ('No Rows to Report', 16, 1)
END
When there are no rows, the error makes the dataset query fail, which halts execution of the subscription, so no email goes out.
I have a Cognos report with cascading prompts. The hierarchy is defined in the image attached.
The first parent (Division) fills the two cascading children in 3-5 seconds.
But when I select any Policy (which populates the two children beneath), it takes around 2 minutes.
Facts:
The result set after those two minutes is normal (~20 rows).
The queries behind all the prompts are simple SELECT DISTINCT Col_Name queries.
I've created indexes on all the prompt columns.
I've tried turning on the local cache and setting the execution method to concurrent.
I'm on Cognos Report Studio 10.1
Any help would be much appreciated.
Thanks,
Nuh
There is an alternative to a one-off dimension table. Create a Query Subject in Framework Manager for your AL-No prompt. In the query itself, build a query that gets the distinct AL-No values (you said that is fast, probably because there is an index on AL-No). Wrap that in a select that filters on #prompt('pPolicy')# (assuming your Policy prompt is keyed to ?pPolicy?).
This forces the Policy filter into the SQL before it is sent to the database, while wrapping the distinct AL-No query still lets the database use the AL-No index.
select AL_NO from
(
    select AL_NO, Policy_NO
    from CLAIMS
    group by AL_NO, Policy_NO
) q  -- the derived table needs an alias in most databases
where Policy_NO = #prompt('pPolicy')#
Your issue is just too much table scanning. Typically, one would build a prompt page from dimension-based tables, not the fact table, though I admit that is not always possible with cascading prompts. The ideal solution is to create a one-off dimension table with these distinct values, then model that strictly for the prompts (a rough sketch follows).
Watch out for indexing each field individually: those indexes will likely not be used because the columns' values are not selective enough. A compound index across the prompt fields may work instead. As with any DDL change, open SQL Profiler to see what SQL Cognos is generating, and run an explain plan before/after the change.
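A rough sketch of the one-off dimension table and compound index (T-SQL flavored; the table name PromptDim and the exact column list are illustrative):
-- Materialize the distinct prompt values into a small dimension table
SELECT DISTINCT Division, Policy_NO, AL_NO
INTO PromptDim
FROM CLAIMS;

-- One compound index covering the cascade order of the prompts
CREATE INDEX IX_PromptDim_Cascade ON PromptDim (Division, Policy_NO, AL_NO);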
We have started using BigQuery for event logging from our games.
We collect events on App Engine nodes and enqueue them in chunks every now and then; the chunks are placed in a task queue.
A backend then processes this queue and uploads the events to BigQuery.
Today we store circa 60 million daily events from one of our games and 6 million from another.
We have also made cron jobs that process these events to gather various gaming KPIs (i.e. second-day retention, active users, etc.).
Everything has gone quite smoothly, but we now face a tricky problem!
======== Question 1 ===============================================
For some reason, the deletion of the queue tasks fails now and then. Not very often, but it happens, and often in bursts.
TransientFailureException is probably the cause ... I say probably since we are deleting processed events in batch mode. I.e. ...
List<Boolean> Queue.deleteTask(List<TaskHandle> tasksToDelete)
... so we actually don't know why we failed to delete a given task.
We have now added retry code that will try those failed deletions again, roughly like the sketch below.
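A simplified sketch of that batch-delete retry, using only the deleteTask signature quoted above (the queue name and attempt count are illustrative):
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskHandle;
import java.util.ArrayList;
import java.util.List;

// Returns any handles that still could not be deleted after three attempts.
static List<TaskHandle> deleteWithRetry(List<TaskHandle> tasksToDelete) {
    Queue queue = QueueFactory.getQueue("event-upload");
    List<TaskHandle> pending = new ArrayList<>(tasksToDelete);
    for (int attempt = 0; attempt < 3 && !pending.isEmpty(); attempt++) {
        List<Boolean> results = queue.deleteTask(pending);
        List<TaskHandle> failed = new ArrayList<>();
        for (int i = 0; i < results.size(); i++) {
            if (!results.get(i)) {
                failed.add(pending.get(i)); // keep only the handles that failed
            }
        }
        pending = failed;
    }
    return pending;
}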
Is there a best practice to deal with this kind of problem?
========= Question 2 =======================================================
Duplicate detection
The following SQL succeeds in finding duplicates for our smaller game, but exceeds resources for the bigger one.
SELECT DATE(ts) date, SUM(duplicates) - COUNT(duplicates) as duplicates
FROM (
SELECT ts, eventId, userId, count(*) duplicates
FROM [analytics_davincigameserver.events_app1_v2_201308]
GROUP EACH BY ts, eventId, userId
HAVING duplicates > 1
)
GROUP EACH BY date
Is there a way to detect duplicates even for our bigger game?
I.e. a query that BigQuery will be able to run over our 60 million daily rows to locate the duplicates.
Thanks in advance!
For question #2 (I'd prefer these were separate questions, to skip this step and the opportunity for confusion):
Resources are exhausted on the inner query, or the outer query?
Does this work?
SELECT ts, eventId, userId, count(*) duplicates
FROM [analytics_davincigameserver.events_app1_v2_201308]
GROUP EACH BY ts, eventId, userId
HAVING duplicates > 1
What about reducing the cardinality? I'm guessing that since you are grouping by timestamp, there might be too many distinct buckets to group by. Does this work better?
SELECT ts, eventId, userId, count(*) duplicates
FROM [analytics_davincigameserver.events_app1_v2_201308]
WHERE ABS(HASH(ts) % 10) = 1
GROUP EACH BY ts, eventId, userId
HAVING duplicates > 1
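If that sharded version runs, one way to cover the whole table (a sketch in the same legacy SQL dialect) is to run it once per remainder, 0 through 9, and add up the per-shard results. Rows sharing a ts always hash to the same shard, so no duplicate group is split across runs:
SELECT DATE(ts) date, SUM(duplicates) - COUNT(duplicates) AS duplicates
FROM (
  SELECT ts, eventId, userId, COUNT(*) duplicates
  FROM [analytics_davincigameserver.events_app1_v2_201308]
  WHERE ABS(HASH(ts) % 10) = 0   -- repeat with = 1 ... = 9 and sum the outputs
  GROUP EACH BY ts, eventId, userId
  HAVING duplicates > 1
)
GROUP EACH BY date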