I am executing the following query:
"Select C.Title FROM ContentVersion C WHERE ContentDocumentId IN (SELECT ContentDocumentId FROM ContentWorkspaceDoc WHERE ContentWorkSpaceId='".LIBRARY_ID."')"
which gives me a big list of the files in the library with Id LIBRARY_ID.
As soon as I add
"Select C.Title,C.VersionData FROM..."
I only get one record. Only one of 8 records is a link, so what foolishness am I performing to get this undesired behavior?
Does VersionData require some additional privileges?
When using the API, you may receive fewer than the default 500 records in a QueryResult if the rows are wide, which they will be when retrieving the base64-encoded content stored in VersionData. You should check the done property, and call queryMore with the queryLocator to get more rows. See http://bit.ly/KEEo7M.
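For illustration, here is a minimal sketch of that loop in Java, assuming the WSC-generated Partner API client (fetchAll is an illustrative name; adapt the calls to whatever client library you are using):

import com.sforce.soap.partner.PartnerConnection;
import com.sforce.soap.partner.QueryResult;
import com.sforce.soap.partner.sobject.SObject;
import com.sforce.ws.ConnectionException;

// Keep following the query locator until the server says the result set is done.
static void fetchAll(PartnerConnection connection, String soql) throws ConnectionException {
    QueryResult qr = connection.query(soql);
    while (true) {
        for (SObject record : qr.getRecords()) {
            // process each row here (e.g. Title, VersionData)
        }
        if (qr.isDone()) {
            break; // no more batches on the server
        }
        qr = connection.queryMore(qr.getQueryLocator());
    }
}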
I had the same issue. I'd suggest running the same query in the Anonymous window of the Developer Console; you'll almost certainly run into a heap size error.
This happens when you exceed the heap governor limit, which is 6 MB for synchronous processes. VersionData contains the base64-encoded binary representation of the file stored in ContentVersion.
If you are using this query in your Apex code, I'd suggest using Batch Apex, but only if the file sizes in VersionData stay under 12 MB (the asynchronous heap limit).
Related
I am facing an issue using the Salesforce API. While querying, I get the following exception: "The SOQL FIELDS function must have a LIMIT of at most 200". I understand SF allows a maximum of 200 here, so I wanted to ask: how can I query when there are more than 200 results?
I can only use the REST API to query, but if there is another option, please let me know and I will try to add it to my code.
Thanks in Advance
You could chunk it: SELECT FIELDS(ALL) FROM Account ORDER BY Id LIMIT 200. Read the Id of the last record and, in the next query, add WHERE Id > '001...'. But that's not very efficient.
Look into "describe" calls, waste 1 call to learn names of all fields you need and explicitly list them in the query instead of relying on FIELDS(ALL). You can compose SOQL up to 20k characters long and with "bulk API" queries you could fetch up to 10k records in each API call so "investing" 1 call for describes would quickly pay off.
You could even cache the describe's result in your application and fetch fresh only if something interesting changed, there's rest API header for that: https://developer.salesforce.com/docs/atlas.en-us.232.0.api_rest.meta/api_rest/sobject_describe_with_ifmodified_header.htm
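Here is a rough sketch of that describe-then-compose flow in Java. Assumptions: Java 11's HttpClient, Jackson for JSON parsing, placeholder instanceUrl/token values, and an illustrative API version and helper name:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Describe the object once, then compose an explicit field list instead of FIELDS(ALL).
static String buildSoql(HttpClient http, String instanceUrl, String token) throws Exception {
    HttpRequest req = HttpRequest.newBuilder()
        .uri(URI.create(instanceUrl + "/services/data/v52.0/sobjects/Account/describe"))
        .header("Authorization", "Bearer " + token)
        // Optional cache header: a 304 response means your cached describe is still valid.
        .header("If-Modified-Since", "Mon, 01 Jan 2024 00:00:00 GMT")
        .build();
    HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
    if (resp.statusCode() == 304) {
        throw new IllegalStateException("Nothing changed; reuse your cached describe result");
    }
    List<String> fieldNames = new ArrayList<>();
    JsonNode describe = new ObjectMapper().readTree(resp.body());
    for (JsonNode field : describe.get("fields")) {
        fieldNames.add(field.get("name").asText());
    }
    return "SELECT " + String.join(", ", fieldNames) + " FROM Account";
}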
Try this; it is helpful:
// Get the field map from the Account SObject describe
Map<String, Schema.SObjectField> fieldMap = Account.sObjectType.getDescribe().fields.getMap();
// Collect the names of all fields on the object
Set<String> setFieldNames = fieldMap.keySet();
List<String> lstFieldNames = new List<String>(setFieldNames);
// Build and run the dynamic query
List<Account> lstAccounts = Database.query('SELECT ' + String.join(lstFieldNames, ',') + ' FROM Account');
System.debug('lstAccounts: ' + lstAccounts);
I'm trying to get a list of domain names from email addresses using the Salesforce query language. This is potentially really simple and something I would normally accomplish with split_part in PostgreSQL, like:
SELECT split_part(Email, '@', 2)
FROM Lead
GROUP BY 1
I've been digging through the SOQL documentation and can't really find any standard string functions. However, there's this Salesforce community answer, which uses LEFT, FIND, and SUBSTITUTE: https://success.salesforce.com/answers?id=90630000000gi8EAAQ. I tried something as simple as:
SELECT LEFT(Email, 3) FROM Lead limit 10
But I get:
Error: MALFORMED_QUERY
Message: SELECT LEFT(Email, 3) FROM Lead limit 10
ERROR at Row:1:Column:17
unexpected token: ','
Have these functions been deprecated?
I have lots of potential domain names and don't really want to query for pages and pages of every possible email address or I'll quickly hit my Salesforce API limit.
Those functions are not available in SOQL/SOSL. The link you provided refers to custom formula fields. Are you trying to get the data through the UI or the API? Without knowing how you are extracting the data, here are some general suggestions:
You can create a formula field on your object called Email_Domain__c, then query or filter by that field (see the sketch after this list).
You can also use custom formulas in Reports to filter your results.
Use Apex and/or Visualforce to extract/display/export the data.
Use the LIKE keyword to filter by domain name as in:
SELECT Email FROM LEAD WHERE Email LIKE '%gmail.com%'
The LIKE keyword is not efficient, especially with a leading wildcard: https://trailhead.salesforce.com/en/modules/database_basics_dotnet/units/writing_efficient_queries
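As a sketch of the formula-field approach (the exact formula is illustrative, not from the linked answer), Email_Domain__c could be defined as:
MID(Email, FIND("@", Email) + 1, LEN(Email))
and then filtered on directly:
SELECT Id, Email FROM Lead WHERE Email_Domain__c = 'gmail.com'
Note that formula fields can generally be filtered on but not used in GROUP BY, so to get the distinct list of domains you would still aggregate on the client side.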
Using the PHP library for Salesforce, I am running:
SELECT ... FROM Account LIMIT 100
But the result is always capped at 25 records. I am selecting many fields (60 fields). Is this a hard limit?
The skeleton code:
$client = new SforceEnterpriseClient();
$client->createConnection("EnterpriseSandboxWSDL.xml");
$client->login(USERNAME, PASSWORD.SECURITY_TOKEN);
$query = "SELECT ... FROM Account LIMIT 100";
$response = $client->query($query);
foreach ($response->records as $record) {
// ... there's only 25 records
}
Here is my checklist:
1) Make sure you have more than 25 records.
2) After your first loop, call queryMore to check whether there are more records.
3) Make sure batchSize is not set to 25.
I don't use the PHP library for Salesforce, but I can assume that before executing
SELECT ... FROM Account LIMIT 100
some other SELECT queries have been performed. If you didn't code them, maybe the PHP library does it for you ;-)
The Salesforce SOAP API query method will only return a finite number of rows. There are a couple of reasons why it may return fewer than your defined limit.
The QueryOptions header batchSize may have been set to 25. If that is the case, try adjusting it. If it hasn't been explicitly set, try setting it to a larger value.
When the SOQL statement selects a number of large fields (such as two or more custom fields of type long text area), Salesforce may return fewer records than the batchSize. The batch size is also reduced when dealing with base64-encoded fields, such as Attachment.Body. If this is the case, you can just use queryMore with the queryLocator from the first response.
In both cases, check the done and size properties of the QueryResult to determine whether you need to call queryMore and to see the total number of rows that match the SOQL query.
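For example, with the Java WSC client the header is a single call (a sketch; the PHP toolkit should expose an equivalent QueryOptions header, but check its docs):

// Ask the server for up to 500 rows per batch; the documented range is 200-2000,
// and Salesforce may still shrink batches containing very wide rows.
connection.setQueryOptions(500);
QueryResult qr = connection.query("SELECT Name FROM Account LIMIT 100");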
To avoid governor limits, it might be better to add all the records to a list, then do everything you need to the records in that list. Once you're done, just update your database with: update listName;
I executed a query like "Address:Jack*". It shows numFound = 5214 but displays only 100 documents on the results page (I changed the default number of displayed results from 10 to 100).
How can I get all the documents?
I remember doing &rows=2147483647
2,147,483,647 is the maximum value of an integer. I recall once using a number bigger than that and getting a NumberFormatException because it couldn't be parsed into an int. I don't know whether they use Long nowadays, but 2 billion rows is normally more than enough.
A small note:
Be careful if you are planning to do this in production. If you run a query like *:* and your index is big, you could be transferring a couple of gigabytes in that query.
If you know you won't have many docs, go ahead and use the integer's max value.
On the other hand, if you are doing a one-time script and just need to dump all results (for example, document IDs), then this approach is valid, if you don't mind waiting 3-5 minutes for the query to return.
Don't use &rows=2147483647
Don't use Integer.MAX_VALUE (2147483647) as the rows value in production. This will heavily slow down your query even with a small result set, because Solr preallocates a queue of that size. See https://issues.apache.org/jira/browse/SOLR-7580
I strongly suggest using Exporting Result Sets instead:
It's possible to export fully sorted result sets using a special rank query parser and response writer specifically designed to work together to handle scenarios that involve sorting and exporting millions of records.
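For reference, an /export request looks like this (a sketch; the collection name is a placeholder, and the handler requires the sort and fl fields to have docValues):
http://localhost:8983/solr/collection/export?q=Address:Jack*&sort=id+asc&fl=id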
Alternatively, I suggest using Deep Paging.
Simple pagination is easy when you have few documents to read: you just play with the start and rows parameters. But it is not a feasible approach when you have many documents, meaning hundreds of thousands or even millions.
This is the kind of thing that can bring your Solr server to its knees.
For typical applications displaying search results to a human user,
this tends to not be much of an issue since most users don’t care
about drilling down past the first handful of pages of search results
— but for automated systems that want to crunch data about all of the
documents matching a query, it can be seriously prohibitive.
This means that if you have a website and page through search results, a real user will not go that deep; but consider, on the other hand, what can happen if a spider or a scraper tries to read all of the site's pages.
Now we are talking about Deep Paging.
I suggest reading this amazing post:
https://lucidworks.com/post/coming-soon-to-solr-efficient-cursor-based-iteration-of-large-result-sets/
And take a look at this documentation page:
https://solr.apache.org/guide/pagination-of-results.html
And here is an example that tries to show how to paginate using cursors.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrQuery.ORDER;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.CursorMarkParams;

SolrQuery solrQuery = new SolrQuery();
solrQuery.setRows(500);
solrQuery.setQuery("*:*");
// Pay attention to this line: cursors require a sort on a unique field
solrQuery.addSort("id", ORDER.asc);
String cursorMark = CursorMarkParams.CURSOR_MARK_START;
boolean done = false;
while (!done) {
    solrQuery.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
    QueryResponse rsp = solrClient.query(solrQuery);
    String nextCursorMark = rsp.getNextCursorMark();
    for (SolrDocument d : rsp.getResults()) {
        // ... process each document
    }
    // When the cursor stops advancing, every result has been read
    if (cursorMark.equals(nextCursorMark)) {
        done = true;
    }
    cursorMark = nextCursorMark;
}
Returning all the results is never a good option, as it would be very slow.
Can you mention your use case?
Also, the Solr rows parameter helps you tune the number of results returned.
However, I don't think there is a way to make rows return all results; it doesn't take -1 as a value.
So you would need to set a high value for all the results to be returned.
What you should do is first create a SolrQuery, as shown below, and set the number of documents you want to fetch per batch.
int lastResult = 0; // this is for processing the next batch
// Concatenate lastResult into the query string so the variable is actually used
String query = "id:[" + lastResult + " TO *]"; // just considering id for the sake of simplicity
SolrQuery solrQuery = new SolrQuery(query).setRows(500); // setRows sets the batch size; change it to whatever size you want
solrQuery.setSort("id", ORDER.asc); // sort so "the last id" of a batch is well defined
SolrDocumentList results = solrClient.query(solrQuery).getResults(); // execute this statement
Here I am considering an example of searching by id; you can replace it with any parameter you want to search on.
"lastResult" is the variable you change after executing the first batch of 500 records (500 is the batch size), setting it to the last id from those results.
This helps you execute the next batch, starting after the last result of the previous batch.
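To make the batching explicit, here is a sketch of the full loop. Assumptions: a SolrJ SolrClient, a string id field whose values don't need query escaping, and readAllById as an illustrative name:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrQuery.ORDER;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

// Read the whole index in id order, one batch at a time.
static void readAllById(SolrClient solrClient) throws Exception {
    String lastId = null; // null means "start from the beginning"
    while (true) {
        // Exclusive lower bound {x TO *] so the previous batch's last doc is not re-read.
        String q = (lastId == null) ? "id:[* TO *]" : "id:{" + lastId + " TO *]";
        SolrQuery solrQuery = new SolrQuery(q).setRows(500).setSort("id", ORDER.asc);
        SolrDocumentList results = solrClient.query(solrQuery).getResults();
        if (results.isEmpty()) {
            break; // no more batches
        }
        for (SolrDocument d : results) {
            // ... process each document
        }
        lastId = String.valueOf(results.get(results.size() - 1).getFieldValue("id"));
    }
}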
Hope this helps. Leave a comment below if you need any clarification.
For selecting all documents in dismax/edismax via the Solarium PHP client, the normal query syntax *:* does not work. To select all documents, set the default query value in the Solarium query to an empty string. This is required because the default query in Solarium is *:*. Also set the alternative query to *:*. The dismax/edismax normal query syntax does not support *:*, but the alternative query syntax does.
For more details, the following book can be referred to:
http://www.packtpub.com/apache-solr-php-integration/book
As the other answers pointed out, you can configure rows to be the max integer value to get back all the results for a query.
I would recommend, though, using Solr's pagination feature and building a function that returns all the results using the cursorMark API. The gist of it: set the cursorMark parameter to '*', set the page size (the rows parameter), and each response gives you a cursorMark for the next page, so you execute the same query with the cursorMark from the last response. This gives you more flexibility over how many of the results you want back, in a much more performant way.
The way I dealt with the problem is by running the query twice:
// Start with your (usually small) default page size
solrQuery.setRows(50);
QueryResponse response = solrResponse(query); // solrResponse is a helper that executes the query
if (response.getResults().getNumFound() > 50) {
    // getNumFound() returns a long, so cast it before passing to setRows
    solrQuery.setRows((int) response.getResults().getNumFound());
    response = solrResponse(query);
}
It makes two calls to Solr, but gets you all matching records... with a small performance penalty.
query.setRows(Integer.MAX_VALUE);
works for me!!
How can I fetch more than 2000 records through SOQL? Is there something like queryMore?
Call queryMore with the queryLocator provided in the first set of results, and keep calling it with the next queryLocator until the done flag is true. See the Web Services API docs for more info.
You can actually do this manually as well if you order by Id and then query again with "WHERE Id > :idPrevious". If you try this just as I've typed it you'll hit a problem, however: you can't use > and < with Id fields. There is a simple workaround, though: just create a text-type formula field that takes its value from the Id field. Then you can use that field in the query with no problems (see the sketch below).
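For illustration, assuming a text formula field named Id_Text__c that mirrors Id (the field name is hypothetical), each pass would look something like:
SELECT Id, Name FROM Account WHERE Id_Text__c > :idPrevious ORDER BY Id_Text__c LIMIT 2000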
Of course if you're just looking to process loads of data then you might really be looking for Batch Apex.