I'm trying to delete a set of records using the following Smartsheet API 2.0 C# SDK method:
long[] deleteRowIds = existingRowIds.Except(updatedRowIds).ToArray();
smartsheet.SheetResources.RowResources.DeleteRows(sheetId, deleteRowIds, true);
Within the Smartsheet documentation, the example for the row ID parameter is as follows:
smartsheet.SheetResources.RowResources.DeleteRows(sheetId, new long[] { 207098194749316, 207098194749317 }, true);
I hardcoded the row IDs relevant to my sheet and was able to execute the method. However, when I pass the array of IDs generated in my first line of code, I receive this error: "There was an issue connecting".
I can't find that error anywhere in their documentation. Is there a chance I'm misunderstanding how my long[] variable is initialized from the List by ToArray()?
That's really my only theory (I've exported all my row IDs to confirm I'm not pushing an incorrect data type).
Any help would be greatly appreciated.
Thanks!
Channing
Looks like the DeleteRows bulk operation has a limit on the number of row IDs you can pass in the long[] parameter. The limit is somewhere between 400 and 500 row IDs. I'll partition my IDs into chunks to stay under the limit, as sketched below.
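For anyone who hits the same wall, here is a minimal sketch of that partitioning. The chunk size of 300 is a hypothetical, conservative choice since I haven't confirmed the exact limit; it assumes using System.Linq is in scope:

// Delete rows in chunks small enough to stay under the observed 400-500 id limit
const int chunkSize = 300; // hypothetical, chosen below the observed limit
for (int i = 0; i < deleteRowIds.Length; i += chunkSize)
{
    long[] chunk = deleteRowIds.Skip(i).Take(chunkSize).ToArray();
    smartsheet.SheetResources.RowResources.DeleteRows(sheetId, chunk, true);
}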
I am facing an issue using the Salesforce API. While querying, I am getting the following exception: "The SOQL FIELDS function must have a LIMIT of at most 200". I understand Salesforce expects a maximum of 200, so I wanted to ask: how can I query when there are more than 200 results?
I can only use the REST API to query, but if there is another option, please let me know and I will try to add it to my code.
Thanks in Advance
You could chunk it: SELECT FIELDS(ALL) FROM Account ORDER BY Id LIMIT 200, then read the Id of the last record and in the next query add WHERE Id > '001...'. But that's not very effective.
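For instance, the follow-up query for the next page could look like this (the Id value is a made-up placeholder for the last Id you read):

SELECT FIELDS(ALL)
FROM Account
WHERE Id > '001XXXXXXXXXXXXXXX'
ORDER BY Id
LIMIT 200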
Look into "describe" calls, waste 1 call to learn names of all fields you need and explicitly list them in the query instead of relying on FIELDS(ALL). You can compose SOQL up to 20k characters long and with "bulk API" queries you could fetch up to 10k records in each API call so "investing" 1 call for describes would quickly pay off.
You could even cache the describe's result in your application and fetch fresh only if something interesting changed, there's rest API header for that: https://developer.salesforce.com/docs/atlas.en-us.232.0.api_rest.meta/api_rest/sobject_describe_with_ifmodified_header.htm
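As a sketch, such a conditional describe request could look like this (the API version and date are placeholders); per the linked doc, the server answers 304 Not Modified when the object's metadata hasn't changed:

GET /services/data/v52.0/sobjects/Account/describe/
If-Modified-Since: Fri, 01 Oct 2021 00:00:00 GMT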
Try this, it is helpful:
// Get the schema map of the Account SObject
Map<String, Schema.SObjectField> fieldMap = Account.sObjectType.getDescribe().fields.getMap();

// Collect all of the field names on the object
Set<String> setFieldNames = fieldMap.keySet();
List<String> lstFieldNames = new List<String>(setFieldNames);

// Build and run the dynamic query
List<Account> lstAccounts = Database.query('SELECT ' + String.join(lstFieldNames, ',') + ' FROM Account');
System.debug('lstAccounts: ' + lstAccounts);
I have some images stored in the default cluster of my OrientDB database. I stored them following the documentation's approach for large content, which splits the data across multiple ORecordBytes records: http://orientdb.com/docs/2.1/Binary-Data.html
So I have two types of records in my default cluster: the binary data records themselves, and ODocuments whose 'data' field points to the various binary data records.
Some of the ODocument records' RIDs are referenced from other classes, but the rest are orphaned, and I would like to be able to retrieve them.
My idea was to use
select from cluster:default where #rid not in (select myField from MyClass)
But the problem is that this also returns the plain binary data records, and I only want the records that have the 'data' field.
Besides, I would prefer a cleaner query, because I don't think the "not in" clause is really something that should be encouraged. Is there something like a JOIN that returns the records that are not joined to anything?
Can you help me please?
To resolve my problem, I did the following. However, I don't know whether it is the right (most optimized) way to do it:
I used the following SQL query:
SELECT rid FROM (FIND REFERENCES (SELECT FROM CLUSTER:default)) WHERE referredBy = []
In Java, I execute it using an OCommandSQL wrapped in an OCommandRequest and retrieve an OrientDynaElementIterable. I just iterate over it to retrieve an OrientVertex, contained in another OrientVertex, from which I read the RID of the orphan.
Now, here is some code in case it helps someone, assuming that you have an OrientGraphNoTx or an OrientGraph in the 'graph' variable :)
String cmd = "SELECT rid FROM (FIND REFERENCES (SELECT FROM CLUSTER:default)) WHERE referredBy = []";
List<String> orphanedRid = new ArrayList<String>();

// Run the query through the graph API
OCommandRequest request = graph.command(new OCommandSQL(cmd));
OrientDynaElementIterable objects = request.execute();

// Each result wraps an orphaned record; extract its RID as a string
Iterator<Object> iterator = objects.iterator();
while (iterator.hasNext()) {
    OrientVertex obj = (OrientVertex) iterator.next();
    OrientVertex orphan = obj.getProperty("rid");
    orphanedRid.add(orphan.getIdentity().toString());
}
I am using ScalikeJDBC to fetch a large table, convert the data to JSON, and then call a web service with 50 JSON objects (rows) at a time. This is my code:
val rows = sql"SELECT * FROM bigtable"
val jsons = rows.map { row =>
  // build JSON object for each row
}.toList().apply()
jsons.grouped(50).foreach { batch =>
  // send 50 objects at once to an HTTP server
}
This works but, unfortunately, the intermediate list is huge and consumes a lot of memory. I am looking for a way to iterate over the result set in a "lazy" fashion, similar to foreach, except that I want to iterate over batches of 50 rows. Is that possible with ScalikeJDBC?
I solved the memory issues by filling and clearing a mutable list instead of using grouped, but I am still looking for a better solution.
Try specifying fetchSize.
See also: http://scalikejdbc.org/documentation/operations.html#setting-jdbc-fetchsize
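As an illustration, here is a minimal sketch combining fetchSize with a small mutable buffer; buildJson and sendBatch are placeholders for your own row-to-JSON and HTTP code:

import scalikejdbc._

DB readOnly { implicit session =>
  val buffer = scala.collection.mutable.ListBuffer.empty[String]

  sql"SELECT * FROM bigtable"
    .fetchSize(50) // hint the JDBC driver to stream rows instead of buffering them all
    .foreach { rs =>
      buffer += buildJson(rs) // buildJson: your row-to-JSON conversion
      if (buffer.size == 50) {
        sendBatch(buffer.toList) // sendBatch: your HTTP call
        buffer.clear()
      }
    }

  if (buffer.nonEmpty) sendBatch(buffer.toList) // flush the final partial batch
}

Note that some JDBC drivers need extra settings before fetchSize actually streams (PostgreSQL, for example, requires autoCommit to be off).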
I have the following:
43 documents indexed in Solr
If I use the Java API to do a query without any grouping, such as:
SolrQuery query = new SolrQuery("*:*");
query.setRows(10);
I can then obtain the total number of matching elements like this:
solrServer.query(query).getResults().getNumFound(); //43 in this case
The getResults() method returns a SolrDocumentList instance which contains this value.
If, however, I use grouping, something like:
query.set("group", "true");
query.set("group.format", "simple");
query.set("group.field", "someField");
Then the above code for retrieving the query results no longer works (it throws an NPE), and I have to use this instead:
List<GroupCommand> groupCommands = solrServer.query(query).getGroupResponse().getValues();
List<Group> groups = groupCommands.get(0).getValues();
Group group = groups.get(groupIndex);
I don't understand how to use this part of the API to get the overall number of matching documents (the 43 from the non-grouping query above). At first I thought that with grouping it is no longer possible to get that, but I've noticed that if I run a similar query in the Solr admin console, with the same grouping and everything, it returns exactly the same results as the Java API and also numFound=43. So the code behind the console evidently has some way to retrieve that value even when grouping is used.
My question is: how can I get that overall number of matching documents for a grouped query executed via the Solr Java API?
Looking at the source for the Group returned from your groups.get(groupIndex) call, it has a getResults() method that returns a SolrDocumentList. The SolrDocumentList has a getNumFound() method that should return the overall number, I believe...
So you should be able to get this as the following:
int numFound = group.getResults().getNumFound();
Hope this helps.
Update: as the OP stated, I believe group.getResults().getNumFound() will only return the number of items in the group. However, GroupCommand has a getMatches() method that may be the corresponding count that is desired:
int matches = groupCommands.get(0).getMatches();
If you set the group.ngroups parameter to true (default false), the response will also include the number of groups.
eg:
solrQuery.set("group.ngroups", true);
https://cwiki.apache.org/confluence/display/solr/Result+Grouping
This can then be retrieved from the corresponding GroupCommand with:
int numGroups = tempGroup.getNGroups();
At least that was my understanding?
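Putting the two answers together, here is a small sketch of reading both counts from a grouped query response (variable names are illustrative):

QueryResponse response = solrServer.query(query);
List<GroupCommand> commands = response.getGroupResponse().getValues();

for (GroupCommand command : commands) {
    // Overall number of documents that matched the query (the 43 above)
    int matches = command.getMatches();

    // Number of distinct groups; only populated when group.ngroups=true
    Integer numGroups = command.getNGroups();

    System.out.println("matches=" + matches + ", ngroups=" + numGroups);
}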
After much googling and console testing, I need some help with arrays in Rails. In a method, I search the database for all rows matching a certain requirement and put them in a variable. Next I want to call .each on that array and loop through it. My problem is that sometimes only one row is matched in the initial search, and .each raises a NoMethodError.
I called .class in both situations, with multiple rows and with only one row. When there are multiple rows, the variable I dump them into is of class Array. If there is only one row, it is of the model's class.
How can I have an each loop that won't break when there's only one instance of an object in my search? I could hack something together with lots of conditional code, but I feel like I'm not seeing something really simple here.
Thanks!
Requested Code Below
@user = User.new(params[:user])
if @user.save
  # scan the invites db table and if the user's email is present, add the new uid to the table
  @talentInvites = TalentInvitation.find_by_email(@user.email)
  unless @talentInvites.nil?
    @talentInvites.each do |tiv|
      tiv.update_attribute(:user_id, @user.id)
    end
  end
  ....more code...
Use find_all_by_email; it will always return an array, even an empty one.
@user = User.new(params[:user])
if @user.save
  # scan the invites db table and if the user's email is present, add the new uid to the table
  @talentInvites = TalentInvitation.find_all_by_email(@user.email)
  unless @talentInvites.empty?
    @talentInvites.each do |tiv|
      tiv.update_attribute(:user_id, @user.id)
    end
  end
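On newer Rails versions, where the dynamic find_all_by_* finders are deprecated, the equivalent would be a where relation, which likewise always returns a collection, even an empty one:

@talentInvites = TalentInvitation.where(email: @user.email)
@talentInvites.each do |tiv|
  tiv.update_attribute(:user_id, @user.id)
end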