Declaring new variables on the fly? - salesforce

Has anyone found a way to declare variables on the fly?
I have some variables that can be limitless...
val1
val2
val3
val4...
Is there a way to create these on the fly, something like...
for(loop through){
val + '1' = dosomething('1')
}
I know it won't look anything like the above, but I'm hoping you get the gist of it.

Apex is a compiled language, not a parsed/dynamic one. You could pull such a thing off in, say, JavaScript or PHP, but (as far as I know) not in Java. And when you get to the bottom of it, Salesforce is built on an Oracle database and Java, so Apex is a kind of thin wrapper around the Java calls they deemed most necessary.
Just out of curiosity - why would you need that?
Use Jeremy's idea with loops if you need sequential access. Or...
If you need "unique key -> some value", use Maps.
Map<String, Double> myMap = new Map<String, Double>();
for (Integer i = 1; i < 10; ++i) {
    myMap.put('someKey' + String.valueOf(i), Math.floor(Math.random() * 1000));
}
System.debug(myMap);
System.debug(myMap.get('someKey7'));
The "double" in this example can be equally Integer, Id, Account, MyCustomClass... whatever your function returns.
There's one more trick I can think of that might be helpful if your data comes from an external source: use the JSON/XML parsers, which can cast any kind of data (held in a String) to a collection of your choice. It goes back to the List/Map idea, but it's totally up to you how you build or retrieve this string beforehand. Read about the JSON methods for a start; if your data doesn't follow a predictable structure, check out the JSON/XML parser classes and their examples.
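For instance, here's a minimal sketch of the JSON route (the payload and key names are made up):
// Hypothetical payload; in practice this string would come from a callout
// or a stored document.
String payload = '{"someKey1": 12.5, "someKey2": 99.0}';
// deserializeUntyped turns arbitrary JSON into generic collections,
// so no concrete Apex class is needed up front.
Map<String, Object> parsed = (Map<String, Object>) JSON.deserializeUntyped(payload);
for (String key : parsed.keySet()) {
    System.debug(key + ' -> ' + parsed.get(key));
}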

This isn't possible in Apex. You could however use a list:
List<String> values = new List<String>();
for (Integer i = 0; i < aList.size(); i++) {
    values.add(dosomething(i)); // the list index stands in for the numeric suffix
}

Related

Appending values to DataSet in Apache Flink

I am currently writing a (simple) analysis code to sum time-connected power readings. Since the data is presumably raw (e.g. disturbances from the measuring device have not been calculated out), I have to account for disturbances by calculating the mean of the first one thousand samples. The calculation of the mean itself is not a problem. I am only unsure of how to generate the appropriate DataSet.
For now it looks about like this:
DataSet<Tupel2<long,double>>Gyrotron_1=ECRH.includeFields('11000000000'); // obviously the line to declare the first gyrotron; continues for the next ten lines, assuming separation of not-occupied space
DataSet<Tupel2<long,double>>Gyrotron_2=ECRH.includeFields('10100000000');
DataSet<Tupel2<long,double>>Gyrotron_3=ECRH.includeFields('10010000000');
DataSet<Tupel2<long,double>>Gyrotron_4=ECRH.includeFields('10001000000');
DataSet<Tupel2<long,double>>Gyrotron_5=ECRH.includeFields('10000100000');
DataSet<Tupel2<long,double>>Gyrotron_6=ECRH.includeFields('10000010000');
DataSet<Tupel2<long,double>>Gyrotron_7=ECRH.includeFields('10000001000');
DataSet<Tupel2<long,double>>Gyrotron_8=ECRH.includeFields('10000000100');
DataSet<Tupel2<long,double>>Gyrotron_9=ECRH.includeFields('10000000010');
DataSet<Tupel2<long,double>>Gyrotron_10=ECRH.includeFields('10000000001');
for (int=1,i<=10;i++) {
DataSet<double> offset=Gyroton_'+i+'.groupBy(1).first(1000).sum()/1000;
}
It's the part in the for-loop I'm unsure of. Does anybody know if it is possible to append values to DataSets and if so how?
In case of doubt, I could always put the values into an array but I do not know if that is the wise thing to do.
This code will not work for many reasons. I'd recommend looking into the fundamentals of Java and its basic data structures, and also into Flink.
It's really hard to understand what you're actually trying to achieve, but this is the closest I came up with:
String[] codes = { "11000000000", ..., "10000000001" };
DataSet<Tuple2<Long, Double>> result = null;
for (final String code : codes) {
    DataSet<Tuple2<Long, Double>> codeResult = ECRH.includeFields(code)
            .types(Long.class, Double.class) // assuming ECRH is a CsvReader; includeFields needs types()
            .groupBy(1)
            .first(1000)
            .sum(1) // sum the reading field (f1), not the timestamp
            .map(sum -> new Tuple2<>(sum.f0, sum.f1 / 1000d))
            .returns(new TypeHint<Tuple2<Long, Double>>() {}); // the lambda's tuple type is erased
    // union the per-code results instead of starting from an empty fromElements(),
    // which would throw for lack of elements
    result = (result == null) ? codeResult : result.union(codeResult);
}
result.print();
But please take the time to understand the basics before delving deeper. I also recommend using an IDE like IntelliJ, which would point out at least 6 issues in your code.

How to use Collections.binarySearch() in a CodenameOne project

I am used to being able to perform a binary search of a sorted list of, say, Strings or Integers, with code along the lines of:
Vector<String> vstr = new Vector<String>();
// etc...
int index = Collections.binarySearch (vstr, "abcd");
I'm not clear on how Codename One handles standard Java methods and classes, but it looks like this could be fixed easily if classes like Integer and String (or the Codename One versions of these) implemented the Comparable interface.
Edit: I now see that code along the lines of the following will do the job.
int index = Collections.binarySearch(vstr, "abcd", new Comparator<String>() {
    @Override
    public int compare(String object1, String object2) {
        return object1.compareTo(object2);
    }
});
Adding the Comparable interface (to the various primitive "wrappers") would also make it easier to use Collections.sort (another very useful method :-))
You can also sort with a comparator, but I agree, this is one of the important enhancements we need to provide in the native VMs on the various platforms. Personally, this is my biggest peeve in our current VM.
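For reference, sorting with a comparator looks just like the binarySearch workaround above; a minimal sketch:
import java.util.Collections;
import java.util.Comparator;
import java.util.Vector;

public class SortDemo {
    public static void main(String[] args) {
        Vector<String> vstr = new Vector<String>();
        vstr.add("banana");
        vstr.add("apple");
        // Same anonymous comparator as in the binarySearch example.
        Collections.sort(vstr, new Comparator<String>() {
            @Override
            public int compare(String object1, String object2) {
                return object1.compareTo(object2);
            }
        });
        System.out.println(vstr); // [apple, banana]
    }
}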
Can you file an RFE on that and mention it as a comment in the Number issue?
If we are doing that change might as well do both.

Need to figure out how to use DeepZoomTools.dll to create DZI

I am not familiar with .NET coding.
However, I must create DZI sliced image assets on a shared server and am told that I can instantiate and use DeepZoomTools.dll.
Can someone show me a very simple DZI creation script that demonstrates the proper .NET coding technique? I can embellish as needed, I'm sure, but don't know where to start.
Assuming I have a jpg, how does a script simply slice it up and save it?
I can imagine it's only a few lines of code. The server is running IIS 7.5.
If anyone has a simple example, I'd be most appreciative.
Thanks
I don't know myself, but you might ask in the OpenSeadragon community:
https://github.com/openseadragon/openseadragon/issues
Someone there might know.
Does it have to be DeepZoomTools.dll? There are a number of other options for creating DZI files. Here are a few:
http://openseadragon.github.io/examples/creating-zooming-images/
Example of building a Seadragon Image from multiple images.
In this, the "clsCanvas" objects and collection can pretty much be ignored, it was an object internal to my code that was generating the images with GDI+, then putting them on disk. The code below just shows how to get a bunch of images from file and assemble them into a zoomable collection. Hope this helps someone :-).
CollectionCreator cc = new CollectionCreator();
// set default values that make sense for conversion options
cc.ServerFormat = ServerFormats.Default;
cc.TileFormat = ImageFormat.Jpg;
cc.TileSize = 256;
cc.ImageQuality = 0.92;
cc.TileOverlap = 0;
// the max level should always correspond to the log base 2 of the tile size, unless otherwise specified
cc.MaxLevel = (int)Math.Log(cc.TileSize, 2);

// create a list of all the images to include in the collection
List<Microsoft.DeepZoomTools.Image> aoImages = new List<Microsoft.DeepZoomTools.Image>();
double fLeftShift = 0;
foreach (clsCanvas oCanvas in aoCanvases)
{
    // viewport width as a function of this canvas, so the width of this canvas is 1
    double fThisImgWidth = oCanvas.MyImageWidth - 1; // the -1 creates a 1px overlap, hides the seam between images
    double fTotalViewportWidth = fTotalImageWidth / fThisImgWidth;
    double fMyLeftEdgeInViewportUnits = -fLeftShift / fThisImgWidth; // please don't ask me why this is a negative number
    double fMyTopInViewportUnits = -fTotalViewportWidth * 0.3;
    fLeftShift += fThisImgWidth;
    Microsoft.DeepZoomTools.Image oImg = new Microsoft.DeepZoomTools.Image(oCanvas.MyFileName.Replace("_Out_Tile", ""));
    oImg.ViewportWidth = fTotalViewportWidth;
    oImg.ViewportOrigin = new System.Windows.Point(fMyLeftEdgeInViewportUnits, fMyTopInViewportUnits);
    aoImages.Add(oImg);
}
cc.Create(aoImages, sMasterOutFile);
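For the simpler single-image case asked about above, the same DLL also exposes an ImageCreator. Here is a minimal sketch, assuming its settings mirror the CollectionCreator ones shown above (the paths are hypothetical):
using Microsoft.DeepZoomTools;

// Slice one JPG into a DZI tile pyramid.
ImageCreator ic = new ImageCreator();
ic.TileFormat = ImageFormat.Jpg;
ic.TileSize = 256;
ic.TileOverlap = 0;
// Writes output.dzi plus a folder of tiles next to it.
ic.Create(@"C:\images\source.jpg", @"C:\images\output\output");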

System.TypeException: Cannot have more than 10 chunks in a single operation

I have this very weird error, "System.TypeException: Cannot have more than 10 chunks in a single operation". Has anyone seen/encountered this before? Can you guide me if you know how to solve it?
I am trying to insert different types of sObjects together in a list of sObjects. The list is never larger than 10 rows.
This post here:
https://developer.salesforce.com/forums/ForumsMain?id=906F000000090nUIAQ
suggests that it is not the number of different sObjects, but the order of the objects that causes this chunk limit to be exceeded. In other words, "1,1,1,2,2,2" has two chunks (a single transition from "1" to "2"), while "1,2,3,4,5,6" has six chunks, even though the number of elements is the same. Putting the objects into the list sorted in object order is the suggested solution.
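A minimal sketch of that fix (the object types here are just examples):
List<Sobject> mixed = new List<Sobject>{
    new Account(Name = 'a1'), new Contact(LastName = 'c1'),
    new Account(Name = 'a2'), new Contact(LastName = 'c2')
}; // alternating types: 4 chunks
// List.sort() orders sObjects by type first, giving
// Account, Account, Contact, Contact: only 2 chunks
mixed.sort();
insert mixed;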
Is it possible for you to create a reasonable test case with only 2 or 3 rows?
There are two possible explanations for this issue:
As Jagular noted, you did not order the sobjects you tried to insert, so there are more than 10 'chunks' in the list.
You try to insert > 2000 records, and > 1 sobject type. This one seems like a Salesforce Bug, since the error message doesn't match the issue.
Scenario 1 and its solution
When you have a hybrid list, make sure that the objects are not scattered without any order, for example A,B,A,B,A,B,A,B…. Salesforce has inherent trouble switching sObject types more than 10 times; it calls this switching limit the chunking limit. So, if you sort this hybrid list and then pass it for DML, Salesforce will be much happier, for example A,A,A,A,B,B,B,B… In this case, Salesforce only has to switch one time (read all A objects –> switch –> read all B objects). The max chunk limit is 10, so here we are safe.
listToUpdate.sort();
UPDATE listToUpdate;
Scenario 2 and its solution
Another point to bear in mind is that if the hybrid list contains a large number of objects of one type, we can run into the TypeException as well. If the list contains 1001 objects of type A and 1001 objects of type B, the total is 2002 objects. Each chunk can hold at most 200 records, so 2002 records need at least 11 chunks, which exceeds the maximum of 10. In this case, we have to foresee how many objects can enter this code and write it to pass lists of a safe size for DML every time.
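A minimal sketch of that batching idea for a single-type list (a combined solution covering all scenarios appears further below):
List<Sobject> bigList = new List<Sobject>();
for (Integer i = 0; i < 1001; i++) {
    bigList.add(new Account(Name = 'Test ' + i));
}
// DML in batches of at most 200 records so no chunk can overflow
List<Sobject> batch = new List<Sobject>();
for (Sobject rec : bigList) {
    batch.add(rec);
    if (batch.size() == 200) {
        insert batch;
        batch = new List<Sobject>();
    }
}
if (!batch.isEmpty()) {
    insert batch;
}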
Scenario 3 and its solution
The third scenario can happen if the hybrid list contains objects of more than 10 types: even if the list is very small, a switch happens every time Salesforce reads a different sObject type. So in this case we have to allot a separate list for each sObject type and pass each one for DML on its own. Doing this in an Apex trigger or class can cause trouble, as multiple DMLs are initiated in one context. Passing these per-type lists for DML in a different context really eases the load you pump into the platform, so consider doing this kind of logic in a Batch Apex job rather than an Apex trigger or class.
Hope this helps.
Below is code that should cover all 3 scenarios from Arpit Sethi's answer.
It's a piece of code I took from this topic: https://developer.salesforce.com/forums/?id=906F000000090nUIAQ
and modified to cover Scenario 2.
private static void saveSobjectSet(List<Sobject> listToUpdate) {
    Integer SFDC_CHUNK_LIMIT = 10;
    // Developed this part due to System.TypeException: Cannot have more than 10 chunks in a single operation
    Map<String, List<Sobject>> sortedMapPerObjectType = new Map<String, List<Sobject>>();
    Map<String, Integer> numberOf200ChunkPerObject = new Map<String, Integer>();
    for (Sobject obj : listToUpdate) {
        String objTypeREAL = String.valueOf(obj.getSObjectType());
        if (!numberOf200ChunkPerObject.containsKey(objTypeREAL)) {
            numberOf200ChunkPerObject.put(objTypeREAL, 1);
        }
        // Number of 200-record chunks so far for this object type
        Integer numberOf200Record = numberOf200ChunkPerObject.get(objTypeREAL);
        // Object type + number of the current 200-record chunk
        String objTypeCURRENT = objTypeREAL + String.valueOf(numberOf200Record);
        // Current list
        List<Sobject> currentList = sortedMapPerObjectType.get(objTypeCURRENT);
        if (currentList == null || currentList.size() > 199) {
            if (currentList != null && currentList.size() > 199) {
                // Current chunk is full: move to the next chunk key for this type
                // (the original code reused the old key here, overwriting a full chunk)
                numberOf200Record++;
                numberOf200ChunkPerObject.put(objTypeREAL, numberOf200Record);
                objTypeCURRENT = objTypeREAL + String.valueOf(numberOf200Record);
            }
            sortedMapPerObjectType.put(objTypeCURRENT, new List<Sobject>());
        }
        sortedMapPerObjectType.get(objTypeCURRENT).add(obj);
    }
    while (sortedMapPerObjectType.size() > 0) {
        // Build a list holding at most the chunking limit of per-type chunks, so we don't get any errors
        List<Sobject> safeListForChunking = new List<Sobject>();
        List<String> keyListSobjectType = new List<String>(sortedMapPerObjectType.keySet());
        for (Integer i = 0; i < SFDC_CHUNK_LIMIT && !sortedMapPerObjectType.isEmpty(); i++) {
            List<Sobject> listSobjectOfOneType = sortedMapPerObjectType.remove(keyListSobjectType.remove(0));
            safeListForChunking.addAll(listSobjectOfOneType);
        }
        update safeListForChunking;
    }
}
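Hypothetical usage from the same class (the records need Ids, since the helper issues update DML):
List<Sobject> mixed = new List<Sobject>();
for (Account a : [SELECT Id FROM Account LIMIT 1000]) {
    mixed.add(a);
}
for (Contact c : [SELECT Id FROM Contact LIMIT 1000]) {
    mixed.add(c);
}
saveSobjectSet(mixed);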
Hope it helps,
Bye
Hi, I kind of devised a simple way to sort a list of different sObject types:
public List<Sobject> SortRecordsByType(List<Sobject> records){
    List<Sobject> response = new List<Sobject>();
    Map<String, List<Sobject>> sortDictionary = new Map<String, List<Sobject>>();
    for (Sobject record : records) {
        String objectTypeName = record.getSobjectType().getDescribe().getName();
        if (sortDictionary.containsKey(objectTypeName)) {
            sortDictionary.get(objectTypeName).add(record);
        } else {
            sortDictionary.put(objectTypeName, new List<Sobject>{ record });
        }
    }
    // concatenate the per-type lists so records of the same type sit next to each other
    for (String objectName : sortDictionary.keySet()) {
        response.addAll(sortDictionary.get(objectName));
    }
    return response;
}
Hopefully this solves your problem.

Saving type information of datastructures

I am building a framework for my day-to-day tasks. I am programming in Scala, using a lot of type parameters.
Now my goal is to save data structures to files (e.g. XML files). But I realized that it is not possible using XML files. As I am new to this kind of problem, I am asking:
Is there a way to store the types of my data structures in a file? Is there a way in Scala?
Okay guys, you did a great job, basically by naming the thing I searched for.
It's serialization.
With this in mind I searched the web and was completely astonished by this feature of Java.
Now I do something like:
object Serialize {
  def write[A](o: A): Array[Byte] = {
    val ba = new java.io.ByteArrayOutputStream(512)
    val out = new java.io.ObjectOutputStream(ba)
    out.writeObject(o)
    out.close()
    ba.toByteArray()
  }

  def read[A](buffer: Array[Byte]): A = {
    val in = new java.io.ObjectInputStream(new java.io.ByteArrayInputStream(buffer))
    in.readObject().asInstanceOf[A]
  }
}
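A quick round trip as a sketch (note that the asInstanceOf cast in read is unchecked, so the caller has to know the right type):
// Any Serializable value survives the write/read round trip.
val bytes: Array[Byte] = Serialize.write(Map("a" -> 1, "b" -> 2))
val restored: Map[String, Int] = Serialize.read[Map[String, Int]](bytes)
println(restored("b")) // 2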
The resulting byte arrays can be written to a file, and everything works well.
And I am totally fine that this solution is not human-readable. If my mind changes someday, there are JSON parsers all over the web.
