The Basics of Parse Blocks in Swift - arrays

I'm having a little issue understanding how exactly blocks work.
for x in self.activerestaurantIDArray {
    let namelabel = x.0
    self.activenameArray.append(namelabel)
    let distancelabel = x.1
    self.activedistanceArray.append(distancelabel)
    let imageFile = x.2
    imageFile.getDataInBackgroundWithBlock({ (imageData: NSData?, error: NSError?) -> Void in
        if error == nil {
            let realimage = UIImage(data: imageData!)
            self.activeimageArray.append(realimage!)
        }
        println(self.activenameArray)
        println(self.activedistanceArray)
        println(self.activeimageArray)
    })
}
In the code above, I am appending information to arrays from the tuple (named activerestaurantIDArray) so that I get separate arrays of name, distance and image. As for the image, I can only retrieve a PFFile from Parse, so I have to transform the file into a UIImage.
However, when I do this, the appends to activeimageArray only take effect while the println() is inside the block (imageFile.getDataInBackgroundWithBlock).
If I println(self.activeimageArray) anywhere outside of the block, the array turns out nil. I am not really sure why that happens, or how I should go about ensuring that the appended values persist beyond the block. Any help would be greatly appreciated.

Closures (yes, also known as blocks) are pieces of code that can be called later; they are essentially functions.
What imageFile.getDataInBackgroundWithBlock does is start downloading the data; when the download finishes, it calls your block, passing you the contents of the thing it just downloaded.
It is done this way because downloading must take place on a separate thread. Otherwise your main thread would freeze, and not only would that cause annoyances for the user, but after 5 seconds of non-responsiveness the OS will kill your app. getDataInBackgroundWithBlock runs the download operation on a different thread, and then lets you know it finished by calling your block. This allows you to download anything without "freezing" or "hanging" your app.
So imageData, when used outside the block, is nil because you are trying to use it before the download operation has completed. The code after your loop will always run long before a download operation finishes, so you must put the code that depends on the downloaded data inside the block - in this case, appending the resulting image to your array.
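To illustrate, here is a minimal sketch of that idea applied to the loop from the question. The dispatch group is my addition (it is not something the Parse SDK requires); it uses the same Swift 1.x-era API as the question, so that the code that needs the finished array runs only after every block has fired:

let group = dispatch_group_create()

for x in self.activerestaurantIDArray {
    dispatch_group_enter(group)
    x.2.getDataInBackgroundWithBlock({ (imageData: NSData?, error: NSError?) -> Void in
        if error == nil {
            if let data = imageData, image = UIImage(data: data) {
                self.activeimageArray.append(image)
            }
        }
        dispatch_group_leave(group) // always leave, even on error
    })
}

dispatch_group_notify(group, dispatch_get_main_queue()) {
    // Runs once, after all of the blocks above have completed.
    println(self.activeimageArray)
}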
Edit to add: this is not exclusive to Parse. Almost any framework that could otherwise freeze your UI performs such operations asynchronously and delivers the notifications using blocks. It's still possible to be notified by using delegates, but ever since blocks were introduced in Objective-C, people have been moving away from that pattern.

Related

Why would angular.forEach be skipped as if it were doing async when it's not?

Code
var returnValue = false;
var myList = myService.getBigListOfStuff(); // an array that contains a bunch of integers, one of which is 4
var stupidArray = [4]; // just contains the number 4
angular.forEach(stupidArray, function(myListItem){
    if(lodash.contains(myList, myListItem)){
        returnValue = true;
    }
});
return returnValue;
Issue
4 is in myList, but the function is returning false, because the return statement gets hit before the returnValue = true; statement does. The code is not doing anything async, like calling http.get; it's just looking up a number in a big array using lodash.contains.
Random Observations
This only happens in one of my environments, where source mapping is used; not on my local machine. In that environment, several line numbers for the code are greyed out in the Chrome debugger and I can't put breakpoints on any of those lines, but on my local machine all line numbers are black and I can put breakpoints on any of them. My friend thinks angular.forEach is async, but I can't find any documentation to that effect. I looked at its source code and I don't see anything async in there.
What I've done
I changed the code to use a plain JavaScript for loop, and the problem went away in all environments. However, a developer had used angular.forEach all over the place, so there's a concern we may be returning values before they have been processed by the preceding loops.
angular.forEach is not async as you have found by doing the research into the source code.
A couple things to check:
You reference myService.getBigListOfStuff() and say it returns an array. Does it sometimes return a promise or anything async? If so, that could explain why the item is not found in the list: it has not been populated yet.
Can you verify the types match in each of the lists? The comments have alluded to the possibility that you have differing types across the lists ("4" vs. 4), and lodash does a strict equality comparison.
You shouldn't have concerns over thinking angular.forEach is async, as it most certainly is not. If this is happening in only one environment, you should be able to turn source maps off in your settings and set breakpoints anywhere. The location shouldn't be too difficult to find even in minified code: set a conditional breakpoint that fires when returnValue (or the minified variable for it) is falsy upon returning from that method. You can then inspect both the list returned from the service and your stupidArray declaration.

Swift threading issue in Array

In my project, I have a data provider which delivers data every 2 milliseconds. The following is the delegate method in which the data arrives:
func measurementUpdated(_ measurement: Double) {
    measurements.append(measurement)
    guard measurements.count >= 300 else { return }
    ecgView.measurements = Array(measurements.suffix(300))
    DispatchQueue.main.async {
        self.ecgView.setNeedsDisplay()
    }
    guard measurements.count >= 50000 else { return }
    let olderMeasurementsPrefix = measurements.count - 50000
    measurements = Array(measurements.dropFirst(olderMeasurementsPrefix))
    print("Measurement Count : \(measurements.count)")
}
What I am trying to do is, once the array has more than 50,000 elements, delete the older measurements at the first n indexes of the array, for which I am using Array's dropFirst method.
But, I am getting a crash with the following message:
Fatal error: Can't form Range with upperBound < lowerBound
I think the issue is due to threading: both appending and deletion might happen at the same time, since the delegate fires every 2 milliseconds. Can you suggest an optimized way to resolve this issue?
So to really fix this, we need to first address two of your claims:
1) You said, in effect, that measurementUpdated() would be called on the main thread (you said both append and dropFirst would be called on the main thread). 2) You also said several times that measurementUpdated() would be called every 2 ms. You do not want to be calling a method every 2 ms on the main thread: you'll pile up quite a lot of calls very quickly, and get many delays in their processing, as the main thread will have UI work to do, and that always eats up time.
So first rule: measurementUpdated() should always be called on another thread. Keep it on the same thread every time, though.
Second rule: The entire code path, from whatever collects the data to the point where measurementUpdated() is called, must also be on a non-main thread. It can be the thread that measurementUpdated() runs on, but it doesn't have to be.
Third rule: You do not need your UI graph to update every 2 ms. The human eye cannot perceive UI changes faster than about 150 ms. Also, the device's main thread would get totally bogged down trying to re-render as frequently as every 2 ms; I bet your graph UI can't even render a single pass in 2 ms! So give your main thread a break by only updating the graph every, say, 150 ms. Measure the current time in ms and compare it against the last time you updated the graph from this routine.
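For instance, a minimal sketch of that throttle (the lastGraphUpdate property and the helper name are mine; CACurrentMediaTime() comes from QuartzCore):

import QuartzCore

private var lastGraphUpdate: CFTimeInterval = 0 // property on the same class

func pushToGraphIfNeeded() {
    let now = CACurrentMediaTime()
    guard now - lastGraphUpdate >= 0.15 else { return } // skip unless ~150 ms have passed
    lastGraphUpdate = now
    let snapshot = Array(measurements.suffix(300))
    DispatchQueue.main.async {
        self.ecgView.measurements = snapshot
        self.ecgView.setNeedsDisplay()
    }
}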
Fourth rule: don't change any array (or any object) in two different threads without doing a mutex lock, as they'll sometimes collide (one thread will be trying to do an operation on it while another is too). An excellent article that covers all the current swift ways of doing mutex locks is Matt Gallagher's Mutexes and closure capture in Swift. It's a great read, and has both simple and advanced solutions and their tradeoffs.
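For example, the simplest option from that article is NSLock. A sketch, assuming the measurements array from the question (the helper names are mine); every thread that touches the array must take the same lock:

let measurementsLock = NSLock()

func appendMeasurement(_ value: Double) {
    measurementsLock.lock()
    measurements.append(value)
    measurementsLock.unlock()
}

func latestMeasurements(_ n: Int) -> [Double] {
    measurementsLock.lock()
    defer { measurementsLock.unlock() } // releases the lock on every exit path
    return Array(measurements.suffix(n))
}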
One other suggestion: You're allocating or reallocating a few arrays every 2 ms. That's unnecessary and, I'd think, adds undue stress on the memory pools under the hood. I suggest not doing append and dropFirst calls. Try rewriting such that you have a single array that holds 50,000 doubles and never changes size. Simply change values in the array, and keep two indexes so that you always know where the "start" and the "end" of the data set are within the array, i.e. pretend the array element after the last is the first (the array loops around to the front). Then you're not churning memory at all, and it'll operate much quicker too. You can surely find Array extensions people have written to make this trivial to use. Every 150 ms you can copy the data into a second pre-allocated array in the correct order for your graph UI to consume, or just pass the two indexes to your graph UI if you own the graph UI and can adjust it to accommodate them.
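A sketch of that fixed-size, two-index idea, written as a plain circular buffer (the type and names are mine):

struct MeasurementRing {
    private var storage: [Double]
    private var head = 0 // index of the oldest value
    private(set) var count = 0

    init(capacity: Int) {
        storage = Array(repeating: 0, count: capacity)
    }

    mutating func append(_ value: Double) {
        storage[(head + count) % storage.count] = value
        if count < storage.count {
            count += 1
        } else {
            head = (head + 1) % storage.count // full: the slot just written was the oldest
        }
    }

    // Copies the most recent n values, oldest first, for the graph to consume.
    func suffix(_ n: Int) -> [Double] {
        let m = min(n, count)
        return (0..<m).map { storage[(head + count - m + $0) % storage.count] }
    }
}

With something like this, the 2 ms path only writes a single Double, and the 150 ms path copies suffix(300) for the view.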
I don't have time right now to write a code example that covers all of this (maybe someone else does), but I'll try to revisit this tomorrow. It'd actually be a lot better for you to make a renewed stab at it yourself, and then ask us a new question (in a new StackOverflow post) if you get stuck.
Update: As @Smartcat correctly pointed out, this solution has the potential of causing memory issues if the main thread is not fast enough to consume the arrays at the same pace the worker thread produces them.
The problem seems to be caused by ecgView's measurements property: you are writing to it on the thread receiving the data, while the view tries to read from it on the main thread, and simultaneous accesses to the same data from multiple threads are (unfortunately) likely to generate race conditions.
In conclusion, you need to make sure that both reads and writes happen on the same thread, which can easily be achieved by moving the setter call within the async dispatch:
let ecgViewMeasurements = Array(measurements.suffix(300))
DispatchQueue.main.async {
    self.ecgView.measurements = ecgViewMeasurements
    self.ecgView.setNeedsDisplay()
}
According to what you say, I will assume the delegate is calling the measurementUpdated method from a concurrent thread.
If that's the case, and the problem is really related to threading, this should fix your problem:
// Create the queue once and reuse it; making a new queue
// on every call would not serialize anything.
let serialQueue = DispatchQueue(label: "MySerialQueue")

func measurementUpdated(_ measurement: Double) {
    serialQueue.async {
        self.measurements.append(measurement)
        guard self.measurements.count >= 300 else { return }
        self.ecgView.measurements = Array(self.measurements.suffix(300))
        DispatchQueue.main.async {
            self.ecgView.setNeedsDisplay()
        }
        guard self.measurements.count >= 50000 else { return }
        let olderMeasurementsPrefix = self.measurements.count - 50000
        self.measurements = Array(self.measurements.dropFirst(olderMeasurementsPrefix))
        print("Measurement Count : \(self.measurements.count)")
    }
}
This puts the code on a serial queue, so you can ensure that only one of these blocks runs at a time.

Querying for objects with angularFireCollection?

I used the implicit method for retrieving data objects:
setData = function(segment){
    var url = 'https://myFireBase.firebaseio.com/';
    var rawData = angularFire(url + segment, $rootScope, 'data', {});
    rawData.then(function(data){
        // sorting and adjusting data, and then broadcasting and/or assigning
    });
};
This code is located inside a service that gets called from different locations; by later development stages there will probably be around 100-150 such calls, which is why I moved out of the controllers and into a service. But now Firebase's data binding would obviously let the different segments overwrite each other, so I turned back to the explicit method, to have the different Firebase references only send data to the site instead of data-binding and overwriting each other:
var rawData = angularFireCollection(url+segment);
And right there I discovered why I chose the implicit method in the first place: there's an argument for the type, so I could tell Firebase whether I'm asking for a string, an array, an object, etc. I even looked at angularfire.js and saw that if the argument is not given, it falls back to identifying the data as an array by default.
Now, I'm definitely going to move to the explicit method (that is, if no salvation comes with Angular 2.0), and reconstructing my Firebase JSON to fit the array-only policy is not that big of a deal, but surely there's an option to explicitly ask for objects, or am I missing something?
I'm not totally clear on what the question is - with angularFireCollection, you can certainly retrieve objects just fine. For example, in the bundled chat app (https://github.com/firebase/angularFire/blob/gh-pages/examples/chat/app.js#L5):
$scope.messages = angularFireCollection(new Firebase(url).limit(50));
Each message is stored as an object, with its own unique key as generated by push().
I'm also curious about what problems you found while using the implicit method as your app grew. We're really looking to address problems like these for the next iteration of angularFire!

Arrays in PowerBuilder

I have this code
n_userobject inv_userobject[]

For i = 1 to dw_1.Rowcount()
    inv_userobject[i] = create n_userobject
    ...
NEXT
dw_1.Rowcount() returns only 210 rows. It's odd that somewhere in the range of 170 and up, the application stops and crashes on inv_userobject[i] = create n_userobject.
My question: is there any limit on arrays or on declaring user objects in arrays?
I already tried destroying them after the loop to check whether that would be a possible solution, but it still crashes.
Or how can I somehow refresh the user object?
Or has anyone out there encountered this?
Thanks for all your help.
First, your memory problem. You're definitely not running into an array limit. If I were to take a guess, one of the instance variables in n_userobject isn't being cleaned up properly (i.e. it points to a class that isn't being destroyed when the parent class is destroyed, or to a class that similarly doesn't clean itself up). If you've got PB Enterprise, I'd do a profiling trace with a smaller loop and see what is being garbage collected (there's a utility called CDMatch that really helps with this process).
Secondly, let's face it, you're just doing this to avoid writing a reset method. Even if you get this functional, it will never be as efficient as writing your own reset method and reusing the same instance over again. Yes, it's another method you'll have to maintain whenever the instance variable list changes or the defaults change, but you'll easily gain that back in performance.
Good luck,
Terry.
I'm assuming the crash you're facing is at the PBVM level, and not a regular PB exception (which you can catch in your code). If I'm wrong, please add the exception details.
A loop of 170-210 iterations really isn't a large one. However, crashes within loops are usually the result of resource exhaustion. What we usually do in long loops is call GarbageCollect() occasionally. How often it should be called depends on what your code does: calling it frequently allows less memory to be used, but slows down the run. Read this for more.
If this doesn't help, make sure the error does not come from some non-PB code (imported DLL or so). You can check the stack trace during the crash to see the exception's origin.
Lastly, if you're supported by Sybase (or a local representative), you can send them a crash dump. They can analyze it, and see if it's a bug in PB, and if so, let you know when it was (or will be) fixed.
What I would normally do with a DataWindow is to create an object that processes the data in a row and call it for each row.
The only suggestion I have is to take the Rowcount() out of the FOR statement (For i = 1 to dw_1.Rowcount()), since this causes the code to recount the rows every time through the loop. Get the count into a variable first and then use the variable. It should run a bit better and be far easier to debug.

How do I avoid a memory leak with LINQ-To-SQL?

I have been having some issues with LINQ-To-SQL around memory usage. I'm using it in a Windows Service to do some processing, and I'm looping through a large amount of data that I'm pulling back from the context. Yes - I know I could do this with a stored procedure but there are reasons why that would be a less than ideal solution.
Anyway, what I see basically is that memory is not released even after I call context.SubmitChanges(). So I end up having to do all sorts of weird things, like only pulling back 100 records at a time, or creating several contexts and having them all do separate tasks. If I keep the same DataContext and use it later for other calls, it just eats up more and more memory. Even if I call Clear() on the "var tableRows" array that the query returns to me, set it to null, and call System.GC.Collect() - it still doesn't release the memory.
Now I've read some about how you should use DataContexts quickly and dispose of them quickly, but it seems like there ought to be a way to force the context to dump all its data (or all its tracking data for a particular table) at a certain point, to guarantee the memory is free.
Anyone know what steps guarantee that the memory is released?
A DataContext tracks all the objects it ever fetched. It won't release this until it is garbage collected. Also, as it implements IDisposable, you must call Dispose or use the using statement.
This is the right way to go:
using(DataContext myDC = new DataContext())
{
    // Do stuff
} // DataContext is disposed
If you don't need object tracking, set DataContext.ObjectTrackingEnabled to false. If you do need it, you can use reflection to call the internal DataContext.ClearCache(), although you have to be aware that since it's internal, it's subject to disappear in a future version of the framework. And as far as I can tell, the framework itself doesn't use it, but it does clear the object cache.
As Amy points out, you should dispose of the DataContext using a using block.
It seems that your primary concern is about creating and disposing of a bunch of DataContext objects. This is how LINQ to SQL is designed: the DataContext is meant to have a short lifetime. Since you are pulling a lot of data from the database, it makes sense that there will be a lot of memory usage. You are on the right track by processing your data in chunks.
Don't be afraid of creating a ton of DataContexts. They are designed to be used that way.
Thanks guys - I will check out the ClearCache method. Just for clarification (for future readers), the situation in which I was seeing the memory usage was something like this:
using(DataContext context = new DataContext())
{
    int skipAmount = 0;
    while(true)
    {
        var rows = context.tables.Where(x => x.Dept == "Dept")
                                 .Skip(skipAmount).Take(100).ToList();

        //break out of the loop when out of rows
        if(rows.Count == 0)
            break;

        foreach(table t in rows)
        {
            //make changes to t
        }
        context.SubmitChanges();
        skipAmount += rows.Count;

        rows.Clear();
        rows = null;
        //At this point, even though the rows have been cleared and changes have been
        //submitted, the context is still holding onto a reference somewhere to the
        //processed rows. So unless you create a new context, memory usage keeps on growing.
    }
}
I just ran into a similar problem. In my case, setting DataContext.ObjectTrackingEnabled to false helped.
But it only works when iterating through the rows, as follows:
using (var db = new DataContext())
{
    db.ObjectTrackingEnabled = false;
    var documents = from d in db.GetTable<T>()
                    select d;
    foreach (var doc in documents)
    {
        ...
    }
}
If, for example, you use ToArray() or ToList() on the query, it has no effect.
