LabVIEW: How to exchange lots of variables between loops?

I have two loops:
One loop gets data from a device and processes it: it scales the received values and calculates additional data.
Second loop visualizes the data and stores it.
There are lots of different variables that need to be passed between those two loops - about 50 variables. I need the second loop to have access only to the newest values of the data. It needs to be able to read those variables any time they need to be visualized.
What is the best way to share such a vector of values between two loops?

There are various ways of sharing data.
The fastest and simplest is a local variable; however, that is rather uncontrolled, and you need to make sure they are written in only one place (plus you need an indicator).
One of the most advanced options is to create a class for your data, use an instance (which only matters if you create a by-ref class; otherwise it won't make a difference), and create a public 'GET' method.
In between you have several other options:
queues
semaphores
property nodes
global variables
shared variables
notifiers
events
TCP/IP
In short there is no best way, it all depends on your skills and application.

As long as you're considering loops within the SAME application, there ARE good and bad ideas, though:
queues (OK, has most features)
notifiers (OK)
events (OK)
FGVs (OK, but keep an eye on massively parallel access hindering execution)
semaphores (not a data communication mechanism)
property nodes (very inefficient, prone to race conditions)
global variables (prone to race conditions)
shared variables (badly implemented by NI, prone to race conditions)
TCP/IP (slow, awkward, affected by firewall configuration)

The quick and dirty way to do this is to write each value to an indicator in the producer loop - these indicators can be hidden offscreen, or on a page of a tab control, if you don't want to see them - and read a local variable of each one in the consumer loop. However, if you have 50 different values, it may become hard to maintain this code if you need to change or extend it.
As Ton says there are many different options but my suggestion would be:
Create a cluster control, with named elements, containing all your data
Save this cluster as a typedef
Create a notifier using this cluster as the data type
Bundle the data into the cluster (by name) and write this to the notifier in the producer loop
Read the cluster from the notifier in the consumer loop, unbundle it by name and do what you want with each element.
Using a cluster means you can easily pass it to different subVIs to process different elements if you like, and saving as a typedef means you can add, rename or alter the elements and your code will update to match. In your consumer loop you can use the timeout setting of the notifier read to control the loop timing, if you want. You can also use the notifier to tell the loops when to exit, by force-destroying it and trapping the error.
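LabVIEW diagrams can't be quoted as text here, but as a rough analogue of the cluster-plus-notifier idea, here is a hypothetical C++17 sketch in which the struct plays the role of the typedef'd cluster and the class plays the role of a notifier that only ever holds the newest value (all names are made up for illustration):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <optional>

// The "cluster" analogue: one named field per value you want to share.
struct DataCluster {
    double temperature = 0.0;
    double pressure    = 0.0;
    // ... the rest of your ~50 values ...
};

// The "notifier" analogue: keeps only the newest cluster (lossy by design)
// and lets the reader wait with a timeout, like a notifier read.
class LatestValue {
public:
    void write(const DataCluster& d) {
        {
            std::lock_guard<std::mutex> g(m_);
            latest_ = d;                     // overwrite: only the newest survives
        }
        cv_.notify_all();
    }

    // Returns the newest value, or nothing if none arrives before the timeout.
    std::optional<DataCluster> read(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait_for(lk, timeout, [&] { return latest_.has_value(); });
        return latest_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::optional<DataCluster> latest_;
};

The producer bundles its values into a DataCluster and calls write(); the consumer calls read() with a timeout, just like reading the notifier with a timeout as described above.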

Two ways:
Use a display loop with an SEQ (Single Element Queue).
Use an event structure with a User Event. (Do not put two event structures in the same loop!! Use a separate one.)
With either of them, use an enum with a case structure, and a variant to cast the data to the expected type.
(A notifier isn't reliable for streaming data, because it is a lossy scheme. Leave it only to trigger small actions.)

If all of your variables can be bundled together in a single cluster to send at once, then you should use a single element queue. If your requirements change later such that the transmission cannot be lossy, then it's a matter of changing the input to the Obtain Queue VI (with a notifier you'd have to swap out all of the VIs). Setting up individual indicators and local variables would be pretty darn tedious. Also, not good style.

If the loops are inside of the same VI then:
The simplest solution would be local variables.
A little bit better is to use shared variables.
Better is to use functional global variables (FGVs).
The best solution would be using an SEQ (Single Element Queue).
Anyway, for a better understanding, please go through this paper.

Related

Flink trigger on a custom window

I'm trying to evaluate Apache Flink for the use case we're currently running in production using custom code.
So let's say there's a stream of events, each containing a specific attribute X which is a continuously increasing integer. That is, a batch of contiguous events has this attribute set to N, the next batch has it set to N+1, and so on.
I want to break the stream into windows of events with the same value of X and then do some computations on each separately.
So I define a GlobalWindow and a custom Trigger where, in the onElement method, I check the attribute of each element against the saved value of the current X (kept in a state variable); if they differ, I conclude that we've accumulated all the events with X=CURRENT, and it's time to do the computation and increase the X value in the state.
The problem with this approach is that the element from the next logical batch (with X=CURRENT+1) has already been consumed, but it's not part of the previous batch.
Is there a way to somehow put it back into the stream so that it is properly accounted for in the next batch?
Or maybe my approach is entirely wrong and there's an easier way to achieve what I need?
Thank you.
I think you are on the right track.
A Trigger specifies when a window can be processed and when the results for a window can be emitted.
The WindowAssigner is the part that decides which window an element will be assigned to. So I would say you also need to provide a custom WindowAssigner implementation that assigns the same window to all elements with an equal value of X.
A more idiomatic way to do this with Flink would be to use stream.keyBy(X).window(...). The keyBy(X) takes care of grouping elements by their particular value for X. You then apply any sort of window you like. In your case a SessionWindow may be a good choice. It will fire for each key after that key hasn't been seen for some configurable period of time.
This approach will be much more robust with regard to unordered data which you must always assume in a stream processing system.

Pthread: How exactly does rwlockattr work?

I've got a question about rwlocks, especially about rwlockattr.
I've got a linked list that several threads are working with. Every member of this list has got its own rwlock. So now I want to set up a rule to ensure that threads that want to acquire a write lock have a higher priority. My intention is to use
int pthread_rwlockattr_setkind_np(pthread_rwlockattr_t *attr,int pref);
So now my question: do I need to initialize a rwlockattr for every rwlock in my linked list, or is it enough to set up one global rwlockattr, initialize it and set the PTHREAD_RWLOCK_PREFER_WRITER_NP rule on it?
regards
Every rwlock has some default attributes associated with it. For pthread_rwlock_init(), go through this link, which will give you more information on how to use a rwlock.
You can assign a single attribute object to your rwlocks: create one attribute globally, initialize it, and assign it to all the rwlocks that should have the same nature.
Go through this to understand the use of pthread_rwlock.
In general, attributes decide the nature of your rwlock.
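A minimal sketch of the "one global attribute, many locks" approach (GNU/glibc-specific, since the *_np calls are non-portable; names such as init_lock_attr and struct node are purely illustrative):

#define _GNU_SOURCE              /* for pthread_rwlockattr_setkind_np */
#include <pthread.h>

static pthread_rwlockattr_t list_rwlock_attr;   /* one attribute object, shared */

void init_lock_attr(void)
{
    pthread_rwlockattr_init(&list_rwlock_attr);
    /* Note: on glibc, PTHREAD_RWLOCK_PREFER_WRITER_NP currently behaves like the
       default (reader preference); PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP
       is the kind that actually favours writers, as long as readers never lock
       recursively. */
    pthread_rwlockattr_setkind_np(&list_rwlock_attr,
                                  PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
}

struct node {
    pthread_rwlock_t lock;
    struct node     *next;
    /* ... payload ... */
};

void init_node(struct node *n)
{
    /* The lock copies the attribute's settings at init time, so the same attr can
       be reused for every pthread_rwlock_init() call (and even destroyed later). */
    pthread_rwlock_init(&n->lock, &list_rwlock_attr);
}

So: one globally initialized attribute object is enough; each rwlock only reads it while being initialized.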

Is it bad programming practice to store objects of type Foo into a static array of type Foo belonging to Foo in their construction?

Say I wanted to store objects statically inside their own class. Like this:
public class Foo
{
    private static int instance_id = 0;
    public static List<Foo> instances = new List<Foo>();

    public Foo()
    {
        // Indexing an empty List would throw, so append instead; instance_id
        // ends up equal to this object's index, i.e. its creation-order id.
        instances.Add(this);
        instance_id++;
    }
}
Why?
I don't need to create unique array structures outside the class (one will do).
I want to map each object to a unique id according to their time of birth.
I will only have one thread using the class. Foo will only exist as one set in the program.
I did some searching, but could find no mention of this data structure. Is this bad practice? If so, why? Thank you.
{please note, this question is not specific to any language}
There are a couple of potential problems I can see with this setup.
First, since you only have a single array of objects, if you need to update the code so that you have lots of different groups of objects in different contexts, you'll need to do a significant rewrite so that each object ends up getting associated with a different context. Depending on your setup this may not be a problem, but I suspect that in the long term this decision may come back to haunt you.
Second, this approach assumes that you never need to dispose of any objects. Imagine that you want to update your code so that you do a number of different simulations and aggregate the results. If you do this, then you'll end up having your giant array storing pointers to objects you're not using. This means that you'll (1) have a memory leak and (2) have to update all your looping code to skip over objects you no longer care about.
Third, this approach makes it the responsibility of the class, rather than the client, to keep track of all the instances. In some sense, if the purpose of what you're doing is to make it easier for clients to have access to a global list of all the objects that exist, you may want to consider just putting a different list somewhere else that's globally accessible so that the objects themselves aren't the ones responsible for keeping track of themselves.
I would recommend using one of a number of alternate approaches:
Just have the client do this. If the client needs to keep track of all the instances, just have them always create the array they need and populate it. That way, if multiple clients need different arrays, they can do so. You also avoid the memory leak issues if you do this properly.
Have each object take, as part of its constructor, a context in which to be constructed. For example, if all of these objects are nodes in a quadtree, have them take a pointer to the quadtree in which they'll live as a constructor parameter, then have the quadtree object store the list of the nodes in it. After all, it seems like it's really the quadtree's responsibility to keep track of everything.
Keep doing what you're doing, but using something with weak references. For example, you might consider using some variation on a WeakHashMap so that you do store everything, but if the objects are no longer needed, you at least don't have a memory leak.
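As a hedged illustration of the third option (WeakHashMap is a Java class; since the question isn't tied to a language, here is a rough C++ analogue using std::weak_ptr, with made-up names such as create and for_each_alive):

#include <algorithm>
#include <memory>
#include <vector>

class Foo {
public:
    // The registry holds weak references, so it never keeps an instance alive.
    static std::vector<std::weak_ptr<Foo>> instances;

    static std::shared_ptr<Foo> create() {
        std::shared_ptr<Foo> p(new Foo());
        instances.push_back(p);              // position == creation order
        return p;
    }

    // Visit only the objects that still exist.
    template <typename F>
    static void for_each_alive(F f) {
        for (auto& w : instances)
            if (auto p = w.lock()) f(*p);
    }

    // Optional housekeeping: drop entries whose objects have been destroyed.
    static void purge_dead() {
        instances.erase(std::remove_if(instances.begin(), instances.end(),
                            [](const std::weak_ptr<Foo>& w) { return w.expired(); }),
                        instances.end());
    }

private:
    Foo() = default;                         // force creation through create()
};

std::vector<std::weak_ptr<Foo>> Foo::instances;

The point is that the registry no longer owns the objects, so releasing the last owning reference elsewhere really does free them, and the memory-leak problem from the second objection goes away.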

NSRecursivelock v NSNotification performance

So I was using an array to hold objects, then iterating over them to call a particular method. As the objects in this array could be enumerated/mutated on different threads I was using an NSRecursiveLock.
I then realised that as this method is always called on every object in this array, I could just use an NSNotification to trigger it, and do away with the array and lock.
I am just double checking that this is a good idea? NSNotification will be faster than a lock, right?
Thanks
First, it seems that the locking should be done on the objects, not on the array. You are not mutating the array, but the objects. So you need one mutex per object. It will give you finer granularity and allow concurrent updates to different objects to proceed in parallel (they wouldn't with a global lock).
Second, a recursive lock is complete overkill. If you wish to have mutual exclusion on the objects, each object should have a standard mutex. If what you are doing inside the critical section is CPU-bound and really short, you might consider using a spinlock (it won't even trap into the OS; only use it for short, CPU-bound critical sections though). Recursive mutexes are meant to be used when a thread can (because of its logic) acquire another lock on an object it has already locked itself (hence the 'recursive' name).
Third, NSNotifications (https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSNotification_Class/) will allocate memory, drop the notification into a notification center, do locking to actually implement that (the adding to the center), and finally dispatch the notifications in the center and deallocate the notification. So it is heavier than plain simple locking. It "hides" the synchronization APIs inside the center, but it does not eliminate them, and you pay the price of memory allocations/deallocations.
If you wish to modify the array (add/remove), then you should also synchronize on these operations, but that would be unfortunate as access to independent entries of the array would now collide.
Hope that helps.
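To illustrate the first point, a minimal sketch of per-object locking (written in C++ rather than Objective-C purely for brevity; Item and update are made-up names):

#include <mutex>

// Each object carries its own mutex: threads updating different objects never
// contend, while two threads updating the same object still serialize.
struct Item {
    std::mutex m;                 // a plain (non-recursive) mutex is enough here
    int value = 0;

    void update(int delta) {
        std::lock_guard<std::mutex> g(m);   // critical section scoped to this object
        value += delta;
    }
};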

Create a non-blocking timer to erase data

Can someone show me how to create a non-blocking timer to delete the data of a struct?
I have this struct:
struct info{
    char buf;
    int expire;
};
Now, when the expire time is reached, I need to delete the data in my struct. The fact is that, at the same time, my program is doing something else. So how can I do this, preferably while avoiding the use of signals?
It won't work. The time it takes to delete the structure is most likely much less than the time it would take to arrange for the structure to be deleted later. The reason is that in order to delete the structure later, some structure has to be created to hold the information needed to find the structure later when we get around to deleting it. And then that structure itself will eventually need to be freed. For a task so small, it's not worth the overhead of dispatching.
In a different case, where the deletion is really complicated, it may be worth it. For example, if the structure contains lists or maps with numerous sub-elements that must be traversed to destroy each one, then it might be worth dispatching a thread to do the deletion.
The details vary depending on what platform and threading standard you're using. But the basic idea is that somewhere you have a function that causes a thread to be tasked with running a particular chunk of code.
Update: Hmm, wait, a timer? If code is not going to access it, why not delete it now? And if code is going to access it, why are you setting the timer now? Something's fishy with your question. Don't even think of arranging to have anything deleted until everything is 100% finished with it.
If you don't want to use signals, you're going to need threads of some kind. Any more specific answer will depend on what operating system and toolchain you're using.
I think the idea is to have a timer, as in client-server logic: when a timer expires, you need to delete the entries whose time has expired.
If so, then it can be implemented in a couple of ways.
a) Single-threaded: you create a queue sorted by expiry time (based on interval - now), so that the entry with the shortest span receives its callback first. You can implement the timer queue using a map in C++. Whenever your main work is done, just call the timer function to check whether any expired request is in the queue; if there is, it deletes that data. The prototypes might look like set_timer(void (*pf)(void)); and add_timer(void *context, long time_to_expire); to add a timer.
b) Multi-threaded: the add_timer logic is the same; it accesses the global map and adds the entry after taking a lock. A dedicated timer thread sleeps (on a condition variable) for the shortest time in the map. Meanwhile, if anything is added to the timer queue, it gets a notification from the thread which adds the data. The reason it needs to sleep on a condition variable, rather than a plain sleep, is that a newly added timer might have a shorter interval than the existing minimum.
So suppose the first call was for 5 seconds from now, and a second timer is then added for 3 seconds from now. If the timer thread only sleeps, and not on a condition variable, it will wake up after 5 seconds, whereas it is expected to wake up after 3 seconds.
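A rough sketch of option (b), assuming C++11 (a std::multimap keyed by deadline plus a condition variable; the TimerQueue class and its member names are illustrative, not a definitive implementation):

#include <chrono>
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>

// A dedicated thread sleeps on a condition variable until the earliest deadline,
// or until a new (possibly earlier) timer is added.
class TimerQueue {
public:
    using Clock = std::chrono::steady_clock;

    void add_timer(std::function<void()> callback, std::chrono::milliseconds delay) {
        std::lock_guard<std::mutex> g(m_);
        timers_.emplace(Clock::now() + delay, std::move(callback));
        cv_.notify_one();                    // a shorter deadline may have arrived
    }

    void run() {                             // call this from the timer thread
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            if (timers_.empty()) {
                cv_.wait(lk);                // nothing scheduled: wait for add/stop
            } else if (cv_.wait_until(lk, timers_.begin()->first) ==
                       std::cv_status::timeout) {
                // Deadline reached: run every callback that is now due.
                while (!timers_.empty() && timers_.begin()->first <= Clock::now()) {
                    auto cb = std::move(timers_.begin()->second);
                    timers_.erase(timers_.begin());
                    lk.unlock();             // don't hold the lock inside user code
                    cb();
                    lk.lock();
                }
            }
            // Otherwise: spurious wakeup or a new timer; loop and recompute.
        }
    }

    void stop() {
        std::lock_guard<std::mutex> g(m_);
        stop_ = true;
        cv_.notify_one();
    }

private:
    std::multimap<Clock::time_point, std::function<void()>> timers_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
};

The callbacks would free the expired struct info entries; the main program keeps running because only the dedicated timer thread ever blocks.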
Hope this clarifies your question.
Cheers,
