I need to store an event data dump as a custom variable in Piwik. However, I realized that Piwik limits custom variable values to a maximum of 200 characters.
Is there any way to change this, or is there a workaround?
Thanks
It is recommended to use Custom Dimensions instead of Custom Variables, as the latter will be deprecated at some point in the future.
https://matomo.org/faq/general/faq_21117/
Unfortunately, Custom Dimensions also have a maximum length, although at 255 characters it is a bit longer.
Maybe at some point in the future it will be possible to save Custom Dimensions of unlimited length:
https://github.com/matomo-org/plugin-CustomDimensions/issues/79
https://github.com/matomo-org/matomo/issues/12348
I have an array of indefinite length to which the user can add components and from which they can remove them. To reflect those changes in the backend I must perform AJAX requests.
My question is: what would be the most efficient way to reflect those changes?
Currently I have two approaches in mind:
1.) Find the differences between the two arrays, then address them by adding the missing elements and removing the extra ones one by one.
2.) Use the remove-all service to delete every item in the backend array with a single request, then add whatever is in the new array one by one.
At first glance, approach 1 seems better, particularly for small changes. However, if the user removes 80 or so elements from the array and only adds 2 more, approach 2 outclasses approach 1.
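For concreteness, here is a minimal sketch of the diff step in approach 1. The logic itself is language-agnostic; this version is in Java, and addItem/removeItem are hypothetical wrappers that stand in for the actual AJAX calls:

```java
import java.util.HashSet;
import java.util.Set;

public class ArrayDiff {

    // Works out which IDs must be added and which must be removed to turn
    // the backend state (oldIds) into the current frontend state (newIds).
    public static void sync(Set<String> oldIds, Set<String> newIds) {
        Set<String> toAdd = new HashSet<>(newIds);
        toAdd.removeAll(oldIds);                 // in the new array but not yet in the backend

        Set<String> toRemove = new HashSet<>(oldIds);
        toRemove.removeAll(newIds);              // in the backend but removed by the user

        for (String id : toAdd) {
            addItem(id);                         // one request per missing element
        }
        for (String id : toRemove) {
            removeItem(id);                      // one request per extra element
        }
    }

    // Hypothetical wrappers around the real AJAX endpoints.
    static void addItem(String id)    { /* POST the new element */ }
    static void removeItem(String id) { /* DELETE the element */ }
}
```

With this, approach 1 costs one request per changed element, while approach 2 always costs one flush plus one request per element in the new array, so whichever count is smaller wins.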
Perhaps there is a better solution?
Thanks!
I'm trying to evaluate Apache Flink for the use case we're currently running in production using custom code.
So let's say there's a stream of events, each containing a specific attribute X which is a continuously increasing integer. That is, a bunch of contiguous events have this attribute set to N, then the next batch has it set to N+1, and so on.
I want to break the stream into windows of events with the same value of X and then do some computations on each separately.
So I define a GlobalWindow and a custom Trigger. In the onElement method I check the attribute of each incoming element against the saved value of the current X (kept in a state variable); if they differ, I conclude that we've accumulated all the events with X=CURRENT, so it's time to do the computation and increase the X value in the state.
The problem with this approach is that the element from the next logical batch (with X=CURRENT+1) has already been consumed, but it's not part of the previous batch.
Is there a way to somehow put it back into the stream so that it is properly counted toward the next batch?
Or maybe my approach is entirely wrong and there's an easier way to achieve what I need?
Thank you.
I think you are on the right track.
A Trigger specifies when a window can be processed and its results emitted.
The WindowAssigner is the part that decides which window an element is assigned to. So I would say you also need to provide a custom implementation of WindowAssigner that assigns the same window to all elements with an equal value of X.
A more idiomatic way to do this with Flink would be to use stream.keyBy(X).window(...). The keyBy(X) takes care of grouping elements by their particular value of X. You then apply any sort of window you like; in your case a session window may be a good choice, as it fires for each key after that key hasn't been seen for a configurable period of time.
This approach will be much more robust with regard to unordered data, which you must always assume in a stream processing system.
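A minimal sketch of that idea, assuming a simple Event POJO with a numeric field x and a 30-second processing-time session gap (both the type and the gap are stand-ins for your real data; with event time and watermarks you would use EventTimeSessionWindows instead):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class BatchByXJob {

    // Hypothetical event type: x is the continuously increasing batch counter.
    public static class Event {
        public long x;
        public String payload;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Event> events = env.fromElements(new Event()); // stand-in for your real source

        events
            .keyBy(e -> e.x)                                                  // group by the batch counter X
            .window(ProcessingTimeSessionWindows.withGap(Time.seconds(30)))   // close the batch after 30s of silence for that X
            .process(new ProcessWindowFunction<Event, String, Long, TimeWindow>() {
                @Override
                public void process(Long x, Context ctx, Iterable<Event> batch, Collector<String> out) {
                    int count = 0;
                    for (Event e : batch) {
                        count++;                                              // your real computation goes here
                    }
                    out.collect("X=" + x + " contained " + count + " events");
                }
            });

        env.execute("group-by-x");
    }
}
```

Each distinct value of X becomes its own key, so the window for X=N closes and the computation runs once no more events with that X arrive within the gap, even though events for X=N+1 have already started flowing.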
I'm trying to accomplish something that is a bit out of the ordinary. I have a regular <input type="number"> field with a designated min and max value where the user can input a year. When initially hitting the up or down arrow on the input field, the value starts at the min attribute's value. What I'm trying to accomplish is to get it to start at the max attribute's value.
Are there any straightforward ways to accomplish this using either HTML or AngularJS? I could write some JavaScript to do it, but I would prefer to avoid that. Note that I don't want to default the field to that value on load, only when interacting with the control.
Note that the min and max values are determined at run-time from a table, so they can vary.
Is there a way to create a fixed-size array in LabVIEW?
I know that I can check the array size and discard values once it grows beyond a specific limit. But I think this is a common problem, so is there a built-in function in LabVIEW for a fixed-size array?
As far as I know this is impossible, unless they changed something in one of their latest releases, but I doubt it: it would probably require a serious rewrite of the core array code.
The closest you can get is writing your own (possibly polymorphic) array class in which you encapsulate an actual array that you initialize once with a certain size. Beyond that, your class only exposes methods to get/set by index; no resize, etc.
Or, if you are talking about arrays of controls and the like on the front panel, you can probably do this at the UI level by hiding the indexing control and making sure the array cannot be resized graphically. It's probably also doable to create a custom control and strip a lot of the array functionality from it.
If the array size is fixed at design time, then you might consider using a cluster instead. There is even a primitive to convert an array to a cluster of fixed size, provided the length is less than 257 (the Array To Cluster function).
There is also a primitive to go the other way if you need to index the array.
One implementation you could do is a queue with a fixed size. You can use preview queue and flush queue to implement the functionality you want. However, a dedicated custom class is probably a better idea.
In regular desktop LabVIEW, fixed-sized arrays would be something you'd have to code as per the answers you've already gotten here. However, in LabVIEW FPGA with, say, cRIO, all arrays must be fixed-size.
When calling the Call Library Function Node to a WINAPI DLL, there are times when a structure element may officially be defined as BYTE[130]. So how do you absolutely, positively make sure your cluster has exactly the space for 130 bytes?
You can't do it with arrays no matter what, because LabVIEW arrays are pointers to a structure (the first element being the length), meaning any array you insert will only allocate enough space for a pointer, 4 bytes.
The work-around I came up with is to insert a cluster that includes sixteen U64s and one U16; pass that through Flatten To String and you'll find it's exactly 130 bytes long.
When the cluster returns from the call, simply type cast the flattened string result into a U8 array.
I'm working on a hot-upgrade feature and need to package up an array of structs to be stashed away for the new version to find. I really want to avoid adding a conversion function for every possible version transition. Is this reasonable?
The most likely change to the struct is for more fields to be added in the future, and if this happens a default value for the new field will be available. I will also soon face the task of saving the array of structs into a configuration file, so extra credit for answers that can be applied to both hot-upgrade and configuration saving.
I don't have to worry about the hot-update mechanism; I just give it a pointer and a size, and it does fantastic magic :)
From version 1, always include sizeof(myStruct) as a field at the beginning of each struct. Then, when you need to add new fields, always do so at the end of the struct, never in the middle. Now, when receiving (or reading from a file), first read the size field only, so that you know how many bytes will follow it. If the size is less than sizeof(myStruct) as determined by the receiver/reader, then you know that something is missing and default values are needed.
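A sketch of the same length-prefix idea, written here in Java (which has no sizeof, so the size is spelled out explicitly); the field names, the 4-byte size header, and the default value of 42 are all made up for illustration:

```java
import java.nio.ByteBuffer;

public class VersionedRecord {
    // v1 fields
    int fieldA;
    int fieldB;
    // v2 field, appended at the end; never reorder or insert in the middle
    int fieldC = 42;                         // hypothetical default used when reading old records

    // Header (4-byte size) plus every field this version of the code knows about.
    static final int SIZE = 4 + 3 * 4;

    byte[] write() {
        ByteBuffer buf = ByteBuffer.allocate(SIZE);
        buf.putInt(SIZE);                    // the "sizeof(myStruct)" equivalent, always written first
        buf.putInt(fieldA);
        buf.putInt(fieldB);
        buf.putInt(fieldC);                  // new fields only ever go after the old ones
        return buf.array();
    }

    static VersionedRecord read(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int storedSize = buf.getInt();       // read the size field first
        VersionedRecord r = new VersionedRecord();  // starts out with default values
        r.fieldA = buf.getInt();
        r.fieldB = buf.getInt();
        if (storedSize >= SIZE) {            // the writer already knew about fieldC
            r.fieldC = buf.getInt();
        }                                    // otherwise keep the default
        return r;                            // any extra bytes from an even newer writer are ignored
    }
}
```

A record written by the v1 code is 12 bytes, so storedSize is less than SIZE and the reader keeps the default for fieldC; the same byte array can be dumped into your configuration file as-is, which covers both the hot-upgrade and the config-saving case.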
I'd recommend using something like Google's protocol buffers, which automatically handle versioning. If you add new fields to your messages, it's very easy to handle.