How can I make 2 Apsalar segments that do not overlap?

So I am running into an issue where I need to divide my users into segments in Apsalar, but it's based on a single event.
The event only happens when running in debug (so testers), and while I can still make a segment for them in Apsalar, I run into the issue that testers' events are mixed in with players'.
The end result I want is that tester data and player data are separate and I can keep track of both.
I can do this with Flurry because they allow me to segment based on NOT having an event, but I was wondering if there is a way to do something similar with Apsalar?

To accomplish your goal of separating tester data from player data, I recommend creating a separate application from within the Apsalar dashboard.
Doing so will allow you to divide your events, segments, cohorts, and other application data into two clear buckets on Apsalar's platform. I believe it is better to ensure you are not mixing the data you report and analyze in Apsalar than to create a segment based on the absence of an event. If you create multiple segments based on NOT having an event, it will quickly become difficult to tell which segments serve your testing purposes and which serve production.
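Since you mention the event only fires in debug builds, the split between the two Apsalar apps can be automatic at compile time. A rough C# sketch; note that Apsalar.StartSession and the key/secret names are hypothetical stand-ins for whatever your SDK's actual session call is, with each key pair coming from one of the two apps you created in the dashboard:

#if DEBUG
    // hypothetical credentials for the tester app
    const string apsalarKey = "mygame-testers";
    const string apsalarSecret = "testerSecret";
#else
    // hypothetical credentials for the production app
    const string apsalarKey = "mygame";
    const string apsalarSecret = "prodSecret";
#endif

Apsalar.StartSession(apsalarKey, apsalarSecret); // hypothetical call

This way tester sessions never write into the player app's data at all.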
Let me know if this helps.
Image: http://ge.tt/4tJJSJR/v/0

Related

Efficient retrieval of lat-lon points that are within a square boundary

I have a react-native application that populates pins on a map that have been submitted by users. The front end gets the corners of the window and then the back end goes through each pin to check if it falls within the boundary, and returns the ones that do.
This is taking too long on the backend and I want to ask the community for ideas, because I doubt I have the best one.
My idea is to store tables of pins grouped by quadrants, effectively a cache, so that I can return the pins from the quadrants involved in almost constant time.
Is there a simpler way to do this?
Maybe using NoSQL?
🙏🏻
A month later: it seems geohashing is probably the best way to do this, and AWS has a library for handling it automatically with DynamoDB. It takes the corners of the screen (lat/lon) and returns the items from the DB that fall within the view, in what I assume is close to constant time, since that's the whole point of geohashing: performance that works at scale.
https://www.npmjs.com/package/dynamodb-geo
https://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/
Otherwise, a geohashing library built for serving mobile apps likely already exists.
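For intuition about why geohashing makes this fast: a geohash interleaves the bits of latitude and longitude and base32-encodes them, so nearby points share a common string prefix, which is exactly the kind of prefix/range query a key-value store like DynamoDB is good at. A minimal C# sketch of the encoding itself (roughly what libraries like dynamodb-geo do under the hood, before adding the query plumbing):

using System.Text;

static string GeohashEncode(double lat, double lon, int precision = 9)
{
    const string Base32 = "0123456789bcdefghjkmnpqrstuvwxyz"; // standard geohash alphabet
    double latLo = -90, latHi = 90, lonLo = -180, lonHi = 180;
    var hash = new StringBuilder();
    bool useLon = true; // geohash interleaves bits, longitude first
    int bit = 0, ch = 0;

    while (hash.Length < precision)
    {
        if (useLon)
        {
            double mid = (lonLo + lonHi) / 2;
            if (lon >= mid) { ch = (ch << 1) | 1; lonLo = mid; }
            else { ch <<= 1; lonHi = mid; }
        }
        else
        {
            double mid = (latLo + latHi) / 2;
            if (lat >= mid) { ch = (ch << 1) | 1; latLo = mid; }
            else { ch <<= 1; latHi = mid; }
        }
        useLon = !useLon;
        if (++bit == 5) { hash.Append(Base32[ch]); bit = 0; ch = 0; } // 5 bits per character
    }
    return hash.ToString();
}

Points inside a cell can then be found by querying for hashes that start with that cell's prefix; the shorter the prefix, the bigger the cell.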

Portlet event send array of objects

We have multiple projects with multiple portlets and need to send an array of objects between them.
Our situation:
One of the portlets is like a "Master-portlet": it will be responsible for all the REST calls and will consume the JSON data and parse it into Java objects.
All the other portlets will receive an array of objects and show them to the user.
Our thoughts and solution:
We wanted to implement this by sending arrays of objects through events. One of the "smaller" portlets will send an event to the "Master-portlet", and the "Master-portlet" will then answer with a new event carrying the right array of objects.
Our problem:
We don't know how to send arrays of objects through events. Is this even possible?
Also, we are not sure if this is the right way to solve this. Are events meant to send larger amounts of data?
Is there a better solution for our case? Maybe it would be better to implement a database and all the portlets get the information from there?
Consider portlet events (and portlets) the UI layer of your application. Based on this, judge if the amount of data that you send back and forth makes sense or not. Also, if you closely couple the portlets, you're just hiding the fact that they can only function together - at least a questionable idea. You rather want them to react to common circumstances (events), but not rely on a specific source of events (master portlet) being available.
That being said: the more complex the data you send as the payload of a JSR-286 event, the more likely you are to run into classloading problems when your portlets live in different web applications. If you restrict yourself to Java native types (e.g. String, Map, etc.) you will avoid problems with the classloader.
Typically you want to communicate changes to the current context (e.g. new "current customer" selected - and an identifier) but not all of the particular data (e.g. the new customer's name and order history). The rest of the data typically comes through the business layer anyway.
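To make the "identifier, not object graph" point concrete, here is a language-agnostic sketch (written in C# for brevity; the actual JSR-286 API is of course Java, and all names here are hypothetical): the event carries only a simple value, and each consumer loads whatever it needs through the business layer.

using System;
using System.Collections.Generic;

public static class EventBroker
{
    private static readonly Dictionary<string, List<Action<string>>> handlers =
        new Dictionary<string, List<Action<string>>>();

    public static void Subscribe(string eventName, Action<string> handler)
    {
        if (!handlers.ContainsKey(eventName)) handlers[eventName] = new List<Action<string>>();
        handlers[eventName].Add(handler);
    }

    public static void Publish(string eventName, string payload)
    {
        if (!handlers.ContainsKey(eventName)) return;
        foreach (var handler in handlers[eventName]) handler(payload);
    }
}

// A consumer reacts to the event without caring which portlet raised it:
// EventBroker.Subscribe("currentCustomerChanged",
//     id => customerView.Show(customerService.Load(id)));
// Any publisher, not just a master portlet, can announce the change:
// EventBroker.Publish("currentCustomerChanged", "customer-4711");

The payload stays a plain string, so there is nothing for a classloader (or a serializer) to trip over.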
That's not to say that you absolutely must not couple your portlets - just that my preference is to rather have them very loosely coupled, so that I can add individual small portlets that replace those that I thought of yesterday.
If you have some time: I covered a bit of this in a webinar last year; I hope it adds some clarification where I was too vague in this quick answer.

Keeping repository synced with multiple clients

I have a WPF application that uses entity framework. I am going to be implementing a repository pattern to make interactions with EF simple and more testable. Multiple clients can use this application and connect to the same database and do CRUD operations. I am trying to think of a way to synchronize clients repositories when one makes a change to the database. Could anyone give me some direction on how one would solve this type of issue, and some possible patterns that would be beneficial for this type of problem?
I would be very open to any information/books on how to keep clients synchronized, and even on how to alert clients to what other clients are doing (the only thing I could think of was having a server process running that passes messages around). Thank you
The easiest way by far to keep every client UI up to date is simply to refresh the data every so often. If it's really that important, you can set a DispatcherTimer to tick every minute, at which point you fetch the latest version of the data being displayed.
Clearly, I'm not suggesting that you refresh an item that is being edited, but if you get the fresh data, you can certainly compare collections with what's being displayed currently. Rather than just replacing the old collection items with the new, you can be more user friendly and just add the new ones, remove the deleted ones and update the newer ones.
You could even detect whether an item being currently edited has been saved by another user since the current user opened it and alert them to the fact. So rather than concentrating on some system to track all data changes, you should put your effort into being able to detect changes between two sets of data and then seamlessly integrating it into the current UI state.
UPDATE >>>
There is absolutely no benefit to holding a complete set of data in your application (or repository). In fact, you may well find it detrimental, due to the extra RAM required. If you are polling data every few minutes, it will always be up to date anyway.
So rather than asking for all of the data all of the time, just ask for what the user wants to see (dependent on which view they are currently in) and update it every now and then. I do this by simply fetching the same data that the view requires when it is first opened. I wrote some methods that compare every property of every item with its older counterpart in the UI and switch old for new.
Think of the Equals method... You could do something like this:
// An overload rather than an override: object.Equals takes an object
// parameter, so 'override' on this signature would not compile.
public bool Equals(Release otherRelease)
{
    return otherRelease != null && Title == otherRelease.Title &&
        Artist.Equals(otherRelease.Artist) && Artists.Equals(otherRelease.Artists);
}
(Don't actually use the Equals method though, or you'll run into problems later). And then something like this:
if (!oldRelease.Equals(newRelease)) oldRelease.UpdatePropertyValues(newRelease);
And/Or this:
if (!oldReleases.Contains(newRelease)) oldReleases.Add(newRelease);
I'm guessing that you get the picture now.
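A minimal sketch of the polling-and-merge loop described above, assuming oldReleases is the ObservableCollection your view is bound to and that your repository exposes a fetch for the current view (both names are hypothetical):

using System;
using System.Linq;
using System.Windows.Threading;

// ...inside the view model:
var timer = new DispatcherTimer { Interval = TimeSpan.FromMinutes(1) };
timer.Tick += (sender, args) =>
{
    var fresh = repository.GetReleasesForCurrentView(); // hypothetical fetch

    foreach (var newRelease in fresh)
    {
        var oldRelease = oldReleases.FirstOrDefault(r => r.Id == newRelease.Id);
        if (oldRelease == null)
            oldReleases.Add(newRelease); // added by another client
        else if (!oldRelease.Equals(newRelease))
            oldRelease.UpdatePropertyValues(newRelease); // changed by another client
    }

    foreach (var removed in oldReleases.Where(o => fresh.All(f => f.Id != o.Id)).ToList())
        oldReleases.Remove(removed); // deleted by another client
};
timer.Start();

Because the merge happens item by item on the bound collection, the UI updates in place instead of flickering through a full reload.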

WPF and Active Objects

I have a collection of "active objects", that is, objects that need to periodically update themselves. In turn, these objects should be used to update a WPF-based GUI.
In the past I would just have each object include its own thread, but that only makes sense when working with a finite number of objects with well-defined life cycles. Now I'm using objects that only exist when needed by a form, so the life cycle is unpredictable. Also, I can have dozens of objects all making database and web service calls.
Under normal circumstances the update interval is 1 second, but it can take up to 30 seconds due to timeouts.
So, what design would you recommend?
You could use one dispatcher (scheduler) for all active objects, or one per group of them. The dispatcher can process high-priority tasks first and the other ones afterwards.
See this article about long-running active objects, which includes code, to find out how to do it. In addition, I recommend looking at the Half-Sync/Half-Async pattern.
If you have questions, you're welcome to ask.
I am not an expert, but I would just have the objects fire an event indicating when they've changed. The GUI can then refresh the necessary parts of itself (easy when using data binding and INotifyPropertyChanged) whenever it receives an event.
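A minimal sketch of that idea (the Quote class and its Price property are hypothetical): the active object raises PropertyChanged whenever its periodic update changes a value, and WPF data binding takes care of refreshing the GUI.

using System.ComponentModel;

public class Quote : INotifyPropertyChanged
{
    private decimal price;

    public decimal Price
    {
        get { return price; }
        set
        {
            if (price == value) return; // only signal real changes
            price = value;
            OnPropertyChanged(nameof(Price));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string name)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }
}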
I'd probably try to generalize out some sort of data bus, if possible, and when objects are 'active' have them add themselves to a list of objects to be updated. I'd especially be tempted to use this pattern if the objects are backed by a database, since that way you can aggregate multiple queries instead of running a separate query for each object.
If there end up being no listeners for a specific object, no big deal, the data just goes nowhere.
The core updater code can then use a single timer (or multiple, or whatever is appropriate) to determine when to get updates. Doing this as more of a dataflow, and less of a 'state update' will probably save a lot of sanity in the end.
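A rough sketch of that shape, with all names hypothetical: active objects register themselves, a single timer drives every refresh, and nothing cares whether anyone is listening. Since your fetches can take up to 30 seconds, the actual database/web service work inside Refresh would belong on a background thread, with only the results marshalled back via the dispatcher.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Threading;

public interface IActiveObject
{
    void Refresh(); // pull fresh state (database, web service, ...)
}

public class UpdateBus
{
    private readonly List<IActiveObject> active = new List<IActiveObject>();
    private readonly DispatcherTimer timer;

    public UpdateBus(TimeSpan interval)
    {
        timer = new DispatcherTimer { Interval = interval };
        timer.Tick += (sender, args) =>
        {
            // Snapshot the list so objects can deregister mid-update.
            foreach (var obj in active.ToList())
                obj.Refresh();
        };
        timer.Start();
    }

    public void Register(IActiveObject obj) { active.Add(obj); }
    public void Deregister(IActiveObject obj) { active.Remove(obj); }
}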

How to model Data Transfer Objects for different front ends?

I've run into a recurring problem for which I haven't found any good examples or patterns.
I have one core service that performs all the heavy database operations and sends the results to different front ends (HTML, Silverlight/Flash, web services, etc.).
One of the service operations is "GetDocuments", which provides a list of documents based on different filter criteria. If I only had one front end, I would package the result in a list of Document DTOs (Data Transfer Objects) that just contain the data. However, different front ends need different amounts of "metadata". The simplest client just needs the document headline and a link reference. Other clients want a short text snippet of the document, another also wants a thumbnail, and a third wants the name of the author. It's basically all up to the GUI implementation what needs to be displayed.
What's the best way to model this?
- Lots of different DTOs (Document, DocumentWithThumbnail, DocumentWithTextSnippet) - tends to become a lot of classes.
- One DTO containing all the data, where the client chooses what to display - lots of unnecessary data sent.
- One DTO where certain fields are populated based on what the client requested - tends to become a very large class that needs to be extended over time.
- One DTO with some kind of generic "Metadata" field containing the requested metadata.
Or are there other options?
Since I want a high performance service, I need to think about both network load and caching strategies.
Does anyone have any good patterns or practices that might help me?
What I would do is give the front end the ability to request which metadata it wants, say getDocument( WITH_THUMBNAILS | WITH_TEXT_SNIPPET ).
The DTO is then built with only the requested information.
Including all the possible metadata every time is, as you said, unacceptable.
I would stay with one class defining all the possible accessors (getTitle(), getThumbnail()), and where possible return a placeholder when the thumbnail was not requested - something like "Image not available".
If you want to model this as a pattern, take a look at the factory patterns.
Hope this helps you.
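In C#, the same idea falls out naturally from a [Flags] enum; the sketch below is hypothetical (names, fields, and loaders included): one DTO class, but only the requested parts populated.

using System;

[Flags]
public enum DocumentParts
{
    None = 0,
    TextSnippet = 1,
    Thumbnail = 2,
    Author = 4
}

public class DocumentDto
{
    public string Title;     // always populated
    public string Link;      // always populated
    public string Snippet;   // only when requested
    public byte[] Thumbnail; // only when requested
    public string Author;    // only when requested
}

public class DocumentService
{
    public DocumentDto GetDocument(int id, DocumentParts parts)
    {
        var dto = new DocumentDto { Title = LoadTitle(id), Link = LoadLink(id) };
        if (parts.HasFlag(DocumentParts.TextSnippet)) dto.Snippet = LoadSnippet(id);
        if (parts.HasFlag(DocumentParts.Thumbnail)) dto.Thumbnail = LoadThumbnail(id);
        if (parts.HasFlag(DocumentParts.Author)) dto.Author = LoadAuthor(id);
        return dto;
    }

    // Placeholder loaders standing in for the real database access.
    private string LoadTitle(int id) { return "Document " + id; }
    private string LoadLink(int id) { return "/documents/" + id; }
    private string LoadSnippet(int id) { return "..."; }
    private byte[] LoadThumbnail(int id) { return new byte[0]; }
    private string LoadAuthor(int id) { return "unknown"; }
}

// Usage:
// var doc = new DocumentService().GetDocument(42,
//     DocumentParts.Thumbnail | DocumentParts.TextSnippet);

The call site then reads almost exactly like the getDocument( WITH_THUMBNAILS | WITH_TEXT_SNIPPET ) pseudocode above.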
Is there any noticeable cost to creating a DTO that has all the data any of your views could need and using it everywhere? I would do that, especially since it insulates you from a requirement change down the line where one view comes to need data another view already uses.
For example, maybe your Silverlight/Flash view doesn't show the title itself because it's in the thumbnail now, but later they decide they want to sort by it.
To clarify, I don't necessarily think you need to pass down all of the data every time, but I do think your DTO class should define all of it. Just don't fall into the pits of premature optimization or analysis paralysis: do the simplest thing first, then justify added complexity. Throw it all in and profile it. If the performance is unacceptable, optimize and try again.
