I've been experimenting with Apigee's custom statistics for gathering data about requests coming into the API. Now I have a whole bunch of temporary statistics names that I no longer need, like bozo, bozo1, my_test, etc.
How do I get rid of certain dimensions so they don't show up in the Custom Dimensions part of the Drilldowns dropdown?
I tried doing the following DELETE call, but it didn't work:
curl -X DELETE https://api.enterprise.apigee.com/v1/o/{org}/environments/test/stats/bozo -u {username}:{password}
I don't see anything about this in the API documentation.
The only way to get it deleted is if you have the commercial version of Apigee Edge and open a support ticket -- the ops team may be able to get it cleaned out.
This really matters when defining custom stats with generic labels like "Title" -- if you have two stats collectors that collect different data under the same label, their values get mixed together in Analytics and muck up your data.
I have a scorecard that shows the number of URL clicks driven by all queries, and it works as expected. I am now trying to display the number of clicks driven by just the top 10 queries. I was able to limit my table to the top 10 queries by disabling pagination, but now I'd like to sum their clicks in a scorecard to provide a quick summary rather than a table.
I don't think what you want to do is possible dynamically via just the Search Console connector. Google Data Studio does not provide any way to calculate rankings via calculated fields, so there's no way for you to know which query is in the top 10 without looking at a sorted table. A few imperfect alternatives (roughly in order of increasing complexity):
You apply a filter so that the scorecard only aggregates values above a certain threshold. This would be hardcoded, so you would be filtering on Clicks (i.e. aggregate all URL clicks above 100).
You apply a filter to the scorecard so that it only aggregates clicks from the top 10 URLs. This would not be a dynamically updating filter, so you'd have to look at the table to see which URLs are currently in the top 10, and that will change over time. It would end up being a filter like: "Include URLS Contains www.google.com,www.stackoverflow.com"
If you don't mind using Google Sheets as an intermediary, you could dump your Search Console data into a spreadsheet so that you can manipulate it however you like, then use the spreadsheet as the data source for Data Studio (instead of the Search Console connector). There appear to be some add-ons you can use out of the box, although I haven't used them myself, so I'm not sure how difficult they are. Alternatively, you can build something yourself with Google Apps Script and the Search Console API (see the sketch after this list).
You could build a custom Data Studio Community Visualization. (Just because they are called 'Community Visualizations' does not mean you have to make them publicly available.) Essentially, you would be building a scorecard-like component that aggregates the data according to your own rules, although this requires more coding experience. (Before you build one, check whether something like what you need already exists in the gallery; at a quick glance, I don't see anything that would meet your needs.)
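To make the "build it yourself" route more concrete, here's a rough Python sketch that pulls the top 10 queries by clicks straight from the Search Console API and sums their clicks; you could then write that number to a Sheet (or anywhere else) that feeds Data Studio. The property URL, date range, and service-account file are placeholders -- adjust them to your setup.

```python
# Rough sketch: sum the clicks of the top 10 queries via the Search Console API.
# SITE_URL, CREDS_FILE, and the dates below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"   # your verified Search Console property
CREDS_FILE = "service-account.json"     # service account with Search Console access

creds = service_account.Credentials.from_service_account_file(
    CREDS_FILE, scopes=["https://www.googleapis.com/auth/webmasters.readonly"]
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl=SITE_URL,
    body={
        "startDate": "2023-01-01",
        "endDate": "2023-01-31",
        "dimensions": ["query"],
        "rowLimit": 10,   # rows come back sorted by clicks, so this is the top 10
    },
).execute()

top10_clicks = sum(row["clicks"] for row in response.get("rows", []))
print(f"Clicks from top 10 queries: {top10_clicks}")
```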
I have been looking through the documentation for both the General Statistics and Advanced Statistics, but it seems only aggregated statistics are available. From what I can find, the most detailed information is available for a single day within a single category. Is it possible to retrieve statistics for a single marketing email, using e.g. its name as a parameter? Or is it necessary to use the Event Webhook, store all events (opens, clicks, etc.) on my end, and do all the calculations myself?
Thank you
Best regards,
Lukas
At this time, it's necessary to use the Event Webhook. The Event Webhook also lets you use unique_args for more detailed, granular stats. You can attach as many args per message as you'd like, naming the keys and values whatever makes sense for you.
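For illustration, here's a minimal sketch of a webhook receiver that buckets events by a unique arg. The arg name ("campaign_id") and the Flask setup are just examples I made up, not anything SendGrid prescribes; the important part is that the Event Webhook POSTs a JSON array of event objects, and your unique args show up as extra fields on each event.

```python
# Minimal sketch: tally Event Webhook events per unique arg.
# "campaign_id" is a hypothetical unique arg name; use whatever you attach to your mail.
from collections import Counter, defaultdict

from flask import Flask, request

app = Flask(__name__)
stats = defaultdict(Counter)  # {campaign_id: {"open": n, "click": m, ...}}

@app.route("/sendgrid/events", methods=["POST"])
def sendgrid_events():
    # SendGrid posts a JSON array of event objects.
    for event in request.get_json(force=True):
        campaign = event.get("campaign_id", "unknown")   # hypothetical unique arg
        stats[campaign][event.get("event", "unknown")] += 1
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```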
I have an application that contains a set of text documents that users can search for. Every user must be able to search based on the text of the documents. What is more, users must be able to define custom tags and associate them to a document. Those tags are used in two ways:
1) Users must be able to search for documents based on specific tag ids.
2) There must be facets available for the tags.
My solution was to add a multivalued field to each document that acts as an array containing the tag ids the document has been tagged with. So far so good. I was able to perform queries based on text and tag ids (for example text:hi AND tagIds:56).
My question is: would that solution work in production, in an environment where users add but also remove tags from documents? Remember, I need the data available in real time, so whenever a user removes or adds a tag I have to reindex that document and commit immediately. If that's not a good solution, what would be an alternative?
Stack Overflow uses Solr, in case you doubt Solr's ability to handle production workloads.
And although I couldn't find much information on how they implemented tags, your approach doesn't sound wrong. Yes, tagged documents will have to be reindexed (which means a slight delay), but other than that I don't see anything wrong with it.
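If it helps, here's a rough sketch of how the tag-update-and-facet flow could look against Solr's plain HTTP API. The core name, the id field, and the text/tagIds field names are assumptions based on your question, and the atomic update assumes your schema keeps the fields stored (or in docValues):

```python
# Rough sketch: update a document's tags atomically, then run a faceted search.
# Core name "documents" and fields "id", "text", "tagIds" are assumptions.
import requests

SOLR = "http://localhost:8983/solr/documents"

def set_tags(doc_id, tag_ids):
    """Replace a document's tags and commit so the change is visible immediately."""
    requests.post(
        f"{SOLR}/update?commit=true",
        json=[{"id": doc_id, "tagIds": {"set": tag_ids}}],
    ).raise_for_status()

def search(text_query, tag_id=None):
    """Full-text search, optionally filtered by a tag id, with tag facets."""
    params = {
        "q": f"text:{text_query}",
        "facet": "true",
        "facet.field": "tagIds",
        "wt": "json",
    }
    if tag_id is not None:
        params["fq"] = f"tagIds:{tag_id}"
    resp = requests.get(f"{SOLR}/select", params=params)
    resp.raise_for_status()
    return resp.json()

set_tags("doc-42", [56, 99])
results = search("hi", tag_id=56)
print(results["facet_counts"]["facet_fields"]["tagIds"])
```

Committing on every tag change works, but with many concurrent users you may prefer commitWithin or soft commits (near-real-time search) so Solr can batch the work instead of paying the full commit cost each time.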
I'm going to write a simple news site on Redis with support for followers.
I can't imagine how to organize user timelines like Twitter's. I read about Retwis ( http://redis.io/topics/twitter-clone ), but its feed-building method seems stupid. What if I want to remove entries? I'd have to remove all references to the entry from followers' feeds. What if I no longer follow some users?
There are several ways to attack what you describe with a bit of imagination; here are some examples that address your questions:
What if I want to remove entries?
One could maintain a set such as post:$postid:users for each post, holding all the user ids that may have the post in their feeds. When the post is to be deleted, you just extract all members from this set and iterate through the ids, removing the post from each uid:$userid:posts set. Speaking of which, you would have to turn that last one into a set instead of a list like the original article suggests, so that you can extract and remove individual items, but that is trivial -- the logic is pretty similar.
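A rough redis-py sketch of that deletion flow, keeping the key names from above (I'm storing the post itself as a hash rather than Retwis' delimited string, purely for readability):

```python
# Sketch of publish + delete using post:$postid:users and uid:$userid:posts as sets.
import redis

r = redis.Redis()

def publish_post(post_id, author_id, body, follower_ids):
    r.hset(f"post:{post_id}", mapping={"author": author_id, "body": body})
    for uid in follower_ids:
        r.sadd(f"uid:{uid}:posts", post_id)
        r.sadd(f"post:{post_id}:users", uid)   # remember who received it

def delete_post(post_id):
    # Remove the post from every feed that may contain it,
    # then drop the post itself and its bookkeeping set.
    for uid in r.smembers(f"post:{post_id}:users"):
        r.srem(f"uid:{uid.decode()}:posts", post_id)
    r.delete(f"post:{post_id}", f"post:{post_id}:users")
```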
What if I already do not follow some users?
When the feed is being generated for an individual user, you necessarily iterate over and read each post:$postid key, which gives you the author's user id. So before showing the post, you read this id and look it up in the uid:$userid:following set: if it's there, you show the post; if it's not, you delete it from uid:$userid:posts and don't show it.
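And the matching read path, again only a sketch with the same assumed keys plus uid:$userid:following as a set:

```python
# Sketch of feed generation: skip (and lazily prune) posts whose author
# the user no longer follows, and prune references to deleted posts.
import redis

r = redis.Redis()

def build_feed(user_id):
    feed = []
    for post_id in r.smembers(f"uid:{user_id}:posts"):
        post = r.hgetall(f"post:{post_id.decode()}")
        if not post:
            # Post was deleted; drop the dangling reference.
            r.srem(f"uid:{user_id}:posts", post_id)
            continue
        author = post[b"author"].decode()
        if r.sismember(f"uid:{user_id}:following", author):
            feed.append(post)
        else:
            # No longer following the author: remove it from this user's feed.
            r.srem(f"uid:{user_id}:posts", post_id)
    return feed
```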
In a nutshell, this is what you have to keep in mind in order to build this kind of logic in redis:
You'll need many commands, but that's OK; Redis is fast enough to handle it well.
Data will repeat, but that is also OK. It may look insane to someone with a relational DBMS background to store a set of users for each post when each user already has a set of their posts, but this is how you model relationships in a non-relational data store like Redis.
Generally speaking, think in terms of sets and sorted sets when designing something relational in Redis.
With Redis you get to do everything yourself, but once you get your head around it, it's actually pretty powerful.
I have a location auto-complete field with suggestions for all countries, cities, neighborhoods, villages, and zip codes. This is part of a location tracking feature I am building for my website. So you can imagine this list will run into the multi-millions of rows -- I'm expecting over 20 million at least with all the villages and postal codes. To make the auto-complete perform well I will use memcached so we don't always hit the database for this list. It will be used a lot, as this is the primary feature of the site. But the question is:
Is only one instance of the list stored in memcached irrespective of the users pulling the info, or does it need to maintain a separate instance for each user? So if, say, 20 million people are using it at the same time, will that differ from just one person using the location auto-complete? I am also open to other ideas on how to implement this location auto-complete so it performs well.
Or can I do something like this: when a user logs in, I send them the list in the background anyway, so by the time they reach the auto-complete text field their browser already has it and can show suggestions instantly?
Take a look at Solr (or Lucene itself); using NGram (or EdgeNGram) tokenizers you can get good autocomplete performance on massive datasets.
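As a rough sketch of the lookup side, assuming you index the locations into a Solr core with a field whose index-time analyzer applies an EdgeNGram filter (the core and field names below are made up), the per-keystroke request is just a cheap field query:

```python
# Rough sketch: prefix lookup against a Solr field assumed to be indexed with
# edge n-grams (core "locations", field "name_auto" are placeholder names).
import requests

SOLR = "http://localhost:8983/solr/locations"

def suggest(prefix, limit=10):
    resp = requests.get(
        f"{SOLR}/select",
        params={
            "q": f'name_auto:"{prefix.lower()}"',
            "fl": "name",    # return just the display name
            "rows": limit,
            "wt": "json",
        },
    )
    resp.raise_for_status()
    return [doc["name"] for doc in resp.json()["response"]["docs"]]

print(suggest("par"))
```

Each response is tiny, so if you still want memcached in front, cache per-prefix results rather than the whole 20-million-row list -- and a cached value is a single shared copy, regardless of how many users hit it.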