I'm building a reporting tool. Ideally I want to avoid going through web server logs myself and instead use (some of) the power of Piwik.
The data I get from the visitor log would be a good start; it's at http://example.com/piwik/index.php?module=CoreHome&action=index&idSite=1&period=day&date=yesterday#/module=Live&action=getVisitorLog&idSite=1&period=day&date=yesterday
Unfortunately I can't find a getVisitorLog action in the HTTP API docs at
http://developer.piwik.org/api-reference/reporting-api#Actions (and it's also not an undocumented feature; method=Actions.getVisitorLog gives me:
The method 'getVisitorLog' does not exist or is not available in the module '\Piwik\Plugins\Actions\API'.)
Is there another way to get to this? Or should I write a plugin for Piwik?
Apparently it is possible through the Live plugin API:
http://developer.piwik.org/api-reference/reporting-api#Live
This works as desired:
http://example.com/piwik/index.php?module=API&method=Live.getLastVisitsDetails&format=JSON&idSite=1&period=day&date=2015-07-21&expanded=1&token_auth=XXXXX&filter_limit=100
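For completeness, a minimal PHP sketch of calling that endpoint and decoding the JSON (the host, site ID, date, and token are the placeholder values from the URL above; check the Live API reference for the full response schema):

<?php
// Sketch: fetch the visitor log via Live.getLastVisitsDetails and decode it.
// Requires allow_url_fopen for file_get_contents over HTTP.
$params = http_build_query(array(
    'module'       => 'API',
    'method'       => 'Live.getLastVisitsDetails',
    'format'       => 'JSON',
    'idSite'       => 1,
    'period'       => 'day',
    'date'         => '2015-07-21',
    'expanded'     => 1,
    'filter_limit' => 100,
    'token_auth'   => 'XXXXX',
));
$json   = file_get_contents('http://example.com/piwik/index.php?' . $params);
$visits = json_decode($json, true);
foreach ($visits as $visit) {
    // Each entry is one visit; actionDetails lists the pages it viewed.
    echo $visit['visitIp'], ': ', count($visit['actionDetails']), " actions\n";
}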
I'm trying to migrate a web app from Google App Engine to a dedicated server, and I've gotten stuck on logging. Basically, I would like to organise the logs per request/context (like on GAE) so that I can easily review the errors/trace for each request. The most advanced logging library I could find is the glog package, but I still can't figure out how to log per request/context.
Each request gives you an http.Request object to work with.
If you're using sessions, you'll also have a sessions.Session object to work with.
You'll want to use those objects to log per request/context, since they identify the request/session.
OK, I found this link https://code.google.com/p/gwt-platform/wiki/CrawlerSupport#Using_gwtp-crawler-service that explains how you can make your GWTP app crawlable.
I have some GWTP experience, but I know nothing about AppEngine.
Google said its "crawlservice.appspot.com" can parse any Ajax page. Now I have a page "http://mydomain.com#!article" with an article that was pulled from the database. Say that page has the text "this is my article". Now I open this link:
crawlservice.appspot.com/?key=123456&url=http://mydomain.com#!article, and I can see all the JavaScript, but I can't find the text "this is my article".
Why?
Now let's check with a real-life example:
Open this link https://groups.google.com/forum/#!topic/google-web-toolkit/Syi04ArKl4k and you will see the text "If i open that url in IE".
Now open http://crawlservice.appspot.com/?key=123456&url=https://groups.google.com/forum/#!topic/google-web-toolkit/Syi04ArKl4k and you can see all the JavaScript, but there is no text "If i open that url in IE".
Why is that?
So if I use http://crawlservice.appspot.com/?key=123456&url=mydomain#!article, will the Google crawler be able to see the text in mydomain#!article?
Also, why key=123456? Does that mean everyone can use this service? Do we have our own key? Does Google limit the number of calls to their service?
Could you explain all these things?
Extra Info:
Christopher suggested I use this example:
https://github.com/ArcBees/GWTP-Samples/tree/master/gwtp-samples/gwtp-sample-crawler-service
However, I ran into another problem. My app is pure GWTP; it doesn't have an appengine-web.xml in WEB-INF. I have no idea what App Engine (GAE) is, or what Maven is.
Do I need to register for App Engine?
My app may have a lot of traffic, and I am using a GoDaddy VPS. I don't want to register for App Engine, since I'd have to pay Google for the extra traffic.
Everything in my GWTP app is OK right now except the crawler function.
So if I don't use Google App Engine, how can I build the crawler function for GWTP?
I tried to use HTMLUnit for my app, but HTMLUnit doesn't work for GWTP (see details here: Why HTMLUnit always shows the HostPage no matter what url I type in (Crawlable GWT APP)?).
I believe you are not allowed to crawl Google Groups. They are probably actively trying to prevent this, which is why you don't see the expected content.
There are a couple of points I wish to elaborate on:
The Google Code documentation is no longer maintained. You should look on GitHub instead: https://github.com/ArcBees/GWTP/wiki/Crawler-Support
You shouldn't use http://crawlservice.appspot.com. It isn't a Google service; it's out of date, and we may decide to delete it down the road. It only serves as a public example. You should create your own application on App Engine (https://appengine.google.com/).
There is a sample here (https://github.com/ArcBees/GWTP-Samples/tree/master/gwtp-samples/gwtp-sample-crawler-service) using GWTP's Crawler Service. You can basically copy-paste it. Just make sure you update the <application> tag in appengine-web.xml to the name of your application and use your own service key in CrawlerModule.
Finally, if your client uses GWTP and you followed the documentation, it will work. If you want to try it manually, you must encode the query parameters.
For example, http://crawlservice.appspot.com/?key=123456&url=http://www.arcbees.com#!service will not work, because the hash (everything including and after #) is not sent to the server.
On the other hand, http://crawlservice.appspot.com/?key=123456&url=http%3A%2F%2Fwww.arcbees.com%2F%23!service will work.
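In PHP, for instance, urlencode does the escaping (a small sketch using the example key and host from this answer):

<?php
// Sketch: percent-encode the target URL so the #! fragment survives as a query parameter.
$target   = 'http://www.arcbees.com/#!service';
$crawlUrl = 'http://crawlservice.appspot.com/?key=123456&url=' . urlencode($target);
echo $crawlUrl;
// http://crawlservice.appspot.com/?key=123456&url=http%3A%2F%2Fwww.arcbees.com%2F%23%21service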
Scenario: You are going to do scheduled database maintenance. You will hence be unable to serve dynamic content (just assume the caching system in front of the database also needs to be maintained).
During that time, what's the correct way of handling web requests trying to access a dynamic resource?
What's the correct HTTP status code, if any, to go along with the notice that your service is currently not available? Should you use an error in the 5xx range?
What are the implications for SEO? Will it hurt if search engine crawlers try to access your site and see lots of error codes, or pages with the same notice, instead of dynamic content? Can you easily recover from that?
503 Service Unavailable is the correct response to use in this situation.
Depending on how your site works, you could just put up a static HTML page replacing everything, saying that the site is undergoing maintenance.
As for SEO: search engines treat a 503 as a temporary condition and will come back later, especially if you also send a Retry-After header, so a reasonably short maintenance window shouldn't hurt your rankings.
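As a rough sketch of the static-page approach in PHP (the one-hour Retry-After value is just an assumption about your maintenance window):

<?php
// Sketch: answer every request with 503 while the backend is down.
// Retry-After hints to clients and crawlers when to come back.
http_response_code(503);
header('Retry-After: 3600'); // seconds; an HTTP-date is also allowed
header('Content-Type: text/html; charset=utf-8');
?>
<!DOCTYPE html>
<html>
<head><title>Maintenance</title></head>
<body><p>We are performing scheduled maintenance. Please check back shortly.</p></body>
</html>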
I'm currently developing a mobile application that will fetch data from the server on request (page load) or when a notification is received (e.g. GCM).
Currently I'm starting to think about how to build the backend for that app.
I thought about using PHP to handle the HTTP requests to my database (MySQL) and to return the responses as JSON. As I see it there are many ways to implement such a server, and I would like to hear thoughts about my ideas for implementations:
1. Create a single PHP page that receives an enum/query, executes it, and sends back the results.
2. Create a PHP page for every query that needs to be made.
Which of my implementations should I use? If neither, please suggest another. Thank you.
P.S. This server will only be used as a fetcher for SQL and push notifications. If you have any suggestions or past experience about how to do this (framework, language, anything that comes to mind), I'd be happy to learn.
You can use the PHP REST Data Services framework: https://github.com/chaturadilan/PHP-Data-Services
I am also looking for information about how to power a web and mobile application that has to get and save data on the server.
I've been working with a PHP framework, the Yii Framework, and I know that this framework, like others, gives you the possibility to create an API/web service.
APIs can be SOAP or REST; you should read about the differences between the two to see which is best for mobile. I think the main and most important one is that for SOAP you need a SOAP client library on the device you are trying to connect from, but for REST you just make an HTTP request to the URL.
I have built a SOAP API with Yii; it is quite easy, and I have used it to communicate between two websites, to get and put data in the same database.
As for your question about using one file or multiple files for every request: in the case of SOAP built on Yii, you normally define all the functions available to the API on the server side in only one file (controller), and to connect to that web service you end up doing:
$client = new SoapClient("url/of/webservice");
$result = $client->methodName($param1, $param2, ...);
So basically what you get is that, from your client, you can run any method defined on the server side with whatever parameters you wish.
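On the server side, a rough sketch of such a one-file controller in Yii 1.1 might look like this (the controller and method names are invented for illustration; see the web service guide linked below):

// Sketch: a Yii 1.1 controller exposing its @soap-tagged methods as a web service.
class ApiController extends CController
{
    public function actions()
    {
        // CWebServiceAction generates the WSDL and dispatches SOAP calls.
        return array(
            'service' => array('class' => 'CWebServiceAction'),
        );
    }

    /**
     * @param string the name to greet
     * @return string the greeting
     * @soap
     */
    public function getGreeting($name)
    {
        return 'Hello, ' . $name;
    }
}

A client would then point SoapClient at the generated WSDL, e.g. new SoapClient("http://yoursite/index.php?r=api/service"), and call $client->getGreeting('world').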
Assuming that you're used to programming PHP in the "classic way", I suggest you start learning a framework. There are many reasons to do it, but in the end it's because the code turns out cleaner and more stable. For example:
You shouldn't be writing queries by hand (well, sometimes), but you can use the framework's models to handle data validation and storage in the database.
Here are some links:
http://www.larryullman.com/series/learning-the-yii-framework/
http://www.yiiframework.com/doc/guide/1.1/en/topics.webservice
http://www.yiiframework.com/wiki/175/how-to-create-a-rest-api/
As I said, I am also looking to learn how to better power a mobile application. I know this can be achieved with an API, but I don't know if that is the only way.
"Create a single PHP page that receives an enum/query, executes it, and sends back the results."
I created a single PHP file named api.php that does exactly this; the project is named PHP-CRUD-API, and it is quite popular.
It takes care of the boring part of the job and provides a sort of framework to get you started.
Talking about frameworks: you can integrate the script into Laravel, Symfony, or SlimPHP, to name a few.
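If you'd rather hand-roll option 1 to see the moving parts, a minimal sketch might look like the following (the action names, table, and credentials are all made up for illustration; this is not PHP-CRUD-API itself):

<?php
// Sketch of the single-endpoint idea: the client sends ?action=...,
// the server maps it to a whitelisted prepared statement and returns JSON.
$queries = array(
    'list_users' => 'SELECT id, name FROM users',
    'get_user'   => 'SELECT id, name FROM users WHERE id = :id',
);
$action = isset($_GET['action']) ? $_GET['action'] : '';
if (!isset($queries[$action])) {
    header('Content-Type: application/json', true, 400);
    exit(json_encode(array('error' => 'unknown action')));
}
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare($queries[$action]);
// Bind :id only when the chosen query actually uses it.
$params = (strpos($queries[$action], ':id') !== false)
    ? array('id' => isset($_GET['id']) ? $_GET['id'] : 0)
    : array();
$stmt->execute($params);
header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));

The whitelist is the important part: the client only ever names a query, it never sends SQL.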
In the official trigger.io docs there seems to be no provision for custom HTTP headers when it comes to the forge.file module. I need this so I can download files behind an HTTP authentication scheme. This seems like an easy thing to add, if support isn't already there.
Any workarounds? Any chance of a quick fix in the next update? I know I could use forge.request instead, but I'd like to keep a local copy (saveURL).
Thanks
Unfortunately the file module just uses simple "download url" methods rather than a full HTTP request library, which makes adding support for custom headers a fairly big task.
I've added a task to our backlog for this, but I don't have a timeframe for it being added.
Currently on iOS you can do basic auth by using URLs in the form http://user:password@url.com, in case that helps.
Maybe to avoid this you can configure your server differently, or put a proxy server in front that allows you to pass authentication details as GET parameters?
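If you go the proxy route, here is a rough PHP sketch of the idea (the upstream host, credentials, and the file parameter are all placeholders):

<?php
// Sketch: forge.file downloads from this script, which adds the
// Authorization header that the file module can't set itself.
$file     = isset($_GET['file']) ? basename($_GET['file']) : ''; // basename() blocks path traversal
$upstream = 'http://protected.example.com/files/' . $file;
$ch = curl_init($upstream);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array('Authorization: Basic ' . base64_encode('user:password')),
));
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
http_response_code($status);
header('Content-Type: application/octet-stream');
echo $body;

You'd then point saveURL at this script instead of the protected URL. Obviously you'd want to add some access control of your own to the proxy so the credentials aren't open to everyone.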