Piwik goals (checkout steps) - Matomo

I am looking for a way to create a checkout-steps goal in Piwik with the same functionality as the Goal Funnel in Google Analytics. As far as I can tell, Piwik can track the goal itself, but it doesn't provide a way to separate the individual steps (it only offers contains/regex matching on the URL). Does anyone know a solution for this?
P.S. Using Piwik 1.10.1

In the end I managed to solve this by creating a goal for each unique checkout step and calling trackPageView() on the frontend for each step.
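In case it helps anyone, a minimal sketch of that per-step tracking, assuming the standard Piwik JavaScript tracker snippet is loaded and that each goal matches a virtual URL such as /checkout/step-1 (the URL scheme and the helper name are my own illustration):

// _paq is the command queue provided by the standard Piwik snippet.
declare const _paq: unknown[][];

function trackCheckoutStep(step: number): void {
  // Report a virtual page view; a goal whose URL pattern "contains"
  // /checkout/step-1 will then fire for step 1 only.
  _paq.push(['setCustomUrl', '/checkout/step-' + step]);
  _paq.push(['trackPageView', 'Checkout step ' + step]);
}

// Example: call this when the user reaches step 2 of the checkout.
trackCheckoutStep(2);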

Related

How to log client-side errors to a centralised file or dashboard (in React)

I want to log all client-side errors to a centralised logging file. Can anyone tell me how to do this in React (client-side error logging)? I am not able to find support for this in React. Could anyone show how to implement it with a small demo?
As others have commented, React runs in the browser on the client side, so writing to a file is not possible.
However, I was searching myself for centralised logging for a frontend application and came across this post. Perhaps the question is not phrased quite right, but basically I wanted to log and report frontend crashes to a central place or dashboard, so that when app users run into issues I at least know about it by seeing the errors there. I found a few websites/services that can do this for you.
First tier is a free subscription:
Rollbar
Sentry
LogRocket
Bugsnag
ClickUp
First tier is paid:
Instabug
Raygun
Dynatrace
I've used this list myself to do some research and thought I'd share it here for anyone else who might currently be in my position. Perhaps it will help you along. Examples of how to implement each are in their docs; see which solution works best for you.
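Whichever service you pick, the wiring on the React side tends to be the same: catch errors in an error boundary (and/or a global error handler) and forward them. A minimal sketch, where reportError() and the /log endpoint are stand-ins for whatever SDK call or collector your chosen service provides (e.g. Sentry's captureException or Bugsnag's notify):

import React from 'react';

// Hypothetical reporter: swap the fetch for your service's SDK call.
function reportError(error: Error, info: React.ErrorInfo): void {
  fetch('/log', { // assumed collector endpoint, not a real service URL
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: error.message,
      stack: error.stack,
      componentStack: info.componentStack,
    }),
  }).catch(() => {
    // Never let the logging call itself crash the app.
  });
}

// Error boundaries must be class components in React.
class ErrorBoundary extends React.Component<{ children: React.ReactNode }> {
  componentDidCatch(error: Error, info: React.ErrorInfo): void {
    reportError(error, info);
  }
  render(): React.ReactNode {
    return this.props.children;
  }
}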

Recover Cloud Functions default service account with the undelete POST call

I have used Google Cloud Functions for quite a long time, with no real authentication problem for now.
Today I met this error while deploying a new function:
ERROR: (gcloud.functions.deploy) ResponseError: status=[400], code=[Bad Request], message=[Default service account 'PROJECT-ID@appspot.gserviceaccount.com' doesn't exist. Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.]
I tried several things:
disable/enable the GCF API: no service account recovered
gcloud beta app repair (reference here): no default service account recovered
the undelete API POST call
If I understand the current GCP features correctly, using the last option is my best solution, but somehow I keep getting a 400 error.
I found my unique ID in my activity log, at the creation of the default service account.
I really can't see where the problem is in the undelete API call, and I would be really thankful if you could help with it.
Thanks to @Maxim, I now know that my problem comes from the fact that the deletion of this service account happened more than 30 days ago, which means it has already been purged from the system and is not recoverable anymore.
In case you meet this same kind of problem, please try out this link :
https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
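For reference, the undelete call from that page boils down to a single authenticated POST against the IAM API. A rough sketch, assuming you have an OAuth access token (e.g. from gcloud auth print-access-token) and the numeric unique ID of the deleted account from your activity log; the variable names are illustrative:

// Assumed inputs: replace with your own values.
const accessToken = '<OAUTH_ACCESS_TOKEN>';
const uniqueId = '<ACCOUNT_UNIQUE_ID>'; // numeric ID from the activity log

async function undeleteServiceAccount(): Promise<void> {
  const url =
    `https://iam.googleapis.com/v1/projects/-/serviceAccounts/${uniqueId}:undelete`;
  const res = await fetch(url, {
    method: 'POST',
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) {
    // Note: a 400 is also what you get when the account was deleted
    // more than 30 days ago and has been purged, as in this question.
    throw new Error(`undelete failed: ${res.status} ${await res.text()}`);
  }
}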
I see three alternative ways to proceed next:
Create a new project from scratch to work from.
File a support case via the support center.
Open a private issue by providing your project number in the following component.
I believe reaching out to GCP Support is the most practical option at this stage, and I recommend you do so, seeing as you've attempted most if not all ways of service account recovery without success.
On a last note, as for the latter option, the contents of the private issue will only be visible to you, and to the GCP Support staff (us). If you choose this option, please let me know when it's opened, and I'll start working on it as soon as possible.

Importing Apache logs into Piwik

I am in the process of switching my site analytics from GA to Piwik and would like to incorporate all the historic data that I can. I have already concatenated the full trail of Apache log files in my possession. However, what to do next is not at all clear to me, and the Piwik documentation does not help. It says something along the lines of
python /path/to/piwik/misc/log-analytics/import_logs.py --url=http://analytics.example.com access.log
I have my concatenated log file, all.logs, in the log-analytics folder. I would have thought that I just need to issue
python /path/to/piwik/misc/log-analytics/import_logs.py all.logs
but that throws up an error message. When I additionally provide the URL of the site in question, I get an error saying the script got back an HTML document (naturally), which it does not like.
I'd be most grateful to anyone who might be able to put me on the right track here.
I think --url=http://analytics.example.com lets you set the URL of your Piwik installation, not of your website.
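So the working invocation should point --url at the Piwik install itself and pass the log file last; something like the following, assuming Piwik is reachable at http://analytics.example.com and the website you are importing into has site ID 1 in Piwik (the --idsite value is an assumption for illustration):

python /path/to/piwik/misc/log-analytics/import_logs.py --url=http://analytics.example.com --idsite=1 all.logs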

Joomla site shows IP address instead of URL

I am finalising my Joomla site on a Siteground host and have encountered the following problem: my site shows the IP address instead of the URL. Although I type in the URL (www.nooitmeerfile.be, by the way), it shows the IP address.
Could someone please give me a step-by-step explanation on how to fix this? I happen to find a lot of fragmented answers cluttered around the web. I am a novice user, and I'm stuck :-)
Thanks!
Possible steps to debug your issue:
Check public $live_site = ''; in configuration.php and try setting it to your domain without a trailing slash, e.g. public $live_site = 'http://www.nooitmeerfile.be';
Check that your .htaccess file has not been altered, and try downloading and using a fresh copy.
Check whether you are using third-party SEF components/plugins and try disabling them.
Clear your Joomla and browser cache.
Contact your hosting provider in case something in your hosting is misconfigured.
Hope this helps

Why can't I see any text in "http://crawlservice.appspot.com/?key=123456&url=http://mydomain.com#!article"?

OK, I found this link https://code.google.com/p/gwt-platform/wiki/CrawlerSupport#Using_gwtp-crawler-service that explains how you can make your GWTP app crawlable.
I have some GWTP experience, but I know nothing about App Engine.
Google says its "crawlservice.appspot.com" can parse any Ajax page. Now I have a page "http://mydomain.com#!article" with an article that was pulled from a database. Say that page has the text "this is my article". Now I open this link:
crawlservice.appspot.com/?key=123456&url=http://mydomain.com#!article, and I can see all the JavaScript, but I couldn't find the text "this is my article".
Why?
Now let check with a real life example
Open this link https://groups.google.com/forum/#!topic/google-web-toolkit/Syi04ArKl4k and you will see the text "If i open that url in IE".
Now open http://crawlservice.appspot.com/?key=123456&url=https://groups.google.com/forum/#!topic/google-web-toolkit/Syi04ArKl4k: you can see all the JavaScript, but there is no text "If i open that url in IE".
Why is that?
So if I use http://crawlservice.appspot.com/?key=123456&url=mydomain#!article, will the Google crawler be able to see the text in mydomain#!article?
Also, why key=123456? Does that mean everyone can use this service? Do we have our own key? Does Google limit the number of calls to the service?
Could you explain all these things?
Extra Info:
Christopher suggested I use this example:
https://github.com/ArcBees/GWTP-Samples/tree/master/gwtp-samples/gwtp-sample-crawler-service
However, I ran into another problem. My app is pure GWTP; it doesn't have an appengine-web.xml in WEB-INF. I have no idea what App Engine (GAE) is, or what Maven is.
Do I need to register for App Engine?
My app may have a lot of traffic, and I am using a GoDaddy VPS. I don't want to register for App Engine, since I would have to pay Google for extra traffic.
Everything in my GWTP app is OK right now except the crawler function.
So if I don't use Google App Engine, how can I build the crawler function for GWTP?
I tried to use HTMLUnit for my app, but HTMLUnit doesn't work for GWTP (see details here: Why HTMLUnit always shows the HostPage no matter what url I type in (Crawlable GWT APP)?).
I believe you are not allowed to crawl Google Groups. They are probably actively trying to prevent this, so you do not see the expected content.
There are a couple of points I wish to elaborate on:
The Google Code documentation is no longer maintained. You should look on Github instead: https://github.com/ArcBees/GWTP/wiki/Crawler-Support
You shouldn't use http://crawlservice.appspot.com. This isn't a Google service; it's out of date and we may decide to delete it down the road. It only serves as a public example. You should create your own application on App Engine (https://appengine.google.com/).
There is a sample here (https://github.com/ArcBees/GWTP-Samples/tree/master/gwtp-samples/gwtp-sample-crawler-service) using GWTP's Crawler Service. You can basically copy-paste it. Just make sure you update the <application> tag in appengine-web.xml to the name of your application and use your own service key in CrawlerModule.
Finally, if your client uses GWTP and you followed the documentation, it will work. If you want to try it manually, you must encode the query parameters.
For example http://crawlservice.appspot.com/?key=123456&url=http://www.arcbees.com#!service will not work because the hash (everything including and after #) is not sent to the server.
On the other hand http://crawlservice.appspot.com/?key=123456&url=http%3A%2F%2Fwww.arcbees.com%2F%23!service will work.
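To make the difference concrete, here is a tiny sketch of building the encoded form (same example key and URL as above):

const target = 'http://www.arcbees.com/#!service';
// Percent-encode the whole target URL so the fragment (#!service)
// travels inside the url query parameter instead of being stripped.
const crawlUrl =
  'http://crawlservice.appspot.com/?key=123456&url=' + encodeURIComponent(target);
console.log(crawlUrl);
// -> http://crawlservice.appspot.com/?key=123456&url=http%3A%2F%2Fwww.arcbees.com%2F%23!service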