Track .zip downloads with Piwik (Matomo)

I've run into a bit of trouble on my website.
We use Piwik to track the downloads on our site.
Unfortunately, Piwik can't track downloads that come in through a direct link.
All our downloadable content is fetched with a single click on SoundCloud or through a shortened link on other social media.
How can I track these downloads with Piwik? Is it even possible?
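One possible approach, offered as a sketch rather than a confirmed recipe: point the shortened/social links at a small redirect script that records the hit through Piwik's HTTP Tracking API (the piwik.php endpoint with the documented idsite, rec, url and download parameters) before handing out the file. The site ID, host names and download.php itself are assumptions for illustration:

<?php
// download.php?file=archive.zip - hypothetical redirect endpoint.
$file = basename($_GET['file'] ?? '');                  // strip any path tricks
$downloadUrl = 'https://example.com/files/' . rawurlencode($file);

$params = http_build_query([
    'idsite'      => 1,                                 // your Piwik site ID (assumed)
    'rec'         => 1,                                 // required to record the hit
    'url'         => $downloadUrl,
    'download'    => $downloadUrl,                      // flags the hit as a download
    'action_name' => 'Download/' . $file,
]);

// Fire-and-forget; the download must still work if Piwik is down.
@file_get_contents('https://piwik.example.com/piwik.php?' . $params);

header('Location: ' . $downloadUrl, true, 302);
exit;

The SoundCloud and shortened social links would then point at download.php?file=... instead of the .zip itself, so every direct download passes through the tracker exactly once.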

Related

Auto upload pictures from DSLR to Node.js web app

I have a web app in Node.js and Angular. In our company we need to take pictures of some products.
Ideally, we would take the pictures with the app open (in a browser on a desktop) and they would automagically upload to the app.
I know it sounds very easy, and I'm pretty sure it won't be. But is there a way to do this? Can it be done via WiFi, or only over USB?
We are using a DSLR. Right now it's a Canon (I know they have an SDK), but it would be great not to depend on the camera model or brand.
Thanks!
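One brand-agnostic pattern, offered only as a sketch: let tethering software (Canon's EOS Utility, gPhoto2, etc.) drop each shot into a watched folder, and run a small script that uploads every new file to the web app. The folder path and the /api/photos endpoint on the Node app are hypothetical:

<?php
// watch-and-upload.php - poll a tether folder, POST new images to the app.
$watchDir = '/home/user/tethered-shots';                 // assumed dump folder
$endpoint = 'http://localhost:3000/api/photos';          // hypothetical endpoint
$seen = [];

while (true) {
    foreach (glob($watchDir . '/*.{jpg,JPG,cr2,CR2}', GLOB_BRACE) as $path) {
        if (isset($seen[$path])) continue;
        $seen[$path] = true;

        $ch = curl_init($endpoint);
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => ['photo' => new CURLFile($path)],
            CURLOPT_RETURNTRANSFER => true,
        ]);
        curl_exec($ch);                                  // upload one photo
        curl_close($ch);
    }
    sleep(2);                                            // crude polling loop
}

Because the watcher only cares about files appearing in a folder, it works the same whether the camera tethers over WiFi or USB, and it doesn't depend on the camera brand.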

Problems with an AngularJS application's appearance in Google search

I have a personal project that has consumed my free time and effort for about a year without significant profit. I have problems with its appearance in Google and would really appreciate some help here.
This project (http://yuppi.com.ua - similar to Craigslist in the US) is a web-based AngularJS 1.2 application that uses a PHP REST API hosted on GoDaddy. To make this application popular, it has to be very visible on the internet and very searchable in Google, and users have to be able to share pages via social networks or Skype.
According to Google's specification, Google's crawlers don't run JavaScript to get the content of a web page before indexing, so I've added an _escaped_fragment_ page that displays the content of a web page without JavaScript. For example:
Page: http://yuppi.com.ua/#!/items/sub/18/_
Dirty: yuppi.com.ua/?_escaped_fragment_=/items/sub/18/_
This dirty page gets redirected here, where Google will see the content:
http://yuppi.com.ua/server/crawler_proxy/routee.php?path=/items/sub/18/
So basically I have two versions of the HTML file for that page. One version is the one available to users, which has styles, a lot more HTML tags, etc. The second is the version for the Google crawler - very lightweight, without any styles. And I am expecting to see a clean link to my site in Google, not a dirty one.
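The redirect itself boils down to something like this on the PHP side (a simplified sketch, not the actual routee.php; app.html is a placeholder for the real entry template):

<?php
// index.php - Google rewrites #! URLs into ?_escaped_fragment_=...
if (isset($_GET['_escaped_fragment_'])) {
    $path = $_GET['_escaped_fragment_'];
    // Hand the crawler the lightweight snapshot instead of the Angular app.
    header('Location: /server/crawler_proxy/routee.php?path=' . urlencode($path), true, 301);
    exit;
}
// Otherwise serve the normal AngularJS single-page app.
readfile(__DIR__ . '/app.html');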
But if you search Google for all the links to the web site, you will see that one of them displays its "dirty" state.
Another problem is sharing links in Skype.
When I send a link to someone, I expect it to be turned into a thumbnail preview, but that doesn't happen. Instead I see an ugly link to my web site.
Please help me understand how to make everyone happy: users, the Google crawler, GoDaddy, and me.
I was encountering the same problems last year with a big project, and we ended up using https://prerender.io/.
It's a prerendering system that works with a PhantomJS browser to detect bot requests and render a full HTML template. It also instantiates a cache service so it doesn't re-render a template that hasn't changed.
Hope it helps.
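Roughly speaking, such middleware does something like this (an illustrative sketch only, not prerender.io's actual code; the X-Prerender-Token header is omitted for brevity):

<?php
// Serve prerendered HTML to bots, the normal JS app to everyone else.
$ua    = $_SERVER['HTTP_USER_AGENT'] ?? '';
$isBot = isset($_GET['_escaped_fragment_']);
foreach (['googlebot', 'bingbot', 'yandex', 'facebookexternalhit', 'twitterbot'] as $bot) {
    if (stripos($ua, $bot) !== false) { $isBot = true; break; }
}

if ($isBot) {
    // The prerender service renders and caches the page, then returns static HTML.
    $target = 'https://service.prerender.io/http://'
            . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
    echo file_get_contents($target);
    exit;
}
// ...fall through to the regular AngularJS app here.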

Downloading files from URL

I have a question related to security on a website. Let's say a visitor is currently on http://www.example.com/ and navigates to the gallery page. There he can find unique images that are displayed to him according to the login details he provided earlier. Simply inspecting a picture shows him the URL of that picture: www.example.com/images/image_589326.png.
My question is: is there a way for a user to somehow download all the files from the
www.example.com/images/ folder, or somehow find the names of all the images in that folder and simply view them via their absolute URLs?
Yes, that is possible. There are two main ways this can be achieved: scraping and enumeration.
Through scraping, someone would write a script that looks at the gallery page, makes a list of all of the images, and then downloads them all.
Enumeration would simply request everything from http://www.example.com/images/image_000001.png through http://www.example.com/images/image_999999.png and download the images that are present (see the sketch below).
If the site is not properly set up, you may also be able to get a directory listing from http://www.example.com/images to see all of the files in the images/ directory.
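To make the enumeration point concrete, here is a sketch of what such a script could look like (the URL pattern comes from the question; the range is arbitrary):

<?php
// Try every sequentially numbered image and keep the ones that exist.
// Illustrative only - it shows why predictable names are weak protection.
for ($i = 1; $i <= 999999; $i++) {
    $name = sprintf('image_%06d.png', $i);
    $data = @file_get_contents('http://www.example.com/images/' . $name);
    if ($data !== false) {
        file_put_contents($name, $data);               // image was present
    }
}

The usual countermeasures are non-guessable (random) file names, and serving images through a script that checks the visitor's session before streaming the file.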

LinkedIn share links to PDF documents

I am trying to create buttons on a web page that allow users to share links to PDF documents on LinkedIn. LinkedIn loads a window without any errors but offers no link or preview of the PDF or any indication of what is being shared.
Here are the two methods I have tried. First, the plugin method.
<script type="in/share" data-url="http://example.net/DocumentDownload.aspx?Command=Core_Download&entryID=114"></script>
And, secondly, with a custom URL:
<a href="https://www.linkedin.com/shareArticle?mini=true&url=http%3A%2F%2Fexample.net%2FDocumentDownload.aspx%3FCommand%3DCore_Download%26entryID%3D114">TEST</a>
Encoding the URL makes no difference.
The above links are direct document links from a DNN web site using Document Exchange. If I change the URLs to any HTML page, it works fine, and LinkedIn seems able to extract the useful information right from the page and use that for the share details.
Can LinkedIn handle this kind of thing? There is nothing to guide me on the type of links that can be shared. I can't find any information about it. There are no errors in the web console.
Not sure, but you should try to provide LinkedIn with a link that ends in .pdf, like http://example.com/documents/file1.pdf. I guess LinkedIn just checks whether the URL ends in .pdf to decide if it is a PDF document or not.
I have no problem sharing PDFs on LinkedIn. Check it out...
https://www.linkedin.com/sharing/share-offsite/?url=https://www.revoltlib.com/anarchism/the-conquest-of-bread/view.pdf
Works perfectly fine. And view.pdf is a script, not a static file, so LinkedIn isn't looking for a PDF file to analyze so much as for headers that indicate a PDF file is available to analyze. So, in PHP, at DocumentDownload.aspx, we would do...
header('Content-type: application/pdf; charset=utf-8');
This header lets the sharing app know that it can analyze the document as a PDF file and extract useful information from it.
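Spelled out, a script that serves a PDF this way might send something like the following (a sketch; the path and file name are placeholders):

<?php
// Serve a PDF from a script URL with headers that identify it as a PDF.
$file = __DIR__ . '/documents/file1.pdf';                    // hypothetical path

header('Content-Type: application/pdf');
header('Content-Disposition: inline; filename="file1.pdf"'); // view, don't force download
header('Content-Length: ' . filesize($file));
readfile($file);
exit;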

How to remove sitemap.aspx in DNN 6

I need to remove sitemap.aspx from the site.
In DNN 6 there is a sitemap.aspx page that simply shows an XML sitemap. I cannot edit or remove that file, so I need to remove that page and recreate it as a simple HTML sitemap.
NOTE: the page name should be sitemap.aspx.
Sitemap.aspx isn't a physical page you can delete.
You can, however, rename it to something else. It's in your web.config file, under the 'handlers' section. Just look for sitemap.aspx and change it to something else, like 'searchenginesitemap.aspx'. Don't forget to update your robots.txt file to point to the new sitemap name, or go to the various webmaster console pages in the search engines and advise them of the new location.
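For reference, the relevant web.config entry looks something like this (exact attributes vary by DNN version, so treat this as an approximation):

<handlers>
  <!-- change path="SiteMap.aspx" to rename the sitemap URL -->
  <add name="SitemapHandler" verb="*" path="SiteMap.aspx"
       type="DotNetNuke.Services.Sitemap.SitemapHandler, DotNetNuke" />
</handlers>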
The sitemap.aspx page is used to create the XML sitemap for search engines. By changing this you break that functionality and limit the searchability of your site.
That being said, under Host Settings -> Advanced Settings you could set up a new Friendly URL that matches .*/sitemap.aspx to another URL/page on your site.
I long ago stopped using DNN's native sitemap.aspx... IT'S BUGGY!... and here is how I found out.
I generated my own "CLEAN" sitemap.xml using a free third-party tool and uploaded it to the root of my DNN website, then re-submitted domainname.com/sitemap.xml to Google via Webmaster Tools, and as a result we now get a 1st-page, top-10 ranking.
Mostly in the top 5, whereas before, using DNN's native sitemap.aspx, we would get random errors, which was pretty ANNOYING. Plus we got a very bad Google PageRank. But those were just my findings of better results. Note: I also place the location of the sitemap within the robots.txt file...
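For example, the robots.txt line is just (domain name is a placeholder):

Sitemap: https://domainname.com/sitemap.xml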
Although I will admit it is extremely ANNOYING that you cannot just edit the DNN sitemap URL. This creates an issue if you've built the site on a test server and then migrated to production... your DNN sitemap URL only reads the first portal alias from when you first developed the site.
Anyway, those were my findings... others' results may vary... just sharing.
