Our marketing team has placed a lot of these tracking pixels on our site. Most of them just make a simple HTTP GET to a URL, usually via an IMG tag, but some use document.write to inject an iframe or script node as well.
What I would like to know is what exactly these track. Source IP? What if you are behind a proxy?
These sites cause the visitor's browser to go to the tracking site to load the image or JavaScript. What that site does is store a cookie and/or a fingerprint of the visitor. Your site also tells the tracking site something about the visitor -- whether they purchased something, or particular aspects of the pages that were visited. The tracking site can then connect this visit with other visits to your sites, other sites, banner ads and more.
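To make that concrete, here is a rough sketch of what such a script-based pixel typically does -- the tracking domain, parameter names and the orderTotal variable are all made up for illustration, not any particular vendor's actual snippet:

    // Hypothetical illustration only -- not a real vendor snippet.
    const params = new URLSearchParams({
      page: location.href,                              // which of your pages was viewed
      ref: document.referrer,                           // where the visitor came from
      order: String((window as any).orderTotal ?? ""),  // data your page chooses to expose
      cb: Date.now().toString(),                        // cache-buster so every view is logged
    });

    // The GET request itself is just an image load; the response sets or reads
    // the tracker's cookie, which is how separate visits get tied together.
    const img = new Image(1, 1);
    img.src = "https://tracker.example.com/pixel.gif?" + params.toString();
    document.body.appendChild(img);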
It's called a Web bug.
I have just verified the custom URL for my Google Sites site. When I want to assign it, it says, "This URL is already in use by another Google service." Meanwhile, I don't remember using the URL for any Google service. I just verified it with Google Webmasters. Anyway, I use Plesk for my domain services. Any help?
This is my site: https://sites.google.com/view/alvisyhrn/home
This is my URL: www.alvisyahrin.com
Your help will be much appreciated.
Thank you.
I use Google Domains but was running into the same error message. This post suggests creating and then deleting a synthetic redirect record (e.g. www.alvisyahrin.com -> http://google.com) in Google Domains. Deleting it displayed an "All resource records in this synthetic record will be deleted." warning, and it seems to have done the trick: as soon as I deleted the synthetic record, Sites was willing to use the domain as a custom domain.
I realize you're using a different registrar for your domain, but visiting your site now it looks like you managed to get things working (I assume by doing something like this). Hopefully this will be a helpful breadcrumb for Google Domains users that run into this, at least.
I will need to build an ecommerce store. I can't really wrap my head around it yet, but my main vision:
I would pay a monthly fee for a solution (something like WooCommerce or Shopify) so I can keep my products online in their database. This solution would also have all the needed things bundled (like emails, order tracking, inventory, refunds). It would generate an email or other notification for my client when an order happens. I can imagine this happening on WordPress with some pre-built templates.
Here comes the second part, because I would prefer to build the front-end on my own. Do you know any solution where I can simply communicate with GET/POST requests to its endpoints? So when the webshop loads, the products would be rendered with CSS for the users. In case of an order (linked to Stripe, most probably), the required details (SKU, quantity, user info) are saved and sent to the solution -- something like the sketch below.
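Roughly, I imagine the front-end talking to the backend like this (the endpoint paths and field names here are just made up for illustration; whichever platform is chosen would have its own API):

    // Hypothetical endpoints -- substitute whatever the chosen platform actually exposes.
    interface Product { sku: string; name: string; price: number; }

    async function loadProducts(): Promise<Product[]> {
      const res = await fetch("https://shop-backend.example.com/api/products");
      if (!res.ok) throw new Error(`Failed to load products: ${res.status}`);
      return res.json(); // render these into my own HTML/CSS
    }

    async function placeOrder(sku: string, quantity: number, email: string) {
      // The platform records the order, adjusts inventory, and sends the notification email.
      const res = await fetch("https://shop-backend.example.com/api/orders", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ sku, quantity, email, paymentProvider: "stripe" }),
      });
      if (!res.ok) throw new Error(`Order failed: ${res.status}`);
      return res.json();
    }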
How would you build it? What would you recommend as a service?
Regards,
Koppany
I'm aware of one company, Scale Labs, which provides cross-border e-commerce solutions and all kinds of services related to the e-commerce sector.
You can also look at alternatives to Scale Labs, which include Pitney Bowes, aCommerce Asia, etc., but personally I have gone with Scale Labs. They helped me from scratch, and I am now earning a good profit from my store. Here's the link to their official website: Cross-Border e-commerce solutions.
If you are still confused, let me know. Thanks
I am working on a Chrome bookmarking extension with Google App Engine as the backend. I am the only user now, but I thought that if there are other users in the future, the URL needs to include the user name for the extension to interact with the backend. So I was thinking of changing
http://ting-1.appspot.com/useradminpage
to
http://ting-1.appspot.com/user_name/useradminpage
where "user_name" is the gmail user id.
But I looked at Twitter URLs and I see that they have
http://twitter.com/#!/user_name/
What is the purpose of "#!"? Is my scheme good enough in this case?
The # in a URL signifies the 'fragment identifier'. Historically this has been used to identify a part of a document identified by an 'anchor' tag, but recently webapp developers have begun to use it to pass information about the page state to Javascript code running in the page. This is used because it's possible for Javascript code to modify the fragment of the current page without causing the page to reload - meaning it can update as you browse through the webapp, and go right back to where you were when you reload the page.
The fragment is not sent to the server when the browser loads a page, so Twitter's server just sees a request for twitter.com; it's up to the Javascript code in the page to examine the fragment and determine what to do after that.
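For example, a page can read and rewrite the fragment entirely on the client, without any request going to the server, and react when it changes:

    // Read whatever follows the "#" on the current URL.
    // For http://twitter.com/#!/user_name/ this would be "!/user_name/".
    const fragment = location.hash.slice(1);

    // Changing the fragment does NOT reload the page or hit the server;
    // it just updates the address bar and the browser history.
    function showUser(userName: string): void {
      location.hash = "!/" + userName + "/";
    }

    // The app re-renders whenever the fragment changes (back/forward, manual edit, etc.).
    window.addEventListener("hashchange", () => {
      const path = location.hash.slice(1); // e.g. "!/user_name/"
      console.log("render state for", path);
    });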
In your particular case, assuming you're using the App Engine User service to authenticate users, you have a number of options for how to distinguish users in your URLs:
Use their email address. In theory this can change, and users may not want their address in a URL they will share. If the URLs are private, this is more or less a moot point.
Use their user_id. This is opaque and reveals no useful information about the user, so it's safe, but it's also meaningless and hard to remember.
Let users pick a nickname for their URLs, like Facebook and other services do, on a first-come, first-served basis.
I'm writing a web application that will track incoming traffic to a website -- where it originates and how it behaves on our site -- so that we can get some idea of the return on investment of our marketing campaigns, the actual keywords and their value to us (rather than to Google), the traffic we are losing, and the spend we are wasting.
Part of this involves looking at the referrer information from the browser on the first page visited. Referrers like Google Organic and Google Paid Search are easy to identify using regex matching to look for particular strings within the referrer (I'm using PHP's $_SERVER). The same is true for Bing, Ask, Yahoo, LinkedIn and Facebook.
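Roughly, the kind of check I mean looks like the sketch below (written in TypeScript here purely for illustration -- my actual code is PHP against $_SERVER['HTTP_REFERER'] -- and the patterns are simplified, not the real list):

    // Illustrative only -- real patterns need to be more careful and exhaustive.
    function classifyReferrer(referrer: string): string {
      if (/googleads\.g\.doubleclick\.net\/pagead\/ads/.test(referrer)) return "google-display";
      if (/\bgoogle\.[a-z.]+\//.test(referrer)) return "google-search"; // organic vs paid split on further params
      if (/\b(bing|ask|yahoo)\./.test(referrer)) return "other-search";
      if (/\b(linkedin|facebook)\./.test(referrer)) return "social";
      return "referral"; // includes hard-coded Display Network placements -- the problem case below
    }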
But I'm having a problem with one particular source: the Google Content Network. Sometimes traffic coming from these ads has a nice referrer that begins with http://googleads.g.doubleclick.net/pagead/ads?
which is obviously easy to code for. On the other hand, traffic from sites showing our ads sometimes arrives with the referrer of the site itself, as though it were a hard-coded link. This second, hard-coded type of link is causing problems, as we can't differentiate it from regular referred traffic.
So, other than tagging the URLs our ads point to with something like '?source=gcn', or scraping the referring page to look for a hard-coded link or a Google ads iframe, has anyone got any magic sauce to overcome this issue?
Thanks in advance
Ross
So, it seems I've been looking in completely the wrong place for a solution to this.
In a nutshell, the problem is that I need to access Google PPC information about visitors to my site, but Google doesn't always pass this information along in the referrer, and it is particularly problematic where a Display Network ad is inserted directly into the DOM of a page using JavaScript.
Where should I have been looking? Google Analytics. The __utmz cookie contains a wealth of information regarding the route by which traffic got to the site, including whether visitors came via PPC, Organic or the Display Network, and the search terms (where applicable) that got them there.
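Pulling the fields apart is straightforward. A rough sketch (in TypeScript for illustration; the field names are the ones described in the docs linked below, and exact values depend on how your campaigns are tagged):

    // Rough sketch of extracting campaign data from the classic GA __utmz cookie.
    // Format: hash.timestamp.sessions.campaign.utmcsr=...|utmccn=...|utmcmd=...|utmctr=...
    function parseUtmz(cookieValue: string): Record<string, string> {
      // Everything after the four leading numeric fields is the campaign data.
      const campaignPart = cookieValue.split(".").slice(4).join(".");
      const fields: Record<string, string> = {};
      for (const pair of campaignPart.split("|")) {
        const [key, value] = pair.split("=");
        if (key && value !== undefined) fields[key] = decodeURIComponent(value);
      }
      return fields;
    }

    // e.g. parseUtmz("12345.1300000000.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=widgets")
    // => { utmcsr: "google", utmccn: "(organic)", utmcmd: "organic", utmctr: "widgets" }
    // utmcmd is typically "organic" for organic search and "cpc" for paid clicks; see the
    // linked docs for how Display Network traffic is labelled.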
See the following page for more information:
http://code.google.com/apis/analytics/docs/concepts/gaConceptsCookies.html
Who'd have thought! Anyway, there is some great documentation on what the cookies do and how they are constructed. Problem Solved.
Ross
I want to write a program that analyzes your fantasy baseball team and notifies you of recommended actions, possibly multiple times per day. The problem is, you aren't playing fantasy baseball on my site; you're playing on Yahoo, or CBS, or ESPN, etc.
On the majority of these sites, fantasy teams and leagues are not public, so you must be logged in and a member of the league to see the teams in the league.
All that I need is the plain HTML for the team page on each of those sites to be sent to my server, where I can then parse and analyze the file and send user notifications.
The problem is that I need username/password combinations to easily get this data to my server when I need it, and I think there will be a lot of people who wouldn't want to entrust their yahoo/espn/cbs password to me.
I have come up with several possible ways to solve this problem:
The most obvious way is to ask for their credentials for the site on which their team is hosted. Then I could just programmatically log in and request the data I need. I'm guessing a number of people would be comfortable giving me their credentials, and a number of them not so much.
Write a desktop client, which the user then downloads. The client would require their credentials, but it could then basically do exactly the same thing that the server based version would do, log in, request the page, and send the page back to my server. The difference being that their password would never need to leave their desktop. Their computer would need to be on, and this program running for this method to work.
Write browser add-ons that navigate to the page I need, use the cookie that is saved from a previous login to login to the site, and send the page back to my server. This doesn't require my software to ever ask for their password, but if the cookie expires I am hosed, and I don't know much about browser add-ons besides.
I'm sure there are other options, but these are what I've come up with so far.
I have two questions:
1. What are the other possibilities for this type of task?
2. Am I over-estimating people's reluctance to give me their yahoo (for example) password? Is option (1) above the obvious choice?
It was suggested in the comments that I try Yahoo Pipes, and that looked like a promising suggestion, so I explored it a bit. Having now looked at it, I don't think it is an option. So, it looks like I'll be going with option 1.
This is a problem I grappled with a couple of years ago when I wanted to do the same thing. Our site is http://benchcoach.com and the options we were considering were the following:
Originally we considered getting the user's credentials and logging in. We would then log in and scrape their league and team info. The problem there is that, after reading several of the various terms of service, this would definitely be violating the terms of service. On top of this, Yahoo! was definitely one of the sites we were considering, and their users have email (where we could get access to sensitive data) and Yahoo! Wallet. In addition, it would be pretty trivial for Yahoo/ESPN/CBS to block our programmatic logins by IP address.
The solution we settled on (not 100% happy with it, but it does seem to work) was asking our users to install a bookmarklet (like Delicious, Digg or Reddit use) which would post the current HTML page to our servers, where we could parse the data and load our database. If they were still logged into their Yahoo/ESPN/CBS account, we would direct them straight to the pages; otherwise, those sites would prompt for authentication. Clicking the bookmarklet once more would post the page to our servers.
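The bookmarklet itself boils down to only a few lines, roughly like the sketch below (the endpoint is made up, and a real version has to deal with minification into a javascript: URL and with very large pages):

    // What the bookmarklet does, sketched out; normally this is minified into a javascript: URL.
    function postPageToServer(): void {
      const form = document.createElement("form");
      form.method = "POST";
      form.action = "https://collector.example.com/collect"; // hypothetical endpoint

      const html = document.createElement("input");
      html.type = "hidden";
      html.name = "page_html";
      html.value = document.documentElement.outerHTML; // the league/team page the user is viewing

      const url = document.createElement("input");
      url.type = "hidden";
      url.name = "page_url";
      url.value = location.href;

      form.appendChild(html);
      form.appendChild(url);
      document.body.appendChild(form);
      form.submit(); // the browser posts the page contents; our server never sees credentials
    }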
The pros of this approach were that we never collected anyone's credentials, so any security concerns were alleviated. Secondly, it made it impossible for Yahoo/ESPN/CBS to block access to our service, since we never connected directly to their servers; rather, the user's browser posted the contents of the page to our server.
The problem with this is that it takes two clicks to post a page to our site. For head-to-head leagues, we needed 3-4 pages, so it would take our user 6-8 clicks to sync their league to our servers. We're still looking at options for this.
One important note is that I ran into the product manager of the Yahoo Fantasy Football site at a conference a year ago. We talked about how we were getting the Yahoo data, and he confirmed that getting credentials would violate their TOS and that they might stop us. While I don't think they would have, it would have made it hard to invest time and energy developing this only to have them block our site and piss off users by closing their accounts.
A potentially more complicated answer could possibly be done with (for example) Yahoo Pipes.
Hypothetically, you create a pipe which prompts the user for their credentials and provides them with a URL which contains their scraped data. They enter this URL on your site, and never have to provide their credentials directly. Even better, for the security-conscious, it would be possible to examine what the pipe was actually doing before entering any information.
The downside would be increased complexity (as well as you'd have to write and maintain the pipe). Having said that, you could provide a link directly to the published pipe from your site, to make things as easy as possible.
Option 1 is the obvious choice. People who trust your site will provide the details. There is no other way you can log in to another site while screen scraping.