This question is in regard to Google AdWords conversion tracking. My client already has conversion tracking on the confirmation page that appears when users successfully sign up for a trial account, but they want to use the same conversion tracking code for confirmation pages that display when they convert their trial account to a paid subscription. They want to see how many trial accounts are being converted to paid accounts/subscriptions. Can they use the same conversion tracking code on multiple confirmation pages? Or should they set up a separate conversion tracking code to track conversions from trial to paid accounts? What is the best practice, and the best way to track how many trial accounts are converting to paid accounts?
Thank you in advance for any insight and/or suggestions.
They could use the same conversion tracking code in different places, but I would not recommend it, because later it will be difficult to differentiate between the two conversions.
In AdWords you can have multiple conversion types; my suggestion is that they use a separate conversion for each conversion action.
There are two ways to differentiate the conversions:
1. Set up different conversion types in your AdWords account and place the tags on the relevant pages:
- free trial thank-you page
- paid subscription thank-you page
2. Use the same conversion tag, but update the values within the script to reflect the "steps" in the conversion funnel.
Option 1 is much easier to implement.
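The original question predates the current Google Ads tag, but with today's gtag.js syntax option 1 would look roughly like the sketch below. The conversion ID and labels are placeholders; the real values come from the two conversion actions you create in the account.

```typescript
// Placeholder ID/labels -- substitute the ones Google Ads generates
// when you create each conversion action.
declare function gtag(...args: unknown[]): void;

// Trial sign-up confirmation page (conversion action #1):
gtag('event', 'conversion', {
  send_to: 'AW-123456789/TRIAL_SIGNUP_LABEL',
});

// Trial-to-paid confirmation page (conversion action #2):
gtag('event', 'conversion', {
  send_to: 'AW-123456789/PAID_UPGRADE_LABEL',
  value: 29.0,     // optional: the subscription value
  currency: 'USD',
});
```

Because each page fires a different conversion label, the two actions show up as separate columns in reporting, which is exactly the trial-to-paid breakdown the client wants.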
How do Zapier/IFTTT implement the triggers and actions for different API providers? Is there a generic approach, or are they implemented individually?
I think the implementation is based on REST/OAuth, which looks generic from a high level. But Zapier/IFTTT define a lot of trigger conditions and filters, and these conditions and filters have to be specific to each provider. Is the corresponding implementation individual or generic? If individual, it must take a vast amount of labor. If generic, how is it done?
Zapier developer here - the short answer is, we implement each one!
While standards like OAuth make it easier to reuse some of the code from one API to another, there is no getting around the fact that each API has unique endpoints and unique requirements. What works for one API will not necessarily work for another. Internally, we have abstracted away as much of the process as we can into reusable bits, but there is always some work involved to add a new API.
PipeThru developer here...
There are common elements to each API which can be re-used, such as OAuth authentication, common data formats (JSON, XML, etc). Most APIs strive for a RESTful implementation. However, theory meets reality and most APIs are all over the place.
Each service offers its own endpoints, and there is no commonly agreed-upon set of endpoints that is correct for a given kind of service. For example, within CRM software, it's not clear how a person, notes on said person, corresponding phone numbers, addresses, and activities should be represented. Do you provide one endpoint or several? How do you update each? Do you provide tangential records (like the company for the person) with the record or not? Each requires specific knowledge of that service as well as some data normalization.
Most of the triggers involve checking for a new record (unique id) or an updated field, most usually the last-update timestamp. Most services present their timestamps in ISO 8601 format, which makes parsing timestamps easy, but not all of them do. Dropbox actually provides a delta API endpoint to which you can present a hash value, and Dropbox will send you everything new/changed from that point. I would love to see delta and/or activity endpoints in more APIs.
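To make that polling pattern concrete, here is a minimal sketch of the dedupe logic. The field names `id` and `updated_at` are assumptions; every real API spells these differently, which is part of why each integration needs individual work.

```typescript
// Generic "new or updated record" check for a polling trigger.
interface Item {
  id: string;          // unique record id (assumed field name)
  updated_at: string;  // ISO 8601 timestamp (assumed field name)
}

const seenIds = new Set<string>();
let lastPoll = new Date(0);

function newOrUpdated(items: Item[]): Item[] {
  const hits = items.filter(
    (item) =>
      !seenIds.has(item.id) ||
      new Date(item.updated_at).getTime() > lastPoll.getTime()
  );
  for (const item of hits) seenIds.add(item.id);
  lastPoll = new Date();
  return hits; // each hit would fire the trigger
}
```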
Bottom line, integrating each individual service does require a good amount of effort and testing.
I will point out that Zapier did implement an API for other companies to plug into their tool. Instead of Zapier implementing your API and Zapier polling you for data, you can send new/updated data to Zapier to trigger one of their Zaps. I like to think of this like webhooks on crack. This allows Zapier to support many more services without having to program each one.
I've implemented a few APIs on Zapier, so I think I can provide at least a partial answer here. If not using webhooks, Zapier will examine the API response from a service for the field with the shortest name that also includes the string "id". Changes to this field cause Zapier to trigger a task. This is based off the assumption that an id is usually incremental or random.
I've had to work around this by shifting the id value to another field and writing different values to id when it was failing to trigger or triggering too frequently (dividing the id by 10 before writing it can reduce the trigger sensitivity, for example). Ambiguity is also a problem, for example in an API response that contains fields like post_id and mesg_id.
Short answer is that the system makes an educated guess, but to get it working reliably for a specific service, you should be quite specific in your code regarding what constitutes a trigger event.
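As a hypothetical illustration of that workaround, the API response can be reshaped before Zapier sees it (all field names here are invented for the example):

```typescript
// Move the noisy native id aside and expose a coarser value as `id`,
// so an id-based trigger heuristic fires less often.
function reshape(item: { id: number } & Record<string, unknown>) {
  return {
    ...item,
    native_id: item.id,            // preserve the original value
    id: Math.floor(item.id / 10),  // the "divide by 10" trick from above
  };
}
```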
What is best to do first: configure SiteCatalyst (Omniture) within the platform (i.e., naming the reports/variables), or deploying the tags?
You can get some standard reports just by tagging the standard variables (pageName, products, and events like scAdd, purchase, etc.), but at some point you have to define the metrics your company specifically needs in order to get the most out of Analytics and report against KPIs.
It is important to understand (on paper, etc.) the business KPIs and reports your business needs to see, then define/configure the variables (eVars/events/props) so that they support those KPIs/reports, then tag to populate those variables, and then, once you have data in the system, design the reports/dashboards in Analytics (SiteCatalyst). Then iterate over this many times.
I would answer "at the same time". In the case of a standard implementation, configuration happens a bit earlier; with Context Data Variables it can go the opposite way. But as Crayon mentioned, the reports don't make sense until you have done both.
And above all, I would highly recommend doing the analysis and documentation before both of these steps.
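To illustrate the two orderings: with a standard implementation the tag writes directly into variables that were already named in the Admin Console, whereas with Context Data Variables you can tag first with neutral key names and map them to eVars/props/events later via Processing Rules. A rough sketch, where all values and variable assignments are examples rather than a recommended plan:

```typescript
declare const s: any; // the SiteCatalyst measurement object from s_code.js

// Standard implementation: variables configured first, tag second.
s.pageName = 'checkout:confirmation';
s.eVar1 = 'paid-search';        // eVar1 configured as, say, "Marketing Channel"
s.events = 'event1,purchase';   // event1 configured as "Trial Signup"

// Context Data alternative: neutral keys now, Processing Rules later.
s.contextData['page.type'] = 'confirmation';
s.contextData['order.total'] = '29.00';

s.t(); // send the image request
```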
We use a tool that tracks individual users' mouse movements and clicks on our site. Right now it only tracks anonymous visitors, but we're thinking of using it to track specific logged in users' data. We'd be using it for analytics, but we'd like to have the data in case we need to analyze how a particular person uses the site.
Are people, in general, alright with this? Does this constitute privacy infringement?
The short answer is that it is your site, and for the most part (for now) you can track whatever you want on it.
However, some things to consider...
a) 3rd party analytics tools have their own privacy policies and terms of service that may or may not allow this, so if you are using something like Google Analytics, Omniture SiteCatalyst, WebTrends, Yahoo Web Analytics, etc., then you need to read their privacy policy and terms of service to make sure you are allowed to track this sort of thing. Offhand, I don't think any of the ones I mentioned disallow tracking mouse movements/clicks specifically (in fact, some of them have features/plugins for it, called "clickmap" tracking or similar), but some do have restrictions on other data you may couple with this. For example, I know Google does not allow you to associate any data with the user's IP address. You cannot send it to GA in a custom variable, nor can you store it on your own server in any way that lets you associate it with data you send to GA (for example, storing the user's IP in your own database along with a unique id, and then sending the unique id to GA, where you could then look up the IP by that unique id).
b) Privacy is indeed a concern that is currently being discussed by the powers-that-be, and your ability to track certain things may indeed be limited in the future. For now, it's mostly about personally identifiable information, and it's mostly happening in Europe, and tracking mouse movement/clicks generally isn't personally identifiable, but who knows what the future may bring.
c) Make sure you understand the costs involved in tracking mouse movements/clicks. In order to track something, a request has to be made and data sent somewhere. The more granular the data, the more requests and/or data need to be sent. Whether it is your own home-grown tracking solution on your own server or a 3rd party one, this will cost something one way or another. Imagine sending a request to a server for every x,y position of the mouse as it moves... this could quickly add up, and a lot of 3rd party solutions place a limit on how many requests can be made per visit(or) or day on an account (sampling and batching help here; see the sketch at the end of this answer).
d) On that note, if you are using a 3rd party solution, tracking something this granular may affect tracking more important stuff. As mentioned in "c", many 3rd party solutions limit how many requests can be made per visit(or) or day on your account, and if you hit the limit, any requests after that won't be tracked. Imagine tracking on a sale confirmation page, recording details about a sale made, which is very important tracking, being tossed out because of too many requests for mouse movements on some random page...
e) On that note... consider how actionable tracking mouse movements and clicks really is for you. This is a question you have to ask yourself whenever you want to track something: "How actionable is this?" Basically, imagine yourself having the tracking in place and looking at the data... then what? What will you do with that data? Assuming the ultimate goal is to make more money, increase conversions on your site, etc., do you really think knowing the paths a mouse cursor took on a given webpage will help you increase sales/conversions? How will you be able to tell whether the mouse movements are related to content on your page, or were just random jerks/movements while reading content or making room on a desk? At best, the data will be polluted...
Clicks on links or specific action buttons on a page? Sure, those are certainly worth tracking. And most 3rd party solutions automatically track a lot of that stuff, or offer custom coding solutions for manual wiring up of things. And there are plenty of reports that can be made showing activity from them.
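On the cost point in (c), sampling and batching are the usual mitigations. A minimal sketch, assuming a hypothetical collector endpoint at `/track`:

```typescript
// Throttled mouse-move sampling with batched delivery: at most a few
// samples per second, flushed as one request every ten seconds.
const buffer: Array<{ x: number; y: number; t: number }> = [];
let lastSample = 0;

document.addEventListener('mousemove', (e) => {
  const now = Date.now();
  if (now - lastSample < 250) return; // sample at most 4x per second
  lastSample = now;
  buffer.push({ x: e.clientX, y: e.clientY, t: now });
});

setInterval(() => {
  if (buffer.length === 0) return;
  // One batched request instead of one request per mouse position.
  navigator.sendBeacon('/track', JSON.stringify(buffer.splice(0)));
}, 10_000);
```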
Imagine that you have thousands or millions of documents signed in CAdES, XAdES or PAdES format. A signing certificate for an end user is typically issued for 1-3 years. After a few years the certificate will expire, the revocation data (CRLs) required for verification will no longer be available, and the original crypto algorithms will not guarantee anything after 10-20 years.
I am curious whether there is some mature, ready-to-use solution for this. I know this can be handled by archive timestamps, but I need a real product which will automatically maintain the data required for long-term validation, add timestamps automatically, etc.
Can you recommend an application or library? Is it a standalone solution, or something that can be integrated with FileNet or a similar system?
The EU is currently trying to endorse Advanced Electronic Signatures based on the CAdES, XAdES and PAdES standards. These were specifically designed with the goal of enabling long-term archiving and validation.
CAdES is based on CMS, XAdES on XML-DSig and PAdES on the signatures defined in ISO 32000-1, which themselves again are based on CMS.
One open source solution for XAdES is the Belgian eID project; you could have a look at that.
These are all standards for signatures, they do not, however, go into detail on how you would actually implement an archiving solution, this would still be up to you.
However, that is exactly what I am looking for. It seems that the Belgian eID project mentioned above does not address it at all. (I added some clarification to my original question.)
You may find this web site helpful. It's an official site, even though it's pointing to an IP address. The site discusses your problem in detail and offers a great deal of advice on dealing with long-term electronic record storage through a standards-based approach.
The VERS standard is quite extensive and fully supports digital signatures and how best to deal with expired signatures.
The standard is also being adopted by leading EDMS/ECM providers.
If I understood your question correctly, our SecureBlackbox components support the XAdES, PAdES and CAdES standards, pull the necessary revocation information (and timestamps), and embed them into the signature automatically.
I'm coding a new {monthly|yearly} paid site with the now-typical "referral" system: when new users sign up, they can specify the {username|referral code} of another user (this can be detected automatically if they came through a special URL), which causes the referrer to earn a percentage of anything the new user pays.
Before reinventing the wheel, I'd like to know if any of you have experience with storing this kind of data in a relational DB. Currently I'm using MySQL, but I believe any good solution should be easily adapted to any RDBMS, right?
I'm looking to support the following features:
Online billing system - once each invoice is paid, earnings for referrers are calculated and they can cash out. This includes, of course, the ability to browse invoices/payments online.
Paid options vary - they differ in nature and in cost (which will change over time), so commissions should be calculated based on each final invoice.
Keeping track of referrals (the relationship between users, the date on which the referral happened, and any other useful information - any ideas?)
A simple way to access historical referral data (how much has been paid) and accrued commissions.
In the future, I might offer to exchange accrued cash for subscription renewal (covering the whole of the new subscription or just part of it, paying the difference if needed).
Multiple levels - I'm thinking of paying around 10% of directly-referred earnings + 2% for the next level, but this may change in the future (adding more levels, changing percentages), so I should be able to store historical data.
Note that I'm not planning to use this in any other project, so I'm not worried about it being "plug and play".
Have you done any work with similar requirements? If so, how did you handle all this stuff? Would you recommend any particular DB schema? Why?
Is there anything I'm missing that would help make this a more flexible implementation?
Rather marvellously, there's a library of database schemas. Although I can't see anything specific to referrals, there may be something related. At least (hopefully) you should be able to get some ideas.
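For what it's worth, here is a minimal sketch of one possible schema covering the requirements above. All table and column names are my own invention, and better-sqlite3 is used purely for illustration; the same DDL ports to any RDBMS.

```typescript
import Database from 'better-sqlite3';

const db = new Database(':memory:');
db.exec(`
  CREATE TABLE referrals (
    referred_user_id  INTEGER PRIMARY KEY,  -- a user is referred at most once
    referrer_user_id  INTEGER NOT NULL,
    referred_at       TEXT NOT NULL,        -- when the referral happened
    source_url        TEXT                  -- the special URL, if any
  );

  -- One row per invoice per referral level. Snapshotting the rate on
  -- each row preserves historical data even if the percentage scheme
  -- (10% direct + 2% next level) changes later.
  CREATE TABLE commissions (
    id              INTEGER PRIMARY KEY,
    invoice_id      INTEGER NOT NULL,
    earner_user_id  INTEGER NOT NULL,
    level           INTEGER NOT NULL,  -- 1 = direct referral, 2 = next level
    rate            REAL NOT NULL,     -- e.g. 0.10 or 0.02 at invoice time
    amount          REAL NOT NULL,     -- rate * final invoice total
    paid_out_at     TEXT               -- NULL until cashed out or exchanged
  );
`);
```

Because commissions are rows derived from paid invoices rather than running totals, accrued and historical amounts are just SUM queries filtered on `paid_out_at`, and a future "exchange accrued cash for renewal" feature only needs to mark rows as consumed.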