•on August 13th, 2012
Analytics software does not ignore admin and team visits by default; you have to create filters to exclude them. In Google Analytics, filters can be set to exclude visits from specific IP addresses, or to report visits only from a subdomain or directory.
Steps to create a Filter to ignore team and admin visits in GA
1) Under Admin, click the Filters tab
2) Click ‘New Filter’ button and enter an appropriate name – “Exclude Admin”
3) Select ‘Custom Filter’ and enter the action “Exclude IP addresses”
4) To cover an entire IP range, use the tool Google provides to generate the regular expression for the range
5) Enter the first and last IP address for your organization and click ‘Generate RegEx’
6) Copy the regular expression and paste it into the Filter Pattern field from step 3)
7) Select the profile where you want the filter applied, and click Add
8) Click Save
Google Analytics will exclude visits from all the IPs in the range.
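As an illustration of what the generated expression can look like, here is a small Python sketch; the office range 192.168.1.100 – 192.168.1.110 is a made-up example, not output copied from Google's tool:

```python
import re

# Hypothetical office range: 192.168.1.100 - 192.168.1.110.
# "10[0-9]" covers 100-109, and "110" is listed as an alternative.
IP_FILTER_PATTERN = r"^192\.168\.1\.(10[0-9]|110)$"

def is_excluded(ip):
    """Return True if the visitor IP falls inside the excluded range."""
    return re.match(IP_FILTER_PATTERN, ip) is not None

print(is_excluded("192.168.1.105"))  # True: inside the range
print(is_excluded("192.168.1.111"))  # False: outside the range
```

Pasting a pattern like this into the Filter Pattern field tells Google Analytics to drop every hit whose IP matches.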
Steps to create a Filter to ignore team and admin visits in Clicky
1) Go to Preferences -> Visitor Tags and Filters
2) Enter the IP range and give an appropriate name
3) Select “Do not log visits from this IP / UID.”
4) Select “Global” and click ‘Submit’
On August 9th, 2012, Google released guidelines for conducting A/B and multivariate tests. The guidelines were released in response to a commonly asked question – “How will testing impact search ranking?” Google has also warned marketers against running experiments for too long.
Below are some guidelines for running an effective test with minimal impact on your site’s search performance.
1) Don’t Cloak
Cloaking is the practice of showing different content to bots and humans based on the User-Agent value.
When you visit a site, your browser sends an HTTP request with a set of request headers (including the User-Agent). The server then sends back an HTTP response with response headers such as:
Status: HTTP/1.1 200 OK
Date: Sat, 11 Aug 2012 09:23:46 GMT
In cloaking, the website checks the User-Agent header; if it belongs to a search bot like Googlebot, the site serves an SEO-optimized page, while all other user agents get a different page.
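The check a cloaking site performs can be sketched in a few lines of Python. This is shown only to illustrate the mechanism Google penalizes, not as something to deploy; the page names are made up:

```python
def choose_page(user_agent):
    """Cloaking in a nutshell: branch on the User-Agent header.

    Serving bots different content than humans is exactly what
    Google's guidelines forbid - this sketch only shows the mechanism.
    """
    bot_signatures = ("Googlebot", "bingbot", "Slurp")
    if any(bot in user_agent for bot in bot_signatures):
        return "seo_optimized_page.html"  # what the crawler sees
    return "visitor_page.html"            # what a human sees

print(choose_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(choose_page("Mozilla/5.0 (Windows NT 6.1) Chrome/21.0"))
```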
2) Use rel=”canonical”
Add a rel="canonical" link in the head section of all your variation pages, like:
<link rel="canonical" href="http://www.example.com/control"/>
The Google team prefers rel="canonical" to NOINDEX for testing. If you don’t set rel="canonical" and just use NOINDEX, there is a possibility that Googlebot will ignore the NOINDEX directive and randomly pick one of the variations as the main page. When the bot then sees the actual main page, it may treat it as a duplicate and de-index it.
3) Use 302 redirects instead of 301s
Google’s guidelines recommend using 302 redirects instead of 301s for all A/B tests, and there is a reason behind this. A 301 redirect passes nearly 90 to 99 percent of link value (per studies by SEOMoz), while a 302 passes none. When you are running A/B tests, you don’t want your control pages to lose link juice.
A 302 redirect is a temporary redirect, so search engines will ignore the variation pages for as long as the experiment runs. Once your experiment is complete, 301 redirect all your variation pages: visitors might have reached a variation page and bookmarked it or shared it with friends, and you don’t want to lose that traffic.
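A server-side sketch of the guideline might look like this; the Python function is hypothetical and only mirrors which status code to return at each stage:

```python
def redirect_for_variation(experiment_running):
    """Pick the redirect status code for variation-page traffic.

    While the test runs, 302 (temporary) tells search engines to keep
    indexing the control page; after the test, 301 (permanent) passes
    the accumulated link value to the winner.
    """
    if experiment_running:
        return 302  # temporary: search engines ignore the variation URL
    return 301      # permanent: pass link value once a winner is live

print(redirect_for_variation(True))   # 302, during the experiment
print(redirect_for_variation(False))  # 301, after the experiment
```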
4) Don’t run the experiments for too long
This is a controversial guideline. Conversion rate optimization evangelists recommend testing the page until you reach statistical significance, and Google recommends the same:
“A good testing tool should tell you when you’ve gathered enough data to draw a reliable conclusion”
That is great; but here comes the warning:
“If we discover a site running an experiment for an unnecessarily long time, we may interpret this as an attempt to deceive search engines and take action accordingly”
Google does not have access to the visitor data for each variation, unless you have integrated the testing tool with Google Analytics. For low to medium traffic sites, it can take a considerable number of months to reach statistical significance. The “unnecessarily long time” in the guideline is ambiguous and might scare marketers away from more flexible tools like VWO and Optimizely, pushing them toward GA Content Experiments (which, honestly, requires a lot of work to set up a simple A/B test).
P.S: Don’t forget to remove the scripts and markups after you have found the winner.
Behavioral targeting provides businesses with the ability to serve customized marketing messages, content and products according to the visitor's preferences, demographics and behavior. From 2011, companies started implementing behavioral targeting in ad networks, with results such as a reported 670% improvement in click-through rate. Retargeting ads through Google AdWords are an example of behavioural targeting: if you have seen ads chasing you based on your browsing and search patterns, you already know a little about BT.
How does Behavioral-targeting work?
What makes Behavioral targeting effective?
Showing relevant marketing messages, content and ads is not by itself what makes BT unique. BT uses machine-learning algorithms to predict the visitor’s next action from behavior such as searches, clicks, page interactions and purchases.
What does it mean for Online Marketers?
In conversion rate optimization, testing can only be as good as your ideas. Similarly, for behavioural targeting, you have to:
1) Understand your visitors, their goals, age and preferences.
2) Understand the intent of their search
3) Understand why they bounced from a particular page
Most BT tools will customize content and advertisements according to demographics, but that is not necessarily the most effective way to improve conversion or user experience. Before subscribing, understand how each BT tool works: is it rule-based or adaptive? If it is adaptive, find out whether the targeting is based on demographics.
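To make the rule-based vs. adaptive distinction concrete, here is a minimal rule-based targeting sketch in Python. The segments and messages are invented for illustration; real BT tools let you configure rules like these in their UI:

```python
def pick_message(visitor):
    """Return a marketing message for a visitor, using hand-written rules.

    `visitor` is a dict with keys such as 'visits', 'referrer' and
    'has_purchased' - a simplified stand-in for what a BT tool tracks.
    """
    if visitor.get("has_purchased"):
        return "Welcome back! Check out our new arrivals."
    if visitor.get("visits", 0) > 3:
        return "Still deciding? Here is a 10% discount."
    if visitor.get("referrer") == "search":
        return "New here? See why 500+ companies trust us."
    return "Browse our most popular products."

print(pick_message({"visits": 5, "referrer": "search", "has_purchased": False}))
```

An adaptive tool would learn thresholds and segments like these from the data instead of having a marketer write them by hand.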
If you have a fair understanding of how your visitors behave, it is time to customize your marketing message and content for each of them. Conversion rate optimization tools like VWO and Optimizely have integrated simple to complex rule-based behavioural targeting into their systems. But if you are looking for companies that focus entirely on behavioural targeting, here are a couple of them:
BTBuckets is a free* tag-based solution that allows you to automatically segment and target users based on behavioural, demographic, and technographic information. Serve personalized content to these clusters according to the rules that you set.
Personyze is a complete set of tools focused on enhancing user-experience and increasing conversions and revenue. The suite features highly advanced and powerful segmentation and personalization capabilities, as well as analytics and testing tools
Blogs, Google Adwords
•on August 9th, 2012
AdWords Automated Rules allow you to set conditions that trigger automatic account changes. For PPC marketers, this feature is useful in campaigns where frequent changes are common. As a digital agency, you cannot ignore a campaign and expect the rules to take over; daily monitoring and changes are inevitable. But automated rules are especially useful if you have international clients, where the time zone difference prevents you from making hourly changes even though the quality and number of visitors to your landing page change throughout the day.
Let us say that while you were asleep, the conversion rate for your international client increased by over 25% during a 4-hour period. Is it a trend? Find out by increasing the impression share for the keyword. You can do that by bidding for the first page.
Scenario: Increase keyword bid during a 4-hour period when the conversion increases by 25%.
Increased Conversion: 12:00 am to 4 am (time when we were sleeping)
Automated rules are available at the campaign, ad group, ad and keyword levels. For the above scenario, we need campaign-level (budgeting) and keyword-level rules.
1) First let us find out the rules available at the keyword level
• Change Max CPC bids when
• Raise Bids to top of page CPC when
• Raise bids to first page CPC when
• Pause keywords when
• Enable keywords when
2) We need the third rule – “Raise bids to first page CPC”
3) The Increase keyword bids to first page CPC window gives us multiple options for setting the automated rules
a) Apply To: Selected keywords, All Keywords or All but deleted keywords
For the above scenario, apply the rule only for the selected keyword
b) Automatic Action: Max Bid or Manual Bid
If the max CPC bid is within the campaign’s daily budget, tick the ‘Max bid’ checkbox.
If your average ad position is at the top of the second page, estimate the additional budget and fill in the bid value manually.
c) Requirements: Conversions, Performance, Ad Group, Ad Group Name, Campaign, Campaign Name, Max CPC, Dest URL, Keyword Text, Auction insights, Status, Qual Score, Match Type and Labels
Requirements are conditions that you can set for triggering the rules. Fourteen variables are available for the requirements.
In the above scenario, we just need the Conversions variable, specifically the conversion rate.
d) Frequency: One Time, Daily, Weekly and Monthly (Time)
We want to test the Ad daily for a week. So we will pick daily at 12:00 am, when we saw the first increase in conversion.
4) Give the rule a recognizable name and set notification preferences under Email results. An email notification can be sent whenever the rule is triggered, when there are changes or errors, or only when there is an error in the rule.
5) Click the Automated Rules under Campaign Tab to find the list and logs of each rule. You can control and monitor the rules from this section.
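The condition behind the rule in the scenario above boils down to a simple check. This is hypothetical Python; AdWords evaluates the rule server-side, and this only mirrors the logic:

```python
def should_raise_to_first_page_cpc(baseline_cr, window_cr, threshold=0.25):
    """Mirror of the automated rule: fire when the conversion rate in
    the 12 am - 4 am window is at least 25% above the baseline."""
    return window_cr >= baseline_cr * (1 + threshold)

# Baseline conversion rate 2%; overnight window at 2.6% (+30% relative).
print(should_raise_to_first_page_cpc(0.02, 0.026))  # True: rule fires
print(should_raise_to_first_page_cpc(0.02, 0.024))  # False: only +20%
```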
Like this article? Share it with your PPC Marketing friends.
After you have developed a testable hypothesis, it is time to create the variations of your control page. Follow these recommendations before creating the variations:
1) List Variation Ideas
Your test is only as good as your ideas. Testing tools like VWO, Optimizely and GA Content Experiments can only implement your ideas; it is your team’s responsibility to come up with the variations.
a) Ask your team to list variation ideas
b) Provide an explanation for each variation
c) The recommended variation should be based on data (analytics or experiment results)
2) Start with the best practice list
Follow the best practices for creating web elements and page layout. For example, short forms have been shown to convert better than long forms.
Now create the best practice list for each of the following elements:
Call to Action (Use of Verb)
Forms (Number of Fields/Button/Call to Action)
3) Check the Control Page
Check your control page and evaluate whether it follows the conventions in the best practice list. Where it doesn’t, develop your variations based on the list.
4) Define Performance Metrics
Before starting the experiment, define the performance metrics. For buttons, it is the percentage of visitors that have clicked (Download Buttons) or percentage of visitors that have purchased (Checkout Buttons).
5) Test a Few Elements
It is common to see companies start tests with many variations. This increases the time needed to complete the experiment and, more importantly, teaches you very little about your visitors. Pick one element, say the call to action on a button, and create variations only for that element.
For example, if you are managing an e-commerce site, the ‘Add to Cart’ button contributes directly to the business’s bottom line. Many experiments have shown that ‘Add to Cart’ is the most effective call to action (CTA). But can you create a variation that beats ‘Add to Cart’?
6) Pick Experiment Type
After you have defined performance metrics, created variations and picked the elements for the test, select the experiment type. You can pick A/B or multivariate testing.
If you are testing more than one element, choose multivariate testing. If you want to see the impact of changing one element, choose A/B tests.
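The traffic cost of that choice is easy to quantify: in a multivariate test the page combinations multiply. A quick back-of-the-envelope in Python, where the element counts are an invented example:

```python
from math import prod

# A/B test: one element, whole-page versions.
ab_versions = 3                      # e.g. three headline versions

# Multivariate test: each element varies independently,
# so the page combinations multiply.
headlines, ctas, images = 3, 2, 2    # invented element counts
mvt_combinations = prod([headlines, ctas, images])

print(ab_versions)       # 3 pages to split traffic across
print(mvt_combinations)  # 12 combinations to split traffic across
```

Each combination needs enough visitors to reach significance, which is why multivariate tests suit high-traffic pages.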
7) Dramatic or Subtle Change
When you are testing one element of the page, like a call to action, you have a pre-defined set of CTAs. But if you are testing page layout, subtle changes will not give any marked improvement. For such elements, make dramatic changes:
1) Test three-column layout with two-columns.
2) Change background colour
3) Increase the Font of the Copy
4) Change the Navigation
8) Evaluate and Learn from the Experiment
Too often, experiments are run indefinitely without any evaluation, or marketers stop the experiment as soon as they find a winner. The idea behind conversion rate optimization is two-fold: find a winner and learn from the experiment.
Make sure that the marketing manager creates a Case Study about the experiment. The Case study should include the elements tested, the duration of the test, the variations created, the logic behind variations and the conclusions.
9) Make Changes and Observe
Once you have found the winner, make the changes. Sadly, on average, 80% of experiments fail. If the variation has improved conversion by over 20%, don’t hesitate to make the changes, and monitor the conversion trend for the next three months.
Before you jump into creating variations of your control page, formulate a hypothesis. A hypothesis is an assumption about the occurrence of an event. For example, if you believe that changing your checkout button color from blue to green will improve conversion, your hypothesis would be something like:
“Green checkout buttons attract more clicks than blue checkout buttons. According to 2012 studies on eye tracking, green has been shown to attract shoppers’ attention.”
This is a testable hypothesis, as you can use the 2012 eye-tracking studies to identify variables that indicate higher shopper attention: click-through rate, time on page, bounce rate or scroll rate.
A much more effective way to create a testable hypothesis, though, is to use analytics data for the control page. In this case, take the page with the blue buttons and list the following variable values:
1) Click Through Rate (CTR)
2) Time on page
3) Bounce Rate
Now use the most important variable for the experiment to create a testable hypothesis. For this experiment, CTR is the most important one. According to analytics, the click-through rate for the blue checkout button is 15%:
“Green Checkout buttons will attract more than 25% click through rate compared to 15% for Blue Checkout buttons”
From the start, we had already shortlisted the web element for this experiment (button color). In real-life experiments, you have to run a series of tests to find the web elements that influence conversion on your webpage. Otherwise you would be guessing, and the tests would take a considerable number of days to reach statistical significance.
In general, the following web elements influence conversion:
1) Page Layout
2) Background colour
What factors influence conversion?
1) Trust: Always assume that your visitors are sceptical about your offer, your product and your company. What can you do to win their trust? Guarantees in your offer, testimonials about your product and information about your company (contact, address and team) can all build it.
2) Relevance: Visitors coming from different traffic sources (search, social media, paid networks, email) behave differently. But on reaching the page, every visitor immediately asks one question – “Have I reached the right page?” The headline, copy, navigation and related pages all influence relevance.
3) Distraction: If web elements on the page distract visitors and hinder them from completing their task, conversion will go down. Distracting elements include advertisements inside the content that obstruct the reader’s eye path, auto-playing videos and auto-expanding advertisements.
•on August 6th, 2012
Bing SEO Reports is similar to the HTML Improvements feature in Google Webmaster Tools. In addition to HTML suggestions, Bing builds the report from 15 SEO best practices, and the reports run every two weeks. The main difference between SEO Analyzer and SEO Reports is that the latter generates reports based on the domains listed in the account. Some of the SEO suggestions are:
1. The <img> tag does not have an ALT attribute defined
2. The page contains multiple titles
3. Evaluated size of HTML is estimated to be over 125 KB and risks not being fully cached
4. The page is missing meta language information
5. The title is too short or too long
6. The <h1> tag is missing
7. There are multiple <h1> tags on the page
8. The description is missing in the head section of the page
9. The description is too long or too short
Here is an example of an SEO Reports Summary
The webmaster can click one of the SEO suggestions to see all the non-compliant pages.
Let us take an example of a detailed SEO analysis report for non-compliance with the rule: only one <h1> tag should exist on a page.
SEO Suggestion: There are multiple <h1> tags on the page.
Severity: High (Explains the need for attention. Severity can be High, Low or Moderate. Give Immediate attention to High Severity issues)
Error Count: 71 (Number of Errors)
Non-Compliant Pages: 26 (Number of pages with the Error)
Recommended Action: Remove redundant <h1> tags from the page source, so that only one <h1> tag exists.
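Several of these checks are mechanical enough to reproduce yourself. Here is a hedged Python sketch using the standard library's HTML parser; it covers only a few of Bing's fifteen rules, with the rule wordings quoted from the report above:

```python
from html.parser import HTMLParser

class SEOChecker(HTMLParser):
    """Collects a few Bing-style SEO signals from raw HTML."""
    def __init__(self):
        super().__init__()
        self.h1_count = 0
        self.imgs_missing_alt = 0
        self.titles = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.h1_count += 1
        elif tag == "title":
            self.titles += 1
        elif tag == "img" and "alt" not in dict(attrs):
            self.imgs_missing_alt += 1

def audit(html):
    """Return a list of SEO issues found in the given HTML string."""
    checker = SEOChecker()
    checker.feed(html)
    issues = []
    if checker.h1_count > 1:
        issues.append("There are multiple <h1> tags on the page")
    if checker.h1_count == 0:
        issues.append("The <h1> tag is missing")
    if checker.imgs_missing_alt:
        issues.append("The <img> tag does not have an ALT attribute defined")
    if checker.titles > 1:
        issues.append("The page contains multiple titles")
    return issues

print(audit('<title>A</title><h1>One</h1><h1>Two</h1><img src="x.png">'))
```

Running the report yourself between Bing's two-week cycles is a cheap way to catch regressions early.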
Register your website at Bing Webmaster Tools
•on August 6th, 2012
Microsoft has been trying hard to attract more webmasters to Bing with some nifty additions to the Bing Webmaster Tools collection. The latest, and the most effective, is the Bing SEO Analyzer.
As the name suggests, by entering a URL, the SEO Analyzer checks any verified domain pages for SEO errors.
SEO suggestions appear on the left-hand side, with a live preview on the right.
Webmasters can preview the errors live through SEO Analyzer
Error Count shows the total number of errors detected on the page
Inside the live preview window, you can see buttons with a ‘+’ sign. Clicking one expands it to show the issue at that location.
The page source will show the error at code level.
On June 1st, 2012, Google announced that it would retire Google Website Optimizer (GWO) on August 1st, 2012. GWO was a free website optimization tool that let webmasters improve conversion rates and user experience.
Google Analytics Content Experiments has replaced Google Website Optimizer. Here are some differences between the two:
1) Google Website Optimizer required different code snippets on the control, experiment and conversion pages. With GA Content Experiments, if you already have Google Analytics set up on your website (most websites do), you are ready to go with just one extra snippet on the original page.
2) GA Content Experiments allows only A/B tests, while Google Website Optimizer allowed both A/B and multivariate tests. Google markets the product as an A/B/N testing tool (multiple versions of a page), but most website optimization tools on the market support A/B/N by default. Also, GA Content Experiments allows only 5 versions of the same page.
Note: A/B tests let webmasters test two or more versions of a page, while multivariate tests let them simultaneously test multiple elements of a page – headline, fonts, calls to action, product description, images, reviews and so on.
3) You don’t need a separate interface for running A/B tests. Content Experiments are part of Google Analytics – check Content -> Experiments, as shown below:
How to run your first Content Experiment
1) Click Experiments
2) Click Create Experiments Button
3) Enter the URL of the page that you have selected for the Experiment
Before you select the page, here are some guidelines:
a) Pick a page with a considerable number of visitors. Not every website can claim thousands of visitors per page, so pick a comparatively popular page from your site.
b) Pick the element of the page that most impacts conversion, such as the headline, call-to-action button or copy. If the headline is the most important element, create 5 headline versions.
4) The next step is to choose the experiment pages.
5) You can add up to 5 variations.
6) After you have selected the experiment pages, click ‘Next Step’
7) Now you can set Experiment Options
a) First, select Experiment Objective Metric
You can either choose existing goals or add new goals as part of the Experiment metric.
b) Next, select the percentage of visitors for the experiment and Click ‘Next Step’
c) Now you can either add the experiment code yourself, or email the code to your webmaster.
d) If you are adding the code, copy the code and add it to the head section of the page. You don’t have to add any code for the variation pages.
e) Once you have added the code, review the experiment.
You have just created your first experiment using Google Analytics Content Experiments. If you have any questions about setting up content experiments, comment below or contact us.
We have been running a few tests to find out what makes visitors click links inside and below an article. When you have tools to measure user engagement at the page level, make use of them. You will be surprised to learn that a lot of your assumptions are wrong. It hurts, but it is going to be one of the best learning experiences.
For example, we were under the impression that using brand names in anchor text would boost click-through rate. Let us take two example links (these were not the actual test links, but we used similar branded vs. non-branded links).
Download Professor X’s lecture on Decision making using Macroeconomics Models (Professor X is a thought leader in the field of Decision making using Macroeconomics models. He has helped over 524 companies make informed decisions with his models)
Download 54 mins lecture on Decision making using Macroeconomics Models
If you go by common sense, a bio about the professor below the branded link should ensure a high click-through rate for the first link. But visitors’ lack of attention takes precedence over common sense: although the first link was the more compelling call to action, the second link got over 67% more clicks.
This is where we have to learn to draw conclusions from the experiment. If you take away nothing else from this article, just remember to do this one exercise.
Answer the following questions
1) What is the conclusion from the experiment?
In calls to action where a brand’s popularity is uncertain, a detailed explanation of the brand below the link will not produce a higher click-through rate.
2) What have you learned about the audience?
The audience doesn’t have the patience or attention to read details about the brand. It is better to provide a general call to action with a simple value proposition – in this case, “54 mins lecture on Decision making using Macroeconomics Models”.
Are we certain that this behaviour is the same for all audiences?
That is where Statistical Significance is so important.
In the above case, it was a simple A/B test in which we tested two links: the first a branded call to action with details below the link, the second a non-branded call to action with no details. For A/B tests, the number of visitors required to reach statistical significance is low, but you should still treat such results with doubt.
Doubt = Null Hypothesis
Statistician Ronald Fisher formalized this doubt as the null hypothesis: the assumption that no variation exists between variables, or that a single variable is no different from zero. It is presumed to be true until statistical evidence nullifies it in favor of an alternative hypothesis.
In the above case, the null hypothesis would be something like: “The difference in click-through rate is due to chance rather than the use of a non-branded call to action without an explanation.”
How will you prove the contrary?
You have to use a large enough sample size and collect data till you can prove statistical significance.
What should be the Sample Size?
Although the generic rule of thumb is 1,000 conversions per version, a more scientific way of calculating the sample size is the following:
1) Define the variable for conversion (clicks, sign ups etc.)
2) Decide what counts as a substantial difference in conversion (for A/B tests, any relative difference greater than 10%)
In the above case, if version A has a click-through rate of 2% and version B a CTR of 15%, there is clearly a substantial difference.
3) Establish the baseline conversion
For the past 5 months, we have seen 1.4 – 2.6% conversion on version A, with the median at 2%, so we take the baseline conversion as 2%.
4) Use Power Analysis
The basic principle of power analysis is to calculate the probability of detecting a real difference between the two versions. If the sample size gives you an 80% probability of detecting a real difference when one exists, the sample size is accepted.
Don’t worry about power analysis – most A/B testing tools have it built in.
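For the curious, a back-of-the-envelope version of that calculation uses the standard normal-approximation formula for comparing two proportions. This is a textbook approximation, not any particular tool's exact method, and the 2% baseline with a 25% expected lift is illustrative:

```python
from math import ceil

def sample_size_per_variation(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Visitors needed per variation to detect a change from p1 to p2.

    z_alpha = 1.96 corresponds to a 5% two-sided significance level;
    z_beta = 0.8416 corresponds to 80% power (the threshold above).
    Normal-approximation formula for two independent proportions.
    """
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2)

# 2% baseline conversion, hoping to detect a lift to 2.5% (+25% relative).
print(sample_size_per_variation(0.02, 0.025))
```

At roughly 14,000 visitors per variation for numbers like these, it is easy to see why low-traffic sites can need months to finish a test.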
How to use Statistical significance to reject Null Hypothesis?
Calculating statistical significance is beyond the scope of this article, and there are online tools that perform the calculation. But remember one number – 5%, the significance level. A significance level of 0.05 means we only reject the null hypothesis when the probability of seeing such a difference purely by chance is below 5%. So if the p-value is less than 0.05, we can safely reject the null hypothesis.
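The calculation behind those online tools is typically a two-proportion z-test, which fits in a few lines of Python. The visitor counts below are invented for the example:

```python
from math import sqrt, erf

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for the difference between two conversion rates.

    x1/n1 and x2/n2 are conversions/visitors for versions A and B.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = abs(p2 - p1) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# 1,000 visitors per version: A converts 20 times (2%), B 35 times (3.5%).
p = two_proportion_p_value(20, 1000, 35, 1000)
print(p < 0.05)  # True: reject the null hypothesis at the 5% level
```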
The A/B testing tools that we use collect the following information:
1) Existing conversion rate (%)
2) Expected improvement in conversion rate (%)
3) Number of combinations (variations)
4) Average number of daily visitors
5) Percent of visitors included in the test
Based on the above values, it calculates the total number of days to run the test.
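That duration estimate is simple arithmetic once you have the per-variation sample size. A sketch with invented numbers, where 5,000 visitors per variation stands in for the output of the power analysis step:

```python
from math import ceil

def days_to_run(visitors_per_variation, combinations,
                daily_visitors, percent_in_test):
    """Estimate test duration from the inputs the tool asks for."""
    total_needed = visitors_per_variation * combinations
    daily_in_test = daily_visitors * (percent_in_test / 100)
    return ceil(total_needed / daily_in_test)

# 2 combinations, 5,000 visitors each, 1,000 daily visitors,
# 50% of them entered into the test.
print(days_to_run(5000, 2, 1000, 50))  # 20 days
```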
If you don’t remember anything from this article, just remember to answer the following questions after the experiment.
1) What is the conclusion from the experiment?
2) What have you learned about your audience?
You might draw wrong conclusions, or the behaviour of your audience might change in the next experiment, but if you don’t learn from each experiment, the assumptions you make in the next one will be far from reality.