Trial and Error of Building a Culture of Experimentation in Marketing | Rand Fishkin
https://www.abtasty.com/blog/1000-experiments-club-rand-fishkin/ (Tue, 26 Mar 2024)

Rand Fishkin discusses the importance of “non-attributable” marketing and why companies should take more risks and allow themselves the freedom to fail.

Rand Fishkin is the co-founder and CEO of SparkToro, a software company that specializes in audience research for targeted marketing. Previously, Rand was the co-founder and CEO of Moz, where he started SEOmoz as a blog that turned into a consulting company, then a software business. Over his seven years as CEO, Rand grew the company to 130+ employees, $30M+ in revenue, and brought website traffic to 30M+ visitors/year. 

He’s also dedicated his professional life to helping people do better marketing through his writing, videos, speaking, and his latest book, Lost and Founder.

AB Tasty’s VP Marketing Marylin Montoya spoke with Rand Fishkin about the culture of experimentation and fear of failure when it comes to marketing channels and investments. Rand also shares some of his recommendations on how to get your brand in front of the right audience. 

Here are some key takeaways from their conversation.

Taking a more risk-based approach

Rand believes there is so much focus on large markets, and on going down the typical venture path, that people often overlook the enormous potential of smaller markets. In that sense, founders become biased towards huge total addressable markets.

“They don’t consider: here’s this tiny group of people. Maybe there are only 3,000 or 4,000 people or companies who really need this product, but if I make it for them, they’re going to love it. I think that there’s a tremendous amount of opportunity there, if folks would get out of their head that you have to look for a big market,” Rand says.

People avoid such opportunities because of the regulatory challenges, restrictions, and other barriers to entry that often come with them, but for Rand, these underserved markets are worth the risk because competition is scarce. There’s real potential to build something truly special for those willing to overcome those challenges, Rand argues.

There are a lot of underserved niches and many business opportunities out there in the tech world, if companies would shift away from the “growth at all cost” mentality. 

“The thing about being profitable is once you’re there, no one can take the business from you. You can just keep iterating and finding that market, finding new customers, finding new opportunities. But if you are constantly trying to chase growth unprofitably and get to the metrics needed for your next round, you know all that goes out the window,” Rand says.

Freedom to fail

Similarly, Rand states that there’s a huge competitive advantage in committing resources toward marketing channels where attribution is hard or impossible because no one else is investing in these kinds of channels. That’s where Rand believes companies should allocate their resources.

“If you take the worst 10 or 20%, worst performing 10 or 20% of your ads budget, your performance budget, and you shift that over to hard-to-measure, experimental, serendipitous, long-term brand investment types of channels, you are going to see extraordinary results.”

However, the problem is getting buy-in from more senior stakeholders within a company for these “hard-to-attribute” and “hard-to-measure” channels. In other words, they refuse to invest in channels where they can’t prove attribution – a change in conversion rate or sales – or return on investment. Thus, any channels that are poor at providing proof of attribution get underinvested. Rand strongly believes that it’s still possible to get clicks on an organic listing of your website and get conversions even if a brand doesn’t spend anything on ads.

“I think brand and PR and content and social and search and all these other organic things are a huge part of it. But ads are where those companies can charge because the CEO, CMO, CFO haven’t figured out that believing in hard-to-measure channels and hard-to-attribute channels and putting some of your budget towards experimental stuff is the right way to do things,” Rand argues.

According to Rand, these are exactly the kinds of channels where more resources need to be allocated as they generate a higher return on investment than any ad a company might spend on the more typical and bigger name platforms. 

“Your job is to go find the places your audience pays attention to and figure out what your brand could do to be present in those places and recommended by the people who own those channels.”

According to Rand, there is a learning curve in finding the message that resonates with this audience, the content that drives their interest, and the platforms where you can connect with them; all of this depends on who your audience is.

Experiment with AI

For Rand, the AI boom is more realistic and interesting than previous big tech trends. He sees its biggest advantage in tackling the big problems within organizations that large language models and generative AI are genuinely well suited to solve.

However, it’s important not to force AI into a business, apply it to the wrong places, or invent problems just for the sake of using it.

“If you find that stuff fascinating and you want to experiment with it and learn more about it, that’s great. I think that’s an awesome thing to do. Just don’t go trying to create problems just to solve this, to use it.” 

He believes the best use case for AI is for more tedious jobs that would be otherwise too time-consuming as opposed to using it for more tactical or strategic marketing advice. Nonetheless, he does believe that there are a lot of interesting and useful solutions and products being built with AI that will solve many problems.

What else can you learn from our conversation with Rand Fishkin?

  • The importance of brand and long-term brand investments
  • Why it’s hard to get leadership to shift away from common ad platforms
  • How social networks have become “closed networks”
  • Why attention needs to shift to your audience and how they can become “recommenders” of your product

About Rand Fishkin

Rand Fishkin is the co-founder and CEO of SparkToro, makers of software that aims to make audience research accessible to everyone. He’s also the founder and former CEO of Moz and co-founded Inbound.org alongside Dharmesh Shah, which was sold to HubSpot in 2014. Rand has become a frequent worldwide keynote speaker on marketing and entrepreneurship over the years, with a mission to help people do better marketing.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.

CX Optimization Webseries APAC: Episode #1 – CRO Trends in 2024
https://www.abtasty.com/blog/cxo-webseries-episode-1/ (Tue, 26 Mar 2024)


“The opportunity cost of NOT testing is never knowing how much revenue you are losing from not knowing.”

– Dave Anderson, VP Product Marketing and Strategy at Contentsquare

We are living in a time where people treat products and services as commodities. Customers of today expect an experience alongside whatever they have purchased. Optimizing digital experiences can directly impact a company’s bottom line by improving conversion rates, reducing customer frustration, and enhancing brand sentiment. 

Hosted by Julia Simon, VP APAC at AB Tasty

Featuring Dave Anderson, VP Product Marketing and Strategy at Contentsquare

In this episode, Dave joins Julia to discuss various facets of customer experience and experimentation trends in Asia Pacific. Together they unravel key insights on the impact of Customer Experience (CX) Optimization on revenue generation, the widespread adoption of optimization practices across industries, the importance of collaboration between teams, and the value of continuous experimentation.

Dive deep into Episode #1

1. Impact of CX Optimization on Revenue: 

Businesses that focus on understanding the needs of their customers increase revenue by turning new buyers into loyal customers who purchase consistently. Providing a great customer experience directly impacts a company’s bottom line by improving conversion rates, reducing customer frustration, and, in the long run, increasing customer lifetime value.

2. Adoption of Optimization Practices Across Industries:

Virtually every industry including education, finance, retail, and telecommunications is now embracing CX optimization as a means to meet evolving customer expectations. They discuss how companies leverage social proof, countdown banners, personalisation strategies and more to enhance digital experiences and stay competitive in today’s market.

3. Importance of Collaboration Between Teams: 

Collaboration between different teams in an organization is key to driving a successful CX strategy. Alignment between UX, product, tech, and marketing teams is essential to ensure that optimization efforts are cohesive and well executed.

4. Value of Continuous Experimentation: 

Continuous experimentation is the cornerstone of a successful optimization strategy. The episode also underscores the importance of testing hypotheses, analyzing results, and iterating based on insights to drive ongoing improvements in digital experiences. Closing this section, Julia and Dave agree that organizations need to adopt a culture of experimentation and data-driven decision-making to remain agile and responsive to evolving customer needs.

Analytics Reach New Heights With Google BigQuery + AB Tasty
https://www.abtasty.com/blog/analytics-new-heights-google-bigquery-ab-tasty/ (Mon, 25 Mar 2024)

AB Tasty and Google BigQuery have joined forces to provide seamless integration, enabling customers with extensive datasets to access insights, automate, and make data-driven decisions to push their experimentation efforts forward.

We have often discussed the complexity of understanding data to power your experimentation program. When companies are dealing with massive datasets, they need an agile and effective way to let that information enrich their testing performance and to identify patterns, trends, and insights.

Go further with data analytics

Google BigQuery is a fully managed cloud data warehouse solution, which enables quick storage and analysis of vast amounts of data. This serverless platform is highly scalable and cost-effective, tailored to support businesses in analyzing extensive datasets for making well-informed decisions. 

With Google BigQuery, users can effortlessly execute complex analytical SQL queries, leveraging its integrated machine-learning capabilities.

This integration with AB Tasty’s experience optimization platform means customers with large datasets can use BigQuery to store and analyze large volumes of testing data. By leveraging BigQuery’s capabilities, you can streamline data analysis processes, accelerate experimentation cycles, and drive innovation more effectively.

Here are some of the many benefits of Google BigQuery’s integration with AB Tasty to help you trial better:

  • BigQuery as a data source

With AB Tasty’s integration, specific data from AB Tasty can be sent regularly to your BigQuery dataset. Each Data Ingestion Task has a name, an SQL query to get what you need, and a timed frequency for data retrieval. This information helps you build precisely targeted ads and messages, making it easier to reach the right people.

  • Centralized storage of data from AB Tasty

The AB Tasty and BigQuery integration also simplifies campaign analysis by eliminating the need for SQL or BI tools. The AB Tasty dashboard displays a clear comparison of metrics on a single page, enhancing efficiency. You can leverage BigQuery for experiment analysis without duplicating reporting in AB Tasty, getting the best of both platforms. Incorporate complex metrics and segments by querying our enriched events dataset and link event data with critical business data from other platforms (see the query sketch below). Whether through web or feature experimentation, this means more accurate experiments at scale to drive business growth and success.

  • Machine learning

BigQuery can also be used for machine learning on experimentation programs, helping you to predict outcomes and better understand your specific goals. BigQuery gives you AI-driven predictive analytics for scaling personalized multichannel campaigns, free from attribution complexities or uncertainties. Access segments that dynamically adjust to real-time customer behavior, unlocking flexible, personalized, and data-driven marketing strategies to feed into your experiments.

  • Enhanced segmentation and comprehensive insight

BigQuery’s ability to understand behavior means that you can segment better. Its data segmentation allows for categorizing users based on various attributes or behaviors. With the data that is sent to BigQuery from experiments, you can create personalized content or features tailored to specific user groups, optimizing engagement and conversion rates.

Finally, the massive benefit of this integration is to get joined-up reporting – fully automated and actionable reports on experimentation, plus the ability to feed data from other sources to get the full picture.
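
To make that concrete, here is a minimal sketch of querying such an exported dataset with the official google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical placeholders for illustration, not the actual schema shipped by the integration:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical GCP project ID

# Hypothetical table and columns standing in for an exported AB Tasty events dataset
query = """
    SELECT
        variation_name,
        COUNT(DISTINCT visitor_id) AS visitors,
        COUNTIF(event_name = 'transaction') AS conversions
    FROM `my-gcp-project.abtasty_export.events`
    WHERE campaign_id = @campaign_id
    GROUP BY variation_name
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("campaign_id", "STRING", "123456")]
)

for row in client.query(query, job_config=job_config).result():
    rate = row.conversions / row.visitors if row.visitors else 0.0
    print(f"{row.variation_name}: {row.visitors} visitors, {rate:.2%} conversion rate")
```

From there, the same result set can be joined with CRM, margin, or campaign data living in other BigQuery tables to build the joined-up reports described above.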

A continued partnership

This integration comes after Google named AB Tasty an official Google Cloud Partner last year, making us available on the Google Cloud Marketplace to streamline marketplace transactions. We are also fully integrated with Google Analytics 4, and we were thrilled to be named one of Google’s preferred vendors for experimentation after the Google Optimize sunset.


As we continue to work closely with the tech giant to help our customers grow, you can find out more about this integration here.

Optimizing Revenue Beyond Conversion Rate
https://www.abtasty.com/blog/beyond-conversion-rate/ (Mon, 04 Mar 2024)

When it comes to CRO, or Conversion Rate Optimization, it would be natural to assume that conversion is all that matters. At least, we can argue that conversion rate is at the heart of most experiments. However, the ultimate goal is to raise revenue, so why does the CRO world put so much emphasis on conversion rates?

In this article, we’ll shed some light on the reason why conversion rate is important and why it’s not just conversions that should be considered.

Why is conversion rate so important?

Let’s start off with the three technical reasons why CRO places such importance on conversion rates:

  1. Conversion is a generic term. It covers the fact that an e-commerce visitor becomes a customer by buying something, or simply that this visitor went farther than the homepage, clicked on a product page, or added a product to the cart. In that sense, it’s the Swiss Army Knife of CRO.
  2. Conversion statistics are far easier than other KPI statistics, and they’re the simplest from a maths point of view. In terms of measurement, it’s pretty straightforward: success or failure.
    This means off-the-shelf code or simple spreadsheet formulas can compute the statistical indices used for decision-making, such as the chance to win or confidence intervals on the expected gain (a minimal sketch follows this list). This is not as easy for other metrics, as we will see later with Average Order Value (AOV).
  3. Conversion analysis is also the simplest when it comes to decision-making. There’s (almost) no scenario where raising the number of conversions is a bad thing. Therefore, deciding whether or not to put a variation in production is an easy task when you know that the conversion rate will rise. The same can’t be said about the “multiple conversions” metric where, unlike the conversion rate metric that counts one conversion per visitor even if this visitor made two purchases, every conversion counts, which is often more complex to analyze. For example, the number of product pages seen by an e-commerce visitor is harder to interpret. A variation increasing this number could have several meanings: the catalog may be more engaging, or visitors may be struggling to find what they’re looking for. 
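
As an illustration of point 2, here is a minimal sketch of how such decision indices can be computed for a conversion metric with off-the-shelf code. The visitor and conversion counts are made up, and this Bayesian Beta-Binomial approach is just one common way of producing a “chance to win”, not necessarily the exact formulas used by any given platform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical campaign results: visitors and conversions per variation
visitors_a, conversions_a = 10_000, 500   # original: 5.00% conversion rate
visitors_b, conversions_b = 10_000, 545   # variation: 5.45% conversion rate

# Posterior of each conversion rate under a flat Beta(1, 1) prior
post_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, size=200_000)
post_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, size=200_000)

chance_to_win = (post_b > post_a).mean()
gain_interval = np.percentile((post_b - post_a) / post_a, [2.5, 97.5])

print(f"Chance to win for the variation: {chance_to_win:.1%}")
print(f"95% interval on the relative gain: [{gain_interval[0]:+.1%}, {gain_interval[1]:+.1%}]")
```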

Due to the aforementioned reasons, the conversion rate is the starting point of all CRO journeys. However, conversion rate on its own is not enough. It’s also important to pay attention to factors other than conversions to optimize revenue. 

Beyond conversion rate

Before we delve into a more complex analysis, we’ll take a look at some simpler metrics. This includes ones that are not directly linked to transactions such as “add to cart” or “viewed at least one product page”.

If such a metric is statistically assured to win, then it’s a good choice to put the variation into production, with one exception: if the variation is very costly, you will need to dig deeper to ensure that the gains will cover the costs. This can occur, for example, if the variation includes a product recommender system that comes with its own cost. 

The bounce rate is also simple and straightforward; the only thing to be aware of is that, unlike the conversion rate, the aim is to keep the figure down. But the main idea is the same: if you change your homepage image and you see the bounce rate statistically drop, then it’s a good idea to put the change in production.

We will now move onto a more complex metric, the transaction rate, which is directly linked to the revenue. 

Let’s start with a scenario where the transaction rate goes up. You assume that you will get more transactions with the same traffic, so the only way it could be a bad thing is if you earn less in the end. This would mean your average order value (AOV) has plummeted. The basic revenue formula shows it explicitly: 

Total revenue = traffic * transaction rate * AOV 

Since we consider traffic an external factor, the only way to have higher total revenue is for both the transaction rate and the AOV to increase, or for at least one of them to increase while the other remains stable. This means we also need to check how the AOV evolves, which is much more complicated. 
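
A quick worked example with made-up numbers shows why this check matters: a variation can win on transaction rate and still lose revenue if the AOV drops by more than the transaction rate gains.

```python
traffic = 100_000  # hypothetical monthly visitors, identical for both versions

revenue_original  = traffic * 0.020 * 85.0  # 2.0% transaction rate, €85 AOV -> €170,000
revenue_variation = traffic * 0.022 * 74.0  # 2.2% transaction rate, €74 AOV -> €162,800

print(revenue_original, revenue_variation)  # the "winning" variation earns €7,200 less
```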

On the surface, it looks simple: take the sum of all transactions and divide that by the number of transactions and you have the AOV. While the formula seems basic, the data isn’t. In this case, it’s not just either success or failure; it’s different values that can widely vary.

Below is a histogram of transaction values from a retail ecommerce website. The horizontal axis represents values (in €), the vertical axis is the proportion of transactions with this value. Here we can see that most values are spread between 0 and €200, with a peak at ~€50. 


The right part of this curve shows a “long/fat tail”. Now let’s try to see how the difference within this kind of data is hard to spot. See the same graph below but with higher values, from €400 to €1000. You will also notice another histogram (in orange) of the same values but offset by €10.

We see that the €10 offset, which corresponds to a 10-unit shift to the right, is hard to distinguish. And since it affects the highest values, this part has a huge influence when averaging samples. Due to the shape of this transaction value distribution, any measure of the average value is somewhat blurred, which makes it very difficult to obtain clear statistical indices. For this reason, changes in AOV need to be very drastic, or measured over a huge dataset, to be statistically asserted, making AOV difficult to use in CRO.

Another important feature is hidden even further on the right of the horizontal axis. Here’s another zoom on the same graph, with the horizontal axis ranging from €1000 to €4500. This time only one curve is shown.

From the previous graph, we could have easily assumed that €1000 was the end, but it’s not. Even with the most common transaction value at around €50, there are still some transactions above €1000, and even some over €3000. We call these extreme values. 

As a result, whether these high values exist or not makes a big difference. Since these values exist but are scarce, they will not be evenly spread across variations, which can artificially create a difference when computing AOV. By artificially, we mean the difference comes from a small number of visitors and so doesn’t really count as “statistically significant”. Also, keep in mind that customer behavior will not be the same when buying for €50 as when making a purchase of more than €3000.

There’s not much to do about this except know it exists. One good thing, though, is to separate B2B and B2C visitors if you can, since B2B transaction values are typically bigger and less frequent. Setting them apart will limit these problems.

What does this mean for AOV?

There are three important things  to keep in mind when it comes to AOV:

  1. Don’t trust the basic AOV calculation; the difference you are seeing probably does not exist, and is quite often not even in the observed direction! It’s only displayed to give an order of magnitude for interpreting changes in conversion rates, but it shouldn’t be used to state a difference between variations’ AOVs. That’s why we use a specific test, the Mann-Whitney U test, which is adapted to this kind of data (see the sketch after this list).
  2. You should only believe the statistical index on AOV, which is only valid to assess the direction of the difference between AOV, not its size. For example, you notice a +€5 AOV difference and the statistical index is 95%; this only means that you can be 95% sure that you will have an AOV gain, but not that it will be €5.
  3. Since transaction data is far noisier than conversion data, it needs stronger differences or bigger datasets to reach statistical significance. And since there are always fewer transactions than visitors, reaching significance on the conversion rate doesn’t imply reaching significance on the AOV.
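
Below is a minimal sketch of such a check on two samples of order values, using SciPy’s implementation of the Mann-Whitney U test. The order values are simulated from a skewed, long-tailed distribution purely for illustration; real values would come from your transaction data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical order values in €: right-skewed with a long tail, like the histogram above
orders_a = rng.lognormal(mean=4.0, sigma=0.8, size=2_000)
orders_b = rng.lognormal(mean=4.0, sigma=0.8, size=2_000) * 1.05  # small simulated uplift

stat, p_value = mannwhitneyu(orders_b, orders_a, alternative="greater")
print(f"Mann-Whitney U p-value: {p_value:.4f}")
# A small p-value supports the direction of the AOV difference (B tends to be higher),
# but, as point 2 above explains, it says nothing about the size of that difference.
```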

This means that a decision on a variation that has a conversion rate gain can still be complex because we rarely have a clear answer about the variation effect on the AOV.

This is yet another reason to have a clear experimentation protocol including an explicit hypothesis. 

For example, if the test is about showing an alternate product page layout based on the hypothesis that visitors have trouble reading the product page, then the AOV should not be impacted. Afterwards, if the conversion rate rises, we can validate the winner as long as the AOV shows no strong statistical downward trend. However, if the changes are in the product recommender system, which might have an impact on the AOV, then one should be stricter about checking that there is no statistically detectable harm to the AOV before calling a winner. For example, the recommender might bias visitors toward cheaper products, boosting sales numbers but not the overall revenue.

The real driving force behind CRO

We’ve seen that the conversion rate is at the base of CRO practice because of its simplicity and versatility compared to all other KPIs. Nonetheless, this simplicity must not be taken for granted. It sometimes hides more complexity that needs to be understood in order to make profitable business decisions, which is why it’s a good idea to have expert resources during your CRO journey. 

That’s why at AB Tasty, our philosophy is not only about providing top-notch software but also about Customer Success support throughout your journey.

The Future of Fashion
https://www.abtasty.com/blog/future-of-fashion/ (Thu, 22 Feb 2024)


5 Pillars to Reshape Customer Experience

In the ever-evolving landscape of fashion and e-commerce, digital innovation has become a driving force behind transforming the customer experience. The intersection of technology and fashion has given rise to new opportunities for brands to connect with their customers in more meaningful and engaging ways. 

In this guest blog post from Conversio, a leading UK-based optimization and analytics agency, we explore key trends in fashion e-commerce and how brands can leverage digital strategies to enhance the customer experience.

1. The Mobile Customer: Shopping on the Go

The mobile customer has become a dominant force in the fashion industry. Today’s consumers expect a seamless and intuitive mobile experience when browsing, shopping, and making purchases. Brands must prioritize mobile optimization, ensuring their websites and apps are responsive, fast-loading, and user-friendly. By providing a frictionless mobile experience, fashion brands can capture the attention and loyalty of the on-the-go consumer.

2. The Rise of Social: Influencing Fashion Choices

Social media platforms have revolutionized the way we discover, engage with, and purchase fashion items. From influencers showcasing the latest trends to shoppable posts and personalized recommendations, social media has become an integral part of the customer journey. Fashion brands must embrace social commerce and leverage these platforms to connect with their audience, build brand awareness, and drive conversions. By actively engaging with customers on social media, brands can create a community around their products and foster brand loyalty.

3. Increasing Returns Rates: The Challenge of Fit and Expectations

One of the ongoing challenges in fashion e-commerce is the issue of increasing returns rates. Customers want convenience and flexibility when it comes to trying on and returning items. Brands must address this challenge by providing accurate size guides, detailed product descriptions, and visual representations. Additionally, incorporating virtual try-on technologies and utilizing user-generated content can help improve the customer’s confidence in their purchase decisions and reduce returns rates.

4. Measuring the Customer Experience

To truly enhance the customer experience, brands must measure and analyze key metrics to gain insights into their customers’ behaviors and preferences. Conversion rate optimization (CRO) is a crucial aspect of this process. By A/B testing, tracking and optimizing conversion rates, brands can identify areas for improvement and implement strategies to increase conversions. Additionally, measuring customer satisfaction, engagement, and loyalty through surveys, feedback, and data analytics can provide valuable insights into the effectiveness of the customer experience.

5. Improving the Fashion CX through Experimentation

To stay ahead in the competitive fashion industry, brands must embrace a culture of experimentation. A/B testing different elements of the customer experience, such as website layout, product recommendations, and personalized messaging, can help identify what resonates best with customers. By continuously iterating and refining their digital strategies, fashion brands can deliver a more tailored and enjoyable experience for their customers.

Our Key Takeaways

As fashion brands navigate the digital landscape, there are several key takeaways to keep in mind:

  • Brand Perception: Recognise that 90% of new customers won’t see your homepage. Focus on delivering a consistent and compelling brand experience across all touchpoints.
  • Post-Purchase: Extend your focus beyond the conversion. Invest in post-purchase experiences, such as order tracking, personalised recommendations, and exceptional customer service, to foster customer loyalty and encourage repeat purchases.
  • Measure Everything: Establish a robust measurement framework to track and validate the value of your content, campaigns, and overall customer experience. Leverage data to make data-driven decisions and continuously optimize your strategies.

In conclusion, digital fashion has reshaped the customer experience, offering new avenues for engagement, personalization, and convenience. By understanding and embracing key trends, testing and measuring customer experience, and experimenting with innovative strategies, fashion brands can successfully navigate the digital landscape and deliver exceptional experiences that resonate with their target audience.

Want to get more detail? Watch the webinar below:

Finding Our Better: AB Tasty’s New Brand Identity
https://www.abtasty.com/blog/meet-the-new-ab-tasty/ (Mon, 05 Feb 2024)

Cheers to Change

Good things happen to those who change. And that’s exactly what we did.

Change is what propels us towards progress.
Change is how we find our better.
Change is how we dare to go further.

Today marks a significant day in our history as a company. Today, we’re thrilled to share our updated brand identity with you. We’re stepping into a new era that better aligns our forever commitment to “test and learn” with our position in the market as a partner that helps brands push ideas even further.

With over 13 years in the industry, we’ve seen dynamic changes in the market. Brands now understand the importance and impact of continual experience optimization. The thriving experimentation sector has launched us into our most successful financial quarters following our strategic technology acquisitions. Beyond our strengthened AI and personalization portfolios, it’s become crystal clear that what makes us unique is our people. And our people are what make our customers happy.

Time to Talk Tasty

You may have noticed a few recent changes to AB Tasty – and we don’t mean just our new brand colors.

“Electric Blue” and “Crash Test Yellow” 

Although our vibrant visual identity may catch you by surprise, our rebrand is much more than just a cosmetic makeover. We’ve been very intentional with our decisions at each step of the way. 

Over the past 14 months, we’ve embraced some exciting technological advancements within our platform: 

  • In October 2022, we saw a big need in the market for more personalization and acquired a company specializing in recommendations and search solutions. 
  • In June 2023, we extended our personalization offering to help teams better cater to their different audiences and compete on a higher level. We acquired an emotions-based, personalization technology that enriches and expands our portfolio.
  • Then, we unified those platforms with our own API-based experimentation, personalization engine, and web solution.

Now, we’re happy to say that we are one unified platform offering everything that brands need for complete experience optimization. With our new brand identity, we proudly promote everything we are, everything we can be, and everything we want to be. 

Our strategic shift in branding was the logical next step after our tremendous period of growth.

New Look, Same Commitment

One thing hasn’t changed – and that’s our commitment to our clients. They are, and always will be our focus. 

Everything we’ve done will better suit the needs of our clients. Unifying our products into one harmonious platform allows for endless optimization opportunities, and our messaging reflects our human touch and leading expertise.

We are the optimization partners pushing brave ideas from the inside out.

Our Brand Story

Our clients need to be different, not just better. And for that, they need an optimization partner in their progress. Our commitment to customer support is consistently recognized on G2 and is something our clients rave about. Our team and the level of support we offer our clients have always been and will always be what makes AB Tasty great. That’s why we embed ourselves at the heart of company culture to push brave ideas from the inside out. 

How can we do that? By focusing on our three pillars as our foundation. 

  1. Human Touch: Our people are everything – they bring the soul and substance to our technology. We build relationships with our clients to transcend the transactional with our deep partnerships and client understanding.
  2. Leading Expertise: We back brave ideas with data and knowledge. We stay ahead as leaders of the industry and continue to learn with our “test and learn” culture. By de-risking brave ideas, we make every move by choice, not chance. 
  3. Unifying Product: Our product connects teams, platforms, tools, and collaborators. We transform cultures, changing the way our clients work and think. We work as a team with one vision and common goals. 

We do all of this so our clients can level up. We make their next step our next challenge, giving them the courage and push they need to dare to go further. 

Conclusion

Every next step looks different for our clients, company, and people. That’s why we provide the courage and conviction to make it happen. 

We help our customers DARE TO GO FURTHER.

A/A Testing: What is it and When Should You Use it?
https://www.abtasty.com/blog/aa-testing/ (Fri, 02 Feb 2024)

A/A tests are a legacy from the early days of A/B testing. An A/A test is essentially an A/B test where two identical versions of a web page or element are tested against each other: variation B is just a copy of A without any modification.

One of the goals of A/A tests is to check the effectiveness and accuracy of testing tools. The expectation is that, if no winner is declared, the test is a success. Whereas detecting a statistical difference would mean a failure, indicating a problem somewhere in the pipeline.

But it’s not always that simple. We’ll dive into this type of testing and the statistics and tech behind the scenes. We’ll look at why a failed A/A test is not proof of a pipeline failure, and why a successful A/A test isn’t a foolproof sanity check.

What is tested during an A/A test?

Why is there so much buzz around A/A testing? An A/A test can be a way to verify two components of an experimentation platform: 

  1. The statistical tool: It may be possible that the formulas chosen don’t fit the real nature of the data, or may contain bugs.
  2. The traffic allocation: The split between variations must be random and respect the proportions it has been given. When a problem occurs, we talk about Sample Ratio Mismatch (SRM); that is, the observed traffic does not match the allocation setting. This means that the split has some bias impacting the analysis quality.

Let’s explore this in more detail.

Statistical tool test

Let’s talk about a “failed” A/A test

The most common idea behind A/A tests is that the statistical tool should yield no significant difference. It is considered a “failed” A/A test if you detect a difference in performance during an A/A test. 

However, to understand how weak this conclusion is, you need to understand how statistical tests work. Let’s say that your significance threshold is 95%. This means that there is still a 5% chance that the difference you see is a statistical fluke and no real difference exists between the variations. So even with a perfectly working statistical tool, you still have one chance in twenty (1/20=5%) that you will have a “failed” A/A test and you might start looking for a problem that may not exist.

With that in mind, an acceptable statistical procedure would be to perform 20 A/A tests and expect to have 19 that yield no statistical difference, and one that does detect a significant difference. And even in this case, if 2 tests show significant results, it’s a sign of a real problem. In other words, having 1 successful A/A test is in fact not enough to validate a statistical tool. To validate it fully, you need to show that the tests are successful 95% of the time (=19/20).

Therefore, a meaningful approach would be to perform hundreds of A/A tests and expect ~5% of them to “fail”. It’s worth noting that if they “fail” less than 5% of the time, that’s also a problem, perhaps indicating that the statistical test simply says “no” too often, leading to a strategy that never detects any winning variation. So one “failed” A/A test doesn’t tell you much in reality. 
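
For teams with the engineering resources to do this, the check is easy to automate. Here is a minimal simulation sketch, with made-up traffic and conversion rate, that runs many A/A tests through a standard two-proportion z-test; with a correctly calibrated test, roughly 5% of them should come out “significant” by chance alone:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_tests, visitors, true_rate = 2_000, 30_000, 0.05  # hypothetical values

false_positives = 0
for _ in range(n_tests):
    conv_a = rng.binomial(visitors, true_rate)  # variation A
    conv_b = rng.binomial(visitors, true_rate)  # variation B: same true rate (A/A)

    # Two-proportion z-test on the observed conversion rates
    p_pool = (conv_a + conv_b) / (2 * visitors)
    se = np.sqrt(p_pool * (1 - p_pool) * 2 / visitors)
    z = (conv_a - conv_b) / visitors / se
    p_value = 2 * norm.sf(abs(z))

    false_positives += p_value < 0.05

print(f"'Failed' A/A tests: {false_positives / n_tests:.1%}")  # should land close to 5%
```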

What if it’s a “successful A/A test”? 

A “successful” A/A test (yielding no difference) is not proof that everything is working as it should. To understand why, you need to check another important tool in an A/B test: the sample size calculator.

In the following example, we see that from a 5% conversion rate, you need around 30k visitors per variation to reach the 95% significance level if a variation yields a 10% MDE (Minimal Detectable Effect).

But in the context of an A/A test, the Minimal Detectable Effect (MDE) is in fact 0%. Using the same formula, we’ll plug 0% as MDE.

At this point, you will discover that the form does not let you put a 0% here, so let’s try a very small number then. In this case, you get almost 300M visitors, as seen below.

In fact, to be confident that there is exactly no difference between two variations, you need an infinite number of visitors, which is why the form does not let you set 0% as MDE.

Therefore, a successful A/A test only tells you that the difference between the two variations is smaller than a given number but not that the two variations perform exactly the same.

This problem comes from another principle in statistical tests: the power. 

The power of a test is the chance that you discover a difference if there is any. In the context of an A/A test, this refers to the chance you discover a statistically significant discrepancy between the two variations’ performance. 

The more power, the more chance you will discover a difference. To raise the power of a test you simply raise the number of visitors.

You may have noticed that in the previous screenshots, tests are usually powered at 80%. This means that even if a difference in performance exists between the variations, 20% of the time you will miss it. So one “successful” A/A test (yielding no statistical difference) may just be an occurrence of this 20%. In other words, having just one successful A/A test doesn’t ensure the efficiency of your experimentation tool: you may have a problem, and there is a 20% chance that you missed it. Additionally, reaching 100% power would require an infinite number of visitors, making it impractical.
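
The same sample-size arithmetic can be reproduced with standard libraries. Here is a minimal sketch using statsmodels with the article’s hypothetical 5% baseline: a 10% relative MDE requires on the order of 30k visitors per variation at 95% significance and 80% power, and the requirement explodes as the MDE shrinks towards 0%.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

solver = NormalIndPower()

# 5% baseline conversion rate, 10% relative MDE (5.0% -> 5.5%)
effect = proportion_effectsize(0.055, 0.05)
n = solver.solve_power(effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided")
print(f"Visitors per variation for a 10% MDE: ~{n:,.0f}")  # on the order of 30k

# 1% relative MDE (5.00% -> 5.05%): roughly 100x more traffic is needed
effect_tiny = proportion_effectsize(0.0505, 0.05)
n_tiny = solver.solve_power(effect_size=effect_tiny, alpha=0.05, power=0.8, alternative="two-sided")
print(f"Visitors per variation for a 1% MDE: ~{n_tiny:,.0f}")
```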

How do we make sure we can trust the statistical tool then? If you are using a platform that is used by thousands of other customers, chances are that the problem would have already been discovered. 

Because statistical software does not change very often and is not affected by the variation content (whereas the traffic allocation might change, as we will see later), the best option is to trust your provider, or you can double-check the results with an independent provider. You can find a lot of independent calculators on the web. They only need the number of visitors and the number of conversions for each variation to provide the results, making this check quick to perform.

Traffic allocation test

In this part, we only focus on traffic, not conversions. 

The question is: does the splitting operation work as it should? We call this kind of failure an SRM, or Sample Ratio Mismatch. You may ask yourself how a simple random choice could fail. In fact, the failure happens either before or after the random choice. 

The following demonstrates two examples where that can happen:

  • The variation contains a bug that may crash some browsers. In this case, the corresponding variation will lose visitors. The bug might depend on the browser, and then you will end up with bias in your data.
  • If the variation gives a discount coupon (or any other advantage), and some users find a way to force their browser to run the variation (to get the coupon), then you will have an excess of visitors for that variation that is not due to random chance, which results in biased data.


An SRM is hard to detect with the naked eye because the allocation is random, so you never get exact numbers anyway. 

For instance, a 50/50 allocation never precisely splits the traffic in groups with the exact same size. As a result, we would need statistical tools to check if the split observed corresponds with the desired allocation. 

SRM tests exist. They work more or less the same way as an A/B test, except that the SRM formula indicates whether there is a difference between the desired allocation and what really happened. If there is indeed an SRM, then there is a chance that this difference is not due to pure randomness. This means that some data was lost or some bias occurred during the experiment, which undermines trust in it and in future (real) experiments.
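
For completeness, here is what such a check can look like in practice: a chi-square goodness-of-fit test comparing observed visitor counts against the configured allocation. The counts below are made up, and a common convention is to only flag an SRM at a very strict threshold (e.g. p < 0.001) to limit false alarms:

```python
from scipy.stats import chisquare

# Hypothetical visitor counts for a campaign configured with a 50/50 split
observed = [50_421, 49_210]
expected = [sum(observed) / 2, sum(observed) / 2]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"SRM check p-value: {p_value:.5f}")
# A very small p-value means the observed split is unlikely under the configured
# allocation: investigate and fix the cause before trusting any conversion results.
```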

On the one hand, detecting an SRM during an A/A test sounds like a good idea. On the other hand, if you think operationally, it might not be that useful because the chance of an SRM is low.

Even if some reports say that they are more frequent than you may think, most of the time it happens on complex tests. In that sense, checking SRM within an A/A test will not help you to prevent having one on a more complex experiment later. 

If you find a Sample Ratio Mismatch on a real experiment or in an A/A test, the following actions remain the same: find the cause, fix it, and restart the experiment. So why waste time and traffic on an A/A test that will give you no information? A real experiment would have given you real information if it worked fine on the first try. If a problem does occur, we would detect it even in a real experiment, since SRM checks only consider traffic and not conversions.

A/A tests are also unnecessary since most trustworthy A/B testing platforms (like AB Tasty) do SRM checks on an automated basis. So if an SRM occurs, you will be notified anyway. 

So where does this “habit” of practicing A/A tests come from?

Over the years, it’s something that engineers building A/B testing platforms have done. It makes sense in this case because they can run a lot of automated experiments, and even simulate users if they don’t have enough at hand, allowing a statistically sound approach to A/A tests. 

They have reasons to doubt the platform in the works and they have the programming skills to automatically create hundreds of A/A tests to test it properly. Since these people can be seen as pioneers, their voice on the web is loud when they explain what an A/A test is and why it’s important (from an engineering perspective).

However, for a platform user/customer, the context is different as they’ve paid for a ready-to-use and trusted platform and can start a real experiment as soon as possible to get a return on investment. Therefore, it makes little sense to waste time and traffic on an A/A test that won’t provide any valuable information.

Why sometimes it might be better to skip A/A tests

We can conclude that a failed A/A test is not a problem and that a successful one is not  proof of sanity. 

In order to gain valuable insights from A/A tests, you would need to perform hundreds of them with an infinite number of visitors. Moreover, an efficient platform like AB Tasty does the corresponding checks for you.

That’s why, unless you are developing your own A/B testing platform, running an A/A test may not give you the insights you’re looking for. A/A tests require a considerable amount of time and traffic that could otherwise be used to conduct A/B tests that could give you valuable insights on how to optimize your user experience and increase conversions. 

When it makes sense to run an A/A test

It may seem that running A/A tests may not be the right call after all. However, there may be a couple of reasons why it might still be useful to perform A/A tests. 

The first is when you want to check the data you are collecting and compare it to data already collected with other analytics tools; keep in mind, though, that you will never get exactly the same results, because metric definitions vary from tool to tool. Nonetheless, this comparison is an important onboarding step to ensure that the data is properly collected.

The other reason to perform an A/A test is to know the reference value for your main metrics so you can establish a baseline to analyze your future campaigns more accurately. For example, what is your base conversion rate and/or bounce rate? Which of these metrics need to be improved and are, therefore, a good candidate for your first real A/B test?

This is why AB Tasty has a feature that helps users build A/A tests dedicated to reaching these goals and avoiding the pitfalls of “old school” methods that are no longer useful. With our new A/A test feature, A/A test data is collected in one variant (not two); let’s call this an “A test”. 

This allows you to have a more accurate estimation of these important metrics as the more data you have, the more accurate the measurements are. Meanwhile, in a classic A/A test, data is collected in two different variants which provides less accurate estimates since you have less data for each variant.

With this approach, AB Tasty enables users to automatically set up A/A tests, which gives better insights than classic “handmade” A/A tests.

How to Rebrand Your Site Using Experimentation in 5 Easy Steps
https://www.abtasty.com/blog/rebrand-site-experimentation-5-easy-steps/ (Tue, 09 Jan 2024)

Revamp your website strategically with AB Tasty for a successful redesign. Discover a step-by-step guide from REO on using customer research, prioritization, prototyping, and A/B testing to refine and enhance your site’s performance and user experience.


We invited Holly Ingram from our partner REO Digital, an agency dedicated to customer experience, to talk us through the practical ways you can use experimentation when doing a website redesign.

 

Testing an entire site redesign at once is a huge risk. You can throw away years of incremental gains in UX and site performance if it’s executed incorrectly. Not only do full redesigns commonly fail to achieve their goals, they often fail even to achieve parity with the old design. That’s why an incremental approach, where you can isolate changes and accurately measure their impact, is most commonly recommended. That being said, some scenarios warrant an entire redesign, in which case you need a robust, evidence-driven process to reduce this risk. 

Step 1 – Generative research to inform your redesign 

With the level of collaboration involved in a redesign, changes must be based on evidence over opinion. There’s usually a range of stakeholders who all have their own ideas about how the website should be improved, and despite their best intentions, this process often leads to prioritizing what they feel is important, which doesn’t always align with customers’ goals. The first step in this process is to carry out research to see your site as your customers do and identify areas of struggle. 

It’s important here to use a combination of quantitative research (to understand how your users behave) and qualitative research (to understand why). Start off broad, using quantitative research to identify the areas of the site that are performing the worst, looking for high drop-off rates and poor conversion. Once you have your areas of focus, you can look at more granular metrics to gather more context on the points of friction. 

  • Scroll maps: Are users missing key information as it’s placed below the fold?  
  • Click maps: Where are people clicking? Where are they not clicking? 
  • Traffic analysis: What traffic source(s) are driving users to that page? What is the split between new and returning? 
  • Usability testing: What do users that fit your target audience think of these pages? What helps them? What doesn’t help? 
  • Competitor analysis: How do your competitors present themselves? How do they tackle the same issues you face?

Each research method has its pros and cons. Keep in mind the hierarchy of evidence. The hierarchy is visually depicted as a pyramid, with the lowest-quality research methods (having the highest risk of bias) at the bottom of the pyramid and the highest-quality methods (with the lowest risk of bias) at the top. When reviewing your findings place more importance on findings that come from research methods at the top of the pyramid, e.g. previous A/B test findings, than research methods that come at the bottom (e.g. competitor analysis).

Step 2 – Prioritise areas that should be redesigned 

Once you have gathered your data and prioritised your findings based on quality of evidence you should be able to see which areas you should focus on first. You should also have an idea of how you might want to improve them. This is where the fun part comes in, and you can start brainstorming ideas. Collaboration is key here to ensure a range of potential solutions are considered. Try and get the perspective of designers, developers, and key stakeholders. Not only will you discover more ideas, but you will also save time as everyone will have context on the changes. 

It’s not only about design. A common mistake people make when doing a redesign is focusing purely on making the page look ‘prettier’ and not changing the content. Through research, you should have identified content that performs well and content that could do with an update. Make sure you consider this when brainstorming.

Step 3 – Pilot your redesign through a prototype 

It can be tempting once you’ve come up with great ideas to go ahead and launch it. Even if you are certain this new page will perform miles better than the original, you’d be surprised how often you’re wrong. Before you go ahead and invest a lot of time and money into building your new page,  it’s a good idea to get some outside opinions from your target audience. The quickest way to do this is to build a prototype and get users to feedback on it through user testing. See what their attention is drawn to, if there’s anything on the page they don’t like or think is missing. It’s much quicker to make these changes before launching than after. 

Step 4 – A/B test your redesign to know with statistical certainty whether your redesign performs better

Now you have done all this work conducting research, defining problem statements, coming up with hypotheses, ideating solutions and getting feedback, you want to see if your solution actually works better!

However, do not make the mistake of jumping straight into launching on your website. Yes it will be quicker, but you will never be able to quantify the difference all of that work has made to your key metrics. You may see conversion rate increase, but how do you know that is due to the redesign and nothing else (e.g. a marketing campaign or special offer deployed around the same time)? Or worse, you see conversion rate decrease and automatically assume it must be down to the redesign when in fact it’s not.  

With an A/B test you can rule out outside noise. For simplicity, imagine the scenario where you have launched your redesign, in reality it made no difference, but due to a successful marketing campaign around the same time you saw an increase in conversion rate. If you had launched your redesign as an A/B test, you would see no difference between the control and the variant, as both would have been equally affected by the marketing campaign. 

This is why it is crucial you A/B test your redesign. Not only will you be able to quantify the difference your redesign has made, you will be able to tell whether that change is statistically significant. This means you will know the probability that the change you have seen is due to the test rather than random chance. This can help minimize the risk that redesigns often bring.  
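
As a rough illustration of what that significance check involves, here is a minimal sketch with made-up numbers, using statsmodels’ two-proportion z-test as just one of several valid approaches:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B results: [control, redesign] conversions and visitors
conversions = [1_840, 1_985]
visitors = [40_000, 40_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"p-value: {p_value:.3f}")
# If the p-value is below your chosen threshold (commonly 0.05), the observed uplift
# is unlikely to be explained by random chance alone.
```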

Once you have your results you can then choose whether you want to launch the redesign to 100% of users, which you can do through the testing tool whilst you wait for the changes to be hardcoded. As the redesign has already been built for the A/B test, hardcoding it should be a lot quicker!

Step 5 – Evaluative research to validate how your redesign performs 

Research shouldn’t stop once the redesign has been launched. We recommend conducting post-launch analysis to evaluate how it performs over time. This especially helps measure metrics that have a longer lead time, such as returns or cancellations.

Redesigns are susceptible to visitor bias, as rolling out a completely different experience can be shocking and uncomfortable for your returning visitors. They are also susceptible to novelty effects, where users react more positively just because something looks new and shiny. In either case, these effects will wear off with time. That’s why it’s important to monitor performance after its deployment.

Things to look out for: 

  • Bounce rate 
  • On-page metrics (scroll rate, click-throughs, heatmap, mouse tracking) 
  • Conversion rate 
  • Funnel progression 
  • Difference in performance for new vs. returning users 
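To illustrate the last point in the list above, here is a small Python sketch of how post-launch data exported from an analytics tool might be segmented to compare new and returning visitors. The row format and numbers are hypothetical.

```python
# Hypothetical export: one row per visitor with a segment label and a conversion flag.
visits = [
    {"segment": "new", "converted": True},
    {"segment": "returning", "converted": False},
    {"segment": "returning", "converted": True},
    # ...in practice, thousands of rows from your analytics export
]

def conversion_by_segment(rows):
    """Conversion rate per segment, e.g. new vs. returning visitors."""
    totals, conversions = {}, {}
    for row in rows:
        seg = row["segment"]
        totals[seg] = totals.get(seg, 0) + 1
        conversions[seg] = conversions.get(seg, 0) + int(row["converted"])
    return {seg: conversions[seg] / totals[seg] for seg in totals}

print(conversion_by_segment(visits))
# A new-vs-returning gap that narrows week after week is a sign of novelty effects
# or visitor bias wearing off, rather than a genuine uplift.
```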

Redesigns are all about preparation. The process may seem thorough, but it should be for such a big change. If you follow the right process you could dramatically increase sales and conversions, but if it’s done wrong you may waste serious time, effort and money. Don’t skimp on the research, keep a user-centred approach, and you could create a website your audience loves.

If you want to find out more about how a redesign worked for a real customer of AB Tasty and REO, take a look at this webinar where La Redoute details how they tested the redesign of their site and sought continuous improvement.

The post How to Rebrand Your Site Using Experimentation in 5 Easy Steps appeared first on abtasty.

]]>
How to Better Handle Collateral Effects of Experimentation: Dynamic Allocation vs Sequential Testing https://www.abtasty.com/blog/dynamic-allocation-sequential-testing/ Thu, 07 Dec 2023 10:12:48 +0000 https://www.abtasty.com/?p=135510 When talking about web experimentation, the topics that often come up are learning and earning. However, it’s important to remember that a big part of experimentation is encountering risks and losses. Although losses can be a touchy topic, it’s important […]

The post How to Better Handle Collateral Effects of Experimentation: Dynamic Allocation vs Sequential Testing appeared first on abtasty.

]]>
When talking about web experimentation, the topics that often come up are learning and earning. However, it’s important to remember that a big part of experimentation is encountering risks and losses. Although losses can be a touchy topic, it’s important to talk about and destigmatize failed tests in experimentation because it encourages problem-solving, thinking outside of your comfort zone and finding ways to mitigate risk. 

Therefore, we will take a look at the shortcomings of classic hypothesis testing and explore other options. Basic hypothesis testing follows a rigid protocol:

  • Creating the variation according to the hypothesis
  • Waiting a given amount of time 
  • Analyzing the result
  • Decision-making (implementing the variant, keeping the original, or proposing a new variant)

This rigid protocol and simple approach to testing says nothing about how to handle losses, which raises the question: what happens if something goes wrong? Additionally, the classic statistical tools used for analysis are not meant to be used before the end of the experiment.

If we consider a very general rule of thumb, let’s say that out of every 10 experiments, 8 will be neutral (show no real difference), one will be positive, and one will be negative. Practicing classic hypothesis testing suggests that you just accept this as a collateral effect of the optimization process, hoping it evens out in the long term. It can feel like crossing a street blindfolded.

For many, that may not cut it. Let’s take a look at two approaches that try to better handle this problem: 

  • Dynamic allocation – also known as a “Multi-Armed Bandit” (MAB). This is where traffic allocation changes for each variation according to its performance, implicitly lowering the losses.
  • Sequential testing – a method that allows you to stop a test as soon as possible, given a risk aversion threshold.

These approaches are statistically sound, but they come with their own assumptions. We will go through their pros and cons within the context of web optimization.

First, we’ll look into the classic version of these two techniques and their properties and give tips on how to mitigate some of their problems and risks. Then, we’ll finish this article with some general advice on which techniques to use depending on the context of the experiment.

Dynamic allocation (DA)

Dynamic allocation’s main idea is to use statistical formulas that modify the number of visitors exposed to a variation depending on that variation’s performance.

This means a poor-performing variation will end up with little traffic, which can be seen as a way to save conversions while still searching for the best-performing variation. The formulas strike the best compromise between avoiding losses and finding the genuinely best-performing variation. However, this relies on a lot of assumptions that are not always met, which can make DA a risky option.
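One common family of formulas behind dynamic allocation is Thompson Sampling on a Beta-Bernoulli model. The sketch below is a minimal illustration under that assumption, with made-up counts; it is not AB Tasty’s actual algorithm. Note that a visitor who has not converted yet is counted as a failure here, which is exactly the delayed-conversion issue discussed next.

```python
import random

def thompson_allocate(stats):
    """Choose which variation the next visitor sees, using Thompson Sampling.

    `stats` maps each variation to (conversions, visitors) observed so far.
    """
    draws = {}
    for name, (conversions, visitors) in stats.items():
        failures = visitors - conversions   # includes visitors who simply haven't converted *yet*
        # Sample a plausible conversion rate from each arm's Beta posterior.
        draws[name] = random.betavariate(1 + conversions, 1 + failures)
    return max(draws, key=draws.get)        # the highest sampled rate wins this visitor

stats = {"original": (50, 1_000), "variation_b": (65, 1_000)}
print(thompson_allocate(stats))  # "variation_b" most of the time, but not always
```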

There are two main concerns, both of which are linked to the time aspect of the experimentation process: 

  • The DA formula does not take time into account 

If there is a noticeable delay between variation exposure and conversion, the algorithm may go wrong: a visitor is considered a ‘failure’ until they convert, so the period between a visit and its conversion is falsely counted as a failure.

The DA will therefore feed the wrong conversion information into its formula: any variation gaining traffic will automatically see a (false) performance drop, because the algorithm detects a growing number of not-yet-converted visitors. As a result, traffic to that variation will be reduced.

The reverse may also be true: a variation with decreasing traffic receives no new visitors, while its existing visitors may eventually convert. The results would then indicate a (false) rise in conversions even though there are no new visitors, which is highly misleading.

DA gained popularity in the advertising industry, where the delay between an ad exposure and its potential conversion (a click) is short, which is why it works perfectly well in that context. In CRO, Dynamic Allocation must be used in a low-conversion-delay context only.

In other words, DA should only be used in scenarios where visitors convert quickly. It’s not recommended for e-commerce except for short-term campaigns such as flash sales, or when there’s not enough traffic for a classic A/B test. It can also be used when the conversion goal is a click on an ad on a media website.

  • DA and the different days of the week 

It’s very common to see different visitor behavior depending on the day of the week. Typically, customers may behave differently on weekends than during weekdays.  

With DA, you may sample days unevenly, implicitly giving some days more weight for some variations. However, each day should carry the same weight because, in reality, each day of the week occurs equally often. You should only use Dynamic Allocation if you know that the optimized KPI is not sensitive to fluctuations across the week.

The conclusion is that DA should be considered only when you expect too few total visitors for classic A/B testing. Another requirement is that the KPI under experimentation has a very short conversion time and no dependence on the day of the week. Taking all this into account, Dynamic Allocation should not be used as a way to secure conversions.

Sequential Testing (ST)

Sequential Testing uses a specific statistical formula that lets you stop an experiment early, based on the performance of the variations, with given guarantees on the risk of false positives.

The Sequential Testing approach is designed to secure conversions by stopping a variation as soon as its underperformance is statistically proven. 
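The exact formulas differ between tools, but the flavour of such a stopping rule can be shown with a classic Wald sequential probability ratio test (SPRT). The sketch below is a deliberately simplified, one-sided illustration: the control rate is treated as known, the minimal detectable effect is an assumed 10% relative drop, and in practice the boundary would be re-checked as each new visitor arrives. It is not the formula used by AB Tasty or any specific platform.

```python
from math import log

def sprt_stop_loser(conversions, visitors, p_control, rel_mde=0.10,
                    alpha=0.05, beta=0.20):
    """Should this variation be stopped as a loser? Returns 'stop', 'keep' or 'continue'.

    Tests H0: rate == p_control against H1: rate == p_control * (1 - rel_mde).
    """
    p0, p1 = p_control, p_control * (1 - rel_mde)
    failures = visitors - conversions
    # Log-likelihood ratio of "underperforms by the MDE" versus "same as control".
    llr = conversions * log(p1 / p0) + failures * log((1 - p1) / (1 - p0))
    upper = log((1 - beta) / alpha)   # crossing this proves underperformance
    lower = log(beta / (1 - alpha))   # crossing this clears the variation
    if llr >= upper:
        return "stop"        # statistically proven loser: cut it early and save conversions
    if llr <= lower:
        return "keep"        # unlikely to be losing by at least the MDE
    return "continue"        # not enough evidence yet, keep collecting data

# Hypothetical figures: the control converts at 5%, the variation at 3.8% so far.
print(sprt_stop_loser(conversions=380, visitors=10_000, p_control=0.05))  # "stop"
```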

However, it still has some limitations when it comes to effect size estimation. The estimated effect may be wrong in two senses:

  • Bad variations will be seen as worse than they really are. This is not a problem in CRO because the false-positive risk is still controlled: in the worst case, you discard not a strictly losing variation but merely an even one, which is still acceptable in CRO.
  • Good variations will be seen as better than they really are. This can be a problem in CRO, since not all winning variations are useful for the business and the effect size estimate is key to business decision-making. It is easily mitigated by using sequential testing to stop losing variations only; winning variations should run until the planned end of the experiment, ensuring both a correct effect size estimate and even sampling of each day of the week.
    It’s important to note that not all CRO software uses this hybrid approach. Most tools use ST to stop both winning and losing variations which, as we’ve just seen, is a mistake.

As we’ve seen, by stopping a losing variation in the middle of the week, there’s a risk of discarding a variation that could have turned out to be a winner.

However, for a variation to end up winning after ST has shown it to be underperforming, it would first need to perform well enough to catch up with the reference, and then well enough to overtake it, all within a few days. This scenario is highly unlikely.

Therefore, it’s safe to stop a losing variation with Sequential Testing, even if all weekdays haven’t been evenly sampled.

The best of both worlds in CRO 

Dynamic Allocation is a better approach to experimentation than static allocation when you expect a small volume of traffic. It should be used only in the context of short-delay KPIs with no known weekday effect (for example, flash sales). However, it’s not a way to mitigate risk in a CRO strategy.

To run experiments with all the needed guarantees, you need a hybrid system that uses Sequential Testing to stop losing variations and a classic fixed-horizon analysis to call winning ones. This approach gives you the best of both worlds.
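As a rough illustration of that hybrid policy, the sketch below reuses the two functions from the earlier sketches (`sprt_stop_loser` and `two_proportion_z_test`). It is an assumed decision flow written for this article, not AB Tasty’s actual decision engine, and the 5% significance threshold and planned sample size are arbitrary.

```python
def hybrid_decision(conversions, visitors,
                    control_conversions, control_visitors,
                    planned_visitors_per_arm):
    """Hybrid policy: sequential test to cut losers early, fixed horizon for winners."""
    control_rate = control_conversions / control_visitors
    # 1. At any point, the sequential test may prove the variation is a loser.
    if sprt_stop_loser(conversions, visitors, p_control=control_rate) == "stop":
        return "stop now: proven loser, secure the remaining conversions"
    # 2. Otherwise, potential winners wait for the planned end (whole weeks sampled).
    if visitors < planned_visitors_per_arm:
        return "continue: winners run until the planned end of the experiment"
    # 3. Only then run the classic fixed-horizon analysis, giving a reliable effect size.
    lift, p_value = two_proportion_z_test(control_conversions, control_visitors,
                                          conversions, visitors)
    return f"winner ({lift:+.1%})" if p_value < 0.05 and lift > 0 else "no winner declared"
```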


The post How to Better Handle Collateral Effects of Experimentation: Dynamic Allocation vs Sequential Testing appeared first on abtasty.

]]>
AB Tasty & ActionIQ: Introducing a Powerful New Integration for Enhanced Personalization and Customer Experiences https://www.abtasty.com/blog/ab-tasty-actioniq-introducing-a-powerful-new-integration-for-enhanced-personalization-and-customer-experiences/ Wed, 06 Dec 2023 18:04:40 +0000 https://www.abtasty.com/?p=135005 We are excited to announce a new integration between AB Tasty, the experience optimization company, and ActionIQ, a leading customer data platform (CDP) provider. This integration brings together the power of AI-powered experimentation and personalization from AB Tasty with ActionIQ’s […]

The post AB Tasty & ActionIQ: Introducing a Powerful New Integration for Enhanced Personalization and Customer Experiences appeared first on abtasty.

]]>
We are excited to announce a new integration between AB Tasty, the experience optimization company, and ActionIQ, a leading customer data platform (CDP) provider. This integration brings together the power of AI-powered experimentation and personalization from AB Tasty with ActionIQ’s robust CDP capabilities, enabling brands to deliver highly relevant and personalized customer experiences at scale.

AB Tasty and ActionIQ power advanced personalization and experimentation use cases by enriching user profiles with 3rd party data. Together, we turn anonymous browsers into known visitors and known visitors into loyal customers.

ActionIQ creates a persistent, consolidated record of all customer attributes and data, giving brands a complete picture of individual customers and their behaviors. This data can be used to identify bottlenecks and understand customer preferences, behaviors, and needs.

Using ActionIQ with AB Tasty also simplifies your experimentation. A CDP reduces the data wrangling required for each experiment, making it easy to create the segments and suppression groups on which to experiment. Experiment variants can be created based on customer attributes and behaviors across systems, including entry channel, initial product purchased, content consumed, and current sales funnel stage. Variant behavior is then gathered from source systems and third-party enhancements for analysis.

Moreover, making improvements to the customer experience has never been easier. You can use the insights gained from experimentation to improve customers’ experience while they’re interacting with your digital products. This can be done by experimenting with new product features, content, and workflows based on data from your CDP.

But wait, there’s more! ActionIQ and AB Tasty can increase your team’s productivity. By integrating a CDP with an experimentation platform, you can accelerate time to value with new tools and democratize data access, delivering and maintaining clean, complete data feeds to different business stakeholders’ systems of choice via pre-built connectors and/or API, without depending on engineering.

We all know the importance of personalization, but it’s always easier said than done. With ActionIQ and AB Tasty, you can drive personalized marketing efforts: by centralizing all customer data in one place, CDPs enable businesses to better understand their customers’ preferences, behaviors, and needs, resulting in more targeted marketing efforts and improved customer satisfaction. This can be further enhanced by using the insights gained from experimentation to create personalized campaigns that are unique to your customers.

What else can ActionIQ and AB Tasty do?

  • Revolutionize Brand and Product Experiences: With AB Tasty’s AI-powered experimentation and personalization, combined with ActionIQ’s CDP, brands can revolutionize their customer experiences, driving results and placing their customers at the center of their digital strategies.
  • Omnichannel Approach to Customer Experiences: AB Tasty’s omnichannel approach to customer experiences, when combined with ActionIQ’s CDP, allows brands to deliver personalized and relevant content across various touchpoints, creating a comprehensive view of the customer for personalized experiences.
  • Increase Efficiency and Conversion Rate: AB Tasty’s low-code to no-code widgets provide unrivaled time-to-value and ROI, while ActionIQ’s CDP helps brands to scale their first-party data in an agile way, increasing efficiency and conversion rates.
  • Empower Business Teams with Valuable Customer Insights: ActionIQ’s CDP enables brands to gain valuable customer insights, while AB Tasty’s experimentation platform allows teams to innovate and make data-driven decisions, empowering them to deliver impactful cross-channel customer experiences in real-time.

This new integration between AB Tasty and ActionIQ empowers brands to transform their customer experiences, driving results, and placing their customers at the center of their digital strategies. To learn more about how this integration can benefit your business, request a demo from AB Tasty or ActionIQ today.

The post AB Tasty & ActionIQ: Introducing a Powerful New Integration for Enhanced Personalization and Customer Experiences appeared first on abtasty.

]]>