Streamlining Markdown Pricing with a £10 Million Revenue Uplift


A global retailer wanted to streamline how it set markdown prices across all of its stores. Manual processes made this inefficient, costing both time and money. By automating A/B tests for machine-learning-based markdown pricing, they quickly identified and rolled out the most effective pricing strategies, potentially increasing their annual sales by £10 million.

The challenge: Manual A/B testing led to inefficient markdown pricing strategies

Our client used product markdowns to clear inventory and encourage customers to buy products that were slow-selling or going out of season. However, manual A/B testing of markdowns was inefficient, producing avoidable losses that better pricing could recover.

The retailer aimed to automate how markdown prices were determined, but faced internal scepticism about the relevance of automated procedures. The client sought proof that automation was the company's future. That was when they reached out to VirtusLab.

The plan was to set distinct prices per store: merchandisers would conduct manual markdowns in one group of stores while machine learning models priced another, and the outcomes would then be compared to identify the most effective strategy. The idea was to prove the pricing model's efficiency by running A/B tests of the different pricing strategies in real stores.
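The core of such a test is assigning stores to a control group (manual merchandiser markdowns) and a treatment group (ML-model markdowns). A minimal sketch of that split, assuming a simple random assignment (the function and group names are illustrative, not the client's actual service):

```python
import random

def split_stores(store_ids, treatment_fraction, seed=42):
    """Randomly assign stores to a treatment group (ML-priced markdowns)
    and a control group (manual merchandiser markdowns)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = store_ids[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * treatment_fraction)
    return {"treatment": shuffled[:cut], "control": shuffled[cut:]}

# e.g. split 100 stores evenly between the two pricing strategies
groups = split_stores([f"store-{i}" for i in range(100)], treatment_fraction=0.5)
```

A fixed random seed keeps the assignment reproducible, so a test can be re-run or audited against the same store split.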

The solution: Automating A/B Testing for optimising markdown pricing strategies

VirtusLab developed a framework to streamline and fully automate A/B tests. Once the Automated Testing and Orchestration (ATO) service is given a list of products and models to compare, it runs the A/B tests entirely autonomously.

This included training the models, optimising the prices, selecting stores, uploading different prices to stores, and retrieving the test results from an API endpoint once the tests were complete. 

We successfully integrated various components, such as different APIs for store pricing, ML pipelines for model training and obtaining optimised results, and a store selection service.

The workflow of our framework

1. The user defines the A/B test: which models to compare, which products are included, and what percentage of stores goes into each group.

2. ATO automatically selects the stores and determines the prices for each group of the A/B test.

3. ATO uploads the prices to the stores.

4. The user retrieves the results of the A/B test.
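The four steps above could be driven by a thin orchestration layer like the following sketch. All class and method names are hypothetical, assumed for illustration; the real ATO integrates with the client's pricing APIs, ML pipelines, and store selection service.

```python
from dataclasses import dataclass

@dataclass
class ABTestConfig:
    """User-supplied definition of an A/B test (step 1)."""
    models: list           # model identifiers to compare
    products: list         # product SKUs under markdown
    store_fractions: dict  # group name -> fraction of stores

class ATO:
    """Hypothetical orchestrator mirroring the four-step workflow."""

    def __init__(self, config):
        self.config = config
        self.groups = {}
        self.prices = {}

    def select_stores(self, store_ids):
        # Step 2a: partition the stores according to the requested fractions.
        remaining = list(store_ids)
        for group, fraction in self.config.store_fractions.items():
            n = int(len(store_ids) * fraction)
            self.groups[group], remaining = remaining[:n], remaining[n:]

    def determine_prices(self):
        # Step 2b: in the real system, each group's model produces
        # optimised markdown prices; here we stub the model call.
        for group in self.groups:
            self.prices[group] = {sku: "model-priced" for sku in self.config.products}

    def upload_prices(self):
        # Step 3: push each group's prices to its stores' pricing APIs (stubbed).
        return {group: len(stores) for group, stores in self.groups.items()}

    def results(self):
        # Step 4: retrieve outcomes from the results endpoint (stubbed).
        return {group: {"stores": len(stores)} for group, stores in self.groups.items()}

# Usage: define a test, then let the orchestrator run the steps.
config = ABTestConfig(
    models=["ml_markdown_v2", "manual_baseline"],
    products=["sku-001", "sku-002"],
    store_fractions={"treatment": 0.5, "control": 0.5},
)
ato = ATO(config)
ato.select_stores([f"store-{i}" for i in range(10)])
ato.determine_prices()
```

Keeping the test definition as plain data (the config object) is what makes the workflow repeatable: a new test is just a new config, not a new manual setup.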

The results of automated A/B tests

Automated A/B tests enabled our client to:

Increase the frequency of A/B tests from two tests a year to monthly testing.
Explore new models and improve existing ones to set optimal prices, leading to a significant revenue increase of £10 million per year.
Save time and improve the efficiency of the client’s teams: trial setup time was reduced from weeks to just a few days.
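At its simplest, evaluating a finished test comes down to comparing revenue between the two groups. An illustrative calculation only, not the client's actual evaluation methodology:

```python
def revenue_uplift(treatment_revenue, control_revenue):
    """Relative uplift of treatment-group revenue over the control group."""
    return (treatment_revenue - control_revenue) / control_revenue

# e.g. treatment stores earned £1.05M against £1.00M in control stores
uplift = revenue_uplift(1_050_000, 1_000_000)
print(f"{uplift:.1%}")  # → 5.0%
```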
The Tech Stack

Languages: Python

Database: PostgreSQL

CI/CD: Azure DevOps, GitHub Actions

Infrastructure: Azure (cloud), Azure App Service, Azure ML, PostgreSQL, Terraform