
Overview

Sort Orders determine the sequence in which products appear on collection pages. You can fine-tune product visibility using a combination of conditions, product attributes, and weighted attribute groups. Together, these controls help align product sorting with your marketing strategies and customer preferences.
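To make this concrete, the sketch below models a sort order as plain data, with boosting/demoting conditions and weighted attribute groups. The TypeScript shape and field names are illustrative only; they are not the actual configuration schema.

```typescript
// Illustrative only: a plain-data model of a sort order built from
// conditions, product attributes, and weighted attribute groups.
// Field names are assumptions, not the real configuration schema.

interface SortCondition {
  attribute: string;              // e.g. a product tag
  equals: string;
  effect: "boost" | "demote";
  strength: number;               // how strongly the condition shifts the score
}

interface WeightedAttributeGroup {
  name: string;
  weight: number;                 // relative influence of this group
  attributes: string[];           // numeric product attributes it reads
}

interface SortOrderDefinition {
  name: string;
  conditions: SortCondition[];
  weightedGroups: WeightedAttributeGroup[];
}

// Example: favour sales velocity and margin, and nudge new arrivals up.
const newInFirst: SortOrderDefinition = {
  name: "New in, bestsellers next",
  conditions: [
    { attribute: "tag", equals: "new-arrival", effect: "boost", strength: 2 },
    { attribute: "tag", equals: "clearance", effect: "demote", strength: 1 },
  ],
  weightedGroups: [
    { name: "Commercial", weight: 0.7, attributes: ["salesVelocity", "margin"] },
    { name: "Freshness", weight: 0.3, attributes: ["daysSinceLaunch"] },
  ],
};
```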

Sort Order Architecture

Sort Orders build on the attribute foundation and can be enhanced with sequences, while remaining subject to merchandising rule overrides.
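One way to picture this layering is as a small scoring pipeline: attribute-derived scores feed the sort order, sequences can pin products to fixed positions, and merchandising rules override last. The sketch below is a simplified, assumed model, not the platform's internal ranking logic.

```typescript
// Simplified, assumed model of the layering: attribute scores feed the
// sort order, sequences pin products to fixed slots, and merchandising
// rules get the final say.

interface ScoredProduct {
  id: string;
  score: number; // produced by the sort order from product attributes
}

type Sequence = Map<string, number>;                      // productId -> pinned position (0-based)
type MerchandisingRule = (ranked: string[]) => string[];  // final override

function rank(
  products: ScoredProduct[],
  sequence: Sequence,
  rules: MerchandisingRule[],
): string[] {
  // 1. Sort by the attribute-derived score (highest first).
  let ranked = [...products].sort((a, b) => b.score - a.score).map(p => p.id);

  // 2. Apply sequence pins: move pinned products to their fixed slots.
  for (const [id, pos] of [...sequence.entries()].sort((a, b) => a[1] - b[1])) {
    const i = ranked.indexOf(id);
    if (i !== -1) {
      ranked.splice(i, 1);
      ranked.splice(Math.min(pos, ranked.length), 0, id);
    }
  }

  // 3. Merchandising rules override last.
  for (const rule of rules) ranked = rule(ranked);
  return ranked;
}
```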

Why Sort Orders Matter

Sort Orders shape how products are presented on collection pages so shoppers find the right items faster. Use conditions, attributes, and weighted groups to reflect your brand strategy and customer preferences. Step-by-step instructions are covered in the sections below.

Best Practices

  • Balance Weights: Avoid extreme weight differences so that no single attribute dominates the ranking (see the sketch after this list).
  • Monitor Impact: Regularly review conversion rates and customer behavior to refine your sorting strategies.
  • Use Conditions Wisely: Apply boosting and demoting logic sparingly and keep it aligned with your merchandising goals.
For further details, see the Browse API documentation or contact our support team.
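One simple way to keep weights balanced is to normalize them so they sum to 1 and flag a large spread between the largest and smallest weight. The helper below is an assumed sketch, not a platform API.

```typescript
// Assumed helper, not a platform API: normalize group weights so they
// sum to 1 and warn when one group would dominate the others.
function balanceWeights(weights: Record<string, number>, maxRatio = 5): Record<string, number> {
  const values = Object.values(weights);
  const total = values.reduce((sum, w) => sum + w, 0);
  if (total === 0) return weights;

  const max = Math.max(...values);
  const min = Math.min(...values);
  if (min > 0 && max / min > maxRatio) {
    console.warn(`Largest weight is ${(max / min).toFixed(1)}x the smallest; consider narrowing the gap.`);
  }

  // Normalize so relative influence is explicit (weights sum to 1).
  return Object.fromEntries(
    Object.entries(weights).map(([name, w]) => [name, w / total]),
  );
}

// Example: a 3:1:1 spread stays well inside the 5x guardrail.
console.log(balanceWeights({ commercial: 6, freshness: 2, margin: 2 }));
// -> { commercial: 0.6, freshness: 0.2, margin: 0.2 }
```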

Sort Order Experiments

Sort Order Experiments allow you to A/B test different sort orders on your collection pages to determine which arrangement of products leads to better engagement and conversion rates.

Overview

With Sort Order Experiments, you can:
  • Create A/B tests for different sort orders on collection pages
  • Target specific collections or all collections
  • View detailed metrics on how each variation performs
  • Make data-driven decisions about which sort orders to implement
Sort Order Experiments are created and managed directly within the Sort Orders section of your Layers dashboard. Each experiment compares a base sort order (Variant A) with a variant sort order (Variant B) to help you determine which arrangement of products performs better.

Running Experiments

Use experiments to compare a base sort order with a variant and see which performs better. Set a traffic split, choose target collections, and monitor results to make data‑driven decisions.
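Conceptually, a traffic split can be modelled as deterministic bucketing: the same visitor always lands in the same group. The sketch below hashes a visitor ID into a bucket as one common approach; it is an assumption, not a description of how the platform assigns visitors.

```typescript
// Assumed sketch: deterministically assign a visitor to Variant A or B
// based on a stable visitor ID and the configured traffic split.
function assignVariant(visitorId: string, variantBShare = 0.5): "A" | "B" {
  // Simple string hash (FNV-1a) so the same visitor always gets the same variant.
  let hash = 0x811c9dc5;
  for (let i = 0; i < visitorId.length; i++) {
    hash ^= visitorId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  const bucket = (hash >>> 0) / 0xffffffff; // map to [0, 1]
  return bucket < variantBShare ? "B" : "A";
}

// The same visitor ID always resolves to the same variant:
console.log(assignVariant("visitor-42")); // "A" or "B", stable across calls
```

With `variantBShare` set to 1 (a 100% split), every eligible visitor sees Variant B, which is the basis for the UTM-based sorting technique described under Advanced Usage.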

Experiment Settings

  • Name: A descriptive name to identify your experiment
  • Base Sort Order: The current sort order that will be shown to the control group
  • Variant Sort Order: The new sort order that will be shown to the test group
  • Collections: Select all collections, a single collection, or multiple specific collections to target
  • Traffic Split: Percentage of traffic that will see each variation (default is 50/50)
  • Advanced Targeting Conditions: Optional filters to determine which visitors are eligible for the experiment (e.g., UTM parameters)
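Taken together, an experiment's settings could be represented roughly as the object below. The shape and field names are illustrative assumptions that mirror the list above, not the dashboard's actual schema.

```typescript
// Illustrative shape mirroring the settings above; not the real schema.
interface SortOrderExperiment {
  name: string;
  baseSortOrder: string;                  // Variant A (control)
  variantSortOrder: string;               // Variant B (test)
  collections: "all" | string[];          // all collections, or specific collection handles
  trafficSplit: { a: number; b: number }; // percentages, default 50/50
  targetingConditions?: { utmSource?: string; utmCampaign?: string };
}

const springExperiment: SortOrderExperiment = {
  name: "Spring bestsellers vs. freshness-first",
  baseSortOrder: "bestsellers",
  variantSortOrder: "freshness-first",
  collections: ["spring-collection"],
  trafficSplit: { a: 50, b: 50 },
};
```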

Monitoring Experiments

Once your experiment is running, you can monitor its performance in the experiment results section of the sort order:
  • Visitors: Number of visitors in each group
  • Views: Product views in each group
  • Clicks: Product clicks in each group
  • Add to Carts: Number of products added to cart in each group
  • Purchases: Number of purchases in each group
  • Conversion Rate: Percentage of visitors who made a purchase
  • Confidence Level: Statistical confidence in the results
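Conversion rate is purchases divided by visitors, and the confidence level expresses how unlikely the observed difference is to be due to chance. The sketch below uses a standard two-proportion z-test as one common way to derive such a figure; the platform's exact statistical method is not specified here.

```typescript
// Example only: conversion rate per group and a two-proportion z-test
// as one standard way to express confidence that B differs from A.
interface GroupStats { visitors: number; purchases: number; }

function conversionRate(g: GroupStats): number {
  return g.visitors === 0 ? 0 : g.purchases / g.visitors;
}

function confidenceLevel(a: GroupStats, b: GroupStats): number {
  const pA = conversionRate(a);
  const pB = conversionRate(b);
  const pooled = (a.purchases + b.purchases) / (a.visitors + b.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  if (se === 0) return 0;
  const z = Math.abs(pB - pA) / se;

  // Two-sided confidence from the normal CDF, via an erf approximation.
  const erf = (x: number) => {
    const t = 1 / (1 + 0.3275911 * Math.abs(x));
    const y = 1 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
    return x >= 0 ? y : -y;
  };
  return erf(z / Math.SQRT2); // e.g. 0.95 means 95% confidence
}

// Example: 4.0% vs 5.0% conversion over 2,000 visitors per group.
console.log(confidenceLevel({ visitors: 2000, purchases: 80 }, { visitors: 2000, purchases: 100 })); // ~0.87
```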

Ending an Experiment

You can end an experiment at any time:
  1. Navigate to the Sort Orders section in your Layers dashboard
  2. Select the sort order containing the experiment
  3. Click on the “Experiments” tab
  4. Click “End Experiment” for the experiment you want to conclude
  5. Review the final results
  6. Implement the winning sort order if desired

Advanced Usage

UTM-Based Sorting

You can use experiments with a 100% traffic split and advanced targeting conditions to show different sort orders based on UTM parameters. This allows you to create specialized landing experiences for different marketing campaigns. For example, you could set up an experiment that shows a special sort order only to visitors coming from a specific email campaign by targeting the UTM source and campaign parameters.
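As a concrete illustration of the email-campaign case, the sketch below parses UTM parameters from a landing URL and checks them against a hypothetical targeting condition. The condition shape is an assumption, not the dashboard's actual targeting format.

```typescript
// Assumed sketch: decide whether a visitor matches a UTM-based targeting
// condition, so only campaign traffic sees the special sort order.
interface UtmTargeting { utmSource?: string; utmCampaign?: string; }

function matchesUtmTargeting(landingUrl: string, targeting: UtmTargeting): boolean {
  const params = new URL(landingUrl).searchParams;
  const source = params.get("utm_source");
  const campaign = params.get("utm_campaign");
  if (targeting.utmSource && source !== targeting.utmSource) return false;
  if (targeting.utmCampaign && campaign !== targeting.utmCampaign) return false;
  return true;
}

// Only visitors arriving from the spring email blast match this condition:
console.log(matchesUtmTargeting(
  "https://shop.example.com/collections/spring?utm_source=email&utm_campaign=spring-sale",
  { utmSource: "email", utmCampaign: "spring-sale" },
)); // true
```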

Automatic Application

The system automatically applies experiments across all APIs where the sort order is used. This means that once an experiment is active, it will be applied consistently whether visitors are browsing collections, using search, or viewing recommendations.

Best Practices

  • Run to significance: Allow experiments to run until they reach statistical significance (usually indicated by a high confidence level).
  • Test one change at a time: For the clearest results, test only one change per experiment. If you want to test multiple sort orders, run separate experiments.
  • Account for seasonality: Seasonal factors may influence your results, so run experiments during periods that are representative of your business.
  • Look beyond a single metric: Don't focus solely on one metric; look at the full picture, including views, clicks, add-to-carts, and purchases.
