
Documentation Index

Fetch the complete documentation index at: https://docs.uselayers.com/llms.txt

Use this file to discover all available pages before exploring further.

Overview

Metrics in Layers are quantifiable measurements of product performance that enable data-driven merchandising and sorting decisions. The platform supports two types of metrics, each designed for different use cases and data sources.
The dashboard now refers to saved metrics as Reports; the list lives under Reports in the sidebar. The underlying engine, query language, and APIs are unchanged — every metric you already created continues to work, and everything in this guide still applies.

Metric types

LayersQL metrics

LayersQL metrics are computed from Layers analytics data, including behavior captured through the Storefront Pixel and search interactions. These metrics provide near real-time insights into how customers engage with your products through the Layers platform. Key characteristics:
  • Built using the LayersQL query language
  • Powered by Layers’ behavioral analytics data
  • Support three datasets: products (product performance), search (search analytics), and collections (collection browsing analytics)
  • Support arithmetic expressions for calculated metrics (e.g., SUM(total_sales) + SUM(quantity_purchased))
  • Support WHERE clause filtering before aggregation (e.g., WHERE geo_country = 'US')
  • Support segmentation by dimensions (country, marketing channel, and so on); see LayersQL Syntax for SEGMENT BY details
  • Real-time computation from search, browse, and interaction events
  • Ideal for metrics like views, add-to-carts, and search-driven conversions
Example LayersQL metric:
FROM products
SHOW SUM(total_sales)
GROUP BY product_id
SINCE -7d
This metric calculates total sales tracked by Layers over the last 7 days, grouped by product.

Variant-level metrics

LayersQL metrics can be grouped by variant_id to track performance at the variant level. This is particularly useful when combined with Variant Breakouts to sort variant tiles by their individual performance.
Example variant-level metric:
FROM products
SHOW SUM(total_sales)
GROUP BY product_id, variant_id
SINCE -7d
When you include both product_id and variant_id in the GROUP BY clause, the metric tracks separate values for each variant. This enables variant-specific sorting and merchandising.
Sorting behavior with variant breakouts. When a variant-level metric is used in a sort order and variant breakouts are enabled:
  • Variant tiles are sorted by their individual variant-level metric values
  • Product tiles are sorted by their product-level metric values (aggregated across all variants)
This conditional sorting ensures that variant tiles rank based on their specific performance while product tiles rank based on overall product performance, providing accurate and relevant sorting for both tile types.
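The conditional sort can be pictured with a small sketch. The flat {(product_id, variant_id): value} map and the tile dicts are illustrative assumptions, not Layers' internal data model:

```python
# Illustrative sketch of conditional sorting with variant breakouts.
# The {(product_id, variant_id): value} metric map and tile dicts are
# assumptions for illustration, not Layers' internal representation.

def tile_sort_value(tile, variant_metric):
    """Return the sort value for a tile.

    Variant tiles use their own variant-level value; product tiles use
    the product-level value aggregated across all variants.
    """
    if tile.get("variant_id") is not None:
        return variant_metric.get((tile["product_id"], tile["variant_id"]), 0.0)
    return sum(
        value
        for (product_id, _variant_id), value in variant_metric.items()
        if product_id == tile["product_id"]
    )

metric = {("p1", "v1"): 120.0, ("p1", "v2"): 30.0, ("p2", "v1"): 80.0}
tiles = [
    {"product_id": "p1"},                      # product tile: aggregated 150.0
    {"product_id": "p1", "variant_id": "v2"},  # variant tile: 30.0
    {"product_id": "p2", "variant_id": "v1"},  # variant tile: 80.0
]
ranked = sorted(tiles, key=lambda t: tile_sort_value(t, metric), reverse=True)
```

Note how the product tile for p1 outranks its own weakest variant tile: each tile type is scored at its own level.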

Imported (ShopifyQL) metrics

Imported metrics leverage Shopify’s native analytics data through ShopifyQL queries. These metrics provide access to Shopify’s comprehensive sales data, including information not available through Layers’ analytics like returns, refunds, and detailed financial metrics. Key characteristics:
  • Built using full ShopifyQL syntax
  • Powered by Shopify’s native sales and analytics data
  • Automatically refreshed on a configurable schedule (hourly, daily, weekly)
  • Access to Shopify-specific metrics like return rates, net sales, and order values
  • Must include product_id in the query results for proper product association
Example Imported metric:
FROM sales
SHOW product_id, returned_quantity_rate
WHERE product_id IS NOT NULL
GROUP BY product_id
SINCE -7d UNTIL today
This metric imports Shopify’s return rate data for products over the last 7 days, refreshed daily.
Important requirements for Imported metrics:
  • The ShopifyQL query must return a product_id column
  • Only one metric value column (besides product_id) should be returned
  • Queries must specify a refresh frequency: hourly, daily, or weekly

Segmented metrics

Segmented metrics allow you to break down performance data by audience attributes like geography or marketing channel. This enables personalized product rankings that automatically adapt to each visitor’s context.

How segmentation works

When you create a metric with segmentation, Layers stores separate values for each dimension. For example, a metric segmented by country tracks performance separately for visitors from the United States, Canada, United Kingdom, and other regions, plus an overall value as a fallback. When this metric is used in a sort order, the system automatically selects the most relevant value for each visitor. A visitor from Canada sees rankings based on Canadian performance when available, otherwise the overall performance. This happens transparently without requiring separate sort orders for each region.
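A segmented metric adds a SEGMENT BY clause to an otherwise ordinary query. The sketch below is an assumption about clause ordering, and it reuses the geo_country field from the filtering example earlier; see LayersQL Syntax for the exact SEGMENT BY grammar:

```
FROM products
SHOW SUM(total_sales)
GROUP BY product_id
SEGMENT BY geo_country
SINCE -7d
```

This would store one sales value per product per country, plus the overall value used as a fallback.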

Why use segmented metrics

Segmented metrics help you deliver more relevant product rankings to different audiences:
  • Geographic personalization: Show products that perform well in each visitor’s country or region
  • Marketing channel optimization: Rank products differently for visitors from paid ads versus organic search
  • Localized merchandising: Adapt rankings based on regional preferences and buying patterns
The system handles all the complexity automatically. You define the segmentation once in your metric, and it works across all sort orders that use that metric.

Fallback behavior

Segmented metrics always include an overall value as a fallback. If a visitor’s context doesn’t match any specific segment, or if a product has no data for that segment, the system uses the overall value. This ensures every visitor sees ranked results even when segment-specific data isn’t available.
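The fallback reduces to a simple lookup. The {segment: value} map per product and the "overall" key below are hypothetical shapes for illustration; Layers resolves this server-side:

```python
# Hypothetical sketch of segment fallback. The per-product
# {segment: value} map and the "overall" key are illustrative,
# not Layers' actual schema.

def resolve_metric(segment_values, visitor_segment):
    """Prefer the visitor's segment value; fall back to the overall value."""
    value = segment_values.get(visitor_segment)
    return value if value is not None else segment_values["overall"]

product_metric = {"US": 42.0, "CA": 17.0, "overall": 30.0}
resolve_metric(product_metric, "CA")  # Canadian visitor: Canadian data
resolve_metric(product_metric, "DE")  # no German data: overall value
```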

Smoothing factor

When using segmented metrics in sort orders, you can configure a smoothing factor to control how segment-specific and global performance data are blended. This parameter helps balance personalization with stability.

The smoothing factor determines the weight given to segment-specific data versus global data when ranking products. Lower values (1-50) prioritize segment-specific performance, making rankings more responsive to local trends. Higher values (100-200) blend in more global data, providing stability when segment data is sparse.

When to adjust the smoothing factor:
  • High-traffic segments with established patterns: Use lower values (25-50) to maximize personalization
  • New or low-traffic segments: Use higher values (100-150) to maintain stable rankings until more data accumulates
  • Testing segmentation: Start with higher values (150-200) and gradually decrease as you validate segment performance
The smoothing factor is configured when adding a segmented metric to a sort order. See Sort Orders for detailed guidance on choosing appropriate values.
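The documentation does not publish the exact blending formula, but the behavior described above matches additive smoothing. The sketch below is an assumption about the math, shown only to build intuition for how the factor trades segment data against global data:

```python
# Assumed additive-smoothing sketch. Layers' actual blending formula is
# not documented here; this only illustrates how a smoothing factor k
# shifts weight between segment-level and global performance.

def blended_score(segment_value, segment_events, global_value, k):
    """Blend segment and global values; larger k leans on global data."""
    return (segment_value * segment_events + global_value * k) / (segment_events + k)

# A segment with 100 events and a strong local score of 10.0,
# against a global score of 2.0:
blended_score(10.0, 100, 2.0, 25)   # low k: stays close to the segment (8.4)
blended_score(10.0, 100, 2.0, 400)  # high k: pulled toward global (3.6)
```

As the segment accumulates more events, its own data dominates regardless of k, which is why higher factors are mainly useful for new or low-traffic segments.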

What you can do with metrics

In search ranking

The Engagement signal group in text search ranking is powered by four query-level metrics:
  • Click-through rate — Distinct product views ÷ impressions for queries in the same semantic cluster.
  • Add-to-cart rate — Distinct add-to-cart events ÷ impressions for queries in the same cluster.
  • Purchase rate — Distinct purchases ÷ impressions for queries in the same cluster.
  • Revenue — Total revenue attributed to this product for queries in the same cluster.
Unlike sort-order metrics that summarize a product’s all-up performance, these signals measure how a product performs for the specific search intent the shopper just expressed. Two products can have similar overall sales velocity but very different engagement when the query is “running shoes” versus “dress shoes” — the ranking model uses these query-level signals to surface whichever product fits the query best.
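Conceptually, the three rates are simple ratios over a cluster's impressions. A minimal sketch with made-up counts (in Layers these are distinct, deduplicated events computed per semantic cluster):

```python
# Illustrative query-cluster engagement rates. The event counts are
# made up; Layers derives them from distinct, deduplicated events
# per semantic cluster.

def engagement_rates(impressions, views, add_to_carts, purchases):
    """Return (CTR, add-to-cart rate, purchase rate) for one product in one cluster."""
    return (
        views / impressions,         # click-through rate
        add_to_carts / impressions,  # add-to-cart rate
        purchases / impressions,     # purchase rate
    )

# 1,000 impressions for this product in the cluster:
ctr, atc_rate, purchase_rate = engagement_rates(1000, 120, 30, 6)
```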

Query clusters

Search queries are grouped into semantic clusters, so similar queries (for example, “summer dress” and “sundress”) share the same engagement data. This pools enough events for new and long-tail queries to produce useful signals from day one.

Blended attribution

Engagement metrics are computed over a 30-day window using blended attribution. Both deterministic events (forwarded to the Beacon API with an attributionToken) and modeled events from the same browsing session contribute to clicks, add-to-carts, purchases, and revenue. This keeps engagement signals accurate even when the attributionToken doesn’t survive the full path to checkout.

You can adjust how much weight the engagement signal group receives relative to other signal groups (semantic, keyword, freshness, inventory) from the Ranking Relevancy page. Within the engagement group, you can also tune the relative importance of each query-level sub-signal — for example, weighting purchase rate above CTR if you care more about conversion than click magnetism.
Query-level engagement signals are recomputed on a rolling schedule and require shoppers to be actively searching and converting. Stores with little search traffic, or storefronts that don’t forward events to the Beacon API, will see less influence from this signal group until enough data accumulates.

In sort orders

Use metrics as sorting criteria to automatically rank products based on performance. Create weighted combinations of multiple metrics to develop sophisticated ranking algorithms that balance different business objectives. Examples:
  • Sort by recent conversion rate to surface high-performing products
  • Combine sales velocity with inventory levels for intelligent restocking
  • Weight revenue metrics alongside engagement metrics for balanced merchandising

In merchandising rules

The merchandising grid displays metric values directly on product cards, giving you real-time performance data while you arrange products. Each card shows the metrics you’ve selected, so you can make informed decisions about pinning, boosting, and grouping without switching between pages.

Choosing which metrics to display

Open Customize UI in the merchandising toolbar to select which metrics appear on product cards. You can add multiple metrics and drag them to reorder how they appear on each card. Your selections persist across sessions, so the grid remembers your preferred layout.

Which metrics are eligible

Not every metric can appear on the merchandising grid. A metric is eligible when it produces a per-product value:
  • Imported (ShopifyQL) metrics are always eligible
  • LayersQL metrics must meet all of the following:
    • Target the products dataset
    • Return a single aggregated value (one SHOW clause)
    • Be grouped by product_id or a product attribute
Metrics that aggregate across products (for example, total search volume) or return multiple values don’t resolve to a single number per product and are excluded from the merchandising grid.
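For example, the following metric satisfies all three conditions (products dataset, a single SHOW aggregation, grouped by product_id) and is therefore eligible for the grid:

```
FROM products
SHOW SUM(quantity_purchased)
GROUP BY product_id
SINCE -30d
```

A query against the search dataset, or one that returns multiple SHOW aggregations, would be excluded.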

Using metrics to guide decisions

With metric overlays enabled, you can:
  • Pin products with high return rates to the bottom of collections
  • Boost products with strong revenue per view in key collections
  • Demote products with declining sales velocity
  • Compare performance across products at a glance while reordering
See Create a merchandising rule for step-by-step instructions on configuring metric overlays.

Inline metrics on blocks and merchandising rules

In addition to dashboard widgets and sort orders, Layers surfaces a compact 7-day performance summary directly on the Blocks and Merchandising rules lists. Each row shows:
  • Impressions — block requests, or merchandising rule applications
  • Clicks — distinct product views attributed to the block or rule
  • CTR — clicks ÷ impressions
  • Add to cart — distinct add-to-cart events attributed to the block or rule
  • Purchases — distinct purchases attributed to the block or rule
Each row also includes a small sparkline showing the daily trend across the last seven days. Counts are deduplicated per session, so a single shopper viewing the same product multiple times still counts as one click.

These metrics are recomputed every four hours and rely on the attribution token returned by the Blocks and Browse APIs. For attribution to work, your storefront must forward click, add-to-cart, and purchase events to the Beacon API with the same attributionToken. Stores that have not yet forwarded any events will see a placeholder until data arrives.

Blended attribution

Layers reports a single blended count for clicks, add-to-carts, and purchases on every block and merchandising rule. The blended figure combines two attribution signals so that you don’t undercount conversions when the attributionToken is missing — for example, when a shopper continues into checkout on a page that doesn’t forward the token. Each event falls into one of three categories:
  • Deterministic — The event was forwarded to the Beacon API with the originating attributionToken. Counted at full weight (1.0).
  • Modeled, high confidence — The event came from the same session as the originating request and happened within two minutes of it. The product also appeared in the original result set. Counted at full weight (1.0).
  • Modeled, medium confidence — Same conditions as high confidence, but the event happened between two and ten minutes after the request. Counted at half weight (0.5) to discount the higher chance the shopper was influenced by something else.
Events tied directly to a token always win — modeled attribution only fills in events that have no token at all. Each modeled event is also assigned to a single request, so the same purchase or click is never double-counted across overlapping blocks or rules. The blended count is what appears in the inline summary, the Performance sheet KPI cards, and chart trends. The performance sheet also breaks the totals down so you can see how much of each metric came from deterministic versus modeled signals when you need a closer look.
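The three categories reduce to a simple weighting rule. The sketch below restates the thresholds and weights from the list above; the function signature itself is illustrative, not the Layers implementation:

```python
# Sketch of the blended-attribution weighting described above.
# The thresholds (2 and 10 minutes) and weights (1.0 / 0.5) come from
# the three categories; the function itself is illustrative.

def event_weight(has_token, same_session, in_result_set, seconds_after):
    """Weight one click/add-to-cart/purchase event for blended counts."""
    if has_token:
        return 1.0  # deterministic: attributionToken present, full weight
    if same_session and in_result_set:
        if seconds_after <= 120:
            return 1.0  # modeled, high confidence: within two minutes
        if seconds_after <= 600:
            return 0.5  # modeled, medium confidence: two to ten minutes
    return 0.0  # not attributed to this request

# A token-less purchase 5 minutes after the request, same session,
# product present in the original result set:
event_weight(False, True, True, 300)  # counted at half weight
```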

Default dashboard metrics

When you create a new store, Layers automatically adds a set of pre-configured metrics to your Overview dashboard. These include aggregate metrics for search volume, orders, revenue, collection views, image search, similar products, and block recommendations — giving you immediate visibility into your storefront performance without any manual setup. Block analytics metrics track how your recommendation blocks perform, including impressions, revenue, conversion rate, click sessions, and top-performing blocks. See Metric Recipes for the full list of block analytics recipes.
Existing stores receive block analytics metrics automatically. They appear on your Overview dashboard alongside your other default metrics.

Visualization and analysis

LayersQL metrics include built-in charts for visualizing your data directly in the dashboard. You can choose from eight chart types — including line, bar, area, donut, and card — to display trends, comparisons, and distributions. Metrics also calculate automatic prior period comparisons, showing percentage changes over time without requiring additional configuration. For details on chart types and configuration, see Creating Metrics & Best Practices.

See also