

Overview

Ranking relevancy gives you direct control over how products are ordered in text search results. Every search result is scored using a combination of signals — semantic similarity, keyword matching, engagement metrics, freshness, and inventory status. You can adjust how much each signal contributes to the final score. You can also create rules that promote, demote, pin, or sort specific products for particular queries. The ranking relevancy system has two main areas:
  • Signal weights control the global balance between the five signal groups that make up every search score.
  • Ranking rules apply conditional adjustments — promoting, demoting, pinning, or sorting products when specific search queries are detected.

Signal groups

Every text search score is composed of five signal groups. Each group contributes a percentage of the total score, and all group weights always sum to 100%.
Signal group | What it measures | Sub-signals
Semantic | How closely product content matches the meaning of the query | Text similarity, image similarity
Keyword | How well exact terms in the query match product fields | Per-field matching across title, description, vendor, and other searchable attributes
Engagement | How customers interact with products for the matched query | Click-through rate, add-to-cart rate, purchase rate, revenue
Freshness | How recently the product was published | Publication date recency
Inventory | Whether the product is in stock | Stock availability
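As a rough sketch, the final text search score can be thought of as a weighted average of the five group scores. The weight values and the 0-to-1 sub-score normalization below are illustrative assumptions, not the platform's actual defaults:

```python
DEFAULT_WEIGHTS = {          # percentages; must always sum to 100
    "semantic": 40,
    "keyword": 30,
    "engagement": 15,
    "freshness": 10,
    "inventory": 5,
}

def blended_score(signal_scores, weights=DEFAULT_WEIGHTS):
    """Weighted average of per-group scores, each normalized to 0..1."""
    assert sum(weights.values()) == 100, "group weights must sum to 100%"
    return sum(w / 100 * signal_scores.get(group, 0.0)
               for group, w in weights.items())

score = blended_score({"semantic": 0.9, "keyword": 0.7, "engagement": 0.4,
                       "freshness": 0.2, "inventory": 1.0})
print(round(score, 2))  # 0.7
```

Shifting weight between groups changes which of these sub-scores dominates the ordering.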

Sub-signals

Within each group, individual sub-signals can also be tuned:
  • Semantic — Adjust the balance between text similarity and image similarity. If your catalog relies heavily on visual discovery, increasing image similarity weight helps surface visually relevant products.
  • Engagement — Adjust the relative importance of click-through rate, add-to-cart rate, purchase rate, and revenue. These are query-level sub-signals: they measure how shoppers reacted when this product appeared for queries in the same semantic cluster as the current search, rather than the product’s overall performance. Each rate is computed over the last 30 days using blended attribution, so deterministic events (forwarded with an attributionToken) and modeled events from the same browsing session both contribute. Weighting purchase rate higher than CTR rewards products that convert for a query, not just products that get clicked.
  • Keyword — Sub-signal weights for keyword matching are derived from your attribute configuration and are displayed as read-only values.

Keyword-matched variant selection

When keyword features target variant-scoped attributes — such as options, variant metafields, or variant fields like SKU — the system automatically identifies the variant with the highest keyword relevance for each product. The first_or_matched_variant field in the API response reflects this best-matching variant rather than defaulting to the first variant by position.

This is useful when your searchable attributes include variant-level data. For example, if you have a keyword weight on options.size and a customer searches for “large,” each product’s response includes the “Large” variant as the matched variant, showing the correct price and availability for that size.

To prevent false matches, variant selection requires keyword scores to exceed a minimum relevance threshold. Superficial text similarities — such as a query like “Jake” producing a trivial match against the option value “Lukas” — are ignored. Only variants with meaningful keyword relevance are selected, so the matched variant genuinely reflects the search intent. If no variant meets the threshold, the system falls back to the next step in the priority chain below.

Variant selection priority: when determining which variant to return, the system evaluates the following in order:
  1. Explicit variant filters — If the request includes option or variant filters, the filtered variant is used.
  2. Keyword-matched variant — If keyword features matched on variant-scoped attributes, the variant with the highest combined keyword score is used.
  3. Pin-level default variant — If the product is pinned (via a merchandising rule or a visual ranking rule) and the pin has default variant options configured, the variant matching those options is used. See default variant for pins.
  4. Default selected options — If defaultSelectedOptions is specified in the request, the first matching variant is used.
  5. Confident vector search match — If semantic search matched a specific variant embedding and the match is confident, that variant is used. Confidence is determined by comparing the variant embedding distance against the product-level embedding distance of the same type (text-to-text or image-to-image). A match is confident when the variant is meaningfully closer to the query than the product-level embedding. When the variant does not improve on the product-level match, the system falls through to the position fallback. For example, “gold” and “silver” variants for a query like “ring” may not add relevance beyond the product title. The same fallback applies when no product-level embedding of the same type exists, since there is no baseline to determine whether the variant adds value.
  6. Position fallback — The first variant by position is used.
Keyword-matched variant selection only applies to text searches where keyword features are configured on variant-scoped attributes. It does not apply to browse, image search, or similar product requests.
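The six-step priority chain can be sketched as a fallback function. All field names, the 0.3 keyword threshold, and the 0.05 embedding-confidence margin are illustrative assumptions; the engine's real internals are not documented here:

```python
KEYWORD_THRESHOLD = 0.3   # illustrative; the real threshold is not documented
CONFIDENCE_MARGIN = 0.05  # illustrative margin for "meaningfully closer"

def first_matching(variants, options):
    """First variant whose options include every requested option value."""
    for v in variants:
        if all(v["options"].get(k) == val for k, val in options.items()):
            return v
    return None

def select_variant(product, request):
    variants = product["variants"]
    # 1. Explicit variant filters on the request win outright.
    if request.get("variant_filters"):
        v = first_matching(variants, request["variant_filters"])
        if v:
            return v
    # 2. Keyword-matched variant, only above the relevance threshold.
    scored = [v for v in variants
              if v.get("keyword_score", 0.0) >= KEYWORD_THRESHOLD]
    if scored:
        return max(scored, key=lambda v: v["keyword_score"])
    # 3. Pin-level default variant options (merchandising or ranking rule).
    if product.get("pin_default_options"):
        v = first_matching(variants, product["pin_default_options"])
        if v:
            return v
    # 4. Request-level defaultSelectedOptions.
    if request.get("defaultSelectedOptions"):
        v = first_matching(variants, request["defaultSelectedOptions"])
        if v:
            return v
    # 5. Confident vector match: the variant embedding must be meaningfully
    #    closer to the query than the product-level embedding of the same type.
    candidates = [v for v in variants if "embedding_distance" in v]
    product_distance = product.get("embedding_distance")
    if candidates and product_distance is not None:
        best = min(candidates, key=lambda v: v["embedding_distance"])
        if best["embedding_distance"] < product_distance - CONFIDENCE_MARGIN:
            return best
    # 6. Position fallback: first variant by position.
    return min(variants, key=lambda v: v["position"])
```

Each step only fires when it produces a definite match; otherwise control falls through to the next, ending at the position fallback.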

Adjusting weights

The Weight Distribution panel shows a donut chart and legend with each signal group’s current percentage. For each signal group you can either drag its slider or type an exact percentage (between 1% and 80%) into the input field next to the slider. Click the chevron next to a group’s name to expand and tune its sub-signals, or hover the help icon beside a group’s name to see a description of what that signal measures. Click Reset above the chart to restore the default distribution.

How normalization works

When you adjust one signal group’s weight, the remaining groups are automatically rebalanced so that all weights continue to sum to 100%. This means you can increase the importance of one signal without manually reducing every other signal.
Avoid setting any single signal group above 70%. Over-concentrating weight on one group can reduce result diversity and make rankings less responsive to other quality signals.
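A sketch of the rebalancing, assuming the remaining groups are scaled proportionally (the docs state only that they are rebalanced automatically):

```python
def rebalance(weights, changed, new_value):
    """Set one group's weight, scaling the others so the total stays 100."""
    others = {k: v for k, v in weights.items() if k != changed}
    remaining = 100.0 - new_value
    old_total = sum(others.values())
    result = {k: v / old_total * remaining for k, v in others.items()}
    result[changed] = new_value
    return result

weights = {"semantic": 40, "keyword": 30, "engagement": 15,
           "freshness": 10, "inventory": 5}
print(rebalance(weights, "semantic", 60))  # others shrink; total stays 100
```

Raising semantic from 40% to 60% leaves only 40% for the other four groups, which keep their relative proportions.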

Result relevancy filtering

After scoring and ranking, the search engine automatically filters out results that fall significantly below the top-scoring products. This removes low-relevancy tail results that would otherwise appear at the end of search results — products that matched broadly but are not meaningfully related to the query. The filtering uses two complementary strategies:
  • Statistical filtering removes outlier products whose scores sit far below the rest of the result set. If a product’s score is more than two standard deviations below the highest-scoring result, it is excluded.
  • Minimum ratio filtering removes products whose scores are less than half of the top result’s score, catching cases where a few weak results slip through statistical filtering.
Both filters run automatically on text search results. You do not need to configure them. The effect is that search results are more focused. Customers see fewer irrelevant products at the bottom of result pages, especially for specific or niche queries where the gap between good and poor matches is large.
Result relevancy filtering does not affect the total number of results returned for broad queries where most products score similarly. It primarily reduces noise for queries with a clear relevancy gap between strong and weak matches.
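The two filters can be sketched directly from the documented thresholds (two standard deviations below the top score, and half the top score); using the population standard deviation is an assumption:

```python
import statistics

def filter_low_relevancy(scores):
    """Drop tail results using the two documented strategies."""
    if len(scores) < 2:
        return scores
    top = max(scores)
    stdev = statistics.pstdev(scores)  # population stdev: an assumption
    return [s for s in scores
            if s >= top - 2 * stdev    # statistical filter
            and s >= top / 2]          # minimum ratio filter

print(filter_low_relevancy([0.95, 0.9, 0.88, 0.3]))  # drops the 0.3 outlier
```

The ratio filter matters when scores cluster too tightly for the statistical filter to trigger: in `[1.0, 0.4]`, the 0.4 result survives the two-sigma check but fails the half-of-top check.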

Tiered recall

Before the engine runs full scoring, candidate products are sorted into quality tiers based on how well they match the query — separating strong matches from weaker ones. Each tier has its own cap on how many products can pass through to the next stage, and stronger tiers are processed ahead of weaker ones. This serves two purposes:
  • Higher result quality — Strong matches are guaranteed a spot in the ranking pool, so close-but-narrow queries (like a specific product name, color, or model) consistently surface the best candidates near the top, even when many weaker products also match.
  • Faster search — Capping the candidate pool by tier reduces the number of products that go through the most expensive scoring steps, which keeps response times low on large catalogs.
Tiered recall runs automatically on text searches as part of the default ranking pipeline. There is no configuration in the dashboard — the tiers and caps are tuned by the platform to balance result quality and performance for your catalog. The behavior is observable in the search preview: queries with a few clearly strong matches will reliably show those products at the top, while broad queries continue to draw from a wider pool.
Tiered recall runs before signal weights and ranking rules are applied. Promote, demote, and pin actions still operate on the candidates that pass the tier filter, so your ranking rules continue to behave as expected.
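Tiered recall can be sketched as bucketing candidates by match score with a cap per tier. The tier boundaries and caps below are made up for illustration; the real values are platform-tuned and not configurable:

```python
TIERS = [          # (minimum match score, cap on candidates passed on)
    (0.8, 100),    # strong matches
    (0.5, 300),    # moderate matches
    (0.0, 600),    # weak matches
]

def tiered_recall(candidates, tiers=TIERS):
    """candidates: (product_id, match_score) pairs, any order.

    Returns the ids that proceed to full scoring, strongest tiers first,
    with each tier capped independently."""
    pool = []
    remaining = sorted(candidates, key=lambda c: c[1], reverse=True)
    for floor, cap in tiers:
        tier = [pid for pid, score in remaining if score >= floor]
        remaining = [(pid, s) for pid, s in remaining if s < floor]
        pool.extend(tier[:cap])
    return pool
```

Because each tier is capped separately, a flood of weak matches can never crowd strong matches out of the scoring pool.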

Ranking rules

Ranking rules let you create conditional adjustments that modify search rankings when specific conditions are met. Each rule has a name, a scope, optional targeting conditions, and one or more actions.

Rule types

Manual rules define attribute-based conditions to promote or demote products. For example, you can promote all products from a specific vendor or demote products tagged as “clearance” for particular search queries.

Visual rules let you pin products to exact positions in search results using a drag-and-drop interface. Search for a query, then drag products into the order you want them to appear.

Scope

Each rule has a scope that determines when it fires:
  • Global — Applies to every text search, regardless of what the customer searches for.
  • Query-specific — Applies only when the search query matches a targeting condition.

Targeting conditions

Query-specific rules support three matching modes:
Mode | Behavior | Example
Exact match | Query must equal the value exactly (case-insensitive) | “summer dress” matches only “summer dress”
Contains | Query must contain the value as a substring | “dress” matches “red summer dress collection”
Semantic match | Query must be semantically similar above a configurable threshold | “summer dress” could match “sundress” or “beach outfit”
Semantic matching uses embedding similarity to detect queries with the same intent, even when the exact words differ. You can adjust the similarity threshold between 50% and 100% — lower thresholds match more loosely, higher thresholds require closer similarity.
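The three targeting modes can be sketched as a single matcher. The semantic branch takes a precomputed embedding similarity rather than computing one, which is a simplifying assumption:

```python
def query_matches(query, rule_value, mode, similarity=0.0, threshold=0.75):
    """Check a search query against a rule's targeting condition.

    similarity: precomputed embedding similarity in 0..1, used only by
    the semantic mode; threshold corresponds to the 50%-100% setting."""
    q, v = query.strip().lower(), rule_value.strip().lower()
    if mode == "exact":
        return q == v                   # case-insensitive equality
    if mode == "contains":
        return v in q                   # substring match
    if mode == "semantic":
        return similarity >= threshold  # configurable 0.5-1.0
    raise ValueError(f"unknown mode: {mode}")
```

Lowering the threshold widens the net for the semantic mode, at the cost of occasionally firing on loosely related queries.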

Actions

Action | Effect | Configuration
Promote | Moves products matching an attribute filter higher in results | Attribute + operator + value + strength (1–50%)
Demote | Moves products matching an attribute filter lower in results | Attribute + operator + value + strength (1–50%)
Pin | Locks specific products to exact positions in results | Product selection + position (max 50 pinned products)
Sort | Overrides the sort order for matching queries using metric-based expressions | Up to 3 sorting expressions, each with a metric attribute, direction, and weight (5–100%)
Manual rules can have multiple actions. For example, a single rule can promote one set of products while demoting another, or combine promote actions with a sort override for the same query.
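A hypothetical multi-action rule might look like the following. The field names are illustrative (the real rule schema is not shown on this page); only the value ranges come from the table above:

```python
# Illustrative rule payload: one manual rule promoting a vendor while
# demoting clearance items for queries containing "summer".
rule = {
    "name": "Summer promo",
    "status": "draft",              # new rules start as drafts
    "scope": "query",
    "targeting": {"mode": "contains", "value": "summer"},
    "actions": [
        {"type": "promote", "attribute": "vendor", "operator": "equals",
         "value": "Acme Swimwear", "strength": 30},   # strength: 1-50%
        {"type": "demote", "attribute": "tags", "operator": "contains",
         "value": "clearance", "strength": 20},
    ],
}

def validate_rule(rule):
    assert rule["actions"], "a rule needs at least one action"
    for action in rule["actions"]:
        if action["type"] in ("promote", "demote"):
            assert 1 <= action["strength"] <= 50, "strength must be 1-50%"

validate_rule(rule)
```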

Default variant for pins

When you pin a product in a visual ranking rule, you can optionally set a default variant for that pin. This controls which variant’s data (price, image, and options) appears for the pinned product in search results. For example, if you pin a t-shirt to position 1 and set the default variant to “Color: Red, Size: Medium,” the search result tile shows the price and image for the Red/Medium variant instead of the default first variant. Default variant options set on a pin take priority over request-level defaultSelectedOptions but are overridden by explicit variant filters and keyword-matched variant selection. See the full variant selection priority above.
Default variant selection for pins is also available in merchandising rules. When both a merchandising rule pin and a ranking rule pin specify default variant options for the same product, the ranking rule pin takes precedence.

Sort action

The sort action lets you override the default search result ordering for specific queries using metric-based sorting expressions. This is useful when certain search terms should rank results by a specific metric — like best sellers or highest rated — instead of the default relevance-based ranking. For example, you could create a rule that triggers on the query “best sellers” and sorts results by your 7-day sales metric, or a rule that triggers on “new arrivals” and sorts by publication date. The sort override only applies to searches matching the rule’s targeting conditions, so it does not affect how results are ranked globally.

Each sort action contains up to three sorting expressions. Each expression specifies:
  • Attribute — The metric or product attribute to sort by (for example, a sales or conversion rate metric).
  • Direction — Whether to sort ascending or descending.
  • Weight — How strongly the metric influences the final sort order (5–100%). Lower weights act as tie-breakers among products with similar relevance scores. Higher weights give the metric more influence over the ranking.
Metric requirements for sort actions: only metrics that are product-level and single-value can be used in sort expressions. Multi-value metrics (such as those using list aggregation) and variant-level metrics are not eligible. Imported metrics from Shopify are also supported. If a metric does not meet these criteria, it will not appear as an option when configuring a sort expression.

Sort expressions use a relevance-preserving formula. Instead of replacing relevance entirely, the sort blends metric values with the existing relevance score. Products with high relevance for the query are minimally affected by the metric boost, preserving search intent for precise queries. Products with lower relevance scores are more influenced by the metric, which helps surface high-performing products when the relevance differences between results are small.
Sort actions can be combined with promote and demote actions in the same rule. For example, you could sort results by a sales metric while also promoting products from a specific vendor.
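The relevance-preserving blend might look like the following sketch. The exact formula is not documented; the assumption here is that the metric boost scales with (1 - relevance), so precise matches barely move:

```python
def blended_sort_key(relevance, metric_norm, weight):
    """Blend a normalized metric (0..1) into a relevance score (0..1).

    Assumption: the boost shrinks as relevance grows, so highly relevant
    products keep their positions while weaker matches are reordered."""
    boost = (weight / 100) * metric_norm * (1 - relevance)
    return relevance + boost

# A precise match barely moves; a weak match with strong sales climbs more.
print(blended_sort_key(0.95, 1.0, 50))  # small boost over 0.95
print(blended_sort_key(0.40, 1.0, 50))  # larger boost over 0.40
```

With a low weight the boost only reorders near-ties, which matches the tie-breaker behavior described for low weights.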

Rule status

Each ranking rule is either Published (active) or Draft (inactive). Published rules are applied to live searches, while draft rules are saved but not evaluated. On the rules list page, you can filter by status to quickly find published or draft rules. New rules are created as drafts by default. You use separate actions in the rule editor to control status:
  • Save Changes — Persists the current configuration without changing publish state. A new rule stays in Draft; an already-published rule stays Published.
  • Publish — Saves the rule and marks it as Published.
  • Unpublish — Available on the edit page when a rule is already published. Takes the rule offline while preserving its configuration so you can republish it later.
This separation lets you stage configuration changes to a live rule, verify them in the search preview, and then either publish the changes or revert them before they affect customers. See Create a ranking rule for step-by-step instructions.

How rules are evaluated

Ranking rules only apply to text searches. When a text search is executed:
  1. All published rules for your store are loaded.
  2. Global rules are applied to every search.
  3. Query-specific rules are checked against the search query using their targeting condition.
  4. Matching rules apply their actions — promote and demote actions adjust product scores, sort actions override the result ordering, and pin actions lock products to fixed positions.
If multiple rules match the same search, all of their actions are applied. Promote and demote adjustments are additive, with a maximum combined magnitude of 50%. If multiple sort actions match, their sorting expressions are accumulated and applied together.
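The additive cap on promote and demote adjustments can be sketched as a clamp:

```python
MAX_COMBINED_ADJUSTMENT = 50  # documented cap on combined promote/demote strength

def combined_adjustment(strengths):
    """Sum promote (+) and demote (-) strengths, clamped to +/-50%."""
    total = sum(strengths)
    return max(-MAX_COMBINED_ADJUSTMENT, min(MAX_COMBINED_ADJUSTMENT, total))

print(combined_adjustment([30, 35]))   # 50: two promotes hit the cap
print(combined_adjustment([20, -10]))  # 10: promote and demote are additive
```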

Search preview

The signal weights page includes a live search preview that shows how products would rank with your current weight configuration. You can:
  • Type a search query and see ranked results instantly.
  • Toggle ranking rules on or off to compare their impact.
  • View a per-product score breakdown showing how each signal group contributed to the product’s ranking.
  • See which ranking rules were applied to each product (shown as “Promoted”, “Demoted”, or “Pinned” badges).
  • Inspect sort adjustment details when relevance boost sort expressions are active, including the base hybrid score, boost sum, and per-metric raw and normalized values.
  • Review vector evidence showing which text chunks and catalog images most influenced semantic matching for each product.
The preview reflects unsaved weight changes, so you can experiment before committing. For a detailed walkthrough of the score breakdown panel, see Score breakdown.

Content search ranking

Content search (blog articles) uses a separate ranking model with three signals: text similarity (50%), image similarity (20%), and freshness (30%). These weights can be adjusted per-request via the API or interactively using the content search evaluate page. For API details, see the content search API reference.
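The content search blend can be sketched with the documented default weights (text 50%, image 20%, freshness 30%), again assuming sub-scores normalized to 0..1:

```python
CONTENT_WEIGHTS = {"text": 50, "image": 20, "freshness": 30}  # documented defaults

def content_score(signals, weights=CONTENT_WEIGHTS):
    """Weighted blend of the three content search signals, each 0..1."""
    return sum(w / 100 * signals.get(k, 0.0) for k, w in weights.items())

article = {"text": 0.8, "image": 0.5, "freshness": 1.0}
print(round(content_score(article), 2))  # 0.8
```

Per-request weight overrides via the API would replace `CONTENT_WEIGHTS` with the values supplied in the request.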

See also