Overview

Ranking relevancy gives you direct control over how products are ordered in text search results. Every search result is scored using a combination of signals — semantic similarity, keyword matching, engagement metrics, freshness, and inventory status. You can adjust how much each signal contributes to the final score. You can also create rules that boost, bury, pin, or sort specific products for particular queries. The ranking relevancy system has two main areas:
  • Signal weights control the global balance between the five signal groups that make up every search score.
  • Ranking rules apply conditional adjustments — promoting, demoting, pinning, or sorting products when specific search queries are detected.

Signal groups

Every text search score is composed of five signal groups. Each group contributes a percentage of the total score, and all group weights always sum to 100%.
| Signal group | What it measures | Sub-signals |
| --- | --- | --- |
| Semantic | How closely product content matches the meaning of the query | Text similarity, image similarity |
| Keyword | How well exact terms in the query match product fields | Per-field matching across title, description, vendor, and other searchable attributes |
| Engagement | How customers interact with products | Views (7d), sales (7d), cart sessions (7d), total sales (7d) |
| Freshness | How recently the product was published | Publication date recency |
| Inventory | Whether the product is in stock | Stock availability |
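The five-group blend can be pictured as a weighted sum. The sketch below is illustrative only: the weight values, group scores, and function names are hypothetical, not the engine's actual implementation.

```python
# Hypothetical signal-group weights; per the docs, they must always sum to 100.
WEIGHTS = {"semantic": 40, "keyword": 30, "engagement": 15, "freshness": 10, "inventory": 5}

def hybrid_score(signals: dict) -> float:
    """Blend per-group scores (each normalized to 0..1) into a single 0..1 score."""
    assert sum(WEIGHTS.values()) == 100
    return sum(WEIGHTS[g] / 100 * signals.get(g, 0.0) for g in WEIGHTS)

# Example product with one score per signal group.
product = {"semantic": 0.9, "keyword": 0.5, "engagement": 0.3, "freshness": 0.8, "inventory": 1.0}
print(round(hybrid_score(product), 3))  # → 0.685
```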

Sub-signals

Within each group, individual sub-signals can also be tuned:
  • Semantic — Adjust the balance between text similarity and image similarity. If your catalog relies heavily on visual discovery, increasing image similarity weight helps surface visually relevant products.
  • Engagement — Adjust the relative importance of product views, sales, cart sessions, and total sales. These signals are powered by your store’s metrics — specifically the metric recipes that track 7-day product engagement. For example, weighting sales higher than views rewards products that convert, not just products that attract attention.
  • Keyword — Sub-signal weights for keyword matching are derived from your attribute configuration and are displayed as read-only values.

Keyword-matched variant selection

When keyword features target variant-scoped attributes — such as options, variant metafields, or variant fields like SKU — the system automatically identifies the variant with the highest keyword relevance for each product. The first_or_matched_variant field in the API response reflects this best-matching variant rather than defaulting to the first variant by position. This is useful when your searchable attributes include variant-level data. For example, if you have a keyword weight on options.size and a customer searches for “large,” each product’s response will include the “Large” variant as the matched variant, showing the correct price and availability for that size.

To prevent false matches, variant selection requires keyword scores to exceed a minimum relevance threshold. Superficial text similarities — such as a query like “Jake” producing a trivial match against the option value “Lukas” — are ignored. Only variants with meaningful keyword relevance are selected, so the matched variant genuinely reflects the search intent. If no variant meets the threshold, the system falls back to the next step in the priority chain below.

Variant selection priority: When determining which variant to return, the system evaluates the following in order:
  1. Explicit variant filters — If the request includes option or variant filters, the filtered variant is used.
  2. Keyword-matched variant — If keyword features matched on variant-scoped attributes, the variant with the highest combined keyword score is used.
  3. Pin-level default variant — If the product is pinned (via a merchandising rule or a visual ranking rule) and the pin has default variant options configured, the variant matching those options is used. See default variant for pins.
  4. Default selected options — If defaultSelectedOptions is specified in the request, the first matching variant is used.
  5. Confident vector search match — If semantic search matched a specific variant embedding and the match is confident, that variant is used. Confidence is determined by comparing the variant embedding distance against the product-level embedding distance of the same type (text-to-text or image-to-image). A match is confident when the variant is meaningfully closer to the query than the product-level embedding. When the variant does not improve on the product-level match — for example, “gold” and “silver” variants for a query like “ring” where neither variant adds relevance beyond the product title — the system falls through to the position fallback. The same fallback applies when no product-level embedding of the same type exists, since there is no baseline to determine whether the variant adds value.
  6. Position fallback — The first variant by position is used.
Keyword-matched variant selection only applies to text searches where keyword features are configured on variant-scoped attributes. It does not apply to browse, image search, or similar product requests.
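The priority chain above can be sketched as a sequence of checks. This is a simplified model under assumed field names (`keyword_score`, `pin_default_options`, `vector_confident`, and the threshold value are all hypothetical); it is not the actual engine code.

```python
KEYWORD_THRESHOLD = 0.5  # hypothetical minimum keyword relevance for a variant match

def select_variant(product: dict, request: dict) -> dict:
    """Return the variant to display, walking the documented priority chain."""
    variants = product["variants"]

    # 1. Explicit variant filters on the request.
    if request.get("variant_filter"):
        for v in variants:
            if request["variant_filter"](v):
                return v

    # 2. Keyword-matched variant, only if it clears the relevance threshold.
    best = max(variants, key=lambda v: v.get("keyword_score", 0.0))
    if best.get("keyword_score", 0.0) >= KEYWORD_THRESHOLD:
        return best

    # 3. Pin-level default variant options configured on a pin.
    if product.get("pin_default_options"):
        for v in variants:
            if v["options"] == product["pin_default_options"]:
                return v

    # 4. defaultSelectedOptions from the request (first matching variant).
    if request.get("defaultSelectedOptions"):
        for v in variants:
            if all(v["options"].get(k) == val
                   for k, val in request["defaultSelectedOptions"].items()):
                return v

    # 5. Confident vector-search match on a variant embedding.
    for v in variants:
        if v.get("vector_confident"):
            return v

    # 6. Position fallback: first variant by position.
    return min(variants, key=lambda v: v["position"])
```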

How normalization works

When you adjust one signal group’s weight, the remaining groups are automatically rebalanced so that all weights continue to sum to 100%. This means you can increase the importance of one signal without manually reducing every other signal.
Avoid setting any single signal group above 70%. Over-concentrating weight on one group can reduce result diversity and make rankings less responsive to other quality signals.
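A proportional rebalance is one natural way to implement this behavior. The sketch below assumes proportional redistribution, which matches the description but is not guaranteed to be the exact algorithm used.

```python
def set_weight(weights: dict, group: str, new_value: float) -> dict:
    """Set one group's weight (0-100) and rebalance the others proportionally
    so the total stays at 100. Illustrative sketch of the documented behavior."""
    others = {g: w for g, w in weights.items() if g != group}
    remaining = 100 - new_value
    total = sum(others.values())
    if total == 0:
        # Edge case: split the remainder evenly when all other weights are zero.
        return {**{g: remaining / len(others) for g in others}, group: new_value}
    return {**{g: w / total * remaining for g, w in others.items()}, group: new_value}

# Raising semantic from 40 to 60 shrinks the others proportionally.
rebalanced = set_weight({"semantic": 40, "keyword": 30, "engagement": 30}, "semantic", 60)
print(rebalanced)  # keyword and engagement each drop from 30 to 20
```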

Result relevancy filtering

After scoring and ranking, the search engine automatically filters out results that fall significantly below the top-scoring products. This removes low-relevancy tail results that would otherwise appear at the end of search results — products that matched broadly but are not meaningfully related to the query. The filtering uses two complementary strategies:
  • Statistical filtering removes products whose scores are far below the group average. If a product’s score is more than two standard deviations below the highest-scoring result, it is excluded.
  • Minimum ratio filtering removes products whose scores are less than half of the top result’s score, catching cases where a few weak results slip through statistical filtering.
Both filters run automatically on text search results. You do not need to configure them. The effect is that search results are more focused — customers see fewer irrelevant products at the bottom of result pages, especially for specific or niche queries where the gap between good and poor matches is large.
Result relevancy filtering does not affect the total number of results returned for broad queries where most products score similarly. It primarily reduces noise for queries with a clear relevancy gap between strong and weak matches.
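The two filters can be sketched as follows. The exact statistics the engine computes (e.g. whether the deviation is measured from the mean or anchored at the top score) are not specified beyond the description above, so treat this as an approximation.

```python
import statistics

def filter_results(scores: list[float]) -> list[float]:
    """Keep results that pass both relevancy filters:
    - statistical: within two standard deviations of the top score
    - minimum ratio: at least half of the top score"""
    top = max(scores)
    stat_cutoff = top - 2 * statistics.pstdev(scores)
    ratio_cutoff = top / 2
    return [s for s in scores if s >= stat_cutoff and s >= ratio_cutoff]

# A clear relevancy gap: the 0.3 tail result is filtered out.
print(filter_results([0.9, 0.85, 0.8, 0.3]))  # → [0.9, 0.85, 0.8]
```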

Ranking rules

Ranking rules let you create conditional adjustments that modify search rankings when specific conditions are met. Each rule has a name, a scope, optional targeting conditions, and one or more actions.

Rule types

Manual rules define attribute-based conditions to boost or bury products. For example, you can boost all products from a specific vendor or bury products tagged as “clearance” for particular search queries.

Visual rules let you pin products to exact positions in search results using a drag-and-drop interface. Search for a query, then drag products into the order you want them to appear.

Scope

Each rule has a scope that determines when it fires:
  • Global — Applies to every text search, regardless of what the customer searches for.
  • Query-specific — Applies only when the search query matches a targeting condition.

Targeting conditions

Query-specific rules support three matching modes:
| Mode | Behavior | Example |
| --- | --- | --- |
| Exact match | Query must equal the value exactly (case-insensitive) | “summer dress” matches only “summer dress” |
| Contains | Query must contain the value as a substring | “dress” matches “red summer dress collection” |
| Semantic match | Query must be semantically similar above a configurable threshold | “summer dress” could match “sundress” or “beach outfit” |
Semantic matching uses embedding similarity to detect queries with the same intent, even when the exact words differ. You can adjust the similarity threshold between 50% and 100% — lower thresholds match more loosely, higher thresholds require closer similarity.
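Embedding similarity is typically computed as cosine similarity between query vectors. The sketch below illustrates the threshold check; the embedding model, vector shapes, and default threshold are assumptions, not details from the source.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_match(query_vec: list[float], rule_vec: list[float],
                   threshold: float = 0.75) -> bool:
    """The rule fires when similarity clears the configured threshold (0.5-1.0)."""
    return cosine(query_vec, rule_vec) >= threshold

# Identical direction → similarity 1.0, orthogonal → 0.0.
print(semantic_match([1.0, 0.0], [1.0, 0.0]))  # → True
print(semantic_match([1.0, 0.0], [0.0, 1.0]))  # → False
```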

Actions

| Action | Effect | Configuration |
| --- | --- | --- |
| Boost | Promotes products matching an attribute filter higher in results | Attribute + operator + value + strength (1–50%) |
| Bury | Demotes products matching an attribute filter lower in results | Attribute + operator + value + strength (1–50%) |
| Pin | Locks specific products to exact positions in results | Product selection + position (max 50 pinned products) |
| Sort | Overrides the sort order for matching queries using metric-based expressions | Up to 3 sorting expressions, each with a metric attribute, direction, and weight (5–100%) |
Manual rules can have multiple actions. For example, a single rule can boost one set of products while burying another, or combine boost actions with a sort override for the same query.

Default variant for pins

When you pin a product in a visual ranking rule, you can optionally set a default variant for that pin. This controls which variant’s data (price, image, and options) appears for the pinned product in search results. For example, if you pin a t-shirt to position 1 and set the default variant to “Color: Red, Size: Medium,” the search result tile for that product will show the price and image for the Red/Medium variant instead of the default first variant. Default variant options set on a pin take priority over request-level defaultSelectedOptions but are overridden by explicit variant filters and keyword-matched variant selection. See the full variant selection priority above.
Default variant selection for pins is also available in merchandising rules. When both a merchandising rule pin and a ranking rule pin specify default variant options for the same product, the ranking rule pin takes precedence.

Sort action

The sort action lets you override the default search result ordering for specific queries using metric-based sorting expressions. This is useful when certain search terms should rank results by a specific metric — like best sellers or highest rated — instead of the default relevance-based ranking. For example, you could create a rule that triggers on the query “best sellers” and sorts results by your 7-day sales metric, or a rule that triggers on “new arrivals” and sorts by publication date. The sort override only applies to searches matching the rule’s targeting conditions, so it does not affect how results are ranked globally.

Each sort action contains up to three sorting expressions. Each expression specifies:
  • Attribute — The metric or product attribute to sort by (for example, a sales or conversion rate metric).
  • Direction — Whether to sort ascending or descending.
  • Weight — How strongly the metric influences the final sort order (5–100%). Lower weights act as tie-breakers among products with similar relevance scores. Higher weights give the metric more influence over the ranking.
Metric requirements for sort actions: Only metrics that are product-level and single-value can be used in sort expressions. Multi-value metrics (such as those using list aggregation) and variant-level metrics are not eligible. Imported metrics from Shopify are also supported. If a metric does not meet these criteria, it will not appear as an option when configuring a sort expression.

Sort expressions use a relevance-preserving formula. Instead of replacing relevance entirely, the sort blends metric values with the existing relevance score. Products with high relevance for the query are minimally affected by the metric boost, preserving search intent for precise queries. Products with lower relevance scores are more influenced by the metric, which helps surface high-performing products when the relevance differences between results are small.
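One plausible shape for such a relevance-preserving blend is shown below. This formula is an illustration of the described property (high-relevance products move little, low-relevance products move more), not the engine's actual formula.

```python
def blended_score(relevance: float, metric_norm: float, weight: float) -> float:
    """Illustrative relevance-preserving blend:
    the metric contribution shrinks as relevance grows, so precise matches keep rank.
    relevance and metric_norm are in 0..1; weight is the expression weight (0.05-1.0)."""
    return relevance + weight * (1.0 - relevance) * metric_norm

# A highly relevant product gets only a small lift from the metric...
print(round(blended_score(0.95, 1.0, 0.5), 3))  # → 0.975
# ...while a weakly relevant product is lifted much more.
print(round(blended_score(0.40, 1.0, 0.5), 3))  # → 0.7
```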
Sort actions can be combined with boost and bury actions in the same rule. For example, you could sort results by a sales metric while also boosting products from a specific vendor.

Rule status

Each ranking rule can be set to Published (active) or Draft (inactive). Published rules are applied to live searches, while draft rules are saved but not evaluated. On the rules list page, you can filter by status to quickly find published or draft rules.

How rules are evaluated

Ranking rules only apply to text searches. When a text search is executed:
  1. All enabled rules for your store are loaded.
  2. Global rules are applied to every search.
  3. Query-specific rules are checked against the search query using their targeting condition.
  4. Matching rules apply their actions — boost and bury actions adjust product scores, sort actions override the result ordering, and pin actions lock products to fixed positions.
If multiple rules match the same search, all of their actions are applied. Boost and bury adjustments are additive, with a maximum combined magnitude of 50%. If multiple sort actions match, their sorting expressions are accumulated and applied together.
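Accumulating boost and bury strengths with a combined cap can be sketched as follows. How the cap interacts with mixed boost/bury signs is an assumption here; the source only states that adjustments are additive with a maximum combined magnitude of 50%.

```python
MAX_COMBINED = 0.50  # maximum combined boost/bury magnitude per the docs

def combined_adjustment(actions: list[dict]) -> float:
    """Sum boost (+) and bury (-) strengths from all matching rules, then clamp."""
    total = sum(a["strength"] if a["type"] == "boost" else -a["strength"]
                for a in actions)
    return max(-MAX_COMBINED, min(MAX_COMBINED, total))

# Two 30% boosts would sum to 60%, so the cap clamps the result to 50%.
print(combined_adjustment([{"type": "boost", "strength": 0.3},
                           {"type": "boost", "strength": 0.3}]))  # → 0.5
```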

Search preview

The signal weights page includes a live search preview that shows how products would rank with your current weight configuration. You can:
  • Type a search query and see ranked results instantly.
  • Toggle ranking rules on or off to compare their impact.
  • View a per-product score breakdown showing how each signal group contributed to the product’s ranking.
  • See which ranking rules were applied to each product (shown as “Boosted”, “Buried”, or “Pinned” badges).
  • Inspect sort adjustment details when relevance boost sort expressions are active, including the base hybrid score, boost sum, and per-metric raw and normalized values.
  • Review vector evidence showing which text chunks and catalog images most influenced semantic matching for each product.
The preview reflects unsaved weight changes, so you can experiment before committing. For a detailed walkthrough of the score breakdown panel, see Score breakdown.

Content search ranking

Content search (blog articles) uses a separate ranking model with three signals: text similarity (50%), image similarity (20%), and freshness (30%). These weights can be adjusted per-request via the API or interactively using the content search evaluate page. For API details, see the content search API reference.
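A per-request override might look like the payload below. The field names are hypothetical (the real schema is in the content search API reference); the point is simply that the three weights are supplied together and should cover the whole score.

```python
# Hypothetical request payload overriding content-search signal weights per request.
# Field names are illustrative, not the actual API schema.
payload = {
    "query": "summer skincare tips",
    "weights": {
        "text_similarity": 0.5,   # default 50%
        "image_similarity": 0.2,  # default 20%
        "freshness": 0.3,         # default 30%
    },
}
# Sanity check: the three weights should sum to 1.0.
assert abs(sum(payload["weights"].values()) - 1.0) < 1e-9
```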

See also