What you can do
- Run test searches and review results in one place.
- Switch between demo profiles to simulate different shoppers.
- Use the Search Workbench to tune search parameters and bypass features.
- View Query Understanding for every search to see how the pipeline processed your query.
- Get AI-powered explanations that reference the top results and ranking scores.
Create a demo profile (persona)
- Go to Evaluate → Text Search.
- Open Demo Profiles and click Create.
- Add a nickname (e.g., “US returning customer”).
- (Optional) Add location, past purchases, items in cart, and prior searches.
- Save.
Use a demo profile in testing
- In Evaluate → Text Search, select a profile from the dropdown.
- Enter a few search terms and browse the results.
- Try another profile to compare behavior.
Search workbench
The Search Workbench panel lets you fine-tune search parameters, configure relevancy sort order, and selectively bypass features to understand their impact on results. You can save changes permanently or use them temporarily for testing.
Relevancy sort order
The Relevancy Sort Order section allows you to configure how search results are ranked by dragging and dropping sorting attributes.
- Drag-and-drop reordering: Click and drag the grip icon to reorder sorting attributes. The order determines ranking priority, with attributes at the top having the highest influence.
- Direction toggle: Click the arrow icon to switch between ascending (↑) and descending (↓) sort order for each attribute.
- Remove attributes: Click the trash icon to remove an attribute from the sort order (except Relevance Score, which is required).
- Real-time preview: Changes update search results immediately so you can see the impact.
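The priority and direction semantics above can be sketched as a multi-key sort. This is an illustrative model only, not the Layers implementation; the attribute names, product records, and `sort_order` structure are invented for the example.

```python
# Hypothetical sketch: an ordered list of (attribute, direction) pairs
# ranks products, with earlier entries taking priority over later ones.
products = [
    {"name": "A", "relevance_score": 0.82, "days_since_published": 30},
    {"name": "B", "relevance_score": 0.82, "days_since_published": 5},
    {"name": "C", "relevance_score": 0.91, "days_since_published": 90},
]

# Top of the list has the highest influence; each entry carries a direction.
sort_order = [
    ("relevance_score", "desc"),
    ("days_since_published", "asc"),  # newer products win ties
]

def sort_key(product):
    key = []
    for attr, direction in sort_order:
        value = product[attr]
        # Negate descending attributes so a single ascending sort works.
        key.append(-value if direction == "desc" else value)
    return tuple(key)

ranked = sorted(products, key=sort_key)
print([p["name"] for p in ranked])  # ['C', 'B', 'A']
```

Because the key is a tuple, the second attribute only breaks ties left by the first, which mirrors the top-down priority of the drag-and-drop list.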
Quick add recipes
Use the Quick Add dropdown to quickly add pre-configured relevancy factors to your sort order:
- Boost In-Stock Products: Prioritize products with higher SKU coverage (more variants in stock)
- Boost New Products: Prioritize recently published products
- Demote Clearance Items: Push products with high discounts lower in results
- Boost Popular Products: Prioritize products with higher sales in the last 7 days
- Boost Trending: Prioritize products with more views in the last 7 days
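Conceptually, each recipe just inserts a pre-configured attribute and direction into the sort order. The mapping below is a hypothetical sketch; the attribute identifiers are invented, not actual Layers field names.

```python
# Hypothetical mapping from Quick Add recipe to the sort attribute and
# direction it would insert. Attribute names are assumptions for
# illustration only.
QUICK_ADD_RECIPES = {
    "Boost In-Stock Products": ("sku_coverage", "desc"),
    "Boost New Products": ("published_at", "desc"),
    "Demote Clearance Items": ("discount_pct", "asc"),  # high discounts sink
    "Boost Popular Products": ("sales_7d", "desc"),
    "Boost Trending": ("views_7d", "desc"),
}

attribute, direction = QUICK_ADD_RECIPES["Boost Trending"]
print(attribute, direction)  # views_7d desc
```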
Saving relevancy changes
Click the Save button in the Relevancy Sort Order section header to save your changes permanently. The Save button only appears when you’ve made changes to the sort order.
Bypass options
Toggle these options to isolate specific features:
- Bypass Cache: Forces a fresh search without using cached results.
- Bypass Query Expansion: Disables AI-powered query expansion to see results for the exact query only.
- Bypass Intent Modifiers: Disables automatic intent detection that may boost or filter results.
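As a mental model, the three toggles can be thought of as flags on the search request. The payload below is purely illustrative; the field names are assumptions, not the actual Layers search API.

```python
# Hypothetical request payload showing the three bypass toggles.
# Field names are invented for illustration; consult the Layers
# search API for the real parameter names.
search_request = {
    "query": "running shoes",
    "bypass": {
        "cache": True,             # force a fresh, uncached search
        "query_expansion": True,   # exact query only, no AI expansion
        "intent_modifiers": False, # keep intent-based boosts/filters
    },
}
```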
Query understanding
The Query Understanding panel appears for every search and visualizes how your query was processed through the search pipeline. Even when no explicit query rewrites or filters were applied, the panel confirms that AI-powered semantic retrieval and ranking were still active.
Pipeline steps
When the search pipeline applies transformations to your query, the panel displays each step with its effect:
- Query Expansion: Shows synonyms and related terms added to broaden the search.
- Query Replacement: Displays any typo corrections or query rewrites applied.
- Query Cleanup: Shows preprocessing steps that cleaned or normalized the query.
- Metadata Filtering: Indicates any automatic filters applied based on query context.
- Intent Modifiers: Shows detected shopping intent and resulting boosts or filters.
- Affinity Boosts: Displays personalization boosts derived from the shopper’s session context (cart contents and purchase history). Each boost shows the attribute, operator, value, and weight. Positive weights (shown in green) promote matching products, while negative weights (shown in red) slightly demote non-matching products. This step only appears when a demo profile with cart or purchase data is active.
- Rule Applied: Shows any Search Rules that matched and modified the request.
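The steps above can be sketched as a pipeline that transforms the query and records what each stage did, which is the information the panel visualizes. This is a minimal toy model; the step functions and their effects are invented, not Layers internals.

```python
# Minimal sketch of a query-understanding pipeline: each stage returns
# the (possibly transformed) query plus a record of its effect.
# Both stages here are hypothetical stand-ins.

def cleanup(q):
    # Normalize whitespace and case (a stand-in for Query Cleanup).
    normalized = q.strip().lower()
    return normalized, {"step": "Query Cleanup", "normalized": normalized}

def expand(q):
    # Pretend AI expansion adds a synonym (a stand-in for Query Expansion).
    return q, {"step": "Query Expansion", "added": ["sneakers"]}

steps_applied = []
query = "  Running Shoes "
for stage in (cleanup, expand):
    query, record = stage(query)
    steps_applied.append(record)

print(query)                                   # running shoes
print([r["step"] for r in steps_applied])      # both steps recorded
```

A real pipeline would also include stages such as metadata filtering and intent modifiers; each would simply append its own record for the panel to display.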
AI explain
The Explain button is always visible in the Query Understanding panel, regardless of whether pipeline steps were applied. Click it to generate an AI-powered explanation of how the search pipeline processed your query. The explanation covers query transformations, intent detection, and how the ranking model scored the top results.
The AI explanation incorporates the top results returned by the search, so it can reference what the ranking model favored and why specific products ranked highly. This makes explanations more grounded and actionable: you can see not just what the pipeline did, but how it influenced the final result ordering. You can click Explain again after any search to generate a fresh explanation, including when you change demo profiles, adjust workbench settings, or modify the query.
Score breakdown
Each product card in the evaluate results includes a scoring details panel. Hover over a result and click the info icon to open the score breakdown for that product. The breakdown shows how the ranking model scored the product and which signals contributed most.
Hybrid score and final score
Every result displays a Hybrid Score — the base ranking score computed from all signal groups (semantic, keyword, engagement, freshness, and inventory). When a relevance boost sort order is active, the panel also displays a Final Score that reflects the adjusted ranking after Layers applies sort expressions. The hybrid score is shown below for reference so you can see how much the sort adjustment influenced the result.
Signal group contributions
The score breakdown lists the active signal groups and their weighted contributions to the hybrid score. Groups are shown as a proportional bar so you can quickly see which signals dominated the ranking. Individual feature weights are listed below the bar.Sort adjustments
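The weighted-contribution idea can be sketched as follows. The group names match the document, but the weights and per-product signal values are invented for illustration; the actual Layers weighting is not documented here.

```python
# Hedged sketch: a hybrid score as a weighted sum of signal-group
# contributions. Weights and signal values are invented examples.
weights = {
    "semantic": 0.40,
    "keyword": 0.25,
    "engagement": 0.15,
    "freshness": 0.10,
    "inventory": 0.10,
}
signals = {  # per-product normalized signal values (illustrative)
    "semantic": 0.9,
    "keyword": 0.6,
    "engagement": 0.4,
    "freshness": 0.8,
    "inventory": 1.0,
}
contributions = {g: weights[g] * signals[g] for g in weights}
hybrid_score = sum(contributions.values())

# Proportional bar: each group's share of the hybrid score.
shares = {g: c / hybrid_score for g, c in contributions.items()}
print(round(hybrid_score, 2))  # 0.75
```

The `shares` mapping corresponds to the proportional bar in the breakdown: it shows which groups dominated the score rather than their raw magnitudes.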
When relevance boost sort expressions are active (from ranking rules or the relevancy sort order in the workbench), the score breakdown includes a Sort Adjustment section. Expand it to see:
- Adjusted Score — the product’s score after the sort expression was applied.
- Base Hybrid — the original hybrid score before adjustment.
- Boost Sum — the total boost applied from all metric expressions.
- Per-metric details — for each metric in the sort expression, the raw value, normalized value, and weight. The metric name is resolved so you can identify which metric influenced the score.
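The arithmetic implied by these fields can be sketched directly: the Boost Sum totals each metric's normalized value times its weight, and the Adjusted Score is the Base Hybrid plus that sum. The metric names, raw values, and normalization below are assumptions for illustration.

```python
# Sketch of the sort-adjustment arithmetic implied by the breakdown:
#   Adjusted Score = Base Hybrid + Boost Sum
#   Boost Sum      = sum(normalized value * weight) over all metrics
# Metric names and values are invented for the example.
base_hybrid = 0.75
metrics = [
    # (metric, raw value, normalized value, weight)
    ("sales_7d", 120, 0.8, 0.3),
    ("views_7d", 4500, 0.6, 0.1),
]
boost_sum = sum(norm * weight for _, _, norm, weight in metrics)
adjusted_score = base_hybrid + boost_sum
print(round(adjusted_score, 2))  # 1.05
```

Reading the panel this way, the gap between Adjusted Score and Base Hybrid is exactly the Boost Sum, and the per-metric rows show how that sum decomposes.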
Vector evidence
The Vector Evidence section shows which text chunks and images most influenced the semantic matching for each product. This section appears when the search used vector-based retrieval (which is the default for text searches). The summary shows the top text and image match similarity scores. Click Open Evidence to view the full details in a side panel, where each vector contribution includes:
- Type — whether the match came from text or image embeddings.
- Similarity score — how closely the product’s content matched the query.
- Top-K matches — the number of times this product appeared in the nearest-neighbor results across query variations.
- Matched queries — which query variations (including expansions) produced the match.
- Matched content — the actual text chunk or image that was most similar to the query. For image matches, the matched catalog image is displayed as a thumbnail.
- Matched variant — if a specific variant’s embedding was the closest match, the variant ID is shown.
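A rough mental model of how these fields arise: each query variation is embedded and compared against a product's text and image chunk embeddings, and the evidence records the best similarity and which variations matched. The embeddings, thresholds, and structure below are invented; Layers' actual retrieval is not documented here, and cosine similarity is used only as a stand-in.

```python
# Illustrative sketch of deriving vector-evidence fields from
# query-variant and chunk embeddings. All vectors are toy values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query_variants = {
    "running shoes": [1.0, 0.0, 0.2],
    "sneakers": [0.9, 0.1, 0.3],  # from query expansion
}
chunks = {
    "text:description": [0.95, 0.05, 0.25],
    "image:main": [0.1, 0.9, 0.0],
}

evidence = []
for chunk_id, emb in chunks.items():
    sims = {q: cosine(qe, emb) for q, qe in query_variants.items()}
    evidence.append({
        "type": chunk_id.split(":")[0],        # text or image
        "similarity": max(sims.values()),       # best match score
        "matched_queries": [q for q, s in sims.items() if s > 0.8],
    })
```

In this toy example the text chunk matches both the original query and its expansion, while the image chunk matches neither, which is the kind of contrast the side panel makes visible.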
Flagging problematic outputs
When you notice a query expansion or intent modifier that produces poor results, you can flag it to improve future searches. Layers automatically excludes flagged items from subsequent searches.
How to flag an output
- In the Query Understanding panel, locate the problematic query expansion or intent modifier.
- Click the Flag button next to the item.
- In the flag modal, select a reason from the dropdown:
- Irrelevant to query
- Factually incorrect
- Offensive or inappropriate
- Too broad / generic
- Too narrow / specific
- Other
- Select the flag type:
- Generally bad: Excludes this output from all searches (recommended for most cases)
- Context-specific: Only excludes this output when it appears in similar query contexts
- (Optional) Add feedback to provide additional context about the issue.
- Click Flag Output to submit.
What happens after flagging
- Layers immediately adds the flagged item to a block list and excludes it from future searches.
- Layers also filters similar outputs using similarity matching (90% threshold for content, 70% for query context).
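The two-threshold logic can be sketched as follows: an output is blocked when its content is at least 90% similar to a flagged item, and for context-specific flags the query context must additionally be at least 70% similar. The similarity function here is a stand-in (Layers' actual similarity measure is not documented), and the flag structure is invented.

```python
# Sketch of flag-based filtering under the stated thresholds.
# SequenceMatcher.ratio() is only a stand-in similarity measure.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

flags = [
    {
        "content": "jogging trainers",   # the flagged expansion
        "query": "running shoes",        # the query context it was flagged in
        "type": "context_specific",
    },
]

def is_blocked(output, query):
    for flag in flags:
        # Content must clear the 90% similarity threshold first.
        if similarity(output, flag["content"]) < 0.9:
            continue
        if flag["type"] == "generally_bad":
            return True                  # blocked in every search
        # Context-specific: query context must also be 70% similar.
        if similarity(query, flag["query"]) >= 0.7:
            return True
    return False
```

With this model, the flagged expansion is suppressed for queries close to the one it was flagged under, but survives for unrelated queries, which is the intended behavior of a context-specific flag.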
Flag types explained
Generally bad flags apply globally across all searches. Use this when an output is always problematic regardless of context (e.g., offensive content, factually wrong expansions).
Context-specific flags only apply when both the output content and the original query are similar to what you flagged. Use this when an output is only problematic in certain situations (e.g., a query expansion that’s helpful for some queries but misleading for others).
You can select both checkboxes if the output is problematic in multiple ways.
Notes
- Demo profiles are for testing only and don’t use real customer data.
- If results look off, confirm your catalog is synced and search settings are saved.
- You can save workbench changes permanently using the Save buttons in each section, or use them temporarily for testing without saving.
- The Query Understanding panel appears for every search, regardless of whether explicit pipeline steps were applied.