<img height="1" width="1" style="display:none" src="https://q.quora.com/_/ad/ad9bc6b5b2de42fb9d7cd993ebb80066/pixel?tag=ViewContent&amp;noscript=1">

Best practices


The lists returned by the engine arrive already sorted. Do not implement your own sorting, and take care that no platform or framework helper code sorts the results automatically. If sorting is unavoidable, always use the "Value" property of the collection items as the sort key, so that the resulting order stays close to the order returned by the engine.
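As a minimal sketch of the fallback case: if a framework forces a sort, sorting descending on each item's "Value" property keeps the engine's ranking. The response shape below is an assumption for illustration, not the engine's actual wire format.

```python
# Hypothetical response items; only the "Value" property name comes from
# the documentation above. Assumed here: higher Value = more relevant.
results = [
    {"Value": 3, "Title": "Red shoe"},
    {"Value": 2, "Title": "Blue shoe"},
    {"Value": 1, "Title": "Green shoe"},
]

# Fallback sort: descending on "Value" preserves the engine's order.
resorted = sorted(results, key=lambda item: item["Value"], reverse=True)
```

Sorting on any other field (price, title, stock) would discard the engine's ranking, which is why "Value" is the only safe key.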

User Identification 

All libraries have built-in user tracking using cookies. We recommend using our libraries to keep track of users automatically. However, any unique string can be used as a User ID, as long as it is persistent, so that the same user gets the same ID across multiple sessions. To use a custom User ID instead of the built-in ID handling, use the request parameter "UserId".
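One way to satisfy the persistence requirement is to derive the custom ID from a stable account identifier. The sketch below assumes a hypothetical request dictionary; only the parameter name "UserId" comes from the documentation above.

```python
import hashlib

def make_user_id(account_identifier: str) -> str:
    """Return a stable, anonymised ID: the same input always yields
    the same ID, so the user is recognised across sessions."""
    return hashlib.sha256(account_identifier.encode("utf-8")).hexdigest()

# Hypothetical request payload using the "UserId" parameter.
request = {
    "Query": "running shoes",
    "UserId": make_user_id("customer-1234"),  # same user -> same ID every session
}
```

Hashing also avoids sending the raw account identifier to the engine, while still being unique per user.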

Product Identification 

It is vital that the site is consistent regarding product IDs when communicating with the engine. If the products in the catalogue have multiple ID fields, only one of them may be used, and it must be unique for each product.

Product attributes

The engine can be configured to return product attributes in the response. This means that you can choose to receive only product IDs, or product objects with some or all attributes included. In most implementations, it is sufficient to use only the attributes that are visible on the search results page. That way, assuming you do not need to perform any additional visibility filtering before showing the results, you avoid database lookups when rendering the products, which improves performance.
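To illustrate the lookup-free approach: the page is rendered straight from the attributes in the response. The response shape and the attribute names ("Title", "Price", "ImageUrl") are assumptions for this sketch, not the engine's actual schema.

```python
# Hypothetical engine response containing all attributes the results
# page needs -- so no database round trip is required to render it.
response = {
    "Results": [
        {"Id": "p-1", "Title": "Trail runner", "Price": 89.0, "ImageUrl": "/img/p-1.jpg"},
        {"Id": "p-2", "Title": "Road runner", "Price": 99.0, "ImageUrl": "/img/p-2.jpg"},
    ]
}

def render_product(product: dict) -> str:
    # Uses only attributes present in the response itself.
    return f'<li><img src="{product["ImageUrl"]}">{product["Title"]} ({product["Price"]:.2f})</li>'

html = "<ul>" + "".join(render_product(p) for p in response["Results"]) + "</ul>"
```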


Caching

For the search engine to perform at its best, use exclude rules in your server-side cache for the search results pages and for the behaviour feedback events. Otherwise the engine will not be able to learn from your users' behaviour, either on an aggregated level or on a personal level.
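As an illustration, in a Varnish cache such an exclude rule could look like the sketch below. The URL patterns are hypothetical; substitute your actual search results page and feedback event endpoints.

```
sub vcl_recv {
    # Hypothetical paths -- replace with your real search page and
    # behaviour feedback endpoints.
    if (req.url ~ "^/search" || req.url ~ "^/api/events") {
        return (pass);  # bypass the cache so every request reaches the engine
    }
}
```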

Bots, crawlers, spiders

Many e-commerce stores are visited by several types of bots that use site search. Not only is this bad for your SEO (in the case of Googlebot), it can also distort user feedback learning and statistics. A number of measures can be taken to prevent this:

  • Add a robots.txt file that tells all bots to ignore the search results page. Most bots will respect this.
  • Make sure that the user's IP address is correctly sent to the engine, so that it can be matched against known bots. When using a server-side library, this is done automatically, as long as you make sure that load balancers, proxies, and similar infrastructure set the HTTP header "X-Forwarded-For" correctly.
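For the first measure, a minimal robots.txt could look like this. The "/search" path is a hypothetical example; use the actual path of your search results page.

```
User-agent: *
Disallow: /search
```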

Mixing implementation methods

It is possible to mix different implementation methods, for instance using a back-end implementation for search queries and JavaScript on the front-end for event tracking and autocomplete. The engine is agnostic regarding implementation methods, as long as all user IDs and product IDs are consistent between the different implementations.

It is also possible to use the same engine across different sites with different libraries. This is useful, for instance, in cases where there are mobile apps or web pages that are built using a different platform than the main store.