From SQL Queries to Real-Time Search

Apr 09, 2026

Our lists loaded. Queries returned. Yet complex filters felt slow, DB load was high, and adding “one more” search field was risky. This was the hidden cost of doing search with a transactional database. It worked, until growth turned “okay” into friction.

Context: DMS and Parcel Management

As a ride-hailing company, we’ve expanded beyond passenger transportation into delivery and logistics services. Our Delivery Management System (DMS) handles the end-to-end lifecycle of parcels that clients send through our platform. From pickup requests to final delivery, the DMS tracks every parcel’s route, status, and metadata.

This expansion brought new operational complexity. Operations teams, customer support, and drivers all need to search, filter, and manage parcels efficiently across multiple dimensions: status, location, time windows, client details, and more.

What We Were Solving

We started with direct SQL in the DMS: a simple SearchCriteria object translated each request into a query against the database. With few parcels that was fine, but growth changed the shape of the problem.
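In spirit, the original approach looked something like the sketch below (the field names are illustrative, not the real DMS schema): each optional filter on the criteria object appends another parameterised clause to the WHERE list, so every new search field means another branch, and another filter combination the database has to execute efficiently.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SearchCriteria:
    # Illustrative fields only; the real DMS criteria object is richer.
    status: Optional[str] = None
    hub_id: Optional[str] = None
    created_after: Optional[str] = None  # ISO date string

    def to_sql(self) -> Tuple[str, List[str]]:
        """Build a parameterised query from whichever filters are set."""
        clauses, params = [], []
        if self.status is not None:
            clauses.append("status = %s")
            params.append(self.status)
        if self.hub_id is not None:
            clauses.append("hub_id = %s")
            params.append(self.hub_id)
        if self.created_after is not None:
            clauses.append("created_at >= %s")
            params.append(self.created_after)
        where = " AND ".join(clauses) if clauses else "TRUE"
        return (f"SELECT * FROM parcels WHERE {where} "
                f"ORDER BY created_at DESC LIMIT 50", params)
```

At small scale this pattern is perfectly serviceable; the pain arrives when the combinations of filters outgrow what a transactional database can index for.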

The database had grown to over 20 million rows and was handling hundreds of queries per second. At this scale:

  • More users with many parcels made list views expensive.
  • Multi-filter combinations (status, hub, shipping type, dates) became common.
  • Text search was expected.
  • The DB is the source of truth, but it’s not a search engine.

We needed list views that stayed fast at scale without pushing the transactional DB over the edge.

What We Needed: Defining the Requirements

The challenge was clear, but the solution needed to meet specific criteria. We needed something that was simple to use for our engineering teams, delivered fast search performance even under heavy load, and could scale as our parcel volumes continued to grow.

Why a dedicated search engine? Traditional relational databases are optimized for transactional consistency and point queries, not for the kind of exploratory, multi-dimensional filtering our users needed. OpenSearch is a distributed search and analytics engine built specifically for handling large-scale data with complex query requirements. It excels at:

  • Full-text search with relevance scoring and fuzzy matching
  • Multi-field filtering with minimal performance impact
  • Horizontal scalability through distributed architecture
  • Near real-time indexing for fresh data availability
  • Aggregations and analytics for operational insights
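As a rough illustration of why this workload fits a search engine, a single OpenSearch request body can combine full-text matching with several exact filters. The sketch below builds such a bool query; the index fields (client_name, address, tracking_code, and so on) are assumptions for the example, not the real DMS mapping.

```python
from typing import Optional

def build_parcel_query(text: Optional[str] = None,
                       status: Optional[str] = None,
                       hub_id: Optional[str] = None,
                       date_from: Optional[str] = None,
                       date_to: Optional[str] = None) -> dict:
    """Assemble an OpenSearch bool query.

    Exact constraints go into the `filter` context (no relevance scoring,
    cacheable); free text goes into `must` so scoring applies to it.
    """
    filters = []
    if status:
        filters.append({"term": {"status": status}})
    if hub_id:
        filters.append({"term": {"hub_id": hub_id}})
    if date_from or date_to:
        rng = {}
        if date_from:
            rng["gte"] = date_from
        if date_to:
            rng["lte"] = date_to
        filters.append({"range": {"created_at": rng}})

    must = []
    if text:
        # `fuzziness: AUTO` tolerates small typos in names or addresses.
        must.append({"multi_match": {
            "query": text,
            "fields": ["client_name", "address", "tracking_code"],
            "fuzziness": "AUTO",
        }})

    return {
        "query": {"bool": {"must": must, "filter": filters}},
        "sort": [{"created_at": "desc"}],
    }
```

Adding one more filter here is one more entry in the `filter` array; it does not change the shape of the query or its cost profile the way another WHERE branch does on the transactional database.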

We already had Sleuth, an in-house tool that abstracts OpenSearch complexity and provides a consistent interface for teams. The task was to apply Sleuth to our DMS search problem while ensuring:

  • Simplicity: Teams shouldn’t need deep OpenSearch knowledge
  • Performance: Millisecond response times for complex queries
  • Reliability: Consistent behavior under production load
  • Maintainability: Standardized patterns across services

Why Sleuth + OpenSearch

Sleuth packages OpenSearch’s complexity in a manageable, consistent way for teams:

  • Unified search APIs without bespoke endpoints per service.
  • Standardized configuration for fields, filters, and sorts.
  • Search offloading that keeps transactional DBs healthy.
  • Scalability that keeps latencies stable as volume grows.
  • Operational visibility into indexing and query usage.

It’s not a shortcut; it’s a repeatable way to operate search at product scale.

How It Works

Our model is mixed by design:

  • Exact ID lookups → DB (source of truth).
  • Listings & multi-filter queries → Sleuth (OpenSearch).
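A request router for this split can be as small as the sketch below; the db and sleuth clients are stand-ins for the real interfaces, shown only to make the routing rule concrete.

```python
def find_parcels(request: dict, db, sleuth) -> list:
    """Route reads to the right backend.

    - An exact parcel ID is a point read: answer from the transactional
      DB, the source of truth.
    - Anything else (filters, text search, sorting, pagination) is
      exploratory: answer from the search index via Sleuth.
    """
    parcel_id = request.get("parcel_id")
    if parcel_id is not None:
        return [db.get_parcel(parcel_id)]
    return sleuth.search(request)
```

The useful property is that the split is decided once, at the edge, so neither backend ever serves the workload it is bad at.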

Indexing is event-driven: when a parcel changes, an event carries the relevant fields to Sleuth, which applies the configured mapping and bulk-indexes the document into OpenSearch. Versioning ensures only the latest state is searchable. The net effect is very fresh search data, optimized for filtering and sorting.
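The versioning step is what makes event-driven indexing safe under retries and reordering. A minimal sketch, assuming each parcel event carries a monotonically increasing version: the bulk-API metadata declares external versioning, so OpenSearch rejects any write whose version is not newer than the one already stored.

```python
import json

def bulk_lines(events: list, index: str = "parcels") -> str:
    """Turn parcel-changed events into OpenSearch bulk-API payload lines.

    With `version_type: external`, a stale or replayed event is rejected
    by OpenSearch instead of overwriting fresher data, so event consumers
    can be retried without coordination.
    """
    lines = []
    for event in events:
        meta = {"index": {
            "_index": index,
            "_id": event["id"],
            "version": event["version"],  # monotonically increasing
            "version_type": "external",
        }}
        doc = {k: v for k, v in event.items() if k != "version"}
        lines.append(json.dumps(meta))
        lines.append(json.dumps(doc))
    # The bulk endpoint expects newline-delimited JSON, ending in a newline.
    return "\n".join(lines) + "\n"
```

A consumer that sees version 3 after having already indexed version 5 simply gets a version-conflict error back for that document, which it can safely ignore.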

What Improved

The improvements were immediate and measurable. Complex list queries that previously took seconds now return consistent millisecond responses, regardless of filter complexity. Database load dropped significantly as we moved exploration and search operations away from the transactional path, allowing the DB to focus on what it does best: maintaining data integrity and handling point queries.

Product iteration became simpler. Teams can now add fields and filters in a standardized way rather than writing per-service ad-hoc code. This means:

  • Faster delivery of new search capabilities
  • Less code to maintain across services
  • Consistent behavior that users can rely on

Most importantly, the user experience improved at scale. Lists load fast, sorting is responsive, and latency is predictable, even as data volumes grow.

Trade-Offs

Of course, this approach comes with trade-offs. OpenSearch is not the source of truth. A small indexing delay exists between database changes and search visibility. In practice, this delay is negligible for most use cases, but it’s something to be aware of when designing features that require immediate consistency.

The standardization that makes Sleuth powerful also means scoped flexibility. Some advanced search features may need special handling outside the standard patterns. We’ve found this to be a reasonable constraint that encourages thoughtful design rather than ad-hoc solutions.

There’s also an adoption curve. Teams need to learn about:

  • Document structure and field mappings
  • Event-driven indexing patterns
  • Versioning and conflict resolution
  • Query optimization for OpenSearch

However, once teams climb that curve, they benefit from a shared, solid foundation that accelerates future development.

When This Approach Fits

This approach fits particularly well when your service relies on lists with multiple filters and sorts over large volumes of data. If users are constantly exploring data through different combinations of filters, dates, and search terms, a dedicated search infrastructure becomes essential.

It’s also ideal when you want to protect the transactional database from exploratory queries that could degrade response times. By offloading these workloads, the database can maintain consistent performance for critical transactional operations.

Finally, this makes sense when you aim to align teams around a common search model and observability. Instead of each team building their own search infrastructure with different patterns and monitoring, Sleuth provides:

  • A unified approach to search across services
  • Shared operational knowledge and best practices
  • Consistent metrics and observability

This alignment reduces cognitive load and makes it easier for engineers to work across different parts of the system.

Key Takeaways

Real-time product search is a different workload from transactions. We treat it that way.

  • Keep the DB for truth and pinpoint reads.
  • Use Sleuth + OpenSearch for exploration and listings.

By matching each workload to the right tool, we achieved better performance, lower operational risk, and a foundation that scales with our product.

Borja Valverde

Software Engineer
