# Queries
This guide covers the Query API: how to construct queries, apply sort key conditions, filter results, paginate, and collect.
## BoundQuery — A Fluent Chain

Queries in effect-dynamodb are fluent chains with pre-resolved services. Entity index queries, primary-index queries, entity scans, and collection queries all return a BoundQuery that you chain combinators onto, then terminate with .collect(), .fetch(), or .paginate().
```ts
import { DynamoClient } from "effect-dynamodb"

const db = yield* DynamoClient.make({
  entities: { TaskEntity },
  tables: { MainTable },
})

// Construct and execute a query in one chain
const results = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .filter({ status: "active" })
  .limit(25)
  .reverse()
  .collect()
```

## Entity Queries
Query accessors live on db.entities (for entity index queries, primary key lookups, and scans) and db.collections (for auto-discovered cross-entity queries). Entity index queries return typed arrays; collection queries return grouped results.
```ts
// Shorthand for the entity accessor, used throughout the examples below
const tasks = db.entities.TaskEntity

// Primary key lookup (not a query — returns a single item)
const task = yield* tasks.get({ taskId: "t-001" })
```
```ts
// Named index queries — via entity index accessors
const projectTasks = yield* db.entities.TaskEntity.byProject({
  projectId: "proj-alpha",
}).collect()

const assigneeTasks = yield* db.entities.TaskEntity.byAssignee({
  assigneeId: "emp-alice",
}).collect()
```

Every entity also exposes a .primary(...) accessor for the primary index — same contract as GSI accessors (required PK composites, optional SK composites with begins_with prefix matching):

```ts
// Primary-index query — list every item under a shared primary partition.
// Used when multiple items share the primary PK and are distinguished by SK
// (the join-table single-table pattern).
const allMembers = yield* db.entities.Memberships.primary({
  orgId: "org-acme",
}).collect()
```

## Generated Accessor Mapping
Every index — including primary — gets a query accessor. Accessors accept required PK composites and optional SK composites (partial SK composites apply begins_with prefix matching). .get(fullKey) remains the dedicated GetItem path for single-item fetches by full primary key.
| Definition | Accessor | Argument Type |
|---|---|---|
| `primaryKey: { pk: { composite: ["taskId"] }, ... }` | `db.entities.TaskEntity.get(...)` | `{ taskId: string }` |
| `primaryKey: { pk: { composite: ["orgId"] }, sk: { composite: ["userId"] } }` | `db.entities.Memberships.primary(...)` | `{ orgId: string; userId?: string }` |
| `indexes: { byProject: { name: "gsi1", pk: { composite: ["projectId"] }, ... } }` | `db.entities.TaskEntity.byProject(...)` | `{ projectId: string }` |
| `indexes: { byAssignee: { name: "gsi2", pk: { composite: ["assigneeId"] }, ... } }` | `db.entities.TaskEntity.byAssignee(...)` | `{ assigneeId: string }` |
## .primary() vs .get()
Use .get(fullKey) when you know the full primary composite key and want a single item — it’s a direct GetItem, cheaper than a Query. Use .primary(partialKey) when you want to list items that share a primary partition key, for example a join table where one partition holds many items distinguished by the sort key:
```ts
// `Memberships` primary key: pk = orgId, sk = userId
//
// List every membership in an organization — PK only, SK composites omitted
const allMembers = yield* db.entities.Memberships.primary({
  orgId: "org-acme",
}).collect()

// Narrow by sort-key prefix — equivalent to `.get()` here because the full SK
// composite is provided, but returns an array (possibly empty) rather than
// failing with `ItemNotFound`.
const bobs = yield* db.entities.Memberships.primary({
  orgId: "org-acme",
  userId: "u-bob",
}).collect()
```

Behavior is symmetric with GSI accessors: .where(), .filter(), .select(), .limit(), .reverse(), .startFrom(), .consistentRead(), .collect(), .fetch(), .paginate(), and .count() all chain off the returned BoundQuery.
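To make the symmetry concrete, here is a sketch that pages and counts over the same primary partition. The accessors and combinators are those listed above; the assumption that .count() resolves to a plain number is ours:

```ts
// Page through a primary-index query exactly as you would a GSI query
const membersPage = yield* db.entities.Memberships
  .primary({ orgId: "org-acme" })
  .limit(25)
  .fetch()

// Count partition members without materializing the items
// (assumes .count() resolves to a number)
const memberCount = yield* db.entities.Memberships
  .primary({ orgId: "org-acme" })
  .count()
```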
## Sort Key Conditions
.where() adds a KeyConditionExpression against a remaining sort key composite — DynamoDB evaluates it server-side on the index, so it reduces consumed read capacity. Use it whenever the condition is on a sort-key composite the accessor hasn’t already pinned. Operators: eq, lt, lte, gt, gte, between, beginsWith.
```ts
// Index: byProject — pk: ["projectId"], sk: ["status", "createdAt"]
// `.byProject({ projectId })` pins the PK; `status` + `createdAt` remain on the SK.
const recentDone = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .where((t, { eq }) => eq(t.status, "done"))
  .where((t, { gt }) => gt(t.createdAt, "2026-01-01"))
  .collect()
```

.where() is only available on a BoundQuery whose remaining SK composites are non-empty. Once you have already supplied every SK composite to the accessor (or after a previous .where() consumes them), the method is no longer present on the type — calling it is a compile error. This makes it impossible to issue a sort-key condition that DynamoDB would reject at runtime.
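The range operators chain the same way. A sketch against the same byProject index, assuming between takes the attribute plus inclusive lower and upper bounds:

```ts
// `eq` pins the first SK composite; `between` bounds the remaining one —
// both are evaluated server-side in the KeyConditionExpression
const januaryDone = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .where((t, { eq }) => eq(t.status, "done"))
  .where((t, { between }) => between(t.createdAt, "2026-01-01", "2026-01-31"))
  .collect()
```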
## Post-Query Filtering
.filter() applies a FilterExpression after items are read from the index — it does not reduce read capacity, only network transfer. Use it for conditions on non-key attributes.
```ts
// Shorthand — AND-equality on multiple fields
const highPriActive = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .filter({ status: "active", priority: "high" })
  .collect()

// Shorthand — simple AND-equality
const activeShorthand = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .filter({ status: "active" })
  .collect()
```

When to use SK composites vs filter:
| Approach | DynamoDB Mapping | Reduces Read Capacity? | Use For |
|---|---|---|---|
| SK composites in accessor | `KeyConditionExpression` | Yes | Narrowing by sort key prefix |
| `.filter()` | `FilterExpression` | No | Any attribute (post-read) |
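The two mechanisms compose in one chain. A sketch reusing the byProject index from above (status is a sort-key composite, priority a non-key attribute):

```ts
// The key condition narrows what DynamoDB reads;
// the filter trims what it returns
const activeHighPri = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .where((t, { eq }) => eq(t.status, "active"))
  .filter({ priority: "high" })
  .collect()
```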
See the Expressions Guide for the complete operator reference and DynamoDB mapping tables.
## Collection Queries
Collection queries return all member entity types, grouped by member name.
```ts
const db = yield* DynamoClient.make({
  entities: { ClusteredEmployees, ClusteredTasks },
  tables: { MainTable },
})

// All entities in the collection (auto-discovered from entity indexes
// with collection: "tenantMembers")
const { ClusteredEmployees, ClusteredTasks } = yield* db.collections
  .tenantMembers({ tenantId: "t-acme" })
  .collect()
// ClusteredEmployees: Employee[], ClusteredTasks: Task[]
```

## Hierarchical Sub-Collections
Use the array form collection: ["parent", "child"] together with type: "clustered" to nest entities in a hierarchy. A query at the parent level returns the parent’s items and every descendant; a query at a child level returns only items at that level or deeper.
```ts
// Parent — returns Employee + Task + ProjectMember (everything in the partition)
const contributions = yield* db.collections
  .contributions({ employeeId: "emp-alice" })
  .collect()
// { SubEmployee: Employee[], SubTasks: Task[], SubProjectMembers: ProjectMember[] }

// Child — returns only the deeper-level entities
const assignments = yield* db.collections
  .assignments({ employeeId: "emp-alice" })
  .collect()
// { SubTasks: Task[], SubProjectMembers: ProjectMember[] }
```

For independent collections that just happen to share an index (no parent/child relationship), use a single string instead of an array (collection: "name"). See the Indexes & Collections guide for the full pattern, SK shape, and trade-offs.
## Pagination

### Collect All Items
.collect() fetches all pages and flattens into a single array (or grouped result for collections):
```ts
// Collect all items across all pages
const allProjectTasks = yield* db.entities.TaskEntity.byProject({
  projectId: "proj-alpha",
}).collect()
```

### Single Page
.fetch() returns a page containing matching items (up to the limit) and an optional cursor for pagination:
```ts
// Single page with limit — returns page with items and cursor
const page = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .limit(3)
  .fetch()
// page.items: Task[] (up to 3 items)
// page.cursor: string | null (pass to startFrom for next page)
```

### Cursor-Based Pagination
Use .startFrom() to resume from a previous page’s cursor:
```ts
// Cursor-based pagination
const page1 = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .limit(3)
  .fetch()

if (page1.cursor) {
  const page2 = yield* db.entities.TaskEntity
    .byProject({ projectId: "proj-alpha" })
    .limit(3)
    .startFrom(page1.cursor)
    .fetch()
}
```
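For illustration, the cursor can also drive a manual drain loop — a sketch assuming the page shape shown above ({ items, cursor }); in practice .collect() or .paginate() does this for you:

```ts
// Manually accumulate every page by threading the cursor
const all: Task[] = []
let cursor: string | null = null
do {
  const query = db.entities.TaskEntity
    .byProject({ projectId: "proj-alpha" })
    .limit(25)
  const nextPage = yield* (cursor ? query.startFrom(cursor) : query).fetch()
  all.push(...nextPage.items)
  cursor = nextPage.cursor
} while (cursor)
```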
### Streaming

.paginate() returns a Stream<A>, automatically handling DynamoDB pagination:
```ts
// Streaming — automatic pagination via Stream
const stream = tasks.scan().paginate()
const allFromStream = yield* Stream.runCollect(stream)
```

Entity scans read the entire table and return items matching the entity type. db.entities.Entity.scan() returns a BoundQuery, so all combinators and terminals work with scans.

```ts
// Basic scan — all items of this entity type
const allTasks = yield* tasks.scan().collect()

// Scan with filter
const activeScan = yield* tasks.scan().filter({ status: "active" }).collect()

// Scan with limit
const firstPage = yield* tasks.scan().limit(3).collect()

// Scan with consistent read
const consistent = yield* tasks.scan().consistentRead().collect()

// Stream-based scan
const scanStream = tasks.scan().paginate()
yield* Stream.runForEach(scanStream, (t) =>
  Console.log(`  Scanned: ${t.taskId} — "${t.title}"`)
)
```

When to use Scan vs Query:
| | Query | Scan |
|---|---|---|
| Targets | Specific partition | Entire table |
| Efficiency | Reads only matching partition | Reads every item |
| Cost | Low (proportional to results) | High (proportional to table size) |
| Use cases | Normal application queries | Admin tools, migrations, data exports, analytics |
Scan automatically filters by `__edd_e__` — even in a single-table design, db.entities.TaskEntity.scan() only returns Task items.
## Consistent Reads
By default, DynamoDB reads are eventually consistent. For strong consistency, use consistentRead:
```ts
// Consistent read on get — use entity definition's get + pipe
const consistentTask = yield* TaskEntity.get({ taskId: "t-001" }).pipe(
  Entity.consistentRead()
)

// Consistent read on scan (applies to any BoundQuery against the base table).
// Note: DynamoDB GSIs do not support consistent reads — only the base table
// and local secondary indexes do.
const consistentScan = yield* tasks.scan().consistentRead().collect()
```

Consistent reads cost 2x the read capacity of eventually-consistent reads. Use them when you need read-after-write consistency (e.g., immediately after a put or update). GSIs cannot serve consistent reads — consistentRead() is only valid against the primary table or a local secondary index.
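The same toggle applies to any base-table query, not just scans. A sketch of a read-after-write listing against the primary index, reusing the Memberships entity from earlier:

```ts
// Strongly-consistent listing — valid because the primary index
// lives on the base table, not a GSI
const freshMembers = yield* db.entities.Memberships
  .primary({ orgId: "org-acme" })
  .consistentRead()
  .collect()
```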
## Ordering
By default, results are in ascending sort key order. Use .reverse() for descending:
```ts
// Most recent tasks first (descending sort key order)
const recent = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .reverse()
  .limit(3)
  .collect()
```

## Complete Example
```ts
import { Effect, Layer, Stream } from "effect"
import { DynamoClient } from "effect-dynamodb"

const program = Effect.gen(function* () {
  const db = yield* DynamoClient.make({
    entities: { TaskEntity, ClusteredEmployees, ClusteredTasks },
    tables: { MainTable },
  })

  // --- Single entity scan with filter ---
  const activeTasks = yield* db.entities.TaskEntity.scan()
    .filter({ status: "active" })
    .limit(50)
    .collect()

  // --- Entity index query, reversed, with filter ---
  const recentHighPriority = yield* db.entities.TaskEntity
    .byAssignee({ assigneeId: "emp-alice" })
    .filter({ priority: "high" })
    .reverse()
    .limit(10)
    .collect()

  // --- Auto-discovered collection query: all tenant members ---
  // (renamed on destructure so the bindings don't shadow the imported entities)
  const { ClusteredEmployees: employees, ClusteredTasks: tenantTasks } =
    yield* db.collections.tenantMembers({ tenantId: "t-acme" }).collect()
  // employees: Employee[], tenantTasks: Task[]

  // --- Scan with streaming ---
  const scanStream = db.entities.TaskEntity.scan()
    .filter({ status: "active" })
    .paginate()
  yield* Stream.runForEach(scanStream, (t) => Effect.log(`Task: ${t.title}`))
})

const main = program.pipe(
  Effect.provide(
    Layer.mergeAll(
      DynamoClient.layer({ region: "us-east-1" }),
      MainTable.layer({ name: "Main" }),
    ),
  ),
)
```

## What’s Next?
- Expressions — Complete reference for condition, filter, update, and projection expressions with DynamoDB mapping tables
- Data Integrity — Unique constraints, versioning, and optimistic concurrency
- Lifecycle — Soft delete, restore, purge, and version retention
- Advanced — Rich updates, batch operations, conditional writes
- DynamoDB Streams — Decode stream records into typed domain objects