
Queries

This guide covers the Query API: how to construct queries, apply sort key conditions, filter results, paginate, and collect.

Queries in effect-dynamodb are fluent chains with pre-resolved services. A collection query or entity scan returns a BoundQuery that you chain combinators onto, then terminate with .collect(), .fetch(), or .paginate().

import { DynamoClient } from "effect-dynamodb"
const db = yield* DynamoClient.make({
entities: { TaskEntity },
tables: { MainTable },
})
// Construct and execute a query in one chain
const results = yield* db.entities.TaskEntity
.byProject({ projectId: "proj-alpha" })
.filter({ status: "active" })
.limit(25)
.reverse()
.collect()

Query accessors live on db.entities (for entity index queries, primary key lookups, and scans) and db.collections (for auto-discovered cross-entity queries). Entity index queries return typed arrays; collection queries return grouped results.

// Primary key lookup (not a query — returns a single item).
// `tasks` is the alias used in the snippets below:
const tasks = db.entities.TaskEntity
const task = yield* tasks.get({ taskId: "t-001" })
// Named index queries — via entity index accessors
const projectTasks = yield* db.entities.TaskEntity.byProject({
projectId: "proj-alpha",
}).collect()
const assigneeTasks = yield* db.entities.TaskEntity.byAssignee({
assigneeId: "emp-alice",
}).collect()

Every entity also exposes a .primary(...) accessor for the primary index — same contract as GSI accessors (required PK composites, optional SK composites with begins_with prefix matching):

// Primary-index query — list every item under a shared primary partition.
// Used when multiple items share the primary PK and are distinguished by SK
// (the join-table single-table pattern).
const allMembers = yield* db.entities.Memberships.primary({
orgId: "org-acme",
}).collect()

Every index — including primary — gets a query accessor. Accessors accept required PK composites and optional SK composites (partial SK composites apply begins_with prefix matching). .get(fullKey) remains the dedicated GetItem path for single-item fetches by full primary key.

| Definition | Accessor | Argument Type |
| --- | --- | --- |
| primaryKey: { pk: { composite: ["taskId"] }, ... } | db.entities.TaskEntity.get(...) | { taskId: string } |
| primaryKey: { pk: { composite: ["orgId"] }, sk: { composite: ["userId"] } } | db.entities.Memberships.primary(...) | { orgId: string; userId?: string } |
| indexes: { byProject: { name: "gsi1", pk: { composite: ["projectId"] }, ... } } | db.entities.TaskEntity.byProject(...) | { projectId: string } |
| indexes: { byAssignee: { name: "gsi2", pk: { composite: ["assigneeId"] }, ... } } | db.entities.TaskEntity.byAssignee(...) | { assigneeId: string } |

Use .get(fullKey) when you know the full primary composite key and want a single item — it’s a direct GetItem, cheaper than a Query. Use .primary(partialKey) when you want to list items that share a primary partition key, for example a join table where one partition holds many items distinguished by the sort key:

// `Memberships` primary key: pk = orgId, sk = userId
//
// List every membership in an organization — PK only, SK composites omitted
const allMembers = yield* db.entities.Memberships.primary({
orgId: "org-acme",
}).collect()
// Narrow by sort-key prefix — equivalent to `.get()` here because the full SK
// composite is provided, but returns an array (possibly empty) rather than
// failing with `ItemNotFound`.
const bobs = yield* db.entities.Memberships.primary({
orgId: "org-acme",
userId: "u-bob",
}).collect()

Behavior is symmetric with GSI accessors: .where(), .filter(), .select(), .limit(), .reverse(), .startFrom(), .consistentRead(), .collect(), .fetch(), .paginate(), and .count() all chain off the returned BoundQuery.

.where() adds a KeyConditionExpression against a remaining sort key composite — DynamoDB evaluates it on the index server-side, so it does reduce read capacity. Use it whenever the condition is on a sort-key composite the accessor hasn’t already pinned. Operators: eq, lt, lte, gt, gte, between, beginsWith.

// Index: byProject — pk: ["projectId"], sk: ["status", "createdAt"]
// `.byProject({ projectId })` pins the PK; `status` + `createdAt` remain on the SK.
const recentDone = yield* db.entities.TaskEntity
.byProject({ projectId: "proj-alpha" })
.where((t, { eq }) => eq(t.status, "done"))
.where((t, { gt }) => gt(t.createdAt, "2026-01-01"))
.collect()
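The remaining operators use the same callback shape. A hedged sketch against the same byProject index (the date bounds are illustrative, not from the source):

```typescript
// Range condition on the remaining `createdAt` composite with `between`
const januaryDone = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .where((t, { eq }) => eq(t.status, "done"))
  .where((t, { between }) => between(t.createdAt, "2026-01-01", "2026-01-31"))
  .collect()

// Prefix condition with `beginsWith` — month-level narrowing on an ISO date
const juneDone = yield* db.entities.TaskEntity
  .byProject({ projectId: "proj-alpha" })
  .where((t, { eq }) => eq(t.status, "done"))
  .where((t, { beginsWith }) => beginsWith(t.createdAt, "2026-06"))
  .collect()
```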

.where() is only available on a BoundQuery whose remaining SK composites are non-empty. Once you have already supplied every SK composite to the accessor (or after a previous .where() consumes them), the method is no longer present on the type — calling it is a compile error. This makes it impossible to issue a sort-key condition that DynamoDB would reject at runtime.
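The mechanism can be illustrated with a minimal, self-contained sketch (not effect-dynamodb's actual types): a builder generic over the sort-key composites that remain unconsumed, whose where method exists on the type only while that list is non-empty.

```typescript
// Sketch of the pattern: `where` disappears from the type once every
// sort-key composite has been consumed, so over-supplying conditions
// fails to compile rather than failing at runtime.
type SketchQuery<Remaining extends ReadonlyArray<string>> = {
  collect(): Array<string>
} & (Remaining extends readonly [infer Head extends string, ...infer Rest extends ReadonlyArray<string>]
  ? { where(field: Head, value: string): SketchQuery<Rest> }
  : {})

function sketchQuery<Remaining extends ReadonlyArray<string>>(
  conditions: Array<string> = []
): SketchQuery<Remaining> {
  // The runtime object always carries `where`; only the type erases it.
  const self = {
    collect: () => conditions,
    where: (field: string, value: string) =>
      sketchQuery<ReadonlyArray<string>>([...conditions, `${field} = ${value}`]),
  }
  return self as unknown as SketchQuery<Remaining>
}

// SK composites: ["status", "createdAt"] — two `.where` calls allowed, no more
const q = sketchQuery<["status", "createdAt"]>()
  .where("status", "done")
  .where("createdAt", "2026-01-01")
console.log(q.collect()) // ["status = done", "createdAt = 2026-01-01"]
// q.where("x", "y") // compile error: `where` does not exist on SketchQuery<[]>
```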

.filter() applies a FilterExpression after items are read from the index — it reduces network transfer but not the read capacity consumed. Use it for conditions on non-key attributes.

// Shorthand — AND-equality on multiple fields
const highPriActive = yield* db.entities.TaskEntity.byProject({ projectId: "proj-alpha" })
.filter({ status: "active", priority: "high" })
.collect()
// Shorthand — simple AND-equality
const activeShorthand = yield* db.entities.TaskEntity.byProject({ projectId: "proj-alpha" })
.filter({ status: "active" })
.collect()

When to use SK composites vs filter:

| Approach | DynamoDB Mapping | Reduces Read Capacity? | Use For |
| --- | --- | --- | --- |
| SK composites in accessor | KeyConditionExpression | Yes | Narrowing by sort key prefix |
| .filter() | FilterExpression | No | Any attribute (post-read) |

See the Expressions Guide for the complete operator reference and DynamoDB mapping tables.

Collection queries return all member entity types, grouped by member name.

const db = yield* DynamoClient.make({
entities: { ClusteredEmployees, ClusteredTasks },
tables: { MainTable },
})
// All entities in the collection (auto-discovered from entity indexes with collection: "tenantMembers")
const { ClusteredEmployees, ClusteredTasks } = yield* db.collections
.tenantMembers({ tenantId: "t-acme" })
.collect()
// ClusteredEmployees: Employee[], ClusteredTasks: Task[]

Use the array form collection: ["parent", "child"] together with type: "clustered" to nest entities in a hierarchy. A query at the parent level returns the parent’s items and every descendant; a query at a child level returns only items at that level or deeper.

// Parent — returns Employee + Task + ProjectMember (everything in the partition)
const contributions = yield* db.collections
.contributions({ employeeId: "emp-alice" })
.collect()
// { SubEmployee: Employee[], SubTasks: Task[], SubProjectMembers: ProjectMember[] }
// Child — returns only the deeper-level entities
const assignments = yield* db.collections
.assignments({ employeeId: "emp-alice" })
.collect()
// { SubTasks: Task[], SubProjectMembers: ProjectMember[] }

For independent collections that just happen to share an index (no parent/child relationship), use a single string instead of an array (collection: "name"). See the Indexes & Collections guide for the full pattern, SK shape, and trade-offs.
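Side by side, the two index shapes might look like this — a hedged fragment following the index definition shape from the table above; the index names, composites, and exact placement of type: "clustered" are assumptions:

```typescript
// Independent collections — one shared string name, no hierarchy
indexes: {
  byTenant: {
    name: "gsi1",
    pk: { composite: ["tenantId"] },
    collection: "tenantMembers",
  },
}

// Hierarchical collections — array path from parent to child, plus type: "clustered"
indexes: {
  byEmployee: {
    name: "gsi1",
    pk: { composite: ["employeeId"] },
    collection: ["contributions", "assignments"],
    type: "clustered",
  },
}
```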

.collect() fetches all pages and flattens into a single array (or grouped result for collections):

// Collect all items across all pages
const allProjectTasks = yield* db.entities.TaskEntity.byProject({
projectId: "proj-alpha",
}).collect()

.fetch() returns a page containing matching items (up to the limit) and an optional cursor for pagination:

// Single page with limit — returns page with items and cursor
const page = yield* db.entities.TaskEntity.byProject({ projectId: "proj-alpha" }).limit(3).fetch()
// page.items: Task[] (up to 3 items)
// page.cursor: string | null (pass to startFrom for next page)

Use .startFrom() to resume from a previous page’s cursor:

// Cursor-based pagination
const page1 = yield* db.entities.TaskEntity.byProject({ projectId: "proj-alpha" })
.limit(3)
.fetch()
if (page1.cursor) {
const page2 = yield* db.entities.TaskEntity.byProject({ projectId: "proj-alpha" })
.limit(3)
.startFrom(page1.cursor)
.fetch()
}

.paginate() returns a Stream<A>, automatically handling DynamoDB pagination:

// Streaming — automatic pagination via Stream
const stream = tasks.scan().paginate()
const allFromStream = yield* Stream.runCollect(stream)

Entity scans read the entire table and return items matching the entity type. db.entities.Entity.scan() returns a BoundQuery, so all combinators and terminals work with scans.

// Basic scan — all items of this entity type
const allTasks = yield* tasks.scan().collect()
// Scan with filter
const activeScan = yield* tasks.scan().filter({ status: "active" }).collect()
// Scan with limit
const firstPage = yield* tasks.scan().limit(3).collect()
// Scan with consistent read
const consistent = yield* tasks.scan().consistentRead().collect()
// Stream-based scan
const scanStream = tasks.scan().paginate()
yield* Stream.runForEach(scanStream, (t) => Console.log(` Scanned: ${t.taskId} — "${t.title}"`))

When to use Scan vs Query:

| | Query | Scan |
| --- | --- | --- |
| Targets | Specific partition | Entire table |
| Efficiency | Reads only matching partition | Reads every item |
| Cost | Low (proportional to results) | High (proportional to table size) |
| Use cases | Normal application queries | Admin tools, migrations, data exports, analytics |

Scan automatically filters by __edd_e__ — even in a single-table design, db.entities.Tasks.scan() only returns Task items.

By default, DynamoDB reads are eventually consistent. For strong consistency, use consistentRead:

// Consistent read on get — use entity definition's get + pipe
const consistentTask = yield* TaskEntity.get({ taskId: "t-001" }).pipe(Entity.consistentRead())
// Consistent read on scan (applies to any BoundQuery against the base table).
// Note: DynamoDB GSIs do not support consistent reads — only the base table
// and local secondary indexes do.
const consistentScan = yield* tasks.scan().consistentRead().collect()

Consistent reads cost 2x the read capacity of eventually-consistent reads. Use them when you need read-after-write consistency (e.g., immediately after a put or update). GSIs cannot serve consistent reads: consistentRead() is only valid against the base table or a local secondary index.

By default, results are in ascending sort key order. Use .reverse() for descending:

// Most recent tasks first (descending sort key order)
const recent = yield* db.entities.TaskEntity.byProject({ projectId: "proj-alpha" })
.reverse()
.limit(3)
.collect()
Putting it all together — a complete program combining scans, index queries, collection queries, and streaming:

import { Effect, Layer, Stream } from "effect"
import { DynamoClient } from "effect-dynamodb"
const program = Effect.gen(function* () {
const db = yield* DynamoClient.make({
entities: { TaskEntity, ClusteredEmployees, ClusteredTasks },
tables: { MainTable },
})
// --- Single entity scan with filter ---
const activeTasks = yield* db.entities.TaskEntity.scan()
.filter({ status: "active" })
.limit(50)
.collect()
// --- Entity index query, reversed, with filter ---
const recentHighPriority = yield* db.entities.TaskEntity
.byAssignee({ assigneeId: "emp-alice" })
.filter({ priority: "high" })
.reverse()
.limit(10)
.collect()
// --- Auto-discovered collection query: all tenant members ---
const { ClusteredEmployees, ClusteredTasks } = yield* db.collections
.tenantMembers({ tenantId: "t-acme" })
.collect()
// ClusteredEmployees: Employee[], ClusteredTasks: Task[]
// --- Scan with streaming ---
const scanStream = db.entities.TaskEntity.scan()
.filter({ status: "active" })
.paginate()
yield* Stream.runForEach(scanStream, (t) =>
Effect.log(`Task: ${t.title}`)
)
})
const main = program.pipe(
Effect.provide(
Layer.mergeAll(
DynamoClient.layer({ region: "us-east-1" }),
MainTable.layer({ name: "Main" }),
)
)
)
Related guides:

  • Expressions — Complete reference for condition, filter, update, and projection expressions with DynamoDB mapping tables
  • Data Integrity — Unique constraints, versioning, and optimistic concurrency
  • Lifecycle — Soft delete, restore, purge, and version retention
  • Advanced — Rich updates, batch operations, conditional writes
  • DynamoDB Streams — Decode stream records into typed domain objects