Database Operations

The Blockflow SDK simplifies database interactions and is automatically included in your project when initialised using the CLI. This package, @blockflow-labs/sdk, provides a powerful set of tools to streamline database operations and efficiently manage blockchain data. It supports various database clients and offers operations tailored for both reorg-aware and reorg-unaware databases.


Database Operations Types

The SDK offers three main types of database operations, each designed for a specific scenario depending on whether you need to handle blockchain reorganisations (reorgs); a short code sketch follows the list:

  1. InstanceDBOperations This client is designed for writing blockchain data while automatically handling reorgs. When working within instance handler code, InstanceDBOperations manage reorgs on your behalf, keeping your data consistent across chain reorganisations. This is particularly useful when accurate blockchain data is critical and reorg handling is necessary.

  2. APIDBOperations Optimised for querying and reading data from reorg-aware databases, APIDBOperations are ideal for fast data retrieval. This client is used to power your APIs or front-end applications with quick access to blockchain data that has already been indexed. It is the recommended choice for read operations, ensuring that your application can fetch data efficiently and reliably.

  3. NativeDBOperations If your database setup does not require reorg handling (reorg-unaware databases), NativeDBOperations offer a simple and flexible way to dump and manage raw blockchain data. This client allows you to perform standard database operations without the additional overhead of reorg management, making it suitable for use cases where reorg consistency isn't a concern.
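
As a rough sketch of how these types map to code, the operation type determines which namespace you create your client from inside a handler. Only the Instance namespace appears in the example further down this page; the API and Native namespace names and the chooseOperations helper below are assumptions used purely for illustration, so confirm the exact exports against the SDK.

import { IBind } from "@blockflow-labs/utils";
import { Instance } from "@blockflow-labs/sdk";
// Assumed exports, shown commented out; only Instance is used on this page:
// import { API, Native } from "@blockflow-labs/sdk";

// Hypothetical helper illustrating which namespace backs each operation type
export const chooseOperations = (bind: IBind) => {
  // 1. InstanceDBOperations: reorg-aware writes inside instance handlers
  const instanceClient = Instance.PostgresClient(bind);

  // 2. APIDBOperations: fast reads of already-indexed data (assumed namespace)
  // const apiClient = API.PostgresClient(bind);

  // 3. NativeDBOperations: raw dumps into reorg-unaware databases (assumed namespace)
  // const nativeClient = Native.PostgresClient(bind);

  return instanceClient;
};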


Database Clients

For each type of database operation, the Blockflow SDK supports three database clients, so you can choose the one that matches your project’s database setup (a short sketch follows the list):

  • DuckDBClient: Use this client if you're working with DuckDB, a columnar database that is particularly efficient for analytical workloads.

  • MongoClient: Select this client when using MongoDB, a document-based NoSQL database that's ideal for flexible, schema-less data storage.

  • PostgresClient: This client is designed for PostgreSQL, a powerful open-source relational database that supports complex queries and transactions.
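
To illustrate the choice, each client is a constructor on the chosen operations namespace, so switching databases is a matter of calling a different constructor. Only PostgresClient appears in this page's example; the assumption that MongoClient and DuckDBClient accept the same bind argument, and the pickClient helper itself, are illustrative and should be verified against the SDK.

import { IBind } from "@blockflow-labs/utils";
import { Instance } from "@blockflow-labs/sdk";

// Hypothetical helper showing the three client constructors side by side
export const pickClient = (bind: IBind) => {
  const postgres = Instance.PostgresClient(bind); // relational, complex queries and transactions
  // Assumed to mirror the Postgres constructor; verify against the SDK:
  // const mongo = Instance.MongoClient(bind); // flexible, schema-less document storage
  // const duck = Instance.DuckDBClient(bind); // columnar, analytical workloads
  return postgres;
};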


How to Use

To use any of these clients within your handler code, you can import the appropriate operations and database client based on your needs:

import { IEventContext, IBind, ISecrets } from "@blockflow-labs/utils";

import { Instance } from "@blockflow-labs/sdk";

/**
 * @dev Event::Transfer(address from, address to, uint256 value)
 * @param context trigger object which contains {event: {from, to, value}, transaction, block, log}
 * @param bind init function for database wrapper methods
 * @param secrets access to user configured secrets
 */
export const TransferHandler = async (
  context: IEventContext,
  bind: IBind,
  secrets: ISecrets
) => {
  // Implement your event handler logic for Transfer here

  const { event, transaction, block, log } = context;
  const { from, to, value } = event;

  const client = Instance.PostgresClient(bind);

  // client.db() returns a database wrapper used for reorg-aware operations
  const transferDB = client.db();
};

For example, if you're using PostgreSQL and want to write blockchain data with automatic reorg handling, you would use the Instance operations together with the PostgresClient, as shown above.
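
To make that concrete, the sketch below continues the TransferHandler above with a minimal write. The save method and the shape of the record passed to it are assumptions for illustration only; the actual methods exposed by the wrapper returned from client.db() may differ, so check the SDK's database wrapper reference.

  // Continuing inside TransferHandler, after `const transferDB = client.db();`
  // The save method and record shape below are assumptions for illustration.
  await transferDB.save({ from, to, value });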

By offering this flexibility, the Blockflow SDK ensures that you can handle various types of blockchain data operations, regardless of the database you're using or whether reorg handling is required.
