ERC-4626

In this tutorial, we will show you how to index ERC-4626 data using the Blockflow CLI. Blockflow provides a seamless and efficient solution for building "Serverless Backends" through our Managed Databases, Instances, and Custom APIs.

ERC-4626 is a new Ethereum standard designed to improve tokenized vaults. It's about making the process of creating and managing yield-bearing vaults more efficient and standardized. With ERC-4626, developers can easily integrate different DeFi protocols, which means smoother operations and fewer headaches. In the ERC-4626 vault contract, users initiate the process by depositing their assets into the smart contract vault. These assets are combined into a pooled system, and in return, the vault issues ERC-20 tokens representing each user's portion of the assets deposited in the pool.
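The exchange of assets for shares described above is proportional: a depositor receives shares representing their slice of the pool. A minimal sketch of this math (with a hypothetical helper name, ignoring the rounding direction, decimals offset, and fees that real vaults handle):

```typescript
// Simplified ERC-4626 share math. `convertToShares` is a hypothetical
// helper name; real vaults also handle rounding direction, a decimals
// offset, and fees.
function convertToShares(
  assets: number,
  totalAssets: number,
  totalSupply: number
): number {
  // The first deposit into an empty vault mints shares 1:1.
  if (totalSupply === 0 || totalAssets === 0) return assets;
  // Otherwise, shares are proportional to the depositor's slice of the pool.
  return (assets * totalSupply) / totalAssets;
}

// Depositing 100 assets into a vault holding 1000 assets backed by
// 500 shares mints 50 shares.
console.log(convertToShares(100, 1000, 500)); // 50
```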

The vault uses specific strategies to try and make as much profit as possible from these pooled assets. These strategies are carefully planned to get the most rewards for everyone who has put their assets in the vault.

Also, the vault keeps some tokens in reserve as a safety net. When someone wants to withdraw their assets, these reserves are paid out first, before any funds are redeemed from the deployed strategies. This ensures there is always liquidity available for withdrawals and helps the vault manage its funds better.
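The reserve-first behavior can be sketched as follows; `planWithdrawal` is a hypothetical helper for illustration, not part of any vault contract:

```typescript
// Hypothetical sketch of the reserve-first withdrawal flow described
// above: redemptions are served from the vault's idle reserve before
// any funds are pulled back from the deployed strategies.
function planWithdrawal(
  amount: number,
  reserve: number
): { fromReserve: number; fromStrategies: number } {
  const fromReserve = Math.min(amount, reserve); // drain the reserve first
  return { fromReserve, fromStrategies: amount - fromReserve };
}

// Withdrawing 150 with 100 in reserve: 100 comes from the reserve,
// the remaining 50 is unwound from strategies.
console.log(planWithdrawal(150, 100)); // { fromReserve: 100, fromStrategies: 50 }
```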

Project Overview

This tutorial teaches you how to index ERC-4626 data by creating subgraphs with Blockflow and its utilities. A short breakdown of the project:

  • Create a project directory and cd into it. Run blockflow init in the terminal and provide the address of the contract to be tracked.

  • Add an appropriate schema to the studio.schema.ts file based on the data emitted by the events to be tracked.

  • Write the handler functions for tracking each event.

  • Test the project using the command blockflow test and verify the data stored in the database.

Setting up the project

  • Download the Blockflow CLI using the command:

    npm i -g @blockflow-labs/cli

    We can remove the -g flag if global installation is not required.

  • Create a project directory and cd into it to make it the working directory.

  • Use the blockflow init command to initiate the project.

  • Below are the fields that need to be entered for the project. You can accept the default value for each field by pressing [ENTER]. The last field asks you to select the instance trigger type from 4 choices. You can also add more than one trigger to a project.

Setting up the schema

The schema below captures the data emitted by an ERC-4626 contract's events:

export interface User {
  id: String;
  tokenBalance: number;
  vaultBalance: number;
  entryValue: number;
  realizedEarnings: number;
}

export interface Token {
  id: String;
  name: string;
  totalSupply: number;
}

export interface Vault {
  id: String;
  name: string;
  vaultAddress: string;
  totalSupply: number;
  totalHolding: number;
  pricePerShare: number;
  totalTokenEarnings: number;
  minimumLock: number;
  peripheryAddress: string;
}

export interface dailyUserTrack {
  id: String;
  userId: String;
  dailyVaultBalance: number;
  dailyEntryValue: number;
}

export interface monthlyUserTrack {
  id: String;
  userId: String;
  monthlyVaultBalance: number;
  monthlyEntryValue: number;
}

export interface yearlyUserTrack {
  id: String;
  userId: String;
  yearlyVaultBalance: number;
  yearlyEntryValue: number;
}

export interface dailyVolume {
  id: String;
  dailyVaultTotalSupply: number;
  dailyPricePerShare: number;
}

export interface monthlyVolume {
  id: String;
  monthlyVaultTotalSupply: number;
  monthlyPricePerShare: number;
}

export interface yearlyVolume {
  id: String;
  yearlyVaultTotalSupply: number;
  yearlyPricePerShare: number;
}

export interface dailyAPY {
  id: String;
  dailyTokenEarnings: number;
  averageTokenEarningsPerToken: number;
  dailyAPYamount: number;
}

export interface weeklyAPY {
  id: String;
  weeklyTokenEarnings: number;
  averageTokenEarningsPerToken: number;
  weeklyAPYamount: number;
}

export interface monthlyAPY {
  id: String;
  monthlyTokenEarnings: number;
  averageTokenEarningsPerToken: number;
  monthlyAPYamount: number;
}

export interface yearlyAPY {
  id: String;
  yearlyTokenEarnings: number;
  averageTokenEarningsPerToken: number;
  yearlyAPYamount: number;
}

On completing the schema, use the command blockflow typegen to generate types.

Writing the handler function

Now we move on to writing the handler functions, but first remove the unwanted event triggers from the studio.yaml file. Below is the studio.yaml with the 5 events that track the data required by our schema above:

name: Project Apollo
description: A top-secret research project to the moon
startBlock: latest
userId: XXXXXXXX-XXXX-XXXX-XXXXXXXX-XXXXXXXX
projectId: XXXXXXXX-XXXX-XXXX-XXXXXXXX-XXXXXXXX
network: Optimism
user: Jane-doe
schema:
  file: ./studio.schema.ts
execution: parallel
Resources:
  - Name: blockflow
    Abi: src/abis/blockflow.json
    Type: contract/event
    Address: "0x754e6134872d7a501ffeba6c186e187dbfdf6f4a"
    Triggers:
      - Event: Deposit(address indexed,address indexed,uint256,uint256)
        Handler: src/handlers/blockflow/Deposit.DepositHandler
      - Event: MinimumLockUpdated(uint256)
        Handler: src/handlers/blockflow/MinimumLockUpdated.MinimumLockUpdatedHandler
      - Event: PeripheryUpdated(address)
        Handler: src/handlers/blockflow/PeripheryUpdated.PeripheryUpdatedHandler
      - Event: Transfer(address indexed,address indexed,uint256)
        Handler: src/handlers/blockflow/Transfer.TransferHandler
      - Event: Withdraw(address indexed,address indexed,uint256,uint256)
        Handler: src/handlers/blockflow/Withdraw.WithdrawHandler

Use the command blockflow codegen to generate handler templates in the /src/handlers directory.

Five files in total are generated in the handlers directory. We will look at the Deposit.ts file to see how funds are deposited into the vault while shares are received by the user. The first part of the code consists of imports, where we import the necessary schemas generated earlier in the src/types/schema.ts file.

After the schema imports, we also import the helper functions that track the daily, monthly, and yearly user data. These helpers are saved in /src/utils.

import {
  Vault,
  IVault,
  User,
  IUser,
  dailyUserTrack,
  IdailyUserTrack,
  monthlyUserTrack,
  ImonthlyUserTrack,
  yearlyUserTrack,
  IyearlyUserTrack,
  dailyVolume,
  IdailyVolume,
  monthlyVolume,
  ImonthlyVolume,
  yearlyVolume,
  IyearlyVolume,
} from "../../types/schema"

import {
  dailyUserTrackHandler,
  monthlyUserTrackHandler,
  yearlyUserTrackHandler,
  dailyVolumeHandler,
  monthlyVolumeHandler,
  yearlyVolumeHandler,
} from "../../utils/tracker"

We now write the handler logic below the auto-generated boilerplate. First, we bind the database connections for all collections together. Grouping repetitive operations like this avoids confusion, since we are dealing with many databases at the same time.

const userDB: Instance = bind(User)
const vaultDB: Instance = bind(Vault)
const dailyUserTrackDB: Instance = bind(dailyUserTrack)
const monthlyUserTrackDB: Instance = bind(monthlyUserTrack)
const yearlyUserTrackDB: Instance = bind(yearlyUserTrack)
const dailyVolumeDB: Instance = bind(dailyVolume)
const monthlyVolumeDB: Instance = bind(monthlyVolume)
const yearlyVolumeDB: Instance = bind(yearlyVolume)

Next we create a unique ID for both the user and the vault, which will later be used for querying data. The user ID is the receiver address, and the vault ID is the chain ID concatenated with the vault name.

const userId = `${receiver.toString()}`
const vaultId = `${block.chain_id.toString()}-Vault`

We now create a user variable typed with the IUser interface. It uses the findOne method to query the userDB for the specific userId. If no document matching the userId is found, a new user document is created with the same ID and its fields are initialized.

The deposited assets are added to the vaultBalance and entryValue, and the document is saved for the user via the save method.

let user: IUser = await userDB.findOne({
    id: userId,
  })
  user ??= await userDB.create({
    id: userId,
    tokenBalance: 0,
    vaultBalance: 0,
    entryValue: 0,
    realizedEarnings: 0,
  })
  user.vaultBalance += assets
  user.entryValue += assets
  await userDB.save(user)

Similar to the userDB above, we query the vaultDB for a specific ID, and if no data is found, we create a new vault with the provided ID and populate its fields as shown below:

let vault: IVault = await vaultDB.findOne({
    id: vaultId,
  })
  vault ??= await vaultDB.create({
    id: vaultId,
    name: "Vault",
    vaultAddress: receiver.toString(),
    totalSupply: 0,
    totalHolding: 0,
    pricePerShare: 0,
    totalTokenEarnings: 0,
    minimumLock: 0,
    peripheryAddress: "",
  })
  vault.totalHolding += assets
  vault.totalSupply += shares
  vault.pricePerShare = assets / shares
  await vaultDB.save(vault)
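One thing to watch in the snippet above is the `pricePerShare = assets / shares` division: an event carrying zero shares would store `Infinity`. A defensive variant, as a sketch (not part of the generated handler code):

```typescript
// Defensive variant of the price computation in the handler above.
// This is a sketch, not part of the generated code: it guards against
// a zero-share deposit so pricePerShare never becomes Infinity or NaN.
function safePricePerShare(assets: number, shares: number): number {
  return shares === 0 ? 0 : assets / shares;
}

console.log(safePricePerShare(100, 50)); // 2
console.log(safePricePerShare(100, 0)); // 0
```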

Now we need to update the daily, monthly, and yearly user data, using the functions we imported from the tracker.ts file. An example function that tracks a user's daily data is shown below:

export async function dailyUserTrackHandler(
  userId: string,
  dailyUserTrack: Instance,
  blockTimestamp: string,
  assets: number,
  entryValue: number,
) {
  const BlockTimestamp = Number(blockTimestamp)
  const dateFromTimestamp = new Date(BlockTimestamp * 1000)
  const date = dateFromTimestamp.toISOString().split("T")[0]
  let id = userId.concat("_").concat(date)
  let dailytrack = await dailyUserTrack.findOne({
    id: id,
  })
  dailytrack ??= await dailyUserTrack.create({
    id: id,
    userId: userId,
    dailyVaultBalance: 0,
    dailyEntryValue: 0,
  })
  dailytrack.dailyVaultBalance += assets
  dailytrack.dailyEntryValue += entryValue
  await dailyUserTrack.save(dailytrack)
}

Notice how we create a unique ID using the date derived from the block timestamp. All the necessary values are passed in as function parameters, and the querying and updating of the DB (as we did above for the user and vault DBs) happens inside the function. This keeps the code readable and well organized. Finally, the function is imported into the Deposit.ts file (and wherever else it is needed) and called with the arguments shown below:

await dailyUserTrackHandler(
    userId,
    dailyUserTrackDB,
    block.block_timestamp,
    assets,
    assets,
  )

Note that await is used because the function is asynchronous.
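The monthly and yearly trackers follow the same pattern as the daily one; only the bucket key derived from the block timestamp changes. A sketch of the three key derivations (hypothetical helpers mirroring the date logic in dailyUserTrackHandler):

```typescript
// Hypothetical helpers sketching how the daily, monthly, and yearly
// tracker IDs can be derived from the same block timestamp. Only the
// slice of the ISO date string differs between the three buckets.
function dailyKey(userId: string, blockTimestamp: string): string {
  const iso = new Date(Number(blockTimestamp) * 1000).toISOString();
  return `${userId}_${iso.split("T")[0]}`; // <user>_YYYY-MM-DD
}

function monthlyKey(userId: string, blockTimestamp: string): string {
  const iso = new Date(Number(blockTimestamp) * 1000).toISOString();
  return `${userId}_${iso.slice(0, 7)}`; // <user>_YYYY-MM
}

function yearlyKey(userId: string, blockTimestamp: string): string {
  const iso = new Date(Number(blockTimestamp) * 1000).toISOString();
  return `${userId}_${iso.slice(0, 4)}`; // <user>_YYYY
}
```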

Testing

We use the command below to test whether the handler logic is correct and the data is stored in our specified MongoDB collection.

blockflow instance-test --startBlock <block number> --clean --range 10 --uri <connection string>

The <block number> can be any block in the indexed range, or latest if you have no preference. The --uri flag holds the MongoDB connection string; when testing locally, it should be set to mongodb://localhost:27017/blockflow_studio.
