SolverScan

In this tutorial, we'll show you how to index SolverScan data using the Blockflow CLI and utilities. Blockflow provides a seamless and efficient solution for building "Serverless Backends" through our Managed Databases, Instances, and Custom APIs.

SolverScan is a comprehensive analytics tool that surfaces swap and bridge protocol data, including a detailed breakdown of solver activity. By aggregating and presenting this data, SolverScan enhances transparency and efficiency in decentralized finance (DeFi) operations.


Project Overview

The purpose of this tutorial is to teach you how to index SolverScan data by creating subgraphs with the aid of Blockflow and its utilities. A short breakdown of the project:

  • Create a project directory and cd into it. Run the command blockflow init in the terminal and provide the relevant contract address to be tracked.

  • Define an appropriate schema in the studio.schema.ts file, depending on the data emitted by the events to be tracked.

  • Write the handler functions for tracking each event.

  • Test the project using the command blockflow test and verify the data stored in the database.


Setting up the project

  • Download the Blockflow CLI using the command:

    npm i -g @blockflow-labs/cli

    You can omit the -g flag if a global installation is not required.

  • Create a project directory and cd into it to make it the working directory.

  • Use the blockflow init command to initiate the project.

  • Below are the fields that need to be entered for the project. You can accept the default value for each field by pressing [ENTER]. The last field requires you to select the instance trigger type from 4 choices. You can also add more than one trigger for a project.


Setting up the schema

The schema below is well suited to tracking SolverScan data for any bridge or swap protocol.

import { String } from "@blockflow-labs/utils";

export interface TradeData {
  id: String;
  protocolAddress: string;
  owner: string;
  sellToken: string;
  buyToken: string;
  sellAmount: string;
  buyAmount: string;
  solver: string;
  liquiditySource: string;
  feeAmount: string;
  orderUid: string;
  timeStamp: string;
  transactionHash: string;
  gasUsed: string;
  gasCost: string;
}

export interface SolverData {
  id: String;
  solverAddress: string;
  totalTransactions: string;
  totalVolume: string;
  averageVolume: string;
  totalGasUsed: string;
}

export interface LiquidityData {
  id: String;
  target: string;
  value: string;
  selector: string;
}

export interface BridgeDataSrc {
  id: String;
  transactionHashSrc: string;
  from: string;
  fromValue: string;
  timestampSrc: string;
}

export interface BridgeDataDest {
  id: String;
  transactionHashDest: string;
  to: string;
  toValue: string;
  solver: string;
  solverGasCost: string;
  timestampDest: string;
}

export interface SolverAnalysis {
  id: String;
  totalTransactions: number;
  totalVolume: string;
  averageVolume: string;
  totalGasSpent: string;
}

export interface Volumeforeachpair {
  id: String;
  frequency: number;
  volume: string;
  token1address: string;
  token2address: string;
}

Below is a table describing each schema:

  • TradeData: contains token swap data, including solver identification.

  • SolverData: contains data about a particular solver and its participation in trades.

  • LiquidityData: contains information about the liquidity target, value, and selector.

  • BridgeDataSrc: contains the source-side data of a bridging transaction.

  • BridgeDataDest: contains the destination-side data of a bridging transaction.

  • SolverAnalysis: contains data about solvers participating in bridging transactions.

  • Volumeforeachpair: contains transaction data for any pair of tokens.
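As a quick illustration of how a Volumeforeachpair document might be keyed, the sketch below builds a canonical id from two token addresses so that the pair (A, B) and the pair (B, A) map to the same document. Note that pairId is a helper name of our own invention, not part of Blockflow:

```typescript
// Hypothetical helper: build a canonical Volumeforeachpair id from two token
// addresses, so (A, B) and (B, A) share one document.
function pairId(token1address: string, token2address: string): string {
  // Normalize casing, then sort so argument order does not matter.
  const [a, b] = [token1address.toLowerCase(), token2address.toLowerCase()].sort();
  return `${a}-${b}`;
}

// Both orderings produce the same id:
const idAB = pairId("0xAaA", "0xBbB");
const idBA = pairId("0xBbB", "0xAaA");
```

With a key like this, the handler can upsert a single volume document per token pair regardless of trade direction.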

On completing the schema, use the command blockflow typegen to generate types.


Writing the handler function

Below is the studio.yaml file, which tracks multiple contracts for different protocols: four contracts in total, one for CoW Protocol, two for deBridge, and one for the Across bridge.

name: SolverScan
description: A top-secret research project to the moon
startBlock: latest
userId: XXXXXXXX-XXXX-XXXX-XXXXXXXX-XXXXXXXX
projectId: XXXXXXXX-XXXX-XXXX-XXXXXXXX-XXXXXXXX
network: Ethereum
user: Jane-doe
schema:
  file: ./studio.schema.ts
execution: parallel
Resources:
  - Name: gpv2Settlement
    Abi: src/abis/gpv2Settlement.json
    Type: contract/event
    Address: "0x9008D19f58AAbD9eD0D60971565AA8510560ab41"
    Triggers:
      - Event: Interaction(address indexed,uint256,bytes4)
        Handler: src/handlers/gpv2Settlement/Interaction.InteractionHandler
      - Event: Settlement(address indexed)
        Handler: src/handlers/gpv2Settlement/Settlement.SettlementHandler
      - Event: Trade(address indexed,address,address,uint256,uint256,uint256,bytes)
        Handler: src/handlers/gpv2Settlement/Trade.TradeHandler
  - Name: dlnsource
    Abi: src/abis/dlnsource.json
    Type: contract/event
    Address: "0xeF4fB24aD0916217251F553c0596F8Edc630EB66"
    Triggers:
      - Event: >-
          CreatedOrder(tuple(uint64,bytes,uint256,bytes,uint256,uint256,bytes,uint256,bytes,bytes,bytes,bytes,bytes,bytes),bytes32,bytes,uint256,uint256,uint32,bytes)
        Handler: src/handlers/dlnsource/CreatedOrder.CreatedOrderHandler
  - Name: dlndestination
    Abi: src/abis/dlndestination.json
    Type: contract/event
    Address: "0xe7351fd770a37282b91d153ee690b63579d6dd7f"
    Triggers:
      - Event: >-
          FulfilledOrder(tuple(uint64,bytes,uint256,bytes,uint256,uint256,bytes,uint256,bytes,bytes,bytes,bytes,bytes,bytes),bytes32,address,address)
        Handler: src/handlers/dlndestination/FulfilledOrder.FulfilledOrderHandler
  - Name: across
    Abi: src/abis/across.json
    Type: contract/event
    Address: "0x5c7BCd6E7De5423a257D81B442095A1a6ced35C5"
    Triggers:
      - Event: >-
          FilledV3Relay(address,address,uint256,uint256,uint256,uint256
          indexed,uint32 indexed,uint32,uint32,address,address
          indexed,address,address,bytes,tuple(address,bytes,uint256,uint8))
        Handler: src/handlers/across/FilledV3Relay.FilledV3RelayHandler
      - Event: >-
          V3FundsDeposited(address,address,uint256,uint256,uint256
          indexed,uint32 indexed,uint32,uint32,uint32,address
          indexed,address,address,bytes)
        Handler: src/handlers/across/V3FundsDeposited.V3FundsDepositedHandler

Use the command blockflow codegen to generate handler templates in the /src/handlers directory. A total of 4 directories are generated, each with boilerplate code. We move into the dlndestination directory, to the FulfilledOrder handler, to write the handler code for indexing bridge data. We start by importing the schema and then binding connections to the DBs.

import { BridgeDataDest, SolverAnalysis } from "../../types/schema";

const bridgeDataDestDB: Instance = bind(BridgeDataDest);
const solveranalysisDB: Instance = bind(SolverAnalysis);

We create a variable and use await to look up the orderId document in bridgeDataDestDB. If nothing is found, the if block creates a new document in the DB and fills in its field values. Since we are on the destination side, we fill in all the destination-side values. The value variable is set to the transaction value minus solverGasCost (the transaction gas cost here).

const solverGasCost = gasPrice.multipliedBy(gasUsed).dividedBy(etherUnit);
const value = transactionValue.minus(solverGasCost).dividedBy(etherUnit);

let bridgedata = await bridgeDataDestDB.findOne({
  id: orderId,
});
if (!bridgedata) {
  await bridgeDataDestDB.create({
    id: orderId,
    transactionHashDest: transaction.transaction_hash,
    to: log.log_address,
    toValue: value.toString(), // schema fields are strings
    solver: sender,
    solverGasCost: solverGasCost.toString(),
    timestampDest: block.block_timestamp,
  });
}

We now have a look onto each field:

  • id: it is the orderId which is the unique id for a transaction that is used to identify a token transfer on the source and destination sides.

  • transactionHashDest: we use the transaction.transaction_hash field of the Blockflow library to get the transaction hash.

  • to: this is the transfer address.

  • toValue: this is the amount of the token transfer.

  • solver: the sender here represents the transaction solver.

  • solverGasCost: the gas fee charged by the solver for solving the transaction.

  • timestampDest: the time at which the transaction took place on the destination side.
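To make the wei-to-ether arithmetic behind solverGasCost concrete, here is a minimal sketch using native BigInt instead of the handler's BigNumber values. The 20 gwei gas price and 150,000 gas used are made-up figures for illustration only:

```typescript
// Illustrative gas-cost arithmetic: the cost in wei is gasPrice * gasUsed,
// and dividing by 1e18 (wei per ether) converts it to ether.
const gasPrice = 20_000_000_000n; // 20 gwei, in wei (assumed figure)
const gasUsed = 150_000n;         // gas consumed (assumed figure)
const costWei = gasPrice * gasUsed;              // cost in wei
const solverGasCostEth = Number(costWei) / 1e18; // cost in ether
```

The handler performs the same conversion with BigNumber arithmetic so that precision is preserved for large values.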

We also perform CRUD operations on the solveranalysis DB in exactly the same way. Here sender is taken as the id, because the sender is the solver in the destination contract. We increment totalTransactions by 1 every time this DB is updated; totalGasSpent, totalVolume, and averageVolume are updated at the same time. In the else branch, await is used to save the updated document.

let solveranalysis = await solveranalysisDB.findOne({
  id: sender,
});
if (!solveranalysis) {
  await solveranalysisDB.create({
    id: sender,
    totalTransactions: 1,
    totalVolume: value.toString(),
    averageVolume: value.toString(),
    totalGasSpent: solverGasCost.toString(),
  });
} else {
  solveranalysis.totalTransactions += 1;
  // The volume and gas fields are stored as strings, so convert before adding.
  solveranalysis.totalGasSpent = (
    Number(solveranalysis.totalGasSpent) + Number(solverGasCost)
  ).toString();
  solveranalysis.totalVolume = (
    Number(solveranalysis.totalVolume) + Number(value)
  ).toString();
  solveranalysis.averageVolume = (
    Number(solveranalysis.totalVolume) / solveranalysis.totalTransactions
  ).toString();

  await solveranalysisDB.save(solveranalysis);
}

Here the field values are:

  • id: sender; we save data for each solver in this DB, so each solver's address serves as the unique id.

  • totalTransactions: this is the total number of transactions successfully performed by each solver.

  • totalVolume: this is the net cumulative value of the token transfer values that took place.

  • averageVolume: this is the average transaction amount per transfer, obtained by dividing totalVolume by totalTransactions.

  • totalGasSpent: total of the solverGasCost that was spent on all the transactions.
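The running totals maintained by the else branch can be sketched as a small pure function. Note that SolverStats and applyFill are names of our own invention, and plain Number arithmetic is used for brevity; production code would want decimal-safe math:

```typescript
// Hypothetical helper mirroring the handler's update logic for a solver's
// aggregate stats. String fields match the SolverAnalysis schema.
interface SolverStats {
  totalTransactions: number;
  totalVolume: string;
  averageVolume: string;
  totalGasSpent: string;
}

function applyFill(stats: SolverStats, value: number, gasCost: number): SolverStats {
  const totalTransactions = stats.totalTransactions + 1;
  const totalVolume = Number(stats.totalVolume) + value;
  return {
    totalTransactions,
    totalVolume: totalVolume.toString(),
    // The average is recomputed from the updated totals on every fill.
    averageVolume: (totalVolume / totalTransactions).toString(),
    totalGasSpent: (Number(stats.totalGasSpent) + gasCost).toString(),
  };
}

const updated = applyFill(
  { totalTransactions: 1, totalVolume: "10", averageVolume: "10", totalGasSpent: "1" },
  20, // value of this fill
  1,  // gas cost of this fill
);
```

Keeping the update logic in one place like this makes it easy to reuse across the source and destination handlers.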

Similarly we can write the handler function for other handlers and update the DBs accordingly.


Testing

We use the command below to test whether the handler logic is correct and the data gets stored in our specified collection in Mongo.

blockflow instance-test --startBlock <block number> --clean --range 10 --uri <connection string> --rpc <rpc_url>

The <block number> can be any block of your choosing; set it to latest if you have no preference. The --uri flag holds the MongoDB connection string; when testing locally, it should be filled with mongodb://localhost:27017/blockflow_studio. The --rpc flag lets you provide the RPC URL.

Check out the complete repository to index SolverScan on GitHub.
