SolverScan
In this tutorial, we'll show you how to index SolverScan data using the Blockflow CLI and utilities. Blockflow provides a seamless and efficient solution for building "Serverless Backends" through our Managed Databases, Instances, and Custom APIs.
SolverScan is a comprehensive analytics tool that presents swap and bridge protocol data along with detailed analysis of solver information. By aggregating and presenting this data, SolverScan enhances transparency and efficiency in decentralized finance (DeFi) operations.
Project Overview
The purpose of this tutorial is to teach you how to index SolverScan data by creating subgraphs with the aid of Blockflow and its utilities. A short breakdown of the project:

1. Create a project directory and `cd` into it. Run `blockflow init` from the terminal and provide the relevant contract address to be tracked.
2. Add an appropriate schema to the `studio.schema.ts` file, depending on the data emitted by the events to be tracked.
3. Write the handler functions for tracking each event.
4. Test the project using the command `blockflow test` and verify the data stored in the database.
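These steps map onto a handful of CLI commands that are covered in detail in the sections below. As a condensed sketch (the project directory name is just an example):

```bash
mkdir solverscan && cd solverscan   # any project directory name works
blockflow init        # answer the prompts, including the contract address to track
# define your tables in studio.schema.ts, then:
blockflow typegen     # generate types from the schema
blockflow codegen     # generate handler templates under src/handlers
# implement the handler logic, then:
blockflow test        # run over a block range and verify the stored data
```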
Setting up the project
Download the Blockflow CLI using the command below. You can drop the `-g` flag if a global installation is not required.
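Assuming the CLI is distributed through npm under the `@blockflow-labs` scope (check the package name against the official Blockflow documentation), the install looks like:

```bash
npm i -g @blockflow-labs/cli   # drop -g for a project-local install
```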
Create a project directory and use the `cd` command to make it the working directory.

Use the `blockflow init` command to initiate the project. Below are the fields that need to be entered for the project; you can keep the default value for each field by pressing [ENTER]. The last field asks you to select the instance trigger type from four choices, and you can add more than one trigger to a project.
Setting up the schema
The schema mentioned below is well suited to tracking SolverScan data for any bridge or swap protocol.
Below is a table describing each schema:

| Schema | Description |
| --- | --- |
| TradeData | Data on token swaps, including solver identification. |
| SolverData | Data about a particular solver and their participation in trades. |
| LiquidtyData | Information on the liquidity target, value, and selector. |
| BridgeDataSrc | Source-side data of a bridging transaction. |
| BridgeDataDest | Destination-side data of a bridging transaction. |
| SolverAnanlysis | Data on the solvers participating in bridging transactions. |
| Volumeforeachpair | Data on transactions between any pair of tokens. |
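To make two of these tables concrete, here is a rough sketch of their fields (described in detail later in this tutorial) written as plain TypeScript interfaces for illustration. This is not the exact `studio.schema.ts` syntax, and the types are assumptions; numeric values are kept as plain `number` for simplicity.

```typescript
// Illustrative field shapes only; the real definitions live in studio.schema.ts
// and follow Blockflow's schema conventions. Types are assumptions.

// Destination-side data of a bridging transaction.
interface BridgeDataDest {
  id: string;                  // orderId shared by the source and destination sides
  transactionHashDest: string; // destination transaction hash
  to: string;                  // transfer address
  toValue: number;             // amount of tokens transferred
  solver: string;              // sender of the destination transaction, i.e. the solver
  solverGasCost: number;       // gas fee charged by the solver
  timestampDest: number;       // destination-side timestamp
}

// Per-solver aggregates for bridging transactions.
interface SolverAnanlysis {
  id: string;                // the solver (sender) address
  totalTransactions: number; // transactions performed by this solver
  totalVolume: number;       // cumulative transfer value
  averageVolume: number;     // totalVolume / totalTransactions
  totalGasSpent: number;     // cumulative solverGasCost
}
```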
On completing the schema, use the command `blockflow typegen` to generate types.
Writing the handler function
Below is the studio.yaml file with multiple contracts tracked for different protocols. It tracks four different contracts: one for CoW Protocol, two for DeBridge, and one for the Across bridge.
Use the command `blockflow codegen` to generate handler templates in the `/src/handlers` directory. A total of four directories are generated, each containing boilerplate code. We move into the `dlndestination` directory, to the `FulfilledOrder` handler, to write the handler code for indexing bridge data. We start by importing the schema and then binding the connections to the DBs.
We create a variable and await a lookup for the `orderId` document in the bridgedataDestDB. If nothing is found, the `if` branch creates a new document in the same DB and fills in its field values. Since we are on the destination side, we fill in all of the destination values. We set the value variable to the transaction value minus the `solverGasCost` (the transaction gas price here).
We now take a look at each field:

- `id`: the `orderId`, the unique id for a transaction, used to identify a token transfer on both the source and destination sides.
- `transactionHashDest`: we use the `transaction.transaction_hash` field of the Blockflow library to get the hash of the transaction.
- `to`: the transfer address.
- `toValue`: the amount of tokens transferred.
- `solver`: the `sender` here represents the transaction solver.
- `solverGasCost`: the gas fee charged by the solver to solve the transaction.
- `timestampDest`: the time at which the transaction took place on the destination side.
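A minimal sketch of this find-or-create logic is given below. It assumes a handler signature and `bind` / `findOne` / `create` / `save` helpers along the lines of the generated boilerplate; the import path and the event and transaction field names are also assumptions, so adjust them to match your generated template.

```typescript
// Sketch only: names and types are assumptions, not the exact generated code.
import { BridgeDataDest } from "../../types/schema"; // import path is an assumption

export const FulfilledOrderHandler = async (context: any, bind: any) => {
  const { event, transaction, block } = context;
  const bridgedataDestDB = bind(BridgeDataDest);

  // solverGasCost is taken as the transaction gas price (field name is an assumption).
  const solverGasCost = Number(transaction.transaction_gas_price);
  // value = transaction value minus the solver gas cost (field name is an assumption).
  const toValue = Number(transaction.transaction_value) - solverGasCost;

  // Look up the document for this orderId on the destination side.
  let order = await bridgedataDestDB.findOne({ id: event.orderId.toLowerCase() });

  if (!order) {
    // Nothing found: create the document and fill the destination-side fields.
    await bridgedataDestDB.create({
      id: event.orderId.toLowerCase(),
      transactionHashDest: transaction.transaction_hash,
      to: event.receiver,   // transfer address (event field name is an assumption)
      toValue,
      solver: event.sender, // the sender here is the solver (field name is an assumption)
      solverGasCost,
      timestampDest: Number(block.block_timestamp), // field name is an assumption
    });
  } else {
    // Document already exists (created from the source side): fill the destination fields and save.
    order.transactionHashDest = transaction.transaction_hash;
    order.toValue = toValue;
    order.solver = event.sender;
    order.solverGasCost = solverGasCost;
    order.timestampDest = Number(block.block_timestamp);
    await bridgedataDestDB.save(order);
  }
};
```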
We also perform CRUD operations on the solverananlysis DB in exactly the same way. Here `sender` is taken as the id, because the sender is the solver in the destination contract. We increment `totalTransactions` by 1 every time this DB is updated, and `totalGasSpent`, `totalVolume`, and `averageVolume` are updated at the same time. In the else block, the document is then saved with `await`.
Here the field values are:

- `id`: the `sender`; we save the data for each solver in this DB, so the id treats each solver as unique.
- `totalTransactions`: the total number of transactions successfully performed by each solver.
- `totalVolume`: the net cumulative value of all token transfers.
- `averageVolume`: the average transaction amount per transfer, obtained by dividing `totalVolume` by `totalTransactions`.
- `totalGasSpent`: the total `solverGasCost` spent across all transactions.
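Continuing the same handler sketch from above, the per-solver aggregation could look like this (the helper names and the `SolverAnanlysis` schema import follow the same assumptions, and numeric handling is simplified for illustration):

```typescript
// Sketch only, inside the same FulfilledOrder handler as above;
// SolverAnanlysis is imported from the schema alongside BridgeDataDest.
const solverDB = bind(SolverAnanlysis);
let solver = await solverDB.findOne({ id: event.sender.toLowerCase() });

if (!solver) {
  // First transaction recorded for this solver.
  await solverDB.create({
    id: event.sender.toLowerCase(),
    totalTransactions: 1,
    totalVolume: toValue,
    averageVolume: toValue,
    totalGasSpent: solverGasCost,
  });
} else {
  // Update the aggregates and save the existing document.
  solver.totalTransactions += 1;
  solver.totalVolume += toValue;
  solver.averageVolume = solver.totalVolume / solver.totalTransactions;
  solver.totalGasSpent += solverGasCost;
  await solverDB.save(solver);
}
```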
Similarly, we can write the remaining handlers and update the DBs accordingly.
Testing
We use the command below to test whether the handler logic is correct and the data gets stored in the specified collection in MongoDB.
The <block number> can cover any range; set it to `latest` if you have no preference. The `--uri` flag holds the MongoDB connection URL; for local testing it should be set to `mongodb://localhost:27017/blockflow_studio`. The `--rpc` flag lets you provide the RPC URL.
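After a local test run, one quick way to inspect what was written is to query the same database with `mongosh` (the collection names depend on your schema):

```bash
# List the collections created by the test run, then inspect the relevant one.
mongosh "mongodb://localhost:27017/blockflow_studio" --eval "db.getCollectionNames()"
```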
Check out the complete repository for indexing SolverScan on GitHub.