Last updated: 2020. 12. 31.
On-chain analysis for the Klaytn Mainnet is useful to service providers, application developers, and users who want to gain insights from financial data. Several Klaytn clients provide useful functions for maintaining a variety of Klaytn nodes and accessing financial assets, but they only need to access a few recent blocks or data with index keys (e.g., accounts, txids, and blocks). Since on-chain analysis must traverse the many transactions relevant to a target account, these clients are not suitable for analyzing complex transactions across the entire blockchain. Due to the storage-optimized data management of existing tools, on-chain analysis on the Klaytn network suffers from various pain points such as poor performance, limited accessibility, and inconvenient user interfaces.
S2WLAB proposes S2_EYEZ for Klaytn, an analytical platform that improves the quality of data access to the Klaytn Mainnet. To overcome the difficulties above, this project aims to deliver a high-performance on-chain analysis platform with fast graph query execution and to provide general-purpose APIs that accelerate the rapid development of on-chain analysis applications. To build the tool, we performed preliminary research on the Klaytn network and observed several characteristics relevant to fast, large-scale transaction analysis. First, existing Klaytn clients store blockchain data in a flat database model, which lacks locality of reference when analyzing the transactions relevant to a target account. We therefore redesign the data management architecture from a flat database model to a graph-based model. Second, the Klaytn Mainnet is an append-only data structure, like other blockchains, and it does not require the strong ACID (atomicity, consistency, isolation, and durability) properties required in a general database. By applying loose ACID properties, S2_EYEZ enables fast graph query execution. Third, only core cells can produce new blocks containing financial data, and block production is roughly 10 times faster than Ethereum's. This implies that new transactions and accounts are created frequently, like stream data, and S2_EYEZ must transform and import the data quickly. To handle this stream of data, we optimize data access mechanisms with low-level scheduling. On top of this underlying platform, users need flexible graph traversal APIs for creating arbitrary applications that access financial data in the Klaytn Mainnet. As an interim report, this technical report addresses the key characteristics of the Klaytn Mainnet, the requirements for on-chain analysis, and the design architecture of S2_EYEZ for Klaytn.
We have three deliverables in this project. As the first deliverable, we describe the internal architecture of S2_EYEZ for Klaytn with its design considerations. Based on the presented architecture, we have implemented an initial prototype of S2_EYEZ for Klaytn and will deploy it on bare metal servers. The table below describes the details of the timeline and key milestones.
Expected project completion time: 6 months
As the first deliverable, this section presents the internal architecture of S2_EYEZ for Klaytn with its design principles, specifically focusing on the underlying graph-based data management platform used to explore the Klaytn on-chain network. We first explain the key characteristics of Klaytn for importing new blocks with live transactions at high frequency and for traversing target accounts or transactions by following their adjacent nodes. Based on our preliminary analysis results, we design S2_EYEZ for Klaytn to handle large-scale blockchain data, translate the flat data structure into a graph database, and execute complex graph queries that explore the entire blockchain. Lastly, we define generalized, user-friendly APIs to be opened to Klaytn forum users.
Blockchain clients for Klaytn have been proposed to let users access and manage blockchain data easily. These clients satisfy the official standard to create (financial) transactions, execute smart contracts, validate new blocks, and synchronize states across Klaytn networks. However, they are not suitable for blockchain-wide on-chain transaction analysis, which requires graph functions with fast access to adjacent nodes.
To improve on-chain analysis capabilities, we design S2_EYEZ for Klaytn around three design principles: 1) provide sufficient locality of reference for an analysis target, 2) improve data access performance by allowing loose ACID properties in data management, and 3) optimize read/write operations to import blockchain data quickly.
Existing Klaytn clients mostly access a few recent blocks or representative data. When Klaytn nodes validate new blocks, they read only a few recent blocks, and Klaytn wallets do not need to traverse the blockchain data but read it by referencing specific hash keys (e.g., transaction IDs, accounts, or blocks). These clients usually store blockchain data in a flat database model that appends new block data at the end of the recently synced blockchain, then generate indexes to access representative data by hash keys. For such clients, the flat database model offers storage efficiency and simplicity of data management.
For on-chain transaction analysis, however, the flat database model causes poor performance because adjacent transactions and accounts are directly linked to an analysis target node and are frequently accessed together. Improving locality of reference in this setting is one of the most important factors in the performance of on-chain transaction analysis. Therefore, we redesign the blockchain data from a flat database model to a graph-based one. The proposed model translates flat blockchain data into nodes (e.g., accounts, transactions) with properties (e.g., timestamp, volume) and edges representing their relationships. S2_EYEZ then stores the transformed data in permanent storage and loads it into memory dynamically while running on-chain analysis functions.
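The flat-to-graph translation can be sketched as follows. This is a minimal illustration, not the S2_EYEZ implementation; the `Node`, `Edge`, and `import_tx` names and the transaction record fields are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str                                 # account address or tx hash
    label: str                                   # e.g., "account"
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: str
    dst: str
    props: dict = field(default_factory=dict)    # e.g., timestamp, volume

class Graph:
    def __init__(self):
        self.nodes = {}
        self.adj = {}                            # node_id -> list of edges

    def add_node(self, node):
        self.nodes[node.node_id] = node
        self.adj.setdefault(node.node_id, [])

    def add_edge(self, src, dst, **props):
        e = Edge(src, dst, props)
        self.adj.setdefault(src, []).append(e)
        self.adj.setdefault(dst, []).append(e)   # reverse link for backward traversal
        return e

def import_tx(graph, tx):
    """Translate one flat transaction record into nodes and an edge."""
    for addr in (tx["from"], tx["to"]):
        if addr not in graph.nodes:
            graph.add_node(Node(addr, "account"))
    graph.add_edge(tx["from"], tx["to"], txid=tx["txid"],
                   volume=tx["volume"], timestamp=tx["timestamp"])

g = Graph()
import_tx(g, {"txid": "0xabc", "from": "0xa1", "to": "0xb2",
              "volume": 5, "timestamp": 1609372800})
```

Because edges are mirrored into both endpoints' adjacency lists, all transactions touching an account are reachable in one lookup, which is exactly the locality that the flat model lacks.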
Klaytn networks are maintained on a blockchain, which is an append-only database: data recorded on the Klaytn Mainnet never changes. This also means that all snapshots are static rather than changing dynamically while new blocks are mined. Thus, the ACID (atomicity, consistency, isolation, durability) properties of a general database are unnecessary for maintaining Klaytn networks. New blockchain data is produced by core cells after passing the consensus algorithm, and analysis functions can read the immutable synced block data. Given this characteristic, we design our graph database model to satisfy ACID properties only loosely. To improve performance, our graph model does not create snapshots and incrementally appends new block data to the graph-based data model. Loose ACID properties yield performance gains compared to the strict guarantees provided by some other graph databases.
Klaytn is highly optimized to produce new blocks, which brings low commission prices and fast block generation and confirmation times. This is a strong advantage over other blockchains such as Ethereum. However, fast block creation poses a new technical challenge for maintaining a graph-based data model. When new blocks are created, S2_EYEZ must, in a short time, generate indexes (e.g., accounts and txids) for important nodes, search all nodes directly linked to the newly parsed nodes, and write the data to permanent storage. This causes tremendous read/write traffic as new blocks are mined over time. To keep up with Klaytn's fast block creation, we improve read/write performance by optimizing memory loading/unloading and data locking mechanisms. S2_EYEZ buffers the data to be recorded in permanent storage and writes it out in batches, considering the memory burden. With this procedure, S2_EYEZ always allows read operations to access arbitrary nodes and edges, except for data newly appended from freshly mined blocks. Our optimization techniques improve data import operations as well as graph query executions.
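The batched-write idea above can be sketched as a simple buffer that flushes once a threshold is reached. This is a hedged illustration only; the class name, the threshold, and the in-memory "storage" list are assumptions, and the real system would issue sequential disk writes under its own scheduling.

```python
import threading

class WriteBuffer:
    """Buffer updates from newly mined blocks and flush them in batches,
    so readers keep access to already-persisted data in the meantime."""
    def __init__(self, flush_threshold=1000):
        self.flush_threshold = flush_threshold
        self.pending = []                 # updates not yet on disk
        self.flushed = []                 # stand-in for permanent storage
        self.lock = threading.Lock()

    def append(self, record):
        with self.lock:
            self.pending.append(record)
            if len(self.pending) >= self.flush_threshold:
                self._flush_locked()

    def _flush_locked(self):
        # a real system would perform one large sequential write here
        self.flushed.extend(self.pending)
        self.pending.clear()

buf = WriteBuffer(flush_threshold=3)
for i in range(7):
    buf.append({"block": i})
```

After seven appends with a threshold of three, two batches (six records) have been flushed and one record remains pending, which keeps the number of write operations far below one per record.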
Overview of Graph Cache architecture
We present the Graph Cache system, a high-performance analysis framework for blockchain data. The system maintains the blockchain data as an in-memory graph and supports a set of programming interfaces over it. To improve performance, we design the system as an in-memory database. On-chain analysis functions traverse the entire blockchain, which motivates us to build a single graph database instance in a centralized manner rather than dividing it into distributed instances. A single database instance has benefits over a cluster of multiple instances because it is free from the parallelism issues that arise while performing blockchain-wide query operations. In addition, Graph Cache applies several domain-specific optimization techniques, as discussed in the following sections. The core of the system is written in C++ for performance-critical tasks.
The figure above illustrates the Graph Cache system. The system consists of three layers: the application layer, the framework layer, and the disk layer. Analytic applications run in the application layer. The block parser in the framework layer fetches raw blockchain data from the Klaytn Mainnet and transforms it into a graph incrementally. The job dispatcher manages a thread pool for handling query execution requests; when a new request arrives, the dispatcher assigns the job to an idle thread in the pool. The cache manager performs cache-in/out operations that load raw blockchain data from permanent disk into memory and unload it back, leveraging an LRU algorithm to optimize memory usage. Additional information on top of the blockchain data, such as user-defined tags, resides in memory and is stored in an external file database system for persistency.
The system builds a graph data structure for the blockchain, and the organized data for all blockchain blocks is stored in the disk layer in a structured format. The disk storage has three types of data files: the v-map table, dictionaries, and relationships. A record of the v-map table stores the mapping between a graph node and an account address. The dictionary file maintains an entry for every single graph node, and each entry points to the physical file location (i.e., file ID and offset) that holds the node's edges. The relationship files represent all edges of all nodes, including reverse edges added to traverse the graph efficiently.
To summarize, we design the Graph Cache system of S2_EYEZ for Klaytn as an in-memory analytical graph database that uses nodes to store blockchain entities and edges to store the relationships between them. We also apply further optimization heuristics to improve performance, including converting node IDs to actual array indexes. Several key implementation considerations are described in the following subsections.
Starting from the observation that blockchain data naturally forms a graph, we apply a graph model to blockchain analysis because the majority of financial transaction analysis applications work as a kind of graph traversal. In many cases of blockchain analysis, the analyst treats a flow of financial transactions as the meaningful information. Who sent money to whom? How many coins did she receive? From how many accounts did she receive them? These questions motivate blockchain analysis and are logically analogous to graph traversal. For instance, in a graph whose nodes are accounts and whose edges are transactions between two nodes, financial transactions between two accounts can be viewed as traversing the blockchain graph from one account node to another. This makes it possible to design the base data structure for blockchain ledgers as an address-and-transaction graph, with nodes representing account addresses and transaction hash values and edges representing transactions between two addresses. Each node and edge optionally has a label and properties (e.g., the amount of a transaction). By allocating additional in-memory space for properties, conditional searches, such as traversing only transfers of more than three KLAY, incur insignificant performance overhead compared to unconditional searches.
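A conditional traversal of the kind described, following only transfers above a KLAY threshold, can be sketched as a breadth-first search with an edge-property filter. The adjacency list, addresses, and amounts below are made up for illustration.

```python
from collections import deque

# toy adjacency list: account -> list of (neighbor, edge properties)
graph = {
    "0xa1": [("0xb2", {"amount": 5}), ("0xc3", {"amount": 1})],
    "0xb2": [("0xd4", {"amount": 10})],
    "0xc3": [],
    "0xd4": [],
}

def traverse(start, min_amount=0):
    """BFS that follows only edges whose transfer amount exceeds min_amount."""
    seen, order = {start}, [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, props in graph[node]:
            if props["amount"] > min_amount and neighbor not in seen:
                seen.add(neighbor)
                order.append(neighbor)
                queue.append(neighbor)
    return order
```

With `min_amount=3`, the 1-KLAY edge to `0xc3` is skipped, so the traversal visits only `0xa1`, `0xb2`, and `0xd4`; the filter adds one comparison per edge, matching the small overhead claimed above.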
Since blockchain data has an append-only property, new transactions are applied incrementally. However, another type of modification occurs to the blockchain graph while maintaining the graph database: when an existing node is involved in a new transaction, the preserved space (both in memory and on disk) for its existing entries must be reallocated. To handle this, we first load the file from disk and map it into memory. Then we allocate a new space from an available-space pool, copy the existing entries, and write the new transaction into the space. The old memory block is freed, and the relevant pointers are updated appropriately. Analysis libraries only observe the updated memory pointers, with minimal memory-copy overhead.
Dictionary file structure for the Graph Cache system
The figure above shows an example data structure of the dictionary, which represents the edges of each node. Every record consists of four unsigned numeric fields, node ID, the number of edges, file ID, and file offset, consuming 16 bytes in total. The node ID is obtained from a simple, fast hash function applied to an account address or tx hash value. With the dictionary, a graph traversal such as getting all transactions of an account address reduces to retrieving the dictionary record and reading all edges from the location pointed to by the file ID and offset.
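Four unsigned fields in 16 bytes implies four 32-bit integers per record, which can be sketched with Python's `struct` module. The little-endian layout and the helper names are assumptions for illustration; the on-disk byte order of the real system is not specified above.

```python
import struct

# one dictionary record: node id, number of edges, file id, file offset
# four unsigned 32-bit fields -> 16 bytes per record, as described above
RECORD = struct.Struct("<IIII")

def pack_record(node_id, num_edges, file_id, offset):
    return RECORD.pack(node_id, num_edges, file_id, offset)

def unpack_record(raw):
    return RECORD.unpack(raw)

raw = pack_record(node_id=42, num_edges=3, file_id=7, offset=4096)
```

Fixed-size records mean the dictionary can be scanned or indexed by simple offset arithmetic (`record_index * 16`), with no per-record parsing state.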
Recall that the blockchain is big data, so parallelism techniques such as MapReduce could bring high performance for analytics. However, is the blockchain really 'big' data? Amazon EC2 now provides servers with a few terabytes of memory, which is plenty to hold the blockchain data. That is why we design the system as a single in-memory architecture without distributed parallelism, avoiding the resources that would otherwise be spent communicating with other instances.
Also, based on our careful inspection of the blockchain data, most accounts have made only a few transactions in their entire lifetime, and those are negligible in the analysis of financial transactions. In other words, most blockchain analysts repeatedly inspect partial blockchain data. Thus, our system supports an LRU-based cache algorithm to optimize space. To get graph data, S2_EYEZ retrieves nodes and edges from the memory cache first and reads the data from the corresponding file on a cache miss.
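The LRU cache-then-disk lookup path can be sketched with an ordered dictionary. This is a minimal illustration; the class name, capacity, and the disk-read callback are assumptions, and the real Graph Cache manages raw file blocks rather than Python strings.

```python
from collections import OrderedDict

class GraphCache:
    """LRU cache for node adjacency data; falls back to disk on a miss."""
    def __init__(self, capacity, load_from_disk):
        self.capacity = capacity
        self.load_from_disk = load_from_disk   # callback used on cache misses
        self.entries = OrderedDict()

    def get(self, node_id):
        if node_id in self.entries:
            self.entries.move_to_end(node_id)  # mark as most recently used
            return self.entries[node_id]
        data = self.load_from_disk(node_id)    # cache miss: read from file
        self.entries[node_id] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        return data

disk_reads = []
def fake_disk(node_id):
    disk_reads.append(node_id)
    return f"edges-of-{node_id}"

cache = GraphCache(capacity=2, load_from_disk=fake_disk)
for node in ["a", "b", "a", "c", "b"]:
    cache.get(node)
```

In the access sequence above, the second read of "a" is a hit; "b" is then evicted when "c" arrives and must be re-read, so disk is touched four times instead of five, and the savings grow with the skew toward frequently analyzed accounts.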
Multithreading is deployed in the Graph Cache system to handle multiple requests simultaneously. The system has a thread pool maintaining several worker threads. When a graph job is requested, the main thread selects one of the available worker threads from the pool and assigns the job to it. The worker thread requests a shared lock on the referenced nodes before the traversal.
Since the blockchain data is append-only, only the blockchain parser thread requests an exclusive lock on graph nodes, and only when a new block arrives. We apply a readers-writer (RW) lock mechanism to traverse the blockchain graph data: the system grants exclusive locks to the blockchain parser thread and shared locks to worker threads for read requests. However, high contention from read requests could lead to starvation of write requests. To solve this writer starvation problem, a write request is given higher priority than any read request; the system prohibits new readers from acquiring a lock if a writer is waiting for it.
The number of locks affects graph traversal performance by determining the trade-off between lock overhead and contention. In general, a coarse lock granularity results in less locking and unlocking overhead because each lock protects a large segment of data. However, it increases lock contention and degrades performance when multiple requests run concurrently. In contrast, a fine lock granularity increases lock overhead but reduces contention. Choosing a suitable lock granularity is important, but there is no quick answer; it depends on domain-specific factors such as the total number of graph nodes and edges, reference patterns, and even hardware limitations. As a result, we empirically set the number of locks to the eighth power of two (256) for the blockchain data.
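The 256-lock scheme amounts to lock striping: each node hashes to one of a fixed number of locks. The sketch below uses plain mutexes in place of the readers-writer locks described above (Python's standard library has no built-in RW lock), and the class and function names are assumptions.

```python
import threading

NUM_LOCKS = 2 ** 8   # 256 lock stripes, the empirically chosen granularity

class StripedLocks:
    """Map each graph node to one of a fixed number of lock stripes."""
    def __init__(self, n=NUM_LOCKS):
        self.locks = [threading.Lock() for _ in range(n)]

    def lock_for(self, node_id):
        # the same node always hashes to the same stripe
        return self.locks[hash(node_id) % len(self.locks)]

stripes = StripedLocks()
with stripes.lock_for("0xa1"):
    pass  # touch node "0xa1" only while holding its stripe's lock
```

Two threads contend only when their nodes happen to share a stripe (probability about 1/256 for unrelated nodes), while the total lock bookkeeping stays constant regardless of graph size.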
One important property that must be guaranteed is recovery from unexpected failures. Because of the caching mechanism and in-memory block management, the system's storage files can be left in an inconsistent state after an abnormal shutdown. The recovery process guarantees that the files return to a consistent state in this case.
Change logging is a good candidate for handling unexpected failures and recovering the internal storage of the system, which enhances the reliability of the service. Change logging is a procedure that records all generated data changes; we call these logs change logs (CLs). The logging uses multiple change-log threads to provide increased parallelism for better performance.
A change log maintains the following to recover unexpected abnormal states.
- Change log sequence number (CLSN)
- Changed file/block IDs
- Modifications in this change in idempotent form (meaning that any number of repeated applications of these modifications will produce the same result).
When changes are committed, the logging process writes both the change logs in memory and the committed CLSN to the online log files. With this change logging, uncommitted changes may temporarily exist in the data files while committed changes may not yet exist in them. Modified data blocks are written to the data files when appropriate.
The crash recovery process starts by validating the state information. When the system starts, it checks the status of the data files and the system logs. If the last termination is considered abnormal, the data files may be inconsistent, so the system executes the crash recovery (CR) process. The CR process reads all recorded change logs from the log files and reapplies them to the data files.
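The CR replay loop, together with the idempotence requirement from the change-log fields listed above, can be sketched as follows. The log entry layout (CLSN, block ID, new block content) is a simplification chosen for the example, not the system's actual record format.

```python
# each change log entry records an idempotent modification:
# (CLSN, changed block id, the new full content of that block)
change_logs = [
    (1, "block-A", b"v1"),
    (2, "block-B", b"x1"),
    (3, "block-A", b"v2"),
]
committed_clsn = 3

def crash_recovery(data_files, logs, committed_clsn):
    """Reapply all committed change logs in CLSN order.
    Because each entry is idempotent, replaying it twice is harmless."""
    for clsn, block_id, content in sorted(logs):
        if clsn <= committed_clsn:
            data_files[block_id] = content
    return data_files

files = crash_recovery({}, change_logs, committed_clsn)
# a second replay (e.g., after a crash during recovery) yields the same state
files = crash_recovery(files, change_logs, committed_clsn)
```

Storing modifications in idempotent form is what makes the double replay safe: recovery itself can crash and restart without corrupting the files.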
An example of internal state information
Checkpointing is a special process that writes 'checkpoints', marks indicating the starting point of recovery. To reduce the time required for CR, the system periodically writes out all changes before a certain target CLSN. Checkpointing guarantees that all committed changes before that CLSN have been written to the files, which implies that change logs before the CLSN are unnecessary and will not be applied again.
The S2_EYEZ development environment (DE) exports high-level APIs with which Klaytn forum users can design and implement on-chain analysis applications in a few lines of code. The S2_EYEZ DE abstracts the underlying infrastructure that executes complex graph queries across the Klaytn network and provides utility APIs that let users apply their own prior knowledge to graph query results. To satisfy these two key requirements, we first focus on a 'tag', a label assigned to each account that gives semantic information about analysis results. With this semantic information, the S2_EYEZ DE provides methods to 'filter out' key accounts and transactions, eliminating unnecessary information while analyzing a target account. To this end, the S2_EYEZ DE supports graph query APIs with various conditions to control graph query executions.
A Klaytn account, represented as a cryptographic key, is not human-friendly enough for recognizing hidden insights in analysis results. Assigning aliases to accounts strengthens the understanding of query results: a user obtains a breakdown of an account's real-world ownership (e.g., exchange accounts) and semantic information linking it to specific incidents (e.g., hacked accounts and ransomware campaigns). To let users apply their own knowledge, the S2_EYEZ DE provides APIs to manage labels for each account, supporting functions such as insertion, deletion, and listing.
To improve tag access performance, tag information is always kept in memory and flushed to permanent storage periodically. All added tags are referenced while executing graph queries. In the first release, the S2_EYEZ DE limits each account to a single property (tag); supporting multiple properties per account, to be used in graph traversals, is future work.
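An in-memory tag table with insertion, deletion, listing, and periodic flushing could look like the sketch below. The class name, the JSON file format, and the file path are assumptions for illustration; they are not the S2_EYEZ DE's actual API.

```python
import json
import threading

class TagStore:
    """In-memory tag table, flushed to a file periodically for persistence.
    One tag per account, matching the first-release limitation above."""
    def __init__(self, path):
        self.path = path
        self.tags = {}
        self.lock = threading.Lock()

    def insert(self, account, tag):
        with self.lock:
            self.tags[account] = tag            # a single tag per account

    def delete(self, account):
        with self.lock:
            self.tags.pop(account, None)

    def list_all(self):
        with self.lock:
            return dict(self.tags)

    def flush(self):
        """Called periodically to persist the in-memory table."""
        with self.lock:
            with open(self.path, "w") as f:
                json.dump(self.tags, f)

store = TagStore("tags.json")
store.insert("0xa1", "Exchange")
store.insert("0xb2", "Hacked")
store.delete("0xb2")
```

Because reads and writes touch only the in-memory dictionary, query execution never waits on disk for tag lookups; durability is deferred to the periodic `flush`.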
Identifying financial flows in the Klaytn network is not straightforward. Unlike fiat currencies, anyone can freely generate accounts and create financial transactions with automated algorithms. Many accounts are often involved in the transactions examined while analyzing financial flows, which makes it difficult for users to understand how accounts relate to each other. Large-scale analysis results also make it hard for users to gain insights from the data. One efficient way to make results human-understandable is to provide a variety of filtering conditions while traversing the graph from an analysis target.
Supported stop conditions for graph query executions
| Parameter | Required | Type | Limit (default) | Description |
| --- | --- | --- | --- | --- |
| target_account | MUST | string | — | The analysis target of a query. Query execution starts at this node. |
| timestamp_from/to | OPTIONAL | timestamp | (default: ALL) | From/to transaction times. Restricts results to transactions whose block creation time falls within this range. |
| amount_min/max | OPTIONAL | unsigned integer | (default: ALL) | The amount of transferred value. Restricts results to transactions whose amount falls within this range. |
| tags | OPTIONAL | array (string) | — | An array of tag values. Restricts results to accounts tagged by the user. |
| hop_count_max | MUST | integer | 3 (default: 3) | Maximum hop count from the analysis target. Traversal stops when the hop count is reached. Only accounts in a flow are counted. |
| node_limit | MUST | integer | 10,000 (default: 1,000) | Maximum number of nodes in the analysis results. If the limit is exceeded, the graph traversal halts and returns results immediately. |
| request_timeout_sec | MUST | integer | 30 (default: 10) | Query execution timeout. If execution time exceeds the limit, the query fails. |
The S2_EYEZ DE exports a query execution interface in a REST architectural style. A query execution starts at a target account and follows adjacent nodes until the stop conditions are satisfied. The results of the query, a set of accounts and their corresponding transactions, are delivered as JSON-formatted data. Users can further analyze the results with graph libraries such as Python's networkx (https://networkx.org/) and visualize them via matplotlib (https://matplotlib.org/). Users request a graph query with an analysis target and a list of the stop conditions described in the table above. Stop conditions are based on Klaytn's own account/transaction properties and the user-defined tag property. When a user requests a query, S2_EYEZ first checks whether the analysis target is valid and the parameters satisfy the limits specified in the table. Then an idle S2_EYEZ instance dispatches the request and executes a graph query across the Klaytn on-chain graph until all stop conditions are satisfied.
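A stop-condition payload and the limit check described above can be sketched as follows. The field names follow the table, but the payload shape and the `validate` helper are illustrative assumptions, not the released S2_EYEZ API, and the target address is a placeholder.

```python
import json

# illustrative stop-condition payload for a graph query request
query = {
    "target_account": "0xa1b2c3",  # placeholder analysis target
    "hop_count_max": 3,            # limit: 3
    "node_limit": 1000,            # limit: 10,000
    "request_timeout_sec": 10,     # limit: 30
    "amount_min": 100,
    "tags": ["Exchange"],
}

# documented upper limits from the stop-condition table
LIMITS = {"hop_count_max": 3, "node_limit": 10_000, "request_timeout_sec": 30}

def validate(query):
    """Reject a query missing its target or exceeding a documented limit."""
    if not query.get("target_account"):
        return False
    return all(query.get(k, 0) <= v for k, v in LIMITS.items())

payload = json.dumps(query)   # the JSON body a REST client would send
ok = validate(query)
```

Server-side validation of this kind is what enforces the resource-protection limits mentioned below: a request asking for 20,000 nodes would be rejected before any traversal starts.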
We limit the range of parameter values to avoid starvation, in case some users exhaust all available resources in the S2_EYEZ instances. We plan to extend the coverage of each query as more servers become available.
| Task | Weekly wage | No. engineers | Weeks | Total |
| --- | --- | --- | --- | --- |
| Design | $5,000 per engineer | 1 | 3 | $15,000 |
| Development | $5,000 per engineer | 2 | 5 | $50,000 |
| Testing & QA | $5,000 per engineer | 1 | 0 | $0 |
Engineers are assigned to each task based on the schedule. 1) For 'Design', one engineer is assigned for 8 weeks, and 3 weeks have elapsed. 2) For 'Development', two engineers are assigned for 15 weeks, and 5 weeks have elapsed. 3) The 'Testing & QA' task has not yet started. Out of the total $220,000 product development & maintenance fee, $65,000 has been spent so far. The product development & maintenance cost will be spent as scheduled.
S2W LAB agreed to receive a total of 306,000 USD in 4 payment terms and received the first payment of 135,393 KLAY (equivalent to 60K USD) on Nov. 4th, 2020. S2W LAB sold 135,000 KLAY and acquired 68.3M KRW (txids: 0x74c3092bafe9ddd989c7c6d88a0e18c9dde3fce8c13ee37e9948bf681f279987, 0x77b34a4879950a9fb1b028aa8e44dd85b4d435d9c90eb01bb116e1f3a2816d82).
In the proposal, we planned to use Amazon EC2 services to develop and deploy S2_EYEZ for Klaytn. However, S2_EYEZ includes low-level implementations that require full privileges on the host machines and generates tremendous disk/network traffic while transforming and analyzing the Klaytn blockchain data. To support research on performance tuning and optimization of S2_EYEZ, and to provide reliable services and fast improvements from user feedback, we decided to prepare bare metal servers. Thus, the first funding amount was used to purchase the required servers: S2W LAB purchased 2 servers for 65.2M KRW, 32.6M KRW each (VAT excluded). The servers' specifications are described in the table below. To contribute to the Klaytn ecosystem, we decided to bear the additional management costs, including the space rental fee, electricity fee, and network access fee.