Main Net Database Snapshots

TRON officially provides database snapshots at regular intervals for quick node deployment. A data snapshot is a compressed backup of a TRON network node's database at a certain point in time. Developers can download and use a data snapshot to speed up the node synchronization process.

Download the Data Snapshot

Fullnode data snapshot

The following table shows the download addresses of the Fullnode data snapshots. Please select a suitable data snapshot according to your location, the node's database type, and whether you need to query historical internal transactions.

Fullnode Data Source | Download site | Description

  • Official data source (Asia: Singapore), excludes internal transactions (about 1706 GB on May 16, 2024)
  • Official data source (America), excludes internal transactions (about 1702 GB on May 14, 2024)
  • Official data source (Asia: Singapore), excludes internal transactions (about 1686 GB on May 16, 2024)
  • Official data source (Singapore), includes internal transactions (about 1884 GB on May 16, 2024)
  • Official data source with accountbalance, excludes internal transactions but includes historical TRX balances for addresses (about 2143 GB on May 16, 2024)

Note: LevelDB and RocksDB data cannot be mixed. The database engine can be specified in the full node's config file by setting db.engine to LEVELDB or ROCKSDB.
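The engine setting lives in the node's configuration file; a minimal fragment, assuming the standard java-tron config layout:

```
storage {
  # Must match the engine the downloaded snapshot was produced with;
  # LevelDB and RocksDB data cannot be mixed.
  db.engine = "LEVELDB"   # or "ROCKSDB"
}
```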

Lite Fullnode data snapshot

The TRON public chain has supported the Lite Fullnode node type since the GreatVoyage-v4.1.0 release. A Lite Fullnode needs only the complete state data plus a small amount of essential block data to run, so it is much more lightweight (smaller database, faster startup) than a normal Fullnode. TRON officially offers database snapshots for the Lite Fullnode as well.

Lite Fullnode Data Source | Download site | Description

  • Official data source (North America: Virginia)

Tip: You can split this data from the full data yourself with the help of the Lite Fullnode Tool.

Use the data snapshot

The steps for using data snapshots are as follows:

  1. Download the corresponding compressed backup database according to your needs.
  2. Decompress the backup database archive into the output-directory directory, or into another directory of your choice.
  3. Start the node. The node reads the output-directory directory by default. If you need to specify another directory, add the -d directory parameter when starting the node.
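The steps above can be sketched as a shell session. The download URL and the node's jar invocation are placeholders (take the real link from the download tables above), so those lines are commented out, and a dummy directory stands in for the extracted snapshot to keep the sketch self-contained:

```shell
#!/bin/sh
set -e

# Step 1: download the snapshot archive (placeholder URL -- substitute
# the real link from the download table above).
# wget -c "<snapshot-download-url>" -O backup.tgz

# Step 2: decompress it; snapshots unpack into an output-directory folder.
# tar -xzf backup.tgz
mkdir -p output-directory/database   # stands in for the extracted snapshot

# Step 3: start the node; it reads ./output-directory by default.
# java -jar FullNode.jar -c main_net_config.conf
# To use a different data directory, pass -d:
# java -jar FullNode.jar -c main_net_config.conf -d /data/output-directory

ls output-directory                  # the node's database lives here
```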


Lite Fullnode Tool

Lite Fullnode Tool is used to split the database of a Fullnode into a Snapshot dataset and a History dataset.

  • Snapshot dataset: the minimum dataset for quick startup of the Lite Fullnode.
  • History dataset: the archive dataset that is used for historical data queries.

Before using this tool, you must first stop the currently running Fullnode process. The tool splits the complete data into the two datasets at the current latest block height (latest_block_number). A Lite Fullnode launched from a Snapshot Dataset does not support querying historical data prior to this block height. The tool also provides the ability to merge a History Dataset back into a Snapshot Dataset.

For more design details, please refer to: TIP-128.

Obtain Lite Fullnode Tool

LiteFullNodeTool.jar can be obtained by compiling the java-tron source code. The steps are as follows:

  1. Obtain java-tron source code

    $ git clone
    $ git checkout -t origin/master
  2. Compile

    $ cd java-tron
    $ ./gradlew clean build -x test

    After compiling, LiteFullNodeTool.jar will be generated in the java-tron/build/libs/ directory.

Use Lite Fullnode tool


This tool can split out the Snapshot Dataset and the History Dataset independently, and also provides a merge function.

  • --operation | -o: [ split | merge ] specifies the operation as either to split or to merge
  • --type | -t: [ snapshot | history ] is used only with split to specify the type of the dataset to be split; snapshot refers to Snapshot Dataset and history refers to History Dataset.
  • --fn-data-path: Fullnode database directory
  • --dataset-path: dataset directory. When the operation is split, dataset-path is the path where the Snapshot Dataset or History Dataset will be stored;
    when the operation is merge, dataset-path should be the path of the History Dataset.
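Putting the options together, the general invocation pattern looks like this (angle-bracketed values are placeholders):

```
# split out a dataset (-t chooses snapshot or history)
java -jar LiteFullNodeTool.jar -o split -t <snapshot|history> --fn-data-path <fullnode-db-dir> --dataset-path <output-dir>

# merge a History Dataset back into a Lite Fullnode database
java -jar LiteFullNodeTool.jar -o merge --fn-data-path <lite-fullnode-db-dir> --dataset-path <history-dataset-dir>
```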


Start a new Fullnode using the default config, then an output-directory will be produced in the current directory.
output-directory contains a sub-directory named database which is the database to be split.

  • Split and get a Snapshot Dataset

    First, stop the Fullnode and execute:

    // for simplicity, put the snapshot into the `/tmp` directory
    $ java -jar LiteFullNodeTool.jar -o split -t snapshot --fn-data-path output-directory/database --dataset-path /tmp

    then a snapshot directory will be generated in /tmp. Pack this directory and copy it to the machine where the Lite Fullnode will run.
    Do not forget to rename the directory from snapshot to database.
    (The node's database directory name defaults to database; if you have configured a different name, rename snapshot to that value.)

  • Split and get a History Dataset

    If historical data queries are needed, a History Dataset should be generated and merged into the Lite Fullnode.

    // for simplicity, put the history into the `/tmp` directory
    $ java -jar LiteFullNodeTool.jar -o split -t history --fn-data-path output-directory/database --dataset-path /tmp

    A history directory will be generated in /tmp. Pack this directory and copy it to the Lite Fullnode machine.
    The History Dataset always takes up a lot of storage; make sure the disk has enough space to hold it.

  • Merge History Dataset and Snapshot Dataset

    Both the History Dataset and the Snapshot Dataset contain a file that records the block height at which they were split.
    Make sure that the split_block_num in the History Dataset is not less than the corresponding value in the Snapshot Dataset.

    After obtaining the History Dataset, the Lite Fullnode can merge it and become a real Fullnode.

    // for simplicity, assume the `History dataset` is located in /tmp
    $ java -jar LiteFullNodeTool.jar -o merge --fn-data-path output-directory/database --dataset-path /tmp/history
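The pre-merge height check described above can be sketched in shell. The file and key names (info.properties, split_block_num) are assumptions based on the description in this section, and the two datasets are faked with dummy files so the sketch is self-contained:

```shell
#!/bin/sh
set -e

# Dummy datasets, created only to make this sketch self-contained;
# in practice they come from the split operations shown above.
mkdir -p /tmp/history-ds /tmp/snapshot-ds
echo "split_block_num=61000000" > /tmp/history-ds/info.properties   # assumed file name
echo "split_block_num=60000000" > /tmp/snapshot-ds/info.properties

hist=$(cut -d= -f2 /tmp/history-ds/info.properties)
snap=$(cut -d= -f2 /tmp/snapshot-ds/info.properties)

# The History Dataset must not have been split at a lower height than the
# Snapshot Dataset, or the merged database would have a gap in its data.
if [ "$hist" -ge "$snap" ]; then
    echo "ok to merge"
else
    echo "history dataset split too early" >&2
    exit 1
fi
```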