Purpose:
This document captures the important details relating to the state read performance of the get_fio_domains endpoint in the FIO Protocol.
Details:
The get_fio_domains endpoint looks up all domains owned by the target account.
get_table_rows is called with the following parameters:
get_table_rows_params domain_row_params = get_table_rows_params{
    .json           = true,
    .code           = fio_system_code,
    .scope          = fio_system_scope,
    .table          = fio_domains_table,
    .lower_bound    = boost::lexical_cast<string>(::eosio::string_to_name(account_name.c_str())),
    .upper_bound    = boost::lexical_cast<string>(::eosio::string_to_name(account_name.c_str())),
    .key_type       = "i64",
    .index_position = "2"};

get_table_rows_result domain_result = get_table_rows_by_seckey<index64_index, uint64_t>(
    domain_row_params, abi, [](uint64_t v) -> uint64_t { return v; });
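For context, the same secondary-index lookup can be reproduced from a client by posting to the node's /v1/chain/get_table_rows API. This is a minimal sketch, not the production client code; the node URL and the concrete code/scope/table strings are assumptions standing in for the fio_system_code, fio_system_scope, and fio_domains_table constants used above.

    // Minimal sketch of the equivalent client-side query against /v1/chain/get_table_rows.
    // The node URL and the 'fio.address' / 'domains' values are assumptions.
    const nodeUrl = 'http://localhost:8889'; // assumed local dev node

    async function getDomainsByOwner(accountName: string): Promise<any> {
      const response = await fetch(`${nodeUrl}/v1/chain/get_table_rows`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          json: true,
          code: 'fio.address',       // assumed value of fio_system_code
          scope: 'fio.address',      // assumed value of fio_system_scope
          table: 'domains',          // assumed value of fio_domains_table
          lower_bound: accountName,  // the node converts the name to its uint64 value
          upper_bound: accountName,
          key_type: 'i64',
          index_position: '2',       // secondary index keyed on the owner account
          limit: 1000
        })
      });
      return response.json();
    }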
It seems that, since many domains can be owned by the same account, we will want to run tests to find the limits on reading domains by account.
We will create 10-20k domains for a single account on a local developer box and then try to read through them.
I made a test branch to load a local chain with 21k domains for a single account.
Branch name: feature/BD-4108-fiotest-develop-10122022
These tests and the setup are in the file register-domains-one-account-max-load.js.
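For illustration only, the core of such a load script might look roughly like the sketch below. It assumes the @fioprotocol/fiosdk package and its genericAction('registerFioDomain', ...) call; the keys, node URL, domain naming scheme, and fee cap are placeholders, not the actual values used in register-domains-one-account-max-load.js.

    // Hypothetical sketch of a loop registering many domains to one account.
    import { FIOSDK } from '@fioprotocol/fiosdk';

    const fetchJson = (uri: string, opts = {}) => fetch(uri, opts);
    const privateKey = '<funded test account private key>'; // placeholder
    const publicKey = '<funded test account public key>';   // placeholder
    const sdk = new FIOSDK(privateKey, publicKey, 'http://localhost:8889/v1/', fetchJson);

    async function loadDomains(count: number): Promise<void> {
      for (let i = 0; i < count; i++) {
        await sdk.genericAction('registerFioDomain', {
          fioDomain: `loadtest${i}`,  // hypothetical naming scheme for the test domains
          maxFee: 800000000000,       // assumed fee cap in SUFs
          technologyProviderId: ''
        });
      }
    }

    // e.g. loadDomains(21000) to mirror the 21k-domain local chain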
It looks to me as if the indexing design of the domains table is inherently limited to some number N, where N is the number of rows that can be processed by the node being queried.
If there are more rows than this, the additional rows can only be accessed by calling get table with --limit and -L (lower bound) and paging through the results looking for the target account.
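As an illustration of that paging fallback, the sketch below walks the full domains table page by page and filters for the target owner. It assumes the same node URL and code/scope/table values as the earlier sketch, that each row exposes an id primary key and an account (owner) field, and that the response carries the usual rows/more fields; it is not the endpoint's actual implementation.

    // Hedged sketch of paging the whole domains table and filtering by owner.
    const nodeUrl = 'http://localhost:8889'; // assumed local dev node

    async function pageDomainsByOwner(ownerAccount: string, pageSize = 1000): Promise<any[]> {
      const matches: any[] = [];
      let lowerBound = '0';
      for (;;) {
        const response = await fetch(`${nodeUrl}/v1/chain/get_table_rows`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            json: true,
            code: 'fio.address',      // assumed value of fio_system_code
            scope: 'fio.address',     // assumed value of fio_system_scope
            table: 'domains',         // assumed value of fio_domains_table
            lower_bound: lowerBound,  // analogous to cleos get table -L
            limit: pageSize           // analogous to cleos get table --limit
          })
        });
        const { rows, more } = await response.json();
        // the owner may be serialized as a name string or its uint64 value,
        // so the comparison below is only illustrative
        matches.push(...rows.filter((r: any) => String(r.account) === ownerAccount));
        if (!more || rows.length === 0) break;
        // advance past the last primary key seen so the next page starts after it
        lowerBound = (BigInt(rows[rows.length - 1].id) + 1n).toString();
      }
      return matches;
    }

Note that walking the table this way scales with the total number of domains on chain rather than with the number owned by the target account, which is why it is only a fallback.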