We come from the Web 2.0 world and bring years of combined experience in managing teams and executing technical work. With DexterLab, we are starting with technologies we know well.
We aim to become a fully decentralized and autonomous organization, and the architecture we are building will support this mission.
We did our homework before building DexterLab, analyzing projects such as The Graph, Dune, and Nansen. We saw that teams made much better progress when they had complete control of their infrastructure, so that is where we are starting. The long-term focus stays the same: full decentralization.
The information below covers a large part of the DexterLab app, explaining our current and future perspectives. More will be added here and on our blog in the coming months.

On-chain data analysis

The most challenging part of this analysis is indexing major blockchains and the transactions flowing through them. This is not something an average computer can do, and relying on an expensive all-in-one corporate solution from a provider such as AWS or GCE is against our principles. As early Kubernetes adopters, we have devised a plan to run indexers across multiple data centers, achieving both full distribution and data integrity.
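As a rough illustration of the multi-data-center setup described above, an indexer could be deployed as a Kubernetes StatefulSet with topology spread constraints keeping replicas in separate zones. Every name, image, and resource figure below is a hypothetical placeholder, not our actual configuration:

```yaml
# Hypothetical sketch of a per-chain indexer; names, image, and
# resource sizes are placeholders, not DexterLab's real manifests.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eth-indexer
spec:
  serviceName: eth-indexer
  replicas: 3                     # spread across data centers/zones
  selector:
    matchLabels:
      app: eth-indexer
  template:
    metadata:
      labels:
        app: eth-indexer
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: eth-indexer
      containers:
        - name: indexer
          image: example/eth-indexer:latest   # placeholder image
          resources:
            requests:
              cpu: "4"
              memory: 16Gi
  volumeClaimTemplates:
    - metadata:
        name: chain-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Ti
```

A StatefulSet is a natural fit here because each indexer keeps a large local copy of chain data and must come back up with the same volume after rescheduling.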
Later on, we will open up slots for other indexers. Ideally, these will be individuals (sysadmins, DevOps engineers) who can run infrastructure and would like to earn a stable income for their services. If that scenario does not pan out, companies willing to accept crypto payments may host indexers instead.
We want to use as many open-source projects as possible to stay competitive in the space. Here is a list of frameworks, tools, and databases we are already incorporating into DexterLab: Kubernetes, Kafka (Strimzi), NiFi, Ethereum-etl, Hadoop, ClickHouse, VictoriaMetrics, Neo4j, and many more. Since we will rely heavily on open source, we will give back by contributing to projects such as Ethereum-etl (we are building a Solana version right now). We will also open-source the Helm charts and NiFi processors we write for building datasets.
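To make the dataset-building idea concrete, here is a minimal Python sketch of the kind of aggregation a downstream pipeline step might run over an exported transactions file. The CSV columns below are a simplified, hypothetical subset of what an Ethereum-etl-style export contains, chosen only for illustration:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample mimicking a simplified transactions export;
# real ethereum-etl exports contain many more columns.
SAMPLE = """block_number,from_address,to_address,value
100,0xaaa,0xbbb,5
100,0xaaa,0xccc,3
101,0xbbb,0xaaa,7
"""

def volume_per_sender(csv_text):
    """Sum the transferred value per sender address."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["from_address"]] += int(row["value"])
    return dict(totals)

print(volume_per_sender(SAMPLE))  # {'0xaaa': 8, '0xbbb': 7}
```

In production, a step like this would live inside a NiFi processor or a ClickHouse query rather than a standalone script; the sketch only shows the shape of the transformation.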