I would like to continue sharing some thoughts on the subject of “decentralization” and tooling for nodes, since it gives context for the new developments. Decentralization, we can all agree, is subjective and depends on the use case. Blockchains are built either for speed or for consensus, and not all shared ledgers need perfect decentralization (not that this is possible right now anyway). For some, a legal document suffices; it doesn’t even have to be smart, just words on paper that folks can haggle over.
Individuals should run nodes for public networks.
Public currencies, however, should have a wide network of nodes across all sorts of infrastructure in order to ensure decentralization at layer 1. Folks use the “layer” analogy differently, but for me there is: layer 0 (ISPs and physical connectivity), layer 1 (nodes/racks), layer 2 (protocol) and layer 3 (application). Ideally, everyone holding Bitcoin should run their own node; it is your ticket into crypto citizenship, which is why we offer such nodes at very low cost. Continuously upgrading and maintaining nodes is tedious and complex, and we make it easy. To date we have launched around 100 BTC nodes. If you are in Latin America, India or Africa, you should really consider running one (there is lots of room for growth there).
Foundations/Protocols should use cloud orchestration tools.
For more hybrid networks (i.e. anything that needs high performance), you need to plan network efficiencies once your ecosystem or network is live and nodes are connecting. That means using multi-cloud deployments and load balancing to ensure that at least your bare metal is decentralized (unlike, say, EOS, where at one point all but one node reportedly resided on AWS). That is why we provide a dashboard and orchestration tool, so you can manage the degree of decentralization, at least in that category (layer 1). On mining/hash rate, the consensus model, etc., there is another layer of complexity, but you have to start somewhere. Easy tools that let outsiders connect and validate transactions are a core feature there as well, since every node/network configuration is so different. Just because you know how to deploy an Ethereum node (good luck syncing it!), doesn’t mean you know how to deploy Fabric, Stellar, etc.
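To make the layer 1 point concrete, here is a minimal sketch of how an orchestration dashboard might quantify cloud concentration across a node fleet. The fleet data and provider labels are hypothetical; this is an illustration of the check, not our actual tooling.

```python
from collections import Counter

def provider_concentration(nodes):
    """Return each provider's share of the node fleet, highest share first.

    `nodes` is a list of (node_id, provider) pairs; the provider labels
    are whatever your inventory uses (hypothetical names below).
    """
    counts = Counter(provider for _, provider in nodes)
    total = len(nodes)
    return sorted(
        ((p, c / total) for p, c in counts.items()),
        key=lambda item: item[1],
        reverse=True,
    )

# Hypothetical fleet, heavily skewed toward one cloud.
fleet = [
    ("node-1", "aws"), ("node-2", "aws"), ("node-3", "aws"),
    ("node-4", "gcloud"), ("node-5", "azure"),
]
shares = provider_concentration(fleet)
top_provider, top_share = shares[0]
if top_share > 0.5:
    print(f"warning: {top_provider} hosts {top_share:.0%} of nodes")
```

A single provider crossing a threshold like 50% is exactly the “all your nodes on one cloud” situation you want an alert for.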
A bit of open-source history.
Talking about help for developers: we had the opportunity to discuss all of this last week at an event at Heavybit with Jed McCaleb (Stellar), Jesse Robbins (Chef), Brian Behlendorf (Hyperledger) and Jake Craige (Coinbase). All fine specimens. It all started with the folks who used to run mail servers on their own computers, up to those who started the Apache project (Brian of Hyperledger was one of its founders). It was one of the first foundations organized to support and sustain contributions to open-source projects. I was reading the Wikipedia description of Apache and thought these guys were about to drop their token, that is how “blockchain” it sounds:
“The Apache projects are characterized by a collaborative, consensus-based development process and an open and pragmatic software license. Each project is managed by a self-selected team of technical experts who are active contributors to the project.”
Cryptoland sometimes claims to be the first, or to be cut loose from the previous work done around cryptography and open source (which partly explains the atrocious state of developer tooling). Some interesting, workable consensus models lie buried in these “older” organizations, which is why we copied parts of the Hyperledger governance model for AdLedger (since Brian did Apache).
Why the Stellar protocol is so stellar.
Talking about age and experience, let me bridge over to Stellar: some folks don’t know that it was founded by Jed McCaleb, the guy who founded Ripple after he sold Mt. Gox. Stellar is an interesting beast as a protocol in that space: a foundation with the right degree of decentralization for its purpose. Its plumbing is more rigorous, and we are delighted to offer the only deployment tool for Stellar nodes. I want to provide some detail about how we did this, because it was challenging and we are proud of it. So please excuse the tech talk; you can just scroll down to the AWS/Blockdaemon comparison a few paragraphs below if it is too much.
Our Stellar deployments consist of three basic components:
- Stellar Core is the backbone of the Stellar network. It maintains a local copy of the ledger, communicating and staying in sync with other Stellar Core instances on the network.
- Stellar Horizon is an API server for the Stellar ecosystem. It acts as the interface between Stellar Core and applications that want to access the Stellar network.
- Stellar Bifrost is a Bitcoin/Ethereum to Stellar bridge. It allows users to exchange BTC/ETH for tokens on the Stellar network. As such, it is well suited for token distribution events like ICOs.
Each of these components uses its own PostgreSQL database. That is a lot of moving parts that need to be configured, monitored, secured, and tested. Moving from a single, simple deployment to one that scales to multiple customers meant we had to add features like a user interface, application and system monitoring, and configuration tweaks (Stellar Core alone has over 40 config options).
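To give a flavor of those config options, here is an illustrative sketch of how a deployment tool might render a minimal Stellar Core style config file from a flat dict. The key names follow the public stellar-core.cfg examples (HTTP_PORT, DATABASE, NETWORK_PASSPHRASE), but treat the exact option set and values as assumptions, not a complete reference.

```python
def render_core_config(options):
    """Serialize flat key/value options into TOML-like `KEY=value` lines."""
    lines = []
    for key, value in options.items():
        if isinstance(value, bool):
            # TOML booleans are lowercase
            rendered = "true" if value else "false"
        elif isinstance(value, int):
            rendered = str(value)
        else:
            rendered = f'"{value}"'
        lines.append(f"{key}={rendered}")
    return "\n".join(lines) + "\n"

# A tiny, hypothetical subset of the 40+ available options.
config = render_core_config({
    "HTTP_PORT": 11626,
    "PUBLIC_HTTP_PORT": False,
    "DATABASE": "postgresql://dbname=stellar user=stellar",
    "NETWORK_PASSPHRASE": "Public Global Stellar Network ; September 2015",
})
```

Templating configs like this, instead of hand-editing them per node, is what makes maintaining many customer deployments tractable.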
Since Bifrost acts as a Bitcoin/Ethereum bridge, it also needs secure access to Bitcoin and Ethereum nodes. Luckily, our team has deep experience in this area, which enables us to keep a couple of constantly synced nodes around (a challenge in itself!). Communication happens over the nodes’ RPC protocols; these typically run in plain text, so we added encryption on top for added security. Further, we’ve added Bitcoin support to the Bifrost demo page so our customers can test with both Ethereum and Bitcoin straight away.
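For the curious, this is roughly what such an RPC call looks like on the wire. The sketch below builds a Bitcoin Core style JSON-RPC request (HTTP with basic auth); because that interface is plain text by default, the connection itself gets wrapped in TLS in a setup like ours. The credentials and method shown are placeholders for illustration.

```python
import base64
import json

def build_rpc_request(method, params, user, password, request_id=1):
    """Return (headers, body) for an HTTP JSON-RPC call to a node."""
    body = json.dumps({
        "jsonrpc": "1.0",
        "id": request_id,
        "method": method,
        "params": params,
    })
    # Bitcoin Core's RPC interface uses HTTP basic auth.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    return headers, body

# Placeholder credentials; in practice these come from the node's config.
headers, body = build_rpc_request("getblockcount", [], "rpcuser", "rpcpass")
```

Nothing in the payload itself is encrypted, which is exactly why the transport layer has to be.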
We are now live, and are adding high-availability tools, dashboards and documentation for more robust nodes. As always, our blockchain dev team learns a ton about what works and what doesn’t, and we are generating great synergies across protocols. One area that remains complicated is the approach many infrastructure companies take to their hosting and blockchain service offerings.
The large cloud providers: Drama, Comedy or Romance?
We recently had one large infrastructure provider threaten to shut down servers on us because they suspected some of our customers were using their infrastructure for mining. Luckily, we are deeply interwoven with all major providers and have architected our orchestration tool for easy switching. In addition, our strong connections to all major providers mean that we can iron out such issues before they become critical. It is a good reminder of how important it is not to have all your nodes running on one cloud: one arbitrary decision can shut you down.
It made us take a good look at some of the “other” blockchain-as-a-service solutions offered by companies that are also infrastructure providers. Services like ours are great resellers for them because we can scale up clouds faster than anyone. We figured it would be helpful to compare features with their blockchain deployment tools. Below is a comparison between the AWS Ethereum templates and our deployment, geared toward two very divergent use cases (theirs to play, ours to deploy into actual production):
AWS has a different goal with their offering: simple tools to get you started, and then selling you more. Valid for sure. We allow you to iterate and then give you full flexibility in selecting the right network configuration at any stage of your process. It is a different approach, and the two can go well together (we will be an enormous reseller of the best cloud services). We work with AWS to ensure that our customers using AWS get the best configuration possible; so even though we have competing offerings, we are working on a deeper partnership. This way there is at least some competition between AWS, GCloud, Azure, DigitalOcean, etc.
I want to end this post with an important final thought. Privacy for individuals was the main driver for decentralized blockchains; security will be the main driver for private/hybrid chains. This is why we are working on ISO and SOC certifications. Read more about that in my next blog post.