Enterprise Readiness

[Written by Peter-Jan Brone]

After months of hard work we are proud to announce that DataBroker DAO is now running on the new version of our Mint middleware solution. The new version is simply called Mint 2.0, because as developers we would very quickly come up with crazy names that seem funny at first but eventually become a nuisance to explain to new joiners. I kid you not: I once refactored a microservice and we ended up renaming it to ‘Natalie’, after Natalie Portman, for no reason other than the apparent one.

The diehard followers of our project will know that not so long ago we raised money through a token sale, and we are still very grateful to all the people backing our project and giving us the opportunity to keep working and building out our Data Exchange platform. In light of this, we felt it was our responsibility, and an opportune time, to go a little deeper into the inner workings of the backend now that our End of Summer release is out. Over the past three months most of the work has gone into re-engineering the tech on which our platform is built. I won’t go into detail about the numerous improvements to the frontend, as those were already discussed in previous posts.

TL;DR: Mint 2.0 is bigger, badder and bolder, and DataBroker DAO’s Data Exchange platform is now running on it. And by the way, you might say “boy, this post is way too long”, but I looked it up and apparently I only overshot the ideal blog post length by 1.4%.


Some history

As background on what exactly Mint is, I’d like to refer you to the following overview of its functionalities. In short, Mint is a blockchain-agnostic layer that sits on top of the blockchain and acts as both an indexer and an adapter for all incoming and outgoing traffic to and from the chain. Mint 1.0 was built using NodeJS and mostly relied on the well-known Ethereum JavaScript API web3.js, in combination with MongoDB 3.2 and IPFS as data storage layers. One of the main issues we faced with this setup was that we needed to maintain an enormous number of watchers in the web3.js library, one for every sensor deployed to the system, and we discovered that these watchers were suffering from a memory leak. For us this was a good moment to mature the entire stack and move to newer, more cutting-edge technologies.
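To make that concrete, here is a minimal sketch of the pattern Mint 1.0 was stuck with. The web3 v0.2 filter API is real; the `SensorUpdated` event, the ABI and the addresses are hypothetical stand-ins:

```typescript
// Mint 1.0 era sketch (web3 v0.2): one event watcher per deployed sensor.
const Web3 = require("web3"); // web3 v0.2.x, CommonJS

const web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));

// Hypothetical minimal ABI containing only the event we watch for.
const sensorAbi = [{
  type: "event",
  name: "SensorUpdated",
  inputs: [{ name: "timestamp", type: "uint256", indexed: false }],
  anonymous: false,
}];
const sensorAddresses: string[] = ["0x0000000000000000000000000000000000000001"];

const Sensor = web3.eth.contract(sensorAbi);

for (const address of sensorAddresses) {
  // Every .watch() call keeps its own filter polling the node; with one
  // watcher per sensor, thousands of these piled up and leaked memory.
  Sensor.at(address).SensorUpdated({}, { fromBlock: "latest" })
    .watch((err: Error, event: any) => {
      if (err) return console.error(err);
      console.log("index event for", address, event); // hand off to the indexer
    });
}
```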

The new stack

One of our first decisions was to rewrite our entire codebase in TypeScript, which, as many of you know, is a superset of JavaScript that adds static typing, and thus type safety.

Many articles have been written about this already, so there is little point in going into great detail here. Suffice it to say that programming in TypeScript is bliss, especially for engineers with an object-oriented background. Once you’ve worked in TypeScript there’s just no going back, to the point that plain JavaScript now hurts my eyes every time I see it.
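A trivial example of what the compiler buys you; the `Sensor` shape is invented for illustration:

```typescript
// In plain JavaScript a typo or missing field only blows up at runtime;
// in TypeScript the compiler rejects it before the code ever ships.
interface Sensor {
  address: string;
  owner: string;
  pricePerReading: number;
}

function listingFee(sensor: Sensor, readings: number): number {
  return sensor.pricePerReading * readings;
}

// listingFee({ address: "0xabc", owner: "0xdef" }, 10);
// ^ compile-time error: property 'pricePerReading' is missing
```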

Ethers

After being stuck in limbo for the past year between web3 v0.2, with all of its known issues, and v1, which is still in beta and thus possibly unstable, we were glad to see a new library gaining traction. When we discovered our watchers were leaking memory, we decided to move once and for all and rewrite the whole thing using the ethers.js library. After a couple of months of working with it, and having hit numerous corner cases that made me dig into its lower-level API, I can say it’s very readable and easily extendable. Kudos to Richard Moore, who is behind the project and, by the way, very actively tackles open GitHub issues on a daily basis.
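As a flavour of the new setup, here is a minimal sketch of event listening with ethers.js; the registry contract and event are hypothetical stand-ins for our actual contracts:

```typescript
import { ethers } from "ethers";

// Hypothetical registry contract and event; the human-readable ABI
// strings are an ethers.js feature that keeps sketches like this short.
const REGISTRY = "0x0000000000000000000000000000000000000002";
const abi = ["event SensorUpdated(address indexed sensor, uint256 timestamp)"];

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
const registry = new ethers.Contract(REGISTRY, abi, provider);

// A single listener on an indexed event covers every sensor at once,
// instead of one web3 watcher per sensor address.
registry.on("SensorUpdated", (sensor, timestamp, event) => {
  console.log(`sensor ${sensor} updated at ${timestamp}`, event.transactionHash);
});
```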

RabbitMQ

Mint 1.0 behaved almost entirely synchronously, which meant that when you interacted with the blockchain through our REST API you were bound by the default 60-second Nginx timeout window before your calls ended up as gateway timeouts. Beyond that, we were also very limited in the number of open connections our servers could handle at any point in time. If some requests stalled a little, that number became very unpredictable and the overall system turned rather unstable during moments of heavy load. As we aim to become an enterprise-level data marketplace for worldwide IoT sensor data, this was obviously not an option for us.

By rewriting all our endpoints to handle requests asynchronously, meaning that POST requests immediately return a request identifier for which you can poll the response later on, we brought stability back to our systems, even when the platform is under stress. This comes in very handy because blockchain traffic is rarely a continuous stream of transactions that need to be sent to the chain; on the contrary, it is more often than not very bursty. Using this technique we not only buffer incoming requests, but can optimise their processing to the point where we fit more transactions into a single block, thus increasing our overall TPS.
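A minimal sketch of the request/poll pattern, with an in-memory map standing in for the RabbitMQ queue and worker; the endpoint names are invented:

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

// In Mint the payload is published to RabbitMQ and a worker writes the
// result back; an in-memory Map stands in for both in this sketch.
const results = new Map<string, { status: string; result?: unknown }>();

app.post("/transactions", (req, res) => {
  const requestId = randomUUID();
  results.set(requestId, { status: "pending" });
  // Simulate the queue + worker round trip.
  setTimeout(() => results.set(requestId, { status: "done", result: req.body }), 1000);
  // Return immediately instead of holding the connection open for 60s.
  res.status(202).json({ requestId });
});

// Poll for the response later using the request identifier.
app.get("/requests/:id", (req, res) => {
  const entry = results.get(req.params.id);
  if (!entry) return res.sendStatus(404);
  res.json(entry);
});

app.listen(3000);
```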

Aside from enabling us to rework the codebase into a fully decentralised architecture, this also led us to containerise every service, simply because it felt natural to do so. We’ve also made synchronous communication between consumers possible by implementing RPC calls using simple direct reply-to queues. Obviously these should be treated with a little caution, but they are extremely useful for nonce tracking, for example. They allowed us to build a nonce-tracking microservice that keeps track of your nonce on multiple networks, even if you’re interacting with multiple Mint instances at a time, again increasing the overall TPS as we’re able to correctly predict nonces without running into too many invalid-nonce errors. For the most part though, our workers are mere snippets of code that execute on certain data and republish messages to a wide array of queues for further processing, which means we can scale horizontally.
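For example, a client-side RPC call over RabbitMQ’s direct reply-to pseudo-queue could look roughly like this; the `nonce_rpc` queue and the service behind it are hypothetical:

```typescript
import amqp from "amqplib";
import { randomUUID } from "crypto";

// Ask a (hypothetical) nonce-tracking service for the next nonce of an
// address on a given network, via RabbitMQ's direct reply-to pseudo-queue.
async function nextNonce(address: string, network: string): Promise<number> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  const correlationId = randomUUID();

  let resolveReply!: (n: number) => void;
  const reply = new Promise<number>((resolve) => (resolveReply = resolve));

  // 'amq.rabbitmq.reply-to' needs no declaration; the reply is pushed
  // straight back on this very channel.
  await ch.consume(
    "amq.rabbitmq.reply-to",
    (msg) => {
      if (msg && msg.properties.correlationId === correlationId) {
        resolveReply(Number(msg.content.toString()));
      }
    },
    { noAck: true }
  );

  ch.sendToQueue(
    "nonce_rpc", // hypothetical queue consumed by the nonce service
    Buffer.from(JSON.stringify({ address, network })),
    { replyTo: "amq.rabbitmq.reply-to", correlationId }
  );

  const nonce = await reply;
  await conn.close();
  return nonce;
}
```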

Another great technology we implemented, and one that goes hand in hand with AMQP, is MQTT. Coincidentally, this protocol was designed for communication between IoT devices, but we just use it for the slimmed-down pub/sub messaging protocol it is. The RabbitMQ plugin does little more than host a WebSocket on which frontends can subscribe to the topics they’re interested in; we, however, make sure our entire stack religiously reports every interaction with the blockchain, tagged with your request id for example. Now use your imagination and combine this with React and Redux, and I’m sure you can see just how powerful this solution really is.
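On the frontend side, subscribing over RabbitMQ’s Web MQTT plugin could look like this sketch; the host and topic layout are assumptions:

```typescript
import mqtt from "mqtt";

// Connect to RabbitMQ's Web MQTT plugin (default WebSocket path /ws on
// port 15675); the host and the topic scheme below are hypothetical.
const client = mqtt.connect("ws://mint.example.com:15675/ws");

const requestId = "my-request-id"; // id returned by the async POST endpoint

client.on("connect", () => {
  // Receive every blockchain interaction tagged with our request id.
  client.subscribe(`requests/${requestId}/#`);
});

client.on("message", (topic, payload) => {
  const update = JSON.parse(payload.toString());
  // In the frontend this would be dispatched straight into a Redux store.
  console.log(topic, update);
});
```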

Mongo 4.0

What can I say … I freakin’ love Mongo. We decided to step away from ODMs like Mongorito or Mongoose and write all our queries raw using the incredibly awesome aggregation pipeline. I remember being amazed at the power of the WiredTiger engine when it was first introduced, and with versions 3.6 and later 4.0 they’ve just blown me away with the capabilities of their tech. One of the great additions in the latest versions is change stream cursors, which enable very fine-grained listeners on all your Mongo collections, triggering the necessary queues and updating whatever is required.
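A short sketch of such a listener; the database and collection names are made up:

```typescript
import { MongoClient } from "mongodb";

async function main() {
  // Change streams require a replica set; db/collection names are examples.
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const sensors = client.db("mint").collection("sensors");

  // Fine-grained listener: only react to newly inserted sensor documents.
  const stream = sensors.watch([{ $match: { operationType: "insert" } }]);

  stream.on("change", (change: any) => {
    // e.g. republish to a RabbitMQ queue for further processing
    console.log("new sensor:", change.fullDocument);
  });
}

main().catch(console.error);
```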

Webhooks

I think it was a Friday in August when Roderik opened a medium-priority ticket stating “Hey, don’t you think it would be cool to add webhooks?”. Well, by Saturday evening we had implemented three different types of webhooks, so users of Mint can now subscribe to the following (a sketch of registering them follows the list):

  • Events that get triggered
  • Addresses that are interacted with
  • Timed hooks that relay the user’s wallet balance for example
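Since the webhook API itself isn’t documented in this post, here is a purely hypothetical sketch of what registering each type could look like; the endpoint and field names are invented:

```typescript
// Purely hypothetical subscription payloads for the three webhook types;
// Mint's actual endpoint and field names may differ.
const MINT = "https://mint.example.com";

async function register(subscription: object): Promise<void> {
  await fetch(`${MINT}/webhooks`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(subscription),
  });
}

async function demo() {
  // 1. fire whenever a contract event is triggered
  await register({ type: "event", event: "SensorEnlisted", url: "https://example.com/hooks/event" });
  // 2. fire whenever an address is interacted with
  await register({ type: "address", address: "0x00000000000000000000000000000000000000a1", url: "https://example.com/hooks/address" });
  // 3. fire on a schedule, e.g. relaying the wallet balance every hour
  await register({ type: "timed", address: "0x00000000000000000000000000000000000000a1", interval: "1h", url: "https://example.com/hooks/balance" });
}

demo().catch(console.error);
```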

ERC20 token bridge

I guess you can say this token bridge we’ve built is part of our new stack as well. The reason we need it is to allow the marketplace to scale while still maintaining a real-life use case for our DTX token on mainnet. It mitigates scaling issues by letting us use a private PoA network, where you can decide on the gas cost and block times yourself. Furthermore, it protects you against the congestion we see happening on the mainnet from time to time.

It’s actually pretty cool; I’ve pulled a schematic from somewhere on the internet describing the deposit flow. In short, it’s basically just two contracts on two different chains that communicate with each other through events. A handful of validator nodes listen for those events and sign off on requests, and the contract on the mainnet knows the validators’ addresses. Read that twice; it’s really not that difficult once you get how it works. Implementing it, however, is a real PITA, but that’s just because… blockchain.
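A rough sketch of one validator’s side of the deposit flow, with invented addresses, ABIs and function names standing in for the real bridge contracts:

```typescript
import { ethers } from "ethers";

// Sketch of a single validator node relaying deposits; addresses, ABIs
// and function names are illustrative, not the actual bridge code.
const HOME_BRIDGE = "0x0000000000000000000000000000000000000010";    // mainnet side
const FOREIGN_BRIDGE = "0x0000000000000000000000000000000000000020"; // PoA side

const home = new ethers.Contract(
  HOME_BRIDGE,
  ["event Deposit(address indexed from, uint256 amount)"],
  new ethers.providers.JsonRpcProvider("https://mainnet.example.com")
);

const validator = new ethers.Wallet(
  process.env.VALIDATOR_KEY as string, // this validator's signing key
  new ethers.providers.JsonRpcProvider("https://poa.example.com")
);

const foreign = new ethers.Contract(
  FOREIGN_BRIDGE,
  ["function confirmDeposit(address recipient, uint256 amount, bytes32 txHash)"],
  validator
);

// Each validator relays the mainnet Deposit event to the PoA contract,
// which mints bridged DTX once enough distinct confirmations arrive.
home.on("Deposit", async (from, amount, event) => {
  const tx = await foreign.confirmDeposit(from, amount, event.transactionHash);
  await tx.wait();
});
```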

What’s next

All of the above doesn’t even describe half of what we’ve been doing, but the rest is top secret and we will never tell you. Most of all we’re glad to have the ERC20 token bridge in production, allowing everyone to transfer DTX tokens from the mainnet onto our private PoA network and making the DTX token truly useful and usable on our platform. With that, I’d like to welcome every sensor owner or data buyer to start interacting with it. Withdrawals are disabled for the time being.

Now I hear you all thinking … why is this blog post titled Enterprise Readiness? Well, we, the DataBroker DAO team, feel that with this End of Summer release we are finally ready to welcome enterprise-level players onto the system. If you have data just sitting there collecting dust in your data silos, please don’t hesitate to contact our admins. We will guide you through the enlisting process and answer any questions you may have, so you can turn that pile of bits into a big pile of gold, DTX tokens, whatever you want to call it!

Oh, and I was kidding about the withdrawals by the way, of course we made these possible too!

https://youtu.be/vAny5zIWk3s

Stay tuned, because winter is coming and we plan on working even harder now that there’s no real reason to go outside anymore.

Hitting the road with DataBroker DAO!

The roadshow for the DataBroker DAO token sale got underway this past week and what a start! We have been well received in San Francisco and Vilnius in the past week and have had some great results that validate both our business model and our token economics.

First up was the CB Cryptocurrency and Blockchain Conference in San Francisco from March 22–23. We took the stage to present DBDAO for the first time in the US market and dove straight into the heart of the innovation happening in the blockchain space.

There was a great line-up of speakers and panelists with backgrounds ranging from blockchain to VC to the SEC, which brought in several hundred participants, more than 60 speakers and 30+ companies, about 20 of which were pitching for glory. I found the presentations of Alex Machinsky, Bernard Moon and Chance Du very informative and inspiring, and the insights offered by Reese Jones of Singularity University and Glen Gow of Clear Ventures enlightening.

At the end of the first day, it was our turn to take the stage for the pitch competition. We were up against some “stiff” competition (inside joke for those who were there) and in the end, we were selected by the judges for the top spot! Judge Vince Kohli did the honours of presenting DataBroker DAO as the winner.

Matthew Van Niekerk (CEO DataBroker DAO), Vince Kohli (startup judge & mentor, MIT, Berkeley, Stanford)

On Saturday we didn’t take the stage, but we met some great folks at the Fintech World Blockchain/ICOs event hosted by Draper University in San Mateo, and got to check out the great decor at Hero City.

Hero City at Draper University, San Mateo, CA

Next up on the roadshow was Vilnius, Lithuania, this time for the d10e Decentralisation Conference. Again there were a number of great presentations, covering several dimensions from legal to marketing to the regulatory-compliant execution of token sales. Jonas Udris also provided a great update on Vilnius’ plans for a blockchain-based share registry. Quite a progressive approach!

The fun on the pitch front got started in the late afternoon, and this time around the stakes were higher, as there was 100K EUR in prize money up for grabs. And we nailed it! Well, second place is pretty good in my book against the tough global competition that took the stage. The 25K EUR in prize money that we took home for DataBroker DAO will go to great use in getting us further along on the product roadmap.

Mago Tsanajev (Content Marketing) collecting the prize after winning 2nd place in the pitch competition.

In addition to the great response and reception of DBDAO in both San Francisco and Vilnius this week, we are also getting some fantastic reviews:

4.6/5 stars on Track ICO https://www.trackico.io/ico/databrokerdao/

8.1/10 stars on ICO Marks https://icomarks.com/ico/databrokerdao

4.11/5 stars on ICO Holder https://icoholder.com/en/databrokerdao-3028

Looking at the ratings on these sites, a common theme comes through on where we can improve: social network and community activity. We will certainly double down on our efforts in this area, and we would like your help.

If you are a $DTX holder or a DataBroker DAO fan, please make your support heard and help us spread the word. Every bit counts to achieve our common objectives.

Coming up this week, the team is in Washington DC and Dubai for more events and we’ll keep you posted on how these go.

Any questions, opportunities or partnership requests are welcome on any of these channels:

Telegram: https://t.me/databrokerdao
Facebook: https://www.facebook.com/DataBrokerDAO/
Twitter: https://twitter.com/DataBrokerDAO
Reddit: https://www.reddit.com/r/DatabrokerDAO/