AI is being positioned as today's massive disruptor. However, the concept has been around for many decades (the term was coined in the 1950s) and has passed through several "winters" of waning interest and negative speculation (as well as an equal number of hype cycles).

So, is there really a difference now?

Some will attribute the "difference" to today's data abundance: social networks, mobile phones, wearables, IoT, and so on. It is true that the data on which AI algorithms can operate is growing exponentially, creating a greater opportunity for better data mining and actionable, prescriptive insights.

Others will argue that it is the availability of processing power and the advances in machine learning algorithms and techniques that make the actual "difference". We have witnessed the great success stories of deep reinforcement learning (AlphaGo, AlphaStar). Moreover, federated learning and generative adversarial networks are opening newer, even more exciting frontiers.

In my opinion both are correct, and arguing over which point of view is more important is a chicken-and-egg debate not worth entering. However, I believe there is still something missing, and it relates to the greatest value-generating element of our era: the means of sharing.

 


Nowadays, for something to skyrocket, it must both generate and benefit from network effects. It needs to act like a platform whose value increases with the number of users who use it and the members who contribute to it.

Yes, data availability is indeed increasing exponentially, both in terms of volume and in terms of data sources. My phone, the hundreds of applications on it, my smartwatch and my IoT-enabled car all generate lots of data that concern me. So do my smart thermostat, my fridge and several such devices in all the places I use and visit every day, devices that belong to and are controlled by others. Different sources, account holders, login names, formats, update frequencies, and the list goes on. The Internet of Things world we forecasted is indeed becoming reality, producing tons of data from which AI can draw great insights, but, to be frank, it's a complete mess!

Different, totally incompatible devices, platforms, formats and business models exist all over the world, and many more are emerging each day. If we want to make sense of all this, data samples need to be synchronized and protected from malicious interventions (changing a value, deleting a sample, altering the order of sampling). And no, the solution is not to call ANSI, IEEE, ISO or whoever else to put us in order by creating a central global repository of sensory data; our world does not work like that anymore. We have learned well that assigning this control to a single "trusted" authority increases costs and minimizes agility and transparency.

But how, without employing a central trusted authority, can we deploy a database that is accessible and readable by all, updated in a timely and tamper-proof manner, while still ensuring anonymity? And where are we going to send the maintenance bill?

And let's assume for the moment that this can somehow happen, and that a trusted lake of sensory data will be created and seamlessly maintained. Who is going to create value out of it, and how? Do we really expect each aspiring data analytics company to start from scratch, building a silo of value opaque to everyone else? How many Cambridge Analytica-type stories do we want to create?

 


Well, neither question had an answer until Blockchain surfaced. Blockchain's reason for existence is to implement a distributed ledger of information that is immutable, readable and writable by all, and tamper-proof. The blockchain network will put order in the chaos: it will ensure the ordering of events, control access, and protect any sensitive information. And the best part is that it will do this without requiring a single trusted party to guarantee any of the above. It will instead achieve it through elaborate and fair consensus mechanisms employed across many distributed (but cooperating) nodes.
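
To make the tamper-proof ledger idea concrete, here is a minimal sketch in Python, with hypothetical names and devices. A real blockchain adds consensus, peer-to-peer networking and digital signatures on top; this only shows how hash-chaining makes the content and the order of sensor samples tamper-evident:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's full contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A minimal hash-chained ledger of sensor samples (no consensus layer)."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(),
                   "sample": None, "prev_hash": "0" * 64}
        self.chain = [genesis]

    def append(self, sample: dict) -> None:
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1,
                 "timestamp": time.time(),
                 "sample": sample,
                 "prev_hash": block_hash(prev)}  # links block to its predecessor
        self.chain.append(block)

    def verify(self) -> bool:
        """Recompute every link; any changed value, deleted sample or
        reordered block breaks the chain of hashes."""
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append({"device": "thermostat-01", "temp_c": 21.4})
ledger.append({"device": "car-telemetry", "speed_kmh": 62})
assert ledger.verify()

ledger.chain[1]["sample"]["temp_c"] = 99.9  # a malicious intervention...
assert not ledger.verify()                  # ...is immediately detected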

On top of that, by using smart contracts, Blockchain can seamlessly enable the distributed (layered) monetization of the data mining effort. There is no point in everyone trying to create value from scratch. It would be far better to employ a layered approach, one that promotes focus and lets separate entities generate escalating value. Under this model, the data engineering or mining effort of one company is leveraged by (sold to) another that has a different view or combines other data sources as well. Such a model can be sustained only if an automated monetization scheme is put in place, through which value propagates effortlessly up the value ladder. That automation is exactly what smart contracts provide.
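
As an illustration of that settlement logic, here is a sketch, again in Python with hypothetical names and invented prices. Real smart contracts run on-chain (for example in Solidity on Ethereum); this only shows the royalty arithmetic such a contract would automate, with each sale propagating a share of the payment up the ladder of value:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A data set or model offered on the ledger. Each product may build on
    upstream products, to which a share of every sale is forwarded."""
    owner: str
    price: float                                  # price per access, in some token
    upstream: list = field(default_factory=list)  # pairs of (DataProduct, royalty share)

@dataclass
class RoyaltyContract:
    """Sketch of the settlement a smart contract would automate."""
    balances: dict = field(default_factory=dict)

    def purchase(self, buyer: str, product: DataProduct) -> None:
        self._settle(product, product.price)
        self.balances[buyer] = self.balances.get(buyer, 0.0) - product.price

    def _settle(self, product: DataProduct, amount: float) -> None:
        kept = amount
        for upstream_product, share in product.upstream:
            royalty = amount * share
            kept -= royalty
            self._settle(upstream_product, royalty)  # recurse up the ladder of value
        self.balances[product.owner] = self.balances.get(product.owner, 0.0) + kept

# Raw sensor feed -> cleaned data set -> analytics report: each layer adds value.
raw = DataProduct(owner="sensor-coop", price=1.0)
cleaned = DataProduct(owner="data-engineers", price=5.0, upstream=[(raw, 0.20)])
report = DataProduct(owner="analytics-firm", price=20.0, upstream=[(cleaned, 0.25)])

contract = RoyaltyContract()
contract.purchase("insurance-co", report)
# 20.0 paid: analytics-firm keeps 15.0, data-engineers get 4.0, sensor-coop 1.0
print(contract.balances)
```

No layer needs to invoice any other; the moment the report is bought, every upstream contributor is paid automatically.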

There you have it:

Sure: IoT senses and artificial intelligence computes. However, when the deal is multiparty and big enough (and our world surely is), someone must remember, structure and facilitate. That is the role reserved for Blockchain. And it's a leading one!

 

Author: Gerasimos Michalitsis