By Chirdeep Singh Chhabra, Founder, Ocean Protocol
Data is the new oil, or more accurately, the new energy source fueling our economy and making it a Data Economy. Many companies now talk about their ‘data strategy’ and are setting up departments dedicated solely to creating value from data and advancing AI capability. It is no longer a topic discussed behind closed doors among IT teams; it has become a front-and-center issue in the boardroom, debated among companies’ top executives. This will not just see a new data economy develop but also new business models, placing CIOs and CDOs at the center of change and revenue generation.
“Knowledge is power” is a long-standing, popular proverb: the more knowledge people gain, the more powerful they become. The same goes for companies. Data has long been seen as a valuable resource, but only recently has there been an explosion in the amount of data generated. Thanks to the rapid digitization of our lives and the growth of IoT, in 2016 alone the world produced 16 ZB of data; that is 17,592,186,044,416 GB. Yet only about 1 percent of it was ever analyzed. Even though vast amounts of data are generated daily, companies are still exploring trusted and scalable ways to share and monetize data, and to reap the value it can bring to their organizations and industries.
The value data can bring to an organization is immense and can, in general, be broken down into two pillars. On the one hand, it can help an organization gain insights to outpace its competition, whether by better anticipating and satisfying customer needs or by increasing productivity and efficiency. On the other hand, data can be a new product and a new revenue stream. In the first pillar, data, whether generated internally within an organization or gathered from external sources, provides the company with new Business Intelligence and knowledge that can be used to optimize processes. More important still is the breaking down of data silos, which yields knowledge that does not just improve existing processes but enables entirely new ways of operating.
The second pillar leads to new products and business models derived from data: the AI future, which requires vast amounts of data assets to train and develop AI applications. Both pillars require effective and scalable data exchange. The challenge here is trust. Data exchange and analysis have been hampered largely by concerns over trust and security. Until now, most data exchanges have operated as centralized marketplaces, where a copy of the data is stored centrally and exchanged under the rules and governance of the individual marketplace. Under this model there is a lack of data provenance, transparency, trust, and fairness in pricing. Companies have limited means to assign a value to their data assets and to ensure that an exchange is trustworthy, auditable, and verifiable. In addition, data owners can lose control of their data assets and, more importantly, privacy and security can easily be compromised.
Having built and run centralized marketplaces myself, I experienced these limitations first-hand, and the model simply will not scale. Data remains locked up in silos, unable to be leveraged. Many companies have AI algorithms but not enough datasets to train their AI. According to a report by PwC, the AI revolution, powered by data, could add up to $15.7 trillion to global GDP by 2030, with roughly 60 percent of that coming from consumption-side effects and 40 percent from productivity gains. How can we effectively unleash the power of data?
Decentralization is the Way Forward
With the rise of blockchain technology, experts are moving beyond its crypto applications. We are seeing the convergence of blockchain, data, and AI, allowing individuals and organizations to convert data and related services into assets. For example, the non-profit Ocean Protocol Foundation is building a blockchain-enabled data exchange protocol, Ocean Protocol, to promote worldwide data exchange. Thanks to its decentralized nature, Ocean Protocol allows trust to be established without a centralized third party. It eliminates the single point of control and failure, enables trust between data providers and consumers, and ensures secure data exchange and transactions. In addition, with native tokens, Ocean Protocol can incentivize individuals and organizations to submit, refer, and make available quality data and related services, significantly enhancing and maintaining the integrity of the ecosystem. By providing an open-source, decentralized blockchain data exchange protocol for data services, Ocean Protocol creates opportunities for individuals and organizations to continuously create value from data assets. It supports all types of data services for a complete range of data and AI applications, breaking down data silos and liberating the world’s data to solve industry and societal problems.
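To make the token-incentive idea above concrete, here is a minimal sketch of a registry where data providers stake tokens on the quality of an asset and lose part of that stake if consumers rate the asset poorly. The class names, slashing rule, and thresholds are all invented for illustration; they are not Ocean Protocol’s actual contract design or API.

```python
# Illustrative model of stake-based quality incentives for data assets.
# All names and rules here are hypothetical, not Ocean Protocol's real API.

from dataclasses import dataclass, field

@dataclass
class DataAsset:
    asset_id: str
    provider: str
    stake: float                         # tokens committed to signal quality
    ratings: list = field(default_factory=list)

class Registry:
    def __init__(self):
        self.assets = {}

    def register(self, asset_id, provider, stake):
        # A positive stake gives the provider skin in the game.
        if stake <= 0:
            raise ValueError("a positive stake is required")
        self.assets[asset_id] = DataAsset(asset_id, provider, stake)

    def rate(self, asset_id, score):
        # Consumers score an asset in [0, 1]; a low running average
        # slashes half of the provider's stake (an invented rule).
        asset = self.assets[asset_id]
        asset.ratings.append(score)
        avg = sum(asset.ratings) / len(asset.ratings)
        if avg < 0.5:
            asset.stake *= 0.5
        return avg
```

The point of the sketch is the mechanism, not the numbers: tying a provider’s tokens to consumer feedback is one way a decentralized exchange can reward quality data without a central gatekeeper.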
Open-source technology enables innovation and easy adoption by the developer and data science communities. Participants are free to use existing open-source software and tools or to develop their own. One of the obstacles to applying data effectively to AI is that data needs to be standardized, labeled, and “cleansed” of bias and anomalies. The data must also be specific enough to be useful, yet still protect individuals’ privacy. Ocean Protocol provides a powerful decentralized substrate for the provision of data services. The open-source model allows a vibrant ecosystem to develop on top of Ocean Protocol, including a multitude of data marketplaces; providers of compute, storage, and AI services; and an array of services required to optimize data for usage.
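The preparation steps mentioned above, standardizing records, filtering anomalies, and protecting identity, can be sketched with standard-library Python. The record schema, z-score cutoff, and salt are made up for the example; real pipelines would use proper anonymization techniques rather than a simple salted hash.

```python
# Sketch of pre-sharing data preparation: drop statistical anomalies and
# replace a direct identifier with a stable pseudonym. Field names and
# thresholds are illustrative only.

import hashlib
import statistics

def pseudonym(user_id, salt="demo-salt"):
    # One-way hash so downstream analysts see a stable token,
    # not the raw identifier. (A real system would manage the salt securely.)
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def clean_records(records, field="kwh", z_cutoff=1.5):
    """Filter outliers on one numeric field and pseudonymize users."""
    values = [r[field] for r in records]
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    cleaned = []
    for r in records:
        z = 0.0 if stdev == 0 else (r[field] - mean) / stdev
        if abs(z) > z_cutoff:            # drop statistical anomalies
            continue
        cleaned.append({"user": pseudonym(r["user"]), field: r[field]})
    return cleaned
```

Even this toy version shows the tension the article describes: the cleaned records stay specific enough to analyze, while the direct identifier never leaves the provider.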
A Unique Trust Framework
Recently, Ocean Protocol, PwC Singapore, and the Info-communications Media Development Authority of Singapore (IMDA) announced a collaboration to co-develop a trust framework and marketplace solution enabling safe, trusted, and effective data exchange that satisfies the varied requirements of data providers and consumers. Under this collaboration, IMDA will provide regulatory guidance and co-create codes of practice that mitigate risk and ensure appropriate data practices in usage, handling, and sharing.
To support the development, seven industry-led pilots have been launched across the Food and AgriTech, Built Environment, Consumer Goods and Retail, Financial Services, Mobility, Utilities, and Wellness and Healthcare sectors. For example, Aviva and Connected Life are applying data analytics and artificial intelligence to enhance protection and care for the ageing population and support independent living; Roche Diagnostics is exploring ways to improve the management and treatment compliance of patients on blood-thinning therapy; and Johnson & Johnson is running a clinical trial that leverages access to multi-source motion and lifestyle data to help improve orthopedic evaluation and recovery. Among other partners, Unilever is using the data-sharing framework to unlock new shopper insights in Singapore and help smallholder farmers thrive in Southeast Asia.
The Trusted Data Framework will offer a customized approach for companies to enforce provenance, along with the flexibility of offering different levels of access to data and services, from free (commons) and priced to restricted and private assets.
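The four access tiers named above can be sketched as a simple policy check. The tier names come from the article; the enforcement logic below (payment flag, grant list, owner check) is an invented illustration of how such tiers might be evaluated, not the framework’s actual design.

```python
# Illustrative policy check for tiered data access: commons, priced,
# restricted, private. The enforcement rules are hypothetical.

from enum import Enum

class Tier(Enum):
    COMMONS = "commons"        # free for anyone
    PRICED = "priced"          # requires payment
    RESTRICTED = "restricted"  # requires an explicit grant
    PRIVATE = "private"        # owner only

def can_access(tier, requester, owner, paid=False, grants=()):
    """Return True if the requester may access an asset at this tier."""
    if tier is Tier.COMMONS:
        return True
    if tier is Tier.PRICED:
        return paid
    if tier is Tier.RESTRICTED:
        return requester == owner or requester in grants
    return requester == owner  # Tier.PRIVATE
```

Expressing the tiers as data rather than scattered conditionals is what lets a framework apply one provenance and audit trail uniformly across free and paid assets alike.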
For data to be optimally utilized to drive valuable business decisions, an organization needs to allow its various departments timely access to a wide array of relevant data sources. The era of a centralized Business Intelligence unit pumping out BI reports is gone. Decentralized data exchange can help companies stay agile by breaking down data silos and providing flexible solutions and a data ecosystem that optimize data exchange, whether for internal use or for turning data into valuable assets, thereby creating a new Data Economy.