Media

Fake News and Fake Profiles Issue

Fake news is probably the most alarming issue we face as the media go through the painful process of democratization via social media and publishing platforms.


Indeed, just 30 years ago media publishers – newspapers, TV, and radio – controlled the flow of information. They followed certain editorial policies, were in their turn controlled by various groups, and their audiences were able to evaluate their reputation.


Blogs, social media, video hosting services, and other platforms changed all of that for good. Now everyone is a publisher. Bloggers (and later vloggers and podcasters) build their own reputation and personal brands. Some of them tie their posts to their real identity, while others prefer to stay anonymous. Such authors, especially anonymous ones, may launch fake news and cross-reference each other extensively, creating the impression of numerous independent sources when in reality there may be only a single original source.


Another widespread problem is ‘bot farms’, which create a false sense of public opinion through thousands of "individual" comments.


It is now almost impossible for an inexperienced reader to separate lies from truth. Even respected publishers are no longer safe unless they invest in proper fact-checking for each and every piece of news, which consumes a lot of resources.

(Re-)Introducing Measurable Reputation

People need a simple indicator of veracity – a genuine, unfeigned one.


Several projects keep trying to address this issue, but none of them has dared to deconstruct the most basic thing: reputation itself and the mechanics behind it.

We do.


We distinguish the reputations (we call them e-Karma) of several elements of the media value chain:

  • reputation of the platform
  • reputation of the author
  • reputation of the content

Like any e-Karma, the reputation of each of these elements will be built on millions of comments, likes, follow requests, and other human reactions related to the platform, the author, their content, and even every part of that content. One can comment on a specific paragraph, and Linked Data technology will then be able to build a cumulative rating for the specific idea discussed in that paragraph, so that a smart platform can derive public opinion on that idea.
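To make this concrete, here is a minimal sketch of how reactions anchored to individual paragraphs could be aggregated into a cumulative rating per fragment. The URIs, field names, and scoring scale are illustrative assumptions, not part of the e-Karma specification:

```python
# Illustrative sketch only: aggregating paragraph-level reactions into a
# cumulative rating per content fragment. URIs, field names, and the scoring
# scale are assumptions, not the actual e-Karma data model.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class Reaction:
    reactor: str  # ID of the user who reacted
    target: str   # URI of the content fragment, e.g. "urn:article:42#para-3"
    score: float  # e.g. +1 for a like, -1 for a dislike


def cumulative_ratings(reactions: list[Reaction]) -> dict[str, float]:
    """Group reactions by the exact fragment they annotate and average them."""
    by_target: dict[str, list[float]] = defaultdict(list)
    for r in reactions:
        by_target[r.target].append(r.score)
    return {target: mean(scores) for target, scores in by_target.items()}


reactions = [
    Reaction("urn:author:alice", "urn:article:42#para-3", +1.0),
    Reaction("urn:author:bob", "urn:article:42#para-3", -1.0),
    Reaction("urn:author:carol", "urn:article:42#para-7", +1.0),
]
print(cumulative_ratings(reactions))
# {'urn:article:42#para-3': 0.0, 'urn:article:42#para-7': 1.0}
```

A Linked Data implementation would express the same reactions as annotations targeting fragment URIs, which is what would let independent platforms aggregate opinions about the same idea.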

In our architecture, we don’t introduce two separate categories of authors and readers, as every reader may become an author (…of at least a comment). But we do distinguish authenticated and (yet) unauthenticated users. An authenticated user has an e-passport (technically speaking, a private key and a certificate from a CA in the global Web 3.0 PKI), whereas an unauthenticated one is just a regular user of a modern platform (logins and passwords do not make users authenticated in the strict sense of the Web 3.0 Data Space).

It’s important to emphasize that only authenticated users may build their e-Karma as authors and may provide input to the e-Karma of other authors and their pieces of content.
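As a rough illustration of this rule, the sketch below accepts a reaction only if it carries a valid signature from the reactor’s e-passport key. Ed25519 and the message layout are assumptions; a real system would also validate the certificate chain up to a CA of the Web 3.0 PKI:

```python
# Minimal sketch of the "authenticated users only" rule: a reaction counts
# toward e-Karma only if it is signed with the reactor's e-passport key.
# Ed25519 and the message layout are assumptions; certificate-chain checks
# against the Web 3.0 PKI are omitted here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def reaction_counts(public_key: ed25519.Ed25519PublicKey,
                    message: bytes, signature: bytes) -> bool:
    """Return True only for reactions signed by an authenticated user."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False


# Demo: an authenticated user signs a reaction with their e-passport key.
passport_key = ed25519.Ed25519PrivateKey.generate()
reaction = b'{"target": "urn:article:42#para-3", "score": 1}'
signature = passport_key.sign(reaction)

print(reaction_counts(passport_key.public_key(), reaction, signature))    # True
print(reaction_counts(passport_key.public_key(), reaction, b"\x00" * 64)) # False
```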


By the way, ‘authenticated’ doesn’t mean ‘known to everyone’. Users may choose whatever level of anonymity they require. For example, authors may want to use aliases, and some aliases may even become quite famous. In such a case, the system will protect those aliases, meaning that no one can impersonate an existing alias. In fact, authenticated users within the Web 3.0 Data Space will be able to tune their level of anonymity far more precisely than unauthenticated ones.
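One plausible way to protect aliases (the exact mechanism is not specified here, so this is only an assumption) is to bind each alias to the key fingerprint of the e-passport that first claimed it, so a different key can never publish under it:

```python
# Hypothetical alias-protection sketch: an alias is bound to the fingerprint
# of the first e-passport key that claims it. The registry structure and
# fingerprinting scheme are assumptions, not part of the described system.
import hashlib


class AliasRegistry:
    def __init__(self) -> None:
        self._owners: dict[str, str] = {}  # alias -> key fingerprint

    @staticmethod
    def fingerprint(public_key_bytes: bytes) -> str:
        return hashlib.sha256(public_key_bytes).hexdigest()

    def claim(self, alias: str, public_key_bytes: bytes) -> bool:
        """Bind the alias to this key, or confirm an existing binding."""
        fp = self.fingerprint(public_key_bytes)
        owner = self._owners.setdefault(alias, fp)
        return owner == fp  # False means someone else already owns the alias


registry = AliasRegistry()
print(registry.claim("deep_source_2.0", b"alice-public-key"))    # True: claimed
print(registry.claim("deep_source_2.0", b"mallory-public-key"))  # False: rejected
```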

In the beginning, all users are unauthenticated. Eventually, as they decide to become authors with a reputation, want to rate others, or obtain PODs for other reasons, they automatically receive e-passports and become authenticated users.

Additional Benefits

Within the Web 3.0 Data Space, as more and more materials, authors, and platforms are rated, we expect a smooth transition from today’s vague world of uncertainty and fake news to a world with a proven reputation for every content provider, both individual and corporate.


In this world, authors will “just publish” their materials instead of posting them on a specific platform.


On the other hand, it will become risky for a platform to ban a popular author: readers will leave and access the content through other platforms, the platform will receive negative reactions from unhappy authors and their readers, and its e-Karma will decrease (as long as these authors and readers have a decent e-Karma level themselves).
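The parenthetical weighting could look roughly like the sketch below, where a reaction moves the platform’s e-Karma in proportion to the reactor’s own e-Karma. The linear formula is an assumption for illustration, not the defined e-Karma mechanics:

```python
# Sketch of reputation-weighted reactions: a reaction shifts a platform's
# e-Karma in proportion to the reactor's own e-Karma. The linear weighting
# and the clamping at zero are assumptions.
def ekarma_delta(reactions: list[tuple[float, float]]) -> float:
    """reactions: (score, reactor_ekarma) pairs; score is e.g. -1 for a
    negative reaction to the platform, +1 for a positive one."""
    return sum(score * max(reactor_ekarma, 0.0) for score, reactor_ekarma in reactions)


# A banned, popular author (high e-Karma) and their readers react negatively.
print(ekarma_delta([(-1, 9.0), (-1, 4.5), (-1, 0.5)]))  # -14.0
# Reactions from accounts with no reputation barely move the platform's e-Karma.
print(ekarma_delta([(-1, 0.0), (-1, 0.25)]))            # -0.25
```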


More positive side effects will emerge, such as:


Persistent IDs

Every author, media outlet, or article will have its own unique and unchangeable ID (we call it a Persistent ID). What is important here is that every piece of content and its URL will remain available forever, as will all links from other materials to this ID (exactly like DOI works).
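A DOI-like resolver could behave as in the sketch below. The `urn:w3ds:` ID format and the resolver itself are hypothetical, illustrating only that the ID, and every link pointing at it, never changes even if the hosting location does:

```python
# Hypothetical sketch of Persistent IDs: an ID is minted once, never changes,
# and always resolves, DOI-style, even if the content is later served from a
# different URL. The ID format and resolver mapping are assumptions.
import uuid


class PersistentIdResolver:
    def __init__(self) -> None:
        self._locations: dict[str, str] = {}  # persistent ID -> current URL

    def mint(self, current_url: str) -> str:
        pid = f"urn:w3ds:{uuid.uuid4()}"  # assigned once, never reused
        self._locations[pid] = current_url
        return pid

    def move(self, pid: str, new_url: str) -> None:
        self._locations[pid] = new_url    # the ID itself never changes

    def resolve(self, pid: str) -> str:
        return self._locations[pid]


resolver = PersistentIdResolver()
pid = resolver.mint("https://platform-a.example/articles/42")
resolver.move(pid, "https://platform-b.example/mirror/42")  # content relocated
print(pid, "->", resolver.resolve(pid))  # old citations of pid still resolve
```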


Investigation Platforms

Because all data will be well preserved, with Persistent IDs and authenticated authors (even anonymous ones, as long as their aliases are persistent), we expect investigation platforms to emerge and provide due diligence on all kinds of potential fraud.


We believe that the Web 3.0 Data Space architecture will provide a trusted ecosystem that will eventually replace the current untrusted one and, at some point, allow us to get rid of fake news.

The old system was perfect for the world of paper and trusted publishers. Now we have to bring reliable reputation mechanics into the new digital reality.