How Voluntary Self-Verification Improves Trust and Safety on Social Media

Social media platforms like Twitter, Instagram, YouTube and Facebook have long offered “blue checkmarks” to authenticate the identities of prominent personalities, brands and companies.

While the “blue checkmarks” lend authenticity to important profiles, they also subtly create a class distinction between those who are “worthy” of verification and those who are not. This can go against the very idea of “inclusion” and “neutrality” that social media platforms pride themselves on.

A feature such as “voluntary self-verification” – which offers users the chance to verify their social media profiles and thus gain greater credibility – could help bridge the gap between prominent profiles and the host of creators who leverage social platforms to express themselves, engage and connect with like-minded people.

In addition to lending credibility and authenticating user profiles, the self-verification feature could bring a host of other benefits to social media platforms, users, and the ecosystem as a whole.

What is Voluntary Verification under Indian Laws?

As per the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, significant social media intermediaries in India are required to enable voluntary verification for their users, so that upon completion of a simple verification process, users can be given a demonstrable and visible mark of verification similar to the “blue checkmarks” seen next to prominent accounts on platforms like Twitter, Instagram or Facebook.

Benefits of Voluntary Self-Verification

Voluntary self-verification is an empowering feature on social media that allows users to prove the authenticity of their accounts on the platform, lending greater credibility to the thoughts and opinions they share. It reinforces the visibility of authentic voices and offers them the same privilege – in the form of a visible marker – that until now was reserved for eminent accounts. Unlike the elitist “blue checkmarks”, which social media companies often grant in recognition of the stature or achievements of prominent accounts, voluntary self-verification serves only to authenticate the user as a “real person” and not a bot.

Even though social media is ubiquitous, the space increasingly resembles the Wild West, where malicious actors (or even bot accounts) hide behind anonymity to spread hate and fake news. The lack of accountability and traceability of the users posting such content is what fans this wildfire. Very often, conversations on social media are far more confrontational and aggressive than they are in real life, as online anonymity gives users greater leeway to engage in wrongdoing. Self-verification would make users more accountable for everything they share on social platforms.

A key problem plaguing social media platforms is that of fake accounts, or bots. Large numbers of bots are used to generate followers, likes or shares for posts, which eventually go viral and are seen by real users. Facebook removed as many as 1.7 billion bot accounts in the last quarter of 2021 – a huge figure, yet still lower than the mind-boggling 1.8 billion bots removed in the previous quarter. In just six months, 3.5 billion bots were created on a single platform. At that rate – roughly seven billion a year – the algorithmically created “fake humans” on a single platform could, in a single calendar year, approach the total number of real humans on this planet.
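
As a back-of-the-envelope check of that extrapolation, here is a minimal sketch. It uses the quarterly figures cited above; the world-population estimate of roughly 7.9 billion in 2021 is an outside assumption, not a figure from this article.

```python
# Back-of-the-envelope extrapolation of bot-account creation on one platform.
# Quarterly removal figures are the ones cited in the article; the world
# population estimate (~7.9 billion in 2021) is an assumption for comparison.

removed_last_quarter = 1.7e9      # bots removed in the last quarter of 2021
removed_previous_quarter = 1.8e9  # bots removed in the quarter before that

six_month_total = removed_last_quarter + removed_previous_quarter  # 3.5 billion
annual_estimate = six_month_total * 2                              # ~7 billion a year

world_population_2021 = 7.9e9  # assumed estimate

print(f"Bots removed in six months: {six_month_total:,.0f}")
print(f"Annualised estimate:        {annual_estimate:,.0f}")
print(f"Share of world population:  {annual_estimate / world_population_2021:.0%}")
```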

Self-verified accounts can be easily differentiated from these bots. Once a particular brand, company or ideology can separate its verified followers from its unverified ones, the share of verified followers could itself become a new measure of market competition. Additionally, as self-verification grows in importance, social media platforms will be in a better position to formulate strong policies for removing these bots.

Additionally, voluntary self-verification can curb online abuse, bullying, the spread of hate and malice, and the harassment of women and children. By limiting anonymous accounts and bots, this feature can significantly improve the transparency and security of the platform, making social media a safer, more reliable space where users and creators can engage in healthy interactions.

While some may think that the voluntary nature of this provision will reduce it to a token exercise, history has taught us the importance of the network effect. As more and more people get used to consuming news and posts from verified accounts, the rejection of unverified accounts could become the new norm, and unverified accounts could lose their relevance in the social media universe.





Disclaimer

The opinions expressed above are those of the author.


