Will AI (Artificial Intelligence) improve and boost humanism and kindness, or terminate our world and our species while helping to spread fake news?
These questions describe incompatible scenarios, yet you'll find supporters for each of them. They can't all be right, so who's wrong?
Ideas spread because they're attractive, whether they're good or bad, right or wrong. In fact, the "truth" is simply one of the elements used or avoided to construct any story or idea.
There are different interests behind any statement, and messages are issued and received with enormous amounts of human bias.
We're living in the age of fake news. Fake news dresses misinformation up as real news, spreads it via some medium, and is produced with a specific motive, such as generating revenue, promoting or discrediting a reputation, boosting TV ratings (TRP), a movement, a corporation, etc.
During the 2020 CAA protests in India, WhatsApp was used to spread alarming amounts of misinformation, rumors, and false news about the CAA.
Using this technology, it was possible to take advantage of encrypted personal conversations and discussion groups of up to 256 people, making these groups much harder to monitor than the Facebook News Feed or Google's search results.
Last year, the two main Indian political parties took these tactics to a new scale by trying to influence India's 900 million eligible voters, creating content on Facebook and spreading it on WhatsApp. India is WhatsApp's largest market (more than 200 million Indian users), and a place where users forward more content than anywhere else in the world.
But these tactics aren't used only in the political arena: they also appear in activities ranging from manipulating share prices to attacking commercial rivals with fake customer reviews. How can fake news have such an impact? The answer lies in the way humans process information.
Massive amounts of data have been fed to AI systems that are already producing human-like synthetic text, powering a new scale of disinformation operation.
Based on Natural Language Processing (NLP) techniques, several lifelike text-generating systems have proliferated, and they're getting smarter every day.
OpenAI announced the launch of GPT-3, a tool that produces text so realistic that in some cases it's nearly impossible to distinguish from human writing. GPT-3 can also determine how concepts relate to each other and discern context. Tools like this can be used to generate misinformation, spam, phishing, abuses of legal and governmental processes, and even fake academic essays.
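GPT-3 itself is a massive neural network, but the core idea behind statistical text generation, predicting a plausible next word from the words that came before it, can be illustrated with a toy Markov-chain generator. This is a minimal sketch, not how GPT-3 works internally; the corpus and all names here are illustrative.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=10, seed=None):
    """Produce text by repeatedly sampling a word that followed the current context."""
    random.seed(seed)
    key = random.choice(list(model.keys()))
    out = list(key)
    while len(out) < length:
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break  # context never seen; a real model would back off
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the minister said the new policy will help citizens "
          "and the new policy will reduce costs for citizens")
model = build_model(corpus)
print(generate(model, length=10))
```

The output is locally fluent because every transition was seen in real text, which is exactly what makes machine-generated prose hard to spot; large neural models simply learn these continuation statistics over billions of documents instead of one sentence.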
Deepfakes refer to technologies that make it possible to fabricate evidence of scenes that never happened through faked video, photos, and audio. These technologies can enable bullying, boost scams, damage a company's reputation, or even pose a danger to democracies by putting words in the mouths of politicians.
But deepfakes have another insidious effect: they make it easier for liars to deny the truth, in two ways. First, if accused of having said or done something that they did in fact say or do, liars can generate and spread altered audio or images to create doubt.
The second way is simply to denounce the authentic as fake, a claim that becomes more plausible as the general public becomes more educated about the threats posed by deepfakes.
How can we fight this battle?
Fighting fake news can be a double-edged sword. On one side, warning news consumers and promoting tools so that they can question the sources of information is a very positive thing; on the other, we may be producing news consumers who no longer believe in the value of well-sourced news and mistrust everything. If we follow the latter path, we may end up in a general state of disorientation, with news consumers uninterested in, or unable to judge, the credibility of any news source.
We need technology to fight this battle. AI makes it possible to find words and patterns that indicate fake news in huge volumes of data, and tech companies are already working on it. Google is working on systems that can detect altered videos, making its datasets open source and encouraging others to develop deepfake detection methods.
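Production detectors are large machine-learned models trained on labeled articles, but the basic idea of scoring text against telltale patterns can be sketched in a few lines. Everything here is illustrative: the word list, the three checks, and the thresholds are toy assumptions, not features any real system is known to use.

```python
import re

# Illustrative signals only; real detectors learn thousands of features from data.
CLICKBAIT_WORDS = {"shocking", "unbelievable", "secret", "exposed", "miracle"}

def suspicion_score(headline):
    """Return a 0-1 score: higher means more fake-news-like signals."""
    lowered = headline.lower()
    words = re.findall(r"[a-z']+", lowered)
    signals = 0
    checks = 3
    if any(w in CLICKBAIT_WORDS for w in words):
        signals += 1  # sensational vocabulary
    if headline.isupper() or headline.count("!") >= 2:
        signals += 1  # shouting or excessive punctuation
    if not re.search(r"\b(said|according to|reported)\b", lowered):
        signals += 1  # no attribution to any source
    return signals / checks

print(suspicion_score("SHOCKING miracle cure EXPOSED!!!"))                        # high
print(suspicion_score("Minister said the bill passed, according to officials"))   # low
```

A rule list like this is brittle and easy to game, which is precisely why the industry moved to statistical models: instead of three hand-written checks, a classifier learns which combinations of wording, punctuation, and sourcing actually correlate with misinformation in large labeled datasets.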
YouTube announced that it will no longer allow election-related "fake" videos or anything that aims to mislead viewers about voting procedures.