In response to ChatGPT's explosive growth, which generates headlines daily, Google and Microsoft have showcased AI chatbots integrated into their search engines. AI is unquestionably the technology of the future.
But what kind of future? AI is a potent technology that can improve human learning, productivity, and enjoyment. Yet the idea underlying both Google's Bard and Microsoft's "New Bing" chatbot is flawed and dangerous: the assumption that readers don't care where their information comes from or who is behind it.
Both companies' AI engines are built on data from human authors, yet they are marketed as replacements for the very publications they learned from. The end result could be a more closed web, with less freely available information and fewer knowledgeable people left to offer sound advice.
Many publishers' business models depend heavily on the traffic that search engines send to their websites. No traffic means no advertising revenue, no e-commerce clicks, no money, and no jobs; some publishers could be forced to shut down entirely. Others might put up paywalls or block Bing and Google from indexing their pages. AI bots would then struggle to find reliable sources to scrape, making their recommendations less trustworthy, while readers would either pay more for high-quality information or have access to fewer perspectives.
Nevertheless, these bots are plainly trained by indexing the writing of human authors, most of whom were paid by publishers for their work. So when a bot declares Orion a fantastic constellation to observe, it learned about constellations from a website like our sister site Space.com. Bing's citations give us some idea of its sources, but because of how the models are trained, it is impossible to know which websites contributed which factual claims. Whether Google's and Bing's actions amount to plagiarism or copyright infringement is debatable and may be decided in court.
Getty Images is currently suing Stability AI, the company behind the Stable Diffusion image generator, alleging that it used 10 million of Getty's images to train the model. A large publisher could, in my view, do the same. A few years ago, Amazon was credibly accused of copying products from its own third-party sellers before manufacturing Amazon-branded versions, which unsurprisingly rank higher in its internal search. The pattern is not unfamiliar. One could argue that Bing's OpenAI-powered engine and Google's LaMDA are simply doing what a human author might do: reading original sources and then offering their own summaries of them. But no human author possesses that level of processing power or breadth of knowledge.
You might also make the case that publishers need a traffic source other than Google, but most Internet users are conditioned to use search as their first port of call. In 1997, people would visit Yahoo, browse a directory of websites, bookmark the ones that fit their interests, and return to them later. This mirrored how audiences worked for TV networks, magazines, and newspapers, since most people kept a small number of go-to sources for information.
You may trust a particular publisher, but you often find that publisher's website by searching Google or Bing, directly from your browser's address bar. Any publisher, or even a coalition of publishers, will find it hard to change that ingrained habit. More importantly, even an informed consumer is being treated like a machine here, asked to ingest information without questioning its reliability or source. That is not a good user experience.