Is AI Sentient?


Should We Be Worried About AI Becoming Sentient?


OpenAI, the developer of GPT-3, was founded as a non-profit in 2015. In 2019, breaking from its previous open-source practice, OpenAI declined to publicly release GPT-3’s precursor, GPT-2, citing concerns that the model would be used to perpetuate fake news. OpenAI eventually released a version of GPT-2 that was 8% of the original model’s size. In the same year, OpenAI restructured as a for-profit company. In 2020, following a multi-billion-dollar investment in OpenAI, Microsoft announced that it had exclusively licensed GPT-3 for its products and services. The agreement still permits OpenAI to offer a public-facing API through which users can send text to GPT-3 and receive the model’s output, but only Microsoft has access to GPT-3’s source code.
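To make the public-facing API concrete, a request might look like the Python sketch below. This is a minimal illustration, assuming the pre-1.0 `openai` client library and an API key in the environment; the model name and sampling parameters are illustrative assumptions, not details from this article.

```python
# Minimal sketch of a text-completion request to OpenAI's public API.
# Assumes the pre-1.0 `openai` Python package; the model name and
# parameters below are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code keys

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model name, for illustration only
    prompt="Summarize the Turing test in one sentence.",
    max_tokens=60,    # cap the length of the generated continuation
    temperature=0.7,  # moderate sampling randomness
)

# The API returns candidate completions; print the first one.
print(response["choices"][0]["text"].strip())
```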

Large language models such as GPT-3 have drawn criticism from AI ethics researchers, among them Google’s Timnit Gebru, for the environmental impact of training and storing the models, a concern detailed in a 2021 paper co-authored by Gebru and the University of Washington’s Emily M. Bender.

The growing use of automated writing technologies based on GPT-3 and other language generators has raised concerns about academic integrity and sharpened questions about how universities and schools should gauge what constitutes academic misconduct such as plagiarism.

GPT-3 was trained on data from the Common Crawl dataset, a corpus of copyrighted articles, internet posts, web pages, and books scraped from 60 million domains over a period of 12 years. TechCrunch reports this training data includes copyrighted material from the BBC, The New York Times, Reddit, the full text of online books, and more. In its response to a 2019 Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation from the United States Patent and Trademark Office (USPTO), OpenAI argued that “Under current law, training AI systems [such as its GPT models] constitutes fair use,” but that “given the lack of case law on point, OpenAI and other AI developers like us face substantial legal uncertainty and compliance costs.”
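As a small illustration of how broad that crawl is, Common Crawl publishes a public CDX index that can be queried per domain; the Python sketch below lists a few captures of one site from a single snapshot. The specific crawl ID is an assumption here, as any published crawl identifier would do.

```python
# Sketch: checking which captures of a domain appear in a Common Crawl
# snapshot, via the public CDX index API at index.commoncrawl.org.
# The collection name below is an assumption; any published crawl ID works.
import json

import requests

INDEX = "https://index.commoncrawl.org/CC-MAIN-2023-06-index"

resp = requests.get(
    INDEX,
    params={"url": "bbc.com/*", "output": "json", "limit": 5},
    timeout=30,
)
resp.raise_for_status()

# The API returns one JSON record per line, one per captured page.
for line in resp.text.splitlines():
    record = json.loads(line)
    print(record["timestamp"], record["url"])
```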

See also: BERT, LaMDA, natural language processing, Wu Dao, ChatGPT, hallucination (artificial intelligence)

Bidirectional Encoder Representations from Transformers (BERT) is a masked-language model published in 2018 by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. A 2020 literature survey concluded that “in a little over a year, BERT has become a ubiquitous baseline in NLP experiments”, counting over 150 research publications analyzing and improving the model.
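To see what “masked-language model” means in practice, the sketch below uses the Hugging Face `transformers` library (an assumed toolkit, not one named by the survey) to have a pretrained BERT checkpoint predict a hidden token.

```python
# Sketch: masked-language modeling with a pretrained BERT checkpoint.
# Assumes the Hugging Face `transformers` library (with PyTorch) is installed.
from transformers import pipeline

# Wrap the public bert-base-uncased checkpoint in a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT's pre-training objective: predict the token hidden behind [MASK].
for prediction in fill_mask("The goal of NLP is to understand [MASK] language."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Bidirectionality is the key design choice here: the model conditions on context both before and after the mask, which is what distinguishes BERT from left-to-right generators such as GPT-3.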


LaMDA is a family of conversational neural language models developed by Google. The first generation was announced during the 2021 Google I/O keynote, and the second at the following year’s event. In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine claimed that the chatbot had become sentient. The scientific community has largely rejected Lemoine’s claims, though the episode has prompted discussion of the efficacy of the Turing test, which measures whether a computer can pass for a human. In February 2023, Google announced Bard, a conversational artificial-intelligence chatbot powered by LaMDA, to counter the rise of OpenAI’s ChatGPT.

