OpenAI's Recently Unveiled GPT-4:

OpenAI, the artificial-intelligence company, recently unveiled GPT-4, the latest version of the large language model that underpins its well-known chatbot, ChatGPT. The company says GPT-4 contains significant enhancements, and the model has already awed people with its capacity to produce human-like text and to generate images and computer code from virtually any prompt. These abilities have the potential to transform science, but some researchers are frustrated that they cannot access the technology, its underlying code, or information about how it was trained. That secrecy, scientists say, makes the technology less useful for research and raises safety concerns.

One feature of the GPT-4 upgrade, released on March 14, is the ability to handle both images and text. As evidence of its proficiency with language, OpenAI, which is based in San Francisco, California, says that GPT-4 passed the US bar legal exam with results around the ninetieth percentile, compared with the tenth percentile for the previous version of ChatGPT. However, the technology is currently accessible only to paid ChatGPT subscribers.

“There’s a waiting list at the moment, so you cannot use it right now,” says Evi-Anne van Dis, a psychologist at the University of Amsterdam. But she has seen demonstrations of GPT-4. “It was mind-boggling when we watched some videos in which they demonstrated its capacities,” she says. In one instance she recounts, as a demonstration of GPT-4’s ability to handle images as inputs, the model was given a hand-drawn doodle of a website and produced the computer code needed to build that website.

However, the scientific community is dissatisfied with OpenAI’s secrecy about how the model was trained, what data it used, and how it actually functions. Sasha Luccioni, a researcher specializing in climate at the open-source AI company Hugging Face, asserts that “all of these closed-source models are essentially dead ends in science.” OpenAI can continue to build on its research, she adds, but it is a dead end for the community as a whole.

Tests using a “red team”:

Andrew White, a chemical engineer at the University of Rochester, has had privileged access to GPT-4 as a “red-teamer”: a person paid by OpenAI to test the platform in an effort to make it behave badly. He says he has had access to GPT-4 for the past six months, and at first it did not appear significantly different from previous iterations.

White asked the bot what chemical reaction steps were needed to make a compound, to predict the reaction yield, and to choose a catalyst. “I was actually not that impressed at first,” he says. “It was really surprising because it would look so real, but it would hallucinate an atom here. It would skip a step there,” he adds. But when he gave GPT-4 access to scientific papers as part of his red-team work, things changed dramatically. “It showed us that these models might not be so great on their own. But when you connect them to tools like the Internet and calculators or retrosynthesis planners, new abilities suddenly emerge.”

Concerns come along with those abilities. For instance, could GPT-4 permit the production of hazardous chemicals? White says that OpenAI engineers incorporated feedback from red-teamers like him into the model to deter GPT-4 from producing harmful, illegal, or dangerous content.

Fake news:

Another issue is the dissemination of false information. Models like GPT-4, which exist to predict the next word in a sentence, cannot be cured of hallucination, according to Luccioni. “Because there is so much hallucination, you can’t trust these models,” she says. And this remains a concern in the most recent version, she asserts, despite OpenAI’s claim that GPT-4 has improved safety.

Luccioni is also disappointed by OpenAI’s assurances regarding safety, because she does not have access to the training data. “You have no idea what the data are, so you can’t change them. I mean, it’s just impossible to do science with a model like this,” she says.


Psychologist Claudi Bockting, who works with van Dis in Amsterdam, is also concerned about the unresolved question of how GPT-4 was trained. It is very difficult for people to be accountable for something they cannot oversee, she says. One concern is that such models could be far more biased than, for instance, humans themselves are. And according to Luccioni, without access to the code behind GPT-4, it is impossible to identify the source of that bias or to remedy it.

Discussions on ethics:

Additionally, Bockting and van Dis are concerned that these AI systems are increasingly owned by big tech companies. They want to make sure the technology is properly tested and verified by scientists. “This is also an opportunity because, of course, working with big tech can speed up processes,” Bockting adds.

This year, van Dis, Bockting, and colleagues argued that there is an urgent need for a set of “living” guidelines to govern the use and development of AI and tools like ChatGPT. They are concerned that legislation around AI technologies will struggle to keep up with the pace of development. On April 11, Bockting and van Dis will hold an invitation-only summit at the University of Amsterdam to discuss these concerns with representatives from organizations including the World Economic Forum, the UNESCO science-ethics committee, and the Organisation for Economic Co-operation and Development.

Despite these concerns, White asserts, GPT-4 and its subsequent iterations will reshape science. “I think it’s actually going to be a huge change in science infrastructure, almost like the internet was a big change,” he says. It won’t take the place of scientists, he adds, but it might help with some tasks. “I believe that we will begin to realize that we can connect the libraries, data programs, and papers we use with computational work and even robotic experiments.”
