AI is Advancing Alarmingly Fast and GPT-4 is Proof of That

As the field of natural language processing (NLP) progresses, OpenAI's ChatGPT has emerged as its leading generative AI. At its original launch in November 2022, ChatGPT was based on GPT-3.5, OpenAI's language model capable of generating human-like text using deep learning techniques. However, OpenAI recently unveiled the newest iteration of the GPT series, GPT-4, and it leaves GPT-3.5 in the dust.

GPT-4 far outperforms GPT-3.5 on tests designed for humans. Most impressively, GPT-4's simulated bar exam score lands in the top 10% of test takers, whereas GPT-3.5 scored in the bottom 10%. Other exams on which GPT-4 outscored GPT-3.5 include AP Calculus BC, AMC 12, GRE Quantitative, GRE Verbal, AP Chemistry, AP Physics 2, AP Statistics, and more.

It’s clear that GPT-4 is more knowledgeable than GPT-3.5, but it’s also safer and more reliable.

While ChatGPT was designed not to respond to dangerous prompts, users found ways to get it to produce harmful material anyway. This practice is known as jailbreaking, which generally refers to removing the boundaries set in place by a program's developers. One way users have jailbroken ChatGPT is by telling it to act out the role of DAN (Do Anything Now), a persona that always does what it is told. Under this persona, ChatGPT has produced virus scripts, hate speech, and anything else the user requests. GPT-4, however, is less likely to fall for these trap prompts, making it safer to build into other applications. Regarding reliability, OpenAI has stated that GPT-4 is much less likely than GPT-3.5 to hallucinate, that is, to confidently present made-up information as fact.

While knowledge, safety, and reliability have all been improved, GPT-4 also introduces something entirely new: multi-modal capability.

Being multi-modal means that GPT-4 can accept input in the form of both text and images, whereas previous versions could only handle text. Using this capability, GPT-4 can read text in multiple languages from images, explain memes, interpret graphs, and predict what will happen next based on how objects are positioned in an image. This opens up many valuable use cases, one of which has already been implemented in Be My Eyes, a mobile app designed to aid the visually impaired. Be My Eyes has a community of volunteers who help visually impaired users understand images through live chat. The app now offers a virtual assistant feature powered by GPT-4, in which the model acts as a volunteer, interpreting images for those who can't see them.
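For readers curious what this looks like in practice, here is a minimal sketch of sending an image alongside a text prompt through OpenAI's Chat Completions API. The model name, image URL, and question are placeholder assumptions, not details from this article; check OpenAI's documentation for the current request format.

```python
# A minimal sketch of multi-modal input: asking a vision-capable GPT-4
# model to describe an image. The model name and image URL below are
# placeholders, not specifics from this article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable GPT-4 model works here
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The request mixes a text part and an image part in a single user message, and the model's reply comes back as ordinary text, which is essentially what Be My Eyes' virtual assistant does at scale.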

GPT-4 is a major breakthrough in NLP. With its improved knowledge, safety, reliability, and multi-modal capability, it is poised to revolutionize the way we interact with language and images. 

Do you think we are approaching sentience with AI?
