What is GPT-4 & What is it Capable Of?


GPT-4 is a large language model, an artificial intelligence system that can mimic human-like speech and reasoning. It does so by training on a vast library of existing human communication, from classic works of literature to large swaths of the internet.

Artificial intelligence of this type builds on its training to predict which word, character or other token is likely to come next in a sequence. This cheat sheet explores GPT-4 from a high level: how to access GPT-4 for consumer or business use, who made it and how it works.

What is GPT-4?

GPT-4 is a large multimodal model that accepts text and image inputs and can mimic prose produced by a human. It is able to solve written problems and generate or summarize original text. GPT-4 is the fourth generation of OpenAI’s foundation model.

The GPT-4 API, along with the GPT-3.5 Turbo, DALL·E and Whisper APIs, became generally available on July 7, 2023.

On May 13, 2024, OpenAI revealed GPT-4o, the next generation of GPT-4, which adds improved voice and vision capabilities.

As of July 2024, the organization also offers a smaller model, GPT-4o mini. It costs less (15 cents per million input tokens and 60 cents per million output tokens) than the base model and is available in the Assistants, Chat Completions and Batch APIs, as well as in all tiers of ChatGPT. It handles only text and vision for now.

Who owns GPT-4?

GPT-4 is owned by OpenAI, an independent artificial intelligence company based in San Francisco. OpenAI was founded in 2015; it started out as a nonprofit but has since shifted to a for-profit model. OpenAI has received funding from Elon Musk, Microsoft, Amazon Web Services, Infosys, and other corporate and individual backers.

OpenAI has also produced ChatGPT, a free-to-use chatbot spun out of the previous generation model, GPT-3.5, and DALL-E, an image-generating deep learning model. As the technology improves and grows in its capabilities, OpenAI reveals less and less about how its AI solutions are trained.

When was GPT-4 released?

OpenAI announced its release of GPT-4 on March 14, 2023. GPT-4 was immediately available for ChatGPT Plus subscribers, while other interested users needed to join a waitlist for access.

SEE: Salesforce looped generative AI into its sales and field service products. (TechRepublic)

How can you access GPT-4?

The public version of GPT-4 is available at the ChatGPT portal site.

On July 7, 2023, OpenAI made the GPT-4 API available for general use for “all existing API developers with a history of successful payments.” OpenAI also said it expected to open access to new developers by the end of July 2023, after which rate limits may be raised depending on the amount of compute resources available.

In August 2023, GPT-4 was packaged as part of ChatGPT Enterprise. Users of the business-oriented subscription receive unlimited use of a high-speed pipeline to GPT-4.

How much does GPT-4 cost to use?

For an individual, the ChatGPT Plus subscription costs $20 per month.

Pricing for the text-only GPT-4 API starts at $0.03 per 1,000 prompt tokens (one token is about four characters in English) and $0.06 per 1,000 completion (output) tokens, OpenAI said. (OpenAI explains how tokens are counted in its documentation.)

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

A second option with a longer context window (about 50 pages of text), known as gpt-4-32k, is also available. This option costs $0.06 per 1,000 prompt tokens and $0.12 per 1,000 completion tokens.
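
Based on the per-1,000-token rates above and the rough four-characters-per-token rule of thumb, a back-of-the-envelope cost estimate might look like the following Python sketch (the rates and character-to-token ratio are the figures quoted above; the function name is illustrative).

# Back-of-the-envelope GPT-4 API cost estimate, using the per-1,000-token
# rates quoted above and the rough rule of ~4 characters per English token.
RATES_PER_1K = {
    "gpt-4":     {"prompt": 0.03, "completion": 0.06},  # dollars per 1,000 tokens
    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},
}

def estimate_cost(model: str, prompt_chars: int, completion_chars: int) -> float:
    """Approximate dollar cost of one call, assuming ~4 characters per token."""
    prompt_tokens = prompt_chars / 4
    completion_tokens = completion_chars / 4
    rate = RATES_PER_1K[model]
    return (prompt_tokens * rate["prompt"] + completion_tokens * rate["completion"]) / 1000

# A 2,000-character prompt that produces a 4,000-character answer:
print(f"gpt-4:     ${estimate_cost('gpt-4', 2000, 4000):.4f}")      # about $0.075
print(f"gpt-4-32k: ${estimate_cost('gpt-4-32k', 2000, 4000):.4f}")  # about $0.15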

Other AI assistance services like Microsoft Copilot and GitHub’s Copilot X now run on GPT-4.

What are the capabilities of GPT-4?

Like its predecessor, GPT-3.5, GPT-4’s main claim to fame is its output in response to natural language questions and other prompts. OpenAI says GPT-4 can “follow complex instructions in natural language and solve difficult problems with accuracy.” Specifically, GPT-4 can solve math problems, answer questions, make inferences or tell stories. In addition, GPT-4 can summarize large chunks of content, which could be useful for either consumer reference or business use cases, such as a nurse summarizing the results of their visit to a client.

OpenAI tested GPT-4’s ability to produce coherent, well-ordered answers using several skills assessments, including AP and Olympiad exams and the Uniform Bar Examination. It scored in the 90th percentile on the bar exam and the 93rd percentile on the SAT Evidence-Based Reading & Writing exam, and it earned varying scores on AP exams.

These are not true tests of knowledge; instead, running GPT-4 through standardized tests shows the model’s ability to form correct-sounding answers out of the mass of preexisting writing and art it was trained on.

GPT-4 predicts which token is likely to come next in a sequence. (One token may be a section of a string of numbers, letters, spaces or other characters.) While OpenAI is closed-mouthed about the specifics of GPT-4’s training, LLMs are typically trained by first translating information in a dataset into tokens; the dataset is then cleaned to remove garbled or repetitive data. Next, AI companies typically employ people to apply reinforcement learning to the model, nudging the model toward responses that make common sense. The weights, which put very simply are the parameters that tell the AI which concepts are related to each other, may be adjusted in this stage.
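
OpenAI’s tokenizer is published in the open-source tiktoken library, which can be used to see how a piece of text breaks into tokens. The sketch below only illustrates tokenization; it does not reflect anything OpenAI has disclosed about how GPT-4 itself was trained.

# Inspecting GPT-4-style tokenization with OpenAI's open-source tiktoken
# library (pip install tiktoken). This shows how text maps to tokens; it is
# not a window into GPT-4's undisclosed training pipeline.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

text = "GPT-4 predicts which token is likely to come next in a sequence."
token_ids = enc.encode(text)

print(token_ids)                             # integer IDs the model actually sees
print([enc.decode([t]) for t in token_ids])  # the text fragment behind each ID
print(f"{len(text)} characters -> {len(token_ids)} tokens")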

The Chat Completions API and its upgrades

The Chat Completions API lets developers use GPT-4 through a freeform text prompt format. With it, they can build chatbots or other functions requiring back-and-forth conversation. The underlying Completions API first became available in June 2020; the Chat Completions format was introduced in March 2023.
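
A minimal Chat Completions call with the official openai Python package (v1.x) might look like the sketch below; the model name, prompts and the OPENAI_API_KEY environment variable are placeholder assumptions.

# A minimal Chat Completions sketch with the official openai Python package
# (v1.x). Assumes OPENAI_API_KEY is set in the environment; the model name
# and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."},
    ],
)

print(response.choices[0].message.content)

Each back-and-forth turn is appended to the messages list, which is how the freeform conversation format carries context from one reply to the next.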

In January 2024, OpenAI retired its older Completions API models: the base ada, babbage, curie and davinci models were replaced by babbage-002 and davinci-002, while tasks using the older instruction-tuned models transitioned to gpt-3.5-turbo-instruct.

GPT-3.5 Turbo fine-tuning and other news

On Aug. 22, 2023, OpenAI announced the availability of fine-tuning for GPT-3.5 Turbo. This enables developers to customize models and test those custom models for their specific use cases.
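
A rough sketch of starting such a fine-tuning job with the openai Python package (v1.x) follows; the JSONL file name is hypothetical, and each line of that file is expected to contain one chat-formatted training example.

# A hedged sketch of launching a GPT-3.5 Turbo fine-tuning job with the
# openai Python package (v1.x). The training file name is a placeholder;
# each JSONL line should hold one {"messages": [...]} training example.
from openai import OpenAI

client = OpenAI()

# Upload the training data (hypothetical file name)
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tuning job against the uploaded file
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)  # poll the job until it finishes, then call the custom model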

In January 2024, OpenAI released the latest version of its Moderation API, which helps developers pinpoint potentially harmful text. The latest version is known as text-moderation-007 and works in accordance with OpenAI’s Safety Best Practices.
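
In practice, a developer might screen text roughly as follows with the openai Python package (v1.x); the sample input string is a placeholder.

# A short sketch of calling the Moderation API (openai Python package, v1.x)
# to flag potentially harmful text before sending it on to a model.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(input="Some user-submitted text to screen.")

moderation = result.results[0]
print(moderation.flagged)     # True if any moderation category was triggered
print(moderation.categories)  # per-category booleans (e.g. harassment, violence)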

What are the limitations of GPT-4 for business?

Like other AI tools of its ilk, GPT-4 has limitations. For example, GPT-4 does not check if its statements are accurate. Its training on text and images from throughout the internet can make its responses nonsensical or inflammatory. However, OpenAI has digital controls and human trainers to try to keep the output as useful and business-appropriate as possible.

Additionally, GPT-4 tends to create ‘hallucinations,’ which is the artificial intelligence term for inaccuracies. Its words may make sense in sequence since they’re based on probabilities established by what the system was trained on, but they aren’t fact-checked or directly connected to real events. OpenAI is working on reducing the number of falsehoods the model produces.

Another major limitation is the question of whether sensitive corporate information that’s fed into GPT-4 will be used to train the model and expose that data to external parties. Microsoft, which has a resale deal with OpenAI, plans to offer private ChatGPT instances to corporations later in the second quarter of 2023, according to an April report.

Like GPT-3.5, GPT-4 does not incorporate information more recent than September 2021 in its training data. One of GPT-4’s competitors, Google Bard, can draw on up-to-the-minute information because it is connected to Google Search.

AI can suffer model collapse when trained on AI-created data; this problem is becoming more common as AI models proliferate.

GPT-4 vs. GPT-3.5 or ChatGPT

OpenAI’s second most recent model, GPT-3.5, differs from the current generation in a few ways. OpenAI has not revealed the size of the model or the dataset GPT-4 was trained on, but says the training involved “more data and more computation” than the billions of parameters behind ChatGPT. GPT-4 has also shown more deftness when it comes to writing a wider variety of materials, including fiction.

GPT-4 performs better than ChatGPT on the standardized tests mentioned above, and its answers to prompts may be more concise and easier to parse. However, OpenAI notes that a fine-tuned GPT-3.5 Turbo can match or outperform GPT-4 on certain narrow, custom tasks.

Additionally, GPT-4 is better than GPT-3.5 at business tasks such as scheduling or summarization. GPT-4 is “82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses,” OpenAI said.

SEE: Learn how to use ChatGPT. (TechRepublic Academy)

Another large difference between the two models is that GPT-4 can handle images. It can serve as a visual aid, describing objects in the real world or determining the most important elements of a website and describing them.

“Over a range of domains — including documents with text and photographs, diagrams or screenshots — GPT-4 exhibits similar capabilities as it does on text-only inputs,” OpenAI wrote in its GPT-4 documentation.
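
A rough sketch of sending an image alongside a text question through the Chat Completions API is below (openai Python package, v1.x); the image URL and the vision-capable model name used here are assumptions for illustration.

# A hedged sketch of a text-plus-image request through the Chat Completions
# API (openai Python package, v1.x). The image URL is a placeholder and a
# vision-capable model is assumed.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What are the most important elements on this page?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)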

The latest GPT-4 news

Microsoft announced in early August that GPT-4 availability in Azure OpenAI Service has expanded to several new coverage regions.

As of November 2023, users already exploring GPT-3.5 fine-tuning can apply to the GPT-4 fine-tuning experimental access program.

OpenAI also launched a Custom Models program, which offers even more customization than fine-tuning allows for. Organizations can apply through OpenAI for a limited number of slots, with pricing that starts at $2 million to $3 million.

At OpenAI’s first DevDay conference in November, OpenAI showed that GPT-4 Turbo could handle more content at a time (over 300 pages of a standard book) than GPT-4. GPT-4 Turbo is available in preview as of November. OpenAI lowered prices for GPT-4 Turbo in November 2023. The price of GPT-3.5 Turbo was lowered several times, most recently in January 2024.

On April 9, 2024, OpenAI announced that GPT-4 with Vision is generally available in the GPT-4 API, enabling developers to use one model to analyze both text and images with one API call.




