Timestamp: Mar 14, 2023
OpenAI has officially announced GPT-4 - the latest version of its incredibly popular large language model powering artificial intelligence (AI) chatbots (among other cool things).
If you’ve heard the hype about ChatGPT (perhaps at an incredibly trendy party or a work meeting), then you may have a passing familiarity with GPT-3 (and GPT-3.5, a more recent improved version). GPT is the acronym for Generative Pre-trained Transformer, a machine learning technology that uses neural networks to bounce around raw input information tidbits like ping pong balls and turn them into something comprehensible and convincing to human beings. OpenAI claims that GPT-4 is its "most advanced AI system" that has been "trained using human feedback, to produce even safer, more useful output in natural language and code."
GPT-3 and GPT-3.5 are large language models (LLMs), a type of machine learning model, from the AI research lab OpenAI, and they are the technology that ChatGPT is built on. If you've been following recent developments in the AI chatbot arena, you probably haven't missed the excitement about this technology and the explosive popularity of ChatGPT. Now, the successor to this technology, and possibly to ChatGPT itself, has been released.

Cut to the chase
- What is it? GPT-4 is the latest version of the large language model that's used in popular AI chatbots
- When is it out? It was officially announced March 14, 2023
- How much is it? It's free to try out, and there are subscription tiers as well
GPT-4 was officially revealed on March 14, although it didn't come as too much of a surprise: Microsoft Germany CTO Andreas Braun, speaking at the AI in Focus - Digital Kickoff event, had let slip that the release of GPT-4 was imminent.
It had previously been speculated that GPT-4 would be multimodal, which Braun also confirmed. GPT-3 is already one of the most impressive natural language processing (NLP) models in history; NLP models are built with the aim of producing human-like language.
GPT-4 looks set to be the most ambitious NLP model we have seen yet, as it will be the largest language model in existence.
ChatGPT is about to get stronger. (Image credit: Shutterstock)

What is the difference between GPT-3 and GPT-4?
The type of input ChatGPT (GPT-3 and GPT-3.5) processes is plain text, and the output it can produce is natural language text and code. GPT-4's multimodality means that you may be able to enter different kinds of input - like video, sound (e.g. speech), images, and text. Like its capabilities on the input end, these multimodal faculties will also possibly allow for the generation of output like video, audio, and other types of content. Inputting and outputting both text and visual content could provide a huge boost in the power and capability of AI chatbots relying on GPT-4.
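For readers curious what "plain text in, text out" looks like in practice, a GPT-4 request through OpenAI's chat API takes a list of role-tagged messages rather than a single prompt. This is a minimal sketch assuming the `openai` Python library and an API key; the image-input capability described above was announced but not generally available at launch, so the example sticks to text. The helper function here only assembles the request payload.

```python
# Sketch of a GPT-4 chat request payload for OpenAI's chat completions
# API. Assumes the `openai` Python library and an OPENAI_API_KEY are
# available; image input was not publicly available at launch.

def build_chat_request(user_prompt, system_prompt="You are a helpful assistant."):
    """Assemble the role-tagged message list the chat API expects."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

# Actually sending the request requires network access and a key:
# import openai
# response = openai.ChatCompletion.create(**build_chat_request("Summarize GPT-4."))
# print(response["choices"][0]["message"]["content"])
```

The "system" message steers the model's behavior, while "user" messages carry the actual prompt; a multimodal request would presumably extend this same message structure.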
Furthermore, just as GPT-3.5 improved on GPT-3 by being more finely tuned for natural chat, by processing and outputting code, and by handling traditional completion tasks, GPT-4 should be an improvement on GPT-3.5's understanding. One of GPT-3/GPT-3.5's main strengths is that they are trained on an immense amount of text data sourced across the internet.
(Image credit: Rokas Tenys via Shutterstock)

What can GPT-4 do?
GPT-4 is trained on a diverse spectrum of multimodal information. This means that it should, in theory, be able to understand and produce language that is more likely to be accurate and relevant to what is being asked of it. This will be another marked improvement in the GPT series' ability to understand and interpret not just input data, but also the context in which it is given. Additionally, GPT-4 will have an increased capacity to perform multiple tasks at once.
OpenAI also claims that GPT-4 is 40% more likely to provide factual responses, which is encouraging to learn since companies like Microsoft plan to use GPT-4 in search engines and other tools we rely on for factual information. OpenAI has also said that it is 82% less likely to respond to requests for 'disallowed' content.
Safety is a big focus for GPT-4, with OpenAI spending over six months working to make it safer. It did this through an improved monitoring framework, and by working with experts in a variety of sensitive fields, such as medicine and geopolitics, to ensure the replies it gives are accurate and safe.
These new features promise greater ability and range to do a wider variety of tasks, greater efficiency of processing resources, the ability to complete multiple tasks simultaneously, and the potential for greater accuracy, which is a concern among current AI-bot and search engine engineers.
How GPT-4 will be presented is yet to be confirmed, as there is still a great deal that stands to be revealed by OpenAI. We do know, however, that Microsoft has exclusive rights to OpenAI's GPT-3 language model technology and has already begun rolling out ChatGPT-based technology into Bing. This leads many in the industry to predict that GPT-4 will also end up being embedded in Microsoft products (including Bing).
We have already seen the extended and persistent waves caused by GPT-3/GPT-3.5 and ChatGPT in many areas of our lives, including but not limited to tech such as content creation, education, and commercial productivity and activity. When you add more dimensions to the type of input that can be both submitted and generated, it's hard to predict the scale of the next upheaval.
The ethical discussions around AI-generated content have multiplied as quickly as the technology’s ability to generate content, and this development is no exception.
GPT-4 is far from perfect, as OpenAI admits. It still has limitations surrounding social biases - the company warns it could reflect harmful stereotypes, and it still has what the company calls 'hallucinations', where the model creates made-up information that is "incorrect but sounds plausible."
Even so, it's an exciting milestone for GPT in particular and AI in general, and the pace at which GPT has been evolving since ChatGPT's launch last year is incredibly impressive.