
GPT-3: The Gamechanger Is Here

by Raghav Khullar

By now, most of you have probably heard of, or seen demos of, this remarkable new AI language model. GPT-3 is one of the fastest and most accurate ML models of the modern age, and it has been the talk of social media for the past few weeks. So it was only natural to feel the urge to write about it. Read on to know more:


This mega machine learning model, created by OpenAI, can write its own op-eds, poems, articles, and even working code. About two years ago, we could have said with some confidence that Artificial General Intelligence was still far away, maybe even decades. GPT-3 makes that claim feel far less certain.

Most current work in AI is focused on one specific task at hand. Well, GPT-3 just changed that. GPT-3 is the latest language model from the OpenAI team: they published the paper in May 2020, and in July, OpenAI gave a few beta testers access to the model via an API.

The model was initially used to generate poetry. Later, it was applied to writing role-playing adventures and to creating simple apps with a few buttons. To everyone's astonishment, the results were remarkably accurate.

Under the hood

GPT-3 is a neural-network-powered language model. Like other such models, GPT-3 is trained on a massive dataset of unlabeled text.

During training, words are hidden from the text, and the model must learn to fill them in using only the surrounding words as context. (GPT-3, like its GPT predecessors, does this autoregressively: it learns to predict the next word given everything that came before.) It's a simple NLP training task, but it results in a powerful and generalized model. GPT-3 uses a transformer-based architecture, which was introduced in 2017 and has since become the dominant approach in NLP.
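The training objective described above can be sketched with a toy model. The bigram counter below is only a hypothetical, minimal stand-in for the idea: GPT-3 itself is a huge transformer over tokens, but the task is the same in spirit — given the words seen so far, predict the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which word follows each word in the training text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent word observed after `word`, if any."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the model predicts the next word",
    "the model learns from raw text",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # → "model"
```

The real model replaces these raw counts with billions of learned parameters, which is what lets it generalize to contexts it has never seen verbatim.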

With GPT-3, you don't need a fine-tuning step, and this is the heart of it. This is what gets people excited about GPT-3: it can perform custom language tasks without task-specific training data, given nothing more than a description of the task or a few examples in the prompt.
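In practice, "no fine-tuning" means the task is specified entirely inside the prompt: you show the model a handful of input → output pairs and let it infer the pattern. The helper below is a hypothetical sketch that only builds such a few-shot prompt string; the actual completion would come back from OpenAI's (beta) API.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the query."""
    lines = [task]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French:",
    [("cheese", "fromage"), ("dog", "chien")],
    "cat",
)
print(prompt)
```

Swapping the task description and examples is all it takes to turn the same model into a translator, a summarizer, or a code generator — no retraining involved.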

What makes GPT-3 unique

The model contains about 175 billion parameters, making it the largest language model ever created, trained on the largest dataset of any language model to date. Now coming to the MAGICAL part of GPT-3. As most data science professionals would agree, almost every model in AI requires fine-tuning and careful hyper-parameter adjustment before it performs well on a new task.
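To get a back-of-the-envelope sense of what 175 billion parameters means, consider raw storage. Assuming each parameter is stored as a 16-bit (2-byte) float — an assumption for illustration, not a published detail of how GPT-3 is deployed — the weights alone would occupy hundreds of gigabytes:

```python
# Rough storage estimate for GPT-3's weights.
# Assumption: 2 bytes per parameter (fp16); actual deployment details differ.
params = 175_000_000_000
bytes_per_param = 2
total_gb = params * bytes_per_param / 1e9
print(f"{total_gb:.0f} GB")  # → "350 GB"
```

That is far beyond the memory of any single GPU of the era, which is part of why the model is offered through an API rather than as a download.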


However, GPT-3 can do what no other model can do with such accuracy: it performs new tasks without any special tuning. It can become a translator, a programmer, a poet, or a famous author. Moreover, it does so with the user providing fewer than ten training examples.

So, what next?

As of now, GPT-3 is in private beta, and OpenAI is whitelisting selected developers and users. The model is expected to become officially available to the general public soon.


But when a new AI milestone comes along, it tends to get buried in hype. Sam Altman, who co-founded OpenAI with Elon Musk, tried to tone things down: "The GPT-3 hype is way too much. It's impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out," he said.
