At Red Marble, our core belief is that artificial intelligence will transform human performance.
And as part of our everyday work, we’re continually coming across (and creating) ways that AI is improving workforce productivity.
Our projects generally fall under five technical patterns of AI: prediction, recognition, hyper-personalisation, outlier detection and the one I’ll talk about here…
Conversation and Language AI
Broadly, conversation and language AI deals with language and speech. There are 3 aspects to this pattern:
- The ability to have a conversation with software, either via text or voice. Common examples of this include Alexa or Siri, but we’re seeing an increasing number of voice-based interfaces within enterprises.
- The ability to understand and analyse language; for example, we recently worked on a project where we analysed text in work notes to determine whether any contractual clauses may have been triggered.
- The ability to generate language – to create a natural language narrative from input data, for example auto-generating a project status update narrative from the data collected (a simple sketch of this idea follows this list).
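To make that third aspect concrete, here's a minimal, illustrative sketch of template-based narrative generation in Python. The project fields and wording are hypothetical, not drawn from any Red Marble engagement, and real systems use much richer data models (and, increasingly, neural generation).

```python
# Illustrative sketch only: turning structured project data into a short
# natural-language status narrative using a simple template. The field names
# and values below are hypothetical.

project = {
    "name": "Contract Review Automation",
    "status": "on track",
    "tasks_done": 18,
    "tasks_total": 25,
}

def status_narrative(p: dict) -> str:
    """Render a one-sentence project status update from structured data."""
    pct = round(100 * p["tasks_done"] / p["tasks_total"])
    return (
        f"{p['name']} is {p['status']}: {p['tasks_done']} of {p['tasks_total']} "
        f"tasks are complete ({pct}% of planned work)."
    )

print(status_narrative(project))
# Contract Review Automation is on track: 18 of 25 tasks are complete (72% of planned work).
```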
There’s been a huge advance recently in natural language generation, in the form of a model called GPT-3 (Generative Pre-trained Transformer, version 3) developed by California-based AI research lab OpenAI.
This technology was flagged in a research paper in May 2020 and released for a private beta trial in July 2020.
What IS GPT-3?
GPT-3 is a ‘language model’, which means that it is a sophisticated text predictor.
A human ‘primes’ the model by giving it a chunk of text, and GPT-3 predicts the statistically most appropriate next piece of text. It then uses its output as the next round of input, and continues building upon itself, generating more text.
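As a rough illustration of that predict-and-feed-back loop (a toy sketch, not GPT-3's actual architecture), the snippet below builds the simplest possible "text predictor": a bigram model that counts which word tends to follow which in a tiny corpus, then generates text by repeatedly appending its own most likely prediction.

```python
# A toy illustration of the "text predictor" idea: a bigram model that counts
# which word follows which in a tiny corpus, then generates text by repeatedly
# predicting the most likely next word and feeding its own output back in.
# This is a teaching sketch, not how GPT-3 is actually built.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def generate(prompt: str, length: int = 8) -> str:
    """Prime with a prompt, then keep appending the model's own predictions."""
    words = prompt.split()
    for _ in range(length):
        words.append(predict_next(words[-1]))  # output becomes the next input
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat sat on the"
```

GPT-3 works on the same principle, but it predicts sub-word tokens using a vastly more sophisticated statistical model.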
It’s special primarily because of its size. It’s the largest language model created to date, with around 175 billion learned variables (known as ‘parameters’ in this context), and it has essentially been fed a huge slice of the internet’s text to learn which words are likely to follow a given input prime.
What can GPT-3 do?
Some beta-testers have marvelled at what it can do – medical diagnoses, generating software code, creating Excel functions on the fly, writing university essays and generating CVs, just to name a few.
Others have rejoiced in posting examples showing that GPT-3, though sophisticated, remains easy to fool. After all, it has no common sense!
https://twitter.com/raphamilliere/status/1287047986233708546
https://twitter.com/an_open_mind/status/1284487376312709120
https://twitter.com/sama/status/1284922296348454913
Why is GPT-3 important?
For now this is an interesting technical experiment. The language model cannot be fine-tuned – yet. But it’s only a matter of time before industry-specific variants emerge, trained to skilfully generate excellent quality text in a specific domain.
Any industry where text-based outputs or reports are generated – market research, web development, copywriting, medical diagnoses, property valuation, higher education to name a few – could be impacted.
Is GPT-3 intelligent?
This is the big question for us and cuts to the essence of what we think makes for great AI.
In our view, GPT-3 is great at mathematically modelling and predicting what words a human would expect to see next. But it has no internal representation of what those words actually mean. It lacks the ability to reason within its writing; it lacks “common sense”.
But it’s a great predictor of what a human might deem to be acceptable language on a particular topic, and – we believe – that means that, hype aside, it’s a legitimate and credible model. We’ll be keeping a keen eye on it!
Thanks for checking out our business articles.
If you want to learn more, feel free to reach out to Red Marble AI.
You can click on the "Let's Talk" button on our website or email Dave, our AI expert, at d.timm@redmarble.ai. We appreciate your interest and look forward to sharing more with you!