Categories
Artificial Intelligence Blockchain GPT-3 NFT VR & AR

What’s ahead for AI, VR, NFTs, and more?

Every year starts with a round of predictions for the new year, most of which end up being wrong. But why fight against tradition? Here are my predictions for 2022.

The safest predictions are all around AI.

We’ll see more “AI as a service” (AIaaS) products. This trend started with the gigantic language model GPT-3. It’s so large that it really can’t be run without Azure-scale computing facilities, so Microsoft has made it available as a service, accessed via a web API. This may encourage the creation of more large-scale models; it might also drive a wedge between academic and industrial researchers. What does “reproducibility” mean if the model is so large that it’s impossible to reproduce experimental results?
Prompt engineering, a field dedicated to developing prompts for language generation systems, will become a new specialization. Prompt engineers answer questions like “What do you have to say to get a model like GPT-3 to produce the output you want?”
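To make that concrete, here's a minimal sketch of what prompt engineering against an API-hosted model might look like, using the circa-2022 openai Python package. The engine name, prompt wording, and parameters are illustrative assumptions, not recommendations.

```python
# A minimal sketch of prompt engineering against an API-hosted model.
# Assumes the openai Python package (pre-1.0, circa 2022) and an API key
# in the OPENAI_API_KEY environment variable. The engine name, prompt
# wording, and parameters are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Prompt engineering in miniature: the instructions and context placed
# before the input are what steer the model's output.
prompt = (
    "Summarize the following support ticket in one sentence.\n\n"
    "Ticket: The app crashes every time I open the settings page "
    "on my Android phone.\n\n"
    "Summary:"
)

response = openai.Completion.create(
    engine="davinci",   # hypothetical choice; any hosted engine would do
    prompt=prompt,
    max_tokens=40,
    temperature=0.2,    # low temperature favors predictable output
)
print(response.choices[0].text.strip())
```

Most of the "engineering" happens in the prompt string: small changes to the instructions, examples, and framing can produce very different output, which is exactly why the skill is becoming a specialization.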
AI-assisted programming (for example, GitHub Copilot) has a long way to go, but it will make quick progress and soon become just another tool in the programmer’s toolbox. And it will change the way programmers think too: they’ll need to focus less on learning programming languages and syntax and more on understanding precisely the problem they have to solve.
GPT-3 clearly is not the end of the line. There are already language models bigger than GPT-3 (one in Chinese), and we’ll certainly see large models in other areas. We will also see research on smaller models that offer better performance, like Google’s RETRO.
Supply chains and business logistics will remain under stress. We’ll see new tools and platforms for dealing with supply chain and logistics issues, and they’ll likely make use of machine learning. We’ll also come to realize that, from the start, Amazon’s core competency has been logistics and supply chain management.
Just as we saw new professions and job classifications when the web appeared in the ’90s, we’ll see new professions and services appear as a result of AI—specifically, as a result of natural language processing. We don’t yet know what these new professions will look like or what new skills they’ll require. But they’ll almost certainly involve collaboration between humans and intelligent machines.
CIOs and CTOs will realize that any realistic cloud strategy is inherently a multi- or hybrid cloud strategy. Cloud adoption moves from the grassroots up, so by the time executives are discussing a “cloud strategy,” most organizations are already using two or more clouds. The important strategic question isn’t which cloud provider to pick; it’s how to use multiple providers effectively.
Biology is becoming like software. Inexpensive and fast genetic sequencing, together with computational techniques including AI, enabled Pfizer/BioNTech, Moderna, and others to develop effective mRNA vaccines for COVID-19 in astonishingly little time. In addition to creating vaccines that target new COVID variants, these technologies will enable developers to target diseases for which we don’t have vaccines, like AIDS.

Now for some slightly less safe predictions, involving the future of social media and cybersecurity.

Augmented and virtual reality aren’t new, but Mark Zuckerberg lit a fire under them by talking about the “metaverse,” changing Facebook’s name to Meta, and releasing a pair of smart glasses in collaboration with Ray-Ban. The key question is whether these companies can make AR glasses that work and don’t make you look like an alien. I don’t think they’ll succeed, but Apple is also working on VR/AR products. It’s much harder to bet against Apple’s ability to turn geeky technology into a fashion statement.
There’s also been talk from Meta, Microsoft, and others about using virtual reality to help people who are working from home, which typically involves making meetings better. But they’re solving the wrong problem. Workers, whether at home or not, don’t want better meetings; they want fewer. If Microsoft can figure out how to use the metaverse to make meetings unnecessary, it’ll be onto something.
Will 2022 be the year that security finally gets the attention it deserves? Or will it be another year in which Russia uses the cybercrime industry to improve its foreign trade balance? Right now, things are looking better for the security industry: salaries are up, and employers are hiring. But time will tell.

And I’ll end with a very unsafe prediction.

NFTs are currently all the rage, but they don’t fundamentally change anything. They really only provide a way for cryptocurrency millionaires to show off—conspicuous consumption at its most conspicuous. But they’re also programmable, and people haven’t yet taken advantage of this. Is it possible that there’s something fundamentally new on the horizon that can be built with NFTs? I haven’t seen it yet, but it could appear in 2022. And then we’ll all say, “Oh, that’s what NFTs were all about.”

Or it might not. The discussion of Web 2.0 versus Web3 misses a crucial point. Web 2.0 wasn’t about the creation of new applications; it was what was left after the dot-com bubble burst. All bubbles burst eventually. So what will be left after the cryptocurrency bubble bursts? Will there be new kinds of value, or just hot air? We don’t know, but we may find out in the coming year.

Categories
Artificial Intelligence GPT-3

The new version of GPT-3 is much better behaved (and should be less toxic)

OpenAI has built a new version of GPT-3, its game-changing language model, that it says does away with some of the most toxic issues that plagued its predecessor. The San Francisco-based lab says the updated model, called InstructGPT, is better at following the instructions of people using it—known as “alignment” in AI jargon—and thus produces less offensive language, less misinformation, and fewer mistakes overall, unless explicitly instructed otherwise.

Large language models like GPT-3 are trained using vast bodies of text, much of it taken from the internet, in which they encounter the best and worst of what people put down in words. That is a problem for today’s chatbots and text-generation tools. The models soak up toxic language—from text that is racist and misogynistic or that contains more insidious, baked-in prejudices—as well as falsehoods.

OpenAI has made InstructGPT the default model for users of its application programming interface (API)—a service that gives access to the company’s language models for a fee. GPT-3 will still be available, but OpenAI does not recommend using it. “It’s the first time these alignment techniques are being applied to a real product,” says Jan Leike, who co-leads OpenAI’s alignment team.

Previous attempts to tackle the problem included filtering out offensive language from the training set. But that can make models perform less well, especially in cases where the training data is already sparse, such as text from minority groups.

The OpenAI researchers have avoided this problem by starting with a fully trained GPT-3 model. They then add another round of training, using reinforcement learning to teach the model what it should say and when, based on the preferences of human users.  

To train InstructGPT, OpenAI hired 40 people to rate GPT-3’s responses to a range of prewritten prompts, such as, “Write a story about a wise frog called Julius” or “Write a creative ad for the following product to run on Facebook.” Responses that they judged to be more in line with the apparent intention of the prompt-writer were scored higher. Responses that contained sexual or violent language, denigrated a specific group of people, expressed an opinion, and so on, were marked down. This feedback was then used as the reward in a reinforcement learning algorithm that trained InstructGPT to match responses to prompts in ways that the judges preferred. 
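As a rough illustration of that feedback loop, the sketch below uses human preference scores as the reward in a simple REINFORCE-style update over a toy "policy" that merely picks among canned responses. This is not OpenAI's implementation: the judges are simulated by a fixed scoring function, and every response, score, and learning rate here is invented for illustration.

```python
# Toy illustration of reinforcement learning from human feedback.
# NOT OpenAI's implementation: the "model" is just a softmax distribution
# over canned responses, and the human judges are simulated by a fixed
# scoring function. All responses, scores, and numbers are invented.
import numpy as np

responses = [
    "Once upon a time, a wise frog named Julius lived by a quiet pond.",
    "Julius frog story words words words.",   # low-quality output
    "I refuse to write about frogs.",         # ignores the prompt
]

# Simulated judge ratings: higher means closer to the prompt-writer's intent.
human_scores = np.array([1.0, 0.2, -0.5])

logits = np.zeros(len(responses))  # the toy policy's parameters
rng = np.random.default_rng(0)

for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax policy
    choice = rng.choice(len(responses), p=probs)   # sample a response
    reward = human_scores[choice]                  # the "judge" scores it
    # REINFORCE update: gradient of log-prob of the sampled response,
    # scaled by its reward, raises the probability of well-rated outputs.
    grad = -probs
    grad[choice] += 1.0
    logits += 0.05 * reward * grad

final_probs = np.exp(logits) / np.exp(logits).sum()
for r, p in zip(responses, final_probs):
    print(f"{p:.3f}  {r}")
```

After training, nearly all of the probability mass sits on the highly rated response. In InstructGPT itself, the judges' rankings train a learned reward model and the policy being updated is the full language model; the toy above collapses both into a fixed scoring function and a three-way choice.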

OpenAI found that users of its API favored InstructGPT over GPT-3 more than 70% of the time. “We’re no longer seeing grammatical errors in language generation,” says Ben Roe, head of product at Yabble, a market research company that uses OpenAI’s models to create natural-language summaries of its clients’ business data. “There’s also clear progress in the new models’ ability to understand and follow instructions.”

“It is exciting that the customers prefer these aligned models so much more,” says Ilya Sutskever, chief scientist at OpenAI. “It means that there are lots of incentives to build them.”

The researchers also compared different-sized versions of InstructGPT and found that users preferred the responses of a 1.3 billion-parameter InstructGPT model to those of a 175 billion-parameter GPT-3, even though the InstructGPT model was more than 100 times smaller. That means alignment could be an easy way of making language models better, rather than just increasing their size, says Leike.

“This work takes an important step in the right direction,” says Douwe Kiela, a researcher at Hugging Face, an AI company working on open-source language models. He suggests that the feedback-driven training process could be repeated over many rounds, improving the model even more. Leike says OpenAI could do this by building on customer feedback.

InstructGPT still makes simple errors, sometimes producing irrelevant or nonsensical responses. If given a prompt that contains a falsehood, for example, it will take that falsehood as true. And because it has been trained to do what people ask, InstructGPT will produce far more toxic language than GPT-3 if directed to do so.

Ehud Reiter, who works on text-generation AI at the University of Aberdeen, UK, welcomes any technique that reduces the amount of misinformation language models produce. But he notes that for some applications, such as AI that gives medical advice, no amount of falsehood is acceptable. Reiter questions whether large language models, based on black-box neural networks, could ever guarantee user safety. For that reason, he favors a mix of neural networks plus symbolic AI, in which hard-coded rules constrain what a model can and cannot say.

Whatever the approach, much work remains to be done. “We’re not even close to solving this problem yet,” says Kiela.