Categories
Orion News

Cryptius Corporation Merges with Orion Innovations

We are incredibly pleased to announce that Orion Innovations has acquired the Cryptius Corporation in an all-stock deal. The deal comes on the heels of strong performance by Cryptius on a variety of Orion-led projects within the commercial and defense spaces. Both Orion and Cryptius specialize in building end-to-end Artificial Intelligence, Machine Learning, and Data Science applications in a variety of verticals. The deal was agreed to in principle by Marc Asselin, CEO of Orion Innovations, and Stephen Plainte, CEO and Data Scientist at Cryptius, and will close in the coming weeks.

“There is an incredible synergy that exists between our two companies. We realized very early on that it just made more sense to tackle large, complex projects together to deliver even more value for our commercial and government clients” said Mr. Plainte. He continued, “Additionally, Cryptius has a number of customers that would benefit greatly from this synergy, and we couldn’t be more excited to introduce them to the new team.”

Mr. Asselin shared a similar take on the merger, saying, “From the beginning, the Cryptius team has been an incredible asset and resource to us. Their corporate culture matches incredibly well with ours, and their technical skills really complement our own. We are really excited and blessed at the opportunity to integrate the Cryptius team with Orion.”

Orion was founded in 2008 by Mr. Asselin after decades of success as CTO for companies in many different verticals. Mr. Plainte founded Cryptius in May of 2021 with the goal of providing jobs to highly talented technologists who come from non-traditional backgrounds, and of serving the traditionally underserved SMB segment with advanced technology from the AI industry.

The new Orion executive team is rounded out by Mr. Asselin as CEO, Mr. Plainte as CTO, Maria Morales as COO, Patrick Mills as Chief Compliance Officer, John Riley III as VP of Government Services, and Mike Phillips as VP of Commercial Services.

For press inquiries:

hi@goorion.com

561-900-3712

Categories
Artificial Intelligence

The AI promise: Put IT on autopilot

Sercompe Business Technology provides essential cloud services to roughly 60 corporate clients, supporting a total of about 50,000 users. So, it’s crucial that the Joinville, Brazil, company’s underlying IT infrastructure deliver reliable service with predictably high performance. But with a complex IT environment that includes more than 2,000 virtual machines and 1 petabyte—equivalent to a million gigabytes—of managed data, it was overwhelming for network administrators to sort through all the data and alerts to figure out what was going on when problems cropped up. And it was tough to ensure network and storage capacity were where they should be, or when to do the next upgrade.

To help untangle the complexity and increase its support engineers’ efficiency, Sercompe invested in an artificial intelligence operations (AIOps) platform, which uses AI to get to the root cause of problems and warn IT administrators before small issues become big ones. Now, according to cloud product manager Rafael Cardoso, the AIOps system does much of the work of managing its IT infrastructure—a major boon over the old manual methods.

“Figuring out when I needed more space or capacity—it was a mess before. We needed to get information from so many different points when we were planning. We never got the number correct,” says Cardoso. “Now, I have an entire view of the infrastructure and visualization from the virtual machines to the final disk in the rack.” AIOps brings visibility over the whole environment.

Before deploying the technology, Cardoso was where countless other organizations find themselves: snarled in an intricate web of IT systems, with interdependencies between layers of hardware, virtualization, middleware, and finally, applications. Any disruption or downtime could lead to tedious manual troubleshooting, and ultimately, a negative impact on business: a website that won’t function, for example, and irate customers.

AIOps platforms help IT managers master the task of automating IT operations by using AI to deliver quick intelligence about how the infrastructure is doing—areas that are humming along versus places that are in danger of triggering a downtime event. Credit for coining the term AIOps in 2016 goes to Gartner: it’s a broad category of tools designed to overcome the limitations of traditional monitoring tools. The platforms use self-learning algorithms to automate routine tasks and understand the behavior of the systems they monitor. They pull insights from performance data to identify and monitor irregular behavior on IT infrastructure and applications.
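
To make “self-learning” monitoring a little more concrete, here is a minimal sketch of the kind of baseline-and-deviation check such platforms run on performance telemetry. It is an illustration only, not any vendor’s implementation; the metric, window size, and alert threshold are hypothetical choices.

```python
from collections import deque
from statistics import mean, stdev

class MetricMonitor:
    """Illustrative anomaly detector: learn a rolling baseline for one metric
    (say, storage latency in ms) and flag samples that stray far from it."""

    def __init__(self, window: int = 288, threshold: float = 4.0):
        self.history = deque(maxlen=window)  # e.g., 24 hours of 5-minute samples
        self.threshold = threshold           # alert at this many standard deviations

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it looks anomalous against the baseline."""
        anomalous = False
        if len(self.history) >= 30:  # wait for a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

# Hypothetical usage: stream latency samples and surface an alert proactively.
monitor = MetricMonitor()
for sample in [2.1, 2.3, 2.0, 2.2] * 10 + [9.8]:
    if monitor.observe(sample):
        print(f"Alert: latency {sample} ms deviates sharply from the learned baseline")
```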

Market research company BCC Research estimates that the global market for AIOps will balloon from $3 billion in 2021 to $9.4 billion by 2026, a compound annual growth rate of 26%. Gartner analysts write in their April “Market Guide for AIOps Platforms” that the increasing rate of AIOps adoption is being driven by digital business transformation and the need to move from reactive responses to infrastructure issues to proactive actions.

“With data volumes reaching or exceeding gigabytes per minute across a dozen or more different domains, it is no longer possible for a human to analyze the data manually,” the Gartner analysts write. Applying AI in a systematic way speeds insights and enables proactivity.

According to Mark Esposito, chief learning officer at automation technology company Nexus FrontierTech, the term “AIOps” evolved from “DevOps”—the software engineering culture and practice that aims to integrate software development and operations. “The idea is to advocate automation and monitoring at all stages, from software construction to infrastructure management,” says Esposito. Recent innovation in the field includes using predictive analytics to anticipate and resolve problems before they can affect IT operations.

AIOps helps infrastructure fade into the background

Network and IT administrators harried by exploding data volumes and burgeoning complexity could use the help, says Saurabh Kulkarni, head of engineering and product management at Hewlett Packard Enterprise. Kulkarni works on HPE InfoSight, a cloud-based AIOps platform for proactively managing data center systems.

“IT administrators spend tons and tons of time planning their work, planning the deployments, adding new nodes, compute, storage, and all. And when something goes wrong in the infrastructure, it’s extremely difficult to debug those issues manually,” says Kulkarni. “AIOps uses machine-learning algorithms to look at the patterns, examine the repeated behaviors, and learn from them to provide a quick recommendation to the user.” Beyond storage nodes, every piece of IT infrastructure sends its own alerts, so issues can be resolved speedily.

The InfoSight system collects data from all the devices in a customer’s environment and then correlates it with data from HPE customers with similar IT environments. The system can pinpoint a potential problem so it’s quickly resolved—if the problem crops up again, the fix can be automatically applied. Alternatively, the system sends an alert so IT teams can clear up the issue quickly, Kulkarni adds. Take the case of a storage controller that fails because it has lost power. Rather than assuming the problem relates exclusively to storage, the AIOps platform surveys the entire infrastructure stack, all the way to the application layer, to identify the root cause.
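
As a rough sketch of what correlating alerts across the stack can look like, here is a hypothetical example (not InfoSight’s actual logic): alerts that arrive close together in time are grouped, and the alert from the lowest infrastructure layer is nominated as the likely root cause. The Alert fields, layer ordering, and time window are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    timestamp: float   # seconds since epoch
    layer: str         # e.g. "power", "storage", "virtualization", "application"
    message: str

# Rough ordering of the stack, lowest layer first: when alerts from several
# layers fire together, the lowest one is a reasonable root-cause candidate.
LAYER_DEPTH = {"power": 0, "network": 1, "storage": 2,
               "virtualization": 3, "application": 4}

def correlate(alerts: list[Alert], window: float = 120.0) -> list[list[Alert]]:
    """Group alerts that occur within `window` seconds of one another."""
    groups, current = [], []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        if current and alert.timestamp - current[-1].timestamp > window:
            groups.append(current)
            current = []
        current.append(alert)
    if current:
        groups.append(current)
    return groups

def likely_root_cause(group: list[Alert]) -> Alert:
    """Pick the alert from the lowest layer in the stack as the likely cause."""
    return min(group, key=lambda a: LAYER_DEPTH.get(a.layer, 99))

# Hypothetical incident: a lost power feed cascades up the stack.
incident = [
    Alert(1000.0, "application", "checkout service timing out"),
    Alert(990.0, "storage", "controller A unreachable"),
    Alert(985.0, "power", "PDU feed B lost"),
]
for group in correlate(incident):
    cause = likely_root_cause(group)
    print(f"{len(group)} related alerts; likely root cause: [{cause.layer}] {cause.message}")
```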

“The system monitors the performance and can see anomalies. We have algorithms that constantly run in the background to detect any abnormal behaviors and alert the customers before the problem happens,” says Kulkarni. The philosophy behind InfoSight is to “make the infrastructure disappear” by bringing IT systems and all the telemetry data into one pane of glass. Looking at one giant set of data, administrators can quickly figure out what’s going wrong with the infrastructure.

Kulkarni recalls the difficulty of managing a large IT environment from past jobs. “I had to manage a large data set, and I had to call so many different vendors and be on hold for multiple hours to try to figure out problems,” he says. “Sometimes it took us days to understand what was really going on.”

By automating data collection and tapping a wealth of data to understand root causes, AIOps lets companies reallocate core personnel, including IT administrators, storage administrators, and network admins: roles are consolidated as the infrastructure is simplified, and more time goes toward ensuring application performance. “Previously, companies used to have multiple roles and different departments handling different things. So even to deploy a new storage area, five different admins each had to do their individual piece,” says Kulkarni. But with AIOps, AI handles much of the work automatically so IT and support staff can devote their time to more strategic initiatives, increasing efficiency and, in the case of a business that provides technical support to its customers, improving profit margins. For example, Sercompe’s Cardoso has been able to reduce the average time his support engineers spend on customer calls, reflecting better customer experience while increasing efficiency.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Categories
Artificial Intelligence

What the history of AI tells us about its future

On May 11, 1997, Garry Kasparov fidgeted in his plush leather chair in the Equitable Center in Manhattan, anxiously running his hands through his hair. It was the final game of his match against IBM’s Deep Blue supercomputer—a crucial tiebreaker in the showdown between human and silicon—and things were not going well. Aquiver with self-recrimination after making a deep blunder early in the game, Kasparov was boxed into a corner.

A high-level chess game usually takes at least four hours, but Kasparov realized he was doomed before an hour was up. He announced he was resigning—and leaned over the chessboard to stiffly shake the hand of Joseph Hoane, an IBM engineer who helped develop Deep Blue and had been moving the computer’s pieces around the board.

Then Kasparov lurched out of his chair to walk toward the audience. He shrugged haplessly. At its finest moment, he later said, the machine “played like a god.”

For anyone interested in artificial intelligence, the grand master’s defeat rang like a bell. Newsweek called the match “The Brain’s Last Stand”; another headline dubbed Kasparov “the defender of humanity.” If AI could beat the world’s sharpest chess mind, it seemed that computers would soon trounce humans at everything—with IBM leading the way.

That isn’t what happened, of course. Indeed, when we look back now, 25 years later, we can see that Deep Blue’s victory wasn’t so much a triumph of AI but a kind of death knell. It was a high-water mark for old-school computer intelligence, the laborious handcrafting of endless lines of code, which would soon be eclipsed by a rival form of AI: the neural net—in particular, the technique known as “deep learning.” For all the weight it threw around, Deep Blue was the lumbering dinosaur about to be killed by an asteroid; neural nets were the little mammals that would survive and transform the planet. Yet even today, deep into a world chock-full of everyday AI, computer scientists are still arguing whether machines will ever truly “think.” And when it comes to answering that question, Deep Blue may get the last laugh.

When IBM began work to create Deep Blue in 1989, AI was in a funk. The field had been through multiple roller-coaster cycles of giddy hype and humiliating collapse. The pioneers of the ’50s had claimed that AI would soon see huge advances; mathematician Claude Shannon predicted that “within a matter of ten or fifteen years, something will emerge from the laboratories which is not too far from the robot of science fiction.” This didn’t happen. And each time inventors failed to deliver, investors felt burned and stopped funding new projects, creating an “AI winter” in the ’70s and again in the ’80s.

The reason they failed—we now know—is that AI creators were trying to handle the messiness of everyday life using pure logic. That’s how they imagined humans did it. And so engineers would patiently write out a rule for every decision their AI needed to make.

The problem is, the real world is far too fuzzy and nuanced to be managed this way. Engineers carefully crafted their clockwork masterpieces—or “expert systems,” as they were called—and they’d work reasonably well until reality threw them a curveball. A credit card company, say, might make a system to automatically approve credit applications, only to discover they’d issued cards to dogs or 13-year-olds. The programmers never imagined that minors or pets would apply for a card, so they’d never written rules to accommodate those edge cases.  Such systems couldn’t learn a new rule on their own.

AI built via handcrafted rules was “brittle”: when it encountered a weird situation, it broke. By the early ’90s, troubles with expert systems had brought on another AI winter.

“A lot of the conversation around AI was like, ‘Come on. This is just hype,’” says Oren Etzioni, CEO of the Allen Institute for AI in Seattle, who back then was a young professor of computer science beginning a career in AI.

In that landscape of cynicism, Deep Blue arrived like a weirdly ambitious moonshot.

The project grew out of work on Deep Thought, a chess-playing computer built at Carnegie Mellon by Murray Campbell, Feng-hsiung Hsu, and others. Deep Thought was awfully good; in 1988, it became the first chess AI to beat a grand master, Bent Larsen. The Carnegie Mellon team had figured out better algorithms for assessing chess moves, and they’d also created custom hardware that speedily crunched through them. (The name “Deep Thought” came from the laughably delphic AI in The Hitchhiker’s Guide to the Galaxy—which, when asked the meaning of life, arrived at the answer “42.”)

IBM got wind of Deep Thought and decided it would mount a “grand challenge,” building a computer so good it could beat any human. In 1989 it hired Hsu and Campbell, and tasked them with besting the world’s top grand master. Chess had long been, in AI circles, symbolically potent—two opponents facing each other on the astral plane of pure thought. It’d certainly generate headlines if they could trounce Kasparov.

To build Deep Blue, Campbell and his team had to craft new chips for calculating chess positions even more rapidly, and hire grand masters to help improve algorithms for assessing the next moves. Efficiency mattered: there are more possible chess games than atoms in the universe, and even a supercomputer couldn’t ponder all of them in a reasonable amount of time. To play chess, Deep Blue would peer a move ahead, calculate possible moves from there, “prune” ones that seemed unpromising, go deeper along the promising paths, and repeat the process several times. 
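
The loop described above, peering ahead, pruning unpromising branches, then going deeper along the promising paths and repeating, is essentially minimax search with alpha-beta pruning wrapped in iterative deepening. The sketch below is a generic textbook version, not Deep Blue’s code; evaluate, moves, and apply_move are placeholder hooks a real engine would supply, and in Deep Blue much of that work was done in the custom chess chips.

```python
import math

def alpha_beta(position, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    """Depth-limited minimax with alpha-beta pruning."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)          # static evaluation at the leaves
    if maximizing:
        best = -math.inf
        for move in legal:
            best = max(best, alpha_beta(apply_move(position, move), depth - 1,
                                        alpha, beta, False, evaluate, moves, apply_move))
            alpha = max(alpha, best)
            if alpha >= beta:              # "prune": the opponent already has a better option
                break
        return best
    best = math.inf
    for move in legal:
        best = min(best, alpha_beta(apply_move(position, move), depth - 1,
                                    alpha, beta, True, evaluate, moves, apply_move))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

def iterative_deepening(position, max_depth, evaluate, moves, apply_move):
    """Search 1 ply deep, then 2, then 3, ... ("go deeper ... and repeat the process")."""
    score = None
    for depth in range(1, max_depth + 1):
        score = alpha_beta(position, depth, -math.inf, math.inf, True,
                           evaluate, moves, apply_move)
    return score
```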

“We thought it would take five years—it actually took a little more than six,” Campbell says. By 1996, IBM decided it was finally ready to face Kasparov, and it set a match for February. Campbell and his team were still frantically rushing to finish Deep Blue: “The system had only been working for a few weeks before we actually got on the stage,” he says. 

It showed. Although Deep Blue won one game, Kasparov won three and took the match. IBM asked for a rematch, and Campbell’s team spent the next year building even faster hardware. By the time they’d completed their improvements, Deep Blue was made of 30 PowerPC processors and 480 custom chess chips; they’d also hired more grand masters—four or five at any given point in time—to help craft better algorithms for parsing chess positions. When Kasparov and Deep Blue met again, in May 1997, the computer was twice as speedy, assessing 200 million chess moves per second. 

Even so, IBM still wasn’t confident of victory, Campbell remembers: “We expected a draw.”

The reality was considerably more dramatic. Kasparov dominated in the first game. But in its 36th move in the second game, Deep Blue did something Kasparov did not expect. 

He was accustomed to the way computers traditionally played chess, a style born from machines’ sheer brute-force abilities. They were better than humans at short-term tactics; Deep Blue could easily deduce the best choice a few moves out.

But what computers were bad at, traditionally, was strategy—the ability to ponder the shape of a game many, many moves in the future. That’s where humans still had the edge. 

Or so Kasparov thought, until Deep Blue’s move in game 2 rattled him. It seemed so sophisticated that Kasparov began worrying: maybe the machine was far better than he’d thought! Convinced he had no way to win, he resigned the second game.

But he shouldn’t have. Deep Blue, it turns out, wasn’t actually that good. Kasparov had failed to spot a move that would have let the game end in a draw. He was psyching himself out: worried that the machine might be far more powerful than it really was, he had begun to see human-like reasoning where none existed. 

Knocked off his rhythm, Kasparov kept playing worse and worse. He psyched himself out over and over again. Early in the sixth, winner-takes-all game, he made a move so lousy that chess observers cried out in shock. “I was not in the mood of playing at all,” he later said at a press conference.

IBM benefited from its moonshot. In the press frenzy that followed Deep Blue’s success, the company’s market cap rose $11.4 billion in a single week. Even more significant, though, was that IBM’s triumph felt like a thaw in the long AI winter. If chess could be conquered, what was next? The public’s mind reeled.

“That,” Campbell tells me, “is what got people paying attention.”

The truth is, it wasn’t surprising that a computer beat Kasparov. Most people who’d been paying attention to AI—and to chess—expected it to happen eventually.

Chess may seem like the acme of human thought, but it’s not. Indeed, it’s a mental task that’s quite amenable to brute-force computation: the rules are clear, there’s no hidden information, and a computer doesn’t even need to keep track of what happened in previous moves. It just assesses the position of the pieces right now.

Everyone knew that once computers got fast enough, they’d overwhelm a human. It was just a question of when. By the mid-’90s, “the writing was already on the wall, in a sense,” says Demis Hassabis, head of the AI company DeepMind, part of Alphabet.

Deep Blue’s victory was the moment that showed just how limited hand-coded systems could be. IBM had spent years and millions of dollars developing a computer to play chess. But it couldn’t do anything else. 

“It didn’t lead to the breakthroughs that allowed the [Deep Blue] AI to have a huge impact on the world,” Campbell says. They didn’t really discover any principles of intelligence, because the real world doesn’t resemble chess. “There are very few problems out there where, as with chess, you have all the information you could possibly need to make the right decision,” Campbell adds. “Most of the time there are unknowns. There’s randomness.”

But even as Deep Blue was mopping the floor with Kasparov, a handful of scrappy upstarts were tinkering with a radically more promising form of AI: the neural net. 

With neural nets, the idea was not, as with expert systems, to patiently write rules for each decision an AI will make. Instead, training and reinforcement strengthen internal connections in rough emulation (as the theory goes) of how the human brain learns. 

1997: After Garry Kasparov beat Deep Blue in 1996, IBM asked the world chess champion for a rematch, which was held in New York City with an upgraded machine.

The idea had existed since the ’50s. But training a usefully large neural net required lightning-fast computers, tons of memory, and lots of data. None of that was readily available then. Even into the ’90s, neural nets were considered a waste of time.

“Back then, most people in AI thought neural nets were just rubbish,” says Geoff Hinton, an emeritus computer science professor at the University of Toronto, and a pioneer in the field. “I was called a ‘true believer’”—not a compliment. 

But by the 2000s, the computer industry was evolving to make neural nets viable. Video-game players’ lust for ever-better graphics created a huge industry in ultrafast graphics-processing units, which turned out to be perfectly suited for neural-net math. Meanwhile, the internet was exploding, producing a torrent of pictures and text that could be used to train the systems.

By the early 2010s, these technical leaps were allowing Hinton and his crew of true believers to take neural nets to new heights. They could now create networks with many layers of neurons (which is what the “deep” in “deep learning” means). In 2012 his team handily won the annual ImageNet competition, where AIs compete to recognize elements in pictures. It stunned the world of computer science: self-learning machines were finally viable.

Ten years into the deep-learning revolution, neural nets and their pattern-recognizing abilities have colonized every nook of daily life. They help Gmail autocomplete your sentences, help banks detect fraud, let photo apps automatically recognize faces, and—in the case of OpenAI’s GPT-3 and DeepMind’s Gopher—write long, human-sounding essays and summarize texts. They’re even changing how science is done; in 2020, DeepMind debuted AlphaFold2, an AI that can predict how proteins will fold—a superhuman skill that can help guide researchers to develop new drugs and treatments.

Meanwhile Deep Blue vanished, leaving no useful inventions in its wake. Chess playing, it turns out, wasn’t a computer skill that was needed in everyday life. “What Deep Blue in the end showed was the shortcomings of trying to handcraft everything,” says DeepMind founder Hassabis.

IBM tried to remedy the situation with Watson, another specialized system, this one designed to tackle a more practical problem: getting a machine to answer questions. It used statistical analysis of massive amounts of text to achieve language comprehension that was, for its time, cutting-edge. It was more than a simple if-then system. But Watson faced unlucky timing: it was eclipsed only a few years later by the revolution in deep learning, which brought in a generation of language-crunching models far more nuanced than Watson’s statistical techniques.

Deep learning has run roughshod over old-school AI precisely because “pattern recognition is incredibly powerful,” says Daphne Koller, a former Stanford professor who founded and runs Insitro, which uses neural nets and other forms of machine learning to investigate novel drug treatments. The flexibility of neural nets—the wide variety of ways pattern recognition can be used—is the reason there hasn’t yet been another AI winter. “Machine learning has actually delivered value,” she says, which is something the “previous waves of exuberance” in AI never did.

The inverted fortunes of Deep Blue and neural nets show how bad we were, for so long, at judging what’s hard—and what’s valuable—in AI. 

For decades, people assumed mastering chess would be important because, well, chess is hard for humans to play at a high level. But chess turned out to be fairly easy for computers to master, because it’s so logical.

What was far harder for computers to learn was the casual, unconscious mental work that humans do—like conducting a lively conversation, piloting a car through traffic, or reading the emotional state of a friend. We do these things so effortlessly that we rarely realize how tricky they are, and how much fuzzy, grayscale judgment they require. Deep learning’s great utility has come from being able to capture small bits of this subtle, unheralded human intelligence.

Still, there’s no final victory in artificial intelligence. Deep learning may be riding high now—but it’s amassing sharp critiques, too.

“For a very long time, there was this techno-chauvinist enthusiasm that okay, AI is going to solve every problem!” says Meredith Broussard, a programmer turned journalism professor at New York University and author of Artificial Unintelligence. But as she and other critics have pointed out, deep-learning systems are often trained on biased data—and absorb those biases. The computer scientists Joy Buolamwini and Timnit Gebru discovered that three commercially available visual AI systems were terrible at analyzing the faces of darker-skinned women. Amazon trained an AI to vet résumés, only to find it downranked women.

Though computer scientists and many AI engineers are now aware of these bias problems, they’re not always sure how to deal with them. On top of that, neural nets are also “massive black boxes,” says Daniela Rus, a veteran of AI who currently runs MIT’s Computer Science and Artificial Intelligence Laboratory. Once a neural net is trained, its mechanics are not easily understood even by its creator. It is not clear how it comes to its conclusions—or how it will fail.

It may not be a problem, Rus figures, to rely on a black box for a task that isn’t “safety critical.” But what about a higher-stakes job, like autonomous driving? “It’s actually quite remarkable that we could put so much trust and faith in them,” she says. 

This is where Deep Blue had an advantage. The old-school style of handcrafted rules may have been brittle, but it was comprehensible. The machine was complex—but it wasn’t a mystery.

Ironically, that old style of programming might stage something of a comeback as engineers and computer scientists grapple with the limits of pattern matching.  

Language generators, like OpenAI’s GPT-3 or DeepMind’s Gopher, can take a few sentences you’ve written and keep on going, writing pages and pages of plausible-sounding prose. But despite some impressive mimicry, Gopher “still doesn’t really understand what it’s saying,” Hassabis says. “Not in a true sense.”

Similarly, visual AI can make terrible mistakes when it encounters an edge case. Self-driving cars have slammed into fire trucks parked on highways, because in all the millions of hours of video they’d been trained on, they’d never encountered that situation. Neural nets have, in their own way, a version of the “brittleness” problem. 

What AI really needs in order to move forward, as many computer scientists now suspect, is the ability to know facts about the world—and to reason about them. A self-driving car cannot rely only on pattern matching. It also has to have common sense—to know what a fire truck is, and why seeing one parked on a highway would signify danger. 

The problem is, no one knows quite how to build neural nets that can reason or use common sense. Gary Marcus, a cognitive scientist and coauthor of Rebooting AI, suspects that the future of AI will require a “hybrid” approach—neural nets to learn patterns, but guided by some old-fashioned, hand-coded logic. This would, in a sense, merge the benefits of Deep Blue with the benefits of deep learning.

Hard-core aficionados of deep learning disagree. Hinton believes neural networks should, in the long run, be perfectly capable of reasoning. After all, humans do it, “and the brain’s a neural network.” Using hand-coded logic strikes him as bonkers; it’d run into the problem of all expert systems, which is that you can never anticipate all the common sense you’d want to give to a machine. The way forward, Hinton says, is to keep innovating on neural nets—to explore new architectures and new learning algorithms that more accurately mimic how the human brain itself works.

Computer scientists are dabbling in a variety of approaches. At IBM, Deep Blue developer Campbell is working on “neuro-symbolic” AI that works a bit the way Marcus proposes. Etzioni’s lab is attempting to build common-sense modules for AI that include both trained neural nets and traditional computer logic; as yet, though, it’s early days. The future may look less like an absolute victory for either Deep Blue or neural nets, and more like a Frankensteinian approach—the two stitched together.

Given that AI is likely here to stay, how will we humans live with it? Will we ultimately be defeated, like Kasparov with Deep Blue, by AIs so much better at “thinking work” that we can’t compete?

Kasparov himself doesn’t think so. Not long after his loss to Deep Blue, he decided that fighting against an AI made no sense. The machine “thought” in a fundamentally inhuman fashion, using brute-force math. It would always have better tactical, short-term power. 

So why compete? Instead, why not collaborate? 

After the Deep Blue match, Kasparov invented “advanced chess,” where humans and silicon work together. A human plays against another human—but each also wields a laptop running chess software, to help war-game possible moves. 

When Kasparov began running advanced chess matches in 1998, he quickly discovered fascinating differences in the game. Interestingly, amateurs punched above their weight. In one human-with-laptop match in 2005, a pair of them won the top prize—beating out several grand masters. 

How could they best superior chess minds? Because the amateurs better understood how to collaborate with the machine. They knew how to rapidly explore ideas, when to accept a machine suggestion and when to ignore it. (Some leagues still hold advanced chess tournaments today.)

This, Kasparov argues, is precisely how we ought to approach the emerging world of neural nets. 

“The future,” he told me in an email, lies in “finding ways to combine human and machine intelligences to reach new heights, and to do things neither could do alone.” 

Neural nets behave differently from chess engines, of course. But many luminaries agree strongly with Kasparov’s vision of human-AI collaboration. DeepMind’s Hassabis sees AI as a way forward for science, one that will guide humans toward new breakthroughs. 

“I think we’re going to see a huge flourishing,” he says, “where we will start seeing Nobel Prize–winning–level challenges in science being knocked down one after the other.” Koller’s firm Insitro is similarly using AI as a collaborative tool for researchers. “We’re playing a hybrid human-machine game,” she says.

Will there come a time when we can build AI so human-like in its reasoning that humans really do have less to offer—and AI takes over all thinking? Possibly. But even these scientists, on the cutting edge, can’t predict when that will happen, if ever.

So consider this Deep Blue’s final gift, 25 years after its famous match. In his defeat, Kasparov spied the real endgame for AI and humans. “We will increasingly become managers of algorithms,” he told me, “and use them to boost our creative output—our adventuresome souls.”

Clive Thompson is a science and technology journalist based in New York City and author of Coders: The Making of a New Tribe and the Remaking of the World.

Categories
Artificial Intelligence

Synthetic data for AI

Last year, researchers at Data Science Nigeria noted that engineers looking to train computer-vision algorithms could choose from a wealth of data sets featuring Western clothing, but there were none for African clothing. The team addressed the imbalance by using AI to generate artificial images of African fashion—a whole new data set from scratch. 

Such synthetic data sets—computer-generated samples with the same statistical characteristics as the genuine article—are growing more and more common in the data-hungry world of machine learning. These fakes can be used to train AIs in areas where real data is scarce or too sensitive to use, as in the case of medical records or personal financial data. 

The idea of synthetic data isn’t new: driverless cars have been trained on virtual streets. But in the last year the technology has become widespread, with a raft of startups and universities offering such services. Datagen and Synthesis AI, for example, supply digital human faces on demand. Others provide synthetic data for finance and insurance. And the Synthetic Data Vault, a project launched in 2021 by MIT’s Data to AI Lab, provides open-source tools for creating a wide range of data types.

This boom in synthetic data sets is driven by generative adversarial networks (GANs), a type of AI that is adept at generating realistic but fake examples, whether of images or medical records.
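
As a hedged illustration of how a GAN produces such samples, here is a toy training loop in PyTorch. It is a sketch under simplifying assumptions, not a production pipeline: the “real” data here is a stand-in Gaussian rather than images or records, and the network sizes and hyperparameters are arbitrary.

```python
import torch
from torch import nn

# Toy setup: learn to generate 8-dimensional samples that mimic a "real"
# distribution (a stand-in for images, medical records, tabular rows, ...).
latent_dim, data_dim = 16, 8
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Placeholder for genuine data: a shifted, scaled Gaussian.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, the generator yields synthetic samples whose statistics
# approximate the real distribution, without exposing any real record.
synthetic = G(torch.randn(1000, latent_dim)).detach()
```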

Proponents claim that synthetic data avoids the bias that is rife in many data sets. But it will only be as unbiased as the real data used to generate it. A GAN trained on fewer Black faces than white, for example, may be able to create a synthetic data set with a higher proportion of Black faces, but those faces may end up being less lifelike given the limited original data.

Join us March 29-30 at EmTech Digital, our signature AI conference, to hear Unity’s Danny Lange talk about how the video game maker is using synthetic data.

Categories
Artificial Intelligence

Turning AI into your customer experience ally

It’s one thing to know whether an individual customer is intrigued by a new mattress or considering a replacement for their sofa’s throw pillows; it’s another to know how to move these people to go ahead and make a purchase. When deployed strategically, artificial intelligence (AI) can be a marketer’s trusted customer experience ally—transforming customer data into actionable insights and creating new opportunities for personalization at scale. On the other hand, when AI is viewed as merely a quick fix, its haphazard deployment can at best amount to a missed opportunity and at worst undermine trust with an organization’s customers.

This phenomenon is not unique to AI. In today’s fast-moving digital economy, it’s not uncommon for performance and results to lag behind expectations. Despite the enormous potential of modern technology to drastically improve the customer experience, business innovation and transformation can remain elusive.

According to Gartner, 89% of companies now compete primarily on the experiences they deliver. As marketers and other teams turn to AI systems to automate decision-making, personalize brand experiences, gain deeper insights about their customers, and boost results, there’s often a disconnect between the technology’s potential and what it delivers.

When it comes to AI, organizations frequently fail to realize the full benefits of their investments, and this has real business repercussions. So how do organizations ensure that their investments deliver on their promise of fueling innovation, transformation, and even disruption? Finding success requires the right approach to operationalizing the technology and investing in AI capabilities that work together throughout the entire workflow, connecting various thoughts and processes.

Getting real about AI. Realizing the value of AI starts with a recognition that vendor claims and remarkable features will only go so far. Without an overarching strategy, and a clear focus on how to operationalize the technology, even the best AI solutions wind up underperforming and disappointing.

There’s no simple or seamless way to implement AI within an organization. Even with powerful customer modeling, scoring, or segmentation tools, marketers can still wind up missing key opportunities. Without ways to act on the data, the dream of AI quickly fades. In other words, you may know that a certain customer likes hats and another enjoys wearing scarves, but how do you move these people to an actual purchase, or deliver the right content for where they are in the buying lifecycle?

The winning approach is to start small and focused when it comes to implementing AI technology. Be mindful about what types of data models you can build with AI, and how they can be used to deliver compelling customer experiences, and business outcomes. Collecting and analyzing actionable customer data is only a starting point. There’s also a need to develop content that matches personas and market segments and deliver this content in a personal and contextually relevant way. Lacking this holistic view and AI framework, organizations simply dial up speed—and inefficiency. In fact, AI may result in more noise and subpar experiences for customers, and unrealized results for an enterprise.

Moving from transaction to transformation. A successful AI framework transforms data and insights into business language and actions. It’s not enough for the marketing team to know what a customer likes, for example; it’s essential to understand how, when, and where an individual engages with a business. Only then can a brand construct and deliver a rich customer experience that matches the way customers think about and approach the brand. This includes an optimal mix of digital and physical assets, and the ability to deliver dynamic web pages, emails, and other campaigns that customers find useful and appealing. When a marketer understands intent and how a person travels along the customer journey, it becomes possible to deliver the most compelling customer experience.

With this framework in place, marketers can read the right signals and ensure that content delivery is tuned to a person’s specific behavior and preferences. It’s possible to send emails, serve up ads and mail brochures that reach consumers when they are receptive and ready to engage. Whether the customer is into hats, scarves or electric guitars, the odds of successful marketing increase dramatically.

Putting AI to work. Only when an organization has mapped AI workflows and business processes—and understands how to reach its customers effectively—is it possible to get the most out of AI solutions. These solutions can address the full spectrum of AI, including reading signals; collecting, storing, and managing customer data; assembling and managing content libraries; and marketing to customers in highly personalized and contextualized ways.

A good way to think of things is to imagine that a person hops in a car with the intent of driving across the United States. If the journey is from Los Angeles to New York, for example, it’s tempting to think the motorist will take the most direct route available. But what happens if the person loves nature and wants to visit the Grand Canyon or Yellowstone National Park along the way? This requires a change in routing. Similarly, an organization must have the tools to understand how and where a person is traveling in the product lifecycle, what ticks the person’s boxes along the way, and what helps them arrive at a desired destination with a minimum of friction and frustration.

AI can do this—and it can serve up promotions and incentives that really work. Yet, to build the right customer experiences and the right journey, marketers must move beyond AI solutions that deliver a basic customer score or snapshot, and instead obtain a motion picture-like view of a customer’s thinking, behavior, and actions. To that end, building out one AI capability or buying one point technology to address a single aspect of customer experience isn’t enough. It’s about being able to connect a set of AI capabilities, which are orchestrated throughout the entire workflow to connect various thoughts and processes together.

Only then is it possible to deliver an optimal marketing experience.

Delivering on the promise of AI. To be sure, with the right strategy, processes, and AI solutions, it’s possible to take marketing to a more successful level and deliver a winning customer experience. When marketers truly understand what a customer desires and how they think about a product and their customer journey, it’s possible to tap into the full power of AI.

What’s more, this approach has repercussions that extend far beyond attracting and retaining new customers. When organizations get the formula right, marketers can engage with their best customers in a more holistic and natural way. In the end, everyone wins. The consumer is greeted with a compelling customer experience, with relevant messages that display products and services they are interested in at every step of their journey, and the business boosts brand value and loyalty.

At that point, AI finally delivers on its promise.

If you’d like to learn more about how AI can help your company deliver personalized content at scale, visit here.   

This content was produced by Adobe. It was not written by MIT Technology Review’s editorial staff.

Categories
Artificial Intelligence Blockchain GPT-3 NFT VR & AR

What’s ahead for AI, VR, NFTs, and more?

Every year starts with a round of predictions for the new year, most of which end up being wrong. But why fight against tradition? Here are my predictions for 2022.

The safest predictions are all around AI.

We’ll see more “AI as a service” (AIaaS) products. This trend started with the gigantic language model GPT-3. It’s so large that it really can’t be run without Azure-scale computing facilities, so Microsoft has made it available as a service, accessed via a web API. This may encourage the creation of more large-scale models; it might also drive a wedge between academic and industrial researchers. What does “reproducibility” mean if the model is so large that it’s impossible to reproduce experimental results?
Prompt engineering, a field dedicated to developing prompts for language generation systems, will become a new specialization. Prompt engineers answer questions like “What do you have to say to get a model like GPT-3 to produce the output you want?” (A minimal sketch of prompting a hosted model this way appears after these predictions.)
AI-assisted programming (for example, GitHub Copilot) has a long way to go, but it will make quick progress and soon become just another tool in the programmer’s toolbox. And it will change the way programmers think too: they’ll need to focus less on learning programming languages and syntax and more on understanding precisely the problem they have to solve.
GPT-3 clearly is not the end of the line. There are already language models bigger than GPT-3 (one in Chinese), and we’ll certainly see large models in other areas. We will also see research on smaller models that offer better performance, like Google’s RETRO.
Supply chains and business logistics will remain under stress. We’ll see new tools and platforms for dealing with supply chain and logistics issues, and they’ll likely make use of machine learning. We’ll also come to realize that, from the start, Amazon’s core competency has been logistics and supply chain management.
Just as we saw new professions and job classifications when the web appeared in the ’90s, we’ll see new professions and services appear as a result of AI—specifically, as a result of natural language processing. We don’t yet know what these new professions will look like or what new skills they’ll require. But they’ll almost certainly involve collaboration between humans and intelligent machines.
CIOs and CTOs will realize that any realistic cloud strategy is inherently a multi- or hybrid cloud strategy. Cloud adoption moves from the grassroots up, so by the time executives are discussing a “cloud strategy,” most organizations are already using two or more clouds. The important strategic question isn’t which cloud provider to pick; it’s how to use multiple providers effectively.
Biology is becoming like software. Inexpensive and fast genetic sequencing, together with computational techniques including AI, enabled Pfizer/BioNTech, Moderna, and others to develop effective mRNA vaccines for COVID-19 in astonishingly little time. In addition to creating vaccines that target new COVID variants, these technologies will enable developers to target diseases for which we don’t have vaccines, like AIDS.
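
To make the “AI as a service” and prompt-engineering predictions above concrete, here is a minimal sketch of prompting a hosted GPT-3-style model. It assumes the pre-1.0 openai Python client and an API key stored in an environment variable; the model name, prompt, and parameters are illustrative and will differ by provider and version.

```python
import os

import openai  # pre-1.0 openai client, roughly as available when GPT-3 launched as a service

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

# Prompt engineering in miniature: because the model's weights sit behind an API,
# the instructions and context you place in the prompt are the main lever you have.
prompt = (
    "Summarize the following incident report in one sentence for an executive:\n\n"
    "At 02:14 the checkout service began timing out after a storage controller "
    "lost power; failover completed at 02:31 and no orders were lost.\n\n"
    "Summary:"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # hosted model name; varies by provider and version
    prompt=prompt,
    max_tokens=60,
    temperature=0.2,            # low temperature for a factual, repeatable answer
)
print(response.choices[0].text.strip())
```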

Now for some slightly less safe predictions, involving the future of social media and cybersecurity.

Augmented and virtual reality aren’t new, but Mark Zuckerberg lit a fire under them by talking about the “metaverse,” changing Facebook’s name to Meta, and releasing a pair of smart glasses in collaboration with Ray-Ban. The key question is whether these companies can make AR glasses that work and don’t make you look like an alien. I don’t think they’ll succeed, but Apple is also working on VR/AR products. It’s much harder to bet against Apple’s ability to turn geeky technology into a fashion statement.
There’s also been talk from Meta, Microsoft, and others, about using virtual reality to help people who are working from home, which typically involves making meetings better. But they’re solving the wrong problem. Workers, whether at home or not, don’t want better meetings; they want fewer. If Microsoft can figure out how to use the metaverse to make meetings unnecessary, it’ll be onto something.
Will 2022 be the year that security finally gets the attention it deserves? Or will it be another year in which Russia uses the cybercrime industry to improve its foreign trade balance? Right now, things are looking better for the security industry: salaries are up, and employers are hiring. But time will tell.

And I’ll end with a very unsafe prediction.

NFTs are currently all the rage, but they don’t fundamentally change anything. They really only provide a way for cryptocurrency millionaires to show off—conspicuous consumption at its most conspicuous. But they’re also programmable, and people haven’t yet taken advantage of this. Is it possible that there’s something fundamentally new on the horizon that can be built with NFTs? I haven’t seen it yet, but it could appear in 2022. And then we’ll all say, “Oh, that’s what NFTs were all about.”

Or it might not. The discussion of Web 2.0 versus Web3 misses a crucial point. Web 2.0 wasn’t about the creation of new applications; it was what was left after the dot-com bubble burst. All bubbles burst eventually. So what will be left after the cryptocurrency bubble bursts? Will there be new kinds of value, or just hot air? We don’t know, but we may find out in the coming year.

Categories
Applied AI Artificial Intelligence Digital Transformation Neural Networks

Meta’s new learning algorithm can teach AI to multi-task

If you can recognize a dog by sight, then you can probably recognize a dog when it is described to you in words. Not so for today’s artificial intelligence. Deep neural networks have become very good at identifying objects in photos and conversing in natural language, but not at the same time: there are AI models that excel at one or the other, but not both. 

Part of the problem is that these models learn different skills using different techniques. This is a major obstacle for the development of more general-purpose AI, machines that can multi-task and adapt. It also means that advances in deep learning for one skill often do not transfer to others.

A team at Meta AI (previously Facebook AI Research) wants to change that. The researchers have developed a single algorithm that can be used to train a neural network to recognize images, text, or speech. The algorithm, called Data2vec, not only unifies the learning process but performs at least as well as existing techniques in all three skills. “We hope it will change the way people think about doing this type of work,” says Michael Auli, a researcher at Meta AI.

The research builds on an approach known as self-supervised learning, in which neural networks learn to spot patterns in data sets by themselves, without being guided by labeled examples. This is how large language models like GPT-3 learn from vast bodies of unlabeled text scraped from the internet, and it has driven many of the recent advances in deep learning.

Auli and his colleagues at Meta AI had been working on self-supervised learning for speech recognition. But when they looked at what other researchers were doing with self-supervised learning for images and text, they realized that they were all using different techniques to chase the same goals.

Data2vec uses two neural networks, a student and a teacher. First, the teacher network is trained on images, text, or speech in the usual way, learning an internal representation of this data that allows it to predict what it is seeing when shown new examples. When it is shown a photo of a dog, it recognizes it as a dog.

The twist is that the student network is then trained to predict the internal representations of the teacher. In other words, it is trained not to guess that it is looking at a photo of a dog when shown a dog, but to guess what the teacher sees when shown that image.

Because the student does not try to guess the actual image or sentence but, rather, the teacher’s representation of that image or sentence, the algorithm does not need to be tailored to a particular type of input.
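
Here is a minimal sketch of that student-teacher idea, written in PyTorch as an illustration rather than Meta’s actual Data2vec code. The student sees a partially masked input and is trained to match the teacher’s representation of the full input, so the loss never references labels or any particular modality; the architecture, masking scheme, and the moving-average teacher update at the end are simplifying assumptions.

```python
import torch
from torch import nn

def make_encoder(input_dim: int = 128, hidden_dim: int = 256) -> nn.Module:
    # One shared architecture for both networks; nothing here is specific to
    # images, text, or speech beyond how the input happens to be encoded.
    return nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.GELU(),
                         nn.Linear(hidden_dim, hidden_dim))

teacher = make_encoder()
student = make_encoder()
teacher.load_state_dict(student.state_dict())   # start from identical weights
for p in teacher.parameters():
    p.requires_grad_(False)                      # the loss never updates the teacher directly

opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(1000):
    batch = torch.randn(32, 128)                 # stand-in for encoded image/text/audio patches
    masked = batch.clone()
    masked[:, 64:] = 0.0                         # hide part of the input from the student

    with torch.no_grad():
        target = teacher(batch)                  # the teacher's representation of the full input

    # The student is trained to predict the teacher's representation,
    # not the raw pixels, words, or audio samples themselves.
    loss = nn.functional.mse_loss(student(masked), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # A common choice in this family of methods: let the teacher's weights
    # slowly track the student via an exponential moving average.
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(0.999).add_(ps, alpha=0.001)
```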

Data2vec is part of a big trend in AI toward models that can learn to understand the world in more than one way. “It’s a clever idea,” says Ani Kembhavi at the Allen Institute for AI in Seattle, who works on vision and language. “It’s a promising advance when it comes to generalized systems for learning.”

An important caveat is that although the same learning algorithm can be used for different skills, it can only learn one skill at a time. Once it has learned to recognize images, it must start from scratch to learn to recognize speech. Giving an AI multiple skills at once is hard, but that’s something the Meta AI team wants to look at next.  

The researchers were surprised to find that their approach actually performed better than existing techniques at recognizing images and speech, and performed as well as leading language models on text understanding.

Mark Zuckerberg is already dreaming up potential metaverse applications. “This will all eventually get built into AR glasses with an AI assistant,” he posted to Facebook today. “It could help you cook dinner, noticing if you miss an ingredient, prompting you to turn down the heat, or more complex tasks.”

For Auli, the main takeaway is that researchers should step out of their silos. “Hey, you don’t need to focus on one thing,” he says. “If you have a good idea, it might actually help across the board.”