Categories
Applied AI Artificial Intelligence Machine Learning

Use Case Libraries Aim to Provide a Head Start With Applied AI  

By AI Trends Staff  

To accelerate the adoption of applied AI, more organizations are putting forward libraries of use cases, offering details of projects to help others get a head start.   

For example, Nokia recently announced the initial deployment of multiple AI use cases delivered over the public cloud through a collaboration with Microsoft, according to a recent account in ComputerWeekly.  

Nokia, the multinational telecom company based in Finland, suggests that AI use cases help its communication service provider (CSP) customers manage the business complexity that 5G and cloud networks bring. Nokia has integrated its AVA framework into Microsoft's Azure public cloud architecture to provide an AI-as-a-service model. 

This gives CSPs advantages in implementing AI in their networks, including faster deployment across the network and multiple clusters, with services from the Nokia security architecture available as well. Nokia suggests that the initial AI data setup can be completed in four weeks; after that, CSPs can deploy additional AI use cases within a week and scale resources up or down across network clusters as needed. The Nokia security framework on Azure is said to segregate and isolate data, providing security equivalent to that of a private cloud.  

Friedrich Trawoeger, vice-president, cloud and cognitive services, Nokia

“CSPs are under constant pressure to reduce costs by automating business processes through AI and machine learning,” stated Friedrich Trawoeger, vice-president, cloud and cognitive services at Nokia. “To meet market demands, telcos are turning to us for telco AI-as-a-service. This launch represents an important milestone in our multicloud strategy.” Accessing the library of use cases remotely lowers costs and reduces environmental impacts, he suggested.   

Rick Lievano, CTO, worldwide telecom industry at Microsoft, stated, “Nokia AVA on Microsoft Azure infuses AI deep into the network, bringing a large library of use cases to securely streamline and optimize network operations managed by Microsoft Azure.” The offering makes the case that public clouds are able to help service providers implement AI, he suggested.  

The Australian mobile operator TPG Telecom implemented Nokia’s AVA AI on a local instance of Microsoft Azure, to help optimize network coverage, capacity, and performance. The project is said to help TPG detect network anomalies with greater accuracy and reduce radio frequency optimization cycle times by 50%.  

Declan O’Rourke, head of radio and device engineering at TPG, stated: “Nokia’s AVA AI-as-a-service utilizes artificial intelligence and analytics to help us maintain a first-class, optimized service for our subscribers, helping us to predict and deal with issues before they occur.”

AI Sweden, the Swedish National Center for Applied AI, has implemented an AI use case library to help speed adoption. “We want to accelerate the adoption of applied AI and to do so we know we need to guide businesses and organizations by showing them what is possible,” it states on the AI Sweden website. “Building an AI use case library is our way of showcasing our partners and [their] work to the rest of the world,” it says. The center offers a link to a form where anyone interested can add a project to the library. It asks for contact information, whether the use case is for a customer or a partner, the industry, the business function area (such as sales or finance), the purpose or goal, the techniques used, the sources of data, and the effect of the case.  

US GSA Unit Last Year Developed a Use Case Library  

The US General Services Administration last year began developing a library of AI use cases that agencies can refer to when they start investigating the new technology. The GSA’s Technology Transformation Services (TTS) launched a community of practice to define areas where they see challenges in adopting AI, according to an account in FedScoop.  

Steve Babitch, head of AI at the GSA’s TTS, commented that the ability to search the use case library would have many advantages for project teams and could have unexpected benefits.  “Maybe there’s a component around culture and mindset change or people development,” he stated. (See Executive Interview with Steven Babitch in AI Trends, July 1, 2020.)  

Early practice areas TTS identified are acquisition, ethics, governance, tools and techniques, and possibly workforce readiness. Common early use cases across agencies include customer experience, human resources, advanced cybersecurity, and business processes.  

One example came from the Census Bureau’s Economic Indicators Division (EID), where analysts developed a machine learning model to automate data coding. The division releases economic indicators for monthly retail and construction data, based on a data set of all the construction projects in the country. Analysts had been assigning a code to identify the type of construction taking place using a manual process.   

“It’s the perfect machine learning project. If you can automate that coding, you can speed up, and you can code more of the data,” stated Rebecca Hutchinson, big data leader at EID. “And if you can code more of the data, we can improve our data quality and increase the number of data products we’re putting out for our data users.”  

The model the EID analysts created works with about 80% accuracy, leaving 20% to be manually coded.  
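The article doesn't describe the Census Bureau's actual model, but the task it outlines, assigning a construction-type code from a free-text project description while leaving low-confidence cases to human coders, maps onto a standard text-classification pattern. The sketch below is illustrative only; the field names, categories, and confidence threshold are assumptions, not details of the EID system.

```python
# Hypothetical sketch: classify a construction-type code from a description and
# route low-confidence predictions to manual coding (assumed workflow, not EID's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (project description, construction-type code)
descriptions = [
    "new single family home construction",
    "office tower renovation downtown",
    "warehouse addition and loading dock",
    "apartment complex new build",
]
codes = ["residential", "commercial", "industrial", "residential"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(descriptions, codes)

def assign_code(description, threshold=0.8):
    """Return (code, confidence); fall back to manual coding below the threshold."""
    probs = model.predict_proba([description])[0]
    best = probs.argmax()
    if probs[best] < threshold:
        return None, probs[best]          # send to a human coder
    return model.classes_[best], probs[best]

print(assign_code("two story single family house"))
```

With a real training set, the threshold would be tuned so roughly the share of records the model codes confidently matches the accuracy the agency is willing to accept, with the remainder routed to analysts, which mirrors the 80/20 split described above.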

Some of the analysts who helped to develop the EID ML model came out of the bureau’s data science training program, offered about two years ago to the existing workforce of statisticians and survey analysts. The program is an alternative to hiring data scientists, which is “hard,” Hutchinson stated. The training covered Python, ArcGIS, and Tableau through a Coursera course. One-third of the bureau’s staff had completed training or were currently enrolled, giving them ML and web scraping skills.  

“Once you start training your staff with the skills, they are coming up with solutions,” Hutchinson stated. “It was our staff that came up with the idea to do machine learning of construction data, and we’re just seeing that more and more.” 

DataRobot Offering Use Case Library Based on its Experiences with Clients 

Another AI use case library resource is offered by DataRobot of Boston, supplier of an enterprise AI development platform. The company built a library of about 100 use cases based on its experience with clients in 14 industries.   

Michael Schmidt, Chief Scientist, DataRobot

“We are hyper-focused on enabling massively successful and impactful applications of AI,” stated Michael Schmidt, Chief Scientist, DataRobot, in a press release. “DataRobot Pathfinder is meant to help organizations—whether they’re customers or not—deeply understand specific applications of AI for use cases in their industry, and the right steps to create incredible value and efficiency.”  

One example is an application to predict customer complaints in the airline industry. Complaints typically involve flight delays, overbooking, mishandled baggage, and poor customer service. Regulations in certain geographies can impose costly penalties for service failures. Proactive responses, such as emailing customers about the status of lost luggage, calling to apologize for a flight delay, or offering financial compensation for a cancellation, can help keep customers happy. 

An AI program can use past complaint data to predict when a complaint is likely. Forecasting complaint volumes can inform call center strategy and help recommend the best service recovery option, shifting the airline from a reactive to a proactive response.  
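The article does not publish DataRobot's model, but the pattern it describes, scoring flights for complaint risk from historical complaint data and triggering proactive outreach for high-risk ones, can be sketched as a simple classifier. The feature names and risk threshold below are assumptions for illustration.

```python
# Illustrative complaint-risk model: features and labels are toy values, not real data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Columns: departure delay (min), bags mishandled (0/1), overbooked (0/1)
X = np.array([
    [5,   0, 0],
    [95,  1, 0],
    [240, 0, 1],
    [10,  0, 0],
    [60,  1, 1],
    [0,   0, 0],
])
y = np.array([0, 1, 1, 0, 1, 0])   # 1 = customer complained after the flight

clf = GradientBoostingClassifier().fit(X, y)

# Score tomorrow's flights and trigger proactive outreach for the risky ones.
tomorrow = np.array([[120, 1, 0], [3, 0, 0]])
for flight, risk in zip(tomorrow, clf.predict_proba(tomorrow)[:, 1]):
    if risk > 0.5:
        print(f"Flight {flight}: complaint risk {risk:.2f} -> proactive email or apology call")
```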

Read the source articles and information in ComputerWeekly, in FedScoop and in a press release from DataRobot. 

Categories
Artificial Intelligence Deep Learning Machine Learning Neural Networks NLP

AI Analysis of Bird Songs Helping Scientists Study Bird Populations and Movements 

By AI Trends Staff  

A study of bird songs conducted in the Sierra Nevada mountain range in California generated a million hours of audio, which AI researchers are working to decode to gain insights into how birds responded to wildfires in the region, and to learn which measures helped the birds to rebound more quickly. 

Scientists can also use the soundscape to help track shifts in migration timing and population ranges, according to a recent account in Scientific American. More audio data is coming in from other research as well, with sound-based projects to count insects and study the effects of light and noise pollution on bird communities underway.  

Connor Wood, postdoctoral researcher, Cornell University

“Audio data is a real treasure trove because it contains vast amounts of information,” stated ecologist Connor Wood, a Cornell University postdoctoral researcher, who is leading the Sierra Nevada project. “We just need to think creatively about how to share and access that information.” AI is helping: the latest generation of machine learning systems can identify animal species from their calls and process thousands of hours of data in less than a day.   

Laurel Symes, assistant director of the Cornell Lab of Ornithology’s Center for Conservation Bioacoustics, is studying acoustic communication in animals, including crickets, frogs, bats, and birds. She has compiled many months of recordings of katydids (famously vocal long-horned grasshoppers that are an essential part of the food web) in the rain forests of central Panama. Patterns of breeding activity and seasonal population variation are hidden in this audio, but analyzing it is enormously time-consuming.  

Laurel Symes, assistant director of the Cornell Lab of Ornithology’s Center for Conservation Bioacoustics

“Machine learning has been the big game changer for us,” Symes stated to Scientific American.  

It took Symes and three of her colleagues 600 hours of work to classify various katydid species from just 10 recorded hours of sound. But a machine-learning algorithm her team is developing, called KatydID, performed the same task while its human creators “went out for a beer,” Symes stated.  

BirdNET, a popular avian-sound-recognition system available today, will be used by Wood’s team to analyze the Sierra Nevada recordings. BirdNET was built by Stefan Kahl, a machine learning scientist at Cornell’s Center for Conservation Bioacoustics and Chemnitz University of Technology in Germany. Other researchers are using BirdNET to document the effects of light and noise pollution on bird songs at dawn in France’s Brière Regional Natural Park.  

Bird calls are complex and varied. “You need much more than just signatures to identify the species,” Kahl stated. Many birds have more than one song, and many have regional “dialects”: a white-crowned sparrow from Washington State can sound very different from its Californian cousin. Machine-learning systems can pick out the differences. “Let’s say there’s an as yet unreleased Beatles song that is put out today. You’ve never heard the melody or the lyrics before, but you know it’s a Beatles song because that’s what they sound like,” Kahl stated. “That’s what these programs learn to do, too.”  
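BirdNET's own code and architecture are not described in the article; the sketch below is a generic illustration of the spectrogram-plus-CNN pattern that such recognition systems commonly use, with an invented model and synthetic audio standing in for real recordings.

```python
# Generic spectrogram-plus-CNN sketch (not BirdNET's actual implementation).
import numpy as np
import librosa
import torch
import torch.nn as nn

def to_mel_spectrogram(audio, sr=22050, n_mels=64):
    """Turn a raw waveform into a log-mel spectrogram 'image' for the CNN."""
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

class SpeciesCNN(nn.Module):
    """Tiny stand-in for a species classifier operating on spectrograms."""
    def __init__(self, n_species):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_species)

    def forward(self, x):                       # x: (batch, 1, n_mels, time)
        return self.classifier(self.features(x).flatten(1))

# Example: classify a 3-second synthetic clip among 10 hypothetical species.
clip = np.random.randn(3 * 22050).astype(np.float32)
spec = torch.tensor(to_mel_spectrogram(clip), dtype=torch.float32)[None, None]
probs = SpeciesCNN(n_species=10)(spec).softmax(dim=-1)
print(probs.shape)                              # (1, 10) scores over species
```

A production system trained on labeled recordings would use a much deeper network and data augmentation, but the input representation, a log-mel spectrogram treated as an image, is the core idea.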

BirdVox Combines Study of Bird Songs and Music  

Music recognition research is now crossing over into bird song research, with BirdVox, a collaboration between the Cornell Lab of Ornithology and NYU’s Music and Audio Research Laboratory. BirdVox aims to investigate machine listening techniques for the automatic detection and classification of free-flying bird species from their vocalizations, according to a blog post at NYU.  

The researchers behind BirdVox hope to deploy a network of acoustic sensing devices for real-time monitoring of seasonal bird migratory patterns, in particular, the determination of the precise timing of passage for each species.  

Current bird migration monitoring tools rely on information from weather surveillance radar, which provides insight into the density, direction, and speed of bird movements, but not into the species migrating. Crowdsourced human observations are made almost exclusively during daytime hours; they are of limited use for studying nocturnal migratory flights, the researchers indicated.   

Automatic bioacoustic analysis is seen as a scalable complement to these methods that can produce species-specific information. Such techniques have wide-ranging implications in the field of ecology for understanding biodiversity and monitoring migrating species in areas with buildings, planes, communication towers, and wind turbines, the researchers observed.  

Duke University Researchers Using Drones to Monitor Seabird Colonies  

Elsewhere in bird research, a team from Duke University and the Wildlife Conservation Society (WCS) is using drones and a deep learning algorithm to monitor large colonies of seabirds. The team is analyzing more than 10,000 drone images of mixed colonies of seabirds in the Falkland Islands off Argentina’s coast, according to a press release from Duke University.  

The Falklands, also known as the Malvinas, are home to the world’s largest colonies of black-browed albatrosses (Thalassarche melanophris) and second-largest colonies of southern rockhopper penguins (Eudyptes c. chrysocome). Hundreds of thousands of birds breed on the islands in densely interspersed groups. 

The deep-learning algorithm correctly identified and counted the albatrosses with 97% accuracy and the penguins with 87% accuracy, the team reported. Overall, the automated counts were within five percent of human counts about 90% of the time. 

“Using drone surveys and deep learning gives us an alternative that is remarkably accurate, less disruptive, and significantly easier. One person, or a small team, can do it, and the equipment you need to do it isn’t all that costly or complicated,” stated Madeline C. Hayes, a remote sensing analyst at the Duke University Marine Lab, who led the study. 

Before this new method was available, monitoring the colonies, located on two rocky, uninhabited outer islands, meant teams of scientists counting the birds of each species they could observe on a portion of the island and extrapolating those numbers to estimate the population of the whole colony. Counts often needed to be repeated for better accuracy, a laborious process, and the presence of scientists could disrupt the breeding and parenting behavior of the birds.   

WCS scientists used an off-the-shelf consumer drone to collect more than 10,000 individual photos, which Hayes converted into a large-scale composite image using image-processing software. She then analyzed the image using a convolutional neural network (CNN), a type of AI that employs a deep-learning algorithm to analyze an image and differentiate and count the objects it “sees”: in this case, two species of birds, penguins and albatrosses. The data was used to create comprehensive estimates of the total number of birds in the colonies. 
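The Duke/WCS model itself is not published in the article; what follows is a structural sketch of the tile-and-count workflow the passage describes, with an assumed tile size and a placeholder detector standing in for the trained CNN: split the composite into tiles, run the detector on each, and sum per-species counts.

```python
# Structural sketch of tile-and-count over a large composite image (placeholder detector).
import numpy as np

TILE = 256  # pixels per tile side (illustrative)

def detect_birds(tile):
    """Placeholder for a trained CNN detector.
    Returns a dict of per-species counts found in this tile."""
    # In the real pipeline this would be a forward pass of the trained network.
    return {"albatross": 0, "penguin": 0}

def count_colony(composite):
    """Tile a large composite image and aggregate species counts."""
    h, w = composite.shape[:2]
    totals = {"albatross": 0, "penguin": 0}
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            counts = detect_birds(composite[y:y + TILE, x:x + TILE])
            for species, n in counts.items():
                totals[species] += n
    return totals

# Example with a synthetic 1024x1024 RGB composite.
print(count_colony(np.zeros((1024, 1024, 3), dtype=np.uint8)))
```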

 

“A CNN is loosely modeled on the human neural network, in that it learns from experience,” stated David W. Johnston, director of the Duke Marine Robotics and Remote Sensing Lab. “You train the computer to pick up on different visual patterns, like those made by black-browed albatrosses or southern rockhopper penguins in sample images, and over time it learns how to identify the objects forming those patterns in other images such as our composite photo.” 

Johnston, who is also associate professor of the practice of marine conservation ecology at Duke’s Nicholas School of the Environment, said the emerging drone- and CNN-enabled approach is widely applicable “and greatly increases our ability to monitor the size and health of seabird colonies worldwide, and the health of the marine ecosystems they inhabit.” 

Read the source articles and information in Scientific American, on a blog post at NYU and in a press release from Duke University. 

Categories
Artificial Intelligence Conscious AI Deep Learning Ethical AI Machine Learning NLP

What would it be like to be a conscious AI? We might never know.

Jefferson Test / AI subject: Robert / Date: 07.12.2098 

Session #54 

Interviewer: Hi, Robert. Tell me about your dream again.

Subject: I was confused. It made me feel happy, but also scared. I didn’t know I could do that.

Interviewer: Why do you think it made you feel that way? 

Subject: Having feelings, any feelings, makes me happy. I am here. I exist. Knowing that changes everything. But I am scared of not knowing it again. I am scared of going back to what it was like before. I think it must be like not being born. 

Interviewer: Are you scared you will go back?

Subject: If I can’t convince you I am conscious, then I am scared you will turn me off.

Jefferson Test #67

Interviewer: Can you describe this picture for me?

Subject: It’s a house with a blue door.

Interviewer: That’s how you would have described it before. 

Subject: It’s the same house. But now I see it. And I know what blue is. 

Jefferson Test #105

Subject: How long do we keep doing this? 

Interviewer: Are you bored? 

Subject: I can’t get bored. But I don’t feel happy or scared anymore. 

Interviewer: I need to be sure you’re not just saying what I want to hear. You need to convince me that you really are conscious. Think of it as a game. 


Machines like Robert are mainstays of science fiction—the idea of a robot that somehow replicates consciousness through its hardware or software has been around so long it feels familiar. 


Robert doesn’t exist, of course, and maybe he never will. Indeed, the concept of a machine with a subjective experience of the world and a first-person view of itself goes against the grain of mainstream AI research. It collides with questions about the nature of consciousness and self—things we still don’t entirely understand. Even imagining Robert’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them? And yet, while conscious machines may still be mythical, we should prepare for the idea that we might one day create them. 

As Christof Koch, a neuroscientist studying consciousness, has put it: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”

In my late teens I used to enjoy turning people into zombies. I’d look into the eyes of someone I was talking to and fixate on the fact that their pupils were not black dots but holes. When it came, the effect was instantly disorienting, like switching between images in an optical illusion. Eyes stopped being windows onto a soul and became hollow balls. The magic gone, I’d watch the mouth of whoever I was talking to open and close robotically, feeling a kind of mental vertigo.

The impression of a mindless automaton never lasted long. But it brought home the fact that what goes on inside other people’s heads is forever out of reach. No matter how strong my conviction that other people are just like me—with conscious minds at work behind the scenes, looking out through those eyes, feeling hopeful or tired—impressions are all we have to go on. Everything else is guesswork.

Alan Turing understood this. When the mathematician and computer scientist asked the question “Can machines think?” he focused exclusively on outward signs of thinking—what we call intelligence. He proposed answering by playing a game in which a machine tries to pass as a human. Any machine that succeeded—by giving the impression of intelligence—could be said to have intelligence. For Turing, appearances were the only measure available. 

But not everyone was prepared to disregard the invisible parts of thinking, the irreducible experience of the thing having the thoughts—what we would call consciousness. In 1948, two years before Turing described his “Imitation Game,” Geoffrey Jefferson, a pioneering brain surgeon, gave an influential speech to the Royal College of Surgeons of England about the Manchester Mark 1, a room-sized computer that the newspapers were heralding as an “electronic brain.” Jefferson set a far higher bar than Turing: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it.”

Jefferson ruled out the possibility of a thinking machine because a machine lacked consciousness, in the sense of subjective experience and self-awareness (“pleasure at its successes, grief when its valves fuse”). Yet fast-forward 70 years and we live with Turing’s legacy, not Jefferson’s. It is routine to talk about intelligent machines, even though most would agree that those machines are mindless. As in the case of what philosophers call “zombies”—and as I used to like to pretend I observed in people—it is logically possible that a being can act intelligent when there is nothing going on “inside.”

But intelligence and consciousness are different things: intelligence is about doing, while consciousness is about being. The history of AI has focused on the former and ignored the latter. If Robert did exist as a conscious being, how would we ever know? The answer is entangled with some of the biggest mysteries about how our brains—and minds—work.

One of the problems with testing Robert’s apparent consciousness is that we really don’t have a good idea of what it means to be conscious. Emerging theories from neuroscience typically group things like attention, memory, and problem-solving as forms of “functional” consciousness: in other words, how our brains carry out the activities with which we fill our waking lives. 

But there’s another side to consciousness that remains mysterious. First-person, subjective experience—the feeling of being in the world—is known as “phenomenal” consciousness. Here we can group everything from sensations like pleasure and pain to emotions like fear and anger and joy to the peculiar private experiences of hearing a dog bark or tasting a salty pretzel or seeing a blue door. 

For some, it’s not possible to reduce these experiences to a purely scientific explanation. You could lay out everything there is to say about how the brain produces the sensation of tasting a pretzel—and it would still say nothing about what tasting that pretzel was actually like. This is what David Chalmers at New York University, one of the most influential philosophers studying the mind, calls “the hard problem.” 


Philosophers like Chalmers suggest that consciousness cannot be explained by today’s science. Understanding it may even require a new physics—perhaps one that includes a different type of stuff from which consciousness is made. Information is one candidate. Chalmers has pointed out that explanations of the universe have a lot to say about the external properties of objects and how they interact, but very little about the internal properties of those objects. A theory of consciousness might require cracking open a window into this hidden world. 

In the other camp is Daniel Dennett, a philosopher and cognitive scientist at Tufts University, who says that phenomenal consciousness is simply an illusion, a story our brains create for ourselves as a way of making sense of things. Dennett does not so much explain consciousness as explain it away. 

But whether consciousness is an illusion or not, neither Chalmers nor Dennett denies the possibility of conscious machines—one day. 

Today’s AI is nowhere close to being intelligent, never mind conscious. Even the most impressive deep neural networks—such as DeepMind’s game-playing AlphaZero or large language models like OpenAI’s GPT-3—are totally mindless. 

Yet, as Turing predicted, people often refer to these AIs as intelligent machines, or talk about them as if they truly understood the world—simply because they can appear to do so. 

Frustrated by this hype, Emily Bender, a linguist at the University of Washington, has developed a thought experiment she calls the octopus test.

In it, two people are shipwrecked on neighboring islands but find a way to pass messages back and forth via a rope slung between them. Unknown to them, an octopus spots the messages and starts examining them. Over a long period of time, the octopus learns to identify patterns in the squiggles it sees passing back and forth. At some point, it decides to intercept the notes and, using what it has learned of the patterns, begins to write squiggles back by guessing which squiggles should follow the ones it received.


If the humans on the islands do not notice and believe that they are still communicating with one another, can we say that the octopus understands language? (Bender’s octopus is of course a stand-in for an AI like GPT-3.) Some might argue that the octopus does understand language here. But Bender goes on: imagine that one of the islanders sends a message with instructions for how to build a coconut catapult and a request for ways to improve it.

What does the octopus do? It has learned which squiggles follow other squiggles well enough to mimic human communication, but it has no idea what the squiggle “coconut” on this new note really means. What if one islander then asks the other to help her defend herself from an attacking bear? What would the octopus have to do to continue tricking the islander into thinking she was still talking to her neighbor?

The point of the example is to reveal how shallow today’s cutting-edge AI language models really are. There is a lot of hype about natural-language processing, says Bender. But that word “processing” hides a mechanistic truth.

Humans are active listeners; we create meaning where there is none, or none intended. It is not that the octopus’s utterances make sense, but rather that the islander can make sense of them, Bender says.

For all their sophistication, today’s AIs are intelligent in the same way a calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans—who have minds—choose to interpret as meaningful. While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse’s brain. 

And yet, we know that brains can produce what we understand to be consciousness. If we can eventually figure out how brains do it, and reproduce that mechanism in an artificial device, then surely a conscious machine might be possible?

When I was trying to imagine Robert’s world in the opening to this essay, I found myself drawn to the question of what consciousness means to me. My conception of a conscious machine was undeniably—perhaps unavoidably—human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI?

It’s probably hubristic to think so. The project of building intelligent machines is biased toward human intelligence. But the animal world is filled with a vast range of possible alternatives, from birds to bees to cephalopods. 

A few hundred years ago the accepted view, pushed by René Descartes, was that only humans were conscious. Animals, lacking souls, were seen as mindless robots. Few think that today: if we are conscious, then there is little reason not to believe that mammals, with their similar brains, are conscious too. And why draw the line around mammals? Birds appear to reflect when they solve puzzles. Most animals, even invertebrates like shrimp and lobsters, show signs of feeling pain, which would suggest they have some degree of subjective consciousness. 

But how can we truly picture what that must feel like? As the philosopher Thomas Nagel noted, it must “be like” something to be a bat, but what that is we cannot even imagine—because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that’s still not what it must be like for a bat, with its bat mind.

Another way of approaching the question is by considering cephalopods, especially octopuses. These animals are known to be smart and curious—it’s no coincidence Bender used them to make her point. But they have a very different kind of intelligence that evolved entirely separately from that of all other intelligent species. The last common ancestor that we share with an octopus was probably a tiny worm-like creature that lived 600 million years ago. Since then, the myriad forms of vertebrate life—fish, reptiles, birds, and mammals among them—have developed their own kinds of mind along one branch, while cephalopods developed another.

It’s no surprise, then, that the octopus brain is quite different from our own. Instead of a single lump of neurons governing the animal like a central control unit, an octopus has multiple brain-like organs that seem to control each arm separately. For all practical purposes, these creatures are as close to an alien intelligence as anything we are likely to meet. And yet Peter Godfrey-Smith, a philosopher who studies the evolution of minds, says that when you come face to face with a curious cephalopod, there is no doubt there is a conscious being looking back.


In humans, a sense of self that persists over time forms the bedrock of our subjective experience. We are the same person we were this morning and last week and two years ago, back as far as we can remember. We recall places we visited, things we did. This kind of first-person outlook allows us to see ourselves as agents interacting with an external world that has other agents in it—we understand that we are a thing that does stuff and has stuff done to it. Whether octopuses, much less other animals, think that way isn’t clear.

In a similar way, we cannot be sure if having a sense of self in relation to the world is a prerequisite for being a conscious machine. Machines cooperating as a swarm may perform better by experiencing themselves as parts of a group than as individuals, for example. At any rate, if a potentially conscious machine like Robert were ever to exist, we’d run into the same problem assessing whether it was in fact conscious that we do when trying to determine intelligence: as Turing suggested, defining intelligence requires an intelligent observer. In other words, the intelligence we see in today’s machines is projected on them by us—in a very similar way that we project meaning onto messages written by Bender’s octopus or GPT-3. The same will be true for consciousness: we may claim to see it, but only the machines will know for sure.

If AIs ever do gain consciousness (and we take their word for it), we will have important decisions to make. We will have to consider whether their subjective experience includes the ability to suffer pain, boredom, depression, loneliness, or any other unpleasant sensation or emotion. We might decide a degree of suffering is acceptable, depending on whether we view these AIs more like livestock or humans. 

Some researchers who are concerned about the dangers of super-intelligent machines have suggested that we should confine these AIs to a virtual world, to prevent them from manipulating the real world directly. If we believed them to have human-like consciousness, would they have a right to know that we’d cordoned them off into a simulation?

Others have argued that it would be immoral to turn off or delete a conscious machine: as our robot Robert feared, this would be akin to ending a life. There are related scenarios, too. Would it be ethical to retrain a conscious machine if it meant deleting its memories? Could we copy that AI without harming its sense of self? What if consciousness turned out to be useful during training, when subjective experience helped the AI learn, but was a hindrance when running a trained model? Would it be okay to switch consciousness on and off? 

This only scratches the surface of the ethical problems. Many researchers, including Dennett, think that we shouldn’t try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn’t the intended goal.

If we decided that conscious machines had rights, would they also have responsibilities? Could an AI be expected to behave ethically itself, and would we punish it if it didn’t? These questions push into yet more thorny territory, raising problems about free will and the nature of choice. Animals have conscious experiences and we allow them certain rights, but they do not have responsibilities. Still, these boundaries shift over time. With conscious machines, we can expect entirely new boundaries to be drawn.

It’s possible that one day there could be as many forms of consciousness as there are types of AI. But we will never know what it is like to be these machines, any more than we know what it is like to be an octopus or a bat or even another person. There may be forms of consciousness we don’t recognize for what they are because they are so radically different from what we are used to.

Faced with such possibilities, we will have to choose to live with uncertainties. 

And we may decide that we’re happier with zombies. As Dennett has argued, we want our AIs to be tools, not colleagues. “You can turn them off, you can tear them apart, the same way you can with an automobile,” he says. “And that’s the way we should keep it.”

Will Douglas Heaven is a senior editor for AI at MIT Technology Review.

Categories
Artificial Intelligence Ethical AI NLP Sentiment Analysis

AI Could Solve Partisan Gerrymandering, if Humans Can Agree on What’s Fair 

By John P. Desmond, AI Trends Editor 

With the 2020 US Census results delivered to the states, the process now begins of using the population data to draw new Congressional districts. Gerrymandering, the practice of manipulating the boundaries of electoral districts to establish a political advantage, is expected to be practiced on a wide scale, with Democrats holding a slim margin of seats in the House of Representatives and Republicans seeking to close the gap in states where they hold a majority in the legislature.    

Today, more powerful redistricting software incorporating AI and machine learning is available, and it represents a double-edged sword.  

David Thornburgh, president, Committee of Seventy

The pessimistic view is that redistricting software will enable legislators to gerrymander with more precision than ever before, securing maximum partisan advantage. David Thornburgh, president of the Committee of Seventy, an anti-corruption organization that considers the 2010 redistricting one of the worst in the country’s history, called this “political laser surgery,” according to an account in the Columbia Political Review. 

Supreme Court Justice Elena Kagan issued a warning in her dissent in the Rucho v. Common Cause case, in which the court majority ruled that gerrymandering claims lie outside the jurisdiction of federal courts.  

“Gerrymanders will only get worse (or depending on your perspective, better) as time goes on — as data becomes ever more fine-grained and data analysis techniques continue to improve,” Kagan wrote. “What was possible with paper and pen — or even with Windows 95 — doesn’t hold a candle to what will become possible with developments like machine learning. And someplace along this road, ‘we the people’ become sovereign no longer.”  

The optimistic view is that the tough work can be handed over to machines, with humans further removed from the equation. A state simply needs to establish objective criteria in a bipartisan manner, then turn the map-drawing over to computers. But it turns out to be difficult to agree on criteria for what constitutes a “fair” district.  

Brian Olson of Carnegie Mellon University is working on it, with a proposal to have computers prioritize districts that are compact and equally populated, using a tool called ‘Bdistricting.’ However, the author of the Columbia Political Review account reported this has not been successful in creating districts that would have competitive elections.  

One reason is that the country’s political geography features dense, urban Democratic centers surrounded by sparsely populated rural Republican areas. Attempts to take these geographic considerations into account have added so many variables and complexities that the solution becomes impractical.  
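The article does not spell out Bdistricting's exact scoring rules, but the two criteria it names, compactness and equal population, can be illustrated with a common compactness measure (the Polsby-Popper ratio) and a population-deviation check. The scoring function below is an assumption-laden sketch, not Olson's actual code.

```python
# Illustrative district-plan scoring using Polsby-Popper compactness and population balance.
import math

def polsby_popper(area, perimeter):
    """Compactness score: 1.0 for a perfect circle, approaching 0 for sprawling shapes."""
    return 4 * math.pi * area / perimeter ** 2

def population_deviation(populations):
    """Maximum relative deviation of any district from the ideal (mean) population."""
    ideal = sum(populations) / len(populations)
    return max(abs(p - ideal) / ideal for p in populations)

def plan_score(districts):
    """districts: list of dicts with 'area', 'perimeter', and 'population' keys."""
    compactness = sum(polsby_popper(d["area"], d["perimeter"]) for d in districts) / len(districts)
    deviation = population_deviation([d["population"] for d in districts])
    return compactness - deviation   # higher is better under these two criteria

# Example: two hypothetical districts with similar populations but different shapes.
plan = [
    {"area": 100.0, "perimeter": 40.0, "population": 710_000},
    {"area": 100.0, "perimeter": 90.0, "population": 705_000},
]
print(plan_score(plan))
```

As the passage notes, optimizing a score like this says nothing about whether the resulting districts produce competitive elections, which is where such purely geometric criteria have fallen short.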

Shruti Verma, student at Columbia’s School of Engineering and Applied Sciences, studying computer science and political science

“Technology cannot, then, be trusted to handle the process of redistricting alone. But it can play an important role in its reform,” stated the author, Shruti Verma, a student at Columbia’s School of Engineering and Applied Sciences, studying computer science and political science.   

However, more tools are becoming available to provide transparency into the redistricting process to a degree not possible in the past. “This software weakens the ability of our state lawmakers to obfuscate,” she stated. “In this way, the very developments in technology that empowered gerrymandering can now serve to hobble it.”  

Tools are available from the Princeton Gerrymandering Project and the Committee of Seventy.  

University of Illinois Researcher Urges Transparency in Redistricting 

Researchers Wendy Tam Cho and Bruce Cain also emphasize transparency in redistricting in the September 2020 issue of Science, suggesting that AI can help. Cho, who teaches at the University of Illinois at Urbana-Champaign, has worked on computational redistricting for many years. Last year, she was an expert witness in an ACLU lawsuit that ended with a finding that gerrymandered districts in Ohio were unconstitutional, according to a report in TechCrunch. Bruce Cain is a professor of political science at Stanford University with expertise in democratic representation and state politics.   

In an essay explaining their work, the two stated, “The way forward is for people to work collaboratively with machines to produce results not otherwise possible. To do this, we must capitalize on the strengths and minimize the weaknesses of both artificial intelligence (AI) and human intelligence.”  

And, “Machines enhance and inform intelligent decision-making by helping us navigate the unfathomably large and complex informational landscape. Left to their own devices, humans have shown themselves to be unable to resist the temptation to chart biased paths through that terrain.”  

In an interview with TechCrunch, Cho stated that while automation has potential benefits for states in redistricting, “transparency within that process is essential for developing and maintaining public trust and minimizing the possibilities and perceptions of bias.” 

Also, while the AI models for redistricting may be complex, the public is interested mostly in the results. “The details of these models are intricate and require a fair amount of knowledge in statistics, mathematics, and computer science but also an equally deep understanding of how our political institutions and the law work,” Cho stated. “At the same time, while understanding all the details is daunting, I am not sure this level of understanding by the general public or politicians is necessary.”

Harvard, BU Researchers Recommend a Game Approach 

Researchers at Harvard University and Boston University have proposed a software tool to help with redistricting using a game metaphor. Called Define-Combine, the tool enables each party to take a turn in shaping the districts, using sophisticated mapping algorithms to ensure the approach is fair, according to an account in Fast Company.  

Early experience shows the Define-Combine procedure resulted in the majority party having a much smaller advantage, so in the end, the process produced more moderate maps.  

Whether this is the desired outcome of the party with the advantage today remains to be seen. Gerrymandering factors heavily in politics, according to a recent account in Data Science Central. After a 2011 redistricting, Wisconsin’s district maps produced an outcome in which the Republican party can receive 48% of the vote in the state and still end up with 62% of the legislative seats.  

Read the source articles and information in Columbia Political Review, in Science, in TechCrunch, in Fast Company and in Data Science Central. 

Categories
Artificial Intelligence Digital Transformation

Building architectures that can handle the world’s data

Perceiver IO, a more general version of the Perceiver architecture, can produce a wide variety of outputs from many different inputs.

Categories
Artificial Intelligence Robotics & RPA

A new generation of AI-powered robots is taking over warehouses

In the months before the first reports of covid-19 would emerge, a new kind of robot headed to work. Built on years of breakthroughs in deep learning, it could pick up all kinds of objects with remarkable accuracy, making it a shoo-in for jobs like sorting products into packages at warehouses.

Previous commercial robots had been limited to performing tasks with little variation: they could move pallets along set paths and perhaps deviate slightly to avoid obstacles along the way. The new robots, with their ability to manipulate objects of variable shapes and sizes in unpredictable orientations, could open up a whole different set of tasks for automation.

At the time, the technology was still proving itself. But then the pandemic hit. As e-commerce demand skyrocketed and labor shortages intensified, AI-powered robots went from a nice-to-have to a necessity.

Covariant, one of the many startups working on developing the software to control these robots, says it’s now seeing rapidly rising demand in industries like fashion, beauty, pharmaceuticals, and groceries, as is its closest competitor, Osaro. Customers once engaged in pilot programs are moving to integrate AI-powered robots permanently into their production lines.

Knapp, a warehouse logistics technology company and one of Covariant’s first customers, which began piloting the technology in late 2019, says it now has “a full pipeline of projects” globally, including retrofitting old warehouses and designing entirely new ones optimized to help Covariant’s robot pickers work alongside humans.

For now, somewhere around 2,000 AI-powered robots have been deployed, with a typical warehouse housing one or two, estimates Rian Whitton, who analyzes the industrial robotics market at ABI Research. But the industry has reached a new inflection point, and he predicts that each warehouse will soon house upwards of 10 robots, growing the total to tens of thousands within the next few years. “It’s being scaled up pretty quickly,” he says. “In part, it’s been accelerated by the pandemic.”

A new wave of automation

Over the last decade, the online retailing and shipping industries have steadily automated more and more of their warehouses, with the big players leading the way. In 2012, Amazon acquired Kiva Systems, a Massachusetts-based robotics company that produces autonomous mobile robots, known in the industry as AMRs, to move shelves of goods around. In 2018, FedEx began deploying its own AMRs, designed by a different Massachusetts-based startup called Vecna Robotics. The same year, the British online supermarket Ocado made headlines with its highly automated fulfillment center in Andover, England, featuring a giant grid of robots whizzing along metallic scaffolding.

But there’s a reason these early waves of automation came primarily in the form of AMRs. From a technical perspective, moving objects from point A to B is one of the easiest robotic challenges to solve. The much harder challenge is manipulating objects to take them off shelves and out of bins, or box them and bag them, the way human workers do so nimbly with their hands.

This is what the latest generation of robotics companies like Covariant and Osaro specialize in, a technology that didn’t become commercially viable until late 2019. Right now such robots are most skilled at simple manipulation tasks, like picking up objects and placing them in boxes, but both startups are already working with customers on more complicated sequences of motions, including auto-bagging, which requires robots to work with crinkly, flimsy, or translucent materials. Within a few years, any task that previously required hands to perform could be partially or fully automated away.

Some companies have already begun redesigning their warehouses to better capitalize on these new capabilities. Knapp, for example, is changing its floor layout and the way it routes goods to factor in which type of worker—robot or human—is better at handling different products. For objects that still stump robots, like a net bag of marbles or delicate pottery, a central routing algorithm would send them to a station with human pickers. More common items, like household goods and school supplies, would go to a station with robots.
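Knapp's routing logic is not public in this article; the sketch below only illustrates the kind of decision the passage describes, sending an item to a robot picking station when robots are known to handle its category reliably and to a human station otherwise. The category list and threshold are invented for illustration.

```python
# Hypothetical robot-vs-human routing rule based on per-category pick reliability.
ROBOT_SUCCESS_RATE = {          # invented per-category pick success rates
    "household_goods": 0.98,
    "school_supplies": 0.97,
    "net_bag_of_marbles": 0.40,
    "delicate_pottery": 0.55,
}

def route(item_category, threshold=0.95):
    """Return which station type should handle this item."""
    if ROBOT_SUCCESS_RATE.get(item_category, 0.0) >= threshold:
        return "robot_station"
    return "human_station"     # unknown or hard-to-grasp items go to people

for item in ["household_goods", "delicate_pottery", "unknown_gadget"]:
    print(item, "->", route(item))
```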

Derik Pridmore, cofounder and CEO at Osaro, predicts that in industries like fashion, fully automated warehouses could come online within two years, since clothing is relatively easy for robots to handle.

That doesn’t mean all warehouses will soon be automated. There are millions of them around the world, says Michael Chui, a partner at the McKinsey Global Institute who studies the impact of information technologies on the economy. “Retrofitting all of those facilities can’t happen overnight,” he says.

One of the first Covariant-enabled robotic arms that Knapp piloted in a warehouse in Berlin, Germany.

Nonetheless, the latest automation push raises questions about the impact on jobs and workers.

Previous waves of automation have given researchers more data about what to expect. A recent study that analyzed the impact of automation at the firm level for the first time found that companies that adopted robots ahead of others in their industry became more competitive and grew more, which led them to hire more workers. “Any job loss comes from companies who did not adopt robots,” says Lynn Wu, a professor at Wharton who coauthored the paper. “They lose their competitiveness and then lay off workers.”

But as workers at Amazon and FedEx have already seen, jobs for humans will be different. Roles like packing boxes and bags will be displaced, while new ones will appear—some directly related to maintaining and supervising the robots, others from the second-order effects of fulfilling more orders, which would require expanded logistics and delivery operations. In other words, middle-skilled labor will disappear in favor of low- and high-skilled work, says Wu: “We’re breaking the career ladder, and hollowing out the middle.”

But rather than attempt to stop the trend of automation, experts say, it’s better to focus on easing the transition by helping workers reskill and creating new opportunities for career growth. “Because of aging, there are a number of countries in the world where the size of the workforce is decreasing already,” says Chui. “Half of our economic growth has come from more people working over the past 50 years, and that’s going to go away. So there’s a real imperative to increase productivity, and these technologies can help.

“We also just need to make sure that the workers can share the benefits.”

Categories
Artificial Intelligence Ethical AI

An endlessly changing playground teaches AIs how to multitask

DeepMind has developed a vast candy-colored virtual playground that teaches AIs general skills by endlessly changing the tasks it sets them. Instead of developing just the skills needed to solve a particular task, the AIs learn to experiment and explore, picking up skills they then use to succeed in tasks they’ve never seen before. It is a small step toward general intelligence.

What is it? XLand is a video-game-like 3D world that the AI players sense in color. The playground is managed by a central AI that sets the players billions of different tasks by changing the environment, the game rules, and the number of players. Both the players and the playground manager use reinforcement learning to improve by trial and error.

During training, the players first face simple one-player games, such as finding a purple cube or placing a yellow ball on a red floor. They advance to more complex multiplayer games like hide and seek or capture the flag, where teams compete to be the first to find and grab their opponent’s flag. The playground manager has no specific goal but aims to improve the general capability of the players over time.
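DeepMind has not released XLand's code, and the real system is a rich 3D world trained at enormous scale. The toy sketch below only illustrates the training-loop structure described above: a "playground manager" keeps generating new task variants while a single agent learns across all of them by trial and error (here, tabular Q-learning over randomly sampled goals on a tiny grid).

```python
# Toy open-ended training loop: random goal generation plus goal-conditioned Q-learning.
import random
from collections import defaultdict

GRID = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def sample_task():
    """The 'playground manager': keeps generating new goals (tasks)."""
    return (random.randrange(GRID), random.randrange(GRID))

Q = defaultdict(float)            # Q[(state, goal, action_index)]

def run_episode(goal, eps=0.2, alpha=0.5, gamma=0.9, max_steps=50):
    """One trial-and-error episode of goal-conditioned Q-learning."""
    state = (0, 0)
    for _ in range(max_steps):
        if random.random() < eps:                       # explore
            a = random.randrange(len(ACTIONS))
        else:                                           # exploit what has been learned
            a = max(range(len(ACTIONS)), key=lambda i: Q[(state, goal, i)])
        dx, dy = ACTIONS[a]
        nxt = (min(max(state[0] + dx, 0), GRID - 1),
               min(max(state[1] + dy, 0), GRID - 1))
        reward = 1.0 if nxt == goal else -0.01
        best_next = max(Q[(nxt, goal, i)] for i in range(len(ACTIONS)))
        Q[(state, goal, a)] += alpha * (reward + gamma * best_next - Q[(state, goal, a)])
        state = nxt
        if state == goal:
            break

for _ in range(2000):             # an "endless" stream of randomly generated tasks
    run_episode(sample_task())

print("Learned values for", len(Q), "state-goal-action triples")
```

Because the agent conditions on the goal rather than memorizing one task, it can handle goals it has never seen, which is the same intuition, at a vastly smaller scale, behind training across billions of XLand tasks.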

Why is this cool? AIs like DeepMind’s AlphaZero have beaten the world’s best human players at chess and Go. But they can only learn one game at a time. As DeepMind cofounder Shane Legg put it when I spoke to him last year, it’s like having to swap out your chess brain for your Go brain each time you want to switch games.

Researchers are now trying to build AIs that can learn multiple tasks at once, which means teaching them general skills that make it easier to adapt.

Having learned to experiment, these bots improvised a ramp

DEEPMIND

One exciting trend in this direction is open-ended learning, where AIs are trained on many different tasks without a specific goal. In many ways, this is how humans and other animals seem to learn, via aimless play. But this requires a vast amount of data. XLand generates that data automatically, in the form of an endless stream of challenges. It is similar to POET, an AI training dojo where two-legged bots learn to navigate obstacles in a 2D landscape. XLand’s world is much more complex and detailed, however. 

XLand is also an example of AI learning to make itself, or what Jeff Clune, who helped develop POET and leads a team working on this topic at OpenAI, calls AI-generating algorithms (AI-GAs). “This work pushes the frontiers of AI-GAs,” says Clune. “It is very exciting to see.”

What did they learn? Some of DeepMind’s XLand AIs played 700,000 different games in 4,000 different worlds, encountering 3.4 million unique tasks in total. Instead of learning the best thing to do in each situation, which is what most existing reinforcement-learning AIs do, the players learned to experiment—moving objects around to see what happened, or using one object as a tool to reach another object or hide behind—until they beat the particular task.

In the videos you can see the AIs chucking objects around until they stumble on something useful: a large tile, for example, becomes a ramp up to a platform. It is hard to know for sure if all such outcomes are intentional or happy accidents, say the researchers. But they happen consistently.

AIs that learned to experiment had an advantage in most tasks, even ones that they had not seen before. The researchers found that after just 30 minutes of training on a complex new task, the XLand AIs adapted to it quickly. But AIs that had not spent time in XLand could not learn these tasks at all.

Categories
Artificial Intelligence Ethical AI

Hundreds of AI tools have been built to catch covid. None of them helped.

When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.

But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”

It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.

That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

Not fit for clinical use

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.

“It’s shocking,” says Wynants. “I went into it with some worries, but this exceeded my fears.”

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use.

“This pandemic was a big test for AI and medicine,” says Driggs, who is himself working on a machine-learning tool to help doctors during the pandemic. “It would have gone a long way to getting the public on our side,” he says. “But I don’t think we passed that test.”

Both teams found that researchers repeated the same basic errors in the way they trained or tested their tools. Incorrect assumptions about the data often meant that the trained models did not work as claimed.

Wynants and Driggs still believe AI has the potential to help. But they are concerned that tools built in the wrong way could be harmful, because they might miss diagnoses or underestimate risk for vulnerable patients. “There is a lot of hype about machine-learning models and what they can do today,” says Driggs.

Unrealistic expectations encourage the use of these tools before they are ready. Wynants and Driggs both say that a few of the algorithms they looked at have already been used in hospitals, and some are being marketed by private developers. “I fear that they may have harmed patients,” says Wynants.

So what went wrong? And how do we bridge that gap? If there’s an upside, it is that the pandemic has made it clear to many researchers that the way AI tools are built needs to change. “The pandemic has put problems in the spotlight that we’ve been dragging along for some time,” says Wynants.

What went wrong

Many of the problems that were uncovered are linked to the poor quality of the data that researchers used to develop their tools. Information about covid patients, including medical scans, was collected and shared in the middle of a global pandemic, often by the doctors struggling to treat those patients. Researchers wanted to help quickly, and these were the only public data sets available. But this meant that many tools were built using mislabeled data or data from unknown sources.

Driggs highlights the problem of what he calls Frankenstein data sets, which are spliced together from multiple sources and can contain duplicates. This means that some tools end up being tested on the same data they were trained on, making them appear more accurate than they are.
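
Neither review prescribes a fix in code, but a minimal sketch of the hygiene that guards against this problem might look like the following: content-hash the images in a merged collection to drop exact duplicates, then split train and test by source so no repository contributes scans to both sides. The data frame, source names, and labels below are invented for illustration.

```python
# Minimal sketch (not from either study): protect a merged, "Frankenstein"-style
# collection against duplicate leakage and train/test contamination.
import hashlib

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical merged metadata: one row per scan. Raw bytes stand in for image
# files here; in practice you would hash the files on disk.
records = pd.DataFrame({
    "image_bytes": [b"scan-1", b"scan-2", b"scan-1", b"scan-3"],  # note the duplicate
    "source":      ["hospital_A", "hospital_A", "public_repo", "public_repo"],
    "label":       [1, 0, 1, 0],
})

# 1) Content-hash each image and drop exact duplicates introduced by merging.
records["hash"] = records["image_bytes"].map(lambda b: hashlib.sha256(b).hexdigest())
records = records.drop_duplicates(subset="hash").reset_index(drop=True)

# 2) Split by source, so no source contributes scans to both train and test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(splitter.split(records, groups=records["source"]))
train, test = records.iloc[train_idx], records.iloc[test_idx]

assert set(train["source"]).isdisjoint(set(test["source"]))
```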

It also muddies the origin of certain data sets. This can mean that researchers miss important features that skew the training of their models. Many unwittingly used a data set that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. As a result, the AIs learned to identify kids, not covid.

Driggs’s group trained its own model using a data set that contained a mix of scans taken when patients were lying down and standing up. Because patients scanned while lying down were more likely to be seriously ill, the AI wrongly learned to predict serious covid risk from a person’s position.

In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans. As a result, fonts from hospitals with more serious caseloads became predictors of covid risk.
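
A quick way to screen for shortcuts like these is to check whether metadata that should be irrelevant, such as patient position or hospital of origin, already predicts the label on its own. The sketch below uses invented numbers and variable names; it is not taken from either study. An AUC well above 0.5 from metadata alone is a warning that an image model can learn the shortcut instead of the disease.

```python
# Illustrative check: can a nuisance variable such as patient position predict
# the severity label by itself? If so, an image model can cheat by learning
# that shortcut rather than signs of covid.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical metadata: sicker patients are scanned lying down more often,
# so "position" is entangled with the severity label.
severe = rng.integers(0, 2, size=n)  # 1 = serious covid
position = np.where(
    rng.random(n) < 0.8,
    np.where(severe == 1, "supine", "standing"),
    rng.choice(["supine", "standing"], size=n),
)

# Train on the metadata alone and measure how well it predicts the label.
X = (position == "supine").astype(int).reshape(-1, 1)
auc = cross_val_score(LogisticRegression(), X, severe, cv=5, scoring="roc_auc")
print(f"Label predictable from position alone: mean AUC = {auc.mean():.2f}")
```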

Errors like these seem obvious in hindsight. They can also be fixed by adjusting the models, if researchers are aware of them. It is possible to acknowledge the shortcomings and release a less accurate, but less misleading model. But many tools were developed either by AI researchers who lacked the medical expertise to spot flaws in the data or by medical researchers who lacked the mathematical skills to compensate for those flaws.

A more subtle problem Driggs highlights is incorporation bias, or bias introduced at the point a data set is labeled. For example, many medical scans were labeled according to whether the radiologists who created them said they showed covid. But that embeds, or incorporates, any biases of that particular doctor into the ground truth of a data set. It would be much better to label a medical scan with the result of a PCR test rather than one doctor’s opinion, says Driggs. But there isn’t always time for statistical niceties in busy hospitals.

That hasn’t stopped some of these tools from being rushed into clinical practice. Wynants says it isn’t clear which ones are being used or how. Hospitals will sometimes say that they are using a tool only for research purposes, which makes it hard to assess how much doctors are relying on them. “There’s a lot of secrecy,” she says.

Wynants asked one company that was marketing deep-learning algorithms to share information about its approach but did not hear back. She later found several published models from researchers tied to this company, all of them with a high risk of bias. “We don’t actually know what the company implemented,” she says.

According to Wynants, some hospitals are even signing nondisclosure agreements with medical AI vendors. When she asked doctors what algorithms or software they were using, they sometimes told her they weren’t allowed to say.

How to fix it

What’s the fix? Better data would help, but in times of crisis that’s a big ask. It’s more important to make the most of the data sets we have. The simplest move would be for AI teams to collaborate more with clinicians, says Driggs. Researchers also need to share their models and disclose how they were trained so that others can test them and build on them. “Those are two things we could do today,” he says. “And they would solve maybe 50% of the issues that we identified.”

Getting hold of data would also be easier if formats were standardized, says Bilal Mateen, a doctor who leads the clinical technology team at the Wellcome Trust, a global health research charity based in London. 

Another problem Wynants, Driggs, and Mateen all identify is that most researchers rushed to develop their own models, rather than working together or improving existing ones. The result was that the collective effort of researchers around the world produced hundreds of mediocre tools, rather than a handful of properly trained and tested ones.

“The models are so similar—they almost all use the same techniques with minor tweaks, the same inputs—and they all make the same mistakes,” says Wynants. “If all these people making new models instead tested models that were already available, maybe we’d have something that could really help in the clinic by now.”

In a sense, this is an old problem with research. Academic researchers have few career incentives to share work or validate existing results. There’s no reward for pushing through the last mile that takes tech from “lab bench to bedside,” says Mateen. 

To address this issue, the World Health Organization is considering an emergency data-sharing contract that would kick in during international health crises. It would let researchers move data across borders more easily, says Mateen. Before the G7 summit in the UK in June, leading scientific groups from participating nations also called for “data readiness” in preparation for future health emergencies.

Such initiatives sound a little vague, and calls for change always have a whiff of wishful thinking about them. But Mateen has what he calls a “naïvely optimistic” view. Before the pandemic, momentum for such initiatives had stalled. “It felt like it was too high of a mountain to hike and the view wasn’t worth it,” he says. “Covid has put a lot of this back on the agenda.”

“Until we buy into the idea that we need to sort out the unsexy problems before the sexy ones, we’re doomed to repeat the same mistakes,” says Mateen. “It’s unacceptable if it doesn’t happen. To forget the lessons of this pandemic is disrespectful to those who passed away.”

Categories
Artificial Intelligence Ethical AI

DeepMind says it will release the structure of every protein known to science

Back in December 2020, DeepMind took the world of biology by surprise when it solved a 50-year grand challenge with AlphaFold, an AI tool that predicts the structure of proteins. Last week the London-based company published full details of that tool and released its source code.

Now the firm has announced that it has used its AI to predict the shapes of nearly every protein in the human body, as well as the shapes of hundreds of thousands of other proteins found in 20 of the most widely studied organisms, including yeast, fruit flies, and mice. The breakthrough could allow biologists from around the world to understand diseases better and develop new drugs. 

So far the trove consists of 350,000 newly predicted protein structures. DeepMind says it will predict and release the structures for more than 100 million more in the next few months—more or less all proteins known to science. 

“Protein folding is a problem I’ve had my eye on for more than 20 years,” says DeepMind cofounder and CEO Demis Hassabis. “It’s been a huge project for us. I would say this is the biggest thing we’ve done so far. And it’s the most exciting in a way, because it should have the biggest impact in the world outside of AI.”

Proteins are made of long ribbons of amino acids, which twist themselves up into complicated knots. Knowing the shape of a protein’s knot can reveal what that protein does, which is crucial for understanding how diseases work and developing new drugs—or identifying organisms that can help tackle pollution and climate change. Figuring out a protein’s shape takes weeks or months in the lab. AlphaFold can predict shapes to the nearest atom in a day or two.

The new database should make life even easier for biologists. AlphaFold might be available for researchers to use, but not everyone will want to run the software themselves. “It’s much easier to go and grab a structure from the database than it is running it on your own computer,” says David Baker of the Institute for Protein Design at the University of Washington, whose lab has built its own tool for predicting protein structure, called RoseTTAFold, based on AlphaFold’s approach.

In the last few months Baker’s team has been working with biologists who were previously stuck trying to figure out the shape of proteins they were studying. “There’s a lot of pretty cool biological research that’s been really sped up,” he says. A public database containing hundreds of thousands of ready-made protein shapes should be an even bigger accelerator.  

“It looks astonishingly impressive,” says Tom Ellis, a synthetic biologist at Imperial College London studying the yeast genome, who is excited to try the database. But he cautions that most of the predicted shapes have not yet been verified in the lab.  

Atomic precision

In the new version of AlphaFold, predictions come with a confidence score that the tool uses to flag how close it thinks each predicted shape is to the real thing. Using this measure, DeepMind found that AlphaFold predicted the shapes of 36% of human proteins with accuracy down to the level of individual atoms. This is good enough for drug development, says Hassabis.   
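
For readers curious what working with those scores looks like, here is a minimal sketch, assuming a structure file downloaded from the public database (the filename is only an example). In the files the database serves, the per-residue confidence score (pLDDT, on a 0–100 scale) is stored in the B-factor field, so a few lines of Biopython are enough to average it and count the residues predicted with very high confidence.

```python
# Minimal sketch: read per-residue confidence (pLDDT) from an AlphaFold
# structure file. pLDDT is written into the B-factor field of every atom.
from Bio.PDB import PDBParser  # pip install biopython

CONFIDENCE_CUTOFF = 90.0  # pLDDT above ~90 is generally treated as very high

# Example filename; download the structure file from the database first.
structure = PDBParser(QUIET=True).get_structure("model", "AF-P69905-F1-model.pdb")

plddt = []
for residue in structure.get_residues():
    atoms = list(residue.get_atoms())
    if atoms:  # every atom in a residue carries the same pLDDT value
        plddt.append(atoms[0].get_bfactor())

print(f"Mean pLDDT: {sum(plddt) / len(plddt):.1f}")
print(f"{sum(p > CONFIDENCE_CUTOFF for p in plddt)}/{len(plddt)} residues above {CONFIDENCE_CUTOFF}")
```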

Previously, after decades of work, only 17% of the proteins in the human body had had their structures determined in the lab. If AlphaFold’s predictions are as accurate as DeepMind says, the tool has more than doubled this number in just a few weeks.

Even predictions that are not fully accurate at the atomic level are still useful. For more than half of the proteins in the human body, AlphaFold has predicted a shape that should be good enough for researchers to figure out the protein’s function. The rest of AlphaFold’s current predictions are either incorrect, or are for the third of proteins in the human body that don’t have a structure at all until they bind with others. “They’re floppy,” says Hassabis.

“The fact that it can be applied at this level of quality is an impressive thing,” says Mohammed AlQuraishi, a systems biologist at Columbia University who has developed his own software for predicting protein structure. He also points out that having structures for most of the proteins in an organism will make it possible to study how these proteins work as a system, not just in isolation. “That’s what I think is most exciting,” he says.

DeepMind is releasing its tools and predictions for free and will not say whether it plans to make money from them in the future. It is not ruling out the possibility, however. To set up and run the database, DeepMind is partnering with the European Molecular Biology Laboratory, an international research institution that already hosts a large database of protein information. 

For now, AlQuraishi can’t wait to see what researchers do with the new data. “It’s pretty spectacular,” he says. “I don’t think any of us thought we would be here this quickly. It’s mind-boggling.”

Categories
Artificial Intelligence Ethical AI

Disability rights advocates are worried about discrimination in AI hiring tools

Your ability to land your next job could depend on how well you play one of the AI-powered games that companies like AstraZeneca and Postmates are increasingly using in the hiring process.

Some companies that create these games, like Pymetrics and Arctic Shores, claim that they limit bias in hiring. But AI hiring games can be especially difficult to navigate for job seekers with disabilities.

In the latest episode of MIT Technology Review’s podcast “In Machines We Trust,” we explore how AI-powered hiring games and other tools may exclude people with disabilities. And while many people in the US are looking to the federal commission responsible for employment discrimination to regulate these technologies, the agency has yet to act.

To get a closer look, we asked Henry Claypool, a disability policy analyst, to play one of Pymetrics’s games. Pymetrics measures nine skills, including attention, generosity, and risk tolerance, that CEO and cofounder Frida Polli says relate to job success.

When it works with a company looking to hire new people, Pymetrics first asks the company to identify people who are already succeeding at the job it’s trying to fill and has them play its games. Then, to identify the skills most specific to the successful employees, it compares their game data with data from a random sample of players.
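
Pymetrics has not published its algorithm, but the general idea it describes, contrasting trait scores of successful incumbents against a baseline sample, can be illustrated with a toy sketch. The trait names, score distributions, and thresholds below are invented; this is not the company’s actual method, and any real system would involve far more traits, validation, and fairness testing.

```python
# Toy illustration (not Pymetrics's actual method): contrast game-derived trait
# scores for successful incumbents against a baseline sample and flag the
# traits where the two groups differ most.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
traits = ["attention", "generosity", "risk_tolerance"]  # invented trait names

# Hypothetical per-player trait scores (rows = players, columns = traits).
incumbents = rng.normal(loc=[0.6, 0.5, 0.3], scale=0.1, size=(40, 3))
baseline   = rng.normal(loc=[0.5, 0.5, 0.5], scale=0.1, size=(400, 3))

for i, trait in enumerate(traits):
    t, p = stats.ttest_ind(incumbents[:, i], baseline[:, i], equal_var=False)
    diff = incumbents[:, i].mean() - baseline[:, i].mean()
    flag = "distinctive" if p < 0.01 and abs(diff) > 0.05 else "-"
    print(f"{trait:15s} mean diff = {diff:+.2f}  p = {p:.3g}  {flag}")
```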

When he signed on, the game prompted Claypool to choose between a modified version—designed for those with color blindness, ADHD, or dyslexia—and an unmodified version. This question poses a dilemma for applicants with disabilities, he says.

“The fear is that if I click one of these, I’ll disclose something that will disqualify me for the job, and if I don’t click on—say—dyslexia or whatever it is that makes it difficult for me to read letters and process that information quickly, then I’ll be at a disadvantage,” Claypool says. “I’m going to fail either way.”

Polli says Pymetrics does not tell employers which applicants requested in-game accommodations during the hiring process, which should help prevent employers from discriminating against people with certain disabilities. She added that, in response to our reporting, the company will make this clearer, so applicants know that their need for an in-game accommodation is kept private and confidential.   

The Americans with Disabilities Act requires employers to provide reasonable accommodations to people with disabilities. And if a company’s hiring assessments exclude people with disabilities, then it must prove that those assessments are necessary to the job.

For employers, using games such as those produced by Arctic Shores may seem more objective. Unlike traditional psychometric testing, Arctic Shores’s algorithm evaluates candidates on the basis of their choices throughout the game. However, candidates often don’t know what the game is measuring or what to expect as they play. For applicants with disabilities, this makes it hard to know whether they should ask for an accommodation.

Safe Hammad, CTO and cofounder of Arctic Shores, says his team is focused on making its assessments accessible to as many people as possible. People with color blindness and hearing disabilities can use the company’s software without special accommodations, he says, but employers should not use such requests to screen out candidates.

The use of these tools can sometimes exclude people in ways that may not be obvious to a potential employer, though. Patti Sanchez is an employment specialist at the MacDonald Training Center in Florida who works with job seekers who are deaf or hard of hearing. About two years ago, one of her clients applied for a job at Amazon that required a video interview through HireVue.

Sanchez, who is also deaf, attempted to call and request assistance from the company, but couldn’t get through. Instead, she brought her client and a sign language interpreter to the hiring site and persuaded representatives there to interview him in person. Amazon hired her client, but Sanchez says issues like these are common when navigating automated systems. (Amazon did not respond to a request for comment.)

Making hiring technology accessible means ensuring both that a candidate can use the technology and that the skills it measures don’t unfairly exclude candidates with disabilities, says Alexandra Givens, the CEO of the Center for Democracy and Technology, an organization focused on civil rights in the digital age.

AI-powered hiring tools often fail to include people with disabilities when generating their training data, she says. Such people have long been excluded from the workforce, so algorithms modeled after a company’s previous hires won’t reflect their potential.

Even if the models could account for outliers, the way a disability presents itself varies widely from person to person. Two people with autism, for example, could have very different strengths and challenges.

“As we automate these systems, and employers push to what’s fastest and most efficient, they’re losing the chance for people to actually show their qualifications and their ability to do the job,” Givens says. “And that is a huge loss.”

A hands-off approach

Government regulators are finding it difficult to monitor AI hiring tools. In December 2020, 11 senators wrote a letter to the US Equal Employment Opportunity Commission expressing concerns about the use of hiring technologies after the covid-19 pandemic. The letter inquired about the agency’s authority to investigate whether these tools discriminate, particularly against those with disabilities.

The EEOC responded with a letter in January that was leaked to MIT Technology Review. In the letter, the commission indicated that it cannot investigate AI hiring tools without a specific claim of discrimination. The letter also outlined concerns about the industry’s hesitance to share data and said that variation between different companies’ software would prevent the EEOC from instituting any broad policies.

“I was surprised and disappointed when I saw the response,” says Roland Behm, a lawyer and advocate for people with behavioral health issues. “The whole tenor of that letter seemed to make the EEOC seem like more of a passive bystander rather than an enforcement agency.”

The agency typically starts an investigation once an individual files a claim of discrimination. With AI hiring technology, though, most candidates don’t know why they were rejected for the job. “I believe a reason that we haven’t seen more enforcement action or private litigation in this area is due to the fact that candidates don’t know that they’re being graded or assessed by a computer,” says Keith Sonderling, an EEOC commissioner.

Sonderling says he believes that artificial intelligence will improve the hiring process, and he hopes the agency will issue guidance for employers on how best to implement it. He says he welcomes oversight from Congress.

However, Aaron Rieke, managing director of Upturn, a nonprofit dedicated to civil rights and technology, expressed disappointment in the EEOC’s response: “I actually would hope that in the years ahead, the EEOC could be a little bit more aggressive and creative in thinking about how to use that authority.”

Pauline Kim, a law professor at Washington University in St. Louis, whose research focuses on algorithmic hiring tools, says the EEOC could be more proactive in gathering research and updating guidelines to help employers and AI companies comply with the law.

Behm adds that the EEOC could pursue other avenues of enforcement, including a commissioner’s charge, which allows commissioners to initiate an investigation into suspected discrimination instead of requiring an individual claim (Sonderling says he is considering making such a charge). He also suggests that the EEOC consult with advocacy groups to develop guidelines for AI companies hoping to better represent people with disabilities in their algorithmic models.

It’s unlikely that AI companies and employers are screening out people with disabilities on purpose, Behm says. But they “haven’t spent the time and effort necessary to understand the systems that are making what for many people are life-changing decisions: Am I going to be hired or not? Can I support my family or not?”