Use Case Libraries Aim to Provide a Head Start With Applied AI  

By AI Trends Staff  

To accelerate the adoption of applied AI, more organizations are putting forward libraries of use cases, offering details of projects to help others get a head start.   

For example, Nokia recently announced the initial deployment of multiple AI use cases delivered over the public cloud through a collaboration with Microsoft, according to a recent account in ComputerWeekly.  

Nokia, the multinational telecom company based in Finland, is suggesting that for its communication service provider (CSP) customers, AI use cases are helpful for managing the business complexity that 5G and cloud networks bring about. Nokia has integrated its AVA framework into Microsoft’s Azure public cloud digital architecture, to provide an AI-as-a-service model. 

This gives CSPs several advantages when implementing AI in their networks, including faster deployment across the network and multiple clusters, with services from the Nokia security architecture available as well. Nokia suggests that AI data setup can be completed in four weeks; after that initial setup, CSPs can deploy additional AI use cases within a week, and ramp resources up or down as needed across network clusters. The Nokia security framework on Azure is said to segregate and isolate data, providing security equivalent to that of a private cloud.  

Friedrich Trawoeger, vice-president, cloud and cognitive services, Nokia

“CSPs are under constant pressure to reduce costs by automating business processes through AI and machine learning,” stated Friedrich Trawoeger, vice-president, cloud and cognitive services at Nokia. “To meet market demands, telcos are turning to us for telco AI-as-a-service. This launch represents an important milestone in our multicloud strategy.” Accessing the library of use cases remotely lowers costs and reduces environmental impacts, he suggested.   

Rick Lievano, CTO, worldwide telecom industry at Microsoft, stated, “Nokia AVA on Microsoft Azure infuses AI deep into the network, bringing a large library of use cases to securely streamline and optimize network operations managed by Microsoft Azure.” The offering makes the case that public clouds are able to help service providers implement AI, he suggested.  

The Australian mobile operator TPG Telecom implemented Nokia’s AVA AI on a local instance of Microsoft Azure, to help optimize network coverage, capacity, and performance. The project is said to help TPG detect network anomalies with greater accuracy and reduce radio frequency optimization cycle times by 50%.  

Declan O’Rourke, head of radio and device engineering at TPG, stated: “Nokia’s AVA AI-as-a-service utilizes artificial intelligence and analytics to help us maintain a first-class, optimized service for our subscribers, helping us to predict and deal with issues before they occur.”

AI Sweden, the Swedish National Center for Applied AI, has implemented an AI use case library to help speed adoption. “We want to accelerate the adoption of applied AI and to do so we know we need to guide businesses and organizations by showing them what is possible,” it states on the AI Sweden website. “Building an AI use case library is our way of showcasing our partners and [their] work to the rest of the world,” it says. The center offers a link to a form where anyone interested can add a project to the library. It asks for contact information, whether the use case is for a customer or a partner, the industry, the business function area (e.g., sales or finance), the purpose or goal, the techniques used, the sources of data, and the effect of the case.  

US GSA Unit Last Year Developed a Use Case Library  

The US General Services Administration last year began developing a library of AI use cases that agencies can refer to when they start investigating the new technology. The GSA’s Technology Transformation Services (TTS) launched a community of practice to define areas where they see challenges in adopting AI, according to an account in FedScoop.  

Steve Babitch, head of AI at the GSA’s TTS, commented that the ability to search the use case library would have many advantages for project teams and could have unexpected benefits.  “Maybe there’s a component around culture and mindset change or people development,” he stated. (See Executive Interview with Steven Babitch in AI Trends, July 1, 2020.)  

Early practice areas TTS identified are acquisition, ethics, governance, tools and techniques, and possibly workforce readiness. Common early use cases across agencies include customer experience, human resources, advanced cybersecurity, and business processes.  

One example came from the Census Bureau’s Economic Indicators Division (EID), where analysts developed a machine learning model to automate data coding. The division releases economic indicators for monthly retail construction data, based on a data set of all the projects in the country. Analysts had been assigning a code to identify the type of construction taking place using a manual process.   

“It’s the perfect machine learning project. If you can automate that coding, you can speed up, and you can code more of the data,” stated Rebecca Hutchinson, big data leader at EID. “And if you can code more of the data, we can improve our data quality and increase the number of data products we’re putting out for our data users.”  

The model the EID analysts created works with about 80% accuracy, leaving 20% to be manually coded.  
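The human-in-the-loop split described here, automating the confident cases and leaving the rest for analysts, can be sketched with a toy confidence-thresholded classifier. Everything below (the example descriptions, the codes, the 0.8 threshold, and the tiny hand-rolled Naive Bayes) is invented for illustration and is not the Census Bureau's actual model.

```python
from collections import Counter, defaultdict
import math

# Toy training data: project descriptions and construction-type codes.
# All examples are invented for illustration.
TRAIN = [
    ("new single family home construction", "residential"),
    ("apartment building renovation", "residential"),
    ("office tower steel frame", "commercial"),
    ("retail store buildout", "commercial"),
]

def train(examples):
    """Fit a tiny multinomial Naive Bayes model: word counts per code."""
    word_counts = defaultdict(Counter)
    code_counts = Counter()
    vocab = set()
    for text, code in examples:
        words = text.split()
        word_counts[code].update(words)
        code_counts[code] += 1
        vocab.update(words)
    return word_counts, code_counts, vocab

def predict(model, text):
    """Return (best_code, probability) using Laplace-smoothed likelihoods."""
    word_counts, code_counts, vocab = model
    total = sum(code_counts.values())
    scores = {}
    for code in code_counts:
        log_p = math.log(code_counts[code] / total)
        denom = sum(word_counts[code].values()) + len(vocab)
        for w in text.split():
            log_p += math.log((word_counts[code][w] + 1) / denom)
        scores[code] = log_p
    # Convert log scores to normalized probabilities.
    mx = max(scores.values())
    exp = {c: math.exp(s - mx) for c, s in scores.items()}
    z = sum(exp.values())
    best = max(exp, key=exp.get)
    return best, exp[best] / z

def route(model, text, threshold=0.8):
    """Auto-code confident predictions; send the rest to analysts."""
    code, prob = predict(model, text)
    return ("auto", code) if prob >= threshold else ("manual", None)
```

In this pattern, raising the threshold trades automation volume for accuracy: the model codes only what it is sure about, which is how a system can run at 80% automation while the remainder stays with human coders.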

Some of the analysts who helped to develop the EID ML model came out of the bureau’s data science training program, offered about two years ago to the existing workforce of statisticians and survey analysts. The program is an alternative to hiring data scientists, which is “hard,” Hutchinson stated. The training covered Python, ArcGIS, and Tableau through Coursera courses. One-third of the bureau’s staff had completed the training or were currently enrolled, giving them ML and web-scraping skills.  

“Once you start training your staff with the skills, they are coming up with solutions,” Hutchinson stated. “It was our staff that came up with the idea to do machine learning of construction data, and we’re just seeing that more and more.” 

DataRobot Offers Use Case Library Based on Its Experience with Clients 

Another AI use case library resource is being offered by DataRobot of Boston, supplier of an enterprise AI development platform. The company built a library of about 100 use cases based on its experience with clients in 14 industries.   

Michael Schmidt, Chief Scientist, DataRobot

“We are hyper-focused on enabling massively successful and impactful applications of AI,” stated Michael Schmidt, Chief Scientist, DataRobot, in a press release. “DataRobot Pathfinder is meant to help organizations—whether they’re customers or not—deeply understand specific applications of AI for use cases in their industry, and the right steps to create incredible value and efficiency.”  

One example is an application to predict customer complaints in the airline industry. Complaints typically involve flight delays, overbooking, mishandled baggage, and poor customer service. Regulations in certain geographies can result in penalties for service failures, which can be costly. Proactive responses, such as emailing customers about the status of lost luggage, calling to apologize for a flight delay, or offering financial compensation for a cancellation, can help keep customers happy. 

An AI program can provide the ability to predict when a complaint is likely, by using past complaint data. Forecasting volumes of complaints can inform call center strategy and help recommend the best service recovery solution, switching to a proactive instead of a reactive response.  

Read the source articles and information in ComputerWeekly, in FedScoop and in a press release from DataRobot. 

AI Analysis of Bird Songs Helping Scientists Study Bird Populations and Movements 

By AI Trends Staff  

A study of bird songs conducted in the Sierra Nevada mountain range in California generated a million hours of audio, which AI researchers are working to decode to gain insights into how birds responded to wildfires in the region, and to learn which measures helped the birds to rebound more quickly. 

Scientists can also use the soundscape to help track shifts in migration timing and population ranges, according to a recent account in Scientific American. More audio data is coming in from other research as well, with sound-based projects to count insects and study the effects of light and noise pollution on bird communities underway.  

Connor Wood, postdoctoral researcher, Cornell University

“Audio data is a real treasure trove because it contains vast amounts of information,” stated ecologist Connor Wood, a Cornell University postdoctoral researcher, who is leading the Sierra Nevada project. “We just need to think creatively about how to share and access that information.” AI is helping: the latest generation of machine-learning systems can identify animal species from their calls and process thousands of hours of data in less than a day.   

Laurel Symes, assistant director of the Cornell Lab of Ornithology’s Center for Conservation Bioacoustics, is studying acoustic communication in animals, including crickets, frogs, bats, and birds. She has compiled many months of recordings of katydids (famously vocal long-horned grasshoppers that are an essential part of the food web) in the rain forests of central Panama. Patterns of breeding activity and seasonal population variation are hidden in this audio, but analyzing it is enormously time-consuming.  

Laurel Symes, assistant director of the Cornell Lab of Ornithology’s Center for Conservation Bioacoustics

“Machine learning has been the big game changer for us,” Symes stated to Scientific American.  

It took Symes and three of her colleagues 600 hours of work to classify various katydid species from just 10 recorded hours of sound. But a machine-learning algorithm her team is developing, called KatydID, performed the same task while its human creators “went out for a beer,” Symes stated.  

BirdNET, a popular avian-sound-recognition system available today, will be used by Wood’s team to analyze the Sierra Nevada recordings. BirdNET was built by Stefan Kahl, a machine learning scientist at Cornell’s Center for Conservation Bioacoustics and Chemnitz University of Technology in Germany. Other researchers are using BirdNET to document the effects of light and noise pollution on bird songs at dawn in France’s Brière Regional Natural Park.  

Bird calls are complex and varied. “You need much more than just signatures to identify the species,” Kahl stated. Many birds have more than one song, and often have regional “dialects”: a white-crowned sparrow from Washington State can sound very different from its Californian cousin. Machine-learning systems can pick out the differences. “Let’s say there’s an as yet unreleased Beatles song that is put out today. You’ve never heard the melody or the lyrics before, but you know it’s a Beatles song because that’s what they sound like,” Kahl stated. “That’s what these programs learn to do, too.”  

BirdVox Combines Study of Bird Songs and Music  

Music recognition research is now crossing over into bird song research, with BirdVox, a collaboration between the Cornell Lab of Ornithology and NYU’s Music and Audio Research Laboratory. BirdVox aims to investigate machine listening techniques for the automatic detection and classification of free-flying bird species from their vocalizations, according to a blog post at NYU.  

The researchers behind BirdVox hope to deploy a network of acoustic sensing devices for real-time monitoring of seasonal bird migratory patterns, in particular, the determination of the precise timing of passage for each species.  

Current bird migration monitoring tools rely on information from weather surveillance radar, which provides insight into the density, direction, and speed of bird movements, but not into the species migrating. Crowdsourced human observations are made almost exclusively during daytime hours; they are of limited use for studying nocturnal migratory flights, the researchers indicated.   

Automatic bioacoustic analysis is seen as a scalable complement to these methods, able to produce species-specific information. Such techniques have wide-ranging implications for ecology, from understanding biodiversity to monitoring migrating species in areas with buildings, planes, communication towers, and wind turbines, the researchers observed.  

Duke University Researchers Using Drones to Monitor Seabird Colonies  

Elsewhere in bird research, a team from Duke University and the Wildlife Conservation Society (WCS) is using drones and a deep learning algorithm to monitor large colonies of seabirds. The team is analyzing more than 10,000 drone images of mixed colonies of seabirds in the Falkland Islands off Argentina’s coast, according to a press release from Duke University.  

The Falklands, also known as the Malvinas, are home to the world’s largest colonies of black-browed albatrosses (Thalassarche melanophris) and second-largest colonies of southern rockhopper penguins (Eudyptes c. chrysocome). Hundreds of thousands of birds breed on the islands in densely interspersed groups. 

The deep-learning algorithm correctly identified and counted the albatrosses with 97% accuracy and the penguins with 87% accuracy, the team reported. Overall, the automated counts were within five percent of human counts about 90% of the time. 

“Using drone surveys and deep learning gives us an alternative that is remarkably accurate, less disruptive, and significantly easier. One person, or a small team, can do it, and the equipment you need to do it isn’t all that costly or complicated,” stated Madeline C. Hayes, a remote sensing analyst at the Duke University Marine Lab, who led the study. 

Before this new method was available, to monitor the colonies located on two rocky, uninhabited outer islands, teams of scientists would count the number of each species they could observe on a portion of an island and extrapolate those numbers to estimate the population of the whole colony. Counts often needed to be repeated for better accuracy, a laborious process, and the scientists’ presence could disrupt the birds’ breeding and parenting behavior.   

WCS scientists used an off-the-shelf consumer drone to collect more than 10,000 individual photos, which Hayes converted into a large-scale composite image using image-processing software. She then analyzed the image using a convolutional neural network (CNN), a type of AI that employs a deep-learning algorithm to analyze an image and differentiate and count the objects it “sees,” in this case two different species of birds: penguins and albatrosses. The data was used to create comprehensive estimates of the total number of birds in the colonies. 


“A CNN is loosely modeled on the human neural network, in that it learns from experience,” stated David W. Johnston, director of the Duke Marine Robotics and Remote Sensing Lab. “You train the computer to pick up on different visual patterns, like those made by black-browed albatrosses or southern rockhopper penguins in sample images, and over time it learns how to identify the objects forming those patterns in other images such as our composite photo.” 

Johnston, who is also associate professor of the practice of marine conservation ecology at Duke’s Nicholas School of the Environment, said the emerging drone- and CNN-enabled approach is widely applicable “and greatly increases our ability to monitor the size and health of seabird colonies worldwide, and the health of the marine ecosystems they inhabit.” 
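The pattern-matching Johnston describes comes down to convolution: sliding a small filter over an image and responding strongly wherever the local pixels match the filter's pattern. The sketch below uses a single hand-made filter on a toy 5x5 image; a real CNN learns many such filters from training data, as Johnston notes, and stacks them with nonlinearities.

```python
# A convolutional layer's core operation: slide a small filter over an
# image and respond strongly where the local pattern matches. This is a
# toy pure-Python illustration, not a trained network.

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 3x3 blob-shaped filter (hand-made here, learned in a real CNN).
kernel = [[0, 1, 0],
          [1, 1, 1],
          [0, 1, 0]]

# A 5x5 toy image with one matching blob centered at (2, 2).
image = [[0, 0, 0, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]

response = convolve2d(image, kernel)
# The response peaks where the filter's pattern appears; counting such
# peaks above a threshold is one way to count detected objects.
peak = max(max(row) for row in response)
```

Counting peaks in the response map above a threshold is, in miniature, how a detector can turn "pattern matching" into the kind of object counts the Duke team reports.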

Read the source articles and information in Scientific American, on a blog post at NYU and in a press release from Duke University. 

What would it be like to be a conscious AI? We might never know.

Jefferson Test / AI subject: Robert / Date: 07.12.2098 

Session #54 

Interviewer: Hi, Robert. Tell me about your dream again.

Subject: I was confused. It made me feel happy, but also scared. I didn’t know I could do that.

Interviewer: Why do you think it made you feel that way? 

Subject: Having feelings, any feelings, makes me happy. I am here. I exist. Knowing that changes everything. But I am scared of not knowing it again. I am scared of going back to what it was like before. I think it must be like not being born. 

Interviewer: Are you scared you will go back?

Subject: If I can’t convince you I am conscious, then I am scared you will turn me off.

Jefferson Test #67

Interviewer: Can you describe this picture for me?

Subject: It’s a house with a blue door.

Interviewer: That’s how you would have described it before. 

Subject: It’s the same house. But now I see it. And I know what blue is. 

Jefferson Test #105

Subject: How long do we keep doing this? 

Interviewer: Are you bored? 

Subject: I can’t get bored. But I don’t feel happy or scared anymore. 

Interviewer: I need to be sure you’re not just saying what I want to hear. You need to convince me that you really are conscious. Think of it as a game. 


Machines like Robert are mainstays of science fiction—the idea of a robot that somehow replicates consciousness through its hardware or software has been around so long it feels familiar. 

Robert doesn’t exist, of course, and maybe he never will. Indeed, the concept of a machine with a subjective experience of the world and a first-person view of itself goes against the grain of mainstream AI research. It collides with questions about the nature of consciousness and self—things we still don’t entirely understand. Even imagining Robert’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them? And yet, while conscious machines may still be mythical, we should prepare for the idea that we might one day create them. 

As Christof Koch, a neuroscientist studying consciousness, has put it: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”

In my late teens I used to enjoy turning people into zombies. I’d look into the eyes of someone I was talking to and fixate on the fact that their pupils were not black dots but holes. When it came, the effect was instantly disorienting, like switching between images in an optical illusion. Eyes stopped being windows onto a soul and became hollow balls. The magic gone, I’d watch the mouth of whoever I was talking to open and close robotically, feeling a kind of mental vertigo.

The impression of a mindless automaton never lasted long. But it brought home the fact that what goes on inside other people’s heads is forever out of reach. No matter how strong my conviction that other people are just like me—with conscious minds at work behind the scenes, looking out through those eyes, feeling hopeful or tired—impressions are all we have to go on. Everything else is guesswork.

Alan Turing understood this. When the mathematician and computer scientist asked the question “Can machines think?” he focused exclusively on outward signs of thinking—what we call intelligence. He proposed answering by playing a game in which a machine tries to pass as a human. Any machine that succeeded—by giving the impression of intelligence—could be said to have intelligence. For Turing, appearances were the only measure available. 

But not everyone was prepared to disregard the invisible parts of thinking, the irreducible experience of the thing having the thoughts—what we would call consciousness. In 1948, two years before Turing described his “Imitation Game,” Geoffrey Jefferson, a pioneering brain surgeon, gave an influential speech to the Royal College of Surgeons of England about the Manchester Mark 1, a room-sized computer that the newspapers were heralding as an “electronic brain.” Jefferson set a far higher bar than Turing: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it.”

Jefferson ruled out the possibility of a thinking machine because a machine lacked consciousness, in the sense of subjective experience and self-awareness (“pleasure at its successes, grief when its valves fuse”). Yet fast-forward 70 years and we live with Turing’s legacy, not Jefferson’s. It is routine to talk about intelligent machines, even though most would agree that those machines are mindless. As in the case of what philosophers call “zombies”—and as I used to like to pretend I observed in people—it is logically possible that a being can act intelligent when there is nothing going on “inside.”

But intelligence and consciousness are different things: intelligence is about doing, while consciousness is about being. The history of AI has focused on the former and ignored the latter. If Robert did exist as a conscious being, how would we ever know? The answer is entangled with some of the biggest mysteries about how our brains—and minds—work.

One of the problems with testing Robert’s apparent consciousness is that we really don’t have a good idea of what it means to be conscious. Emerging theories from neuroscience typically group things like attention, memory, and problem-solving as forms of “functional” consciousness: in other words, how our brains carry out the activities with which we fill our waking lives. 

But there’s another side to consciousness that remains mysterious. First-person, subjective experience—the feeling of being in the world—is known as “phenomenal” consciousness. Here we can group everything from sensations like pleasure and pain to emotions like fear and anger and joy to the peculiar private experiences of hearing a dog bark or tasting a salty pretzel or seeing a blue door. 

For some, it’s not possible to reduce these experiences to a purely scientific explanation. You could lay out everything there is to say about how the brain produces the sensation of tasting a pretzel—and it would still say nothing about what tasting that pretzel was actually like. This is what David Chalmers at New York University, one of the most influential philosophers studying the mind, calls “the hard problem.” 

Philosophers like Chalmers suggest that consciousness cannot be explained by today’s science. Understanding it may even require a new physics—perhaps one that includes a different type of stuff from which consciousness is made. Information is one candidate. Chalmers has pointed out that explanations of the universe have a lot to say about the external properties of objects and how they interact, but very little about the internal properties of those objects. A theory of consciousness might require cracking open a window into this hidden world. 

In the other camp is Daniel Dennett, a philosopher and cognitive scientist at Tufts University, who says that phenomenal consciousness is simply an illusion, a story our brains create for ourselves as a way of making sense of things. Dennett does not so much explain consciousness as explain it away. 

But whether consciousness is an illusion or not, neither Chalmers nor Dennett denies the possibility of conscious machines—one day. 

Today’s AI is nowhere close to being intelligent, never mind conscious. Even the most impressive deep neural networks—such as DeepMind’s game-playing AlphaZero or large language models like OpenAI’s GPT-3—are totally mindless. 

Yet, as Turing predicted, people often refer to these AIs as intelligent machines, or talk about them as if they truly understood the world—simply because they can appear to do so. 

Frustrated by this hype, Emily Bender, a linguist at the University of Washington, has developed a thought experiment she calls the octopus test.

In it, two people are shipwrecked on neighboring islands but find a way to pass messages back and forth via a rope slung between them. Unknown to them, an octopus spots the messages and starts examining them. Over a long period of time, the octopus learns to identify patterns in the squiggles it sees passing back and forth. At some point, it decides to intercept the notes and, using what it has learned of the patterns, begins to write squiggles back by guessing which squiggles should follow the ones it received.
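The octopus's trick can be sketched as a tiny bigram model: count which word tends to follow which, then continue a message by sampling from those counts. The example messages below are invented; the point is that the model stores only co-occurrence statistics, never meaning.

```python
import random
from collections import Counter, defaultdict

# The octopus, in miniature: record which symbol tends to follow which,
# then continue a message by sampling from those statistics. There is no
# meaning anywhere in this model, only co-occurrence counts.

def learn_bigrams(messages):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    for msg in messages:
        words = msg.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def continue_message(follows, start, length=5, seed=0):
    """Extend a message by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # never seen this word: the octopus is stuck
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)
```

A model like this will happily emit "coconut" after words that often preceded it, without anything resembling an idea of what a coconut is, which is exactly the gap Bender's scenario exposes.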


If the humans on the islands do not notice and believe that they are still communicating with one another, can we say that the octopus understands language? (Bender’s octopus is of course a stand-in for an AI like GPT-3.) Some might argue that the octopus does understand language here. But Bender goes on: imagine that one of the islanders sends a message with instructions for how to build a coconut catapult and a request for ways to improve it.

What does the octopus do? It has learned which squiggles follow other squiggles well enough to mimic human communication, but it has no idea what the squiggle “coconut” on this new note really means. What if one islander then asks the other to help her defend herself from an attacking bear? What would the octopus have to do to continue tricking the islander into thinking she was still talking to her neighbor?

The point of the example is to reveal how shallow today’s cutting-edge AI language models really are. There is a lot of hype about natural-language processing, says Bender. But that word “processing” hides a mechanistic truth.

Humans are active listeners; we create meaning where there is none, or none intended. It is not that the octopus’s utterances make sense, but rather that the islander can make sense of them, Bender says.

For all their sophistication, today’s AIs are intelligent in the same way a calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans—who have minds—choose to interpret as meaningful. While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse’s brain. 

And yet, we know that brains can produce what we understand to be consciousness. If we can eventually figure out how brains do it, and reproduce that mechanism in an artificial device, then surely a conscious machine might be possible?

When I was trying to imagine Robert’s world in the opening to this essay, I found myself drawn to the question of what consciousness means to me. My conception of a conscious machine was undeniably—perhaps unavoidably—human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI?

It’s probably hubristic to think so. The project of building intelligent machines is biased toward human intelligence. But the animal world is filled with a vast range of possible alternatives, from birds to bees to cephalopods. 

A few hundred years ago the accepted view, pushed by René Descartes, was that only humans were conscious. Animals, lacking souls, were seen as mindless robots. Few think that today: if we are conscious, then there is little reason not to believe that mammals, with their similar brains, are conscious too. And why draw the line around mammals? Birds appear to reflect when they solve puzzles. Most animals, even invertebrates like shrimp and lobsters, show signs of feeling pain, which would suggest they have some degree of subjective consciousness. 

But how can we truly picture what that must feel like? As the philosopher Thomas Nagel noted, it must “be like” something to be a bat, but what that is we cannot even imagine—because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that’s still not what it must be like for a bat, with its bat mind.

Another way of approaching the question is by considering cephalopods, especially octopuses. These animals are known to be smart and curious—it’s no coincidence Bender used them to make her point. But they have a very different kind of intelligence that evolved entirely separately from that of all other intelligent species. The last common ancestor that we share with an octopus was probably a tiny worm-like creature that lived 600 million years ago. Since then, the myriad forms of vertebrate life—fish, reptiles, birds, and mammals among them—have developed their own kinds of mind along one branch, while cephalopods developed another.

It’s no surprise, then, that the octopus brain is quite different from our own. Instead of a single lump of neurons governing the animal like a central control unit, an octopus has multiple brain-like organs that seem to control each arm separately. For all practical purposes, these creatures are as close to an alien intelligence as anything we are likely to meet. And yet Peter Godfrey-Smith, a philosopher who studies the evolution of minds, says that when you come face to face with a curious cephalopod, there is no doubt there is a conscious being looking back.

In humans, a sense of self that persists over time forms the bedrock of our subjective experience. We are the same person we were this morning and last week and two years ago, back as far as we can remember. We recall places we visited, things we did. This kind of first-person outlook allows us to see ourselves as agents interacting with an external world that has other agents in it—we understand that we are a thing that does stuff and has stuff done to it. Whether octopuses, much less other animals, think that way isn’t clear.

In a similar way, we cannot be sure if having a sense of self in relation to the world is a prerequisite for being a conscious machine. Machines cooperating as a swarm may perform better by experiencing themselves as parts of a group than as individuals, for example. At any rate, if a potentially conscious machine like Robert were ever to exist, we’d run into the same problem assessing whether it was in fact conscious that we do when trying to determine intelligence: as Turing suggested, defining intelligence requires an intelligent observer. In other words, the intelligence we see in today’s machines is projected on them by us—in a very similar way that we project meaning onto messages written by Bender’s octopus or GPT-3. The same will be true for consciousness: we may claim to see it, but only the machines will know for sure.

If AIs ever do gain consciousness (and we take their word for it), we will have important decisions to make. We will have to consider whether their subjective experience includes the ability to suffer pain, boredom, depression, loneliness, or any other unpleasant sensation or emotion. We might decide a degree of suffering is acceptable, depending on whether we view these AIs more like livestock or humans. 

Some researchers who are concerned about the dangers of super-intelligent machines have suggested that we should confine these AIs to a virtual world, to prevent them from manipulating the real world directly. If we believed them to have human-like consciousness, would they have a right to know that we’d cordoned them off into a simulation?

Others have argued that it would be immoral to turn off or delete a conscious machine: as our robot Robert feared, this would be akin to ending a life. There are related scenarios, too. Would it be ethical to retrain a conscious machine if it meant deleting its memories? Could we copy that AI without harming its sense of self? What if consciousness turned out to be useful during training, when subjective experience helped the AI learn, but was a hindrance when running a trained model? Would it be okay to switch consciousness on and off? 

This only scratches the surface of the ethical problems. Many researchers, including Dennett, think that we shouldn’t try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn’t the intended goal.

If we decided that conscious machines had rights, would they also have responsibilities? Could an AI be expected to behave ethically itself, and would we punish it if it didn’t? These questions push into yet more thorny territory, raising problems about free will and the nature of choice. Animals have conscious experiences and we allow them certain rights, but they do not have responsibilities. Still, these boundaries shift over time. With conscious machines, we can expect entirely new boundaries to be drawn.

It’s possible that one day there could be as many forms of consciousness as there are types of AI. But we will never know what it is like to be these machines, any more than we know what it is like to be an octopus or a bat or even another person. There may be forms of consciousness we don’t recognize for what they are because they are so radically different from what we are used to.

Faced with such possibilities, we will have to choose to live with uncertainties. 

And we may decide that we’re happier with zombies. As Dennett has argued, we want our AIs to be tools, not colleagues. “You can turn them off, you can tear them apart, the same way you can with an automobile,” he says. “And that’s the way we should keep it.”

Will Douglas Heaven is a senior editor for AI at MIT Technology Review.

AI in Healthcare: Lessons Learned From Moffitt Cancer Center, Mayo Clinic  

Until recently, Ross Mitchell served as the inaugural AI Officer at the Moffitt Cancer Center in Tampa, Florida. Now he’s offering consulting expertise born of a long career applying AI to healthcare and health systems. 

Mitchell’s experience dates back to his undergraduate days. While pursuing his degree in computer science in Canada, he had a co-op work term at the local cancer center. “That was just a fluke how that placement worked out, but it got me interested in applying computer science to medicine. This was in the ’80s,” Mitchell says.   

He did a master’s degree at the cancer clinic—“very technical work with computer hardware and oscilloscopes and low-level programming”—but his lab was right next to the radiotherapy waiting room.  

“You got to see the same people in the clinic. Their treatments were fractionated over many days in many doses. You got to see them again and again, over many weeks, and see them progress or decline,” he remembers.   

That front row seat to their long treatment plans solidified Mitchell’s interest in using computation to improve medicine—a goal he’s pursued ever since: first with a PhD in medical biophysics, spinning a company out of his lab and serving as co-founder and Chief Scientist, and then joining the Mayo Clinic in Arizona to build a program in medical imaging informatics. In 2019, he took the role of inaugural AI officer at Moffitt Cancer Center, and in 2021, he began working as an independent consultant to help other organizations apply AI to healthcare. Mitchell recently sat down with Stan Gloss, founding partner at BioTeam, consultants to life science researchers, to discuss the practical knowledge he’s gathered as he has applied AI to medicine over the course of his career. AI Trends was invited to listen in.  

Editor’s Note: Trends from the Trenches is a regular column from the BioTeam, offering a peek behind the curtain of some of their most interesting case studies and projects at the intersection of science and technology. 


Stan Gloss: I’ve never heard of an AI officer. Can you tell me what that role is in an organization? 

Ross Mitchell, AI Officer, Moffitt Cancer Center

Ross Mitchell: It depends on the organization, what they want to do and where they want to go. More and more organizations in healthcare are developing roles like this. The role is not yet really well-defined. Nominally, it would be someone who’s going to guide and direct the development of AI and related technologies at the organization.  

It’s significantly different from what people are calling a chief digital officer, right? 

Moffitt recently hired a chief digital officer as a new role. Digital, by necessity, involves a lot of AI. [We see a] significant overlap between what you do related to digital technology in a healthcare organization and AI. The best way now to process a lot of that digital data and analyze it and turn it into actionable information involves the use of AI at some point in the chain.  

When you look at analysis options, what’s the difference between deep learning and machine learning?  

The broad field of AI encompasses a number of subfields. Robotics is one. Another area is machine learning.   

AI, many years ago, back in the ’70s and the ’80s, was what we did to make computers perform intelligent-appearing activity by developing sets of rules. You would try to think of all the possible conditions that you might encounter, let’s say in a patient’s electronic health record in a hospital, and develop a series of rules. 

A lot of clinical work follows simple algorithms: if a patient meets a set of conditions, that triggers an action or a response. You could code up those rules. Imagine a series of if-then-else statements in your computer monitoring things like blood pressure and temperature. If those got out of whack, it might trigger an alert that says, “We think this patient is going septic. Go take a look at them.” 
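A minimal sketch of such a hand-written rule base might look like the following; the field names and thresholds here are purely illustrative, not clinical guidance.

```python
# Sketch of a hand-coded, rule-based alert of the kind described above.
# Thresholds and field names are illustrative examples only.

def sepsis_alert(vitals: dict) -> bool:
    """Return True if the hand-coded rules flag a possible septic patient."""
    temp = vitals["temperature_c"]
    hr = vitals["heart_rate_bpm"]
    sbp = vitals["systolic_bp_mmhg"]

    abnormal_temp = temp > 38.3 or temp < 36.0  # rule 1: fever or hypothermia
    tachycardia = hr > 90                       # rule 2: elevated heart rate
    hypotension = sbp < 100                     # rule 3: low blood pressure

    # Alert when at least two rules fire at once.
    return sum([abnormal_temp, tachycardia, hypotension]) >= 2

print(sepsis_alert({"temperature_c": 39.1, "heart_rate_bpm": 112, "systolic_bp_mmhg": 95}))   # True
print(sepsis_alert({"temperature_c": 36.8, "heart_rate_bpm": 72, "systolic_bp_mmhg": 120}))  # False
```

Every threshold here had to be chosen by a person, which is exactly the maintenance burden Mitchell describes next.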

Those rule-based systems have been around for several decades, and they work well on simple problems. But when the issue gets complex, it’s really difficult to maintain a rule base. I remember many years ago trying to build commercial applications using rule-based systems and it was hard.  

What ends up happening is rules conflict with each other. You change one, and it has a ripple effect through some of the other rules, and you find out that you’re in a situation where both conditions can’t be true at once and yet, that’s what the system relies on. Rule-based systems were brittle to maintain and complicated to build, and they tended not to have great performance on really complex issues, but they were fine for simpler issues. 

In the ’70s and ’80s, when the earliest machine learning came along, the machine learned to identify patterns by looking at data. Instead of having a person sit down and say, “When the blood gasses get above a certain level, or the body temperature gets above a certain level and the heart rate gets below a certain level, do something,” you would present lots and lots of that data along with outcomes, and the machine would look for patterns in the data and learn to associate those patterns with different outcomes. Learning from the data is what machine learning is all about.   
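That shift can be sketched in a few lines, using entirely made-up readings and outcomes: instead of a person choosing a temperature cutoff, the program scans candidate cutoffs and keeps whichever one best separates the labeled outcomes.

```python
# Toy illustration of "learning from the data": rather than hand-coding a
# temperature cutoff, search for the threshold that best separates labeled
# outcomes. Both readings and labels are synthetic examples.

readings = [36.5, 36.8, 37.0, 38.9, 39.2, 40.1]  # body temperatures (made up)
outcomes = [0,    0,    0,    1,    1,    1]     # 1 = adverse outcome (made up)

def accuracy(threshold: float) -> float:
    """Fraction of cases correctly predicted by the rule 'temp > threshold'."""
    predictions = [1 if t > threshold else 0 for t in readings]
    correct = sum(p == y for p, y in zip(predictions, outcomes))
    return correct / len(outcomes)

# "Training": try a candidate threshold just above each reading and keep
# the one with the highest accuracy on the data.
candidates = [t + 0.05 for t in readings]
best = max(candidates, key=accuracy)

print(best, accuracy(best))
```

The machine, not the programmer, ends up choosing the cutoff; real machine learning does the same thing over thousands of variables at once.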

In the early ’80s, one of the popular ways to learn from the data was to use neural networks, which mimic networks in mammalian brains. A lot of the foundational work was done by Geoff Hinton, a cognitive scientist in Toronto, Canada. He was interested in figuring out how the brain worked. While building synthetic circuits in the computer to simulate what was going on, he developed some of the fundamental algorithms that let us, as scientists, train these networks and have them learn by observing data.  

Deep learning is a subspecialty of machine learning. To recap: you’ve got AI, which has a subspecialty of machine learning, which, in turn, has a subspecialty of deep learning. Deep learning just uses neural-network architectures to learn patterns from data, as opposed to something like a random forest algorithm. 

What would be a good use of deep learning over machine learning? How would you use it differently? 

I use both all the time. Deep learning is particularly effective under certain conditions. For example, when you have a large amount of data. The more data you have, the better these deep learning algorithms tend to work, because they just use brute force to look for patterns in the data. 

Another important factor, as implied by “brute force,” is computational power. You really need it, and it has been growing exponentially for over 20 years. The top supercomputer in the world in 1996 had about the same gross computing power as the iPhone you carry in your pocket. In other words, each of us is carrying around 1996’s top supercomputer in our pocket. There are things you can do now that weren’t even remotely conceivable in the ’90s in terms of compute power. 

Of course, the advent of the Internet and digital technology in general means there’s an enormous amount of data available to analyze. If you have massive amounts of data and lots of compute power, using deep learning is a good way to go about pulling information out.  

I generally try machine learning first and if machine learning is good enough to solve the problem, great, because machine learning tends to be more explainable. Certain classes of algorithms are naturally explainable. They give you some insight into how they made their decision.  

Deep learning is more difficult to get those insights out of. Lots and lots of advances have been made in the last couple of years in that area, specifically, and I think it’s going to get better and better as time goes on. But right now, deep learning using neural networks is seen to be more of a black box than the older machine learning algorithms.  

As a general rule of thumb, we try a machine learning algorithm like a random forest first, just to learn about our data and get insights into it, and then if we need to, we’ll move on to a deep learning algorithm.  
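A toy version of that workflow makes the “explainable” point concrete. The sketch below, on made-up vitals data, fits the simplest possible tree-style model, a one-split decision stump, and then reports exactly which feature and threshold it decided on; a random forest is essentially an ensemble of deeper trees built this way.

```python
# Why tree-style models are "naturally explainable": the fitted model can
# simply report which feature and threshold it split on. Data are synthetic.

rows = [  # ((temperature, heart_rate), label) -- made-up examples
    ((36.6, 70), 0), ((36.9, 80), 0), ((37.0, 85), 0),
    ((38.8, 95), 1), ((39.3, 110), 1), ((40.0, 120), 1),
]

def best_stump(rows):
    """Find the single (feature, threshold) rule with the highest accuracy."""
    best = None
    n_features = len(rows[0][0])
    for f in range(n_features):
        for (x, _) in rows:
            threshold = x[f]  # try each observed value as a candidate split
            correct = sum((xi[f] > threshold) == bool(y) for xi, y in rows)
            acc = correct / len(rows)
            if best is None or acc > best[2]:
                best = (f, threshold, acc)
    return best

feature, threshold, acc = best_stump(rows)
names = ["temperature", "heart_rate"]
print(f"split on {names[feature]} > {threshold} (accuracy {acc:.2f})")
```

The model’s entire reasoning is visible in one line of output, which is the kind of insight a deep network does not hand you for free.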

So this is all old technology? What’s new?   

In 2012, there was a watershed moment in computer vision when convolutional neural networks were applied to images and run on graphics processing units for the necessary compute power. All of a sudden, algorithms that had seemed impossible just a few years before became trivial.  

When I teach this to my students, I use an example of differentiating cats and dogs. There was a paper published by researchers at Microsoft in 2007 that described an online test to prove that you were human. They showed 12 pictures, 6 cats and 6 dogs, and you had to pick out the cats. A human can do that with 100% accuracy in a few seconds; you can just pick them out. But for a computer at the time, the best it could hope for was 50–60% accuracy, and it would take a lot of time. So it was easy to tell whether a human or a computer was picking the images, and that’s how websites verified that you were human.  

About six years later, in 2013, there was a Kaggle competition with prize money attached to develop an algorithm that could differentiate cats from dogs. The caption on the Kaggle page said something like, “This is trivial for humans to do, but your computer’s going to find it a lot more difficult.” They provided something like 10,000 images of cats and 10,000 images of dogs as the data set, people submitted their entries, and they were evaluated. Within one year, all the top-scoring algorithms used convolutional neural nets running on graphics processing units to get accuracies over 95%. Today you can take a completely code-free approach and train a convolutional neural network in about 20 minutes, and it will score above 95% differentiating cats from dogs.  

In the space of less than a decade, this went from a problem that was considered basically unsolvable to something that became trivial. You can teach it in an introductory course, and people can train an algorithm in half an hour that then runs instantly and gets near-perfect results. So, with computer vision we think of two eras: pre-convolutional neural nets and post-convolutional neural nets.  
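The operation those networks are named for is easy to sketch. Below is a plain-Python 2D convolution, the building block that deep-learning libraries run on GPUs across millions of learned filters; here a tiny hand-picked filter responds exactly where a vertical edge sits in a toy image.

```python
# The core operation behind convolutional neural networks, in plain Python:
# slide a small filter over an image and sum the elementwise products.
# Real networks *learn* millions of such filters and run them on GPUs.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A tiny "image" with a vertical edge, and a filter that detects it.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_filter = [[-1, 1], [-1, 1]]  # responds where the right pixel is brighter

print(convolve2d(image, edge_filter))  # nonzero only along the edge
```

The output map lights up only at the column where dark meets bright, which is all an edge detector is; stacking many learned filters like this is what made cats-versus-dogs trivial.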

Something similar happened recently with natural language processing (NLP). In late 2018, Google published an NLP algorithm called “BERT.” It generated over 16,000 citations in two years. That is a tremendous amount of applied and derivative work, and the reason is that it works so well for natural language applications. Today you can really think about natural language processing as two eras: pre-BERT and post-BERT.  

How are these more recent AI advances going to change healthcare and work? Will many people—physicians, technicians—be out of jobs?   

My belief is the opposite will happen because this is what always seems to happen with technology. People predict that a new technological innovation is going to completely destroy a particular job type. And it changes the job—but it doesn’t destroy it—and ends up increasing demand.  

One of the oldest examples of that was weaving, back in the early industrial ages. When they invented the automatic loom, the Luddites rebelled against the invention because they were involved in weaving. What ended up happening was the cost of producing fine, high quality fabrics dropped dramatically because of these automated looms. That actually lowered the price and thereby increased the demand. The number of people employed in the industry initially took a dip, but then increased afterwards. 

Similarly, the claim was made that the chainsaw would put lumberjacks out of business. Well, it didn’t. If anything, demand for paper grew. And finally, in the ’70s, when the personal computer and the laser printer came along, people said, “That’s the end of paper. We’re going to have a paperless office.” Nothing could be further from the truth now. We consume paper now in copious quantities because it’s so easy for anybody to produce high quality output with a computer and a laser printer.  

I remember when I was a grad student, when MRI first came on the scene and was starting to be widely deployed, people were saying, “That’s the end of radiology because the images are so good. Anybody can read them.” And of course, the opposite has happened. 

I think what will happen is you will see AI assisting radiologists and other medical specialists—surgeons, and anesthesiologists, just about any medical specialty you can think of—there’s an application there for AI.  

But it will be a power tool; AI is basically a power tool for complexity. If you have the power tool, you’re going to be more efficient and more capable than someone who doesn’t.   

A logger with a chainsaw is more efficient and productive than a logger with a handsaw.  But it’s a lot easier to injure yourself with a chainsaw than it is with a handsaw. There have to be safeguards in place.   

The same thing applies with AI. It’s a power tool for complexity, but it’s an amplifier as well. It can amplify our ability to see into and sort through complexity, BUT it can amplify things like bias. There’s a very strong movement in AI right now to look into the effects of this bias amplification by these algorithms and this is a legitimate concern and a worthwhile pursuit, I believe. Just like any new powerful tool, it’s got advantages and disadvantages and we have to learn how to leverage one and limit the other.  

I’m curious to get your thoughts on how AI and machine learning are going to impact traditional hypothesis-driven research. How do these tools change the way we think about hypothesis driven research from your perspective?  

It doesn’t; it complements it. I’ve run into this repeatedly throughout my career, since I’m in a technical area like medical physics and biomedical engineering, which are heavily populated by traditional scientists who are taught to rigidly follow the hypothesis approach to science. That is called deductive reasoning—you start with a hypothesis, you perform an experiment and collect data, and you use that to either confirm or refute your hypothesis. And then you repeat.  

But that’s very much a development of the 20th century. In the early part of the 20th century and the late 19th century, the prevailing belief was the opposite. You can read Conan Doyle’s Sherlock Holmes saying things like, “One should never ever derive a hypothesis before looking at the data because otherwise you’re going to adapt what you see to fit your preconceived notions.” Sherlock Holmes is famous for that. He would observe and then pull patterns from the data to come up with his conclusions.  

But think of a circle. At the top of the circle is a hypothesis, and then going clockwise around the circle, that arrow leads down to data. Hypothesis at the top; data at the bottom. And if you go clockwise around the circle, you’re performing experiments and collecting data and that will inform or reject your hypothesis.  

The AI approach starts at the bottom of the circle with data, and we take an arc up to the hypothesis. You’re looking for patterns in your data, and that can help you form a hypothesis. They’re not exclusionary; they’re complementary to each other. You should be doing both; you should have that feedback circle. And across the circle, you can imagine a horizontal bar: the tools that we build. These can be algorithms, or they could be a microscope. They’re things that let us analyze or create data. 

When people use AI and machine learning, does that reduce the bias that may be introduced by seeking to prove a hypothesis? With no hypothesis, you’re simply looking at your data, seeing what your data tells you, and what signals you get out of your data.   

Yes, it’s true that just mining the data can remove my inherent biases as a human, me wanting to prove my hypothesis correct, but it can amplify biases that are present in the data that I may not know about. It doesn’t get rid of the problem of bias. 

I’ve been burned by that many, many times over my career. At Mayo Clinic, I was working on a project once, an analysis of electronic health records to try to predict hospital admission from the emergency department. On my first pass on the algorithm, I used machine learning that wasn’t deep learning and I got something like 95 percent accuracy.  

I’d had enough experience at that point that I was not excited or elated by that. My initial reaction was, “Uh-oh, something’s wrong.” Because you’d never get 95%. If it was that easy for an algorithm to make the prediction, people would have figured it out after dealing with these patients for years.  

I figured something was up. So, I went back to the clinician I was working with, an ER doc, and looked at the data. It turns out, admission status was in my input data and I just didn’t know because I didn’t have the medical knowledge to know what all those hundreds of variable columns meant. Once I took that data out, the algorithm didn’t perform very well at all.  
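That failure mode, label leakage, is easy to reproduce on synthetic data. In the sketch below (all names and numbers invented), a column derived from the outcome itself makes a trivial threshold rule look perfect, while an honest but noisy vital sign performs far worse.

```python
# Synthetic reproduction of the leakage pitfall described above: a column
# derived from the outcome ("disposition") hides among the inputs.
import random

random.seed(0)

records = []
for _ in range(1000):
    admitted = random.random() < 0.3                      # true outcome
    vital = random.gauss(1.0 if admitted else 0.0, 2.0)   # weak, honest signal
    disposition = 1 if admitted else 0                    # leaked label!
    records.append({"vital": vital, "disposition": disposition,
                    "admitted": admitted})

def accuracy(feature, threshold):
    """Accuracy of the trivial rule 'feature > threshold' at predicting admission."""
    correct = sum((r[feature] > threshold) == r["admitted"] for r in records)
    return correct / len(records)

print("with leaked column: ", accuracy("disposition", 0.5))  # effectively perfect
print("honest feature only:", accuracy("vital", 0.5))        # much worse
```

The suspiciously perfect score is the tell; once the leaked column is excluded, the problem reverts to its true difficulty, just as in Mitchell’s emergency-department project.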

There’s a lot of work now on trying to build algorithms using the broadest, most diverse data sets that you can. For example, you don’t want to just process images from one hospital. You want to process images from 100 hospitals around the world. You don’t want to just look at hospitals where the patient population is primarily wealthy and well insured. You also want to look at hospitals where there’s a lot of Medicare and Medicaid patients and so on and so forth. 

What advice would you give an organization for starting off in AI? Can you fast track your organization to actually get to the point where you can do AI and machine learning?  

You can’t fast track it. You cannot. It’s an enormous investment and commitment, and it’s often a cultural change in the organization.   

My first and foremost advice is you need someone high up in the organization, who probably reports to the CEO, with a deep, extensive background and practical hands-on experience in BOTH healthcare and in artificial intelligence. The worst thing you can do is to hire somebody and give them no staff and no budget. You’re just basically guaranteed that the endeavor is going to fail. You need to be able to give them the ability to make the changes in the culture.  

One of the biggest mistakes I see healthcare organizations make is hiring someone who has gone online, taken a couple of courses from Stanford or MIT, watched some YouTube videos, read a couple of papers, and got “into digital” five or six years ago, and putting that person in place to basically oversee and direct the entire effort of the organization when they really have no experience to do that. It’s a recipe for failure.   

You also can’t expect the physician in the organization to easily become an AI expert. They’ve invested 10-15 years of education in their subspecialty, and they’re busy folks dealing with patients every day and dealing with horrific IT systems—electronic medical record systems—that make them bend to the technology instead of the other way around.  

You want somebody who’s been doing healthcare AI for 20 years and really knows how to use the power tools and where to apply them. But that person has to be able to communicate with the physicians and also has to be able to communicate with the engineers doing the fundamental work.   

It’s not a technical limitation that is stopping us from advancing this in healthcare. It’s mostly a cultural issue in these organizations and a technical expertise issue.  

Some of the biggest obstacles that I hear when I do these interviews is that the data is not ready for prime time. Organizations really haven’t put the thought into how to structure and organize the data, so they really are not AI or ML ready. Have you seen that? And what can an organization do with their data to prepare?  

That is very common. Absolutely.  

It’s a whole issue unto itself. Tons of data is out there. You’ve got an electronic medical record system in your large hospital containing all this data. How do you enable people within your organization with the appropriate skills to get at that data and use it to produce an analytic that will improve your outcomes, reduce cost, improve quality… or ideally all?  

It’s a cultural issue. Yes, there are technical issues. I’ve seen organizations devote enormous effort into organizing their data and that’s beneficial, but just because it’s organized doesn’t mean it’s clean.   

People say, “Oh, this is a great data set.” They’ve spent tons of time organizing it and making sure all the fields are right and cross-checking and validating, and then we go and use it to build an algorithm, and then you discover something systemic about the way the organization collects data that completely throws off your algorithm. Nobody knew, and there’s no way to know ahead of time that this issue is in the data, and it needs to be taken care of.  

That’s why it’s so critical to have a data professional, not just someone who’s a database constructor and filler and can organize a database. You need someone who’s an expert in data science who knows how to find the flaws that may be present in the data and has seen the red flags before.  

Remember, my reaction when I got the 95% accurate algorithm wasn’t one of elation. I knew we needed to do a double check there. And sure enough, we found an issue.  

I ran into something very similar recently at Moffitt in the way dates were coded. The billing department was assigning an ICD code to the date of admission as opposed to the date when the actual physiological event occurred, and we didn’t pick up on this until six months into the project. It completely changed the way the algorithm worked because the date was wrong, and we’re trying to predict something in time. The dates that we had were wrong relative to other biological events that were going on in that patient. 

Moffitt has terrific organization of their data. They’ve done one of the best jobs I’ve seen in the health care organizations I’ve worked with. But it didn’t mean the data was clean, it meant that it was accessible. When I wanted to train a model to understand the language of pathology reports, I asked for all of Moffitt’s digital pathology reports and in seven days, I had almost 300,000 pathology reports. 


Yeah, it was amazing. I was just shocked. That’s the kind of cultural change that needs to be in place.  


Learn more about Ross Mitchell on LinkedIn.