
Alphabet Workers Union Giving Structure to Activism at Google 

By AI Trends Staff  

In a highly unusual undertaking in the technology industry, the Alphabet Workers Union was formed by over 400 Google engineers and other workers in early January. The union now has about 800 members.  

The Alphabet Workers Union is a minority union, representing a fraction of the company’s more than 260,000 full-time employees and contractors. The workers stated at the outset that it was primarily an effort to give structure to activism at Google, rather than to negotiate for a contract.   

The union is affiliated with the Communications Workers of America (CWA), a union representing workers in telecommunications and media in the US and Canada. 

Sara Steffens, secretary-treasurer, Communications Workers of America

“There are those who would want you to believe that organizing in the tech industry is completely impossible,” stated Sara Steffens, CWA’s secretary-treasurer, in an account in the New York Times. “If you don’t have unions in the tech industry, what does that mean for our country? That’s one reason, from CWA’s point of view, that we see this as a priority.” 

The Google union is seen as a “powerful experiment” for bringing unionization into a major tech company by Veena Dubal, a law professor at the University of California, Hastings College of the Law. “If it grows… it could have huge impacts not just for the workers but for the broader issues that we are all thinking about in terms of tech power in society,” she stated. 

The minority union structure gives the union the ability to include Google contractors, who outnumber full-time employees, some 121,000 to 102,000, according to a recent New York Times account.  

Pittsburgh Workers for Google Contractor HCL Voted to Unionize in 2019 

The AWU is not the only Google-affiliated union trying to form. In the fall of 2019, some 65 Google contract workers in Pittsburgh voted to form a union, seeking better pay and some benefits. The members had been working for a Google contractor, HCL, on the Google Shopping platform. HCL America is an IT services contractor with 15,000 employees in the US; its parent, The HCL Group, was founded by Shiv Nadar in India in 1976 and has since grown into a global organization. The company reported revenue of $11 billion in 2021.  

Data analyst Gabrielle Norton-Moore said she and other HCL workers voted for a union in the fall of 2019 because they were being treated unfairly. 

“Normally if you have seniority, you [would] be getting benefits, like maybe more vacation days or a bigger raise—just something,” stated Gabrielle Norton-Moore, a data analyst, in an account from 90.5 WESA in Pittsburgh. “They don’t even remotely offer that. And the fact that they only give us like a 1%, or barely that, raise annually… [it] doesn’t even remotely cover inflation.” 

Since the HCL contractors voted to affiliate with the United Steelworkers (USW), not much has happened. Union organizer Ben Gwin stated HCL has been slow-walking the contract negotiations. “They’re not willing to meet more than twice a month. It’s insulting,” Gwin stated. “And it just feels like a complete joke that they’re not taking the process seriously.” 

More recently, the National Labor Relations Board alleged that HCL implemented more strict workplace rules after the union vote, left positions unfilled in Pittsburgh, and moved some work to Krakow, Poland. The NLRB in June issued a corrected complaint, asking that HCL be ordered to restore work sent abroad, according to a press release from the USW, which represents 850,000 workers in a range of industries including technology and services.  

The HCL workers may have little recourse. “There are no tight time structures within the law as it exists now, so workers can end up bargaining for years,” stated Celine McNicholas, director of government affairs at the Economic Policy Institute, in the 90.5 WESA account. 

From 1875 until the last steel plant in Pittsburgh closed in 1984, local unions had a lot of sway and helped workers win high wages. Since then, union membership has fallen dramatically. University of Pittsburgh law professor Mike Madison sees that workers have lost power in today’s service economy. 

“So one of the things that you’re seeing in a place like Pittsburgh is a revival of interest in labor organizing as a way to recapture some of the equity associated with income distribution that used to be associated with steel,” Madison stated.  

The USW has sought to unionize more tech employees through its Federation of Tech Workers. 

“As employers come in and really see Pittsburgh as a place to have a low cost of living with a highly-educated workforce, it opens the door to exploitation,” said Mariana Padias, the USW’s assistant director of organizing. An increasing number of tech employees view “union democracy” as important to having “more of a say in their work environment,” she stated. 

Why Workers Joined the Union  

A number of Alphabet Workers Union members have posted comments on the AWU website on why they joined the union and what they hope will be the result. 

For example, software engineer Greg Edelston stated that he joined the union because, “I want to see Alphabet act as ethically as possible. The union offers a way to influence Alphabet’s culture in the name of ethics.” 

Software engineer Alberta Devor stated that he joined the union “to make sure Google and Alphabet follow the slogan that our executives have abandoned: ‘don’t be evil.’” 

[Ed. Note: The motto “don’t be evil” had been part of Google’s corporate code of conduct since 2000. When Google was reorganized under the Alphabet parent company in 2015, the phrase was modified to “do the right thing,” according to an account in Gizmodo.] 

Parul Koul, software engineer and executive chair, Alphabet Workers Union

Parul Koul, a software engineer at Google, is the executive chair of the union. Interviewed by Bloomberg Businessweek in January, she stated, “The union is going to be a hugely important tool to bridge the divide between well-paid tech workers and contractors who don’t make as much.” 

Asked about the union’s mission statement that says, “We are responsible for the technology that we bring into the world,” Koul stated, “That statement means acting in solidarity with the rest of the world… I believe we have to see ourselves as part of the working class. Otherwise, we’re going to end up being wealthy people just fighting for our own betterment.” 

The AWU issues press releases on matters of interest. One example is a statement issued in January about the suspension of corporate access for Margaret Mitchell, then a senior scientist at Google and head of its Ethical AI Team. This followed Google’s parting of the ways with Timnit Gebru, who had been co-leader of the Ethical AI Team at Google. (AI Trends, Dec. 10, 2020) Gebru had submitted a paper on the ethical considerations around large language models to a conference; her managers asked that she withdraw it to give them more time for a review. (AI Trends, Jan. 28, 2021)  

The AWU statement on the matter stated in part, “together these are an attack on the people who are trying to make Google’s technology more ethical.” 

For its part, Google management calls attention to its efforts with AI for Good and for racial equity. In a June blog post, Melonie Parker, Google’s chief diversity officer, stated the company is committed to doubling the number of Black employees by 2025. She also mentioned a student loan repayment program to help Black employees, and partnerships with Historically Black Colleges and Universities (HBCUs) to broaden access to higher education and opportunities in tech. She announced that 10 HBCUs will each receive an unrestricted financial grant of $5 million.  

A search of the Google blog found no mention of the Alphabet Workers Union.  

Read the source articles and information in the New York Times, from 90.5 WESA in Pittsburgh, and a press release from the United Steelworkers. 


Ag-tech Employing AI and Range of Tools With Dramatic Results 

By AI Trends Staff  

An agricultural technology (ag-tech) startup in San Francisco, Plenty, plants its crops vertically indoors, in a year-round operation employing AI and robots that uses 95% less water and 99% less land than conventional farming. 

Plenty’s vertical farm approach can produce the same quantity of fruits and vegetables as a 720-acre flat farm, on only two acres.    

Nate Storey, cofounder and chief science officer of the startup Plenty

“Vertical farming exists because we want to grow the world’s capacity for fresh fruits and vegetables, and we know it’s necessary,” stated Nate Storey, cofounder and chief science officer of the startup Plenty, in an account in Intelligent Living. 

A yield 400 times that of flat farms makes vertical farming “not just an incremental improvement,” and the fraction of water use “is also critical in a time of increasing environmental stress and climate uncertainty,” Storey stated. “All of these are truly game-changers.”  

Plenty is one of hundreds of ag-tech startups using new technology approaches—including AI, drones, robots, and IoT sensors—supported by billions of dollars in investment from the capital markets.     

Plenty’s climate-controlled indoor farm has rows of plants growing vertically, hung from the ceiling. LED lights mimicking the sun shine on the plants; robots move them around; AI manages all the variables of water, temperature, and light. The AI continuously learns and optimizes how to grow better crops.   

Vertical farms can also be located in urban areas, producing food locally and eliminating many transportation miles. Benefits of locally produced crops include reduced CO2 emissions from transportation vehicles and potentially lower prices for consumers.    

“Supply-chain breakdowns resulting from COVID-19 and natural disruptions like this year’s California wildfires demonstrate the need for a predictable and durable supply of products can only come from vertical farming,” Storey stated.  

Plenty has received $400 million in investment capital from SoftBank, former Google chairman Eric Schmidt, and Amazon’s Jeff Bezos. It also struck a deal with Albertsons stores in California to supply 430 stores with fresh produce.  

Bowery Farming in New York City Supplying 850 Grocery Stores  

Another indoor farming venture is Bowery Farming in New York City, which has raised $467.5 million so far in capital, according to Crunchbase. Experiencing growth during the pandemic, the company’s produce is now available in 850 grocery stores, including Albertsons, Giant Food, Walmart and Whole Foods, according to an account in TechCrunch.   

The infusion of new capital, $300 million in May, “is an acknowledgement of the critical need for new solutions to our current agricultural system,” stated CEO Irving Fain in a release. “This funding not only fuels our continued expansion but the ongoing development of our proprietary technology, which sits at the core of our business and our ability to rapidly and efficiently scale toward an increasingly important opportunity in front of us,” Fain stated. 

The company plans to expand to new locations in the US, including a new site located in an industrial area in Bethlehem, Penn., which Bowery says will be its largest to date.  

A blog post on the company’s website describes the BoweryOS as the “central nervous system” of each farm, offering plants individual attention at scale. “It works by collecting billions of data points through an extensive network of sensors and cameras that feed into proprietary machine-learning algorithms that are interpreted by the BoweryOS in real time,” the account states. In addition, “It gets smarter with each grow cycle, gaining a deeper understanding about the conditions each crop truly needs to thrive.”  
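Bowery’s software is proprietary, but the general pattern it describes—sensor readings feeding a learned model that recommends settings for the next grow cycle—can be sketched in a few lines. The sensor variables, ranges, and model below are illustrative assumptions, not details of the actual BoweryOS.

# Hypothetical sensor-to-model grow loop; the variables, ranges, and model are
# illustrative assumptions, not the actual BoweryOS.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated history per tray: [light_hours, water_liters, temperature_c] and observed yield (kg).
X = rng.uniform([12, 1.0, 18], [20, 3.0, 26], size=(500, 3))
y = 0.4 * X[:, 0] + 1.5 * X[:, 1] - 0.05 * (X[:, 2] - 22) ** 2 + rng.normal(0, 0.2, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Each grow cycle: score candidate settings and pick the one expected to yield the most.
candidates = rng.uniform([12, 1.0, 18], [20, 3.0, 26], size=(1000, 3))
best = candidates[model.predict(candidates).argmax()]
print("Suggested light/water/temperature for the next cycle:", best.round(2))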

Ag-tech Spending Projected to Reach $15.3 Billion by 2025 

Global spending on smart, connected ag-tech systems, including AI and machine learning, is projected to triple by 2025, reaching $15.3 billion, according to BI Intelligence Research, quoted in a recent account in Forbes. 

IoT-enabled ag-tech is the fastest growing segment, projected to reach $4.5 billion by 2025, according to PwC. 

Demand should be there. Prediction data on population and hunger from the United Nations shows the world population increasing by two billion people by 2050, requiring a 60% increase in food production. AI and ML are showing the potential to help meet the increased requirement for food.   

Louis Columbus, author and principal of Dassault Systemes, supplier of manufacturing software

“AI and ML are already showing the potential to help close the gap in anticipated food needs,” stated the author of the Forbes article, Louis Columbus, a principal of Dassault Systemes, supplier of manufacturing software.  

AI and machine learning are well-suited to tackle challenges in farming. “Imagine having at least 40 essential processes to keep track of, excel at and monitor at the same time across a large farming area often measured in the hundreds of acres,” Columbus stated. “Gaining insight into how weather, seasonal sunlight, migratory patterns of animals, birds, insects, use of specialized fertilizers, insecticides by crop, planting cycles and irrigation cycles all affect yield is a perfect problem for machine learning,” he stated.  

Among a list of ways AI has the potential to improve agriculture in 2021, he offered:  

Using AI and machine learning-based surveillance systems for monitoring. Real-time video feeds of every crop can be used to send alerts immediately after an animal or human breach, very practical for remote farms. Twenty20 Solutions is a leader in the field of AI and machine learning-based surveillance.  

Improve crop yield prediction with real-time sensor data and visual analytics data from drones. Farms have access to data sets from smart sensors and drones they have never had before. Now it’s possible to access data from in-ground sensors on moisture, fertilizer, and nutrient levels, to analyze growth patterns for each crop over time. Infrared imagery and real-time video analytics also provide farmers with new insights.  

Smart tractors and agribots—robots designed for agricultural purposes—using AI and machine learning are a viable option for many agricultural operations that struggle to find workers. Self-propelled agribots can be programmed, for example, to distribute fertilizer on each row of crops, in a way that keeps operating costs down and improves yields. Robots from VineScout are used to create crop maps, then help manage crops, especially in wine vineyards. Based in Portugal, the project has been backed by the European Union and multiple investors.  

Read the source articles and information in Intelligent Living, in TechCrunch and in Forbes. 


Parking Between the Lines, a Heady Viral Topic, Ensnares AI Autonomous Cars 

By Lance Eliot, the AI Trends Insider   

Are you a middle parker or a sideline-hugging parker?   

Here’s the deal. A recent TikTok video went viral about how we all should be parking our cars when doing so in those parking lots that have clearly marked lined spaces. A brouhaha has now arisen.

You know how it goes. As you drive down a row of parked cars, you are spying for any next open space. Upon spotting one, you quickly drive up to the prized piece of turf and maneuver your car into the allotted space. There are painted lines on the asphalt that denote what amount of floor space you are considered entitled to consume.   

The question posed to you is whether you tend to park directly midway between those lines, or whether you aim to be closer to one side or the other of your teensy bit of earth. Take a moment to think this over. Your answer is very, very, very important.   

Most of the time, your primary concern is probably that you don’t want to scrape against any other cars as you manage to get into the parking spot.   

Trying to somehow line up perfectly in your now grabbed up parking spot is secondary in priority. They say that possession is nine-tenths of the law, so your crucial first step is to satisfactorily occupy the space. Dive in there, however, you can squeeze into it. This keeps other interlopers from trying to claim they saw it first (which, they might have, but you now “own” that space and have presumably won an intergalactic battle in doing so). 

Okay, after making sure that the landing has happened, and you’ve secured the vaunted spot, now you look around to see how much room there is between your car and the adjacently parked vehicles. Sometimes those other vehicles are rudely protruding into your now conquered space.    

Outrageous!   

But it seems relatively rare that people blatantly bloat over into an adjacent parking space, though it, unfortunately, does indeed occur. We’ll set aside that consideration for now. Let’s assume for the sake of discussion that you’ve found a parking spot that is not being encroached upon. The adjacent vehicles are within their lines and not transgressing into your space. I would guess that most of the time that’s how things are. You have the full extent of width available in your captured parking spot.   

How much space do you have width-wise? 

It all depends, but the traditional width for conventional parking spots is about eight to nine feet or so. A car is typically about six feet to perhaps six and a half feet in width. For ease of discussion, let’s agree to use six feet for the width of a normal car and use eight feet for the width of a typical parking space.  

Based on the presumption that a car is six feet wide and the parking spot is eight feet wide, we can use our heads to calculate that the difference is a matter of two feet. You have about two feet to play with inside your parking space, and those two feet are likely to be needed for getting out of and into your car. The two feet are your means of making egress and ingress related to your parked vehicle.   

Consider how these two feet of space can be allocated.   

By parking perfectly in the middle of the parking spot, you would in theory have one foot of open space to your left and one foot of open space to your right.    

What we also need to include in our calculus is whether the vehicles adjacent to you have managed to include any available space in their respective parking spots. It could be that the vehicle to your immediate left is hugging the line that borders upon your two cars. In that case, you have no added room by trying to temporarily make use of the space to your left, perhaps wanting to momentarily intrude as you open your driver’s side car door.   

Similarly, if the vehicle to your immediate right is hugging the line that borders upon your parking spot, this means that trying to use any space beyond your “internal” one foot of available space is going to be rebuffed. There isn’t any more space available because that other vehicle is hugging the line.   

In a squeeze play of a parking situation, whereby the adjacent vehicles are each hugging the line, you only have your own two feet of available space to exploit. There is no immediately available temporary space to leverage. This is nearly as bad as when adjacent cars encroach, though not quite so since you did at least get into the parking spot successfully.   

The thing is, now you might not have any means to get out of your car. 

Darn!   

Sure, you found a parking spot, nonetheless, you might be stuck inside your vehicle and unable to get out. That’s not what you probably had in mind when you were searching for a parking spot. The usual assumption is that you can park your car, you can get out of it, you can go do whatever you had in mind, and when you return to your parked car you will be able to get into it.   

Seems simple enough, but that doesn’t always happen readily.   

Getting into and out of your vehicle can at times be a contortionist’s job. You tentatively open the driver’s side door, trying desperately not to have your door touch the side of the adjacent car. The odds are that it will bump against the other car in this squeeze play scenario. You look around to see if anyone noticed. Assuming the coast is clear, you steady the door and ooze your body out of your driver’s seat, along with thinking extremely thin thoughts in hopes that your body can become one-dimensional and slide out without any further problems. 

Let’s use a smiley face version of the parking situation and pretend that the adjacent cars have parked perfectly in the middle of their parking spots. We will continue using the assumed sizes of six feet for the car width and eight feet for the width of the parking space.   

This is a blessing.   

It means that you have one foot of temporary space from the car that is to your left, and you have an additional foot of temporary space to your right. All told, you now have available two feet to the left of your car, and two feet to the right of your car. Mathematically, this is your one foot of space to your left inside your space, plus the one foot of space that is to the right of the space to your left. And then there is the one foot of space to your right, combined with the one foot of space that is to the left of the car that is to your right. Say that quickly, ten times, as it is quite the tongue twister. 
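For the numerically inclined, here is that tongue twister worked out as a tiny calculation, using the assumed six-foot car and eight-foot space:

# Worked version of the parking arithmetic above (assumed widths, in feet).
CAR_WIDTH = 6.0
SPACE_WIDTH = 8.0

slack = SPACE_WIDTH - CAR_WIDTH          # 2 feet of play inside your own space
own_left = own_right = slack / 2         # centered: 1 foot on each side of your car

# If the adjacent cars are also centered, each leaves 1 foot next to your shared line.
neighbor_left = neighbor_right = slack / 2

print("Door clearance on the left :", own_left + neighbor_left, "feet")    # 2.0
print("Door clearance on the right:", own_right + neighbor_right, "feet")  # 2.0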

I emphasize that this is a calculation of the temporary space. You are not parked into their spaces, and only momentarily use their available space when you need to get into and out of your vehicle.   

You have the enviable and luxurious scenario of being able to use an entire two feet to get out of your car on your side, and if you perchance have a passenger in the seat next to you, they have two feet to use on their side too. The world has suddenly become a joyous place. Birds are singing, flowers are blooming. It is fortuitous that those adjacent cars were able to perfectly park in the middle of their spaces.   

Ponder that notion.   

We pretty much assume that most people will try to park their cars in the center of their parking spot. Doing so seems prudent. It gives the maximum allowable space on either side of your car, within the constraints of your limit lines. It makes plain sense to do so.   

Envision a nirvana in which everyone always impeccably parked their cars in the exact center of the parking spots that they opted to occupy. In the case of six-foot-wide cars and eight-foot-wide parking spaces, there would be two feet to either side of each car. Every time. Guaranteed.   

Side note, a smarmy know-it-all might argue that we don’t know what portends for the cars parked at the very edges of the entire row. I think we can safely argue that they would likely have even more than two feet available. The assumption is that there isn’t anything blocking the ends of the row. Of course, this might not be the case and possibly a concrete wall or some barrier is there. Those that park at the end of such rows will decidedly be shortchanged, a sad fact of life.

Do today’s drivers achieve the purity of parking in the center or middle of their captured parking spot?   

Nope.   

Any casual glance at cars parked in a contemporary mall or movie theatre parking lot will showcase abundantly that people do not park that way with any semblance of consistency.   

On top of this, it is easy to justify not parking in the center of your parking spot if there is a vehicle in the adjacent spot that is not abiding with the park-in-the-middle mantra.   

For example, you drive up to a parking spot, and only you are in your car. Only you will need to get out of and later back into the car. You don’t need to worry about having any available space to the right of your car since you don’t have any passengers on board. You notice that the car to your left is hugging the borderline.  

What do you do?   

Indubitably, you would deduce that you ought to park as far to the right as possible in your parking space, providing maximum distance between your driver’s side door and the border to your left. In essence, this creates two feet of space, entirely confined within your available parking spot. The dolt to your left has essentially forced you into doing this, due to their careless parking and not having obeyed the rule to always park in the center of a parking spot. 

You had no choice. The other driver made the choice for you. The moment they hugged the line on their right, it meant that any driver pulling into that parking space to their right is going to inevitably shift over to the right too, seeking to maintain a reasonable distance to get out of their car.   

A close observation of cars parked in a parking lot will oftentimes reveal this cascading effect. Once a vehicle opts to park to the edge of their parking spot, the car adjacent will necessarily opt to do the same. And the next vehicle will do likewise. On and on this goes, causing an entire row to end up being lopsided in parking close to the line.   

It just takes one weak link (instigator) in the chain to get all the other drivers to do the same thing.   

I’m not suggesting this cascade always plays out in full. It all depends upon where the lopsided parking effort originates. It also depends obviously on the actual widths of the cars, and so on. The point overall is that this can happen and does in fact occur.   

A difficulty for many drivers is that they are not good at gauging where the center of the parking spot is, nor how to align their particular vehicle accordingly. It seems that a lot of drivers have no visceral comprehension of the width of their car. They do things by wild estimation. Even if their lives depended upon precisely pulling into a tight parking spot and had to be in the middle (else, say a menacing gorilla will leap onto the hood of their car), one doubts they could do so.   

In short, parking in the middle of a parking spot is just too much to handle for most drivers.   

Kind of a heartbreaking commentary about how we drive. Sad face. This brings us to the viral video.   

As though the video maker had just discovered the source of the Nile or the secret to those alien spaceships and UFOs, the video asserts that we should all park toward the left line of any parking spot and this would solve the world’s problems. At least with respect to parking. 

The notion is straightforward.

By everyone parking as close as possible to the left line, we would always be leaving open that roughly two feet to our right. Guaranteed (under the assumptions herein about the widths involved). Now, I realize you are thinking that you could do the exact same thing by everyone agreeing to park immediately next to the right sideline. Yes, that’s true. 

The basis for adopting the left line is that in a society wherein the cars are designed with the driver in the left side seat, presumably, drivers can easily align with the left line. Going back to how badly drivers seem to gauge the width of their cars, asking them to sidle up to the right line would seemingly be a disastrous proposition. Cannot be done, they would exhort.   

You would hope that most drivers could at least align their vehicles with the left line. Naturally, any country that has the driver’s side to the right side of the vehicle would probably want to use the right side line, leveraging the same logic already mentioned.   

That is then what got a viral spin going.   

Apparently, some people on this planet think that this is the best idea since the invention of sliced bread. Others scratch their heads and wonder why in the heck this simple idea should be so bandied about and get a buzz in the social media realm. One supposes that this does have a bit more complexity and weightiness than videos that show a cat meowing or a baby that spits up milk (please don’t harp on me about that, I love cats, and babies are certainly adorable too).   

Speaking of cars, the future of cars consists of AI-based true self-driving cars. 

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. 

Here’s an intriguing question that is worth pondering: Would the left side hugging of a parking spot be more feasible due to the advent of AI self-driving cars, and if so, should this be implemented?   

Before jumping into the details, I’d like to further clarify what is meant when I refer to true self-driving cars. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).   

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.   

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Parking In Parking Spots 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   

Let’s dive into the myriad of aspects that come to play on this topic.   

Programming of self-driving cars to always park at the leftmost edge of a parking spot would be relatively straightforward. 

You see, self-driving cars make use of various sensors such as video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and similar devices to derive the nature of the driving scene. You could construe the sensor suite as somewhat akin to the eyes and ears of the AI driving system.   

In the case of parking in a lined parking spot, the AI driving system would receive data via the vehicle-mounted sensors that are scanning the surroundings, and then utilize computational pattern-matching techniques such as Machine Learning (ML) or Deep Learning (DL) to identify the specific parameters associated with a parking spot. Via the video streamed in real time from the onboard video cameras, the painted lines would hopefully be detectable. The AI driving system would then issue commands to the autonomous vehicle driving controls to maneuver into the parking space accordingly. 
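As a rough illustration of the perception step, a classical computer-vision approach to spotting painted lines in a camera frame might look like the following sketch, using the open-source OpenCV library. Real AI driving systems rely on far more elaborate, learned perception stacks; this only conveys the idea of turning camera pixels into line segments.

# Illustrative line detection with classical computer vision (OpenCV); real AI
# driving systems use far more elaborate, learned perception stacks.
import cv2
import numpy as np

def detect_parking_lines(frame_bgr):
    """Return line segments that could correspond to painted parking-stall lines."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # strong intensity edges: paint vs. asphalt
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60, minLineLength=80, maxLineGap=10)
    return [] if lines is None else [tuple(seg[0]) for seg in lines]

# Example usage on a single camera frame (the file path is hypothetical):
# frame = cv2.imread("parking_frame.jpg")
# for x1, y1, x2, y2 in detect_parking_lines(frame):
#     cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)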

Generally, you could reasonably expect that this would be done with extremely high reliability.   

The odds are that the self-driving car would nearly always park in the leftmost portion of a parking space if that’s what it had been programmed to attain. Occasional exceptions might arise, such as if the adjacent parked cars prevented positioning in the leftmost portion, or possibly due to obstructions or other oddities about a particular parking spot.   

I might add that in the case of human drivers trying to always park toward the leftmost edge of a parking spot, there is a lingering doubt about the reliability of humans being able to do so. Though the earlier point was made that human drivers would presumably find it easier to park toward the left line and do so more consistently than parking in the center of a parking space, that omits the notion that humans innately have human foibles and therefore are not especially robot-like in their behaviors. 

You can imagine how things might go in the case of human drivers trying to adopt a left-line parking rule.   

Some people would flatly refuse to do so. They would potentially feel it is their constitutional right to park within a parking spot wherever they darned wish to do so. We would undoubtedly end up with some parking lots that had the left line rule, while others proclaimed you can park anywhere within the lines. This would draw some drivers to one of those parking lots and other drivers to the other ones. Of course, at some point, a left-line person would get irked that an anywhere person opted to park in the left-line parking lot, and fisticuffs would possibly fly.   

On top of this, it would seem overly optimistic to believe that human drivers would properly align to the left line, even if that was their desired intent. I’m sure that many would cross over the line while attempting to kiss the line. As a result, there would probably be some sizable percentage of parked cars that edged over into the left adjacent parking spot.   

Anyway, all of those complications would fall by the wayside with the AI driving systems at the wheel. No-fuss, no muss. Parking to the left line would be easy-peasy.   

Case closed. Wait a second, maybe there is more. Of course there is.   

We are going to have both self-driving cars and human-driven cars for many decades to come. There are about 250 million conventional cars in the United States alone, and those regular cars are not going to disappear overnight. In short, we can expect that our public roadways will be replete with a mixture of self-driving cars and human-driven cars.   

This includes being in parking lots too. 

Though the self-driving cars could readily and consistently park to the left line, there would certainly be human drivers that violated this principle. It would then toss asunder the precept that all of the cars would need to park in the same manner. We are back to square one.   

You could have parking lots that are devoted exclusively to self-driving cars. In that case, the left line rule would be viable. Will human drivers possibly get upset that they are being kept out of the parking lots being used by self-driving cars?   

Possibly so, depending upon where those parking lots are located, such as near a convenient place to be able to park your car.   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

Lots more twists and turns arise. 

A self-driving car could essentially park anywhere within the lines, consistently, regardless of whether we wanted this to happen on the left or the right. As such, it might not make sense to enforce the left-line rule. You could instead program the AI driving systems to center the car in the parking spot. This would have the same general effect as parking to the left. It would be done consistently and ostensibly smackdab in the middle.   

Some pundits insist there would never be a need to park a self-driving car in a parking lot, or anyplace else. They claim that self-driving cars will always be on the go, other than when getting fueled or undergoing maintenance. Self-driving cars will seemingly always drop off passengers at some suitable drop-off point, and likewise, pick up passengers at some appropriate pick-up spot.   

Thus, it is conceivable that mass parking of self-driving cars is not needed.   

Furthermore, when self-driving cars are parked, humans will not get into or out of the self-driving car while it is in a parked position. Instead, the AI driving system will bring the autonomous vehicle to the passengers. You could then pack self-driving cars together like sardines, assuming you did want them to park someplace.   

Quite a future awaits us.   

Meanwhile, the next time that you seek to park in a lined parking lot, look at how the other cars are parked. It will probably look like a full-on mishmash, with some cars toward the left, some toward the right, on the line, in the middle, and so on.   

Human drivers are definitely quite a fun bunch.   

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 


Sentiment Analysis Tools Well-Timed in a Low-Empathy Era 

By John P. Desmond, AI Trends Editor 

People in the US are suffering from an empathy deficit, one researcher suggests.   

Children who spend much of their time plugged into self-focused media are less likely to learn how to read emotional cues associated with face-to-face communication, suggests Dr. Michele Borba, an educational psychologist and author of “UnSelfie: Why Empathetic Kids Succeed in Our All-About-Me World.”  

Her research shows a 40% drop in empathy among college freshmen over the past 30 years, as a result of cultural and environmental factors, resulting in a weakening of our empathy “muscles”.  

“Empathy activates conscience, curbs bullying, reduces prejudice and promotes moral courage, the foundation to trust, the benchmark of humanity and core to everything that makes a society civilized,” stated Dr. Borba on her website.  

To better tune into this unfeeling public, some companies are turning to sentiment analysis software that incorporates AI.   

SugarPredict from SugarCRM Adds Sentiment Analysis 

SugarCRM recently added sentiment analysis software, called SugarPredict, into its SugarLive multichannel customer communications platform, used by sales and service staff to track the details of each customer interaction.  SugarPredict incorporates natural language processing AI capability.   

“One of the highest callings of customer experience professionals and enabling technology platforms is helping customers via an understanding of their struggles and aspirations,” stated Paul Greenberg, President of the 56 Group, LLC and author of The Commonwealth of Self Interest: Business Success Through Customer Engagement, in a press release from SugarCRM. “AI-powered sentiment analysis weds customer voice and text to business action, providing every sales and service interaction with the means to account for customers’ emotional tone and attitude—context indispensable to supporting exceptional experiences.”  

Rich Green, CTO, SugarCRM

SugarCRM CTO Rich Green stated, “Sales and service professionals are under a great deal of pressure as a customer’s business can be won or lost in a single misstep. You rarely get a second chance to make a great impression with a customer; it’s profoundly important to get each and every interaction right and connect on a deeply human level.” 

In SugarPredict, SugarCRM is leveraging its August 2020 acquisition of AI supplier Node Inc., an AI platform focused on using customer data to further predictability, the company indicated to AI Trends. 
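SugarPredict’s internals are not public; as a generic illustration of the kind of sentiment scoring described, an off-the-shelf NLP model can be applied to customer interaction text in a few lines. The library and example texts below are stand-ins, not SugarCRM’s technology.

# Generic sentiment-scoring sketch (Hugging Face transformers); a stand-in, not SugarPredict.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default English model

interactions = [
    "Thanks so much, the new dashboard solved our reporting problem.",
    "This is the third time I have had to call about the same billing error.",
]

for text, result in zip(interactions, classifier(interactions)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")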

InVibe Labs Tapping GPT-3 Large Language Model for Insights  

InVibe Labs of Costa Mesa, Calif., a startup focused on voice technology research, is incorporating the GPT-3 large language model created by OpenAI to help analyze its archive of transcribed audio files from research surveys.    

“Recent technological advancements in machine learning and NLP have pushed us to expand our thinking about what processing tasks can be offloaded to a machine,” stated Jeremy Franz, cofounder and CTO of inVibe, in a blog post on the company’s website. The company worked with Edge Analytics to build several algorithms on top of GPT-3, used to perform novel NLP tasks. “GPT-3 is better at some tasks than others; however, combining GPT-3 with expert human analysts and other automated tools gives us the best of both worlds,” Franz stated. 

In testing, inVibe achieved better performance on “subject-aware” sentiment analysis using the GPT-3 algorithms, than any other solution they tried. “Subject-aware” means that the technique considers the subject of the phrase, not just whether the words are positive or negative. “This is a marked improvement over traditional techniques, and even exceeds the performance of advanced models like Google Cloud’s semantic analysis API,” Franz stated. He followed with several specific examples to flesh out his point.  
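inVibe has not published its prompts or algorithms, but a prompt-based, subject-aware sentiment check in the spirit Franz describes could be sketched as follows, using the GPT-3-era OpenAI completions interface (openai<1.0). The prompt wording, engine name, and helper function are illustrative assumptions.

# Prompt-based, subject-aware sentiment sketch using the GPT-3-era OpenAI
# completions interface (openai<1.0). The prompt, engine name, and helper are
# illustrative assumptions, not inVibe's actual algorithms.
import openai  # pip install "openai<1"

openai.api_key = "YOUR_API_KEY"  # placeholder

def subject_sentiment(utterance: str, subject: str) -> str:
    prompt = (
        "Classify the speaker's sentiment toward the given subject as "
        "positive, negative, or neutral.\n"
        f"Utterance: {utterance}\n"
        f"Subject: {subject}\n"
        "Sentiment:"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # a GPT-3-era engine name
        prompt=prompt,
        max_tokens=3,
        temperature=0,
    )
    return response.choices[0].text.strip().lower()

# The same sentence can score differently depending on the subject, e.g.:
# subject_sentiment("The drug worked, but the side effects were rough.", "efficacy")
# subject_sentiment("The drug worked, but the side effects were rough.", "side effects")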

“With the steps we’ve taken, inVibe hopes to increase the clarity, organization, and timeliness of our insights for clients,” Franz stated.  

HARK Connect Extends Qualitative Research with Sentiment Analysis 

HARK Connect of Austin has focused on qualitative research for market research involving groups, one-on-one interviews and user experience (UX) designs. The company is now introducing applications for Sentiment Analysis and Facial Coding.   

“Our focus has been on the benefits of the AI-human interaction and how it empowers qualitative research,” stated Dr. Duane Varan, founder and CEO of Hark Connect, in an email to AI Trends. “This is very much about technology, but it’s a ‘warm’ technology, if you will. It’s not intimidating; if anything, it’s accessible. It ends up making the people who use it feel good about it,” he stated.   

HARK Connect Managing Partner and Tech Evangelist Elissa Moses stated, “Through AI, Qual is rapidly leapfrogging from being one of the lowest-tech forms of research to one of the highest.” She has worked for advertising agencies including BBDO, and for clients including Gillette and Seagram. “Clients across a range of business and institutional categories are reaping the benefits of its speed and depth of analysis,” she stated.   

Comments from clients on the company’s website include this one from Liz Musch, founder and CEO of LM Global Advisor, offering strategic consulting, “HARK Connect has cracked the best way to experience focus groups. Easy to watch and you can communicate remotely. Plus, being international, I really appreciate the real time translations.”  

Cauliflower in Hamburg Uses AI to Derive Sentiment from Text Analysis  

Startup Cauliflower.ai, founded in 2019 in Hamburg, Germany, offers an AI/ML-based multilingual sentiment analysis tool, used to forecast sales, extract key excerpts from customer feedback, automate customer support tickets and create intelligent AI chatbots, according to an email message to AI Trends.   

The company was founded by Lukas Waidelich, who is managing director, and Gianluca-Daniele Speranza, who is technical lead. The two custom-built a natural language understanding algorithm used to analyze text from sources including surveys, reviews, customer feedback and social media. The tool can translate multiple languages. The AI-aided analysis can identify the topics the target group is discussing and which are most important; the sentiment of the comments, including positive and negative feedback; and themes across the network and how they are related. It also offers visual reporting and sharing options for the results of the analysis.  
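Cauliflower’s natural language understanding algorithm is custom-built and not public; the theme-grouping half of what is described can be roughed out with standard open-source tooling, as in the sketch below. The sample comments and cluster count are invented for illustration.

# Rough theme-grouping sketch with standard open-source tools; not Cauliflower.ai's custom NLU.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [
    "Delivery took two weeks and nobody answered my emails.",
    "The blender stopped working after a month.",
    "Shipping was slow, but support eventually sorted it out.",
    "Love the rotating weekly product themes.",
    "Great selection of new items every week.",
    "Had to return the toaster, it arrived broken.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(3):
    print(f"Theme {cluster}:")
    for text, label in zip(feedback, labels):
        if label == cluster:
            print("  -", text)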

Tchibo is a German chain of coffee retailers and cafes known for non-coffee products that change weekly, including clothing, furniture, electronics and appliances. “Every week a new world,” is the slogan of the company, which became a Cauliflower.ai customer.   

“We discovered Cauliflower while searching for strong technology partners who can help us with digital transformation,” stated Alexander Falser, head of consumer insights for Tchibo, in a comment posted on the Cauliflower website. “With automated AI-based semantic analysis and excellent visualization, Cauliflower delivered and proved to be a highly customer-oriented and flexible partner,” he stated.  

Authors Have Suggestions for US Empathy Deficit  

The sentiment-analysis products might be well-timed in an era seeing shrinking reservoirs of empathy. Opportunities to give and receive empathy are reduced by decreased social interaction, online gatherings, air hugs and masked conversations, suggest the authors of a recent account in Scientific American entitled, “The US Has an Empathy Deficit,” published in September 2020.   

Judith Hall, a professor of psychology, Northeastern University

“The fact that a recent Gallup poll showed that roughly a third of the country doesn’t think there’s a problem with race relations suggests that many people aren’t grasping other people’s perspectives,” stated the authors, Judith Hall, a professor of psychology at Northeastern University, and Mark Leary, a professor of psychology and neuroscience at Duke University.   

“You don’t have to be a social psychologist like we are to see that Americans are experiencing an empathy deficit,” the authors stated, suggesting remedies such as asking others how they are feeling and listening to their response, and trying to put yourself in other people’s shoes. 

Read the source articles and information on the website of Dr. Michele Borba, in a press release from SugarCRM, in a blog post from inVibe Labs, on the website of Hark Connect, on the website of Cauliflower.ai and in Scientific American.


AI in Healthcare: Lessons Learned From Moffitt Cancer Center, Mayo Clinic  

Until recently, Ross Mitchell served as the inaugural AI Officer at the Moffitt Cancer Center in Tampa, Florida. Now he’s offering consulting expertise born of a long career applying AI to healthcare and health systems. 

Mitchell’s experience dates back to his undergraduate days. While pursuing his degree in computer science in Canada, he had a co-op work term at the local cancer center. “That was just a fluke how that placement worked out, but it got me interested in applying computer science to medicine. This was in the ’80s,” Mitchell says.   

He did a master’s degree at the cancer clinic—”very technical work with computer hardware and oscilloscopes and low-level programming”—but his lab was right next to the radiotherapy waiting room.  

“You got to see the same people in the clinic. Their treatments were fractionated over many days in many doses. You got to see them again and again, over many weeks, and see them progress or decline,” he remembers.   

That front row seat to their long treatment plans solidified Mitchell’s interest in using computation to improve medicine—a goal he’s pursued ever since: first with a PhD in medical biophysics, spinning a company out of his lab and serving as co-founder and Chief Scientist, and then joining the Mayo Clinic in Arizona to build a program in medical imaging informatics. In 2019, he took the role of inaugural AI officer at Moffitt Cancer Center, and in 2021, he began working as an independent consultant to help other organizations apply AI to healthcare. Mitchell recently sat down with Stan Gloss, founding partner at BioTeam, consultants to life science researchers, to discuss the practical knowledge he’s gathered as he has applied AI to medicine over the course of his career. AI Trends was invited to listen in.  

Editor’s Note: Trends from the Trenches is a regular column from the BioTeam, offering a peek behind the curtain of some of their most interesting case studies and projects at the intersection of science and technology. 

 

Stan Gloss: I’ve never heard of an AI officer. Can you tell me what that role is in an organization? 

Ross Mitchell, AI Officer, Moffitt Cancer Center

Ross Mitchell: It depends on the organization, what they want to do and where they want to go. More and more organizations in healthcare are developing roles like this. The role is not yet really well-defined. Nominally, it would be someone who’s going to guide and direct the development of AI and related technologies at the organization.  

It’s significantly different from what people are calling a chief digital officer, right? 

Moffitt recently hired a chief digital officer as a new role. Digital, by necessity, involves a lot of AI. [We see a] significant overlap between what you do related to digital technology in a healthcare organization and AI. The best way now to process a lot of that digital data and analyze it and turn it into actionable information involves the use of AI at some point in the chain.  

When you look at analysis options, what’s the difference between deep learning and machine learning?  

The broad field of AI encompasses a number of subfields. Robotics is one. Another area is machine learning.   

Machine learning, many years ago, back in the ’70s and the ’80s, was what we did to make computers perform intelligent-appearing activity by developing sets of rules. You would try and think of all the possible conditions that you might encounter, let’s say in a patient’s electronic health record in a hospital, and develop a series of rules. 

A doctor’s work generally follows simple algorithms: if a patient meets a set of conditions, it would trigger an action or a response. You could code up those rules. Imagine a series of if-then-else statements in your computer, and it would monitor things like blood pressure and temperature. If those things got out of whack, it might trigger an alert that would say, “We think this patient is going septic. Go take a look at them.” 
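A toy version of the kind of rule base Mitchell describes might look like the following; the thresholds are invented for illustration, not clinical guidance.

# Toy rule-based alert of the kind described above; thresholds are invented for
# illustration only and are not clinical guidance.
from typing import Optional

def check_vitals(temp_c: float, heart_rate: int, systolic_bp: int) -> Optional[str]:
    """Return an alert string if a hand-written rule fires, else None."""
    if temp_c > 38.3 and heart_rate > 90 and systolic_bp < 100:
        return "We think this patient is going septic. Go take a look at them."
    if temp_c > 40.0:
        return "Severe fever. Review immediately."
    return None

print(check_vitals(temp_c=38.9, heart_rate=104, systolic_bp=92))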

Those rule-based systems have been around for several decades, and they work well on simple problems. But when the issue gets complex, it’s really difficult to maintain a rule base. I remember many years ago trying to build commercial applications using rule-based systems and it was hard.  

What ends up happening is rules conflict with each other. You change one, and it has a ripple effect through some of the other rules, and you find out that you’re in a situation where both conditions can’t be true at once and yet, that’s what the system relies on. Rule-based systems were brittle to maintain and complicated to build, and they tended not to have great performance on really complex issues, but they were fine for simpler issues. 

In the ’70s and ’80s, when the earliest machine learning came along, the machine learned to identify patterns by looking at data. Instead of having a person sit down and say, “When the blood gases get above a certain level, or the body temperature gets above a certain level and the heart rate gets below a certain level, do something,” you would present lots and lots of that data along with outcomes, and the machine would look for patterns in the data and learn to associate those patterns with different outcomes. Learning from the data is what machine learning is all about.   

In the early ’80s, one of the popular ways to learn from the data was to use neural networks, which mimic networks in mammalian brains. A lot of the foundational work was done by Geoff Hinton, a neuroscientist in Toronto, Canada. He was interested in figuring out how the brain worked. While building synthetic circuits in the computer to simulate what was going on, he developed some of the fundamental algorithms that let us, as scientists, train these networks and have them learn by observing data.  

Deep learning is a subspecialty of machine learning. To recap: you’ve got AI, which has a subspecialty of machine learning, which, in turn, has a subspecialty of deep learning. Deep learning is just using neural-type architectures to learn patterns from data, as opposed to something like a random forest algorithm. 

What would be a good use of deep learning over machine learning? How would you use it differently? 

I use both all the time. Deep learning is particularly effective under certain conditions. For example, when you have a large amount of data. The more data you have, the better these deep learning algorithms tend to work, because they just use brute force to look for patterns in the data. 

Another important factor, as implied by “brute force”, is computational power. You really need computational power. That has been growing exponentially for over 20 years. The top supercomputer in the world in 1996 had about the same gross computing power as the iPhone you carry in your pocket. In other words, each of us is carrying around 1996’s top supercomputer in our pocket. There are things you can do now that just weren’t even remotely conceivable in the ’90s in terms of compute power. 

Of course, the advent of the Internet and digital technology in general means there’s an enormous amount of data available to analyze. If you have massive amounts of data and lots of compute power, using deep learning is a good way to go about pulling information out.  

I generally try machine learning first and if machine learning is good enough to solve the problem, great, because machine learning tends to be more explainable. Certain classes of algorithms are naturally explainable. They give you some insight into how they made their decision.  

Deep learning is more difficult to get those insights out of. Lots and lots of advances have been made in the last couple of years in that area, specifically, and I think it’s going to get better and better as time goes on. But right now, deep learning using neural networks is seen to be more of a black box than the older machine learning algorithms.  

As a general rule of thumb, we try a machine learning algorithm like a random forest first, just to learn about our data and get insights into it, and then if we need to, we’ll move on to a deep learning algorithm.  
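A rough sketch of that workflow, assuming the same kind of synthetic tabular data as above and illustrative feature names, might look like this: fit a random forest first and inspect its feature importances for a coarse sense of what drives the predictions, before reaching for a deep network.

```python
# Sketch of the "classical ML first" workflow: fit a random forest,
# then inspect feature importances for a rough view of what drives predictions.
# Data and feature names are synthetic and illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("test accuracy:", forest.score(X_test, y_test))
for name, importance in zip(["blood_gas", "temperature", "heart_rate"],
                            forest.feature_importances_):
    print(name, round(importance, 3))
```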

So this is all old technology? What’s new?   

In 2012, there was a watershed moment in computer vision when convolutional neural networks were applied to images and run on graphical processing units to get the compute power. All of a sudden, algorithms that had seemed impossible just a few years before became trivial.  

When I teach this to my students, I use an example of differentiating cats and dogs. There was a paper published by researchers at Microsoft in 2007 that described an online test to prove that you were human (https://dl.acm.org/doi/abs/10.1145/1315245.1315291). They showed 12 pictures, 6 cats and 6 dogs, and you had to pick out the cats. A human can do that with 100% accuracy in a few seconds. You can just pick them out. But for a computer at the time, the best it could hope to get would be 50-60% accuracy, and it would take a lot of time. So it was easy to tell whether a human or a computer was picking the images, and that’s how websites verified that you were human.  

About seven years later, in 2013 I think, there was a Kaggle Competition with prize money attached to develop an algorithm that could differentiate cats from dogs (https://www.kaggle.com/c/dogs-vs-cats). The caption on the Kaggle page said something like, “This is trivial for humans to do, but your computer’s going to find it a lot more difficult.” They provided something like 10,000 images of cats and 10,000 images of dogs as the data set and people submitted their entries, and they were evaluated. Within one year, all the top-scoring algorithms used convolutional neural nets running on graphical processing units to get accuracies over 95%. Today you can do a completely code-free approach and train a convolutional neural network in about 20 minutes, and it will score above 95% differentiating cats from dogs.  

In the space of less than a decade, this went from a problem that was considered basically unsolvable to something that became trivial.  You can teach it in an introductory course and people can train an algorithm in half an hour that then runs instantly and gets near perfect results. So, with computer vision we think of two eras: pre-convolutional neural nets, and post-convolutional neural nets.  
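For a sense of how compact that recipe is today, here is a hedged sketch of the transfer-learning approach commonly used for this task, written with PyTorch and a recent torchvision; the "data/train/cat" and "data/train/dog" folder layout is assumed purely for illustration.

```python
# Sketch of modern image classification with a convolutional neural network:
# fine-tune a pretrained ResNet on a cats-vs-dogs image folder.
# The directory layout ("data/train/cat", "data/train/dog") is assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_data = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cat, dog

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one pass over the data is often enough
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```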

Something similar happened recently with natural language processing (NLP). In late 2018, Google published an NLP algorithm called BERT (https://arxiv.org/abs/1810.04805). It generated over 16,000 citations in two years. That is a tremendous amount of applied and derivative work, and the reason is that it works so well for natural language applications. Today you can really think about natural language processing as two eras: pre-BERT and post-BERT.  
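As an illustration of how accessible BERT-style models have become, the sketch below loads a pretrained BERT through the Hugging Face transformers library (a popular wrapper, not part of the original Google release) and fills in a masked word; the example sentence is arbitrary.

```python
# Sketch of applying a pretrained BERT model with the Hugging Face
# `transformers` library. Requires: pip install transformers torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The patient was admitted to the [MASK]."):
    # Each prediction carries the proposed token and a confidence score.
    print(prediction["token_str"], round(prediction["score"], 3))
```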

How are these more recent AI advances going to change healthcare and work? Will many people—physicians, technicians—be out of jobs?   

My belief is the opposite will happen because this is what always seems to happen with technology. People predict that a new technological innovation is going to completely destroy a particular job type. And it changes the job—but it doesn’t destroy it—and ends up increasing demand.  

One of the oldest examples of that is weaving, back in the early industrial age. When the automatic loom was invented, the Luddites rebelled against it because their livelihoods depended on weaving. What ended up happening was that the cost of producing fine, high-quality fabrics dropped dramatically because of these automated looms. That lowered the price and thereby increased the demand. The number of people employed in the industry initially took a dip, but then increased afterwards. 

Similarly, the claim was made that the chainsaw would put lumberjacks out of business. Well, it didn’t. If anything, demand for paper grew. And finally, in the ’70s, when the personal computer and the laser printer came along, people said, “That’s the end of paper. We’re going to have a paperless office.” Nothing could be further from the truth now. We consume paper now in copious quantities because it’s so easy for anybody to produce high quality output with a computer and a laser printer.  

I remember when I was a grad student, when MRI first came on the scene and was starting to be widely deployed, people were saying, “That’s the end of radiology because the images are so good. Anybody can read them.” And of course, the opposite has happened. 

I think what will happen is you will see AI assisting radiologists and other medical specialists—surgeons, anesthesiologists, just about any medical specialty you can think of. There’s an application there for AI.  

But it will be a power tool; AI is basically a power tool for complexity. If you have the power tool, you’re going to be more efficient and more capable than someone who doesn’t.   

A logger with a chainsaw is more efficient and productive than a logger with a handsaw.  But it’s a lot easier to injure yourself with a chainsaw than it is with a handsaw. There have to be safeguards in place.   

The same thing applies with AI. It’s a power tool for complexity, but it’s an amplifier as well. It can amplify our ability to see into and sort through complexity, BUT it can amplify things like bias. There’s a very strong movement in AI right now to look into the effects of this bias amplification by these algorithms and this is a legitimate concern and a worthwhile pursuit, I believe. Just like any new powerful tool, it’s got advantages and disadvantages and we have to learn how to leverage one and limit the other.  

I’m curious to get your thoughts on how AI and machine learning are going to impact traditional hypothesis-driven research. How do these tools change the way we think about hypothesis-driven research, from your perspective?  

It doesn’t; it complements it. I’ve run into this repeatedly throughout my career, since I’m in technical areas like medical physics and biomedical engineering, which are heavily populated by traditional scientists who are taught to rigidly follow the hypothesis approach to science. That is called deductive reasoning—you start with a hypothesis, you perform an experiment and collect data, and you use that to either confirm or refute your hypothesis. And then you repeat.  

But that’s very much a development of the 20th century. In the early part of the 20th century and the late 19th century, the opposite belief prevailed. You can read Conan Doyle’s Sherlock Holmes saying things like, “One should never derive a hypothesis before looking at the data, because otherwise you’re going to adapt what you see to fit your preconceived notions.” Sherlock Holmes is famous for that. He would observe and then pull patterns from the data to come up with his conclusions.  

But think of a circle. At the top of the circle is a hypothesis, and then going clockwise around the circle, that arrow leads down to data. Hypothesis at the top; data at the bottom. And if you go clockwise around the circle, you’re performing experiments and collecting data and that will inform or reject your hypothesis.  

The AI approach starts at the bottom of the circle with data, and we take an arc up to the hypothesis. You’re looking for patterns in your data, and that can help you form a hypothesis. They’re not exclusionary; they’re complementary to each other. You should be doing both; you should have that feedback circle. And across the circle you can imagine a horizontal bar of tools that we build; these can be algorithms, or they could be a microscope. They’re things that let us analyze or create data. 

When people use AI and machine learning, does that reduce the bias that may be introduced by seeking to prove a hypothesis? With no hypothesis, you’re simply looking at your data, seeing what your data tells you, and what signals you get out of your data.   

Yes, it’s true that just mining the data can remove my inherent biases as a human, me wanting to prove my hypothesis correct, but it can amplify biases that are present in the data that I may not know about. It doesn’t get rid of the problem of bias. 

I’ve been burned by that many, many times over my career. At Mayo Clinic, I was working on a project once, an analysis of electronic health records to try to predict hospital admission from the emergency department. On my first pass on the algorithm, I used machine learning that wasn’t deep learning and I got something like 95 percent accuracy.  

I’d had enough experience at that point that I was not excited or elated by that. My initial reaction was, “Uh-oh, something’s wrong.” Because you’d never get 95%. If it was that easy for an algorithm to make the prediction, people would have figured it out after dealing with these patients for years.  

I figured something was up. So, I went back to the clinician I was working with, an ER doc, and looked at the data. It turns out, admission status was in my input data and I just didn’t know because I didn’t have the medical knowledge to know what all those hundreds of variable columns meant. Once I took that data out, the algorithm didn’t perform very well at all.  
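The sketch below shows the kind of check that catches this sort of leakage: when accuracy looks implausibly high, inspect which columns the model leans on. The column names and data are made up, with one column deliberately copying the label to mimic the situation described above.

```python
# Sketch of a leakage check: when accuracy looks too good, inspect which
# columns the model relies on. Column names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "heart_rate": rng.normal(90, 20, n),
    "admit_status_code": y,          # the leaked column: it encodes the label
})

model = RandomForestClassifier(n_estimators=100, random_state=0)
print("accuracy:", cross_val_score(model, df, y, cv=5).mean())  # suspiciously near 1.0

model.fit(df, y)
for name, imp in sorted(zip(df.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(name, round(imp, 3))       # the leaked column dominates the importances
```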

There’s a lot of work now on trying to build algorithms using the broadest, most diverse data sets that you can. For example, you don’t want to just process images from one hospital. You want to process images from 100 hospitals around the world. You don’t want to just look at hospitals where the patient population is primarily wealthy and well insured. You also want to look at hospitals where there are a lot of Medicare and Medicaid patients, and so on. 

What advice would you give an organization for starting off in AI? Can you fast track your organization to actually get to the point where you can do AI and machine learning?  

You can’t fast track it. You cannot. It’s an enormous investment and commitment, and it’s often a cultural change in the organization.   

My first and foremost advice is you need someone high up in the organization, who probably reports to the CEO, with a deep, extensive background and practical hands-on experience in BOTH healthcare and in artificial intelligence. The worst thing you can do is to hire somebody and give them no staff and no budget. You’re just basically guaranteed that the endeavor is going to fail. You need to be able to give them the ability to make the changes in the culture.  

One of the biggest mistakes I see healthcare organizations make is hiring someone who has gone online, taken a couple of courses from Stanford or MIT, watched some YouTube videos, read a couple of papers, and got “into digital” five or six years ago, and putting that person in place to oversee and direct the entire effort of the organization. They really have no experience to do that. It’s a recipe for failure.   

You also can’t expect the physician in the organization to easily become an AI expert. They’ve invested 10-15 years of education in their subspecialty, and they’re busy folks dealing with patients every day and dealing with horrific IT systems—electronic medical record systems—that make them bend to the technology instead of the other way around.  

You want somebody who’s been doing healthcare AI for 20 years and really knows how to use the power tools and where to apply them. But that person has to be able to communicate with the physicians and also has to be able to communicate with the engineers doing the fundamental work.   

It’s not a technical limitation that is stopping us from advancing this in healthcare. It’s mostly a cultural issue in these organizations and a technical expertise issue.  

One of the biggest obstacles I hear about when I do these interviews is that the data is not ready for prime time. Organizations really haven’t put the thought into how to structure and organize the data, so they really are not AI or ML ready. Have you seen that? And what can an organization do with their data to prepare?  

That is very common. Absolutely.  

It’s a whole issue unto itself. Tons of data is out there. You’ve got an electronic medical record system in your large hospital containing all this data. How do you enable people within your organization with the appropriate skills to get at that data and use it to produce an analytic that will improve your outcomes, reduce cost, improve quality… or ideally all?  

It’s a cultural issue. Yes, there are technical issues. I’ve seen organizations devote enormous effort into organizing their data and that’s beneficial, but just because it’s organized doesn’t mean it’s clean.   

People say, “Oh, this is a great data set.” They’ve spent tons of time organizing it and making sure all the fields are right and cross-checking and validating, and then we go and use it to build an algorithm, and then you discover something systemic about the way the organization collects data that completely throws off your algorithm. Nobody knew, and there’s no way to know ahead of time that this issue is in the data, and it needs to be taken care of.  

That’s why it’s so critical to have a data professional, not just someone who can construct, fill, and organize a database. You need someone who’s an expert in data science, who knows how to find the flaws that may be present in the data and has seen the red flags before.  

Remember, my reaction when I got the 95% accurate algorithm wasn’t one of elation. I knew we needed to do a double check there. And sure enough, we found an issue.  

I ran into something very similar recently at Moffitt in the way dates were coded. The billing department was assigning an ICD code to the date of admission as opposed to the date when the actual physiological event occurred, and we didn’t pick up on this until six months into the project. It completely changed the way the algorithm worked because the date was wrong, and we’re trying to predict something in time. The dates that we had were wrong relative to other biological events that were going on in that patient. 

Moffitt has terrific organization of their data. They’ve done one of the best jobs I’ve seen in the health care organizations I’ve worked with. But it didn’t mean the data was clean, it meant that it was accessible. When I wanted to train a model to understand the language of pathology reports, I asked for all of Moffitt’s digital pathology reports and in seven days, I had almost 300,000 pathology reports. 

Unbelievable. 

Yeah, it was amazing. I was just shocked. That’s the kind of cultural change that needs to be in place.  

 

Learn more about Ross Mitchell on LinkedIn. 

Categories
Artificial Intelligence Digital Identity

Securing Digital Identities with Help of AI: New Path After Pandemic of 2020  

By John P. Desmond, AI Trends Editor  

Organizations are looking to secure their data and identities with the help of AI, as business models move online and cybercrime increases.   

The new era of identity authentication incorporates AI and biometrics in more sophisticated systems that make things more difficult for cybercriminals. Biometric data such as fingerprints helps prevent identity theft, since criminals would not be able to gain access to information by providing credentials alone.    

Deepak Gupta, Cofounder and CTO, LoginRadius

“In order to protect data, digital identities need to meet a stricter set of security regulations,” stated Deepak Gupta, Cofounder and CTO of LoginRadius, a cloud-based consumer identity platform, writing recently in Forbes. “Stolen identities can allow criminals to impersonate someone else and access secured resources,” he stated. 

On a network, AI and machine learning make it easier to implement algorithms that can identify fraudulent activity; they can spot the difference between normal and abnormal trends, and so can flag actions that may stem from fraudulent activity.  
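A hedged sketch of that normal-versus-abnormal idea, using an off-the-shelf anomaly detector on made-up login features (hour of day, distance from the usual location, failed attempts), might look like this:

```python
# Sketch of anomaly detection for login activity: fit a detector on typical
# behavior, then score new events. Features and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: login hour, distance from usual location (km), failed attempts
normal_logins = np.column_stack([
    rng.normal(13, 3, 2000),       # daytime logins
    rng.exponential(5, 2000),      # usually close to home
    rng.poisson(0.2, 2000),        # rarely any failed attempts
])

detector = IsolationForest(random_state=0).fit(normal_logins)

new_events = np.array([
    [14, 2, 0],        # ordinary login
    [3, 4200, 6],      # 3 a.m., far away, many failed attempts
])
print(detector.predict(new_events))   # 1 = looks normal, -1 = flagged as anomalous
```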

While authentication with AI is still in an early stage, businesses can begin to implement it by adding intelligent adaptive authentication, using location and device fingerprints to identify the consumer, Gupta suggested. Also, smart data filters can identify fraudulent IPs, domains and user data. “AI has the power to save the world from digital identity fraud. In the fight against ID theft, it is already a strong weapon,” stated Gupta. 

Pandemic Accelerated Online Interactions, Pressuring Cybersecurity 

Deborah Golden, US Cyber and Strategic Risk leader, Deloitte, Washington, DC

With the Covid-19 pandemic increasing reliance on digital interactions, more focus was put on cybersecurity. “The pandemic accelerated the critical role digital identity plays in providing the foundation for building trust among a broad spectrum of stakeholders, whether they be consumers, employees, partners or vendors, governments, and civilians,” stated Deborah Golden, US Cyber and Strategic Risk leader at Deloitte, based in Washington, DC, in a recent interview in Security Magazine. “It’s how we can protect ourselves from data breaches, identity theft, data misuse, and other forms of fraud and cybercrime,” she stated. 

Organizations have an opportunity to leverage digital identities to build trust, by applying AI, machine learning and data analytics to detect unauthorized access to enterprise systems, and fraudulent consumer activity, she suggested. “By using AI and analytics to facilitate trusted interactions, organizations can distinguish themselves for their vigilance in protecting data and proactively mitigating risk,” she stated. 

In B2C markets, she recommended that organizations provide consumers with incentives to enroll their devices, such as messages emphasizing that enrolling will make interactions faster and more secure while keeping data private, and/or points in a loyalty program.  

“Digital identity allows organizations to track and monitor how customer data, preferences, and consent are cataloged, shared, and used internally,” Golden stated.  

Organizations can use the identity data to make it easier to align user access with different data classifications, usage policies and permissions. Anecdotal evidence shows that companies with digital identity programs get better results in security audits than companies that do not, she suggested. 

Secure digital identity techniques include passwordless authentication methods, such as proximity-based authentication, and digital identity behavior analysis. These can detect and prevent, for example, bot attacks, session hijacking and attempts to use stolen credentials. 

Telehealth Option Accelerated in 2020, Creating More Exposure 

Telehealth, the delivery of medical services by a professional online, grew dramatically in 2020 due to the pandemic. That sent more confidential consumer medical information over the air and into providers’ electronic health record systems, each with its own secure access system. These secure access systems posed challenges for many patients and increased the volume of work for IT professionals at the medical institutions.  

“On our end, we saw a huge number of patients who struggled” to use the authentication portals to the telehealth systems, stated Adam Silverman, chief medical officer for Syllable, a healthcare AI services company, in a recent interview in PYMNTS.com. The difficulties created obstacles to care.  

“Health systems now were not only being inundated with people calling to schedule a video consultation, but… they were being flooded on their IT help desks by the same patients who could not log in to the portal in order to obtain care, and so it sort of reinforced this constant struggle between privacy on one hand and access or interoperability on the other,” Silverman stated.  

Health care consumers are concerned about whether they can trust the system where they submit their confidential credentials, and what companies have access to their personal medical data, suggested Vig Chandramouli, principal of Oak HC/FT, a venture capital firm focused on healthcare information services. “Every day in the news, you now read about some ransomware attack or cybercrime or see how some pipeline is shut down or some hospital was held hostage, and these are all very possible and increasingly more likely things going forward,” said Chandramouli in the PYMNTS.com account.  

He also recommended a move away from username and password authentication, which has become “more frustrating and less secure,” in favor of more advanced technologies, such as biometric identification systems incorporating fingerprints, iris scans, and facial images. “That would certainly improve their experience and would probably be less expensive, because people would stop calling the IT help desk, calls that take 20 to 30 minutes to try to complete and help somebody change the password,” stated Chandramouli. Oak HC/FT is an investor in Syllable.  

Retail and digital payments industries could have lessons for health care providers in how they identify customers online, which might require some retooling of operational systems on the back end. “That is where AI really can be very beneficial in monitoring the networks and identifying signals or patterns that are abnormal for a given persona,” stated Silverman.  

Expert Expects More Healthcare Fraud From 2020 Will be Uncovered  

Many observers expect that incidents of healthcare fraud that happened in 2020 but are not yet known will come to light. Jacques Smith, a partner at law firm Arent Fox and a member of the firm’s Business Loan Task Force, suggested in a recent account in Healthcare Finance that fraud was likely more widespread than usual in 2020, but that it was not a priority for the previous presidential administration.  

The chaos caused by COVID-19, said Smith, encouraged fraudulent behavior by some opportunists that seems to have run the gamut from PPE fraud to Medicare reimbursement shenanigans.  

“During the prior administration, there was a rumor that if you only applied for a million (dollars) or two, the government wasn’t going to have any oversight over that money,” Smith stated, adding that this is not the case. “We’ve seen investigations already for small amounts under $2 million. Quite a number of criminal cases are expected,” he stated.  

Senator Chuck Grassley, R-Iowa, has shown interest in amending and modifying the False Claims Act so it can have a greater enforcement arm, as well as stronger whistleblower protection over the next four years. Greater incentives for whistleblowers could open the floodgates. 

“It’s a misnomer that last year was a down year,” stated Smith. “There were more cases last year. It was a record year in filings, just not recoveries.” That implies that prosecutors will have a busy year wading through the backlog of cases. 

Read the source articles and information in Forbes, in Security Magazine, from PYMNTS.com and in Healthcare Finance. 

Categories
Artificial Intelligence Smart Cities

Experiences in Smart City Challenges Such as in Columbus, Ohio, Sobering  

By AI Trends Staff  

Four years ago, the first international AI City Challenge was conducted to spur the development of AI to support transportation infrastructure in a smarter way. Teams representing American companies or universities took the top spots.  

Last year, Chinese companies took the top spots in three out of four competitions, and in June, Chinese tech companies Alibaba and Baidu swept the AI City Challenge, beating competitors from some 40 nations.  

The results were a payoff of investments in smart cities by the government in China, which is conducting pilot programs in hundreds of cities and has by some estimates half of the world’s smart cities, according to a recent account in Wired.  

China is investing more than the US in areas of emerging technology, according to Stan Caldwell, executive director of Mobility21, a project at Carnegie Mellon University to assist smart-city development in Pittsburgh. AI researchers in the US can compete for government grants from the National Science Foundation’s Civic Innovation Challenge or the Department of Transportation’s Smart City Challenge.   

“We want the technologies to develop, because we want to improve safety and efficiency and sustainability. But selfishly, we also want this technology to develop here and improve our economy,” Caldwell stated in the Wired account.  

Final Report on Smart Columbus Cites Some Promising Endeavors 

The first Smart City Challenge sponsored by the US Department of Transportation selected Columbus, Ohio, to receive $50 million to be spent over five years to reshape the city’s transportation options by tapping into new technology. A final report recently issued by the city’s Smart Columbus Program described the effort as promising and falling a bit short.  

Part of it was bad luck. Several programs were set to get off the ground just as the pandemic led to lockdowns in 2020, reducing demand for transportation options. “It was not supposed to be a competition for who has more sensors, or anything like that, and I think we got a little distracted at a certain point,” stated Jordan Davis, director of Smart Columbus, to Wired.   

His organization is charged with continuing the work of the challenge. He said the focus will be, “How do we use technology to improve quality of life, to solve community issues of equity, to mitigate climate change and to achieve prosperity in the region?”  

The selection of Columbus led to a flood of proposals from companies that proved difficult to manage. “A lot of people were expecting a lot from this project, and perhaps too much,” stated Harvey Miller, a geography professor and director of the Center for Urban and Regional Analysis at Ohio State University, who helped plan and evaluate the challenge. “What Columbus did was test revolutionary ideas,” Miller stated. “They learned a lot about what works and what doesn’t work.”   

Five of the eight projects launched by the challenge will continue, including a citywide “operating system” to share data between government and private entities, for the support of smart kiosks and parking and trip-planning apps.   

Pivot App from Etch Helps Sort Multimodal Transportation Options   

Etch, a geospatial solutions startup, was founded in 2018 in Columbus. The company got its start by working with Smart Columbus on a multimodal transportation app, called Pivot, to help users plan trips throughout central Ohio using buses, ride hailing, carpool, micro mobility or personal vehicles.  

Darlene Magold, CEO and co-founder, Etch

“The mobility problem in Columbus is access to mobility and people not understanding or knowing what options are available to them,” stated Darlene Magold, CEO and co-founder of Etch, in an account in TechCrunch. “Part of our mission was to show the community what was available and give them options to sort those options based on cost or other information.” 

The app uses two open-source tools: OpenStreetMap, to get up-to-date crowdsourced information from the community, similar to how the driving app Waze works; and OpenTripPlanner, to find itineraries for different modes of transportation.  

“Because we are open source, the integration with Uber, Lyft and other mobility providers really gives users a lot of options, so they can actually see what mobility options are available, other than their own vehicle if they have one,” stated Magold. “It takes away that anxiety of traveling” using different modes such as a scooter, bike, bus or Uber, she suggested. 

The Pivot app had 3,849 downloads in late June; the city will continue to fund its development and use. 

Connected Vehicle Traffic Management Test Aimed at Distracted Driving 

In an attempt to cut down on accidents attributed to distracted driving, Columbus experimented with connected vehicles. From October 2020 to March 2021, the city worked with Siemens, which provided onboard and roadside units to create a vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) infrastructure. The connected vehicles could “talk” to each other and to 85 intersections, seven of which had the highest crash rates in central Ohio. The investment with the German multinational conglomerate was $11.3 million.  

Mandy Bishop,  program manager, Smart Columbus

“We were looking at 11 different applications including red light signal warning, school zone notifications, intersection collision warning, freight signal priority and transit signal priority, using the connected vehicle technology,” stated Mandy Bishop, Smart Columbus program manager, to TechCrunch. 

The project was deployed to 1,100 vehicles in a region with about one million residents. Results were encouraging. “We did see drivers using signals coming from the connected vehicle environment. We’re seeing improvements in driver behavior,” Bishop stated.   

Autonomous Traffic Management Platform Being Tried in Oakland 

The city of Oakland, Calif., is testing an autonomous traffic management platform from startup Hayden AI, with vision-based perception devices that attempt to plug into the city’s transportation infrastructure. The goal of Hayden AI is to deliver reliable, sustainable and equitable public transportation, according to a recent account in Forbes. 

Hayden’s devices are mounted in multiple types of vehicles in the city’s fleet, including transit buses, street sweepers, and garbage trucks.  Each perception device has precise localization, enabling it to detect and map lane lines, traffic lights, street signs, fire hydrants, parking meters and trees. The data is used to create a “digital twin,” a rich, 3D virtual model of the city. 

“The network of spatially aware perception devices collaborate to build a real-time 3D map of the city. These devices learn over time and from each other to provide data and insights that can be shared across city agencies,” stated Vaibhav Ghadiok, co-founder and VP of Engineering with Hayden AI. “This can be used to make buses run on time by clearing bus lanes of parked vehicles, or help with city planning through better parking and curbside management.” 

Ghadiok applied his expertise in robotics, computer vision, and machine learning to develop the system, which has a range of uses. For one, the system can identify open parking meters, alerting drivers to available parking spaces nearby. Also, the platform can perform traffic pattern analysis to determine pedestrian traffic through intersections by time of day, enabling delivery trucks to schedule curb space deliveries more efficiently.   

Read the source articles and information in Wired, in TechCrunch and in Forbes. 

Categories
Artificial Intelligence Autonomy Edge Technologies

Edge Cases And The Long-Tail Grind Towards AI Autonomous Cars 

By Lance Eliot, the AI Trends Insider   

Imagine that you are driving your car and come upon a dog that has suddenly darted into the roadway. Most of us have had this happen. You hopefully were able to take evasive action. Assuming that all went well, the pooch was fine and nobody in your car got hurt either.

In a kind of Groundhog Day movie manner, let’s repeat the scenario, but we will make a small change. Are you ready?   

Imagine that you are driving your car and come upon a deer that has suddenly darted into the roadway. Fewer of us have had this happen, though nonetheless, it is a somewhat common occurrence for those that live in a region that has deer aplenty.   

Would you perform the same evasive actions when coming upon a deer as you would in the case of the dog that was in the middle of the road?   

Some might claim that a deer is more likely to be trying to get out of the street and be more inclined to sprint toward the side of the road. The dog might decide to stay in the street and run around in circles. It is hard to say whether there would be any pronounced difference in behavior.   

Let’s iterate this once again and make another change.   

Imagine that you are driving your car and come upon a chicken that has suddenly darted into the roadway. What do you do? 

For some drivers, a chicken is a whole different matter than a deer or a dog. If you were going fast while in the car and there wasn’t much latitude to readily avoid the chicken, it is conceivable that you would go ahead and ram the chicken. We generally accept the likelihood of having chicken as part of our meals, thus one less chicken is ostensibly okay, especially in comparison to the risk of possibly rolling your car or veering into a ditch upon sudden braking.   

Essentially, you might be more risk-prone if the animal was a deer or a dog and be willing to put yourself at greater risk to save the deer or the dog. But when the situation involves a chicken, you might decide that the personal risk versus the harming of the intruding creature is differently balanced. Of course, some would vehemently argue that the chicken, the deer, and the dog are all equal and drivers should not try to split hairs by saying that one animal is more precious than the other.   

We’ll move on.   

Let’s make another change. Without having said so, it was likely that you assumed that the weather for these scenarios of the animal crossing into the street was relatively neutral. Perhaps it was a sunny day and the road conditions were rather plain or uneventful.  

Adverse Weather Creates Another Variation on Edge Cases  

Change that assumption about the conditions and imagine that there have been gobs of rain, and you are in the midst of a heavy downpour. Your windshield wiper blades can barely keep up with the sheets of water, and you are straining mightily to see the road ahead. The roadway is completely soaked and extremely slick. 

Do your driving choices alter now that the weather is adverse?    

Whereas you might have earlier opted to radically steer around the animal, any such maneuver now, while in the rain, is a lot dicier. The tires might not stick to the roadway due to the coating of water. Your visibility is reduced, and you might not be able to properly judge where the animal is, or what else might be near the street. All in all, the bad weather makes this an even worse situation.

We can keep going. 

For example, pretend that it is nighttime rather than daytime. That certainly changes things. Imagine that the setting involves no other traffic for miles. After you’ve given that situation some careful thought, reimagine things and pretend that there is traffic all around you, cars and trucks in abundance, and heavy traffic on the opposing side of the roadway.   

How many such twists and turns can we concoct? 

We can continue to add or adjust the elements, doing so over and over. Each new instance becomes its own particular consideration. You would presumably need to mentally recalculate what to do as the driver. Some of the story adjustments might reduce your viable options, while others might widen the number of feasible options.   

The combinations and permutations can be dizzying.   

A newbie teenage driver is often taken aback by the variability of driving. They encounter one situation that they’ve not encountered before and go into a bit of a momentary panic mode. What to do? Much of the time, they muddle their way through and do so without any scrape or calamity. Hopefully, they learn what to do the next time that a similar setting presents itself and thus be less caught off-guard.   

Experienced drivers have seen more and therefore are able to react as needed. That vastness of knowledge about driving situations does have its limits. As an example, there was a news report about a plane that landed on a highway because of in-flight engine troubles. I ask you, how many of us have seen a plane land on the roadway in front of them? A rarity, for sure.   

These examples bring up a debate about the so-called edge or corner cases that can occur when driving a car. An edge or corner case is a reference to the instance of something that is considered rare or unusual. These are events that tend to happen once in a blue moon: outliers. 

A plane landing on the roadway amid car traffic would be a candidate for consideration as an edge or corner case. A dog or deer or chicken that wanders into the roadway would be less likely construed as an edge or corner case—it would be a more common, or core experience. The former instance is extraordinary, while the latter instance is somewhat commonplace.   

Another way to define an edge or corner case is as an instance beyond the core or crux of whatever our focus is.    

But here’s the rub. How do we decide what is on the edge or corner, rather than being classified as in the core? 

This can get very nebulous and be subject to acrimonious discourse. Instances that someone claims are edge or corner cases might be more appropriately tagged as part of the core. Meanwhile, instances tossed into the core could arguably be more rightfully placed into the edge or corner case category.    

One aspect that often escapes attention is that the set of core cases does not necessarily have to be larger than the number or size of the edge cases. We just assume that would be the logical arrangement. Yet it could be that we have a very small core and a tremendously large set of edge or corner cases.   

We can add more fuel to this fire by bringing up the concept of the long tail. People use the catchphrase “long tail” to refer to circumstances in which a preponderance of something constitutes a central core, and then, in a presumably ancillary sense, a lot of other elements tail off from it. You can mentally picture a large bunched area on a graph and then a narrow offshoot that goes on and on, becoming a veritable tail to the bunched-up portion. 

This notion is borrowed from the field of statistics. There is a somewhat more precise meaning in a purely statistical sense, but that’s not how most people make use of the phrase. The informal meaning is that you might have lots of less noticeable aspects that are in the tail of whatever else you are doing.   

A company might have a core product that is considered its blockbuster or main seller. Perhaps they sell only a relative few of those but do so at a hefty price each, and it gives them prominence in the marketplace. Turns out the company has a lot of other products too. Those aren’t as well-known. When you add up the total revenue from the sales of their products, it could be that all of those itty-bitty products bring in more dough than do the blockbuster products.   
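A toy calculation makes the arithmetic plain; the unit counts and prices below are invented purely to show how many small sellers can out-earn one blockbuster.

```python
# Toy illustration of long-tail arithmetic: one blockbuster vs. many small products.
# All numbers are invented for the example.
blockbuster_revenue = 200 * 50_000                        # 200 units at a hefty price
long_tail_revenue = sum(5_000 * 120 for _ in range(25))   # 25 niche products

print(blockbuster_revenue, long_tail_revenue, long_tail_revenue > blockbuster_revenue)
```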

Based on that description, I trust that you realize that the long tail can be quite important, even if it doesn’t get much overt attention. The long tail can be the basis for a company and be extremely important. If the firm only keeps its eye on the blockbuster, it could end up in ruin if they ignore or undervalue the long tail.   

That doesn’t always have to be the case. It could be that the long tail is holding the company back. Maybe they have a slew of smaller products that just aren’t worth keeping around. Those long-tail products might be losing money and draining away from the blockbuster end. 

Overall, the long tail ought to get its due and be given proper scrutiny. Combining the concept of the long tail with the concept of the edge or corner cases, we could suggest that the edge or corner cases are lumped into that long tail. 

Getting back to driving a car, the dog or even a deer that ran into the street proffers a driving incident or event that we probably would agree is somewhere in the core of driving.    

In terms of a chicken entering into the roadway, well, unless you live near a farm, this would seem a bit more extreme. On a daily drive in a typical city setting, you probably will not see many chickens charging into the street.   

So how will self-driving cars handle edge cases and the long tail of the core?  

Self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.   

Some pundits fervently state that we will never attain true self-driving cars because of the long tail problem. The argument is that there are zillions of edge or corner cases that will continually arise unexpectedly, and the AI driving system won’t be prepared to handle those instances. This in turn means that self-driving cars will be ill-prepared to adequately perform on our public roadways.   

Furthermore, those pundits assert that no matter how tenaciously those heads-down all-out AI developers keep trying to program the AI driving systems, they will always fall short of the mark. There will be yet another new edge or corner case to be had. It is like a game of whack-a-mole, wherein another mole will pop up.   

The thing is, this is not simply a game, it is a life-or-death matter since whatever a driver does at the wheel of a car can spell life or possibly death for the driver, and the passengers, and for drivers of nearby cars, and pedestrians, etc. 

Here’s an intriguing question that is worth pondering: Are AI-based true self-driving cars doomed to never be capable on our roadways due to the endless possibilities of edge or corner cases and the infamous long-tail conundrum?   

Before jumping into the details, I’d like to clarify what is meant when referring to true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). 

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.   

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And The Long Tail 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient? 

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. 

Let’s dive into the myriad aspects that come into play on this topic. 

First, it would seem nearly self-evident that the number of combinations and permutations of potential driving situations is going to be enormous. We can quibble whether this is an infinite number or a finite number, though in practical terms this is one of those counting dilemmas akin to the number of grains of sand on all the beaches throughout the entire globe. In brief, it is a very, very, very big number.   

If you were to try and program an AI driving system based on each possible instance, this indeed would be a laborious task. Even if you added a veritable herd of ace AI software developers, you can certainly expect this would take years upon years to undertake, likely many decades or perhaps centuries, and still be faced with the fact that there is one more unaccounted edge or corner case remaining.   

The pragmatic view is that there would always be that last one that evades being preestablished.   

Some are quick to offer that perhaps simulations would solve this quandary.   

Most of the automakers and self-driving tech firms are using computer-based simulations to try and ferret out driving situations and get their AI driving systems ready for whatever might arise. The belief by some is that if enough simulations are run, the totality of whatever will occur in the real world will have already been surfaced and dealt with before entering self-driving cars into the real world.   

The other side of that coin is the contention that simulations are based on what humans believe might occur. As such, the real world can be surprising in comparison to what humans might normally envision will occur. Those computer-based simulations will always then fall short and not end up covering all the possibilities, say those critics.   

Amid the heated debates about the use of simulations, do not get lost in the fray and somehow reach a conclusion that simulations are either the final silver bullet or fall into the trap that simulations won’t reach the highest bar and ergo they should be utterly disregarded.   

Make no mistake, simulations are essential and a crucial tool in the pursuit of AI-based true self-driving cars.   

There is a floating argument that there ought not to be any public roadway trials taking place of true self-driving cars until the proper completion of extensive and apparently exhaustive simulations. The counterargument is that this is impractical in that it would delay roadway testing on an indefinite basis, and that the delay means more lives lost due to everyday human driving.   

An allied topic entails the use of closed tracks that are purposely set up for the testing of self-driving cars. By being off the public roadways, a proving ground ensures that the public at large is not endangered by whatever waywardness might emerge during driverless testing. The same arguments surrounding the closed track or proving grounds approach are similar to the tradeoffs mentioned when discussing the use of simulations (again, see my remarks posted in my columns).   

This has taken us full circle and returned us back to the angst over an endless supply of edge or corner cases. It has also brought us squarely back to the dilemma of what constitutes an edge or corner case in the context of driving a car. The long tail for self-driving cars is frequently referred to in a hand waving manner. This ambiguity is spurred or sparked due to the lack of a definitive agreement about what is indeed in the long tail versus what is in the core.   

This squishiness has another undesirable effect.   

Whenever a self-driving car does something amiss, it is easy to excuse the matter by claiming that the act was merely in the long tail. This disarms anyone expressing concern about the misdeed. Here’s how that goes. The contention is that any such concern or finger-pointing is misplaced since the edge case is only an edge case, implying a low-priority and less weighty aspect, and not significant in comparison to whatever the core contains.   

There is also the haughtiness factor.   

Those that blankly refer to the long-tail of self-driving cars can have the appearance of one-upmanship, holding court over those that do not know what the long-tail is or what it contains. With the right kind of indignation and tonal inflection, the haughty speaker can make others feel incomplete or ignorant when they “naively” try to refute the legendary (and notorious) long-tail.   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion   

There are a lot more twists and turns on this topic. Due to space constraints, I’ll offer just a few more snippets to further whet your appetite. 

One perspective is that it makes little sense to try and enumerate all the possible edge cases. Presumably, human drivers do not know all the possibilities and despite this lack of awareness are able to drive a car and do so safely the preponderance of the time. You could argue that humans lump together edge cases into more macroscopic collectives and treat the edge cases as particular instances of those larger conceptualizations.   

You sit at the steering wheel with those macroscopic mental templates and invoke them when a specific instance arises, even if the specifics are somewhat surprising or unexpected. If you’ve dealt with a dog that was loose in the street, you likely have formed a template for when nearly any kind of animal is loose in the street, including deer, chickens, turtles, and so on. You don’t need to prepare beforehand for every animal on the planet.   

The developers of AI driving systems can presumably try to leverage a similar approach.   
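One hedged way to picture that leverage is a lookup from many specific detected classes to a handful of macroscopic response templates; everything in the sketch below, from the class names to the response text, is hypothetical.

```python
# Hypothetical sketch of collapsing many specific edge cases into a few
# macroscopic driving templates, as described above.
RESPONSE_TEMPLATES = {
    "animal_in_roadway": "slow down, prepare to brake, steer around only if clear",
    "object_on_roadway": "slow down, change lanes if safe",
}

# Many detected classes map to the same template.
CLASS_TO_TEMPLATE = {c: "animal_in_roadway"
                     for c in ["dog", "deer", "chicken", "turtle", "moose"]}

def plan_for(detected_class: str) -> str:
    # Unrecognized classes fall back to a generic template rather than failing.
    template = CLASS_TO_TEMPLATE.get(detected_class, "object_on_roadway")
    return RESPONSE_TEMPLATES[template]

print(plan_for("deer"))
print(plan_for("shopping cart"))   # falls back to the generic template
```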

Some also believe that the emerging ontologies for self-driving cars will aid in this endeavor. You see, for Level 4 self-driving cars, the developers are supposed to indicate the Operational Design Domain (ODD) in which the AI driving system is capable of driving the vehicle. Perhaps the ontologies being crafted toward a more definitive semblance of ODDs would give rise to the types of driving action templates needed.   

The other kicker is the matter of common-sense reasoning.  

One viewpoint is that humans fill in the gaps of what they might know by exploiting their capability of performing common-sense reasoning. This acts as the always-ready contender for coping with unexpected circumstances. Today’s AI efforts have not yet been able to crack open how common-sense reasoning seems to occur, and thus we cannot, for now, rely upon this presumed essential backstop (for my coverage about AI and common-sense reasoning, see my columns). 

Doomsayers would indicate that self-driving cars are not going to successfully be readied for public roadway use until all edge or corner cases have been conquered. In that vein, that future nirvana can be construed as the day and moment when we have completely emptied out and covered all the bases that furtively reside in the imperious long-tail of autonomous driving. 

That’s a tall order and a tale that might be telling, or it could be a tail that is wagging the dog and we can find other ways to cope with those vexing edges and corners. 

 Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 

Categories
Artificial Intelligence Autonomy

Researchers Working to Improve Autonomous Vehicle Driving Vision in the Rain 

By John P. Desmond, AI Trends Editor 

To help autonomous cars navigate safely in the rain and other inclement weather, researchers are looking into a new type of radar.  

Self-driving vehicles can have trouble “seeing” in the rain or fog, with the car’s sensors potentially blocked by snow, ice or torrential downpours, and their ability to “read” road signs and road markings impaired. 

Many autonomous vehicles rely on lidar technology, which works by bouncing laser beams off surrounding objects to give a high-resolution 3D picture on a clear day, but which does not do so well in fog, dust, rain or snow, according to a recent report from abc10 of Sacramento, Calif. 

“A lot of automatic vehicles these days are using lidar, and these are basically lasers that shoot out and keep rotating to create points for a particular object,” stated Kshitiz Bansal, a computer science and engineering Ph.D. student at University of California San Diego, in an interview. 

The university’s autonomous driving research team is working on a new way to improve the imaging capability of existing radar sensors, so they more accurately predict the shape and size of objects in an autonomous car’s view.  

Dinesh Bharadia, professor of electrical and computer engineering, UC San Diego Jacobs School of Engineering

“It’s a lidar-like radar,” stated Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering, adding that it is an inexpensive approach. “Fusing lidar and radar can also be done with our techniques, but radars are cheap. This way, we don’t need to use expensive lidars.” 

The team places two radar sensors on the hood of the car, enabling the system to see more space and detail than a single radar sensor. The team conducted tests to compare their system’s performance on clear days and nights, and then with foggy weather simulation, to a lidar-based system. The result was the radar plus lidar system performed better than the lidar-alone system.  

“So, for example, a car that has lidar, if it’s going in an environment where there is a lot of fog, it won’t be able to see anything through that fog,” Bansal stated. “Our radar can pass through these bad weather conditions and can even see through fog or snow,” he stated.  

The team uses millimeter radar, a version of radar that uses short-wavelength electromagnetic waves to detect the range, velocity and angle of objects.   
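For readers curious about the underlying physics, the sketch below encodes the textbook FMCW radar relations that turn a beat frequency into range and a Doppler shift into radial velocity; the parameter values are illustrative and are not tied to the UC San Diego system.

```python
# Sketch of the basic FMCW radar relations a millimeter-wave sensor relies on.
# Parameter values are illustrative, not those of any specific product.
C = 3.0e8                 # speed of light, m/s

def fmcw_range(beat_freq_hz, chirp_time_s, bandwidth_hz):
    # Range from the beat frequency of a linear chirp: R = c * f_b * T / (2 * B)
    return C * beat_freq_hz * chirp_time_s / (2.0 * bandwidth_hz)

def doppler_velocity(doppler_freq_hz, carrier_freq_hz):
    # Radial velocity from the Doppler shift: v = lambda * f_d / 2
    wavelength = C / carrier_freq_hz
    return wavelength * doppler_freq_hz / 2.0

# Example: a 77 GHz automotive radar with a 4 GHz, 40-microsecond chirp
print(fmcw_range(beat_freq_hz=2.0e6, chirp_time_s=40e-6, bandwidth_hz=4.0e9))   # ~3 m
print(doppler_velocity(doppler_freq_hz=5.0e3, carrier_freq_hz=77e9))            # ~9.7 m/s
```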

20 Partners Working on AI-SEE in Europe to Apply AI to Vehicle Vision 

Enhanced autonomous vehicle vision is also the goal of a project in Europe—called AI-SEE—involving startup Algolux, which is cooperating with 20 partners over a period of three years to work towards Level 4 autonomy for mass-market vehicles. Founded in 2014, Algolux is headquartered in Montreal and has raised $31.8 million to date, according to Crunchbase.  

The intent is to build a novel robust sensor system supported by artificial intelligence enhanced vehicle vision for low visibility conditions, to enable safe travel in every relevant weather and lighting condition such as snow, heavy rain or fog, according to a recent account from AutoMobilSport.    

The Algolux technology employs a multisensory data fusion approach, in which the sensor data acquired will be fused and simulated by means of sophisticated AI algorithms tailored to adverse weather perception needs. Algolux plans to provide technology and domain expertise in the areas of deep learning AI algorithms, fusion of data from distinct sensor types, long-range stereo sensing, and radar signal processing.  
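
As an illustration of sensor fusion in adverse weather, and only that, a simple late-fusion scheme might weight each modality’s detection confidence by how reliable that sensor tends to be in fog. The sensors, scores, and weights below are hypothetical and do not represent Algolux’s AI-SEE pipeline, which the article describes only at a high level.

```python
# Toy late fusion: combine per-sensor confidences for one candidate object,
# weighting each modality by an assumed reliability in fog.
def fuse_confidences(scores, weights):
    """Weighted average of per-sensor confidences for a single candidate object."""
    total_w = sum(weights[s] for s in scores)
    return sum(scores[s] * weights[s] for s in scores) / total_w

scores_in_fog = {"camera": 0.35, "lidar": 0.40, "radar": 0.85}   # hypothetical
fog_weights   = {"camera": 0.2,  "lidar": 0.3,  "radar": 0.9}    # hypothetical

print(round(fuse_confidences(scores_in_fog, fog_weights), 3))  # radar dominates in fog
```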

Dr. Werner Ritter, consortium lead, Mercedes Benz AG

“Algolux is one of the few companies in the world that is well versed in the end-to-end deep neural networks that are needed to decouple the underlying hardware from our application,” stated Dr. Werner Ritter, consortium lead, from Mercedes Benz AG. “This, along with the company’s in-depth knowledge of applying their networks for robust perception in bad weather, directly supports our application domain in AI-SEE.”  

The project will be co-funded by the National Research Council of Canada Industrial Research Assistance Program (NRC IRAP), the Austrian Research Promotion Agency (FFG), Business Finland, and the German Federal Ministry of Education and Research BMBF under the PENTA EURIPIDES label endorsed by EUREKA. 

Nvidia Researching Stationary Objects in its Driving Lab  

The ability of the autonomous car to detect what is in motion around it is crucial, no matter the weather conditions, and the ability of the car to know which items around it are stationary is also important, suggests a recent blog post in the Drive Lab series from Nvidia, an engineering look at individual autonomous vehicle challenges. Nvidia is a chipmaker best known for its graphics processing units, which are widely used for development and deployment of applications employing AI techniques.   

The Nvidia lab is working on using AI to address the shortcomings of radar signal processing in distinguishing moving and stationary objects, with the aim of improving autonomous vehicle perception.   

Neda Cvijetic, autonomous vehicles and computer vision research, Nvidia

“We trained a DNN [deep neural network] to detect moving and stationary objects, as well as accurately distinguish between different types of stationary obstacles, using data from radar sensors,” stated Neda Cvijetic, who works on autonomous vehicles and computer vision for Nvidia and authored the blog post. She has held the position for about four years and previously worked as a systems architect for Tesla’s Autopilot software.   

Ordinary radar processing bounces radar signals off of objects in the environment and analyzes the strength and density of reflections that come back. If a sufficiently strong and dense cluster of reflections comes back, classical radar processing can determine this is likely some kind of large object. If that cluster also happens to be moving over time, then that object is probably a car, the post outlines. 

While this approach can work well for inferring a moving vehicle, the same may not be true for a stationary one. In this case, the object produces a dense cluster of reflections that are not moving. Classical radar processing could interpret the object as a railing, a broken-down car, a highway overpass, or some other large object. “The approach often has no way of distinguishing which,” the author states. 
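
The classical heuristic described above can be made concrete with a small sketch: a cluster of reflections is called a likely vehicle only when it is strong, dense, and moving, while a strong static cluster stays ambiguous. The thresholds and data structure below are assumptions for illustration, not Nvidia’s actual processing chain.

```python
# Rule-based classification of a radar reflection cluster, as described in the text.
from dataclasses import dataclass

@dataclass
class ReflectionCluster:
    mean_intensity: float    # average return strength
    num_points: int          # density of the cluster
    radial_speed_mps: float  # Doppler-derived speed of the cluster

def classify(cluster, intensity_thresh=0.5, density_thresh=8, speed_thresh=0.5):
    if cluster.mean_intensity < intensity_thresh or cluster.num_points < density_thresh:
        return "clutter / weak return"
    if abs(cluster.radial_speed_mps) > speed_thresh:
        return "likely moving vehicle"
    # The failure mode described above: a strong, dense, static cluster could be
    # a railing, a stalled car, or an overpass, and this rule cannot tell which.
    return "large stationary object (ambiguous)"

print(classify(ReflectionCluster(0.9, 20, 12.0)))  # likely moving vehicle
print(classify(ReflectionCluster(0.9, 20, 0.0)))   # large stationary object (ambiguous)
```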

A deep neural network is an artificial neural network with multiple layers between the input and output layers, according to Wikipedia. The Nvidia team trained their DNN to detect moving and stationary objects, as well as to distinguish between different types of stationary objects, using data from radar sensors.  

Training the DNN first required overcoming radar data sparsity problems. Since radar reflections can be quite sparse, it’s practically infeasible for humans to visually identify and label vehicles from radar data alone. However, lidar data, which can create a 3D image of surrounding objects using laser pulses, can supplement the radar data. “In this way, the ability of a human labeler to visually identify and label cars from lidar data is effectively transferred into the radar domain,” the author states. 
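
The label-transfer idea can be sketched roughly as follows: bounding boxes drawn by human labelers on the dense lidar data are reused to tag the sparse radar points that fall inside them, giving the radar DNN supervision without anyone labeling radar alone. The 2D axis-aligned box format and class names below are simplifying assumptions, not Nvidia’s actual tooling.

```python
# Assign each sparse radar point the class of the lidar-derived box that contains it.
def label_radar_points(radar_points_xy, lidar_boxes):
    """Return a class label per radar point based on lidar-derived bounding boxes."""
    labels = []
    for x, y in radar_points_xy:
        label = "background"
        for box in lidar_boxes:
            if box["x_min"] <= x <= box["x_max"] and box["y_min"] <= y <= box["y_max"]:
                label = box["cls"]   # e.g. "moving_car", "stalled_car", "overpass"
                break
        labels.append(label)
    return labels

boxes = [{"x_min": 9.5, "x_max": 12.0, "y_min": -1.5, "y_max": 1.5, "cls": "stalled_car"}]
print(label_radar_points([(10.1, 0.2), (30.0, 4.0)], boxes))  # ['stalled_car', 'background']
```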

The approach leads to improved results. “With this additional information, the radar DNN is able to distinguish between different types of obstacles—even if they’re stationary—increase confidence of true positive detections, and reduce false positive detections,” the author stated. 

Many stakeholders involved in fielding safe autonomous vehicles find themselves working on similar problems from their individual vantage points. Some of those efforts are likely to result in relevant software being released as open source, in the shared interest of continuously improving autonomous driving systems. 

Read the source articles and information from abc10 of Sacramento, Calif., from AutoMobilSport and in a blog post in the Drive Lab series from Nvidia. 

Categories
Artificial Intelligence Robotics & RPA

Talking Robot Boxes at Norwegian Hospital a Hit with Sick Kids 

By John P. Desmond, AI Trends Editor  

The “Automated Guided Vehicles” at St. Olav’s Hospital in Trondheim, Norway, have personalities. The boxes talk. 

These motorized units, essentially boxes on wheels, are assigned to transport garbage, medical equipment or food from one part of the hospital to another. But because they have to interact with humans, such as by warning them to get out of the way, they have to talk.  

But instead of using a generic Norwegian voice, the hospital robot developers decided to give them a voice that uses the strong, distinctive local dialect, according to an account in the International Journal of Human-Computer Studies. 

In so doing, the developers gave the stainless-steel boxes rolling around the hospital to transport goods a personality. They also made the robots kind of pushy, even a little rude. 

Children with illnesses who were being treated in the wards began to play games with them, trying to find and identify them. One parent with a gravely ill child found solace in the robots’ endless, somewhat mindless battles as they unsuccessfully ordered inanimate objects—like walls—to get out of the way. 

In a new study, researchers from the Norwegian University of Science and Technology (NTNU) examined how the robots came to be seen as friendly, animal-like creatures and why that matters. 

Roger A. Søraa, researcher, Norwegian University of Science and Technology (NTNU)

“We found that these robots, which were not created to be social robots, were actually given social qualities by the humans relating to them,” stated Roger A. Søraa, a researcher at NTNU’s Department of Interdisciplinary Studies of Culture and Department of Neuromedicine and Movement Science, and first author of the new study. “We tend to anthropomorphize technologies like robots—giving them humanlike personalities—so we can put them into a context that we’re more comfortable with.” 

St. Olav’s decided in 2006 to buy 21 automated guided vehicles (AGVs) from Swisslog Healthcare to do transport work, such as moving food from the cafeteria to different hospital units or clean linens to nursing stations. St. Olav’s was the first hospital in Scandinavia to adopt the technology.  

“These are types of jobs that can often be dull, dirty, or dangerous, or what we call 3-D jobs,” Søraa stated. “These are jobs that humans don’t necessarily want to do or like to do. And those are the jobs we are seeing becoming robotized or digitalized the fastest.” 

When the talking robot encounters the talking elevator, things get interesting. The AGV is programmed to take over the elevator, politely asking humans to “please use another elevator” when it is onboard. But the elevator voices are female and speak in a very polite, standard dialect. The contrast between the polite elevators and the more burly, sometimes rude AGVs with the local dialect, is funny, Søraa observed.  

“We found that the use of the local dialect really gave the robots more of a personality,” he stated. “And people often like to give non-living things human qualities to fit them within existing social frameworks.”  

LionsBot Cleaning Robots Tell Jokes 

A Singapore-based robotics company that makes cleaning robots for commercial, industrial, and public spaces has imbued them with a sense of humor.  

LionsBot uses AI within its LionsOS software, which it calls Active AI, to manage deployment, scheduling, monitoring and mapping.    

A video demonstration of the “janitorial joke droid” was recently posted to Facebook by the UK’s Maidstone and Tunbridge Wells NHS Trust, which plans on leasing two of the LionsBot cleaners for their pediatric ward in Kent, England, according to an account in The New York Post. In the clip, a cleaning bot named Ella is seen telling jokes to infirm children during a trial run.

“The Earth’s rotation really makes my day,” quips the machine. In another clip, the cleaning machine Ella says to a child, “How do trees access the internet? They log on.” For added effect, the “cybernetic comic squints her blue LED eyes and wobbles her head to insinuate mirth as a playful giggle emanates from her speaker,” the Post reported. 

Sarah Gray, the Trust’s assistant general manager for facilities, said Ella “put a smile on the faces of some of our youngest patients” with her routine.  

Ella’s delivery might sound robotic, but this “riotous roomba” had social media in stitches as well. “My 7 year old laughed his head off at that joke! Great work, Ella!” wrote one fan on Facebook, the Post reported. Others compared the joke-bot to Eve, the girlfriend of the titular character from “Wall-E.”  

LionsBot Founder Explains Its Smart Robot Features

AI Trends sent several questions to the LionsBot team. Here are responses from Dylan Ng Terntzer, CEO and Founder of LionsBot:

What is the story behind the sense of humour of the robot?

LionsBot seeks to develop smart robotics solutions that empower cleaning professionals and allow them to focus on higher-level tasks. Given that our robots are often deployed to public spaces where foot traffic is high – such as hospitals, shopping malls, universities, hotels, and museums – the team wanted to introduce the robots in a friendly, non-threatening manner to the occupants and visitors of those locations.

The inclusion of humour and personality has additional benefits, including enhancing the image of the institution where the robots are deployed, as well as offering a novel and fun way for children to learn about and be inspired by technology.

Who is programming it?

LionsBot’s technology is built and developed in-house at our headquarters in Singapore. In terms of the code, all aspects of the robots’ personalities are written within LionsOS – the cloud operating system that serves as the backbone of every one of our robots.

Passersby can interact with a LionsBot cleaning robot by scanning its QR code, which allows them to ask the device questions like “What is your name?” or “What type of cleaning do you perform?” LionsBot’s robots can also sing, rap, wink and even crack jokes at the tap of a button.

Young kids can press the heart of the robot to get a random reaction from it. The robot will also politely ask you to move if you block it during cleaning mode. The cleaner can also trigger the personality through the app.

As the cleaning robots operate in offices and places with a higher density of people, we do not have microphones in our robots, to prevent any unintended recording of private conversations. Hence, the robot will not understand what the user is saying or respond to speech. The robot instead responds to stimuli (e.g., pressing of the heart or blocking of the robot) or commands.
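
Put roughly in code, this stimulus-driven behavior amounts to a lookup from recognized events to canned reactions, with anything unrecognized (such as speech) ignored. The event names and reactions below are illustrative assumptions, not LionsOS code.

```python
# Hypothetical stimulus-to-reaction mapping for a microphone-less cleaning robot.
import random

REACTIONS = {
    "heart_pressed": ["sing", "rap", "wink", "tell_joke"],
    "path_blocked":  ["ask_politely_to_move"],
    "app_command":   ["tell_joke", "sing"],
}

def react(event):
    """Pick a reaction for a recognized stimulus; ignore anything else (e.g. speech)."""
    options = REACTIONS.get(event)
    return random.choice(options) if options else "no_response"

print(react("heart_pressed"))    # a random playful reaction
print(react("someone_talking"))  # 'no_response' -- the robot does not process speech
```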

Is it part of the OS?

Yes, it is. As mentioned, all aspects of the robots’ personalities are a part of LionsOS. With our cutting-edge technology in artificial intelligence, LionsBot seeks to develop a pleasant user experience all while placing a strong emphasis on total safety, total security, and total solutions from the ground up.

Our LionsOS does much more than govern personality; it also governs how the robot reacts to its surroundings, the robot’s understanding of its location (localisation), and the robot’s cleaning behaviour. It is a comprehensive system which is continuously improved, making the robots clean better and work better.

Does the customer have settings to tailor the humour to the site?

Yes, LionsBot offers a comprehensive set of paid customisation options for its users – including language as well as localisation for its engagement features. LionsBot’s robots are currently available in more than 20 countries and seven languages (e.g., German, French, Polish) around the world, with the company working intimately with customers to customise each robot to ensure that it fits in with the local community from the get-go.

For major languages, we let the user choose male or female voice packs, as well as fun or professional voices. For example, a 5-star hotel would need a professional voice, as compared to a shopping mall. The robot also has a repertoire of songs and jokes. 
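
The kind of per-site customisation described here could be represented as a small configuration per robot. The field names below are assumptions made for illustration, not LionsBot’s actual configuration schema.

```python
# Hypothetical per-site voice customisation for two deployments.
hotel_robot_config = {
    "language": "de",               # one of the supported languages, e.g. German
    "voice_gender": "female",
    "voice_style": "professional",  # a 5-star hotel, vs. "fun" for a shopping mall
    "jokes_enabled": False,
}

mall_robot_config = {
    "language": "en",
    "voice_gender": "male",
    "voice_style": "fun",
    "jokes_enabled": True,
}

def pick_voice_pack(cfg):
    """Build a voice-pack identifier from the site configuration."""
    return f"{cfg['language']}-{cfg['voice_gender']}-{cfg['voice_style']}"

print(pick_voice_pack(hotel_robot_config))  # de-female-professional
print(pick_voice_pack(mall_robot_config))   # en-male-fun
```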

Animatronic Columbia University Robot Mimics Human Facial Expressions 

A team at Columbia University is combining AI and advanced robotics to have a robot mimic human facial expressions. The device uses animatronics, defined as a multidisciplinary field combining puppetry, anatomy, and mechatronics, which are most often seen in theme park attractions.  

“The ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots,” state the researchers on their website, according to an account in Forbes. The goal was to develop a robot that would train itself in how to react to human facial expressions. The team designed a physical animatronic robotic face with soft skin, tied to a vision-based self-supervised learning framework for mimicking human facial expressions.   

The researchers recently released a video to demonstrate the capabilities of the robot they call Eva. The system uses a generative model to create a synthesized image of the target face, from which it derives the motor actions needed to produce the expression on the soft-skin robot face. The team is led by Boyuan Chen, a PhD student in computer science.  
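
Very roughly, the final step described above amounts to an inverse model that maps a representation of a target facial expression (for example, an embedding of the synthesized image) to motor commands for the soft-skin face. The toy linear model and dimensions below are only stand-ins for the Columbia team’s learned networks, which are trained through self-supervised practice.

```python
# Toy inverse model: target expression features -> clipped motor positions in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
N_EXPRESSION_FEATURES = 16   # assumed size of the target-expression embedding
N_MOTORS = 12                # assumed number of actuators pulling on the soft skin

# In the real system these weights would be learned as the robot watches its own
# face while moving its motors; here they are random placeholders.
inverse_model = rng.normal(scale=0.1, size=(N_MOTORS, N_EXPRESSION_FEATURES))

def motor_commands(target_expression_embedding):
    """Return motor positions in [0, 1] intended to reproduce the target expression."""
    raw = inverse_model @ target_expression_embedding
    return np.clip(0.5 + raw, 0.0, 1.0)

target = rng.normal(size=N_EXPRESSION_FEATURES)  # stand-in for a "smile" embedding
print(motor_commands(target))
```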

Dr. Sai Balasubramanian, physician, focused on healthcare, digital innovation and policy

“This technology is potentially a massive step into the healthcare artificial intelligence and robotics space,” stated the author of the Forbes account, Dr. Sai Balasubramanian, a physician focused on the intersections of healthcare, digital innovation and policy. “If this learning algorithm can be perfected to not only mimic human emotions, but rather also respond to them appropriately, it may potentially be a novel, yet controversial, addition to the realm of healthcare innovation,” he stated. 

For example, social robots have the potential to counter the detrimental effects of social isolation and loneliness, especially among the elderly, in what is called “robo therapy.” The American Psychological Association states, “Socially assistive robots could provide companionship to lonely seniors” and augment the practice of trained professionals.  

Read the source article and information in the International Journal of Human-Computer Studies, in The New York Post and in Forbes.