
Edge Cases And The Long-Tail Grind Towards AI Autonomous Cars 

By Lance Eliot, the AI Trends Insider   

Imagine that you are driving your car and come upon a dog that has suddenly darted into the roadway. Most of us have had this happen. You hopefully were able to take evasive action. Assuming that all went well, the pooch was fine and nobody in your car got hurt either.

In a kind of Groundhog Day movie manner, let’s repeat the scenario, but we will make a small change. Are you ready?   

Imagine that you are driving your car and come upon a deer that has suddenly darted into the roadway. Fewer of us have had this happen, though nonetheless, it is a somewhat common occurrence for those that live in a region that has deer aplenty.   

Would you perform the same evasive actions when coming upon a deer as you would in the case of the dog that was in the middle of the road?   

Some might claim that a deer is more likely to be trying to get out of the street and be more inclined to sprint toward the side of the road. The dog might decide to stay in the street and run around in circles. It is hard to say whether there would be any pronounced difference in behavior.   

Let’s iterate this once again and make another change.   

Imagine that you are driving your car and come upon a chicken that has suddenly darted into the roadway. What do you do? 

For some drivers, a chicken is a whole different matter than a deer or a dog. If you were going fast while in the car and there wasn’t much latitude to readily avoid the chicken, it is conceivable that you would go ahead and ram the chicken. We generally accept the likelihood of having chicken as part of our meals, thus one less chicken is ostensibly okay, especially in comparison to the risk of possibly rolling your car or veering into a ditch upon sudden braking.   

Essentially, you might be more risk-prone if the animal was a deer or a dog and be willing to put yourself at greater risk to save the deer or the dog. But when the situation involves a chicken, you might decide that the personal risk versus the harming of the intruding creature is differently balanced. Of course, some would vehemently argue that the chicken, the deer, and the dog are all equal and drivers should not try to split hairs by saying that one animal is more precious than the other.   

We’ll move on.   

Let’s make another change. Without having said so, it was likely that you assumed that the weather for these scenarios of the animal crossing into the street was relatively neutral. Perhaps it was a sunny day and the road conditions were rather plain or uneventful.  

Adverse Weather Creates Another Variation on Edge Cases  

Change that assumption about the conditions and imagine that there have been gobs of rain, and you are in the midst of a heavy downpour. Your windshield wiper blades can barely keep up with the sheets of water, and you are straining mightily to see the road ahead. The roadway is completely soaked and extremely slick. 

Do your driving choices alter now that the weather is adverse?    

Whereas you might have earlier opted to radically steer around the animal, any such maneuver now, while in the rain, is a lot dicier. The tires might not stick to the roadway due to the coating of water. Your visibility is reduced, and you might not be able to properly judge where the animal is, or what else might be near the street. All in all, the bad weather makes this an even worse situation.

We can keep going. 

For example, pretend that it is nighttime rather than being daytime. That certainly changes things. Imagine that the setting involves no other traffic for miles. After you’ve given that situation some careful thought, reimagine things and pretend that there is traffic all around you, cars and trucks abundantly, and also there is heavy traffic on the opposing side of the roadway.   

How many such twists and turns can we concoct? 

We can continue to add or adjust the elements, doing so over and over. Each new instance becomes its own particular consideration. You would presumably need to mentally recalculate what to do as the driver. Some of the story adjustments might reduce your viable options, while other adjustments might widen the number of feasible options.

The combinations and permutations can be dizzying.

A newbie teenage driver is often taken aback by the variability of driving. They encounter one situation that they’ve not encountered before and go into a bit of a momentary panic mode. What to do? Much of the time, they muddle their way through and do so without any scrape or calamity. Hopefully, they learn what to do the next time that a similar setting presents itself and are thus less caught off-guard.

Experienced drivers have seen more and therefore are able to react as needed. That vastness of knowledge about driving situations does have its limits. As an example, there was a news report about a plane that landed on a highway because of in-flight engine troubles. I ask you, how many of us have seen a plane land on the roadway in front of us? A rarity, for sure.

These examples bring up a debate about the so-called edge or corner cases that can occur when driving a car. An edge or corner case is a reference to the instance of something that is considered rare or unusual. These are events that tend to happen once in a blue moon: outliers. 

A plane landing on the roadway amid car traffic would be a candidate for consideration as an edge or corner case. A dog or deer or chicken that wanders into the roadway would be less likely construed as an edge or corner case—it would be a more common, or core experience. The former instance is extraordinary, while the latter instance is somewhat commonplace.   

Another way to define edge or corner cases is as the instances beyond the core or crux of whatever our focus is.

But here’s the rub. How do we decide what is on the edge or corner, rather than being classified as in the core? 

This can get very nebulous and be subject to acrimonious discourse. Those instances that someone claims are edge or corner cases might be more appropriately tagged as part of the core. Meanwhile, instances tossed into the core could potentially be argued as more rightfully belonging in the edge or corner case category.

One aspect that oftentimes escapes attention is that the set of core cases does not necessarily have to be larger than the set of edge cases. We just assume that would be the logical arrangement. Yet it could be that we have a very small core and a tremendously large set of edge or corner cases.

We can add more fuel to this fire by bringing up the concept of having a long tail. People use the catchphrase “long tail” to refer to circumstances in which a preponderance of something constitutes a central core, while a lot of other elements, presumed to be ancillary, tail off from it. You can mentally make a picture of a large bunched area on a graph and then have a narrow offshoot that goes on and on, becoming a veritable tail to the bunched-up portion.

This notion is borrowed from the field of statistics. There is a somewhat more precise meaning in a purely statistical sense, but that’s not how most people make use of the phrase. The informal meaning is that you might have lots of less noticeable aspects that are in the tail of whatever else you are doing.   

A company might have a core product that is considered its blockbuster or main seller. Perhaps they sell only a relative few of those but do so at a hefty price each, and it gives them prominence in the marketplace. Turns out the company has a lot of other products too. Those aren’t as well-known. When you add up the total revenue from the sales of their products, it could be that all of those itty-bitty products bring in more dough than do the blockbuster products.   

Based on that description, I trust that you realize that the long tail can be quite important, even if it doesn’t get much overt attention. The long tail can be the basis for a company and be extremely important. If the firm only keeps its eye on the blockbuster, it could end up in ruin if they ignore or undervalue the long tail.   

That doesn’t always have to be the case. It could be that the long tail is holding the company back. Maybe they have a slew of smaller products that just aren’t worth keeping around. Those long-tail products might be losing money and draining resources away from the blockbuster end.

Overall, the long tail ought to get its due and be given proper scrutiny. Combining the concept of the long tail with the concept of the edge or corner cases, we could suggest that the edge or corner cases are lumped into that long tail. 

Getting back to driving a car, the dog or even a deer that ran into the street proffers a driving incident or event that we probably would agree is somewhere in the core of driving.    

In terms of a chicken entering into the roadway, well, unless you live near a farm, this would seem a bit more extreme. On a daily drive in a typical city setting, you probably will not see many chickens charging into the street.   

So how will self-driving cars handle edge cases and the long tail of driving situations?

Self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Some pundits fervently state that we will never attain true self-driving cars because of the long tail problem. The argument is that there are zillions of edge or corner cases that will continually arise unexpectedly, and the AI driving system won’t be prepared to handle those instances. This in turn means that self-driving cars will be ill-prepared to adequately perform on our public roadways.   

Furthermore, those pundits assert that no matter how tenaciously those heads-down all-out AI developers keep trying to program the AI driving systems, they will always fall short of the mark. There will be yet another new edge or corner case to be had. It is like a game of whack-a-mole, wherein another mole will pop up.   

The thing is, this is not simply a game; it is a life-or-death matter, since whatever a driver does at the wheel of a car can spell life or possibly death for the driver, the passengers, the drivers of nearby cars, pedestrians, and so on.

Here’s an intriguing question that is worth pondering: Are AI-based true self-driving cars doomed to never be capable on our roadways due to the endless possibilities of edge or corner cases and the infamous long-tail conundrum?   

Before jumping into the details, I’d like to clarify what is meant when referring to true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, no one should be misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And The Long Tail 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient? 

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. 

Let’s dive into the myriad of aspects that come to play on this topic. 

First, it would seem nearly self-evident that the number of combinations and permutations of potential driving situations is going to be enormous. We can quibble whether this is an infinite number or a finite number, though in practical terms this is one of those counting dilemmas akin to the number of grains of sand on all the beaches throughout the entire globe. In brief, it is a very, very, very big number.   

If you were to try and program an AI driving system based on each possible instance, this indeed would be a laborious task. Even if you added a veritable herd of ace AI software developers, you can certainly expect this would take years upon years to undertake, likely many decades or perhaps centuries, and still be faced with the fact that there is one more unaccounted edge or corner case remaining.   

The pragmatic view is that there would always be that last one that evades being preestablished.   
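To make that scale a bit more concrete, here is a minimal sketch in Python; the scenario factors and their values are entirely made up for illustration and are not any real-world taxonomy of driving situations.

```python
from itertools import product

# Hypothetical scenario factors, purely to illustrate combinatorial growth.
scenario_factors = {
    "obstacle": ["dog", "deer", "chicken", "pedestrian", "debris", "landing plane"],
    "weather": ["clear", "heavy rain", "fog", "snow", "ice"],
    "lighting": ["daytime", "dusk", "nighttime"],
    "traffic": ["none", "light", "heavy", "heavy oncoming"],
    "road_surface": ["dry", "wet", "slick", "gravel"],
}

# Every combination of factor values is a distinct driving situation.
combinations = list(product(*scenario_factors.values()))
print(f"{len(combinations)} scenarios from just {len(scenario_factors)} factors")
# 6 x 5 x 3 x 4 x 4 = 1,440 scenarios, and each added factor multiplies the total again.
```

Even this toy enumeration balloons quickly, and it leaves out nearly everything that matters in real driving.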

Some are quick to offer that perhaps simulations would solve this quandary.   

Most of the automakers and self-driving tech firms are using computer-based simulations to try and ferret out driving situations and get their AI driving systems ready for whatever might arise. The belief by some is that if enough simulations are run, the totality of whatever will occur in the real world will have already been surfaced and dealt with before self-driving cars are deployed into the real world.

The other side of that coin is the contention that simulations are based on what humans believe might occur. As such, the real world can be surprising in comparison to what humans might normally envision will occur. Those computer-based simulations will always then fall short and not end up covering all the possibilities, say those critics.   

Amid the heated debates about the use of simulations, do not get lost in the fray: avoid concluding that simulations are the final silver bullet, and equally avoid the trap of assuming that because simulations won’t reach the highest bar, they should be utterly disregarded.

Make no mistake, simulations are essential and a crucial tool in the pursuit of AI-based true self-driving cars.   

There is a floating argument that there ought not to be any public roadway trials of true self-driving cars taking place until the proper completion of extensive and apparently exhaustive simulations. The counterargument is that this is impractical in that it would delay roadway testing on an indefinite basis, and that the delay means more lives lost due to everyday human driving.

An allied topic entails the use of closed tracks that are purposely set up for the testing of self-driving cars. By being off the public roadways, a proving ground ensures that the public at large is not endangered by whatever waywardness might emerge during driverless testing. The arguments surrounding the closed-track or proving-grounds approach involve tradeoffs similar to those mentioned when discussing the use of simulations (again, see my remarks posted in my columns).

This has taken us full circle and returned us to the angst over an endless supply of edge or corner cases. It has also brought us squarely back to the dilemma of what constitutes an edge or corner case in the context of driving a car. The long tail for self-driving cars is frequently referred to in a hand-waving manner. This ambiguity is spurred by the lack of a definitive agreement about what is indeed in the long tail versus what is in the core.

This squishiness has another undesirable effect.   

Whenever a self-driving car does something amiss, it is easy to excuse the matter by claiming that the act was merely in the long tail. This disarms anyone expressing concern about the misdeed. Here’s how that goes. The contention is that any such concern or finger-pointing is misplaced since the edge case is only an edge case, implying a low-priority and less weighty aspect, and not significant in comparison to whatever the core contains.   

There is also the haughtiness factor.   

Those that blankly refer to the long-tail of self-driving cars can have the appearance of one-upmanship, holding court over those that do not know what the long-tail is or what it contains. With the right kind of indignation and tonal inflection, the haughty speaker can make others feel incomplete or ignorant when they “naively” try to refute the legendary (and notorious) long-tail.   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion   

There are a lot more twists and turns on this topic. Due to space constraints, I’ll offer just a few more snippets to further whet your appetite. 

One perspective is that it makes little sense to try and enumerate all the possible edge cases. Presumably, human drivers do not know all the possibilities and despite this lack of awareness are able to drive a car and do so safely the preponderance of the time. You could argue that humans lump together edge cases into more macroscopic collectives and treat the edge cases as particular instances of those larger conceptualizations.   

You sit at the steering wheel with those macroscopic mental templates and invoke them when a specific instance arises, even if the specifics are somewhat surprising or unexpected. If you’ve dealt with a dog that was loose in the street, you likely have formed a template for when nearly any kind of animal is loose in the street, including deer, chickens, turtles, and so on. You don’t need to prepare beforehand for every animal on the planet.   

The developers of AI driving systems can presumably try to leverage a similar approach.   

Some also believe that the emerging ontologies for self-driving cars will aid in this endeavor. You see, for Level 4 self-driving cars, the developers are supposed to indicate the Operational Design Domain (ODD) in which the AI driving system is capable of driving the vehicle. Perhaps the ontologies being crafted toward a more definitive semblance of ODDs would give rise to the types of driving action templates needed.   
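As a hedged sketch of that lumping idea, here is how specific encounters might be mapped onto a small number of macroscopic driving templates; the category names and responses below are hypothetical and are not drawn from any actual ODD ontology.

```python
# Hypothetical mapping of specific entities to macroscopic driving templates.
# Real ODD ontologies are far richer; this only illustrates the lumping notion.
TEMPLATE_OF = {
    "dog": "animal_in_roadway",
    "deer": "animal_in_roadway",
    "chicken": "animal_in_roadway",
    "turtle": "animal_in_roadway",
    "plane": "large_unexpected_obstacle",
    "fallen_tree": "large_unexpected_obstacle",
}

RESPONSE_OF = {
    "animal_in_roadway": "slow down, track the creature's motion, evade only if safe",
    "large_unexpected_obstacle": "brake firmly, stop short, do not attempt to pass",
}

def handle(entity: str) -> str:
    # An unrecognized specific falls back to the most conservative template,
    # mirroring how a human driver leans on a general mental model.
    template = TEMPLATE_OF.get(entity, "large_unexpected_obstacle")
    return RESPONSE_OF[template]

print(handle("chicken"))  # handled by the same template as the dog or the deer
print(handle("moose"))    # never seen before, conservative fallback
```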

The other kicker is the matter of common-sense reasoning.  

One viewpoint is that humans fill in the gaps of what they might know by exploiting their capability of performing common-sense reasoning. This acts as the always-ready contender for coping with unexpected circumstances. Today’s AI efforts have not yet been able to crack open how common-sense reasoning seems to occur, and thus we cannot, for now, rely upon this presumed essential backstop (for my coverage about AI and common-sense reasoning, see my columns). 

Doomsayers would indicate that self-driving cars are not going to successfully be readied for public roadway use until all edge or corner cases have been conquered. In that vein, that future nirvana can be construed as the day and moment when we have completely emptied out and covered all the bases that furtively reside in the imperious long-tail of autonomous driving. 

That’s a tall order and a tale that might be telling, or it could be a tail that is wagging the dog and we can find other ways to cope with those vexing edges and corners. 

 Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 


Researchers Working to Improve Autonomous Vehicle Driving Vision in the Rain 

By John P. Desmond, AI Trends Editor 

To help autonomous cars navigate safely in the rain and other inclement weather, researchers are looking into a new type of radar.  

Self-driving vehicles can have trouble “seeing” in the rain or fog, with the car’s sensors potentially blocked by snow, ice or torrential downpours, and their ability to “read” road signs and road markings impaired. 

Many autonomous vehicles rely on lidar technology, which works by bouncing laser beams off surrounding objects to give a high-resolution 3D picture on a clear day, but does not do so well in fog, dust, rain or snow, according to a recent report from abc10 of Sacramento, Calif. 

“A lot of automatic vehicles these days are using lidar, and these are basically lasers that shoot out and keep rotating to create points for a particular object,” stated Kshitiz Bansal, a computer science and engineering Ph.D. student at University of California San Diego, in an interview. 

The university’s autonomous driving research team is working on a new way to improve the imaging capability of existing radar sensors, so they more accurately predict the shape and size of objects in an autonomous car’s view.  

Dinesh Bharadia, professor of electrical and computer engineering, UC San Diego Jacobs School of Engineering

“It’s a lidar-like radar,” stated Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering, adding that it is an inexpensive approach. “Fusing lidar and radar can also be done with our techniques, but radars are cheap. This way, we don’t need to use expensive lidars.” 

The team places two radar sensors on the hood of the car, enabling the system to see more space and detail than a single radar sensor. The team conducted tests to compare their system’s performance on clear days and nights, and then with foggy weather simulation, to a lidar-based system. The result was that the radar-plus-lidar system performed better than the lidar-alone system.  

“So, for example, a car that has lidar, if it’s going in an environment where there is a lot of fog, it won’t be able to see anything through that fog,” Bansal stated. “Our radar can pass through these bad weather conditions and can even see through fog or snow,” he stated.  

The team uses millimeter-wave radar, a version of radar that uses short-wavelength electromagnetic waves to detect the range, velocity and angle of objects.   

20 Partners Working on AI-SEE in Europe to Apply AI to Vehicle Vision 

Enhanced autonomous vehicle vision is also the goal of a project in Europe—called AI-SEE—involving startup Algolux, which is cooperating with 20 partners over a period of three years to work towards Level 4 autonomy for mass-market vehicles. Founded in 2014, Algolux is headquartered in Montreal and has raised $31.8 million to date, according to Crunchbase.  

The intent is to build a novel robust sensor system supported by artificial intelligence enhanced vehicle vision for low visibility conditions, to enable safe travel in every relevant weather and lighting condition such as snow, heavy rain or fog, according to a recent account from AutoMobilSport.    

The Algolux technology employs a multisensory data fusion approach, in which the sensor data acquired will be fused and simulated by means of sophisticated AI algorithms tailored to adverse weather perception needs. Algolux plans to provide technology and domain expertise in the areas of deep learning AI algorithms, fusion of data from distinct sensor types, long-range stereo sensing, and radar signal processing.  

Dr. Werner Ritter, Consortium Lead, Mercedes Benz AG

“Algolux is one of the few companies in the world that is well versed in the end-to-end deep neural networks that are needed to decouple the underlying hardware from our application,” stated Dr. Werner Ritter, consortium lead, from Mercedes Benz AG. “This, along with the company’s in-depth knowledge of applying their networks for robust perception in bad weather, directly supports our application domain in AI-SEE.”  

The project will be co-funded by the National Research Council of Canada Industrial Research Assistance Program (NRC IRAP), the Austrian Research Promotion Agency (FFG), Business Finland, and the German Federal Ministry of Education and Research BMBF under the PENTA EURIPIDES label endorsed by EUREKA. 

Nvidia Researching Stationary Objects in its Driving Lab  

The ability of the autonomous car to detect what is in motion around it is crucial, no matter the weather conditions, and the ability of the car to know which items around it are stationary is also important, suggests a recent blog post in the Drive Lab series from Nvidia, an engineering look at individual autonomous vehicle challenges. Nvidia is a chipmaker best known for its graphic processing units, widely used for development and deployment of applications employing AI techniques.   

The Nvidia lab is working on using AI to address the shortcomings of radar signal processing in distinguishing moving and stationary objects, with the aim of improving autonomous vehicle perception.   

Neda Cvijetic, autonomous vehicles and computer vision research, Nvidia

“We trained a DNN [deep neural network] to detect moving and stationary objects, as well as accurately distinguish between different types of stationary obstacles, using data from radar sensors,” stated Neda Cvijetic, who works on autonomous vehicles and computer vision for Nvidia and is the author of the blog post. In her position for about four years, she previously worked as a systems architect for Tesla’s Autopilot software.   

Ordinary radar processing bounces radar signals off of objects in the environment and analyzes the strength and density of reflections that come back. If a sufficiently strong and dense cluster of reflections comes back, classical radar processing can determine this is likely some kind of large object. If that cluster also happens to be moving over time, then that object is probably a car, the post outlines. 

While this approach can work well for inferring a moving vehicle, the same may not be true for a stationary one. In this case, the object produces a dense cluster of reflections that are not moving. Classical radar processing would interpret the object as a railing, a broken down car, a highway overpass or some other object. “The approach often has no way of distinguishing which,” the author states. 
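As a rough sketch of that classical pipeline, and emphatically not Nvidia's actual code, clustering strong reflections and then checking whether a cluster moves between frames might look like the following; the thresholds and the use of DBSCAN here are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def classify_radar_clusters(points_t0, points_t1, min_strength=0.5, move_thresh=0.5):
    """Toy classical radar processing: cluster strong reflections in one frame,
    then label each cluster 'moving' or 'stationary' by comparing its centroid
    against the nearest strong reflection in the next frame.
    points_t0, points_t1: arrays of shape (N, 3) holding (x, y, reflection_strength)."""
    strong0 = points_t0[points_t0[:, 2] >= min_strength]
    strong1 = points_t1[points_t1[:, 2] >= min_strength]  # assumed non-empty here

    labels = DBSCAN(eps=1.5, min_samples=3).fit(strong0[:, :2]).labels_
    results = []
    for cluster_id in set(labels) - {-1}:  # -1 marks DBSCAN noise points
        centroid = strong0[labels == cluster_id, :2].mean(axis=0)
        # Nearest reflection in the next frame stands in for proper tracking.
        nearest = strong1[np.argmin(np.linalg.norm(strong1[:, :2] - centroid, axis=1)), :2]
        moved = np.linalg.norm(nearest - centroid) > move_thresh
        results.append((cluster_id, "moving" if moved else "stationary"))
    return results
```

A cluster that comes back tagged as stationary is exactly the ambiguous case described above: this kind of pipeline cannot say whether it is a railing, a stalled car, or an overpass.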

A deep neural network is an artificial neural network with multiple layers between the input and output layers, according to Wikipedia. The Nvidia team trained their DNN to detect moving and stationary objects, as well as to distinguish between different types of stationary objects, using data from radar sensors.  


Training the DNN first required overcoming radar data sparsity problems. Since radar reflections can be quite sparse, it’s practically infeasible for humans to visually identify and label vehicles from radar data alone. However, lidar data, which can create a 3D image of surrounding objects using laser pulses, can supplement the radar data. “In this way, the ability of a human labeler to visually identify and label cars from lidar data is effectively transferred into the radar domain,” the author states. 
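A minimal sketch of that label-transfer idea is below; it assumes, for simplicity, that the human-labeled lidar boxes arrive as axis-aligned 2D rectangles already expressed in the radar's coordinate frame, which glosses over the real calibration and projection work.

```python
import numpy as np

def transfer_lidar_labels_to_radar(radar_points, lidar_boxes):
    """Tag each radar return with the class of the lidar-labeled box it falls inside.
    radar_points: (N, 2) array of x, y positions of radar returns.
    lidar_boxes: list of (x_min, y_min, x_max, y_max, class_name) tuples produced
                 by human labelers working on the denser, easier-to-read lidar view."""
    labels = ["background"] * len(radar_points)
    for x_min, y_min, x_max, y_max, cls in lidar_boxes:
        inside = (
            (radar_points[:, 0] >= x_min) & (radar_points[:, 0] <= x_max)
            & (radar_points[:, 1] >= y_min) & (radar_points[:, 1] <= y_max)
        )
        for i in np.flatnonzero(inside):
            labels[i] = cls  # sparse radar returns now carry lidar-derived supervision
    return labels
```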

The approach leads to improved results. “With this additional information, the radar DNN is able to distinguish between different types of obstacles—even if they’re stationary—increase confidence of true positive detections, and reduce false positive detections,” the author stated. 

Many stakeholders involved in fielding safe autonomous vehicles find themselves working on similar problems from their individual vantage points. Some of those efforts are likely to result in relevant software being available as open source, in an effort to continuously improve autonomous driving systems, a shared interest. 

Read the source articles and information from abc10 of Sacramento, Calif., from AutoMobilSport and in a blog post in the Drive Lab series from Nvidia. 


Ag-tech Employing AI and Range of Tools With Dramatic Results 

By AI Trends Staff  

An agricultural technology (ag-tech) startup in San Francisco, Plenty, plants its crops vertically indoors, in a year-round operation employing AI and robots that uses 95% less water and 99% less land than conventional farming. 

Plenty’s vertical farm approach can produce the same quantity of fruits and vegetables as a 720-acre flat farm, on only two acres.    

Nate Storey, cofounder and chief science officer of the startup Plenty

“Vertical farming exists because we want to grow the world’s capacity for fresh fruits and vegetables, and we know it’s necessary,” stated Nate Storey, cofounder and chief science officer of the startup Plenty, in an account in Intelligent Living. 

A yield 400 times that of flat farms makes vertical farming “not just an incremental improvement,” and the fraction of water use “is also critical in a time of increasing environmental stress and climate uncertainty,” Storey stated. “All of these are truly game-changers.”  

Plenty is one of hundreds of ag-tech startups using new technology approaches—including AI, drones, robots and IoT sensors—that are being supported with billions of dollars in investment from the capital markets.     

Plenty’s climate-controlled indoor farm has rows of plants growing vertically, hung from the ceiling. LED lights mimicking the sun shine on the plants; robots move them around; AI manages all the variables of water, temperature, and light. The AI continuously learns and optimizes how to grow better crops.   

Also, vertical farms can be located in urban areas resulting in locally-produced food, with many transportation miles eliminated. Benefits of locally-produced crops include reduction of CO2 emissions from transportation vehicles and potentially lower prices for consumers.    

“Supply-chain breakdowns resulting from COVID-19 and natural disruptions like this year’s California wildfires demonstrate the need for a predictable and durable supply of products that can only come from vertical farming,” Storey stated.  

Plenty has received $400 million in investment capital from SoftBank, former Google chairman Eric Schmidt, and Amazon’s Jeff Bezos. It also struck a deal with Albertsons stores in California to supply 430 stores with fresh produce.  

Bowery Farming in New York City Supplying 850 Grocery Stores  

Another indoor farming venture is Bowery Farming in New York City, which has raised $467.5 million so far in capital, according to Crunchbase. Experiencing growth during the pandemic, the company’s produce is now available in 850 grocery stores, including Albertsons, Giant Food, Walmart and Whole Foods, according to an account in TechCrunch.   

The infusion of new capital, $300 million in May, “is an acknowledgement of the critical need for new solutions to our current agricultural system,” stated CEO Irving Fain in a release. “This funding not only fuels our continued expansion but the ongoing development of our proprietary technology, which sits at the core of our business and our ability to rapidly and efficiently scale toward an increasingly important opportunity in front of us,” Fain stated. 

The company plans to expand to new locations in the US, including a new site located in an industrial area in Bethlehem, Penn., which Bowery says will be its largest to date.  

A blog post on the company’s website describes the BoweryOS as the “central nervous system” of each farm, offering plants individual attention at scale. “It works by collecting billions of data points through an extensive network of sensors and cameras that feed into proprietary machine-learning algorithms that are interpreted by the BoweryOS in real time,” the account states. In addition, “It gets smarter with each grow cycle, gaining a deeper understanding about the conditions each crop truly needs to thrive.”  

Ag-tech Spending Projected to Reach $15.3 Billion by 2025 

Global spending on smart, connected ag-tech systems, including AI and machine learning, is projected to triple by 2025, to reach $15.3 billion, according to BI Intelligence Research, quoted in a recent account in Forbes. 

IoT-enabled ag-tech is the fastest growing segment, projected to reach $4.5 billion by 2025, according to PwC. 

Demand should be there. Prediction data on population and hunger from the United Nations shows the world population increasing by two billion people by 2050, requiring a 60% increase in food production. AI and ML are showing the potential to help meet the increased requirement for food.   

Louis Columbus, author and principal of Dassault Systemes, supplier of manufacturing software

“AI and ML are already showing the potential to help close the gap in anticipated food needs,” stated the author of the Forbes article, Louis Columbus, a principal of Dassault Systemes, supplier of manufacturing software.  

AI and machine learning are well-suited to tackle challenges in farming. “Imagine having at least 40 essential processes to keep track of, excel at and monitor at the same time across a large farming area often measured in the hundreds of acres,” Columbus stated. “Gaining insight into how weather, seasonal sunlight, migratory patterns of animals, birds, insects, use of specialized fertilizers, insecticides by crop, planting cycles and irrigation cycles all affect yield is a perfect problem for machine learning,” he stated.  
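As a hedged illustration of why yield prediction is a natural fit for machine learning, here is a tiny sketch on synthetic data; the feature names and the relationship between them are invented and do not come from any real farm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_plots = 500

# Synthetic per-plot features: soil moisture, fertilizer dose, sunlight hours, irrigation cycles.
X = rng.uniform(size=(n_plots, 4))
# Synthetic yield with an interaction term and noise, standing in for the messy real relationship.
y = 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 0.1, n_plots)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out plots:", round(model.score(X_test, y_test), 3))
```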

Among a list of ways AI has the potential to improve agriculture in 2021, he offered:  

Using AI and machine learning-based surveillance systems for monitoring. Real-time video feeds of every crop can be used to send alerts immediately after an animal or human breach, very practical for remote farms. Twenty20 Solutions is a leader in the field of AI and machine learning-based surveillance.  

Improve crop yield prediction with real-time sensor data and visual analytics data from drones. Farms have access to data sets from smart sensors and drones they have never had before. Now it’s possible to access data from in-ground sensors on moisture, fertilizer, and nutrient levels, to analyze growth patterns for each crop over time. Infrared imagery and real-time video analytics also provide farmers with new insights.  

Smart tractors and agribots—robots designed for agricultural purposes—using AI and machine learning are a viable option for many agricultural operations that struggle to find workers. Self-propelled agribots can be programmed, for example, to distribute fertilizer on each row of crops in a way that keeps operating costs down and improves yields. Robots from VineScout are used to create crop maps, then help manage crops, especially in wine vineyards. Based in Portugal, the project has been backed by the European Union and multiple investors.  

Read the source articles and information in Intelligent Living, in TechCrunch and in Forbes. 


Parking Between the Lines, a Heady Viral Topic, Ensnares AI Autonomous Cars 

By Lance Eliot, the AI Trends Insider   

Are you a middle parker or a sideline-hugging parker?   

Here’s the deal. A recent TikTok video went viral about how we all should be parking our cars when doing so in those parking lots that have clearly marked lined spaces. A brouhaha has now arisen.

You know how it goes. As you drive down a row of parked cars, you are spying for any next open space. Upon spotting one, you quickly drive up to the prized piece of turf and maneuver your car into the allotted space. There are painted lines on the asphalt that denote what amount of floor space you are considered entitled to consume.   

The question posed to you is whether you tend to park directly midway between those lines, or whether you aim to be closer to one side or the other of your teensy bit of earth. Take a moment to think this over. Your answer is very, very, very important.   

Most of the time, your primary concern is probably that you don’t want to scrape against any other cars as you manage to get into the parking spot.   

Trying to somehow line up perfectly in your now grabbed-up parking spot is secondary in priority. They say that possession is nine-tenths of the law, so your crucial first step is to satisfactorily occupy the space. Dive in there however you can squeeze into it. This keeps other interlopers from trying to claim they saw it first (which they might have, but you now “own” that space and have presumably won an intergalactic battle in doing so). 

Okay, after making sure that the landing has happened, and you’ve secured the vaunted spot, now you look around to see how much room there is between your car and the adjacently parked vehicles. Sometimes those other vehicles are rudely protruding into your now conquered space.    

Outrageous!   

But it seems relatively rare that people blatantly bloat over into an adjacent parking space, though it, unfortunately, does indeed occur. We’ll set aside that consideration for now. Let’s assume for the sake of discussion that you’ve found a parking spot that is not being encroached upon. The adjacent vehicles are within their lines and not transgressing into your space. I would guess that most of the time that’s how things are. You have the full extent of width available in your captured parking spot.   

How much space do you have width-wise? 

It all depends, but the traditional width for conventional parking spots is about eight to nine feet or so. A car is typically about six feet to perhaps six and a half feet in width. For ease of discussion, let’s agree to use six feet for the width of a normal car and use eight feet for the width of a typical parking space.  

Based on the presumption that a car is six feet wide and the parking spot is eight feet wide, we can use our heads to calculate that the difference is a matter of two feet. You have about two feet to play with inside your parking space, and those two feet are likely to be needed for getting out of and into your car. The two feet are your means of making egress and ingress related to your parked vehicle.   

Consider how these two feet of space can be allocated.   

By parking perfectly in the middle of the parking spot, you would in theory have one foot of open space to your left and one foot of open space to your right.    

What we also need to include in our calculus is whether the vehicles adjacent to you have managed to include any available space in their respective parking spots. It could be that the vehicle to your immediate left is hugging the line that borders upon your two cars. In that case, you have no added room by trying to temporarily make use of the space to your left, perhaps wanting to momentarily intrude as you open your driver’s side car door.   

Similarly, if the vehicle to your immediate right is hugging the line that borders upon your parking spot, this means that trying to use any space beyond your “internal” one foot of available space is going to be rebuffed. There isn’t any more space available because that other vehicle is hugging the line.   

In a squeeze play of a parking situation, whereby the adjacent vehicles are each hugging the line, you only have your own two feet of available space to exploit. There is no immediately available temporary space to leverage. This is nearly as bad as when adjacent cars encroach, though not quite so since you did at least get into the parking spot successfully.   

The thing is, now you might not have any means to get out of your car. 

Darn!   

Sure, you found a parking spot, nonetheless, you might be stuck inside your vehicle and unable to get out. That’s not what you probably had in mind when you were searching for a parking spot. The usual assumption is that you can park your car, you can get out of it, you can go do whatever you had in mind, and when you return to your parked car you will be able to get into it.   

Seems simple enough, but that doesn’t always happen readily.   

Getting into and out of your vehicle can at times be a contortionist’s job. You tentatively open the driver’s side door, trying desperately not to have your door touch the side of the adjacent car. The odds are that it will bump against the other car in this squeeze play scenario. You look around to see if anyone noticed. Assuming the coast is clear, you steady the door and ooze your body out of your driver’s seat, along with thinking extremely thin thoughts in hopes that your body can become one-dimensional and slide out without any further problems. 

Let’s use a smiley face version of the parking situation and pretend that the adjacent cars have parked perfectly in the middle of their parking spots. We will continue using the assumed sizes of six feet for the car width and eight feet for the width of the parking space.   

This is a blessing.   

It means that you have one foot of temporary space from the car that is to your left, and you have an additional foot of temporary space to your right. All told, you now have available two feet to the left of your car, and two feet to the right of your car. Mathematically, this is your one foot of space to your left inside your space, plus the one foot of space that is to the right of the space to your left. And then there is the one foot of space to your right, combined with the one foot of space that is to the left of the car that is to your right. Say that quickly, ten times, as it is quite the tongue twister. 
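If the tongue twister is too much, the same arithmetic can be laid out in a few lines of code, using the six-foot car and eight-foot space assumed throughout this discussion.

```python
CAR_WIDTH = 6.0   # feet, the assumed typical car width from the discussion above
SPOT_WIDTH = 8.0  # feet, the assumed typical parking-space width

slack_per_spot = SPOT_WIDTH - CAR_WIDTH        # 2.0 ft of play inside each space

# Everyone parks dead center: 1 ft of open space on each side of every car.
own_left = own_right = slack_per_spot / 2      # 1.0 ft apiece

# Temporary space on your driver's side: your own 1 ft plus the neighbor's spare 1 ft.
usable_left = own_left + slack_per_spot / 2    # 2.0 ft
usable_right = own_right + slack_per_spot / 2  # 2.0 ft
print(usable_left, usable_right)               # 2.0 2.0
```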

I emphasize that this is a calculation of the temporary space. You are not parked in their spaces, and you only momentarily use their available space when you need to get into and out of your vehicle.   

You have the enviable and luxurious scenario of being able to use an entire two feet to get out of your car on your side, and if you perchance have a passenger in the seat next to you, they have two feet to use on their side too. The world has suddenly become a joyous place. Birds are singing, flowers are blooming. It is fortuitous that those adjacent cars were able to perfectly park in the middle of their spaces.   

Ponder that notion.   

We pretty much assume that most people will try to park their cars in the center of their parking spot. Doing so seems prudent. It gives the maximum allowable space on either side of your car, within the constraints of your limit lines. It makes plain sense to do so.   

Envision a nirvana in which everyone always impeccably parked their cars in the exact center of the parking spots that they opted to occupy. In the case of six-foot-wide cars and eight-foot-wide parking spaces, there would be two feet to either side of each car. Every time. Guaranteed.   

Side note, a smarmy know-it-all might argue that we don’t know what portends for the cars parked at the very edges of the entire row. I think we can safely argue that they would likely have even more than two feet available. The assumption is that there isn’t anything blocking the ends of the row. Of course, this might not be the case and possibly a concrete wall or some barrier is there. Those that park at the end of such rows will decidedly be shortchanged, a sad fact of life.

Do today’s drivers achieve the purity of parking in the center or middle of their captured parking spot?   

Nope.   

Any casual glance at cars parked in a contemporary mall or movie theatre parking lot will showcase abundantly that people do not park that way with any semblance of consistency.   

On top of this, it is easy to justify not parking in the center of your parking spot if there is a vehicle in the adjacent spot that is not abiding with the park-in-the-middle mantra.   

For example, you drive up to a parking spot, and only you are in your car. Only you will need to get out of and later back into the car. You don’t need to worry about having any available space to the right of your car since you don’t have any passengers on board. You notice that the car to your left is hugging the borderline.  

What do you do?   

Indubitably, you would deduce that you ought to park as far to the right as possible in your parking space, providing maximum distance between your driver’s side door and the border to your left. In essence, this creates two feet of space, entirely confined within your available parking spot. The dolt to your left has essentially forced you into doing this, due to their careless parking and not having obeyed the rule to always park in the center of a parking spot. 

You had no choice. The other driver made the choice for you. The moment they hugged the line on their right, it meant that any driver pulling into that parking space to their right is going to inevitably shift over to the right too, seeking to maintain a reasonable distance to get out of their car.   

A close observation of cars parked in a parking lot will oftentimes reveal this cascading effect. Once a vehicle opts to park to the edge of their parking spot, the car adjacent will necessarily opt to do the same. And the next vehicle will do likewise. On and on this goes, causing an entire row to end up being lopsided in parking close to the line.   

It just takes one weak link (instigator) in the chain to get all the other drivers to do the same thing.   

I’m not suggesting this cascade always plays out. It all depends upon where the lopsided parking effort originates. It also depends obviously on the actual widths of the cars, and so on. The point overall is that this can happen generally and does in fact occur.   

A difficulty for many drivers is that they are not good at gauging where the center of the parking spot is, nor how to align their particular vehicle accordingly. It seems that a lot of drivers have no visceral comprehension of the width of their car. They do things by wild estimation. Even if their lives depended upon precisely pulling into a tight parking spot and being dead in the middle (else, say, a menacing gorilla will leap onto the hood of their car), one doubts they could do so.   

In short, parking in the middle of a parking spot is just too much to handle for most drivers.   

Kind of a heartbreaking commentary about how we drive. Sad face. This brings us to the viral video.   

As though the video maker had just discovered the source of the Nile or the secret to those alien spaceships and UFOs, the video asserts that we should all park toward the left line of any parking spot and this would solve the world’s problems. At least with respect to parking. 

The notion is straightforward.

By everyone parking as close as possible to the left line, we would always be leaving open that roughly two feet to our right. Guaranteed (under the assumptions herein about the widths involved). Now, I realize you are thinking that you could do the exact same thing by everyone agreeing to park immediately next to the right sideline. Yes, that’s true. 

The basis for adopting the left line is that in a society wherein cars are designed with the driver in the left-side seat, drivers can presumably align easily with the left line. Going back to how badly drivers seem to gauge the width of their cars, asking them to sidle up to the right line would seemingly be a disastrous proposition. Cannot be done, they would exhort.   

You would hope that most drivers could at least align their vehicles with the left line. Naturally, any country that has the driver’s side to the right side of the vehicle would probably want to use the right side line, leveraging the same logic already mentioned.   

That is then what got a viral spin going.   

Apparently, some people on this planet think that this is the best idea since the invention of sliced bread. Others scratch their heads and wonder why in the heck this simple idea should be so bandied about and get a buzz in the social media realm. One supposes that this does have a bit more complexity and weightiness than videos that show a cat meowing or a baby that spits up milk (please don’t harp on me about that, I love cats, and babies are certainly adorable too).   

Speaking of cars, the future of cars consists of AI-based true self-driving cars. 

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. 

Here’s an intriguing question that is worth pondering: Would the left side hugging of a parking spot be more feasible due to the advent of AI self-driving cars, and if so, should this be implemented?   

Before jumping into the details, I’d like to further clarify what is meant when I refer to true self-driving cars. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).   

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, no one should be misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.   

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Parking In Parking Spots 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   

Let’s dive into the myriad of aspects that come to play on this topic.   

Programming of self-driving cars to always park at the leftmost edge of a parking spot would be relatively straightforward. 

You see, self-driving cars make use of various sensors such as video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and similar devices to derive the nature of the driving scene. You could construe the sensor suite as somewhat akin to the eyes and ears of the AI driving system.   

In the case of parking in a lined parking spot, the AI driving system would receive data via the vehicle-mounted sensors that are scanning the surroundings, and then utilize computational pattern-matching techniques such as Machine Learning (ML) or Deep Learning (DL) to identify the specific parameters associated with a parking spot. Via the video streamed in real time from the onboard video cameras, the painted lines would hopefully be detectable. The AI driving system would then issue commands to the autonomous vehicle driving controls to maneuver into the parking space accordingly. 
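To give a flavor of how such a detection-and-positioning step might be sketched, here is a heavily simplified example using generic OpenCV line detection on a single camera frame; the parameter values and the left-versus-center strategy switch are illustrative assumptions, and this is in no way the actual software of any automaker or self-driving tech firm.

```python
import cv2
import numpy as np

def find_parking_lines(frame_bgr):
    """Detect candidate painted lines in a camera frame (toy version)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                            minLineLength=80, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]  # each entry is (x1, y1, x2, y2)

def target_lateral_position(left_line_x, right_line_x, car_width_px, strategy="left"):
    """Pick the car's target centerline within the detected spot.
    strategy: 'left' hugs the left line, 'center' splits the slack evenly."""
    slack = (right_line_x - left_line_x) - car_width_px
    if strategy == "left":
        return left_line_x + car_width_px / 2          # flush against the left line
    return left_line_x + car_width_px / 2 + slack / 2  # centered within the spot
```

Swapping the strategy from left-hugging to centered is the kind of one-line change that makes the consistency argument for AI driving systems so straightforward.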

Generally, you could reasonably expect that this would be done with extremely high reliability.   

The odds are that the self-driving car would nearly always park in the leftmost portion of a parking space if that’s what it had been programmed to attain. Occasional exceptions might arise, such as if the adjacent parked cars prevented positioning in the leftmost portion, or possibly due to obstructions or other oddities about a particular parking spot.   

I might add that in the case of human drivers trying to always park toward the leftmost edge of a parking spot, there is a lingering doubt about the reliability of humans being able to do so. Though the earlier point was made that human drivers would presumably find it easier to park toward the left line and do so more consistently than parking in the center of a parking space, that omits the notion that humans innately have human foibles and therefore are not especially robot-like in their behaviors. 

You can imagine how things might go in the case of human drivers trying to adopt a left-line parking rule.   

Some people would flatly refuse to do so. They would potentially feel it is their constitutional right to park within a parking spot wherever they darned wish to do so. We would undoubtedly end up with some parking lots that had the left line rule, while others proclaimed you can park anywhere within the lines. This would draw some drivers to one of those parking lots and other drivers to the other ones. Of course, at some point, a left-line person would get irked that an anywhere person opted to park in the left-line parking lot, and fisticuffs would possibly fly.   

On top of this, it would seem overly optimistic to believe that human drivers would properly align to the left line, even if that was their desired intent. I’m sure that many would cross over the line while attempting to kiss the line. As a result, there would probably be some sizable percentage of parked cars that edged over into the left adjacent parking spot.   

Anyway, all of those complications would fall by the wayside with the AI driving systems at the wheel. No-fuss, no muss. Parking to the left line would be easy-peasy.   

Case closed. Wait a second, maybe there is more. Of course there is.   

We are going to have both self-driving cars and human-driven cars for many decades to come. There are about 250 million conventional cars in the United States alone, and those regular cars are not going to disappear overnight. In short, we can expect that our public roadways will be replete with a mixture of self-driving cars and human-driven cars.   

This includes being in parking lots too. 

Though the self-driving cars could readily and consistently park to the left line, there would certainly be human drivers that violated this principle. It would then toss asunder the precept that all of the cars would need to park in the same manner. We are back to square one.   

You could have parking lots that are devoted exclusively to self-driving cars. In that case, the left line rule would be viable. Will human drivers possibly get upset that they are being kept out of the parking lots being used by self-driving cars?   

Possibly so, depending upon where those parking lots are located, such as near a convenient place to be able to park your car.   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

Lots more twists and turns arise. 

A self-driving car could essentially park anywhere within the lines, consistently, regardless of whether we wanted this to happen on the left or the right. As such, it might not make sense to enforce the left-line rule. You could instead program the AI driving systems to center the car in the parking spot. This would have the same general effect as parking to the left. It would be done consistently and ostensibly smackdab in the middle.   

Some pundits insist there would never be a need to park a self-driving car in a parking lot, or anyplace else. They claim that self-driving cars will always be on the go, other than when getting fueled or undergoing maintenance. Self-driving cars will seemingly always drop off passengers at some suitable drop-off point, and likewise, pick up passengers at some appropriate pick-up spot.   

Thus, it is conceivable that mass parking of self-driving cars is not needed.   

Furthermore, when self-driving cars are parked, humans will not get into or out of the self-driving car while it is in a parked position. Instead, the AI driving system will bring the autonomous vehicle to the passengers. You could then pack self-driving cars together like sardines, assuming you did want them to park someplace.   

Quite a future awaits us.   

Meanwhile, the next time that you seek to park in a lined parking lot, look at how the other cars are parked. Probably will look like a full-on mishmash, having some cars toward the left, toward the right, on the line, in the middle, and so on.   

Human drivers are definitely quite a fun bunch.   

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website