jeffreystefan

Patent Picks - Week of 10/17/2022

Here are my patent picks for this week in Machine Learning, Artificial Intelligence, and Autonomous Vehicles. I've also added a new section listing the top 5 patent assignees for each category. These are just a sample of the assignees at the top of the list of newly issued patents, not assignee statistics; the point is simply to show who is active in each space. The format has also changed: the picks and nutshell descriptions now come first, with the Abstracts and Backgrounds after them. This makes for a quicker read, and any patent you're interested in can then be examined in greater detail.


MACHINE LEARNING

Top 5 Assignees

Apple, Inc.

Snap, Inc.

Splunk, Inc.

Amazon Technologies, Inc.

Wells Fargo Bank


My Pick: US 11474596 B1 Systems And Methods For Multi-user Virtual Training

In a Nutshell:

Executes a predictive machine learning model learned from user interaction in a virtual training environment and uses the predictive model to enhance measurement of user skill performance.



ARTIFICIAL INTELLIGENCE

Top 5 Assignees

Virtustream IP Holding Company LLC

Sprint Communications Company L.P.

SAMSUNG ELECTRONICS CO., LTD

SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.

Boston Scientific Neuromodulation Corporation


My Pick: US 11475770 B2 Electronic Device, Warning Message Providing Method Therefor, And Non-transitory Computer-readable Recording Medium

In a Nutshell:

Learns traffic accident patterns for particular driving situations and delivers a warning to a driver based on the possibility of an accident occurring in similar situations.


AUTONOMOUS VEHICLES

Top 5 Assignees

Allstate Insurance Company

State Farm Mutual Automobile Insurance Company

Toyota Research Institute, Inc.

NVIDIA Corporation

The Regents of the University of Michigan


My Pick: US 11474255 B2 System And Method For Determining Optimal Lidar Placement On Autonomous Vehicles

In a Nutshell:

Takes a range of interest (distances in which sensor data is collected), segments it into cubes of the same size, and uses shape data to optimize placement of LIDAR sensors.
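As a rough illustration of the segmentation step in the nutshell above, the sketch below maps 3-D points inside a range of interest to fixed-size cube (voxel) indices. The 2-meter cube size and the sample points are assumptions for the example, not values from the patent.

```python
# Toy voxelization sketch: assign each point to a same-size cube of the
# range of interest. CUBE and the points below are made-up example values.
CUBE = 2.0  # cube edge length in meters (assumed for illustration)

def voxel_index(point):
    """Map an (x, y, z) point to the index of the cube that contains it."""
    return tuple(int(c // CUBE) for c in point)

points = [(0.5, 1.9, 0.2), (1.0, 0.1, 1.5), (3.2, 0.0, 0.4)]
occupied = {voxel_index(p) for p in points}
print(sorted(occupied))  # the first two points fall in the same cube
```

Shape data accumulated per cube (e.g., which cubes each candidate sensor pose can see) is what an optimizer would then score placements against.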


PATENT ABSTRACTS AND BACKGROUNDS

Here's a deeper dive into the Abstract and Background sections of a few issued patents in each category, including my picks.


MACHINE LEARNING


Generating More Realistic Synthetic Data With Adversarial Nets

DOCUMENT ID: US 11475276 B1

DATE PUBLISHED: 2022-10-18

ASSIGNEE INFORMATION

NAME: Apple Inc.

APPLICATION NO: 15/804900

DATE FILED: 2017-11-06

DOMESTIC PRIORITY (CONTINUITY DATA): us-provisional-application US 62418635 20161107


Abstract

A generative network may be learned in an adversarial setting with a goal of modifying synthetic data such that a discriminative network may not be able to reliably tell the difference between refined synthetic data and real data. The generative network and discriminative network may work together to learn how to produce more realistic synthetic data with reduced computational cost. The generative network may iteratively learn a function that refines synthetic data with a goal of generating refined synthetic data that is more difficult for the discriminative network to differentiate from real data, while the discriminative network may be configured to iteratively learn a function that classifies data as either synthetic or real. Over multiple iterations, the generative network may learn to refine the synthetic data to produce refined synthetic data on which other machine learning models may be trained.


BACKGROUND

Technical Field

(1) This disclosure relates generally to systems and algorithms for machine learning and machine learning models.

Description of the Related Art

(2) Annotated training data can be useful when training accurate machine learning models. Collecting such data using traditional techniques may be very expensive. When training models on synthetic data, a synthesizer may be able to deform and manipulate objects to cover a large space of variations that would otherwise be expensive and/or difficult (or even impossible) to collect in the real world. Additionally, when using synthetic data, annotations may be obtained automatically. However, learning from synthetic data can be problematic, such as due to differences in feature distributions between synthetic and real data (which may be termed a “synthetic gap”). For example, models trained on less realistic synthetic data may not work as well on real (e.g., not synthetic) data.

(3) Labeled training datasets (esp., large labeled training datasets) have, in some situations, become increasingly important, such as when using high capacity deep neural networks. Thus, neural networks may be trained on synthetic data instead of real data. A number of tasks have been performed using synthetic data, such as text detection and classification in RGB images, font recognition, object detection in depth (and RGB) images, hand pose estimation in depth images, scene recognition in RGB-D, and human pose estimation in RGB images, especially prior to the use of deep learning neural networks.

(4) However, learning from synthetic data can be problematic. For instance, synthetic data is often not realistic enough, possibly leading the network to learn details only present in synthetic data and to fail to generalize well with the real (e.g., non-synthetic) data. The terms ‘synthetic’ and ‘real’ are used herein merely to differentiate artificially (e.g., synthetically) generated data from data captured from the “real” world. One potential solution to closing the synthetic gap may involve improving the renderer. For instance, the use of photo-realistic renderers may help to improve synthetic data. However, increasing the realism is often computationally expensive, renderer design is frequently difficult and expensive, and renderers often fail to sufficiently model noise present in real images, thereby potentially causing neural networks to overfit to unrealistic details in the synthetic images.
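To make the adversarial setup concrete, here is a deliberately tiny, gradient-free sketch of the alternating refiner/discriminator loop the abstract describes. The one-parameter shift refiner, the threshold discriminator, and the Gaussian toy data are illustrative stand-ins, not the patented method.

```python
import random
from statistics import fmean

random.seed(0)

# Toy data: "real" samples cluster around 5, raw synthetic samples around 0.
real = [random.gauss(5.0, 1.0) for _ in range(500)]
synthetic = [random.gauss(0.0, 1.0) for _ in range(500)]

shift = 0.0  # the "refiner" here is just x + shift, a stand-in for a generative network

def discriminator_accuracy(shift):
    """Minimal discriminator: threshold halfway between the sample means,
    oriented whichever way classifies best."""
    refined = [x + shift for x in synthetic]
    thresh = (fmean(real) + fmean(refined)) / 2.0
    correct = sum(x > thresh for x in real) + sum(x <= thresh for x in refined)
    acc = correct / (len(real) + len(refined))
    return max(acc, 1.0 - acc)

# Alternating updates, gradient-free: the refiner nudges `shift` in whichever
# direction does not help the discriminator, mimicking the adversarial loop.
for _ in range(100):
    step = 0.1
    if discriminator_accuracy(shift + step) <= discriminator_accuracy(shift):
        shift += step
    else:
        shift -= step

print(round(shift, 1))  # the refined synthetic data has drifted toward the real data
print(discriminator_accuracy(shift))
```

In the real system both networks would be trained by gradient descent; the point of the toy is only the alternating structure, where refinement succeeds when the discriminator's accuracy approaches chance.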


MY PICK

In a Nutshell:

Executes a predictive machine learning model learned from user interaction in a virtual training environment and uses the predictive model to enhance measurement of user skill performance.


Systems And Methods For Multi-user Virtual Training

DOCUMENT ID: US 11474596 B1

DATE PUBLISHED: 2022-10-18

ASSIGNEE INFORMATION

NAME: ARCHITECTURE TECHNOLOGY CORPORATION

APPLICATION NO: 16/892911

DATE FILED: 2020-06-04


Abstract

Disclosed herein are embodiments for managing a task including one or more skills. A server stores a virtual environment, software agents configured to collect data generated when a user interacts with the virtual environment to perform the task, and a predictive machine learning model. The server generates virtual entities during the performance of the task, and executes the predictive machine learning model to configure the virtual entities based upon data generated when the user interacts with the virtual environment. The server generates the virtual environment and the virtual entities configured for interaction with the user during display by the client device, and receives the data collected by the software agents. The system displays a user interface at the client device to indicate a measurement of each of the skills during performance of the task. The server trains the predictive machine learning model using this measurement of skills during task performance.


BACKGROUND

(2) One of the more effective methods of skill acquisition is scenario-based learning. In the case of computer-based training, scenario-based learning may be achieved by providing realistic, hands-on exercises to trainees. Military forces, commercial enterprises, and academic institutions all conduct computer-based training to educate and train personnel. In virtual training, training is conducted when the learner and the instructor are in separate locations or in a virtual or simulated environment.

(3) Virtual training exercises are conducted in both individual and group formats. In the present disclosure, group formats for virtual training are sometimes called multi-user training. Multi-user training exercises may be conducted cooperatively or as competitions. In synchronous virtual training, training participants, such as learners and instructors or multiple learners, engage in learning simultaneously and can interact in real time. Conventional multi-user virtual training exercises can have limitations in presenting realistic training scenarios or tasks that effectively prepare trainees for real world situations. For example, presenting realistic simulations can require emulation of subtle effects in visual or auditory imagery. Conventional training simulations often lack realism, and can fail to engage users or can omit significant knowledge. Conventional virtual training has limitations in preparing users for real world scenarios that require planning for threats and responding to adversary actions. Conventional virtual training may not prepare users to make the most appropriate decision based on initial information and act quickly on that decision, while being ready to make changes as more data becomes available.
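The abstract's loop of collecting interaction data, measuring each skill, and configuring virtual entities from a predictive model can be sketched, under heavy simplification, as a running per-skill score that sets the next entity's difficulty. The moving-average update and the "threat-triage" skill name are assumptions for illustration, not details from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SkillTracker:
    """Toy stand-in for the patent's software agents + predictive model:
    aggregates interaction events into per-skill scores and picks a
    difficulty for the next virtual entity."""
    scores: dict = field(default_factory=dict)  # skill name -> score in [0, 1]

    def record(self, skill, success, weight=0.2):
        # Exponential moving average over observed successes/failures.
        prev = self.scores.get(skill, 0.5)
        self.scores[skill] = (1 - weight) * prev + weight * (1.0 if success else 0.0)

    def next_entity_difficulty(self, skill):
        # Configure the next virtual entity slightly above the learner's level.
        return min(1.0, self.scores.get(skill, 0.5) + 0.1)

tracker = SkillTracker()
for outcome in [True, True, False, True, True]:  # events collected by the agents
    tracker.record("threat-triage", outcome)

print(round(tracker.scores["threat-triage"], 3))
print(round(tracker.next_entity_difficulty("threat-triage"), 3))
```

The patent trains an actual predictive machine learning model on these measurements; the moving average here only illustrates the measure-then-configure feedback cycle.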



Machine Learning System, Method, And Computer Program To Predict Which Resident Of A Residential Space Is Watching Television For Content Targeting Purposes

DOCUMENT ID: US 11477531 B1

DATE PUBLISHED: 2022-10-18

ASSIGNEE INFORMATION

NAME: AMDOCS DEVELOPMENT LIMITED


Abstract

As described herein, a machine learning system, method, and computer program are provided to predict which resident of a residential space is watching television for content targeting purposes. In use, a login to a television service on a television device in a residential space is detected. Additionally, information defining a plurality of residents of the residential space is identified. Further, a profile determined for the login is identified, where the profile is associated with a particular resident of the plurality of residents or a particular resident group of the plurality of residents. Still yet, the profile and the information defining the plurality of residents of the residential space is input to a machine learning model to predict one or more residents of the plurality of residents that is consuming the television service on the television device.

Background/Summary

FIELD OF THE INVENTION

(1) The present invention relates to techniques for targeting content to users.

BACKGROUND

(2) Content targeting is a process by which the relevancy of content to a user is used as a basis for providing or recommending the content to the user. Typically, the targeted content may include advertisements, media, offers, etc. Current content targeting techniques consider various types of information when determining the relevancy of content to a user. This information usually includes demographics of the user and past behavior (e.g. content consumption) by the user, but also usually includes the current content being consumed by the user, especially when the targeted content is planned to be provided to the user during (or shortly after) the current content being consumed.

(3) However, when content is being viewed on a television of a residential space that is shared by multiple residents, it is unknown which resident (or residents) is consuming the content. This lack of user information restricts the content targeting process, as relevancy to demographics, etc. of the particular resident cannot be determined. There is thus a need for addressing these and/or other issues associated with the prior art.
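As a toy stand-in for the machine learning model in the abstract, the sketch below scores each resident on profile match and typical viewing hours, then returns the likely viewers. The residents, profile names, and scoring rule are all invented for illustration.

```python
# Hypothetical household data: each resident has known viewing hours and the
# login profiles they tend to use. All values are made up for the example.
residents = [
    {"name": "parent", "typical_hours": range(20, 24), "profiles": {"family", "adult"}},
    {"name": "teen",   "typical_hours": range(16, 23), "profiles": {"family", "teen"}},
    {"name": "child",  "typical_hours": range(7, 20),  "profiles": {"family", "kids"}},
]

def predict_viewers(profile, hour, threshold=2):
    """Toy scoring model: one point for a profile match, one point for
    viewing at a typical hour; return residents at or over the threshold."""
    scores = {}
    for r in residents:
        scores[r["name"]] = (profile in r["profiles"]) + (hour in r["typical_hours"])
    return sorted(name for name, s in scores.items() if s >= threshold)

print(predict_viewers("kids", 9))     # kids profile, morning
print(predict_viewers("family", 21))  # shared profile, evening
```

The patented system learns these associations with a trained model rather than hand-set rules; the sketch only shows how a shared profile can still narrow the prediction to a resident group.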



ARTIFICIAL INTELLIGENCE


Wireless Communication Service Responsive To An Artificial Intelligence (AI) Network

DOCUMENT ID: US 11477719 B1

DATE PUBLISHED: 2022-10-18


ASSIGNEE INFORMATION

NAME: Sprint Communications Company L.P.


Abstract

A wireless communication network serves User Equipment (UE) responsive to an Artificial Intelligence (AI) network. The UE transfers UE data that indicates user applications and their current status to a distributed ledger. The distributed ledger also receives past quality levels and locations from the wireless communication network. The distributed ledger stores the UE data, quality levels, and locations in a blockchain format that is readable by the AI network. The distributed ledger receives a future quality level and location and time for the UE from the AI network. The distributed ledger stores the future quality level and location and time for the UE in the blockchain format. The distributed ledger transfers the future quality level and location and time for the UE to an Exposure Function (EF). The EF signals a network control-plane to deliver the wireless data service to the UE at the future location and time and quality level.


Background/Summary

TECHNICAL BACKGROUND

(1) Wireless communication networks provide wireless data services to wireless user devices. Exemplary wireless data services include machine-control, internet-access, media-streaming, and social-networking. Exemplary wireless user devices comprise phones, computers, vehicles, robots, and sensors. The wireless user devices execute user applications to support and use the wireless data services. For example, a robot may execute a machine-control application that communicates with a robot controller over a wireless communication network.

(2) The wireless communication networks have wireless access nodes which exchange wireless signals with the wireless user devices over radio frequency bands. The wireless signals use wireless network protocols like Fifth Generation New Radio (5GNR), Long Term Evolution (LTE), Institute of Electrical and Electronic Engineers (IEEE) 802.11 (WIFI), and Low-Power Wide Area Network (LP-WAN). The wireless access nodes exchange network signaling and user data with network elements that are often clustered together into wireless network cores. The network elements comprise Access and Mobility Management Functions (AMFs), Session Management Functions (SMFs), Interworking functions (IWFs), User Plane Functions (UPFs), Policy Control Functions (PCFs), Network Exposure Functions (NEFs), and the like.

(3) A distributed ledger comprises multiple networked computer nodes that store data in a blockchain format. For the blockchain format, the distributed ledger executes a Distributed Application (dAPP) to execute ledger transactions that create data blocks. The distributed ledger redundantly stores the data blocks in the multiple ledger nodes. Each data block includes a hash of its previous data block to make the redundant data store immutable. The wireless communication networks use the distributed ledgers to store network usage data for the wireless user devices in an immutable format that is readable by the user.
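The hash-linked storage described in the paragraph above can be sketched in a few lines: each block records the hash of its predecessor, so altering any earlier block is detectable. This is a generic hash-chain illustration, not Sprint's ledger implementation, and the payloads are made-up example values.

```python
import hashlib
import json

def block_hash(prev_hash, payload):
    """Deterministic hash over a block's contents."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def make_block(prev_hash, payload):
    return {"prev": prev_hash, "payload": payload,
            "hash": block_hash(prev_hash, payload)}

def verify(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["prev"], block["payload"]):
            return False  # block contents no longer match its stored hash
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # linkage to the previous block is broken
    return True

genesis = make_block("0" * 64, {"ue": "robot-7", "quality": "50Mbps"})
chain = [genesis, make_block(genesis["hash"], {"ue": "robot-7", "location": "cell-12"})]

print(verify(chain))                       # True
chain[0]["payload"]["quality"] = "5Mbps"   # tamper with the first block
print(verify(chain))                       # False
```

Because each hash covers the previous hash, a change anywhere in the history invalidates every later block, which is what makes the stored UE data effectively immutable.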

(4) Artificial Intelligence (AI) networks comprise edges and nodes. An AI node performs logical operations of various type and complexity. The AI edges transfer data between the AI nodes and indicate traffic levels between AI nodes. An AI network can receive data that characterizes user behavior, and over time, the AI network can effectively predict some future user behaviors. For example, an AI network can effectively predict future user locations and activities with some proficiency based on the past user locations and activities.

(5) Unfortunately, the wireless communication networks do not effectively use the distributed ledgers to serve the wireless user devices in response to the AI networks. Moreover, the wireless communication networks do not efficiently use the distributed ledgers to transfer UE and network information to the AI networks.


Electronic Device, Warning Message Providing Method Therefor, And Non-transitory Computer-readable Recording Medium

DOCUMENT ID: US 11475770 B2

DATE PUBLISHED: 2022-10-18


Abstract

An electronic device, a warning message providing method therefor, and a non-transitory computer-readable recording medium are provided. Disclosed is an artificial intelligence (AI) system using a machine learning algorithm such as deep learning and an application thereof. Disclosed, according to one embodiment, is an electronic device which can comprise: a position determination unit for determining a current position of the electronic device; a communication unit for receiving accident data and a driving situation; an output unit for outputting a warning message; and a processor for learning the received accident data to establish a plurality of accident prediction models, selecting an accident prediction model to be applied from among the plurality of accident prediction models based on the determined current position, determining possibility of accident occurrence by using the selected accident prediction model, and controlling the output unit such that the output unit provides a warning message based on determining that the possibility of accident occurrence is greater than or equal to a preset value.
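Here is a hedged sketch of the control flow in the abstract: one region-specific risk model selected by the current position, and a preset threshold that triggers the warning. The regions, risk formulas, and threshold value are invented for illustration; the patent learns its models from accident data.

```python
# Hypothetical per-region accident-risk models (stand-ins for the learned
# accident prediction models in the abstract).
def highway_model(situation):
    return 0.08 + 0.3 * (situation["speed_kmh"] > 110) + 0.4 * situation["rain"]

def intersection_model(situation):
    return 0.15 + 0.5 * (situation["speed_kmh"] > 50) + 0.2 * situation["rain"]

MODELS = {"highway": highway_model, "intersection": intersection_model}
WARNING_THRESHOLD = 0.5  # the "preset value" in the abstract (assumed)

def warn_if_risky(region, situation):
    risk = MODELS[region](situation)  # select the model by current position
    return ("warning" if risk >= WARNING_THRESHOLD else "ok", round(risk, 2))

print(warn_if_risky("highway", {"speed_kmh": 120, "rain": True}))
print(warn_if_risky("intersection", {"speed_kmh": 40, "rain": False}))
```

The key structural idea is the selection step: the device does not use one global model, but picks the accident prediction model matched to where the vehicle currently is.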


TECHNICAL FIELD

(1) The disclosure relates to an electronic device, a warning message providing method therefor, and a non-transitory computer-readable recording medium, and more particularly, to an electronic device capable of preventing a similar accident by learning a traffic accident pattern, a warning message providing method therefor, and a non-transitory computer-readable recording medium.

(2) The disclosure also relates to an artificial intelligence (AI) system simulating a recognition function, a decision function and the like of a human brain using a machine learning algorithm such as deep learning or the like, and an application thereof.

BACKGROUND ART

(3) An artificial intelligence (AI) system is a computer system implementing human-level intelligence, in which a machine learns and makes decisions on its own and becomes smarter, unlike an existing rule-based smart system. As an artificial intelligence system is used more, its recognition rate improves and a user's taste may be understood more accurately. Therefore, existing rule-based smart systems have gradually been replaced by deep learning-based artificial intelligence systems.

(4) An artificial intelligence technology includes machine learning (for example, deep learning) and element technologies using the machine learning. The machine learning is an algorithm technology of classifying and learning features of input data by oneself. The element technology is a technology of using a machine learning algorithm such as deep learning, or the like, and includes technical fields such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, a motion control, and the like.

(5) Various fields to which the artificial intelligence technology is applied are as follows. The linguistic understanding is a technology of recognizing and applying/processing human languages/characters, and includes natural language processing, machine translation, a dialog system, question and answer, speech recognition/synthesis, and the like. The visual understanding is a technology of recognizing and processing things like human vision, and includes object recognition, object tracking, image search, human recognition, scene understanding, space understanding, image improvement, and the like. The inference/prediction is a technology of determining and logically inferring and predicting information, and includes knowledge/probability-based inference, optimization prediction, preference-based planning, recommendation, and the like. The knowledge representation is a technology of automating and processing human experience information as knowledge data, and includes knowledge establishment (data generation/classification), knowledge management (data utilization), and the like. The motion control is a technology of controlling autonomous driving of a vehicle, a motion of a robot, and the like, and includes a motion control (navigation, collision, driving), an operation control (behavior control), and the like.

(6) Meanwhile, conventionally, a person has directly performed a task of analyzing and classifying a cause of an accident at the time of occurrence of the accident. In addition, in the existing machine learning method, only a task of determining a cause according to a criterion classified by a person has been possible.

(7) In addition, there are frequent accident regions of various causes depending on driver's driving habits or surrounding factors, but the frequent accident regions are displayed only by street signs, and it is thus difficult for a user to recognize an accurate accident risk factor.


DISCLOSURE

Technical Problem

(8) The disclosure provides an electronic device capable of preventing a similar accident by learning an accident pattern using information that may be obtained from a vehicle at the time of occurrence of an accident and comparing the learned accident pattern and a current driving situation with each other to provide a warning message, a warning message providing method therefor, and a non-transitory computer-readable recording medium.



AUTONOMOUS VEHICLES

Systems And Methods For Computer-assisted Shuttles, Buses, Robo-taxis, Ride-sharing And On-demand Vehicles With Situational Awareness

DOCUMENT ID: US 11474519 B2

DATE PUBLISHED: 2022-10-18


ASSIGNEE INFORMATION

NAME: NVIDIA Corporation


Abstract

A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, desktop computer, or kiosks. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.


SUMMARY

(23) Safe, cost-effective transportation for everyone has long been a goal for modern societies. While privately-owned individual vehicles provide significant freedom and flexibility, shared vehicles can be cost-effective, friendly to the environment, and highly convenient. The modern English word “bus” is a shortened form of “omnibus”, meaning “for all” in Latin. Anyone who has ridden a bus on express lanes past rush hour congestion, used a bus to take them to satellite airport parking or to school classes, called or hailed an on-demand vehicle to avoid a long walk on a cold dark night or to get home from an airport or train station, or taken a bus tour of a new city or other environment, knows firsthand the economy and convenience shared vehicles can provide. Such shared and on-demand vehicles are especially invaluable to the unsighted, the physically challenged, those too young or old to drive, and those who want to avoid the problems and expense associated with owning their own personal car.

(24) While shared and on-demand vehicle operation often benefits from a human driver, there are contexts in which autonomous or semi-autonomous operation can be a tremendous advantage. For example, so-called “GoA4” automated train service has been used for some time in London, certain cities in Japan, and certain other places. The train between London's Victoria Station and Gatwick Airport is fully autonomous, meaning the train is capable of operating automatically at all times, including door closing, obstacle detection and emergency situations. On-board staff may be provided for other purposes, e.g. customer service, but are not required for safe operation. Copenhagen and Barcelona operate similar fully autonomous subway trains. Other trains operate semi-autonomously, e.g., a computer system can safely move the train from station to station, but human personnel are still required to control doors, keep an eye out for safety, etc.

(25) However, designing a system to autonomously drive a shared or on-demand vehicle not constrained to a physical rail without human supervision at a level of safety required for practical acceptance and use is tremendously difficult. An attentive human driver draws upon a perception and action system that has an incredible ability to react to moving and static obstacles in a complex environment. Providing such capabilities using a computer is difficult and challenging. On the other hand, automating such capabilities can provide tremendous advantages in many contexts. Computers never become fatigued or distracted. They can operate day and night and never need sleep. They are always available to give service. With an appropriate sensor suite, they can simultaneously perceive all points outside the vehicle as well as various points within a vehicle passenger compartment. Such computers could allow humans to focus on tasks only humans can do.

(26) Some aspects of the example non-limiting technology herein thus provide systems, apparatus, methods and computer readable media suitable for creating and running autonomous or semi-autonomous shared transportation vehicles such as shuttle systems. “Shuttles” as used herein includes any suitable vehicle, including vans, buses, robo-taxis, sedans, limousines, and any other vehicle able to be adapted for on-demand transportation or ride-sharing service.

(27) Some example non-limiting systems include situational awareness based on machine perception and/or computer vision by a sensor suite that can rival and, in some aspects, even exceed perception capabilities of human drivers. Such situational awareness in many embodiments includes awareness (a) within the vehicle (e.g., within the vehicle's passenger compartment) and (b) outside of the vehicle (e.g., in front of the vehicle, behind the vehicle, to the left of the vehicle, to the right of the vehicle, above and below the vehicle, etc.). Such situational awareness can be supported by a sensor suite including a wide range of sensors (e.g., cameras, LIDARs, RADARs, ultrasonic, vibration, sound, temperature, acceleration, etc.) and may in some cases be interactive (e.g., the vehicle may interact with passengers within the passenger compartment and also may interact with pedestrians and other drivers).

(28) Some example non-limiting systems include a software suite of client applications, server applications, and manager clients for operating the system on private and public roads. According to some non-limiting embodiments, the shuttle may follow a predefined route, which may be termed a “virtual rail”, which is typically altered or deviated from minimally or only in specific conditions. The vehicle may generate the virtual rail itself based on stored, previous routes it has followed in the past. The vehicle in some embodiments is not confined to this virtual rail (for example, it may deviate from it when conditions warrant) but to reduce complexity, the vehicle does not need to generate a new virtual rail “from scratch” every time it navigates across a parking lot it has previously navigated. Such a virtual rail may include definitions of bus stops; stop signs, speed bumps and other vehicle stopping or slowing points; intersections with other paths (which the vehicle may slow down for); and other landmarks at which the vehicle takes specific actions. In some embodiments, the vehicle may be trained on a virtual rail by a human driver and/or receive information concerning the virtual rail definition from another vehicle or other source. However, in some embodiments it is desirable for the vehicle to calibrate, explore/discover, and map its own virtual rail because different vehicles may have different sensor suites. In typical implementations, the vehicle is constantly using its sensor suite to survey its environment in order to update a predefined virtual rail (if necessary, to take environmental changes into the account) and also to detect dynamic objects such as parked cars, pedestrians, animals, etc. that only temporarily occupy the environment, but which nevertheless must be avoided or accommodated.

(29) The shuttle may stop at any point along the route, including unplanned stops requested by an on-board traveler or pedestrians wishing to ride on the shuttle. In other embodiments, the shuttle dynamically develops a “virtual rail” by performing a high definition dynamic mapping process while surveying the environment. In one example implementation, the shuttle ecosystem described herein for use on a college or corporate campus provides a seamless traveling experience from any point A to any point B in a campus service area, which may include locations that are on a private campus, off campus, or a combination of both.

(30) In some non-limiting embodiments, the system uses a plurality of client applications, including human-machine interfaces (“HMI”), and devices that allow travelers to call for shuttle service, requesting pick-up time, pick-up location, and drop-off location. In non-limiting embodiments, the client applications include mobile applications provided on mobile or portable devices, which may include various operating systems including for example Android and iOS devices and applications and any other mobile OS or devices, including Blackberry, Windows, and others. In some embodiments, the system further includes a Web-based application or Desktop application, allowing users to summon a shuttle while sitting at their desk, in their home, etc. For example, the system preferably enables travelers to request a shuttle via a mobile app or kiosk terminals. The system preferably includes kiosks with large screen displays for implementing graphical implementations of Web Applications that allow users to summon shuttles and request service.

(31) Once on-board, the passenger is able to interact with the shuttle via an on-board shuttle client-interface application, Passenger UX. In some embodiments Passenger UX includes camera-based feature recognition, speech recognition and visual information, as well as 3D depth sensors (to recognize passenger gestures, body poses and/or body movements). In some embodiments, the Passenger UX includes interactive displays and audio systems to provide feedback and information to the riders, as well as to allow the riders to make requests. The on-board displays may include standard read-only displays, as well as tablet or other touch-based interfaces. In some embodiments, Passenger UX is able to detect which display device a particular passenger is currently paying attention to and provide information relevant to that particular passenger on that display device. In some embodiments, Passenger UX is also able to detect, based on perception of the passenger, whether the passenger needs a reminder (e.g., the passenger is about to miss their stop because they are paying too much attention to a phone screen) or does not need a reminder (e.g., the passenger has already left their seat and is standing near the door ready to exit as soon as the door opens).

(32) In the past, humans relied on intelligent agents such as horses or sled dogs to intelligently handle minute-to-minute navigation of a vehicle along a path, and the human driver was more concerned about overall safety. Similarly, in certain embodiments, one or more autonomous or semi-autonomous shuttles may include a human safety driver or other human attendant. In these embodiments, the shuttle preferably includes an on-board, integrated HMI comprising a Safety Driver UX, configured to inform the safety driver of the current vehicle status and operation mode. In some embodiments, the computer system pilots the vehicle and the safety driver gets involved only when necessary, and in other embodiments the safety driver is the primary vehicle pilot and the computer system provides an assist to increase safety and efficiency. In embodiments with a safety driver, the shuttle preferably includes an AI assistant or co-pilot system, providing multiple HMI capabilities to enhance safety. In preferred embodiments, the assistant or co-pilot includes features such as facial recognition, head tracking, gaze detection, emotion detection, lip reading, speech recognition, text to speech, and posture recognition, among others.

(33) The shuttle preferably includes an External UX for communicating with the outside world, including third-party pedestrians, drivers, other autonomous vehicles, and other objects (e.g., intelligent traffic lights, intelligent streets, etc.).

(34) In one aspect, the system preferably includes an AI Dispatcher (“AID”) that controls the system, sets and adjusts routes, schedules pick-ups and drop-offs, and sends shuttles into and out of service. A system operator communicates with the AI Dispatcher through a Manager Client (“MC”) application that preferably allows the system operator to adjust system parameters and expressed preferences, such as, for example, average wait time, maximum wait time, minimum time to transport, shortest route(s), cost per person mile, and/or total system cost. The AI Dispatcher considers the operator's preferences, models the system, conducts AI simulations of system performance, and provides the most efficient shuttle routes and utilization consistent with those preferences. The AID may perform AI-enabled simulations that model pedestrians, third-party traffic, and vehicles based on environmental conditions including weather, traffic, and time of day. The AID may also be used as a setup utility to determine the optimal location of system stops/stations for deployment, as well as the optimal number, capacity, and type of vehicles for a given system. The AID may likewise be used to reconfigure an existing system or change its settings and configurations over a given timeframe.
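The patent does not publish how the AID weighs the operator's preferences against simulated performance, but one natural reading is a weighted cost over candidate dispatch plans. The sketch below illustrates that idea under toy assumptions; the plan names, metrics, and weights are all hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PlanMetrics:
    """Simulated performance metrics for one candidate dispatch plan."""
    avg_wait_min: float
    max_wait_min: float
    cost_per_person_mile: float

def plan_score(m: PlanMetrics, weights: dict[str, float]) -> float:
    """Lower is better: a weighted sum of the operator's expressed preferences."""
    return (weights["avg_wait"] * m.avg_wait_min
            + weights["max_wait"] * m.max_wait_min
            + weights["cost"] * m.cost_per_person_mile)

def pick_best_plan(candidates: dict[str, PlanMetrics],
                   weights: dict[str, float]) -> str:
    """Return the candidate plan with the lowest weighted cost."""
    return min(candidates, key=lambda name: plan_score(candidates[name], weights))

# Hypothetical operator preferences and simulated plans.
weights = {"avg_wait": 1.0, "max_wait": 0.5, "cost": 2.0}
candidates = {
    "two_shuttles_loop": PlanMetrics(6.0, 15.0, 0.80),
    "three_shuttles_ondemand": PlanMetrics(4.0, 9.0, 1.10),
}
best = pick_best_plan(candidates, weights)
```

In a real dispatcher the metrics would come from the AI simulations the paragraph describes rather than being hard-coded, and the weights would track the operator's adjustments in the Manager Client.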

(35) The shuttles according to the present system and method can operate in a wide variety of lighting and weather conditions, including dusk/dawn, clear/overcast, day/night, precipitation, and sunny conditions. Preferably, the system considers time of day, weather, traffic, and other environmental conditions to provide the desired level and type of service to travelers. For example, the system may dynamically adjust service parameters to reduce traveler wait times during inclement weather or at night, or react dynamically to address traffic conditions.

(36) One example aspect disclosed herein provides a vehicle comprising: a propulsion system delivering power to propel the vehicle; a passenger space that can accommodate a passenger; first sensors configured to monitor an environment outside the vehicle; second sensors configured to monitor the passenger space; and a controller operatively coupled to the first and second sensors and the propulsion system, the controller including at least one GPU including a deep learning accelerator that, without intervention by a human driver: identifies a passenger to ride in the vehicle; controls the vehicle to take on the identified passenger; navigates the vehicle including planning a route to a destination; and controls the vehicle to arrange for the identified passenger to leave the passenger space at the destination.


System And Method For Determining Optimal Lidar Placement On Autonomous Vehicles

DOCUMENT ID: US 11474255 B2

DATE PUBLISHED: 2022-10-18

ASSIGNEE INFORMATION

NAME: THE REGENTS OF THE UNIVERSITY OF MICHIGAN


Abstract

In one embodiment, example systems and methods related to a manner of optimizing LiDAR sensor placement on autonomous vehicles are provided. A range-of-interest is defined for the autonomous vehicle that includes the distances from which the autonomous vehicle is interested in collecting sensor data. The range-of-interest is segmented into multiple cubes of the same size. For each LiDAR sensor, a shape is determined based on information such as the number of lasers in each LiDAR sensor and the angle associated with each laser. An optimization problem is solved using the determined shape for each LiDAR sensor and the cubes of the range-of-interest to determine the locations to place each LiDAR sensor to maximize the number of cubes that are captured. The optimization problem may further determine the optimal pitch angle and roll angle to use for each LiDAR sensor to maximize the number of cubes that are captured.


BACKGROUND

(2) LiDAR is widely used in autonomous vehicles for a variety of purposes such as object detection and navigation. One reason that LiDAR is popular for autonomous vehicles is that it is highly precise and the point clouds from LiDAR offer a rich description of the environment of the autonomous vehicle.

(3) While using LiDAR is popular, there is no agreed-upon standard for the number of LiDAR sensors that may be used on an autonomous vehicle or where such sensors should be placed on the autonomous vehicle. For example, one manufacturer equips their autonomous vehicles with four Velodyne-16 LiDAR sensors, placing two LiDAR sensors at each side of the roof of the autonomous car with a roll angle between them. Another manufacturer installs one Velodyne-64 LiDAR sensor on the roof of the autonomous vehicle.

(4) While many manufacturers use different LiDAR sensor configurations, there is no clear answer as to which LiDAR sensor configuration (and how many LiDAR sensors) is optimal for autonomous vehicles. In general, more LiDAR sensors on an autonomous vehicle may provide more precise information to the autonomous vehicle. However, such information may be redundant due to overlapping coverage areas. Moreover, there is a cost associated with each additional LiDAR sensor, due both to the actual cost of the sensor and to the computational cost of processing the data it generates.
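The abstract describes solving an optimization problem over cubes in the range-of-interest to maximize coverage. The patent doesn't disclose its solver, but the classic greedy max-coverage heuristic below sketches the idea on a toy grid; the candidate placement names and their coverage sets are hypothetical, not from the patent.

```python
from itertools import product

def greedy_lidar_placement(coverage: dict[str, set[tuple[int, int, int]]],
                           num_sensors: int) -> list[str]:
    """Greedy max-coverage: repeatedly pick the candidate placement that
    covers the most cubes not yet covered by earlier picks."""
    covered: set[tuple[int, int, int]] = set()
    chosen: list[str] = []
    for _ in range(num_sensors):
        best = max(coverage, key=lambda c: len(coverage[c] - covered))
        if not coverage[best] - covered:
            break  # no remaining candidate adds new coverage
        chosen.append(best)
        covered |= coverage[best]
    return chosen

# Toy range-of-interest: a 4x4x2 block of unit cubes.
cubes = set(product(range(4), range(4), range(2)))

# Hypothetical coverage sets for three candidate (position, pitch) placements,
# standing in for the shapes the patent derives from laser counts and angles.
coverage = {
    "roof_center_pitch0": {c for c in cubes if c[0] < 3},
    "roof_left_pitch10":  {c for c in cubes if c[1] < 2},
    "roof_right_pitch10": {c for c in cubes if c[0] >= 2},
}
picks = greedy_lidar_placement(coverage, num_sensors=2)
```

The greedy heuristic is only an approximation of the patent's optimization; an exact formulation would typically be an integer program over placement, pitch, and roll variables.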


Method And System For Determining Correctness Of Lidar Sensor Data Used For Localizing Autonomous Vehicle

DOCUMENT ID: US 11474203 B2

DATE PUBLISHED: 2022-10-18

- no assignee data provided


Abstract

Disclosed herein are a method and system for determining the correctness of Lidar sensor data used for localizing an autonomous vehicle. The system identifies one or more Regions of Interest (ROIs) in the Field of View (FOV) of the Lidar sensors of the autonomous vehicle along a navigation path. Each ROI includes one or more objects. Further, for each ROI, the system obtains Lidar sensor data comprising one or more reflection points corresponding to the one or more objects. The system forms one or more clusters in each ROI. For each ROI, the system identifies a distance value between the one or more clusters projected on a 2D map of the environment and the corresponding navigation map obstacle points. The system compares the distance values between the clusters and the obstacle points, based on which the correctness of the Lidar sensor data is determined. In this manner, the present disclosure provides a mechanism to detect the correctness of Lidar sensor data for navigation in real time.


BACKGROUND

(2) Autonomous vehicles may be equipped with various types of sensors, such as Lidar, sonar, radar, and cameras, to detect objects in their environment. Autonomous vehicles that use Lidar as the main sensing device require correct Lidar data points at all times in order to localize the vehicle on the navigation map. This is a continuous process, but while moving, the vehicle may face many uncertainties in its vision. For example, snowfall may partially block the Lidar's vision and cause errors in the received Lidar data points. Similarly, heavy water drops on the Lidar screen may cause a lens effect, or a falling leaf wrapped over the Lidar face for a few seconds may lead to errors in the collected Lidar data points. Further, the performance of the Lidar sensor suffers when weather conditions become adverse and the visibility range decreases.

(3) Existing mechanisms provide a comprehensive model for detecting objects in the vehicle's environment. These mechanisms adjust one or more characteristics of the model based on received weather information to account for the impact of actual or expected weather conditions on one or more of the sensors. However, the existing mechanisms do not disclose real-time correction of Lidar point data errors caused by natural phenomena, physical damage, or alignment issues that hinder the Lidar's vision.
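The abstract's core check, comparing clusters projected onto a 2D map against known obstacle points, can be sketched simply. The thresholds, coordinates, and function names below are hypothetical illustrations, not from the patent; a real system would first cluster raw reflection points per ROI.

```python
import math

def nearest_distance(point: tuple[float, float],
                     obstacles: list[tuple[float, float]]) -> float:
    """Euclidean distance from a projected 2D cluster point to the
    closest obstacle point on the navigation map."""
    return min(math.dist(point, o) for o in obstacles)

def lidar_data_correct(cluster_centroids: list[tuple[float, float]],
                       map_obstacles: list[tuple[float, float]],
                       threshold_m: float = 0.5) -> bool:
    """Deem the Lidar data correct if every projected cluster lies within
    threshold_m of some obstacle point on the navigation map."""
    return all(nearest_distance(c, map_obstacles) <= threshold_m
               for c in cluster_centroids)

# Hypothetical map obstacle points and projected cluster centroids (meters).
map_obstacles = [(2.0, 0.0), (5.0, 1.0), (8.0, -1.0)]
good_clusters = [(2.1, 0.1), (4.9, 1.0)]
bad_clusters = [(2.1, 0.1), (6.5, 3.0)]  # second cluster matches no obstacle
```

Here `lidar_data_correct(good_clusters, map_obstacles)` passes while `bad_clusters` fails, which is the kind of real-time consistency signal the patent uses to flag degraded Lidar vision.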



