Teleoperation: A great deal more than hands on the wheel

Ottopia Team
Ottopia Technologies
5 min read · Dec 22, 2020


Self-driving technology always seems to be just beyond our grasp. Every year, the leaders of this technological revolution say "soon." The reason is a seemingly never-ending list of edge cases: situations that autonomy is incapable of handling. Hence the critical need, now and always, for safe teleoperation.

Teleoperation is the technology that enables a human to remotely monitor and control an autonomous vehicle (AV). It is based on a chain of high-tech components, ranging from high-speed cameras to modems to the control station itself.

Those who know of teleoperation tend to think of it as direct control by a remote operator: "hands on the wheel" driving, in which the operator provides steering, acceleration, and braking commands directly to the vehicle.

However, in the words of Master Yoda from Star Wars, "there is another": indirect control. This method eliminates the need for direct commands. Instead, it allows a remote operator to issue commands at a higher level of abstraction, such as "follow this path" or "stop and wait," and lets the vehicle execute the action by itself.
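
To make the distinction concrete, here is a minimal sketch in Python of what the two levels of abstraction could look like. The message types are hypothetical, invented for illustration; they are not Ottopia's actual interfaces.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Direct control: the operator streams low-level actuation commands,
# each of which the vehicle applies immediately.
@dataclass
class DirectCommand:
    steering_angle_deg: float
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0

# Indirect control: the operator issues one high-level instruction,
# and the autonomy stack plans and executes the actuation itself.
@dataclass
class FollowPathCommand:
    waypoints: List[Tuple[float, float]]  # (x, y) in the vehicle's map frame

@dataclass
class StopAndWaitCommand:
    reason: str  # e.g., "construction zone ahead"
```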

It is shortsighted to think of teleoperation as direct control alone. The technology must encompass both direct and indirect methods of control.

The scenario

Imagine hailing an autonomous taxi on your way home. You relax in the back and check unread messages on your phone as the vehicle drives up to a construction zone.

Outside the vehicle, it's a busy afternoon as workers leave their downtown offices and begin their commutes home. Pedestrians and cyclists zip around, not always where they should. The construction site forces some pedestrians and most cyclists to weave around cars and cones. Orange cones, some mutilated or knocked over, haphazardly "create" a new lane that contradicts both the old lane markings from last spring and the freshly painted ones from a few months back.

As a safety worker holds up a sign and gestures with an upraised hand to stop, the sign catches the setting sun at just the right angle, and the glare renders the word "STOP" effectively illegible.

The speed limit on this public road may be 30 miles per hour, but only a few yards past the construction site, a motorcycle zips by at 40 miles per hour. (In just 200 milliseconds, that motorcyclist travels nearly 12 feet.)
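
That figure is easy to verify with a quick back-of-the-envelope calculation, sketched here in Python:

```python
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_mph = 40
latency_s = 0.200  # 200 milliseconds

feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR  # ~58.7 ft/s
distance_ft = feet_per_second * latency_s                       # ~11.7 ft
print(f"{distance_ft:.1f} feet traveled in {latency_s * 1000:.0f} ms")
```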

Construction zones can be hard to navigate for a human and impossible for autonomy

Inside the vehicle, high-end hardware senses this entire environment. With LiDAR sensors, radars, cameras, and multiple other sources of information, the vehicle is prepared to respond to almost anything that crosses its path.

The LiDAR generates 1.3 million data points per second, effectively giving the vehicle a 360-degree ability to detect moving objects more than 50 meters away, sometimes even before a human in the driver's seat could. A System-on-Chip (SoC) capable of 320 trillion operations per second then processes this data and performs the complex calculations needed to understand the situation instantly.
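
To put those numbers in perspective, a rough calculation (using the figures above; the 100-millisecond planning cycle is an assumption for illustration) shows the scale the onboard hardware works at:

```python
points_per_second = 1.3e6    # LiDAR data points per second (from above)
soc_ops_per_second = 320e12  # SoC operations per second (from above)
cycle_s = 0.1                # assumed 100 ms planning cycle, for illustration

points_per_cycle = points_per_second * cycle_s          # 130,000 points
ops_per_point = soc_ops_per_second / points_per_second  # ~246 million ops

print(f"{points_per_cycle:,.0f} LiDAR points per 100 ms cycle")
print(f"~{ops_per_point:,.0f} SoC operations available per point")
```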

This technology can take the vehicle up to the construction site, and away from it, all on its own. However, it may not know how to navigate through the complex scenario described above. Recognizing this, the vehicle begins to slow down and triggers a request for help from a remote operator, who promptly establishes a connection to the vehicle.

The solution

At this point, the inherent limitations of teleoperation make it exceedingly difficult, indeed dangerous, for a remote operator to provide assistance through direct control.

The challenge facing this autonomous vehicle has little to do with awareness of its surroundings. In fact, it is the awareness of the complexity of its surroundings that prompts it to request help in the first place. The system errs on the side of caution by requesting assistance from a human operator.

In simple terms: it would be a mistake for the human operator to take direct control of the vehicle and negate the cutting-edge perception and command execution of the autonomy stack. The operator could easily handle this situation were it not for the added latency of the remote connection, which degrades the ability to react and navigate safely.

This is a case where indirect control methods are the safest and most effective way for the remote operator to resolve the situation. The operator therefore does not simply receive video and audio from the vehicle and execute controls. Indirect control requires a kind of translation between what the vehicle sees and interprets (like the path that the orange cones seem to be delineating) and the remote operator.

By providing this translation, the remote operator can augment the vehicle's autonomy stack with their own understanding of the situation. One example is path choice, in which the vehicle sends all executable paths for the teleoperator to choose from. Another is path drawing, in which the teleoperator draws out a desirable path for the vehicle to execute (sketched below).

Picking a new path is as simple as pushing a button
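
Here is a minimal sketch of what such an exchange could look like, with hypothetical message types invented for illustration; this is not Ottopia's actual protocol.

```python
from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]  # (x, y) in the vehicle's map frame

# Path choice: the vehicle proposes only paths it already knows it can
# execute; the operator's job is reduced to a single safe selection.
@dataclass
class PathChoiceRequest:
    candidate_paths: List[List[Waypoint]]  # computed on the vehicle

@dataclass
class PathChoiceResponse:
    chosen_index: int  # the operator picks one with a button press

# Path drawing: the operator sketches waypoints on the station's map or
# video view; the vehicle validates the path before executing it.
@dataclass
class DrawnPathCommand:
    waypoints: List[Waypoint]
```

Either way, the operator supplies intent, and the actuation stays on the vehicle.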

The vehicle receives these commands and combines them with the data it has gathered. The result is a human-machine hybrid decision: when implemented, the vehicle can still react in real time to moving or undetected obstacles with virtually zero latency, while knowing that the choice it made is not undermined by sensor shortcomings.

It may be clear by now how such a method follows the safety principles needed for proper teleoperation.

  • The vehicle's execution takes place at the edge. Although commands come from a trained remote operator, all calculations required to navigate the vehicle past the construction site come from vehicle-side hardware and software.
  • System degradation is minimized: all sensors and safety measures remain engaged at all times.
  • Throughout the intervention, the vehicle's algorithms are allowed to respond to any obstacles, moving objects, or new events occurring in real time, regardless of the initial input from the remote operator (see the sketch after this list).
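
A minimal sketch of that third principle, assuming placeholder hooks (obstacle_detected and follow stand in for the autonomy stack's own perception and control, which are not public):

```python
import time
from typing import Callable, List, Tuple

Waypoint = Tuple[float, float]  # (x, y) in the vehicle's map frame

def execute_operator_path(
    path: List[Waypoint],
    obstacle_detected: Callable[[], bool],  # placeholder: onboard perception
    follow: Callable[[Waypoint], bool],     # placeholder: low-level control
    cycle_s: float = 0.05,                  # assumed 50 ms control cycle
) -> bool:
    """Follow an operator-approved path while onboard safety stays engaged."""
    for waypoint in path:
        # The operator chose the plan, but every control cycle still runs
        # the vehicle's own checks; perception is never disengaged.
        while not follow(waypoint):         # True once the waypoint is reached
            if obstacle_detected():
                return False                # stop and re-request assistance
            time.sleep(cycle_s)
    return True
```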

These three principles make indirect methods of control the best way to provide remote assistance to an autonomous vehicle. In use cases like the one above, autonomous vehicles face highly dynamic yet structured environments. Public roads come with higher levels of complexity, unpredictability, and liability.

It is no surprise that, for such scenarios, our customers and partners prefer an integration with their autonomy stack that enables indirect methods of control, even when an unparalleled network and video layer is available. Fortunately, some companies do "have it all."


Ottopia Team is the voice of the dedicated group of professionals behind Ottopia - https://ottopia.tech