Hello, everyone. And welcome to Robotics Today. My name is Luca Carlone. And I'm excited to introduce
Naira Hovakimyan as our speaker today. Naira's going to talk about
a very hot topic, which is safe learning and control. After her talk,
we are also going to have a panel discussion. And today, we're very excited
to have two wonderful guest panelists, Claire
Tomlin from UC Berkeley and Jonathan How from MIT. Naira is currently a professor
of mechanical science and engineering at the
University of Illinois at Urbana-Champaign. She got her PhD in
physics and mathematics from the Institute of
Applied Mathematics of the Russian Academy
of Sciences in Moscow. And before joining the
faculty at UIUC in 2008, she spent some time as
a research scientist at Stuttgart
University in Germany. She was [INAUDIBLE] in
France and at Georgia Tech. And she was also on the faculty at Virginia Tech. In 2015, she was also
named inaugural director of the Intelligent
Robotics Lab at UIUC. Naira has been a pioneer
in adaptive control. She's been providing a number
of [INAUDIBLE] contributions to control optimization,
autonomous system, neural networks, game theory. And she also did extraordinary
contribution to applications in aerospace
robotics, agriculture, biomedical engineering, elderly
care, among many other fields. She has co-authored
two books, six patents, and more than 400 publications. Naira has received
multiple awards. I will try to
sample a few of them just to give you
the flavor here. In 2015, she got
the AIAA Mechanics and Control of Flight award. In 2014, she was awarded
the Humboldt prize for her lifetime achievements. In 2015, she got the IEEE
Control System Society Award for Technical Excellence
in Aerospace Controls. In 2019, she got the AIAA
Pendray Aerospace Literature Award. She's a Fellow and life member of AIAA and a Fellow of IEEE. And her work in robotics
for elderly care was featured in the New York
Times, Fox TV, and CNBC. Besides being an
excellent researcher, Naira is also an
excellent mentor. In 2015, she was awarded the
UIUC Engineering Council Award for Excellence in Advising. And she's also co-founder and chief scientist of IntelinAir, a company working on drone technologies to redesign the future of
agriculture and farming. Naira, thanks for being here. And welcome to Robotics Today. Thank you for hosting me today. I'm very honored to be here. So let me share my screen. OK. So I guess I'm good to go. Thanks, again, for
having me here. It's a great honor for me
to have this opportunity to present some
of our recent work that may have impact for
robotics for this audience. So I will talk today about
safe learning and control with L1-adaptation. And to get started, we just
have these two animations here that show how the recent
applications of reinforcement learning methods have created
great impact across not just our community but
widely across the globe. Like, the animation on
the left from DeepMind-- I checked today-- has more
than 10 million views. That basically
shows how the agent learns to run, jump, climb
without having any prior model. It learns from its own mistakes,
collects data, fails again, collects data, learns. It has a reward function
which keeps it going-- moving forward and so on. On the right, you see
the Rubik's Cube-- that I'm sure many of us have
played with in our younger days. Today, it can be reconfigured with a robotic arm. So while these applications
seem very impressive, and they show what artificial intelligence methods, or machine learning methods as we like to say, can achieve from data and learning methods, obviously we cannot afford having these methods on safety-critical systems. Safety-critical systems
will not forgive mistakes. We cannot allow [INAUDIBLE],
collecting data, trying, learning, failing. So every crash here
can be catastrophic. Human deaths are not allowed. So accidents can
be very expensive. And we have experience
here of flying the Learjet. We are not just talking abstractly here. The airplane on
the left you will see through this
presentation numerous times. The drone here is a
picture taken in our lab. On the right, we have the [INAUDIBLE] that my student is working on
in the company. So safety-critical applications can punish us severely if we allow ourselves to play with them assuming that we can make a mistake, collect the data, and learn on the go. So let's look at what happens in a typical setup when we try to collect data and use an optimized
controller, like this. Typically, we have a model-learning block that learns the model from the data. There is an optimization that produces a controller. And the controller drives the system. So what can happen is that the external disturbances or the modeling errors can destabilize the system. What one needs to understand is that safety must be built into the control architecture from the beginning, by design. So this means that we need an augmentation with a safety controller that ensures the safety of the system throughout the learning process. So no matter what the mistakes are in data collection and learning, safety must always be there, so that every new piece of knowledge acquired in this process, subject to errors, failures, and so on, will not let the system be destabilized. So, looking at this type of development, what do we need from the safety controller?
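As a toy sketch of that loop (a hypothetical 1-D system invented for illustration, not any platform from the talk; all names and gains here are made up), the safety augmentation sits between the learned-model controller and the plant at every step:

```python
import numpy as np

# Toy sketch of the loop described above: a model learner, a controller
# optimized against the learned model, and an always-on safety
# augmentation so that learning errors cannot destabilize the system.

DT = 0.01

def plant(x, u):
    # True (unknown to the learner) unstable plant: x' = a*x + u + d
    a_true, disturbance = 1.5, 0.05
    return x + DT * (a_true * x + u + disturbance)

class ModelLearner:
    """Least-squares estimate of the unknown coefficient a from data."""
    def __init__(self):
        self.a_hat = 0.0
        self.xs, self.ys = [], []
    def update(self, x, u, x_next):
        self.xs.append(x)
        self.ys.append((x_next - x) / DT - u)   # observed a*x + d
        X, Y = np.array(self.xs), np.array(self.ys)
        if float(X @ X) > 1e-9:
            self.a_hat = float(X @ Y) / float(X @ X)

def optimized_control(x, a_hat, k=4.0):
    # Controller produced from the learned model: cancel + stabilize.
    return -a_hat * x - k * x

def safety_augmentation(x, u, k_safe=2.0, u_max=10.0):
    # Safety controller, active throughout the learning process:
    # extra damping plus an actuator limit.
    return float(np.clip(u - k_safe * x, -u_max, u_max))

x, learner = 1.0, ModelLearner()
for _ in range(1000):
    u = safety_augmentation(x, optimized_control(x, learner.a_hat))
    x_next = plant(x, u)
    learner.update(x, u, x_next)
    x = x_next
```

Even while the model estimate is wrong early on, the safety layer keeps the closed loop stable, which is the property being asked of the safety controller here.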
From the safety controller, we typically need certificates of performance and robustness, which include transient performance, steady-state performance, time-delay margin, and disturbance rejection: the things we learn in a junior-level control class, the alphabet of control technology. So the L1-adaptive
control architecture that we developed over the last 15 years has already proven itself on a variety of big platforms. As I said, I'll show some Learjet flights. We have commercialized it for the Evolution autopilots of Raymarine; it went into the hydraulic pumps of Caterpillar; we have tests at Statoil, on the drone technology of IntelinAir, and in many other industrial applications. It has an architecture in which
estimation is decoupled from the control loop. So we are able to tune for performance and robustness in a very systematic way, to quantify the robustness margins and performance bounds a priori, and to maintain hold of those throughout operation. So when [INAUDIBLE] the
L1-adaptive controller [INAUDIBLE] type
of a model learning controller that I described. So what we are
able to do-- we are able to retain the key features
of performance and robustness of L1-adaptive controller
yet at the same time benefit from the
versatility offered by these machine
learning methods. So this is what we will explore
through this presentation. And this is what kind of makes
up most of our current research program at Illinois these days. So L1-adaptive control theory, as I said, provides a decoupling between estimation and control, and helps us establish the type of performance bounds that we show here. So there is a desired
system trajectory that one would like to follow. There is a reference system: a hypothetical, non-implementable system that describes the best performance one would achieve with the L1-adaptive controller if the system uncertainties were known. Between this reference system and the actual system, the error can be quantified as inversely proportional to the square root of the adaptation rate, while the deviation of the reference system from the desired system can be quantified as proportional to the filter bandwidth, augmented with an exponentially decaying error that depends on the initialization error.
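Written schematically (my shorthand here, not the exact theorem statement from the papers), the two bounds just described take the form:

```latex
\|x - x_{\mathrm{ref}}\|_{\mathcal{L}_\infty} \le \frac{\gamma_1}{\sqrt{\Gamma}},
\qquad
\|x_{\mathrm{ref}} - x_{\mathrm{des}}\|_{\mathcal{L}_\infty} \le \gamma_2(\omega) + \kappa\, e^{-\lambda t}
```

where \(x\), \(x_{\mathrm{ref}}\), and \(x_{\mathrm{des}}\) are the actual, reference, and desired trajectories, \(\Gamma\) is the adaptation rate, \(\omega\) is the filter bandwidth, and the exponential term captures the decaying effect of the initialization error.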
So with this decoupling, we are able to tune the performance and robustness of the system in a systematic way. It is this architectural
versatility of L1-adaptive controller that helped us
to achieve quick tuning across different applications
in different industries and to achieve transition to
different industrial platforms. Here is a timeline of how our development went. The first papers appeared back at the 2006 American Control Conference. And in the same year, at the AIAA Guidance, Navigation, and Control Conference, we had the first flight test with the Naval Postgraduate School and their Rascal UAVs with [INAUDIBLE] autopilot. We were doing augmentation
of their [INAUDIBLE] to fly aggressive
path-following trajectories. Later, in 2007,
we got a NASA grant to fly their AirSTAR
subscale commercial jet. We were able to successfully
[INAUDIBLE] plane into [INAUDIBLE] to give
the pilots [INAUDIBLE] recovery opportunities from
[INAUDIBLE] conditions. That led to joint
publication opportunities with Boeing-Raytheon
coauthors, giving lots of visibility
and opportunities to write papers with
other companies. So here is where we got
collaborations with Statoil, Raymarine, Caterpillar,
Eurocopter, [INAUDIBLE]-- all these companies got
attracted to the technology. And there were a lot of
transition opportunities. Then we got the opportunity
also to test on real airplanes. So while NASA was testing it on a 5.5% subscale commercial airplane, the Learjet and the F-16 were already real airplanes, with a pilot inside, that we were able to fly in 2015, '16, and '18. And now, with the explosion of these machine learning methods in the robotics industry, we were also able
we were able also to take this framework of
safe learning and control and to move into
robotics applications. So before I go on to show
what we do with safe learning and control for robotics, I want
to show some of these flight tests of Learjet, because
these are very interesting and will keep everybody
entertained somehow. So with Learjet-- so when
we go to fly the Learjet-- so we have basically pilots
and flight test engineer. The Learjet is a Calspan vehicle
[INAUDIBLE] Variable Stability System configuration,
where they can basically inject some accidental
configurations that they design with us carefully to validate
these robustness margins that we claim theoretically. And they are able basically
with their safety switches to take over in case-- if our theoretical claims
don't get verified, they can save the aircraft
with their safety switches. And that's why they
take the risk to flying through these configuration. So they're going to test-- in the next movie that
I'm going to show, they're going to test for
handling qualities, flying qualities. Basically, when you fly
the airplane in a normal configuration, you
have [INAUDIBLE] side-- that's flying
qualities level one. When you have some
[INAUDIBLE] and you have [INAUDIBLE]
already degradation, it's flying qualities
level two, and so on. So the pilots are put
into a configuration when it's very shaky. And they have the [INAUDIBLE]
rating scale in their hands, where they need to read and
assess whether it's flying qualities level one, two, four-- from which configuration,
how much they recover. And because the
situation is very shaky, they are not able to read
from the piece of paper. They're asking the L1 to come
on so that the airplane can be stabilized so that
they can read basically in which configuration they
are and how much they recover. So just listen. So I gave a little bit of a preview
so that when you're listening you manage to catch up,
because it goes pretty fast. It's shaky. It's accidental. So it goes a bit fast. That's why I
[INAUDIBLE] preview. [VIDEO PLAYBACK] [ENGINE HUMMING] - Constantly overshooting
my desired bank though. Jason is probably
hating life right now. - Yep. - You're right around 27. I'd recommend you just do
the task with L1 on and-- - OK. - --don't do both tasks. Yeah. - Got it. All right. So you-- - You don't have enough time. - You did not get
adequate on that one. - OK. - OK? So if you want to run
through a CHR real quick-- you did not get adequate. So we're starting in-- - Can I get L1 on as we do the-- [LAUGHS] - Yep, sorry. - [INAUDIBLE] please. - Yep. Correct. - There's your answer. - No. - There's your answer, right? - All right. L1's coming on in 3, 2, 1-- now. L1 is on. - Thank you. - Rush, if you didn't
ask, I was going to. [LAUGHS] [END PLAYBACK] And now it's a landing
scenario, basically. Landing is very challenging. For example, NASA never
decided to land with L1. But with Learjet, they
agreed to land with, again, some type of
abnormal conditions. [VIDEO PLAYBACK] - 200 feet. - Copy. - 100 feet. - Copy. - 50 feet. Shallow. Looking good. - There's the ground effect. You see that? - Yep. [BEEPING] - I've got the airplane. - Your aircraft. - Looked uneventful to me. - Very uneventful. [END PLAYBACK] So basically, this
was the 2015 deployment. Then, in 2016, we got the F-16, which, unfortunately, we cannot show or talk about. In 2018, we got
because of the F-16. In 2018, we got
the Learjet again. And here, they
implemented an accident from 1967, which
is a lifting body incident, when the aircraft
goes into [INAUDIBLE] mode configuration. And here, in this
accident of 1967, luckily, the pilots survived. They had the test data
from this accident, which they are able to inject
into the Learjet and test it. So these tests are extremely valuable and, in some sense, prescient. For training the pilots,
training the students-- this experience is extremely
invaluable for everybody involved in this process. Now just listen
to the recordings. [VIDEO PLAYBACK] [BEEPING] - I can't even
control the airplane. - I got the airplane. - You have the airplane. [END PLAYBACK] The student on the left is
my student sitting there. [INAUDIBLE] [VIDEO PLAYBACK] [ENGINE HUMMING] - Hey, task in 3, 2, 1-- now. So you can feel there, in
high frequency, I made it-- some high frequency
inputs excited the roll. Lower frequency-- the roll
overshoot or oscillation tendency is less. - Sounds good. - There's still some there. - Pitch-- I'm having no issues. Fine tracking and gross
tracking are very good in pitch. - All right. And I have [INAUDIBLE] complete. - Now they're going
to engine out test. [ENGINE HUMMING] - All right. Recording on in ready,
ready, [INAUDIBLE].. - Running. OK. Power is coming back in-- - My hands are free. - --3, 2, 1-- now. - Control is fixed. - And recover, please. - OK. And recovering. - The power is back. [ENGINE HUMMING] - I'm ready to try
this now with you-- with a pilot correcting for it. So-- - OK. - And I'll give it about
[INAUDIBLE] reaction time. I am on conditions. - All right. And then recording on my call. Ready, ready, [INAUDIBLE]. - On. - OK. Power will be left throttle-- - OK. - --in 3, 2, 1-- now. - One potato. And recovering. - And recording off. - OK. Recording off. Matching throttles. - And with L1, they come
to three-degree degradation always. - OK. I'm ready for the
recording on ready. Ready, [INAUDIBLE]. - On. - OK. Left throttle coming
back in 3, 2, 1-- now. - All right. Now I'm touching the controls. - Nice demonstration. - Yeah, check that out-- about
three degrees over the left bank. - Speeding in the
rudder to match. I can feel that. It is descending a
little bit, but it's-- - Yeah, I'm descending
a bit, but that would be easy to compensate for. - Yeah. OK. I think you got some
good data there. - All right. Recording off. - Matching power. Recordings coming off. [END PLAYBACK] So this is what I tell the
students-- that in the real world, there is no zero, right? Three degrees is pretty good after what we saw: 20 degrees without the pilot; with the pilot, it was 12 degrees. And with L1, without any pilot input, it was three degrees. So zero is just an artificial number, right, making all the math work. So three degrees in the real world
is a pretty good achievement. And we got some press,
obviously, from it. So having these demonstrations,
naturally, the next thing that was coming already,
historically, into our lab were the robotics applications. So the first
robotics application we got-- it was interesting. It was this elderly
care grant from NSF that also got us some press in the New York Times. It's interesting that this was kind of the first opportunity that brought us there, and it nicely rings a bell with a funny cartoon that will now come up and entertain everybody. So deploying these drones in a home environment was interesting, because we could
kind of bring in VR technology, and study people's
perception of it, and kind of compare it to Cinderella, which was filmed maybe a few decades ago, and ask questions: how safe does she feel in the presence of birds showering her or helping her with household tasks, like making her bed? Because this brings up lots of
interdisciplinary research: how people feel safe in the presence of these robots, now that we talk about package-delivery tasks. So these interdisciplinary research problems make our life very interesting; the questions that we can ask, and train our students on, become very important and far-reaching in their applications. So, as I say, for this seminar
series, I wanted just to pack a number
of problems where we look at how to do safe learning
and control simultaneously. So the first problem
that I want to show here is what we did in collaboration
with Evangelos Theodorou from Georgia Tech. We integrated the L1-adaptive controller with his model predictive path integral (MPPI) controller, which provides a framework for solving nonlinear model predictive control with complex constraints in near real time. So we integrated the architecture and used it in his AlphaPilot project environment.
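To give a flavor of the MPPI side of that integration (a minimal sketch on a hypothetical 1-D double integrator, invented for illustration -- not the AlphaPilot racing stack, and without the L1 augmentation):

```python
import numpy as np

# Minimal MPPI sketch on a toy 1-D double integrator: sample perturbed
# control sequences, roll them out, and update the nominal controls with
# an exponentially cost-weighted average (the path-integral update).
rng = np.random.default_rng(0)
DT, H, K, LAM, SIGMA = 0.05, 30, 256, 1.0, 1.0

def rollout_costs(x0, U_batch):
    """Total cost of each candidate control sequence (shape (K, H))."""
    pos = np.full(len(U_batch), x0[0])
    vel = np.full(len(U_batch), x0[1])
    cost = np.zeros(len(U_batch))
    for t in range(H):
        u = U_batch[:, t]
        pos = pos + DT * vel
        vel = vel + DT * u
        cost += pos**2 + 0.1 * vel**2 + 1e-3 * u**2
    return cost

def mppi_step(x0, U_nom):
    eps = rng.normal(0.0, SIGMA, size=(K, H))    # control perturbations
    costs = rollout_costs(x0, U_nom + eps)
    w = np.exp(-(costs - costs.min()) / LAM)     # path-integral weights
    w /= w.sum()
    return U_nom + w @ eps                       # weighted update

x = np.array([2.0, 0.0])         # start 2 units from the origin, at rest
U = np.zeros(H)
for _ in range(100):
    U = mppi_step(x, U)
    x = x + DT * np.array([x[1], U[0]])          # apply first control
    U = np.roll(U, -1)                           # receding horizon shift
    U[-1] = 0.0
```

In the paper's architecture, an L1 loop would additionally compensate, between replans, for the mismatch between these rollout dynamics and the true system -- that is the part this sketch leaves out.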
Here is the environment that I want to show. The paper is accepted at this year's IROS Conference. Let me run the movie here and
explain what's happening here. So what you see here in red is
when they run the AlphaPilot just with MPPI. Basically, it takes
them this long time-wise to finish the lap. When they added the
L1 on the top of it, they finished the lap
in a much faster time. And the green segments show that in these cases MPPI didn't survive without L1. And with L1, they were able
to take these few other cases as well. So here is that drone
racing environment. And now, as we are talking,
actually, Georgia Tech has reopened their
campus on June 18. The students are working
to fly this L1 MPPI architecture on a real drone in
their [INAUDIBLE] environment. Since the campus reopening was happening slowly, we didn't get the real drone
footage from their lab. Otherwise, most likely,
we would have had, today, the real drone footage
and not just the Goggles environment. But we are very excited by
this work with Evangelos. And hopefully, we will have the
real drones flying very soon. If not for this pandemic, we
would have had it, for sure. Similarly, again with Evangelos, we have integrated L1 with differential dynamic programming and model learning. So basically, the model
learner continuously improves the knowledge
of the model. Based on that, the
trajectory optimization does a better optimization. And in that process,
as the model learning and the trajectory
optimization improve, L1 ensures this safe control
and safe guaranteed performance without losing the
robustness and so on. So we demonstrated this in a simulation environment on this inverted cart-pole, as you see, both prior to learning and after learning. So when L1 is on, you see better
performance in both cases. Here is the cost function
plot on the right, where you see that with the
help of learning, actually, by the end of the process,
you achieve the same value of the cost function. But during the learning process, L1 helps you to have better robustness and better performance, while without L1 you have a much higher value of the cost function. So the contribution of
L1 for the transient here is crystal clear-- that during the
transient, it does its job by ensuring this safe,
guaranteed performance. So moving forward, we wanted to show, for example, whether Gaussian processes could be safely integrated with the L1 architecture. Now, why do we do that? As data is being accumulated, can we simultaneously use this accumulated data to learn the model without any
persistently exciting signal? A Gaussian-process Bayesian learner can learn the model from just a few data points. If we store them in the kernel matrix, can we learn it? And can we use this learned model, for example, for better planning purposes, without any prior knowledge?
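As a toy illustration of that idea (a hypothetical 1-D function, not the quadrotor model from the paper), a Gaussian-process posterior built from a handful of stored points through the kernel matrix can already approximate an unknown mapping:

```python
import numpy as np

# Toy Gaussian-process regression sketch: learn an unknown scalar map
# from only a few stored data points via the kernel (Gram) matrix.
# Everything here is illustrative.

def rbf_kernel(a, b, length_scale=0.5):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length_scale**2))

f = lambda x: np.sin(2 * x)                       # "unknown" dynamics term
X_data = np.array([-1.0, -0.55, -0.1, 0.35, 0.8, 1.3])  # few data points
y_data = f(X_data)

X_query = np.linspace(-1.0, 1.3, 50)              # where planning needs f
K = rbf_kernel(X_data, X_data) + 1e-8 * np.eye(len(X_data))  # + jitter
alpha = np.linalg.solve(K, y_data)                # weights from kernel matrix
mean = rbf_kernel(X_query, X_data) @ alpha        # GP posterior mean

max_err = float(np.max(np.abs(mean - f(X_query))))
```

The same posterior also provides a variance, which is the kind of signal that lets an architecture judge when the learner has seen enough data to take over.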
Again, this paper was published and presented recently at a conference that we lost our opportunity to travel to. We simulated it for a quadrotor model. So what we show here
as a demonstration, basically-- that
in the beginning, where we don't have enough data,
we see the L1 contribution. The minute enough data
has been accumulated, the learner takes over. And there is no need for L1. So the L1 element in the
control signal dies out. And the learner takes over and
acts as the main controller. By doing so,
basically, you can save some of your robustness margins
already for other purposes inside the system. While the robustness
margins are defined through the L1 architecture,
they do not change. But when they are not used
already for your uncertainty compensation because you
have learned the system, you can use them for other purposes inside your system. You can use them for better planning, or for whatever other purposes your mission may require. So for example, in the middle,
you can have change of mass, center of gravity, and other
things for package delivery and so on, like a disturbance. But that implies L1 will kick on again to pick
up the uncertainty and to compensate for it
until the learner again picks up enough data to
learn and compensate for it. Once the learner picks
up enough data to learn, L1 contribution will die out. This is a benefit
of the architecture. So the question is how to synthesize an architecture that works this way: when you don't have enough data, the learner doesn't act; the minute you have enough data, the learner takes over and L1 goes into passive mode. This is the benefit of the architecture detailed in the paper of [INAUDIBLE], which can be downloaded and studied.
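A toy numerical illustration of that hand-off (a hypothetical scalar system invented for this sketch; the true coefficient is used below as a stand-in for what the L1 estimation loop would reconstruct online):

```python
import numpy as np

# Toy hand-off sketch: the L1-like term compensates whatever the learner
# has not yet captured, so as the learned model improves, the L1 portion
# of the control signal dies out. Hypothetical 1-D system: x' = a*x + u.

A_TRUE, DT, K_FB = 2.0, 0.01, 3.0
a_learned = 0.0                 # learner's current model estimate
x = 1.0
l1_history = []

for _ in range(2000):
    # Stand-in for the Bayesian learner slowly converging to the model.
    a_learned += 0.005 * (A_TRUE - a_learned)
    u_learner = -a_learned * x - K_FB * x      # learned-model controller
    # Residual compensation (what the L1 loop would supply in practice).
    u_l1 = -(A_TRUE - a_learned) * x
    l1_history.append(abs(u_l1))
    x += DT * (A_TRUE * x + u_learner + u_l1)
```

Early on the residual term carries the load; once the model is learned, it decays toward zero, freeing those robustness margins for other purposes, mirroring the behavior described above.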
Next, I want to talk a little bit about navigating robots in confined spaces, in between different obstacles: how to build safety tubes around those. And this is relevant
to our work with Marco. We have a project, again,
from NRI with Marco. So we use here
contraction theory. So imagine we have
a nonlinear system with modeling uncertainties. And again, we have
safety-critical applications. And we want a planner-agnostic approach to certify safe tubes around desired trajectories, so that the robot remains inside these safe tubes and navigates within them in between obstacles. This paper has been
submitted to CDC. It can be downloaded from arXiv. So we designed a
contraction-based controller that would keep the robot
inside the safe tubes and augmented it with an
L1-adaptive controller that would give us multiple
knobs for tuning between safety performance
and robustness, right? So here are the
multiple obstacles. And say we want the robot
to follow this orange path-- it would be safe in
between these obstacles. But the blue would run into,
for example, an obstacle, right? If we design a tube
like this orange-- and it's not sufficiently
conservative-- it would run into these obstacles. Obviously, we would like it to
be tighter around the desired path so that it doesn't
run into these obstacles. What contraction theory does is use the Riemannian energy as a control Lyapunov function and try to minimize gamma, the geodesic between the desired path and the actual path, using the energy of the shortest path on the manifold as the Lyapunov function.
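In schematic shorthand (my notation, not the paper's exact statement): if \(E(x_d, x)\) denotes the Riemannian energy of the geodesic \(\gamma\) connecting the desired state \(x_d\) to the actual state \(x\), the contraction-based controller enforces

```latex
\dot{E}(x_d, x) \le -2\lambda\, E(x_d, x)
\quad\Longrightarrow\quad
E\big(x_d(t), x(t)\big) \le E\big(x_d(0), x(0)\big)\, e^{-2\lambda t}
```

so the geodesic distance to the desired path decays exponentially, which is what defines a certified tube around the trajectory.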
And we augmented it with an L1-adaptive controller. Due to the architecture of
this L1-adaptive controller, there is this natural, inherent
decoupling between performance and robustness. So we have these
three tubes now that are inserted one into another. So the first tube
would be just the-- purely due to the initial
initialization error. And it's like a funnel. It will be this exponentially
decaying performance due to the initialization error. The second tube, which
is the green tube-- it will be tuneable based
on the filter bandwidth. And the last tube,
the orange tube, would be tuneable based
on the adaptation rate. So here is a toy example,
where we can simulate and show this effect. So basically, with the three tubes inserted one into another, if we increase the adaptation rate, the orange and green tubes collapse together. And when we increase the bandwidth, the tubes become narrower. And we'll just get closer
to the desired path. So here is just the
contraction controller here that you see with the blue line. It may collide
with the obstacle. But when we put the L1
and we tune it tighter, we can get closer to the desired
path by appropriate tuning. And what this framework allows us to do, as I showed, is incorporate a Gaussian-process Bayesian learner: we can use the previous architecture together with a contraction controller to learn the uncertainties for better planning. By doing so, we can
make the tubes tighter around the desired paths and
have one more knob for tuning. Here is a race track simulated. And this paper
will most likely go to the [INAUDIBLE] Conference
within this next month. So if we have a
beginner driver, we would like to give
him a wider track. For an intermediate driver,
we will make the desired path with intermediate
width of a tube. For an advanced driver, the tube can be super narrow, right? So re-discovery of the metric and re-tuning of the L1 parameters will not be required. So-- under some mild
assumptions, of course. So the model uncertainty-- as
we learn, it will be updated, but the controllers
will not be re-tuned. So once this paper is
submitted to [INAUDIBLE], everything will be
downloadable, including the software and everything
from our [INAUDIBLE] sites. And now I want to move to some
of these big projects that exist in our group, where
these type of controllers have been motivated and
they can be used most likely over the next few years. So this is the project
we did with Marco. This is an NRI project from NSF on last-mile delivery. So the underlying
concept is that, for the last-mile
delivery optimization, one can take advantage of
the ride-sharing vehicles and drop the
packages and pick up from these ride-sharing vehicles
for the last-mile delivery optimization. And that's the part of the city where you already have more stop signs and low speed limits. And you optimize just over the random network of vehicles. Obviously, the cars
have to be retrofitted with appropriate magnetic dock. The technology has to be there. Now, with pandemic, we see
more and more use of these UAVs for this purpose. There are some already
preliminary results, both in our group and in Marco's
group, that we cite here. So what we show here is an
animation from our group. The paper was submitted to [INAUDIBLE]. So this is an animation showing a point of no return, when a drone is trying to approach a vehicle to drop in the parcel. And there is a point from which there is no turning back -- a point of no return, we call it. Here, we're trying
to look now how to embed a deep-learning
type of architecture that would have computational
optimization for energy savings to maximize the flight time. We call it "safe learning" here. Another project in our group,
where we're trying, again, to use safe learning
with control, is related to this
[INAUDIBLE] project. All of you remember,
I believe, the landing of US Airways Flight 1549 in the Hudson River. And we know that it was
the captain's decision to land in Hudson River. So what the captain did-- with his 40 years of experience,
he debated his options among landing at LaGuardia, at Teterboro, or in the Hudson. And he took the correct
decision to land in Hudson. So it's his experience, with
all the numerous [INAUDIBLE] in his brain, to land in Hudson. And that was the
correct decision. So can our learning and optimization algorithms today reproduce a similar block in our autonomous systems that will take the right decision every time? Can we endow our autonomous systems with similar safe path-planning and safe mission-replanning objectives, so that at every moment a safe mission replanning can happen to ensure safety and [INAUDIBLE], save the vehicles from crashes, replan paths, and execute everything naturally? So we call this
multi-level adaptation. And this is another NSF project
kind of, again, in our lab. Another project
that we have, again with Evangelos, and we're very excited it's going on. This has to do with resource-aware uncertainty and resource-aware control architectures, where we treat computation as our budget. And we would like
to understand how should we budget the
computation for control, for perception, for navigation. So this is very interesting. And we have another collaborator
from Georgia Tech here [INAUDIBLE] so that
Evangelos works with-- so this work is also partially
supported by NASA Langley. And we have here kind of a
[INAUDIBLE] computational algorithm for collision checks that was published last year at the Robotics: Science and Systems Conference. All of these add to our
portfolio of methods for safe learning and control. Finally, I want to
give a brief overview of the cooperative path
generation and path following framework that we have developed
in our lab for [INAUDIBLE].. This work was funded also
by Air Force and NASA. We're-- through decoupling
of path generation and path following, we have enabled
multi-vehicle missions. And we have
implemented it in NASA in a very challenging
environment in NASA's Langley's Autonomy Incubator. Let's go through this-- how two drones can fly. The [INAUDIBLE] drop out here. And they have the
model of the maze, but they still do silhouette-informed trajectory shaping as these two drones
go through this maze. And they coordinate
with each other to achieve simultaneous landing. So this is a
time-critical mission, where they coordinate with
each other their arrival time. They exchange their
relative air positions. And they coordinate
on their arrival time. So as a kind of next step, we plan to bring these control contraction metric augmentation approaches to this setting, to enable multi-vehicle missions in these types of constrained environments and more agile,
collaborative missions. And with that, I guess I
would like to acknowledge my current collaborators. We have very successful,
interesting meetings all the time. My past PhD students. And all of those who
are my collaborators in my current group. All of the people at Air Force
who enabled all these flight tests on the Air Jet F-16. And my student,
[INAUDIBLE],, who compiled this presentation for me. Thank you all very much. I'm happy to stop my slide
share and go back to this mode. Happy to take any
questions, if you have any. I don't know how
I did with timing. You can tell me [INAUDIBLE] No, it's great. Yeah, thank you very much for
your very interesting talk. I really loved how you gave
a nice historical perspective on L1-adaptive
control and to see how such a control
theoretical technique has been used very successfully in
the context of safe learning and control. So that was really
interesting and insightful. So today, we are going to have-- we're fortunate to
have two great guest panelists, Claire
Tomlin from Berkeley and Jon How from MIT. So as usual in
Robotics Today, I would ask them to kick off the
panel with some questions. And then we'll
take it from there. So maybe, Claire,
you could start? Yeah, I'd be happy to. Thank you, Marco. And thank you, Naira,
for a wonderful talk. As Marco said,
really talking about the historical perspective
of L1-adaptive control and the research
that you've done in your group, and a beautiful
set of theory and experiments, and then bringing that together
with these very popular and new methods of learning, and
then really bringing the two together nicely. So I thought-- I
had three questions. And I thought I'd start
with the more detailed one and then maybe go to the
more broader questions. And the first
question is something I know you've thought
about a lot, which is, as control theorists,
we're very careful about models and about developing these-- and you have shown
this in your work-- developing and, in
your work, decoupling these bounds on
performance and robustness, and the deltas that you get,
and the certificates that you get out of L1-adaptive control. And then you bring in learning. And learning-- and you showed
very sort of elegant frameworks and-- with your MPPI work
and your DDP work, how you can really
marry these together. But what-- maybe you can
talk about this kind of piece at that intersection,
where, very simply put, you have a model. You've developed your
L1-adaptive control framework. And then you're applying
that in a system where all of a
sudden you're faced with an unknown
environment, where the uncertainties
and the things that are coming at you from
the environment just violate those restrictions. That's the kind of dichotomy
that I think we're faced with. And you've been able to
maneuver that beautifully. And I'd like you to
just comment on that and talk about how you do
that, with perception-- the learning and perception. How do you deal with these
big uncertainties that come and violate what we've already
developed as control theorists? How much time do I have to answer that question? [LAUGHTER] It took us six months with
Evangelos to make it work. So that MPPI with L1 that you
saw in that Goggles slide-- it was six months' work. It was how to put together an architecture that would make it work because-- First is the
architecture, right? So it took us really six months
to make that L1 MPPI and the L1 with GP that was in
[INAUDIBLE] to work. Because, first, we-- first-- and I have to give here
Evangelos lots of credit because he pushed me to do it. He said, why don't you
do it-- because it's very important for the community. If you don't do it,
others will do it. And they may not do it right. You better do it. And you will do it right. I said, OK, well, let me do it. So first-- the first
question is that, can GPs be integrated with L1-- just a feasibility question? And at first, we looked at that-- can GPs be integrated with L1 as a feasibility question? [INAUDIBLE] to achieve
something more, but can we have a GP
inside L1 that can learn and this whole
learning will be safe? So-- and there are a few ways
you can put this GP inside L1 like it's an
architecture, right? So how to put it right
so that it can learn and when it learns, for example,
L1 can go into passive mode because it's
learned already; you don't need it to do anything. If I know-- because L1 is needed
for adaptation for robustness. And if I know, then I
don't need it, right? So how to have this
correct architecture? So using-- what I always
like to say and emphasize-- that the most important thing
are the architectures, right? You can-- so you can put one
fixed-gain control architecture and struggle all your life with how to compute your control gain-- how to solve
your optimization problem to compute your control
gain so that it does the job. And then, all your life, you
are solving your optimization problem in a better
and better way. Another parallel
philosophy is how to synthesize the
correct architecture so that it does the job better. So in this process,
we were struggling how to make the
correct architecture. And the correct architecture,
according to me, is L1-- so that when the learner learns, L1 dies, right? And it dies correctly. It dies as much as it has learned, right? So how to come up with
this architecture? It's work. It's six months' total of work. OK? Then L1 MPPI-- where was
our challenge, right? MPPI is super fast, right? It has its sampling requirement. L1 has its own way of being
fast estimation/slow control. And how to make all
these samplings work with each other and work
robustly so that it works? It's six months' work, right? And there are postdocs
involved, students involved, day and night, talking and meeting. So if some of your students
want to work with us, we can have them
during our meetings. But it takes persistence. It takes work. I'm thankful to Evangelos
for pushing me to do it. It was work. The journey was work. Now we have opened a whole
new set of opportunities. And we'll take it further. It's not one quick answer. Yeah. Yeah. OK. Thank you. That's maybe now leading
to a broader question that you've also thought about,
I know, is deep learning. So as we integrate perception
into autonomous control systems, we're going to be
using deep-learning mechanisms, right? That's what 99.9% of the
computer vision community is using. What are your
thoughts about that? We-- what are your thoughts
about analyzing or verifying deep-learning components
within control loops? So it's going to be hard. It's not going to be trivial. But one thing I know, if we want these systems ever to be certified-- the thing that I've learned, and this may change over the years, but I know that any software that gets modified on the fly will not be certified today. At least this is what I
learned from Lui Sha, who is my colleague at
Illinois and who is great authority for
certification community. He always says any software
that gets modified on the fly-- then this is-- these are some of the
top lessons learned also from the 737; that cheap,
quick certification solutions may not work. So one has to be
very careful when you talk of deep learning going
into safety-critical systems and not being very thoroughly,
carefully analyzed. So again, the architecture
has to be correct. And the architecture--
"correct" implies you need to have
some type of switch; that there is this expert
controller that's always there; that whenever your uncertainty
estimation threshold gets violated, right, you can
have a deep learner there that takes raw inputs and outputs a controller in a very benign environment. But when your environment
is not benign and it gets-- it becomes very adverse. Basically, you have uncertainty estimation that gives you thresholds that get badly violated. And that has to be pretty
an expert controller that takes over, overrides
everything, shuts down the system, and
navigates safely. I would say architecture,
architecture, architecture. What makes your system safe-- architecture has to be right. Yeah. Thank you, Naira. And then maybe one question
before we move over-- Sure. --to John's question. Model-free learning-- what is the place in all of this for model-free learning? Is there a place? I don't think there really is. [LAUGHTER] I don't agree with you. That's why I'm asking you. [LAUGHS] Well, model-free learning-- you can play with this Rubik's cube, OK, but not with safety-critical systems. Model-free learning-- I can build a toy, give it to a five-year-old kid to play with, but not with safety-critical systems. I'll do my own due diligence. So K-12 outreach-- we can give
toys and go do K-12 outreach. It's also valuable. We can engage the smarter
kids into our community and then help them do system
ID and model-based controllers. That would be my approach. Thank you, Naira. My pleasure. Well, [INAUDIBLE] perhaps
controversial statement, we can lead the discussion
to our other guest panelist, John Howe. Great. Thanks, Marco. Naira, thanks for a great talk. It's great to see the work
you've been doing on the L1. You and I have spoken
about it before, but it's always fun to see
the videos of the things that you've been
able to do recently. I-- so my-- I got an open-ended
question that was similar, I think, to what Claire was
just asking in terms of-- as we begin to get
close to deploying these types of systems
in the real world, you start getting these sort of unexpected consequences in the sense that you
mentioned-- sort of maybe in a sandbox, where
these algorithms start learning and going
outside the box of things that you maybe had
thought about before. So from a performance
perspective, that's good; but from a sort of certification
perspective, maybe not so good. And that type of uncertainty
and how it's going to behave translates into conservatism. And then you start seeing
people talking about, well, maybe we shouldn't
put that on there. Now, we've faced this as a
community all along thinking about adaptive control. But I'm just thinking in
terms of, for the student audience out there, is there
perhaps advice you could give on types of research
directions and things that they could be thinking
about to address this problem, where maybe for the past
decade we've thought about how to make things better-- for the next decade,
maybe the focus is on not just
better but actually saying a lot more about what-- how it's actually
going to behave, can you actually
give [INAUDIBLE] certification and
things like that? And so just thinking in terms
of advice for researchers-- what types of things should
they be thinking about as they move forward in their careers? Yes. That's a very good
question, John. And what I think we should do-- maybe you, me, Claire,
and other senior people here, together with
the junior people-- we should maybe form
a type of consortium and invite [INAUDIBLE]
to talk with us how the modern paradigms
for certification needs to be form
that would not depart the conventional paradigm but
would leverage the existing practices, yet allow
room for modern methods to make their way there,
along with practical evidence, and simulations, and
all these experiment. Because as you say, I'm
trying to build up the way we have worked to build up. So there are people who
are going to come and start a conversation. I'm ready for that. But we need to have a
consortium of people who are ready to get together,
to support each other, to negotiate. That requires a big room
with lots of people. It can even be a Zoom room. Obviously, a physical room would be easier. But it requires-- a
certification is not a one-person game. It's lots of people in one room. That's industry. That's government. That's FAA, NASA, Boeing-- I don't know. MIT, Berkeley,
Stanford-- I don't know. So it's lots of
people in one room. That-- certification can
be done only that way. Yep. Next-- one last question. It won't be quick, because
it's open-ended as well. But as you look at these
[INAUDIBLE] conferences and you see just how many papers have
the words "deep learning" in them-- which I think is
bordering on more than half-- one of the concerns that came
up in one of these debates about the future of these
types of conferences was that we'd be starting to
generate a lot of researchers whose answer to every
problem is deep learning and that we start
losing an ability to solve some of these problems
using other techniques. And any advice on sort
of moving forward? It's-- as a field, we're-- it's like if you have "deep
learning" in the paper title, you increase the probability
of it getting in. On the other hand, it's
not always the solution. And so it's a
question of how do we retain the skills as
a robotic community and yet still recognize the
value of this technology but also its limitations? Well, there is always rigor. There is always ad hoc. There is always a
proportion, right? I always say there is always-- 30% are good work;
30% are mediocre; and the rest should not exist. So-- [LAUGHS] So it's the
same, I guess, bleak-- I guess we just have to be
critical and constructive with respect to
each other's work; and try to be supportive in
our critical comments; to be constructive and
helpful for juniors; to be good role models. And some people just
use the deep learning to be in fashion, and to get
attention, and to be published. Human factors
always play a role. People become friends
sometimes just to get votes. [CHUCKLES] So just a little
bit more careful and rigorous approach to reviewing peer-reviewed work-- everything matters. OK. Great. Thanks, Naira. Thanks for your presentation. My pleasure. Thank you very much,
Claire and John. We also have quite a few
questions from the audience, along with several comments
actually complimenting you for the talk. Thank you. Maybe Nima, you could
start with your question? We have-- [INAUDIBLE] We have three students that
are doing the heavy-lifting of distilling the
questions from the audience and asking [INAUDIBLE]
Nima, go ahead. Yeah. Yeah. So the first question
is from Hamid Reza. He asks, what is
your main reason for using L1-adaptive control
over other robust control methods? OK. That's a very good question. That's true that L1-adaptive
control's input-output map is identical to internal model
controller's input-output map. But L1-adaptive control does not have a model inversion block in it. So it's a forward method. It does not invert. So it's very easy to implement. And it's easy to accommodate
all kinds of model knowledge updates that you
acquire on your way. So in that sense, its tuning knobs are very easy. It decouples its estimation
from the control loop. So any new knowledge you
acquire about the system, you put it into your
system predictor. And it helps you to get it
closer to the main system. And its robustness you just
tune with a filter bandwidth. So its tuning is
just much easier. While if you're using
internal model controller, every new knowledge that
you acquire about the system will require you to do model
inversion again, and again, and again, and again, which
makes it very complicated. And for non-linear systems
and more challenging classes of systems, actually, it's
not even clear how to do it. OK. Then we have a
question from Nia. Yeah. So I have a question from Blake. Would you mind elaborating
on your collaboration with Raymarine? What unique constraints
of marine autopilot design are well addressed by
L1-adaptive control? My collaboration with
them-- that was in 2012-13. What would you like
me to elaborate? That was kind of a
consulting arrangement. I can't talk too much about it. But that was their autopilot-- Evolution autopilot. And whatever is on their web
page-- that's all I can say. [CHUCKLES] It was-- we couldn't publish it. This was unfortunately a little bit of a consulting arrangement. I can't talk too much. OK. That's obviously fine. We have another
question from Rachel. Yeah. Hi. So [INAUDIBLE]
asks-- or mentions that bringing
contraction with learning in the disjointed architecture
that you mentioned seems to be key for a
lot of significant future developments. I was wondering if you had
any comments about that or kind of what you see bringing
into future developments? Oh, I see lots of
potential there. Because if we make
all this work, we'll have a complete
framework from planning to low-level control, enabling
more agile and versatile missions for autonomous systems. We look forward to
making it all happen. So this is still a
work in progress. The first papers will go to
this [INAUDIBLE] Conference. And then we'll see how
it develops in future. Just follow our website,
our arXiv postings. You'll see how it develops. Awesome. OK. And actually, there is
a follow-up question from Nima in terms of robust
trajectory generation. Nima. Yeah. Thank you. So [INAUDIBLE] follows
up with, have you compared your proposed
trajectory generator with other robust
generation methods? So which trajectory
generation method do you mean? We have a few ones. Yeah, we compared. So there is-- what
is of interest here? So we have Bezier curves. We have this DDP here. We have MPPI. We have so many methods
in different cases. And it depends upon the context, upon the application. So when we had
this MPPI with L1, it's because Evangelos
had it in this AlphaPilot. He wanted just to put
L1 on the top of it. When we had the same
DDP, putting L1 on it-- it's-- again, it was
his interest there. We have-- in our NASA project,
we have the Bezier curves. We have [INAUDIBLE]
so they're involved. So in every case, we
have something different. It's not like we have one
trajectory generation method and that's it. We have a variety of different
things in different places. And now we have this also
contraction metric coming. So. No, [INAUDIBLE] do kind of-- we never wrote a
paper on comparing different trajectory
generation methods. In some sense, we haven't
done such analysis. OK. Thank you. OK. I do-- actually, I
have many questions, but I always start with one. One of the attractive features
of L1-adaptive control is the sharp
theoretical guarantees. So I was wondering if
you could elaborate a little bit on
to what extent you were able to lift those
theoretical guarantees in the context of augmented
MPPI, or augmented GP, and so on? Lift the guarantees-- or what? Basically, provide those
theoretical guarantees in those contexts there to build on top
of the traditional L1-adaptive control guarantees. So in the contraction paper
and in the L1 GP paper-- you can download
those from arXiv and see the proofs are
done, completely provided. In the L1 MPPI paper
and the L1 DDP paper, these proofs are
not provided yet. The L1 MPPI paper that went
to IROS and the L1 DDP paper has not been yet
posted anywhere. But the framework from
this contraction L1 can be adapted to provide
the proofs also there. We're just hopeful
that it's doable. But for the contraction,
the paper is on arXiv. And for that-- L1 GP is also on arXiv. So those proofs-- we're hopeful
that they can be adapted to those papers as well. OK. Great. There is also a question
from [INAUDIBLE]. Mm-hmm. Sorry. I have to unmute myself. So thank you for a
great talk, Naira. That was really interesting. And you have answered
strongly to Claire's question on model-free
versus model-based; that you're strongly
for model-based because you have the ability
to basically introduce a lot of guarantees. But even in a
model-based approach, there are lots of
opportunities for learning, right, and lots
of different ones. You could learn some state
representation if you wanted. You could learn the dynamic
models, maybe cost functions. Where do you see are the most
interesting opportunities for learning? And where should you
keep, yeah, maybe non-learning-based methods
in the overall system architecture? OK. That's a good question. So learning does not come for free. You need to allocate computation for that, right-- CPU, GPU. Today-- that's the beauty of these MPPIs: because it's parallelizable, it can be implemented in real time. So there is a price to be paid. Nothing comes for free. So this is very important
to keep in mind-- the minute we deploy
these autonomous robots, like the delivery drone-- the project that we're on with Marco, right-- it has to carry a payload. The minute you put
a payload on a UAV, it reduces your flight time. So you have to budget. If my UAV was to fly 15 minutes
to deliver a package, right, and it has to deliver a package
that's two pounds, for example, and it has to fly
15 minutes, right, how much learning can it do, right? So it's all-- you
have to budget. So it has a certain amount
of CPU, GPU, whatever it has. It has a certain amount of energy based on its batteries, right? So that's why we are now
exploring, for example, this deep sense that's provided
by one of our professors, [INAUDIBLE] that has
energy-efficient computation, which is deep learning. For example, we go
already to that level-- anything we can
explore for energy optimization to
maximize the flight time so that we can pick up more
packages for longer distances. So computation is your budget. OK? So think how much
learning I want to do versus what
distances I want to cover, what robustness I want to have. They are all in tradeoff. If-- 20-30 years ago, my only
tradeoff was p plus s equals 1. Today, my tradeoff is not
limited to p plus s equals 1. OK? Today's tradeoff is a lot more. It's this computation. It's learning. It's everything. So everything gets into
one big equation that-- none of us have yet
maybe figured it out. But the tradeoffs are
very complex in today's learning-plus-control
environment. So the control tradeoffs were
performance plus robustness. In learning-plus-control
environment, the tradeoffs have not been
yet completely figured out. And those have to be
figured out before we can answer the questions
that you're raising. And these are actually
very good questions. And they can lead to lots
of good, interesting PhD dissertations. Thank you. Next, Luka. Naira, thank you so much for the
talk. It was very interesting and was an incredible
perspective, I think. So I had a question,
which I believe is a follow-up on what
John was hinting to, and Claire as well, which
is in general certifications for robotics and
autonomous system, as well as the
role of perception. And so I [INAUDIBLE] you
had at the beginning, this Learjet system in
which you inject failures and disturbances and
essentially evaluate with the system and the
response of the system with and without L1. That will certify
the performance or validate the performance. And my question for you is,
do you envision a similar technique to be useful
for, I don't know, certification of self-driving
cars, certification of robots-- this kind of
disturbance injection? And I guess the
broader question here is, what is the main
takeaway on your side out of the deployment of these very
complex and real-world systems? What do you want to share
with the young researchers? Young researchers--
I would suggest get your hands dirty
with real-world systems. Give yourself the
opportunity to experience the real-world systems, right? The real-world systems will
give you the type of experience that the simulation
environments don't give. So the learning experience that
you get from touching the car, touching the drone,
going, taking the data, collecting, coming, trying,
going back, and coming shapes you as a
different thinker. The-- that thinking that gets
into your brain after that experience-- it's invaluable. You can't get it otherwise. You just can't get it by proving
theorems, publishing, going to conferences, presenting. That's a different experience. You want to publish papers, to
go to conferences, to present, get peer reviews,
comments, criticism. But getting under the car,
loading your software, coming back, testing,
going back again, and doing that for
months and years-- it gives you a different muscle. That's a different experience. You want to have it. And wh-- it creates
a different thinker. And that's very important. I highly recommend all of you--
don't lose your young years by just being analytical. Get the practical experience. Because today's reality is the
reality of autonomous systems. And it's very important to
understand it from end-to-end-- what does it take? What is the epsilon? What is the delta? Go and try to understand that
five is greater than four. It's not like five is greater
than four-- we all know, but it's different when
you sense it with your-- this is what it means. This is what it means. And you'll get it
when you go there. It's important. [LAUGHS] Don't miss your chance. And any thoughts about the
self-driving cars certification plus anything else that you
want to share about that? Well, self-driving
cars and airplanes are different in
some sense, right? While they all have
control systems and they all have
certification challenges, the challenges of airplanes
are different from the cars. Because for airplanes,
it's the stability. It's-- it's the
stability mostly, right? For cars, it's more the
navigation in confined spaces. It's the perception. It's-- it's the close
contact with obstacles that the airplanes don't have. So these are different. And therefore, the certification
challenges are also different-- for the cars, how you would
integrate the perception, the close env-- close contact with pedestrians,
and different obstacles around. So even the communities
of certification will be different. But one can understand what's
common and what's different. And leverage what is common. Share the lessons learned. Understand the differences. And try to work
on the differences with different communities
and the common things with the common communities. I think that's very
important, that-- be partners with the right
industries who are pushing it the right way, right? Because in self-driving cars,
I guess the level of autonomy that people are trying
to reach is the Level 5. But today, at the best, we
have Level 3, right around. I haven't heard of Level 4 still
being in the streets somewhere. So the partnership today is
the most important thing; that you need to have industry
partner and government partner when you want to
go through certification. And if you don't have
all three in one room-- industry, government,
academia-- certification may be just too far
and unachievable. [INAUDIBLE] OK. Thank you. Thank you, Luka. Rachel has another
question from the audience. Yeah. I do want to say I
love the advice of, like, get your hands dirty. I think it's fantastic. We have a couple of
anonymous questions that ask what are the open
problems or limitations to L1. What's an example where an L1
scheme might fail, for example? Well, yeah. There are. The open questions of L1 are
the same, like, open questions of control theory, right? If you talk of
non-minimum phase systems, output feedback and so on,
this question goes on-- exists [INAUDIBLE] along. We did not solve those problems. We just have an
architecture that, within the existing
limitations, within the existing assumptions, gives us an
implementable architecture with easily tuneable
knobs for which we can quantify the performance and the robustness in a systematic way. We can predict the margins
and the performance of the [INAUDIBLE]
for those, right? So the open questions existed-- if you say output feedback, non-minimum phase systems, we have very limited cases where
we have solutions for those. So these questions exist. So if people want to
work on this problem and they want to
reach out to me, I am happy to point them
to the very last paper and the last
dissertation of our group where we couldn't
make further progress. And they can start from there. Thank you. And we have another
question from the audience. Nia. This question's
from [INAUDIBLE]-- have you looked into
extending L1-adaptive control to a hybrid setting to address
hierarchical architectures? No, I have not. OK. So I have one more generic
question related to also what-- the points that
John was making and so on. For our students,
what resources do you suggest in order to get a
better appreciation of control theoretical tools that
need to be accounted for even if they are now using
more computer science tools, such as AI? So what resources or what techniques do you suggest that everyone
should absolutely know? I think before learning AI,
they have to fundamentally learn estimation. They have to learn
backpropagation. They have to learn
the foundation. The mathematical foundation
is very important. Never use or apply
anything blindly. There are so many tools
today in AI that-- like with [INAUDIBLE],
this and that-- you can download, apply, use. But don't do it
without understanding. Try at least to understand
some of the basics-- what are you applying? How are you applying? Get a simpler version of that. Try to understand
what's happening. And then maybe once you're
familiar with the tool, then you can maybe get an
advanced tool and try to apply, see what you get. But the mathematical
foundation is very important, very important. Actually, what you see in my
background is my Alma mater. I always say [INAUDIBLE] I
learned in this university. So it's Yerevan State
University in Armenia. So that epsilon-delta
proofs-- the math, the underlying foundation-- is very important. You can't engineer
safety-critical systems without the right
level of rigor. So it-- be rigorous. Otherwise, the
safety-critical systems will punish you in a bad way. I totally agree. And Luka, you also
have a question? We had a question about
something that you mentioned. Right at some point, you
mentioned [INAUDIBLE] and understanding of factoring
in human perception of risk-- the perception of
risk from the user. I was just curious
about how do you factor in that kind
of perceived safety into the mathematical model? Yeah, that's a good question. So what we did-- we worked
with a psychology collaborator at Illinois, Francis Wong. So psychologists
know, apparently, how to measure humans'
perceived safety. For the humans-- they measure [INAUDIBLE] phasic driver [INAUDIBLE] GSR signals-- so they measure their skin conductance, heart rate, and head tilt. From that skin conductance
signal, they decompose. They get this
phasic driver, which measures humans' anxiety level. If it has a certain
level of activation-- so they build the machine
learning model. And they can judge whether
the human is scared, excited. So the anxiety level--
they can measure. But their machine learning model appeared to be very [INAUDIBLE] giving lots of false positives. Basically, we have to
become more sophisticated-- for example, build the
machine learning model with a latent variable using human attention state to eliminate lots of their false positives to get a more reasonable human anxiety model for path-planning, which we started using in a cost function to do path-planning for a drone for its package delivery task or flying around humans so
that the human won't feel stressed when the
drone flies around. So that's maybe a subject
of a separate talk. But we have a paper from
last year's ICO workshop that you can download
maybe and check. Actually, it was published in
the ACM Transactions [INAUDIBLE] Human-Robot Interaction. It would be downloadable
[INAUDIBLE] from arXiv. [INAUDIBLE] out. Thank you. Mm-hmm. [INAUDIBLE] Yeah. [CHUCKLES] Yeah. I-- we're now at the
end of today's seminar. And I would like to thank
Professor Naira Hovakimyan again for a very interesting
talk and the great Q&A. And your message to the students
about getting your hands dirty with real robots was actually
also brought up last week-- or two weeks ago, actually,
by Scott Kuindersma from Boston Dynamics. And it seems to be a theme here. I would also like to thank
our guest panelists, Professor Claire Tomlin and
Professor Jonathan Howe, for their great questions. And thank you to the audience
for coming and submitting all the questions. I hope you're all
coming back on July 24 when Sidd Srinivasa from
the University of Washington will give a talk
on his research. So thank you, everyone. Goodbye. And have a really nice day.