[MUSIC PLAYING] SANDRA GUO: Hello,
everybody, welcome. Thanks for getting up so early and being here bright and early for our session. My name is Sandra Guo. I'm a product manager
in Google Cloud. And here with me is
Juan Oviedo, also a product manager in Cloud. And we have Rakesh Garala,
senior product owner at ANZ. Here we're going to tell you all
about the end-to-end security and compliance for your
Kubernetes deployment. In today's talk, I'll start by
going over a few key concepts of supply chain security-- well, how does it work, how it used to work for VMs, and how containers change things up a bit. Then Rakesh is going to
talk about NWOW at ANZ. Who is ANZ and what is NWOW? Then we'll introduce
a few tools on GCP that will help you secure
your software supply chain. And ANZ is going to go over
a few key use cases they have and a demo of how
it works in action. So I will get it started. First, supply
chain security 101. When we talk about supply
chain, the most common type of supply chain is
food supply chain. Crops are planted, harvested, milled into flour, baked into bread, delivered to our dinner table. And then we ask questions about our food-- is it organic, where does it come from, is it FDA approved? Very similarly, code goes through a similar type of cycle. Developers write code, code gets checked in, gets baked into images, goes through tests and verification and quality controls,
then deployed to our production environment-- having access
to super sensitive security resources. And then we have security
stakeholders and compliance officers asking questions--
where does this code come from, is it vulnerability-free, does it meet my compliance requirements, and most importantly, with all the requirements that I have to meet, how do I meet all of them without losing the speed of my development? So traditionally, for
VM-based workloads, many customers run big, monolithic applications that get released only a few times a year due to their scale and the complexity involved in releasing them. For each release, security and compliance stakeholders have to manually inspect every component of the release to make sure that it's approved, it's up to date, it's vulnerability-free. And because a VM is an opaque format, this process is cumbersome
to say the least. And, eventually, when
the code is finally released to production,
a number of people need access to the running applications fairly regularly to
apply ongoing patches, ongoing maintenance,
and fine tuning. So this is a very
manual process. And many different roles
in the organization have direct access to
the running application because they need
to react to problems discovered during runtime. So it's a very reactive
security model as well. So it's a process that we're
probably all familiar with, but necessarily love. How does container
change things up? So Juan is going to
tell us how that works. JUAN SEBASTIAN OVIEDO:
Thank you, Sandra. Hello everyone. So as we've seen with
VMs, typically, there are many manual steps
that sometimes take a significant amount of time. This is manageable
because of the number of components involved and
the time between releases. Those two variables are
very different in the world of containers. Microservices
multiply the number of jobs an enterprise
can run in production and current CI/CD systems
enable teams to have multiple deployments a day. Ensuring that the
proper security checks are performed in
each of these deployments is very challenging. Historically, the
process of restricting what can be deployed has
relied on human trust and operational knowledge, and that doesn't really scale to the volume of deployments that we see with containers. So now that we know about some
of these challenges provided by containers, what are
some of the properties that they offer that we can
use to help us apply security while providing velocity? Containers have some
unique properties. First, they are
declarative, which means that as you build them,
the set of libraries, versions, and components are
clearly laid out. This also means that, because of their declarative nature, they're also statically inspectable, which means that you can scan them for known security vulnerabilities as soon as they are
built without needing to run the container image. Another key property is that they are immutable, because they are uniquely identified by a digest that, once created, can't be altered. This enables you to track a particular container as it moves through your software supply chain.
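That digest is just a hash of the image's content. This is not the actual registry implementation, but the idea of content addressing can be sketched with a toy helper (the image_digest function and the layer bytes are made up for illustration):

```python
import hashlib

def image_digest(layers: list[bytes]) -> str:
    """Toy content-addressed identifier: hash the image's layer bytes,
    roughly the way a registry derives a digest from image content."""
    h = hashlib.sha256()
    for layer in layers:
        h.update(hashlib.sha256(layer).digest())
    return "sha256:" + h.hexdigest()

# The same content always yields the same digest...
base = [b"FROM debian:stable", b"COPY app /app"]
assert image_digest(base) == image_digest(list(base))

# ...and any change to the content yields a different digest, so a digest
# recorded at build time can't silently drift to different content.
patched = [b"FROM debian:stable", b"COPY app-v2 /app"]
assert image_digest(base) != image_digest(patched)
```

Because the identifier is derived from the content itself, tracking a digest through the pipeline means tracking the exact bits that were built, scanned, and deployed.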
What steps can we take to apply security while enabling our teams to create and move with velocity? First, we can build from
trusted sources, which means using a set
of pre-approved and regularly patched and
maintained base images. These images should also have the minimal set of dependencies that are needed by your service, and it's something that should be controlled so that you have a common base for your organization. Also, you should have
streamlined security checkpoints as part of your release process. This means applying vulnerability scanning and other security-related tests as your artifacts move through the supply chain. And you should also define policies that enable you to apply consistent rules across the different deployments performed by the different teams
in your organization. So, as we've seen, containers
have some properties that present challenges,
but also create opportunities to apply security
while maintaining velocity. Now, I'm going to hand
it over to Rakesh, so he can tell us how ANZ
is handling this space. RAKESH GARALA: Thank you. Morning, everybody. So before we talk
about what ANZ's doing, let's just explain
who ANZ actually is. So we're a financial
services institution based out of Australia. We are approximately-- or probably the third largest bank within the region, based on market cap, with about 50,000 employees, 10 million customers, and we operate in probably 30 to 32 countries, depending on how you classify that. So I just want to give
you a bit of a view of the size of our
enterprise's scale of what we're trying to
do, and how big we are, and where we're based out of. So if you look at
what NWOW is, there are two key things
that we identified. So one was the technology
landscape was changing. And what that opens up to
was a different competitive landscape. We found that within
Australia FinTechs [? new ?] banks are really starting to
actually disrupt the market. But what we also found
was our customers had a different expectation. So in a digital world where
technology has actually evolved and we're able to offer
digital capabilities and digital services,
we found our customers wanted something very
different from us as a banking institution. So we knew we needed to adapt. And what we decided was-- we were already building
great customer experiences, but we felt that we needed to
do is build them much faster and build them at scale. And NWOW, or New
Ways of Working, is our way of basically
trying to achieve that. So NWOW is an organizational
transformation. It starts with us
reorganizing ourselves across [INAUDIBLE]
[? tribes ?] domain, so actually we're
going to redesign and we have been redesigning
our organization. So you can see there
our CEO, basically, said he wanted to blow
up our bureaucracy and actually get the
organization much more customer-centric, and product-focused. It doesn't stop there. We're also streamlining a lot
of our organizational processes and our governance approaches. So what we wanted to do was
strip and streamline as many of them as possible to allow
our employees to actually focus on the consumer and embrace
agility to deliver value to the consumer faster. Third and finally, and
probably most importantly, technology was something
we want to tackle and have been tackling. So we know as an organization
that is over 180 years old, as part of this
transformation, our technology landscape had to also evolve and change to shift-- to take advantage of the product centricity and the agility we're trying to embed with
some of our organizational and process redesign. So what are we actually doing
from a technology perspective? We're doing a whole
range of things. I'm not going to
talk to all of them. There's probably two key things
that I want to highlight. So one is, we want to
continue our journey with containerization, which
we started several years ago. So we felt then and we still
believe that containerization gives us pace, it gives
us greater stability, but, probably, most
importantly, it allows us to build good
technology with good practices. As part of that, though,
what we've also identified is another opportunity. So what we started to identify
across the organization, and what you see on the left-hand side of the picture is-- we have very bespoke
release practices-- some of that related to the
things the guys have already talked about, but some
of that related to-- as teams started to
embrace containerization, they started to develop
their own release practices, and their
own automation, in pockets and silos. So what we wanted to do was
actually give the organization and give our engineers
a way to develop, engineer, and actually
push code to customers in an industrialized
way utilizing an industrialized CI/CD
capability, which we're calling the paved roads or, as we've
referred to here already, is our supply chain. So the idea of this is that
it doesn't just only help our engineers, but it also
helps our customers get access to our products, and features,
and value add a lot earlier. ANZ is a great place to work
for a whole range of reasons. And I say that as
an ANZ employee, but one of the key
things for us is that we're a
continually evolving and a learning organization. So we don't like to
rest on ideas that we're starting to develop and push,
but we look for opportunities. And that's inbred in
our culture and our DNA. So as we started to
progress on these roads, we started to ask ourselves
some open questions to ourselves and said, what we're doing
is good, but it's not great. So what we're doing opens
us up for speed in pockets, but then how do we scale that? So the guys have already
talked a little bit about containerization
gives of scaling challenges. So we started to
acknowledge what can we do to actually
scale the work we're doing across the entire
breadth of our enterprise. It also then opens up
to actually thinking about your
operational processes. How can we govern, run, operate,
and think about security better? And how do we actually
start to embed that into our organization? And then as we
progress past that, we started to think about
how we automate governance. So can we automate
a lot of the things that we would normally do
as part of a supply chain manually? Can we automate that using
some of these capabilities or using some of the new
technologies available to us to make that even
more streamlined? And the final bit for us was-- I mentioned earlier we operate
across 30-odd countries, so the key thing
for us is-- what that means is we have 30
different regulators that manage financial institutions
across those countries that we have to answer to, we
have to show compliance to, we have to work within
the breadth of remit that they give us
within their country. So we're not just
an organization delivering financial
services in Australia, but because we work
across 30 countries, we have 30 regulators
to work with and we continually are
asking ourselves is there a way that we can start
to demonstrate compliance in a different way
that would make our lives a little bit easier? So I've talked a
little bit about what ANZ is, what we're
trying to achieve, the reasons for what
we're trying to do. And what I'm going to do
is hand back over to Google to see if they can give us some
insights and some products that can help with that journey. Juan. JUAN SEBASTIAN OVIEDO:
Thank you, Rakesh. So let's take a moment to
remember a great product experience that you
had in the past. Remember the excitement you felt
as you were opening that box and turning on that device,
or as you were signing up for an account and
that new service, or as you were being handed
the keys to that new car. For you to get to
that point, you had to make a decision to trust
that that product was going to deliver on its promise. And for you to continue
to use that product and continue to get
the value from it, you had to continue to
believe that that trust was going to be fulfilled. So, in essence, for customers
to have a great customer experience, there
has to be trust that the product will
deliver on what it promises. And in order for your customers
to give your product a try, they have to have
that trust, but also for them to have the
potential to eventually become loyal customers, that trust
needs to be maintained. And security is key to
maintaining that trust. And that's why at
Google Cloud we are focused on
giving you solutions that enable you
to apply security while enabling your
teams to create and take advantage of the velocity
provided by containers. So looking at the different
stages of the software supply chain, we provide
different solutions. Starting with the Cloud
Build, our CI/CD platform, you can use it to build
different kinds of source code from different repositories. From there, you can make use
of the managed base images that I will go into
more detail soon. Then as the different
artifacts are built and the container images are pushed to Container Registry, we scan them for known
security vulnerabilities. This gives you the ability to
identify those issues early on and the opportunity to
address them, if needed. Then you can use
binary authorization to define deployment
time policies to ensure that only artifacts that
meet your organization's requirements can be deployed. And then once your
service is running, we continuously
analyze the artifacts and if there are any new
security issues uncovered, via known vulnerabilities,
we'll make this information available to you. So now let's go into detail on some of these products, starting with Google
Managed Base Images. Managed Base Images are
maintained by us at Google. They are available for
Debian, Ubuntu, and CentOS and can be downloaded or
obtained from our marketplace. They are scanned for known
security vulnerabilities using GCR vulnerability scanning and
we regularly patch these images and keep them up to date. We use these images in our own
products such as App Engine and they are a great resource
that's available to you to start your development
process from a secure base. Now, let's talk about Container
Registry Vulnerability Scanning. Container Registry Vulnerability
Scanning checks the images that you push to a registry for
known security vulnerabilities. It supports scanning of images
based on Ubuntu, Debian, Alpine, RedHat, and CentOS. We continuously ingest data
from the corresponding vendor security feeds and if
any new vulnerabilities are found that apply
to your images, we make this information
available to you. The product also has integration with Pub/Sub notifications, which enables you to automate certain processes. For example, you could automatically notify certain members of your team when a new vulnerability is found, or you could automatically file a bug when vulnerabilities are found.
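As a sketch of that automation pattern-- note the message shape below is invented for illustration and is not the actual Container Analysis notification format:

```python
import json

# Hypothetical shape of a vulnerability notification payload; the real
# Pub/Sub message format differs, this only sketches the automation idea.
SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def handle_notification(message: bytes, min_severity: str = "HIGH") -> list[str]:
    """Return the follow-up actions to take for one notification message."""
    event = json.loads(message)
    actions = []
    if SEVERITY_RANK[event["severity"]] >= SEVERITY_RANK[min_severity]:
        actions.append(f"notify-team: {event['image']}")
        actions.append(f"file-bug: {event['cve']}")
    return actions

# A critical finding triggers both automated actions...
msg = json.dumps({"image": "gcr.io/demo/app@sha256:abc",
                  "cve": "CVE-2019-0001", "severity": "CRITICAL"}).encode()
assert handle_notification(msg) == [
    "notify-team: gcr.io/demo/app@sha256:abc", "file-bug: CVE-2019-0001"]

# ...while a low-severity finding below the threshold triggers nothing.
low = json.dumps({"image": "gcr.io/demo/app@sha256:abc",
                  "cve": "CVE-2019-0002", "severity": "LOW"}).encode()
assert handle_notification(low) == []
```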
Cloud Build. Cloud Build is our platform-provided CI/CD solution. With Cloud Build, you can build
from different source code repositories. You can build different
kinds of artifacts, such as Maven packages
and container images. As I mentioned earlier,
as the images are built and they're pushed
to GCR, they're scanned for known
vulnerabilities. You can also set up
your corresponding tests as part of the
Cloud Build process. And as the images and
different artifacts are deployed to your
service, the whole experience is covered as part
of the platform. So it's a great resource
that's available to you. And now let's focus on
the deployment piece. And I'll hand it over to
Sandra so she can tell us how binary authorization
can help you apply security at that stage. SANDRA GUO: Thank you, Juan. JUAN SEBASTIAN
OVIEDO: Thank you. SANDRA GUO: So binary authorization. Binary authorization is a deploy-time policy enforcement point. So what does it mean? It integrates with
the GKE deployment API, and it examines every deployment request that comes through the API to make sure that the image has the proper signatures that meet your customer-defined policy before it can be admitted into a production environment. The policy is configurable. You can define the
right policy that works for your organization. So it may mean that
something passes a particular security scan. It may mean that images are blessed by QA engineers. It may mean that the image is built by your trusted builder, or all three of the above. Binary authorization uses standard signature verification, so it can be plugged into your existing CI/CD tools if you choose not to use the GCP-provided products. It just means that you have to
put a signature on the image before it gets deployed. So the way it works is, binary authorization integrates with your CI/CD pipeline and records a signature as the image passes through each of the stages in your CI/CD pipeline. And that signature signifies that the image has satisfied the requirement for that particular stage. For example, the builder puts a signature on your image saying, I am the secure builder, I built this image. Or the scanner would put a signature on an image, which basically means this image has satisfied my scanning policy requirement.
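A minimal sketch of those per-stage signatures, with made-up stage names and symmetric keys for brevity (real attestations use asymmetric keys, for example PGP or KMS-managed keys):

```python
import hashlib
import hmac

# Each stage holds its own key and signs the exact image digest it verified.
# Stage names and keys here are hypothetical.
STAGE_KEYS = {"secure-builder": b"builder-key", "vuln-scanner": b"scanner-key"}

def sign(stage: str, image_digest: str) -> str:
    """The named stage vouches for this exact image digest."""
    return hmac.new(STAGE_KEYS[stage], image_digest.encode(), hashlib.sha256).hexdigest()

def verify(stage: str, image_digest: str, signature: str) -> bool:
    return hmac.compare_digest(sign(stage, image_digest), signature)

digest = "sha256:3f2a"  # hypothetical image digest
sig = sign("secure-builder", digest)
assert verify("secure-builder", digest, sig)             # right stage, right image
assert not verify("vuln-scanner", digest, sig)           # another stage's key doesn't match
assert not verify("secure-builder", "sha256:0000", sig)  # signature doesn't transfer to another image
```

The point is that a signature binds a specific stage's approval to a specific immutable digest, so it can be checked again later at deploy time.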
Then the time comes for the image to be deployed. At deploy time, there are a number of signatures already available for verification. Now, binary authorization takes a look at those signatures, compares them with the policies that you've defined, and says, OK, this is a good image. It has satisfied all the requirements, in addition to just being deployed by an authorized person. So, yes, it's a two-level check. The right person's deploying
it, and the right content is being deployed. If an image does not have the satisfactory signatures on it, then binary authorization would block it and record an audit log for future reviews. There are break-glass mechanisms available. So if your developer is trying to push a change that does not meet a policy, but needs to be applied anyway, they can break glass, and this would generate an audit log as well for your security team to review the incident later on.
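The block, the audit log, and the break-glass path described above can be sketched as a toy deploy-time gate; the attestation names and audit record format are made up:

```python
# Toy deploy-time gate in the spirit of the description above.
def admit(image: str, attestations: set[str], required: set[str],
          break_glass: bool = False) -> tuple[bool, list[str]]:
    """Return (allowed, audit log entries) for one deployment request."""
    audit = []
    missing = required - attestations
    if not missing:
        return True, audit
    if break_glass:
        # Allowed anyway, but the incident is recorded for later review.
        audit.append(f"BREAK-GLASS deploy of {image}, missing: {sorted(missing)}")
        return True, audit
    audit.append(f"BLOCKED deploy of {image}, missing: {sorted(missing)}")
    return False, audit

required = {"built-by-org", "vuln-scan-passed"}

# All required attestations present: admitted, nothing to audit.
ok, log = admit("app@sha256:3f2a", {"built-by-org", "vuln-scan-passed"}, required)
assert ok and log == []

# Missing a signature: blocked, with an audit entry.
ok, log = admit("app@sha256:3f2a", {"built-by-org"}, required)
assert not ok and "BLOCKED" in log[0]

# Break glass: admitted anyway, but the security team gets an audit entry.
ok, log = admit("app@sha256:3f2a", set(), required, break_glass=True)
assert ok and "BREAK-GLASS" in log[0]
```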
Some of the popular policies to be applied at deploy time, as I mentioned earlier: making sure the image is actually built by my organization; making sure that the image has passed security checks, QE checks, integration checks, and so on; or making sure that when I deploy a third-party image that I did not build, it is on an explicitly set whitelist that is
managed by my security team. So we're releasing GA at
this year's Google Next. So we're very happy that our product is going to be generally available. There are a few new features
that we're pushing out with our GA release. One of those is a dry-run feature. Dry-run means that you're now able to set a policy in non-enforcement mode, so that you can try out new deployment policies without risking production interruption. Again, we integrated with Cloud Audit Log: any would-be-blocked deployment attempt gets recorded in the audit log, and you can review that later on to make sure that everything is working fine before you turn on enforcement for real.
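A minimal sketch of the difference between dry-run and enforcement; the names are illustrative, not the actual API:

```python
# Dry-run sketch: the same policy evaluation, but in non-enforcement mode a
# violation is only logged as a would-be-blocked attempt instead of blocking.
def evaluate(image: str, attestations: set[str], required: set[str],
             enforce: bool) -> tuple[bool, list[str]]:
    missing = required - attestations
    if not missing:
        return True, []
    if enforce:
        return False, [f"BLOCKED {image}: missing {sorted(missing)}"]
    return True, [f"WOULD BLOCK {image}: missing {sorted(missing)}"]

required = {"built-by-org"}

# Dry run: the deployment goes through, but the violation is recorded.
ok, log = evaluate("app@sha256:3f2a", set(), required, enforce=False)
assert ok and log == ["WOULD BLOCK app@sha256:3f2a: missing ['built-by-org']"]

# After reviewing the logs, enforcement is turned on for real.
ok, log = evaluate("app@sha256:3f2a", set(), required, enforce=True)
assert not ok
```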
Another new feature that is coming out is system content policies. So when you are running on GKE, there are a number of Google-owned system containers running in your cluster as part of the orchestration system-- things like containers used
for logging or monitoring, for example. So even though these system containers are managed by Google, users could deploy an alternative version of the system containers to your cluster. And that's an attack surface. So how do we-- before, customers basically had to make sure that, yes, I'm deploying the right versions of the system containers-- making sure that those system containers are not being [? overwritten. ?] With binary authorization GA, we try to automate that for you. We have a default
global policy that we created by working with the GKE release and build teams to make sure that every system container that is built by Google, that is built by the GKE team, is signed, and that we're able to verify that signature for you at deploy time using this global system container policy, if you opt in. So with this, on top
of your existing customer-defined policy and
third party image whitelisting, you will have your entire
cluster workload accounted for. So in your cluster, all the customer in-house built images will be signed by you-- will have a signature requirement from your customer-defined policy. All the third-party images will be specified in the image whitelist that can be managed centrally by your security team. All the Google-owned system containers will be signed by the GKE release team and verified at deploy time as well.
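Those three rules together can be sketched as a toy classifier; the registry paths and whitelist patterns here are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical central whitelist of allowed third-party images.
THIRD_PARTY_WHITELIST = ["docker.io/library/nginx*", "quay.io/prometheus/*"]

def classify(image: str) -> str:
    """Decide which of the three rules accounts for a given image."""
    if image.startswith("gcr.io/my-org/"):
        return "in-house: customer-defined policy requires our signature"
    if image.startswith("gke.gcr.io/"):
        return "system: verified against the Google system container policy"
    if any(fnmatch(image, pat) for pat in THIRD_PARTY_WHITELIST):
        return "third party: allowed by the central whitelist"
    return "unaccounted: blocked"

assert classify("gcr.io/my-org/frontend").startswith("in-house")
assert classify("gke.gcr.io/fluentd").startswith("system")
assert classify("quay.io/prometheus/node-exporter").startswith("third party")
assert classify("docker.io/random/image").startswith("unaccounted")
```

Every image in the cluster falls into exactly one of the three allowed buckets, and anything else is blocked-- that's what "entire cluster workload accounted for" means.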
Last, but not least, we also have Cloud KMS support in beta. So now you can sign your images using keys managed-- asymmetric keys managed in the Cloud KMS service. You don't have the hassle of managing your own keys anymore, if you don't want to. And along with this, we also
have generic PKCS key support. So if you have keys
managed elsewhere in your preferred
PKI, you can use those with binary
authorization as well. So far, I've talked about the hosted binary authorization product and the vulnerability scanning and base image products that we host on the GCP platform. But we also want to
create an open platform, and that is why we created the open source projects Grafeas plus Kritis, which are open source reference implementations of the container analysis metadata API and binary authorization, available for any Kubernetes deployment anywhere. So we work with a number
of industry partners-- JFrog, RedHat, IBM, Black
Duck, Twistlock, and more-- to create the standard
metadata format so that it can be plugged into your existing CI/CD pipeline. You may not have to use
the Google-provided product to get the benefit of
the secure framework. We want to create a
knowledge base for you-- a metadata API, where you can record findings and facts about the containers that are going through your CI/CD pipeline and that are going to run on your security infrastructure-- a knowledge base where you can
plug in metadata, signatures,
you can later on browse through and use for
enforcement reasons or just for additional context. It plugs into the Kubernetes
admissions controller, so you have Kubernetes
deployment anywhere. You can use Kritis and it would
enforce signature-based policy or venerability-based policy
for you at deploy time. As I mentioned that we
have a number of support from partners and it's open
source built with community. Contributions are welcome. Check it out, if
you're interested. So with this, we hope to
create a open ecosystem where you can pick and choose
which component you want to-- which vendor you want to use
for each of the component. You may have a different
build process using CircleCI or [INAUDIBLE] and you may have
a different scan process based on Black Duck or Twistlock. And you may choose to deploy-- use Spinnaker or you
want to deploy manually. It does not matter. We want to create this general
framework where you can still record the metadata, the
signatures, and the findings in the database that you can
centrally access and manage. We still want to
give you the ability to control what goes into a
production environment using the Kritis admissions controller
to enforce that policy centrally as well. So here with open
source tools that we've published along
with our product, we hope to achieve that. All right. So we talked a lot
about technology. So how does it work
for a real customer? Rakesh, do you want to show us? RAKESH GARALA: Thank you. So I'm going to do two or three
things in this next segment. So I'm going to talk to you
about what we're actually doing as a bank, and we're then
going to talk to you about why we're doing it, and then show
you a small snippet of how we've put that into
practice and what that means for our organization. So let's start with the what. Now, this-- it looks
like a very busy slide. It's there deliberately. It's there to show you
the complexity of where our organization sits in terms
of engineers getting code out the door to customers. So what we've been
doing is we've been working with
our engineering teams across the organization to
say, what would you like? What would this
process look like if you wanted an industrialized
CI/CD capability? How would it look and
what are the key elements you require to make that journey
to the customer as seamless and as quick as possible? I'm not going to talk
through all of it, but what you're basically
seeing, from left to right, is a typical CI/CD pipeline. We do a bit of code scanning. We're using Checkmarx at the moment, or [INAUDIBLE] Checkmarx. We're doing CI with Cloud Build. You can see there-- we're actually using Black Duck for code scanning. One of the aspects you're
not seeing there is we're doing vulnerability
scanning with Twistlock at this point in time. But for us, it's
the ecosystem model where we can pull and plug tools
and actually integrate them with Google's offering. And as you progress towards
the right-hand side, we've chosen a continuous
delivery tool, Spinnaker, to help us with our deployment
capabilities to do-- at the moment, just do
deployments, but maybe move towards some clever items
such as canary deployments or blue-green to
wherever we may end up, and what our
engineers want to use, and what our teams need to
make their lives easier. The other bit to
call out here is-- what you're seeing in terms of
the complexity of those numbers is where we plan to
build these attestations, or as Sandra called
them, signatures. So the idea is that as things
progress through these pipeline or this pipeline
capabilities, what will happen is these attestations or these
signatures will be created. And then binary
authorization with Spinnaker will enforce that they are
available and present when we try and deploy a container. So as a container progresses through this and gets to Spinnaker for deployment, if it doesn't see all the attestations, binary auth basically says that this does not fit with the organizational policies that we have defined, and it will block the deployment.
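That paved-road flow can be sketched as follows; the stage names echo the tools mentioned in the talk, but this is an illustration, not ANZ's implementation:

```python
# Each pipeline checkpoint that passes adds its attestation, and the deploy
# step requires the full set before the container can go out.
REQUIRED = ["code-scan", "ci-build", "vuln-scan"]

def run_pipeline(stages_passed: list[str]) -> dict:
    attestations = [s for s in REQUIRED if s in stages_passed]
    deployable = attestations == REQUIRED
    return {"attestations": attestations, "deploy": deployable}

# All checkpoints passed: the deployment is allowed to proceed.
assert run_pipeline(["code-scan", "ci-build", "vuln-scan"])["deploy"]

# The vulnerability-scan attestation is missing: the deployment is blocked.
assert not run_pipeline(["code-scan", "ci-build"])["deploy"]
```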
So, for us, the value add here is a range of things. But, for us, it ensures that the organization, when it's deploying containers into production that go to our customers, has met a set of requirements that we are comfortable with
as a banking organization. So why use this capability? So I've already
referred to earlier on around the engineer's
experience, which is critical for us
as an organization. We want to ensure that the
engineers have a seamless way to get what they're building
into our customers' hands. I've already highlighted
the speed aspect. In a digital world and
with the transformations and the changing competitive
landscape, speed to customer is key. As we build a new
feature, we want to get it into our customers' hands as quickly as possible. But then we start to move into
the real value add for me. What I've described as
CI/CD pipelines and stuff we've been doing as an
industry for a while. But the critical
value add, for me, is how we then use
binary authorization to automate our governance. So as engineers utilize these
pipelines, as an organization in terms of our [INAUDIBLE]
for 50,000 employees, we can be sure that anybody that's pushing workloads to production meets a set of organizational policies that we can enforce, change, and redesign on the fly using this capability, which gives us the ability to do technology governance as code, essentially. But that's not enough for us. So what we then started
to think through is to say, well, if we're
enforcing technology governance as code and we're
using these capabilities to ensure that we don't
break our appetite from an organizational
perspective around tech, well, why couldn't we do that with
some of our other requirements in terms of our organization? What about our
business governance? What about our risk
appetite as a bank? Can we not start to codify
some of those aspects and start to enforce them
via these capabilities? Now, that's just something
we're starting to think through and starting to adapt, so it's
very much an open question mark. But that's where
we're heading towards. It's starting to look at, as an organization, how do we govern ourselves? How do we manage
ourselves in terms of risk in an automated way and using
something like binary auth to enforce that where
we do something, it doesn't change
our core values, or it doesn't change
our core requirements, or it doesn't change
what appetite we have as an organization. And, again, just
to draw off, as we start to progress
[INAUDIBLE],, how do we then use this to show compliance
across 30-odd regulators? We want to do that
in an automated way. We don't want to
be demonstrating a set of documents or
a set of requirements in a different format for-- whether it be [INAUDIBLE],,
or [INAUDIBLE] in Singapore, or whoever. We want to ensure that
they have the ability and they have the
confidence that they can explore these capabilities
with us and give-- that we can give
them the confidence that we can demonstrate
compliance differently. So that's what we're
trying to set out to do and why we feel that
there's good business case and good appetite for
us to go and do this. We're going to show you a very
short video as we hit play. So I want you guys to bear in
mind, this is just our MVP. It was what we started off
with, but it should give you a feel for how
some of these tools are coming together to
make what I've described and what the team have
described come to life. So what you're going to see in
this first part of the video is basically Cloud Build
being used to execute some CI. So this is the aspect of where
we could also add complexity by integrating with Black Duck,
integrating with Checkmarx, and actually starting
to actually validate that our code that
we're building and our CI capabilities
are fit for purpose. What we then will do is, as
we've built the artifact, we'll then try and deploy it. So what you're seeing
here is Spinnaker attempting to deploy the
artifact we've just created. What you'll see at the
bottom there, or just about, is that the-- is
the build failed. And what you can see here
is, as we highlight-- oh, I think we just
moved it around. Bear with us. SANDRA GUO: Yeah. I want to make sure that
we have the high quality because it's really blurry. Excuse me. RAKESH GARALA: It looks
good on this screen, so sorry about that. We can just quickly
rewind back a little bit if people haven't seen-- SANDRA GUO: Yes. RAKESH GARALA: So
we use Spinnaker to attempt a deployment. So the key aspect to see
here is that our CD tool is trying to deploy the
artifact we've just created. And what you'll see there at the
bottom is that that's failed. And when you look into
the information about why it's failed, you'll see
there it actually says that no attestation was found. So we've configured
Spinnaker to look for a particular attestation. So as I described earlier,
that can become more complex. This one is just a
single attestation, but it can become numerous. What you see in
here is us manually creating an attestation. Now in a normal
pipeline, we want as many attestations to
be created in an automated fashion, but we still
want to retain the ability, where we have manual intervention, to manually create an attestation. And that's where we're
going to sit probably in the short to medium term. And we know this is an
evolving journey for us. And then what you'll see is, as
we progress and the attestation has been created, we'll trigger
that deployment again using Spinnaker. Again, you've seen
a manual deployment, but we've actually
now automated that. And this time what you'll
see is Spinnaker will execute and that deployment
will be successful. And the reason it's
been successful is, this time, as
it's tried to deploy, it's actually looked
for an attestation, it's found that attestation, and
actually, then, basically said, yep, it meets the
organizational policy that's been defined previously. And then the video just
finishes with us basically refreshing the console. And what you'll see this
time, hopefully-- well, I say hopefully; it's a video, so it's been predefined-- that it's there,
it's successful. And there it is. Green, available for
consumers to use. So we wanted to just
give you a short video-- one, because I'm not
overly technical, so me trying to
do that on the fly would have failed
miserably, but two, it's just an MVP that
we've developed a while back. So what I want you
guys to take away from this is that
complexity I showed you, in terms of our
overarching CI/CD pipeline, the different types
of attestations, and this is just a way to bring
all of that to life and to say, this is how we're
going to continue to add complexity as we
evolve that capability in the bank. So there are just a
couple of final comments from me, really. So it's been hard and
challenging, right? So you're trying to
change an organization from a number of
fronts, and you're trying to embed some of
these practices, these ideas, and it's challenging for
a whole range of reasons. I think if anybody here has
done transformation programs, you'd attest to how difficult it is to push some of this change through. We have basically
said it is definitely worth the investment
and the effort. So it's been hard work, and
it's been a bit of a slog; however, we're already
seeing the benefits of the work we're doing
and we're not even fully all the way
through our journey. We've got probably two or three
iterations of our pipeline, we've got more to
do, we want to start to tackle the
organizational stuff, and we've already started to see
the benefits and the business case for how we
continue and accelerate the work we're doing. So the first bit,
for us, was just to stick through that
initial challenge phase. Two, interestingly
enough, what we found was, as we started to
embrace some of this and put some of
this into practice, it actually started
to inform us of how we needed to evolve some of
our technology landscape further afield, whether it be on
premise or some of our thinking or some of our
overarching processes. We've managed to shift
left some of that work. So we've managed to
actually identify things that could have been
a problem two or three years down the line by actually
embracing this world and actually getting on with
this aspect that then informs the other things that
we need to change. And thirdly, it's often
said with a change program, bring people on the journey. And it's usually said
because, you know, it's the change process and you need to get people on board. But what we've found
is actually the reason to bring people on
the journey early is that they started to
get involved in this, and they actually really loved
it and enjoyed it, and actually started to want
to be part of it. So we've started to see
a real cultural shift and a real coalition around
the organization of people wanting to adopt the work
we've been describing. So not only spread the message
and bring people on journeys to help them embrace the change,
but actually what we've found is helping
hands across the organization to make this a reality for us. So I think we're at
the closing remarks. So the only bit for me to
close with is hopefully what you've seen
is ANZ's approach to how we want to change
some of the things we do. I've described a little bit
of the why we need to do that, and hopefully I
brought together how we're using some of
Google's capabilities to bring that to life. And we have two main outcomes. One is to make ANZ an amazing
place to work for engineers, but two, as quickly as possible,
get consumer value out the door and get it into the hands of our customers. And we see the
work I've described as a critical aspect to
achieve both those goals as we move forward. So I'll hand back to Google. JUAN SEBASTIAN OVIEDO:
Thank you, Rakesh. So as we've seen,
applying security in your software supply chain has some challenges, but it also creates great opportunities: to apply that security, to enable your teams to create, to take advantage of that velocity, and, as a result, to provide your customers
with great experiences and maintain their trust. So here's a summary of
some of the products that we talked about today. They are available today and, as Sandra mentioned, Vulnerability Scanning and Binary Authorization
are available for you to use right now. So these are some
great resources that we make available to
you, and we look forward to how you use them
and give us feedback. And here are some
related presentations on the topic of
security and containers. You can find the videos for
these presentations on YouTube now or in the near future. [MUSIC PLAYING]