Optimize your deal pipeline for security with airSlate SignNow
See airSlate SignNow eSignatures in action
Our user reviews speak for themselves
Why choose airSlate SignNow
- Free 7-day trial. Choose the plan you need and try it risk-free.
- Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
- Enterprise-grade security. airSlate SignNow helps you comply with global security standards.
Deal pipeline for Security
With airSlate SignNow, you can save time and money by digitizing your document signing process. Benefit from the ease of use and cost-effectiveness that airSlate SignNow offers, allowing you to focus on what truly matters in your Security deal pipeline.
Sign up for a free trial today and experience the convenience of streamlining your Security deal pipeline with airSlate SignNow.
airSlate SignNow features that users love
Get legally-binding signatures now!
FAQs: online signature
- What is the DevSecOps pipeline?
DevSecOps pipelines enhance security, reduce vulnerabilities, and speed up deployment cycles. They enable early detection and remediation of security issues, foster collaboration between teams, improve compliance with security standards, and ultimately lead to the development of more secure and reliable software.
- What is pipeline as code?
Pipeline as code is the practice of defining deployment pipelines through source code kept in version control, such as Git. Pipeline as code is part of a larger "as code" movement that includes infrastructure as code.
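As a small illustration, a pipeline defined as code typically lives in a YAML file that is versioned alongside the application. This is a minimal sketch in GitHub Actions syntax; the build command is a placeholder:

```yaml
# .github/workflows/build.yml: the pipeline definition is source code,
# reviewed and versioned in Git like everything else in the repository.
name: build-and-test
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the test suite
        run: make test   # placeholder for your real build/test command
```

Because the definition is just a file in the repository, changes to the pipeline get the same review and history as changes to the application.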
- How do you implement security in a CI/CD pipeline?
Best practices and methods to boost your CI/CD pipeline security include: restricting access to the code repository and using audited code; reviewing code efficiently; maximizing testing accuracy and test coverage; scanning images and auditing repositories; and implementing safe deployments by using deployment strategies.
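For instance, a security check can be added as just another pipeline stage. This is a hedged sketch in GitHub Actions syntax that audits dependencies for known vulnerabilities before anything downstream can deploy them; the tool choice and file name are assumptions:

```yaml
# Hypothetical security stage: fail the pipeline when known-vulnerable
# dependencies are found, so no later stage can deploy them.
name: security-scan
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Audit Python dependencies for known CVEs
        run: |
          pip install pip-audit
          pip-audit -r requirements.txt   # placeholder requirements file
```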
- What is a CI/CD pipeline for cybersecurity?
CI/CD security is used to safeguard code pipelines with automated checks and testing to prevent vulnerabilities in software delivery. Incorporating security into your pipeline helps to protect code from attack, prevent data leaks, comply with policies, and ensure quality assurance.
- What is a pipeline in software?
In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the output of each element is the input of the next. The concept is analogous to a physical pipeline.
- What is a pipeline in security?
A secure CI/CD pipeline architecture integrates security controls at each stage of the pipeline. Use secure repositories for source control, conduct security checks during the build process, run automated security tests, and ensure secure deployment practices.
- What is pipelining in network security?
Pipelining is the process of storing and prioritizing computer instructions that the processor executes. The pipeline is a "logical pipeline" that lets the processor perform an instruction in multiple steps. The processing happens in a continuous, orderly, somewhat overlapped manner.
- What is a pipeline in DevSecOps?
In a nutshell, DevSecOps pipelines are automated workflows that incorporate security practices throughout the development lifecycle. These pipelines integrate security controls, testing, and monitoring at every stage, ensuring that security is not an afterthought but an inherent part of the development process.
Trusted e-signature solution — what our customers are saying
How to create an Outlook signature
All right, so next up we have Sean McCulla. Sean is a friend of mine; we met through the SANS master's program, we've even had a couple of virtual happy hours to help cope with the whole COVID lockdown, and we've worked together on a variety of group projects. Sean got his computer engineering degree from Virginia Tech and his MSISE from the SANS Technology Institute. He is a technical director of red and blue operations for a government agency, and he's also a consultant with H&A Security Solutions. I'm very glad Sean is presenting this topic for our audience, so with that, I'll let him take it away.

Thank you, Ken, I appreciate it. Before I start, I have to tell you a little story about how I actually met Ken. We were both in the master's program at SANS at the same time, I think about five years ago, and we got thrown together on a 30-day group project without knowing each other. We had to come up with a whole process for how an organization would move from on-prem into the AWS cloud. Sounds familiar, right? A lot of us do that. We had to produce a cost analysis, a management plan, a security plan of course, and we also had to demonstrate how you would build, destroy, and operate the actual applications from this big company. I remember one night Ken and I were trying to debug a script that I broke, and I said, man, Ken, this is a lot of work for this class; I'm surprised SANS is having us do this. That's when Ken told me that what we were doing for this project was actually his idea: he had pitched moving a company to the cloud, so all the work was his fault. But I have to tell you, I'm really thankful for being on that project, because not only did I get to meet Ken and become friends, I really did learn a lot. It was one of my favorite classes at SANS.

That experience brought me to today's topic. One of the things we talked about in that class was that if you're figuring out how to move into a cloud environment, you're probably spending a lot of time deciding which applications you just lift and shift and which ones you have to rework somehow. We talk about the applications, S3, Lambda, and all this really cool stuff, but we don't talk about the infrastructure we use to manage our virtual machines. A lot of the time we just lift and shift that; we keep doing what we've been doing on-prem. So what I want to do is talk to you about how you could approach VM management in the cloud a little differently, in a way that will not only improve your security but maybe give you some new operational capabilities you didn't have before.

One of the reasons this is such a big deal is that if you've ever worked in IT operations, you know it takes a long time and a lot of manpower to manage the patch cycle for a virtual machine. If we were live right now, I would ask how many of you have done this kind of process, and a bunch of you would raise your hands, very slowly, because it's a sad, difficult thing to do. I've done it; I know. The reason is the process you see on the slide here: we start at twelve o'clock with discovery.
It's Patch Tuesday, a new patch has come out, and we have to deploy it to all our environments. First we inventory the environment: which systems need to be patched, and which are off limits for other reasons? We analyze the patches, plan the deployment, implement the patches, and 10 percent of those break, so we have to remediate them somehow. We validate that everything works, report back, and then do the whole cycle all over again. It's frustrating. At the end of the three days, or two weeks, or however long it takes in your organization, you've got the same virtual machines you started with. They may be a little more secure, and that's great, but it's still the same thing.

I look at it like when I had to get a new roof put on my house. A bunch of shingles had blown off during a storm, so I had somebody come out, and they said, yeah, it needs a new roof. I remember writing a check and going to work, and when I came home that evening, pulled into my driveway, and looked up, I thought: that looks like the same roof I had before. Even though I wrote that check, it wasn't exciting like a new TV or something. I feel like that's what happens with our patch management process. So what I really want to figure out is: can we speed this up, make it more operationally capable in the cloud, and even add some operational, security, and business capabilities we couldn't have before?

I'm going to walk through a four-step pipeline, four steps you take to build a pipeline to manage and run your virtual machines. Normally in a talk like this I would start with step one, then two, then three, and end with step four, but today I'm actually going to tell you the end result first, because I feel like we need to know where we're headed. I'll tell you what step four is, then jump back to step one and walk through the process.

All right, so our end result, our big reveal, is that we want to be able to redeploy every one of our production virtual machines every single day. I know, I lost some of you there: why would I redeploy all of my virtual machines every single day? I can't do that. All right, so how about this: instead of all your virtual machines, how about we leave out the database virtual machines? Or we just focus on web applications, computational systems, Docker clusters, certain things like that. And maybe not every day; what if we did it every three days, or once a week? The frequency doesn't matter. What we want is to be able to redeploy them at a moment's notice, so that every time they're redeployed they're as secure as possible, and it's cheap and easy and we don't actually have to think about it. That's really the goal.

So that's our fourth step; now let's go to step number one. The first thing we have to do is start with a solid baseline. We have to find virtual machines out there that we can start with, that we know are relatively safe, and that aren't going to change a lot underneath us; they're pretty stable.
There are really four places you can go to look for these virtual machines. First, the cloud service provider gives you a set of virtual machines that they either manage or have put through some process where they say, yes, these are virtual machines we approve and think you should use. In Amazon, if you go to create an EC2 instance, you get a list of AMIs to pick from: the Amazon Linux version, CentOS, Ubuntu, some Windows builds, things like that. So you can use the cloud-provider-created virtual machines as you need to.

Second, the major service providers also have a marketplace, where companies come and say, here's a virtual machine, and usually there's some application attached to it. They're pitching a product they want you to buy, and they provide the virtual machine already configured to use it. You can get those marketplace virtual machines if you trust the company you're buying from, or getting them free from; it's up to you to make sure the system is good. Microsoft, Amazon, and Google are probably only doing a cursory evaluation of those marketplace virtual machines; mostly they're just validating that the company will pay them, really.

Third, you can get a virtual machine that someone has simply shared. You can search for virtual machines put out by strangers. This is a bad idea, and I would not recommend it. It's workable if you know who the person is, if it's from a company you're working with, or if you feel they're safe and secure and have a standard baseline that isn't going to change out from under you. But I really wouldn't recommend it unless you know where it's coming from. Let me say this about that: if you're part of the security team managing infrastructure your dev teams push out, and you're hosting a Node-based application, 60 to 80 percent of the code you're serving was not written by your dev team. So you're already using software and tools from people you don't know. Still, I don't recommend doing that with your virtual machines.

Fourth, you can go on-prem. You can build your virtual machines, build an image, package it up, and import it into the cloud. The three major providers all have the ability to do this, and it's fairly easy. If you already have a process on-prem, you can use the same virtual machines on-prem and in the cloud.

All right, so we have this baseline, and we're going to go from left to right in this pipeline. In this particular company, the baseline is Amazon Linux 2, some Windows Server, CentOS 7 and CentOS 8 (there are two dev teams, and for whatever reason one of them can't use 8, so they use 7), an Ubuntu version, and some strange thing from the marketplace. You have to make sure these are fairly secure; they're probably not where you want them to be yet, and that's our next step. But you do have to make sure they're stable, so the libraries you depend on won't be ripped out of the next version released from wherever you get these.
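As a concrete footnote to the first option, AWS publishes public SSM parameters that always resolve to the current provider-built AMI, so a pipeline can pin itself to the vendor baseline without hard-coding image IDs. A minimal CloudFormation sketch, with an assumed instance size for a short-lived build host:

```yaml
# Resolve the current Amazon Linux 2 AMI from AWS's public SSM parameter
# rather than hard-coding an image ID that goes stale.
Parameters:
  BaselineAmi:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  BuildInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref BaselineAmi
      InstanceType: t3.micro   # assumed size for a short-lived build host
```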
All right, that's our first step: start with a solid baseline. The second step is to use some kind of orchestration service to patch and prepare these images to hand to your development teams. Here's what I mean. You can use Ansible, HashiCorp has a couple of products, the cloud providers have some (I'll actually talk about one from Amazon at the end), Chef, Puppet, even a bunch of scripts cobbled together; it doesn't matter so much which. You take the baseline virtual machine you got from your provider, and I recommend you patch it, and keep patching it perpetually: patch the operating system, patch the standard libraries, and apply anything required for IT operations or security purposes. If you're required to follow the CIS benchmarks, apply those benchmarks; there's code out there to do that. If there are certain security policies you're implementing, do that here too: for instance, logging has to be turned on, or a host intrusion detection system has to be installed on all your virtual machines, or maybe just on all your Windows virtual machines. Then you spin up the original virtual machine you got from wherever you're getting it, say Amazon Linux 2, into an actual running virtual machine, apply your patches and your CIS benchmarks, delete users you don't want in there, whatever the preparation is, and take a new snapshot. Now you have a snapshot from which your dev teams can safely build whatever application they're building and hosting; they build off that new, secure image. It looks like this: we start on the left-hand side with our baseline image; the gray here is spinning up a virtual machine, running some scripts for patches, host intrusion detection systems, whatever it is, then creating a snapshot and making it available to the dev teams, so they can take it and implement their code on top of it. That's step two.
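To make the gray box concrete, here is a minimal sketch of that preparation pass as an Ansible playbook. The inventory group, the unwanted-user list, and the HIDS package name are illustrative assumptions, not anything prescribed in the talk:

```yaml
# Hypothetical hardening playbook, run against a freshly launched baseline
# instance before it gets snapshotted for the dev teams.
- name: Patch and prepare the baseline image
  hosts: baseline            # assumed inventory group for the build instance
  become: true
  tasks:
    - name: Apply all pending OS updates
      ansible.builtin.yum:
        name: "*"
        state: latest

    - name: Remove users that should not ship in the image
      ansible.builtin.user:
        name: "{{ item }}"
        state: absent
        remove: true
      loop: "{{ unwanted_users | default([]) }}"   # supplied per policy

    - name: Install a host intrusion detection agent (placeholder package)
      ansible.builtin.yum:
        name: ossec-hids     # stand-in for whatever HIDS your policy requires
        state: present

    - name: Make sure audit logging is on
      ansible.builtin.service:
        name: auditd
        state: started
        enabled: true
```

Once a run like this finishes cleanly, the orchestration layer takes the snapshot (in AWS terms, registers the AMI) that the dev teams build on.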
Step three: you want to build a new image that's not being deployed yet, but is deployable; there's a big difference. This is where the development team takes that image. Say the dev team is the web developer for the marketing team, managing the marketing website; we'll use a simple case. They install nginx, set up configurations, do whatever additional work has to be done, maybe drop the marketing website onto that virtual machine, and then create a snapshot. The dev team is installing application-specific libraries and configurations, and possibly the code to run the application.

The reason I like splitting it up between the two stages is that your dev team is probably going to change their code fairly often, possibly faster than you patch the systems. So in the yellow here, they can always take a new snapshot from the standard, secure, IT-approved image and test their code on it, running it through their automated testing (hopefully you have automated testing) to confirm it actually works. If something changes in the gray systems, it's probably a patch, probably nothing significant enough to break the development team; that's the hope. The dev team is usually a different team from the operations team, but even if you're the same team, I recommend breaking it up this way, because you'll probably have one person more focused on the security of the systems and another more focused on deploying the code, and this split lets you divide the work the same way. Also, you'll probably have multiple teams grabbing from the same virtual machine. Look at the Amazon Linux 2 image: Amazon provides it, you've patched and prepared it in the gray, and two different teams make use of it for their own applications, always pulling from the latest.

Now, how often would we run these patches? We could run them all the time; it's just code. Going from the blue to the gray, we could run it every day, or every hour, it wouldn't really matter. The idea is that you're writing these scripts so that they install the applications, the configurations, the logging, whatever is needed, and can easily be tested. Once your dev team has taken that image, built on it, and created a new snapshot, what you've created is a deployable system.

Amazon, Microsoft, Google: these cloud providers are really built around the idea of elasticity, an environment where you can easily grow and shrink the number of virtual machines at a moment's notice. In an on-prem environment, say you have a web app and the heavy use is between 10 a.m. and 4 p.m.; you deploy a virtual machine that can handle the maximum load, and then it just runs all night, so at midnight it's the same size system. You don't want to do that in a cloud environment, because you're paying for it. What you really want, instead of one gigantic virtual machine running your web application, is the web application running on multiple virtual machines behind a load balancer, managed by what Amazon calls an auto scaling group (the terms are similar in Azure and Google). The auto scaling group spins systems up when it knows it needs to and spins them down again, by grabbing a virtual machine image, an AMI in Amazon's case, from somewhere, ready to go. And we saw in our pipeline that we can produce those virtual machine images at a moment's notice, so the auto scaling group is grabbing images built by this process. If you've built for elasticity, you can scale up rapidly and scale down easily, you're always scaling up with the most recent image, and you never have to patch the live system.
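In CloudFormation terms, that deployment end might look roughly like this: a launch template that always points at the most recently published AMI, and an auto scaling group that retires the oldest instances first. The SSM parameter path, subnet ID, and sizes are hypothetical:

```yaml
# Hypothetical CloudFormation fragment for the deployment end of the pipeline.
Parameters:
  LatestAppAmi:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /pipeline/marketing-web/latest-ami   # assumed parameter your pipeline publishes
Resources:
  WebLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref LatestAppAmi
        InstanceType: t3.small
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "10"
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier:
        - subnet-0123456789abcdef0   # placeholder subnet
      TerminationPolicies:
        - OldestInstance             # retire the stalest machines first
```

With OldestInstance as the termination policy, routine scale-downs naturally cycle out the oldest machines, which is what makes the rebuild behavior described next fall out for free.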
Say we run that process every hour, going from blue to gray, and then gray to yellow, on an automated hourly or daily basis; however often it runs, the image ready to deploy is the most recently patched one. You may never have to patch a live system again. That whole loop we were going through before, figuring out what's out there, what to do with it, how to patch it, what happens when a patch fails and how to remediate it, we don't have to do that whole process anymore, at least not for patching and updating.

We're taking advantage of the provider's elasticity, and because of that we can do things we couldn't easily do in an on-prem environment. If suddenly it's the holiday rush and a lot of people are hitting your site, that's fine, your system is built to grow and shrink elastically. If you find a major vulnerability in deployed code that you have to fix, you can destroy all the currently running virtual machines, and the auto scaling group will reload them with the new image you've patched. You can destroy and redeploy on a schedule, or just let the cloud provider elastically grow and shrink. In the production systems we run, machines spin up and down during the course of the day, and the oldest is always destroyed first, so in a 24-hour period most of our virtual machines have been reloaded anyway; within three days, all of them have been rebuilt.

Now some interesting questions come up. If I don't patch a live system, do I ever need to SSH into it? Maybe not. Maybe for debugging or remediation, but certainly not for daily maintenance; maybe you never have to do that again. Do you have to scan a live system? You might, because you're required to for whatever reason. But if you have systems growing and shrinking at a moment's notice, and you never patch a system because you just destroy it and build a new one, maybe you don't have to run vulnerability scans on a regular basis either.

So we've gotten to the end of our little pipeline. We've taken our baseline on the left-hand side; we've built code that patches it and prepares it for the development teams, which is where we can codify our security policies (I love that idea: when we talk about infrastructure as code and how to leverage it in the cloud, you can have teams say, this is the security policy, and implement it at the infrastructure level); the development teams grab the gray AMI, implement their own code on top of it, and prepare it so it's available to your auto scaling system, the orange on the right-hand side; and now Amazon, or Google, or Azure can take those AMIs and spin them up and down based on need. You've built elasticity into the system, you don't have to patch live machines, and you may not have to do much remediation, which may reduce the threat to those live systems. It's really nice.

All right, here are a couple of things you can do to simplify this, to make it easier, because I've spent, I don't know, twenty minutes walking through slides, and it sounds easy, but it's not; there are a lot of things that can go wrong when you do this.
You're taking a lot of your effort and putting it at the front of the process: in the beginning you're doing a lot of development, design, deployment, all of that. Once it works, it usually runs pretty smoothly, assuming your baseline images are pretty standard. I've had situations where I used baseline images from another organization, they made changes, and it blew me up in the middle of the day. As long as your baseline images are safe and pretty steady, it can be easy, but you're doing a lot of work up front. So here are four things I believe will make this simpler to build and operate.

First: simplify the process where possible. One of the big problems in this picture is that this company has CentOS 7, CentOS 8, Ubuntu, and Windows, and each of the gray boxes and each of the yellow boxes is code. Someone has written scripts that do some kind of evaluation or installation, hopefully some testing and preparation, and in any one of them something can go wrong. So figure out how to reduce the count. This is a good time to go back to your organization and say, hey, it's going to be CentOS 8 (or, more likely, CentOS 7) or Windows. I'd like to get rid of Windows, but say you keep one Windows version, one CentOS version, Amazon Linux 2 for Amazon-specific things, and maybe a few very specific marketplace images. That's a significant reduction in the amount of pipeline work you have to do. It probably means some development teams have to change their code, but they change it once, to move from CentOS 7 to CentOS 8, and that keeps you from perpetually managing additional pipelines, so it works out in the long run. Simplify where possible.

Second, I recommend building monitoring and response to problems early in your process. I've had cases where we built application code pipelines, got to the end, and only then asked, how do we know if it goes down? Then we had to go back into the code and add checks and balances. My recommendation for any infrastructure as code, any event-driven architecture design in the cloud, any of this kind of thing: build your ability to monitor, alert, and respond at the beginning. Say something fails when we run our code in the gray box, or there's a failure in the developers' testing; when they run a test, they can identify the problem right away, before it ever gets to the orange on the right-hand side, to deployment.

Third, I do recommend offloading as much as possible to the cloud service provider. I realize a lot of people don't like that. I've seen a lot of organizations with awesome engineers who say, I want to build this widget this way; I know Amazon has a similar widget, but it's not as awesome. What happens is you can end up managing that widget forever, and it's not core to your business. I like offloading as much to the cloud service provider as possible.
You've already decided to go in with them, and you're already paying them a fair amount of money. Using SNS and SQS instead of RabbitMQ is not a huge leap, and you don't have to monitor, build, and maintain a lot of stuff. The same is true in the process we're talking about. Amazon has a service called Image Builder. I'll state this up front: I like Image Builder, it's interesting, but I haven't put it into production yet, so I assume it has some limitations; it's fairly new. Image Builder is really designed to replicate each of our boxes here. The whole idea of "I'm going to take Amazon Linux 2, spin it up, apply a whole bunch of patches and preparation, test it, then snapshot it, and if it fails, alert somebody": somebody has to write that code, and Image Builder is intended to help you do exactly that. What you do is create what they call components, and each component is really a YAML file that specifies how to run some kind of scripting. In the particular example I have, it just prints "hello", which is not a really good test, but you could have a command that runs yum update to update the packages on the system, a component that installs nginx, a component that installs your proprietary code; it can pull more detailed scripts from S3 or your code repos, whatever it is. You build a component for an individual installation, and you can also build a test, which is just another component that tests it. So I can have my nginx installation component and then build a tester for that nginx. Image Builder runs those against a particular image; if the tests succeed, it snapshots the AMI, and if they fail, it gives you an alert. You can take a whole bunch of these components and put them together in a pipeline, so each one of those boxes we were looking at before can be implemented in Image Builder. There are some things in Image Builder that are still a little clunky, but it looks like they'll be possible in the future. You can set version numbers (you can see I have 1.0.0 for my snapshots), and you can create pipelines so that if a version gets updated, or maybe my code repo gets updated, it automatically kicks off a rerun of the Image Builder pipeline.
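For reference, an Image Builder component document is YAML along these lines. This is a minimal sketch of the install-plus-test idea just described; the packages and commands are placeholders:

```yaml
# Hypothetical EC2 Image Builder component: install nginx, then verify it.
name: install-and-test-nginx
description: Installs nginx on Amazon Linux 2 and checks that it runs
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UpdatePackages
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum update -y
      - name: InstallNginx
        action: ExecuteBash
        inputs:
          commands:
            - sudo amazon-linux-extras install -y nginx1
  - name: validate
    steps:
      - name: NginxPresent
        action: ExecuteBash
        inputs:
          commands:
            - nginx -v
  - name: test
    steps:
      - name: NginxServes
        action: ExecuteBash
        inputs:
          commands:
            - sudo systemctl start nginx
            - curl -sf http://localhost/ >/dev/null
```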
So say I've decided there's a new way I'm going to install my intrusion detection system for Linux 2. I could update that component in the image pipeline to make whatever change is required, and you can use Amazon's SNS service to say, hey, if this happens, go kick off a new version of the pipeline. It rebuilds, and now all the images out there ready for deployment have the updated intrusion detection system. This is definitely possible. It's some work, I mean, it's not easy, but it's definitely possible.
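Wiring that trigger up might look roughly like this in CloudFormation; the recipe and infrastructure configuration resources are assumed to be defined elsewhere in the template, and the schedule is just an example:

```yaml
# Hypothetical Image Builder pipeline: evaluated on a schedule, but a new
# image is only built when a dependency (a component or the parent image)
# has actually been updated.
Resources:
  GoldenImagePipeline:
    Type: AWS::ImageBuilder::ImagePipeline
    Properties:
      Name: amazon-linux-2-golden
      ImageRecipeArn: !Ref GoldenImageRecipe                 # assumed recipe listing the components
      InfrastructureConfigurationArn: !Ref BuildInfraConfig  # assumed build-host settings
      Schedule:
        ScheduleExpression: cron(0 6 * * ? *)                # example: evaluate daily at 06:00 UTC
        PipelineExecutionStartCondition: EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_ENABLED
```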
All right, the last thing I wanted to mention: if you do something like this, fail really quickly, learn from it, and keep building. When I teach SEC545, we always have this conversation about infrastructure as code, and how much we actually bring to infrastructure as code versus having people administer their environments, and I always say the same thing: it's going to take a ton of work in the beginning, but if you build it once, it's super easy to build a hundred or a thousand of them. That's the nice thing about the cloud and the services provided with open APIs. It's the same in this case: once you build one pipeline for one virtual machine, it's mostly the same mechanisms, just some different scripts, so building the first one and figuring it out will make it easier to build the next twenty. Hopefully not twenty, that's a lot, but at least as many as you need. So I definitely recommend failing fast and learning from it.

I also recommend, and this goes back to the second bullet about monitoring and responding quickly, that you be able to monitor your build pipeline, identify a problem at any point, and remediate it before it reaches our step number four. For a visualization, go back to the earlier diagram: I've had cases where a change in the blue broke the gray, which was fine; we were able to detect it, and our scripts told us something was breaking. What's really bad is not catching it until the orange, until it's ready for deployment, because now Amazon is trying to deploy an AMI that's failing, and the system may not detect that it's failing, but somebody will: your customers. So it's really important to have tests throughout the process, which is one of the reasons I like the structure of Image Builder; you can also do this with Ansible or other tooling.

Another thing I recommend: try not to do configuration at deployment time. When we get to the orange part on the right-hand side and your auto scaling group is spinning something up, you could have it first run a script to do an update, but it's doing that to a live system, and if it fails, it fails in a live system. Also, if you're doing something like updating your source code, where every deployment goes and gets the latest, the machine needs access to your source control environment, or maybe an S3 bucket, and now you have a live, running production system with access to S3 or your code repos, and that's not so great. It's much better to do that in the pipeline and create an AMI snapshot; when it runs in production, it doesn't need access to any of that. That's why I like the idea of having multiple snapshots in a row in a pipeline for those particular cases.

All right, that's the end of this talk. I think there are a couple of minutes for questions, Ken will let me know, but I do appreciate your time, and I'll be hanging out in the Slack for a half hour or so to answer any questions beyond the talk. Thank you.