Introducing our pipeline tracking tool for Life Sciences
See airSlate SignNow eSignatures in action
Our user reviews speak for themselves
Why choose airSlate SignNow
- Free 7-day trial. Choose the plan you need and try it risk-free.
- Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
- Enterprise-grade security. airSlate SignNow helps you comply with global security standards.
Pipeline Tracking Tool for Life Sciences
Experience the benefits of using airSlate SignNow for your pipeline tracking needs in Life Sciences. Simplify your document processes and save time with our efficient eSignature solution. Sign up for a free trial today and see the difference!
Get started with airSlate SignNow and optimize your document workflows now!
airSlate SignNow features that users love
Get legally-binding signatures now!
FAQs online signature
- Is life science the same as pharma?
The Pharmaceuticals industry is often considered a Life Sciences industry, with its focus on the development, creation, and distribution of medications to aid the health of living things.
- What are the life sciences technology companies?
Biotech companies to know include GRAIL, Benchling, Helix, ArcherDX, VieCure, Paige, Flatiron Health, and PWNHealth.
- What is the difference between biotech and life sciences?
In some instances, biotech and life sciences are used interchangeably; however, biotech is a subset of life science. Life science is a broader term that encompasses all the scientific disciplines that study living organisms, including biology, biochemistry, genetics, and microbiology.
- What is considered a life science?
The life sciences are made up of the sciences that study living things. Biology, zoology, botany, and ecology are all life sciences, for example. These sciences continue to make new discoveries about the animals, plants, and fungi we share a planet with.
- What is a life science company?
The life sciences industry consists of companies operating in the fields of pharmaceuticals, biotechnology, medical devices, biomedical technologies, nutraceuticals, cosmeceuticals, food processing, and others that dedicate their efforts to creating products to improve the lives of organisms.
- What is a pipeline tracker?
The sales pipeline is a comprehensive term for every stage of your sales funnel. Once an opportunity enters your sales pipeline, it moves from stage to stage until the prospect either purchases or decides not to buy. A sales pipeline tracker, therefore, is a tool that tracks every opportunity's progress through the pipeline.
Trusted e-signature solution — what our customers are saying
How to create outlook signature
First off, thank you very much for attending this talk; I definitely appreciate it after a whole night of hard drinking and partying. As the title suggests, we'll be talking about what you should be doing in your Docker or container pipeline. We'll cover the threats you face while working with Docker containers and pipelines, how to push images through your various environments with integrity, how to monitor these activities, and of course how to make sure things keep happening securely through incident response planning, digital forensics, vulnerability management, and hardening. Hopefully we'll get some time for a live memory forensics demo, so let's cross our fingers for that.

Threats. Container threats are not that different from those in traditional IT environments. We had cloud environments when VMs came along, and now we have containers to deal with. The runtime is quite similar: there are container exploits that can cause your container to run malicious code, and this can expose resources such as your file system, your databases, and other resources attached to it. As a result of such a compromise, the attacker might be able to break out of your container. That's scary, because then they have access to your host, the credentials stored there, and other things that can lead to lateral movement. Cross-container attacks are possible too: you have other resources and other containers on the same host or on different hosts, and this can lead to more information being compromised. And of course there are denial-of-service attacks that can take down your systems and cause reputation damage or even loss of revenue if you're not accounting for that and planning ahead so your resources are allocated properly. You also need to be careful about your data, whether it's at rest or in transit. If you're dealing with compliance frameworks and regulatory bodies, which you probably are, you have to make sure things are secured and encrypted when they're stored and when they're passed on to other resources. Then you need to watch the images themselves: you'll be pulling and pushing images, rolling new images, and having your devs create new images, so everything goes through a lot of meat grinders, and the end product can be unpredictable if you're not keeping track of things.

Let's dive deeper into mitigations. We've talked about the threats, so what do we do to make things more secure, visible, and traceable? I always like to use the "known unknowns" quote from Donald Rumsfeld; it was used again yesterday in the great talk by Aaron from NCC Group, which I really recommend catching up on if you haven't watched it. In security we like to talk about known knowns, the things we know we're dealing with. A lot of the time we know what's going on, but certain things are known yet ignored or treated as accepted risks, and then there are the unknown unknowns, the risks you don't even know you're accepting. A lot of companies say, "we're secure because we haven't been compromised"; well, maybe you just don't know about it. A lot of security companies came out last week saying that most companies have been compromised at some point, or are in the process of being compromised, and have no idea about it. What we're trying to do here is counter all these threats, especially the unknown unknowns, by applying security at different levels.
To do this, there are a few basic stages that Docker security revolves around. First, we want to make sure we run on a secure platform: our infrastructure needs to be hardened, monitored, and properly access-controlled. Then there are the access controls themselves, content security to make sure images are properly tracked and validated, and of course monitoring of everything that's going on: store the logs, watch them, and be scalable about it. In the event of a compromise, and it will happen, it's not a question of if but when, the classic mantra, you need to be prepared with an incident response plan, policies, and testing, which we'll touch on but not in much detail.

Talking about the pipeline, as you're all aware it starts with the base OS image and then proceeds through various stages. The base image gets supplied to the developers, the devs consume it and add their apps on top, and then they push it to the dev registry or repository. Release engineering runs continuous integration tests, makes sure things are proper, and then decides to push it to production or other environments. All the steps we're looking at here have different requirements, and all of them require monitoring. We want to know what's going on at every step, and we also want to know what the steps themselves are doing: are things being signed properly, are the devs signing their images as they add components on top of the base OS image? If they're signing their images, and if they're using a signed base image, we know things haven't been compromised. In transit we want to make sure the devs are who they say they are; we don't want an attacker picking up an image, perhaps using compromised certificates, and pushing it over to production. The monitoring should pick that up, and the other layers of mitigation should be keeping an eye on it as well.

Image vulnerability scans are important too, because even if you put good effort into hardening your infrastructure, your images, and your hosts, things are going to slip. You want to run these scans in an automated fashion, looking at the content of your containers and hosts: what libraries are present, what network ports are exposed, and what third-party libraries are being integrated into your code whose vulnerabilities you know nothing about. That's what always gets people: they include third-party libraries, they don't know whether those are patched because they're not in scope, they only pay attention to their own code base, and those dependencies cause major pain later down the line if you don't keep an eye on them. After all this, being prepared with incident response is always the last stage, along with digital forensics, which is part of it: discovering what really happened, attribution if possible, the root cause, and the eventual lessons learned, and how you feed that back to improve the cycle. It's a continuous improvement cycle you have to keep going through, and it takes a lot of work, a lot of planning, automation, and executive buy-in; that's the key word.
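As a rough illustration of the kind of automated per-stage gate described above, here is a toy Python sketch that pulls a candidate image and inspects its layer history for packages that should never end up in production (compilers, shells, and similar), failing the pipeline step if any are found. The registry name, tag, and banned-package list are placeholders I'm assuming for the example; a real pipeline would rely on a dedicated vulnerability scanner rather than this string check.

```python
import docker  # Docker SDK for Python (pip install docker); assumed available

# Toy CI gate: pull a candidate image and fail the build if its layer history
# shows packages we do not want in production images (compilers, shells, etc.).
# "registry.example.com/app" and the banned-package list are placeholders.
BANNED = ("gcc", "perl", "openssh-server", "netcat")

def gate(repository: str, tag: str) -> bool:
    client = docker.from_env()
    image = client.images.pull(repository, tag=tag)
    for layer in image.history():                      # one dict per layer
        created_by = (layer.get("CreatedBy") or "").lower()
        hits = [pkg for pkg in BANNED if pkg in created_by]
        if hits:
            print(f"blocked: layer installs {hits}: {created_by[:80]}")
            return False
    print(f"{repository}:{tag} passed the toy policy check")
    return True

if __name__ == "__main__":
    if not gate("registry.example.com/app", "candidate"):
        raise SystemExit(1)   # fail the CI job so the image is not promoted
```

The point of the sketch is simply that the gate runs automatically on every promotion, not that this particular check is sufficient.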
Now let's go into a bit more detail on access control and authentication. What we want in these environments is to keep track of who has access to what, and when they accessed it. Leveraging LDAP over SSL for image transactions, whether push or pull, becomes important, because you want the devs who really need access to have it, and release engineering to have access to, and visibility into, all these images. The service accounts also need proper access to do their automated continuous integration testing, and so do the security services, for scanning and integrity testing. For environments where you won't really be supporting LDAP authentication, you can use mutual authentication based on client certificates, which for fast transactions in production turns out to be more efficient. The diagram shows a basic flow: as the dev systems push to the dev registry, they have to authenticate over LDAP, they can't push or pull without it, and the whole exchange happens over HTTPS. All the different levels except the last one involve HTTPS, and the LDAP-based authentication itself also happens over SSL. The continuous integration solution conducts its transactions over secure channels too. By using secure channels, what you're doing is making sure that data in transit isn't compromised. The classic man-in-the-middle attack is feasible if you're not securing your transport: somebody could modify your binary or your image, inject some code into your container, execute their own malware, and you wouldn't know about it. Hopefully, and we'll talk about this next, your signing will take care of that man-in-the-middle attack even if you are exposed.

If the images you produce are signed and properly tracked in a Docker Trusted Registry, you'll have an account of all the transactions related to those images. Most companies like Salesforce, or the Salesforce subsidiaries, are going to be using on-premise registries, and almost all transactions are done over LDAP, specifically over SSL. We also have a separation of images: based on the users, dev images and prod images are kept in separate registries, and they belong to different stages of the life cycle I showed earlier. When images are pulled, they are validated automatically by the platform. Docker provides a great tool here with transparent image validation based on signatures: it only takes one line of configuration, enabling Docker Content Trust, and it makes sure that the images being transacted are properly signed by people who are known to the environment. Docker Notary handles this; it works with the registry and provides accountability, and hopefully attribution, as to who created an image and when things happened, and assurance that the images haven't been compromised as they flow through your environment. The Notary master acts like a broker and makes sure all your transactions are accountable. The continuous integration environment builds and tests your apps and the other services you're pushing, signs them, pushes them to the Notary master, and then prod or other DMZ services consume these images, of course after validating them. This gives a higher-level picture of how all these stages work with each other. I'd like to thank Andre Falco from Salesforce for putting this infrastructure together; it's challenging, especially fitting Docker infrastructure in with what already exists and the different rules we need to work with inside the environment.
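The "one line of configuration" mentioned above is the content-trust switch on the Docker client. Here is a minimal sketch of what that looks like when driven from Python, assuming a signed repository already exists; the image reference is a hypothetical placeholder.

```python
import os
import subprocess

# Minimal sketch: push (and later pull) an image with Docker Content Trust
# enabled, so the client signs on push and refuses unsigned tags on pull.
# "registry.example.com/team/app:1.4" is a hypothetical image reference.
env = dict(os.environ, DOCKER_CONTENT_TRUST="1")

subprocess.run(["docker", "push", "registry.example.com/team/app:1.4"],
               env=env, check=True)
subprocess.run(["docker", "pull", "registry.example.com/team/app:1.4"],
               env=env, check=True)
```

With the environment variable set, an unsigned or tampered tag fails the pull instead of silently landing in the next stage of the pipeline.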
As you can see, the dev systems push images to the dev Docker registry over LDAP and SSL, and later these get consumed by the continuous integration platform. Bugs get reported to the ticketing system, and release engineering makes sure they're cleared and then pushes the images over to the master Docker registry. At the same time the images are signed and posted to the Notary master, and as they get consumed by the DMZ and prod environments they are validated. The whole set of transactions is done over secure channels to prevent tampering by the various parties that might be in your environment.

Let's talk about hardening. This involves both the host and the containers. The host side is pretty traditional; everyone should be familiar with how to make a host more secure compared to what comes out of the box after a fresh install. The most important part that people miss is frequent patching: a lot of people have various SLAs or timelines to patch by, but with containers it becomes a really fluid environment where you need to be sure that what you're running matches what's on disk and that what's running is properly patched. Patching the host is the first step; if your base is secure, if you know it's up to date, you're not open to the latest publicly reported vulnerabilities. Of course you can't do much about zero days, and you should be ahead of the crowd, but to protect against the unknown unknowns you want to minimize your attack surface by minimizing the number of components you include on your images. Things like compilers, shells, Perl, or any other scripting language should not be included on your host if possible; most of the compile-time tooling should be isolated to your dev environment. You'll probably have some scripts for admin-related tasks, but that should be limited to certain binaries, and you shouldn't be exposing your scripting languages to regular users, or for example to the docker user. To protect the memory and kernel on the host, a lot of security professionals recommend the grsecurity patches. What they do is basically prevent your runtime from being tampered with by attackers in the event of a compromise; they make it really difficult for attackers to inject the code they want to run, code that would give rise to further compromise of your infrastructure. Another thing attackers like to do is replace binaries such as your SSH service so they can dump your credentials or your keys and use them to expand their hold on your environment by spreading into your other systems. And obviously we want to leverage the Linux isolation capabilities; Docker already provides lots of tools for you, but you should go even further in making sure things are not exposed or easily available to the attackers who will eventually get onto your system.

For containers, the main challenge we sometimes face is again the speed of patching. I keep coming back to this, but it really is important to include the latest patches and the latest software on your systems, because there are several benefits: performance, sometimes not obviously, and a more secure platform where you don't have to worry about the low-hanging fruit. It also makes things easier for your investigators, knowing you're running the latest code base; even if something happens, they won't have to deal with all the different attack trees that would be possible if you were running vulnerable applications or software.
You should also leverage user namespaces for privilege separation, and make sure you're running as the lowest-privileged user on your hosts, so that you minimize the amount of damage an attacker can do. As on the host, you should generally not include things such as compilers, SSH access, or any kind of shell environment in your container if you can avoid it. That's sometimes tough, because devs like to hop onto the images and test things; that's fine in dev environments, but in production it should not be allowed, and production images should not include all these components that increase the attack surface. Other things to do, as already mentioned, are avoiding the privileged flag and using the read-only flag. It might make things a bit more difficult to run, but it would be great if you could run read-only to get immutability of the containers. Also limit access to the docker user and group; that seems to be where we get the most questions: "we have devs with docker group membership, is this secure?" Well, you might not want to give that level of access to your devs, and even better, they shouldn't have access to your prod environments to begin with. As you know, docker group access gives you really high privileges, and that could provide access into your containers and the resources they hold. You also want to limit the amount of access the containers have to your host resources, or to any other network-based resource; they should only have access to what they need, on a need-to-know basis, as a lot of security folks like to say.

Docker has been doing a great job providing guidelines and security measures for their users. One of the tools they have is Docker Bench for Security. It provides a baseline assessment of your environment and will produce a long list of things you should fix if you haven't already. These are suggestions; some of them are fairly strict things you could do, and some are things you should be doing to begin with, like keeping users isolated to non-docker groups. If you follow these best practices, generally speaking you should be in a safe place from a hardening perspective.
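As a rough sketch of the run-time restrictions mentioned above (read-only root filesystem, a non-root user, limited resources), here is how they might look with the Docker SDK for Python. The image name and user ID are placeholders, and the dropped capabilities and no-new-privileges option are additional restrictions I'm adding for illustration beyond what the talk explicitly lists; the exact set of flags should be tuned to the workload.

```python
import docker  # Docker SDK for Python (pip install docker); assumed available

client = docker.from_env()

# Launch a container with a reduced attack surface: read-only root filesystem,
# an unprivileged user, all Linux capabilities dropped, and no privilege
# escalation. "registry.example.com/app:1.4" and UID 1000 are placeholders.
container = client.containers.run(
    "registry.example.com/app:1.4",
    detach=True,
    read_only=True,                       # immutable root filesystem
    user="1000:1000",                     # run as a non-root user
    cap_drop=["ALL"],                     # drop every Linux capability
    security_opt=["no-new-privileges"],   # block setuid-style escalation
    tmpfs={"/tmp": "size=64m"},           # writable scratch space only
    mem_limit="256m",                     # cap resource usage (DoS mitigation)
    pids_limit=100,
)
print(container.short_id, container.status)
```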
The other aspect of hardening is making sure you have a vulnerability management program. A lot of people think vulnerability management is running a scan and checking off whatever was reported as critical; there's more to it than that. Going through the basics, you should be scanning your images to make sure the base image and the code running on it have been properly patched. You don't want libraries in there with vulnerabilities, or any other OS-level vulnerabilities. There was recently a publicly hosted image on the Docker registry that still included a vulnerable SSL library, so anyone running it was potentially exposed to that vulnerability. Even if things are publicly hosted, don't trust them: run them through your own scanners and make sure things are what you expect them to be. Trust but verify, that's what I always like to say. Scanning the hosts and containers for what they have inside is important, and as they run you also want to make sure that what they're exposing to the network is what you expect. Certain devs might include extra code that isn't obvious at first look and that actually launches network services and exposes ports you might not be monitoring; that could be problematic. So network scans become important for confirming that what's running in your environment is what you expect, and sometimes rogue services or containers can be detected this way. Going back to the source code, you should also be keeping track of the code itself, doing manual and automated static code analysis; that's obviously a whole different topic, but code signing becomes really important as well. You're signing your container images; you should have your code signed too, and make sure it's properly handed off between devs and users.

We've talked about the scanning aspect, but the remediation part matters just as much: you want to make sure things are patched in a timely manner. That delta-T is pretty variable depending on which company and which environment you're in; it could be four months after a critical vulnerability is reported, or it could be a day. Companies have different priorities and reputations to protect; the ones that really worry about it have a shorter SLA and a prioritization plan based on the level of risk that's been reported. You want a system, and people looking at it, tracking these issues and making sure things are pushed through and patched properly, and you want this documented and owned by a high-level group that can hold people accountable. It's accepted in the community that certain groups within companies will prefer to hold back patches because they would break certain services. People always prefer higher uptime, but sometimes higher uptime means you're running exposed, risky components that might cause you even more downtime later, so there's always a trade-off: do you take a short downtime to patch, or risk a longer one?

Once you have patched things, you want to make sure that the patched containers and base images are actually the ones running on your hosts. As a security professional I've seen interesting cases where the base image was patched, the code itself was patched, but containers launched before the fix were still running and had never been relaunched. That's a problem: you might think you're secure because you patched, but what's in memory doesn't match what's on disk. It's something you need to keep in mind after you've patched or updated systems.
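A minimal sketch of that last check, assuming the Docker SDK for Python: compare the image each running container was started from against the image its tag currently resolves to, and flag containers that predate the patched build. All names here are illustrative, and the comparison only makes sense when containers are started from tags rather than raw image IDs.

```python
import docker  # Docker SDK for Python; assumed available

# Flag containers that are still running an older build of the image their
# tag now points to, i.e. containers launched before the last patch rollout.
client = docker.from_env()

for container in client.containers.list():
    ref = container.attrs["Config"]["Image"]   # tag the container was run from
    try:
        current = client.images.get(ref)       # what that tag resolves to now
    except docker.errors.ImageNotFound:
        continue                               # tag no longer present locally
    if container.image.id != current.id:
        print(f"STALE: {container.name} was started from an older {ref} "
              f"({container.image.short_id}); relaunch to pick up {current.short_id}")
```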
So we've made sure things are properly secured; hopefully we're hardened, running things through secure channels, and securing data at rest. We still want to know what's going on, right? You want monitoring: what's happening on your network, what's happening on your endpoints, what's happening in memory. The network infrastructure is one place where you'll see everything that needs to talk outbound or inbound, so the first thing you want is the proper infrastructure in place to see the transactions between the resources you're exposing. Generally speaking, in the environment we support, we have the networking plumbed through our physical infrastructure as much as we can, so we have more accountability over the containers, can see what traffic they're processing and responding to, and can make sure we're analyzing that traffic properly. We capture traffic both on the physical network and on the host bridges the Docker containers create, pipe it through an IDS and a NetFlow engine, and generate behavioral network information. NetFlow is pretty much the accepted behavioral way of looking at network traffic, and the IDS provides your baseline; it's like antivirus for the network, but you should have it regardless, because sometimes things happen before the real attack: an attacker who has compromised a host will scan your network to see if there's any low-hanging fruit, and an IDS will catch that most of the time before the advanced attack happens. Then you pipe all these logs into a SIEM or a log-aggregation and monitoring solution where you can process the data and make it human-readable and actionable.

On the network you're looking at how the containers talk to each other, what the containers send to or receive from your hosts, and how your containers interact with your resources: your databases, Hadoop clusters, you name it. Knowing and having a baseline on this traffic gives you an idea when a delta happens; it gives you the opportunity to say, hey, this is an outlier, we should take a look at this resource. Monitoring has helped a lot of companies catch attackers before they dumped a database and leaked it outbound; even just a spike in network traffic can be something of a predictor that something funny is going on and worth a look. But if you don't have monitoring in place, if you're not piping your network traffic through solutions such as NetFlow, an IDS, or full packet capture, you won't be able to see what's going on or dive deeper. Sometimes you want to know: OK, we've seen the spike, what is that traffic? Is it something disguised as SSL, is it SSH going over port 443 to somewhere? That becomes an interesting way of looking at it, because most of the time attackers who compromise your system will be exfiltrating data or calling back out for further commands.

In a bit more detail, here's what you should be capturing on hosts: the logs, all of them. Some people say "we'll just keep the authentication logs"; no, that's not going to be enough. You want every log possible so you have a full picture of what's transpiring on your hosts. If you have the budget or resources, you can analyze this data with machine learning; anomaly detection is a phrase being used a lot nowadays in the security community, but you need to scale properly to be able to monitor and act on these things. On the containers you want to do the same thing: capture all the logs, both the OS component logs and the application logs. Applications also do authentication, and they also have logs saying somebody ran this query, somebody accessed this resource, this query returned a million rows of customer data; we need to know why a user or something is dumping a million records, and that becomes the interesting question to answer once you come across it. Also, from an attribution standpoint, having containers tracked at the network level gives you better attribution as to what's been compromised and saves time from a root-cause-analysis perspective. Other things you should be looking at are disk activity monitoring and file system integrity monitoring, with the various tools available out there, free as well as paid, because you want to keep an eye on the config files and binaries and make sure they're what you expect.
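As a toy illustration of the file-integrity monitoring just mentioned, here is a small Python sketch that hashes a set of watched files and reports anything that changed since the last baseline. Real deployments would use a dedicated integrity-monitoring tool; the watched paths and baseline location are only examples.

```python
import hashlib
import json
from pathlib import Path

# Toy file-integrity check: hash a few watched files and compare against a
# stored baseline, printing anything that was added, removed, or modified.
# The watched paths and baseline location are illustrative only.
WATCHED = ["/etc/ssh/sshd_config", "/usr/bin/docker", "/etc/passwd"]
BASELINE = Path("/var/lib/fim/baseline.json")

def snapshot() -> dict:
    digests = {}
    for name in WATCHED:
        p = Path(name)
        if p.is_file():
            digests[name] = hashlib.sha256(p.read_bytes()).hexdigest()
    return digests

current = snapshot()
previous = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}

for path in sorted(set(previous) | set(current)):
    if previous.get(path) != current.get(path):
        print(f"CHANGED: {path}")

BASELINE.parent.mkdir(parents=True, exist_ok=True)
BASELINE.write_text(json.dumps(current, indent=2))
```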
Runtime-layer monitoring can be provided by different vendors, and there are open-source solutions as well; I'll talk about one aspect of it later in the digital forensics part, which is memory monitoring. You can actually look at memory at runtime and make sure things are transpiring as they're supposed to, meaning the binaries match what's in memory, and the processes emitting network traffic really are the ones talking. When you compare the network layer against what's on the host or in the container, you might see discrepancies, and a discrepancy detected through memory analysis can give you a very good starting point for forensics.

So let's hop into forensics. The most important question is: where do we start? You usually get a call in the middle of the night saying "we got an incident, bad stuff happened", and you're like, OK, what's the bad stuff? If you don't have a starting point or a lead, it becomes finding a needle in a haystack. If you have proper monitoring in place and visibility into what you're doing, it gives your incident responders a good starting point and lets them mitigate things much faster. Things are moving toward incident mitigation: we want to shorten the amount of time it takes for an incident to be closed, and to do that you have to have the capabilities to respond. Those capabilities include visibility into your memory, whether in real time, close to real time, or through periodic memory dumps you can analyze, plus the capability to do disk forensics. Nowadays nobody is dumping full disk images anymore; everyone is focusing on specific disk artifacts, so you won't really do full-disk forensics, but you should be able to collect disk-level artifacts and generate timelines.

We've already talked about the networking side, so let's talk about disk forensics, which is based on disk artifacts and building super timelines. What are super timelines? They're the big picture. Most people are familiar with log aggregators and monitors that provide a holistic view of what's going on, but from a forensics standpoint you need more data to figure out when a certain incident started. You want to know, for example, the file system access times and hashes, and you want system and application events correlated with your file system events, so you can see all the different levels of activity as they transpired and determine what got compromised, where the attackers spread, and whether any data was exposed. The tools you'd traditionally use are the Sleuth Kit and Plaso to build a timeline that integrates all these artifacts, and if you need access to the raw disk to carve out, say, malicious binaries, you can use dd, Scalpel, or other forensics tools.
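To make the timeline idea concrete, here is a tiny Python sketch that walks a directory tree and emits file events (modification, metadata change, access) in chronological order. It's a toy version of what purpose-built tools like Plaso produce across many artifact types at once, and the scanned path is just an example.

```python
import os
from datetime import datetime, timezone

# Toy "timeline": collect mtime/ctime/atime for every file under a directory
# and print them in chronological order, the way a forensic super timeline
# interleaves events from many sources. The root path is illustrative.
ROOT = "/var/lib/docker/containers"

events = []
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue
        for label, ts in (("modified", st.st_mtime),
                          ("metadata", st.st_ctime),
                          ("accessed", st.st_atime)):
            events.append((ts, label, path))

for ts, label, path in sorted(events):
    stamp = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    print(f"{stamp}  {label:8}  {path}")
```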
Then there's memory forensics, which is my passion. Why? Because nothing can hide in memory: anything that needs to run has to exist in memory at some point. Even things that are garbage collected will be there briefly, probably for five to ten minutes, so if you're fast enough you can extract the artifact. In one of my talks a couple of years ago I was able to dump Bitcoin keys and other artifacts from memory even some time after the transaction had occurred, which gives you a good starting point from an attribution standpoint. It's also quite a bit faster than disk forensics: you don't have to sift through terabytes of data, you only have to focus on a smaller representation of what's running and what's doing things. The diagram shows the basics: you have the RAM that Docker is running in, and then you have the tools that help you dump that memory into a sample or provide access to live memory, analyze it, and let you query this information almost like a database, so you can see what's running in memory and decide how you want to act. For access, you can take memory samples by dumping with tools such as LiME or LinPmem, or you can access memory live through modules that expose it as a device. For analysis, my favorite tool is the Volatility framework, an open-source project that's been going for years and lets you make sense of a memory dump. It currently supports Windows, Mac, Android, Linux, and other ARM-based platforms, and it's a really useful toolkit if you need to do forensics.

So what does memory forensics do for you? This is the familiar process-tree view of things, but when you look in memory, the memory structures you're accessing can actually give you a historical view: processes that might not show up if you run a process listing from your command line are still visible in memory. If there's a rootkit running on your host, it might have hooked your API calls or your syscalls and be hiding processes, so any rogue process, for example a rogue container or a process masquerading as a Docker container, can be detected this way by looking at the parent-child relationships. Another thing we talked about is looking at resources: the tmpfs plugin in the Volatility framework lets you view the temporary file system, and when you're running your container on top of tmpfs you might want to know what's going on there, have a list of what's exposed, and know which container is accessing the resources exposed on disk or in memory. You also want to account for the components being used or consumed on the host or by your containers: you want to know, for example, if you're being side-loaded, meaning code is being injected in the form of a library. Being able to list which libraries are loaded and executing in memory lets you detect those injections and manipulations. Process integrity monitoring also becomes interesting, because nowadays a lot of exploits never write to disk; they do everything in memory, and since containers aren't bounced as frequently as you might imagine, the malicious code sits in memory doing its work, exfiltrating data or, most probably, spreading laterally. What this approach provides is a way to compare what's on disk with what's in memory and make sure things in memory haven't been tampered with.
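As a rough sketch of what such an analysis looks like in practice, here is a small Python wrapper around the Volatility 2 command-line tool, running its Linux process-tree and syscall-table-check plugins against a memory sample captured with LiME. The dump path and the profile name are placeholders; a matching profile has to be built for the exact kernel of the host being analyzed, and plugin availability varies by Volatility version.

```python
import subprocess

# Minimal sketch: run two Volatility 2 Linux plugins against a memory sample
# captured with LiME. The dump path and profile are placeholders; a matching
# profile must be built for the exact kernel of the analyzed host.
DUMP = "/cases/host01/mem.lime"
PROFILE = "LinuxUbuntu1604x64"          # hypothetical profile name

def run_plugin(plugin: str) -> str:
    result = subprocess.run(
        ["vol.py", "-f", DUMP, f"--profile={PROFILE}", plugin],
        capture_output=True, text=True, check=True)
    return result.stdout

# Parent/child process view, including processes hidden from a live `ps`.
print(run_plugin("linux_pstree"))
# Check the syscall table for hooks, the rootkit technique described above.
print(run_plugin("linux_check_syscall"))
```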
For the demo I had modified the Docker binary, a quick and easy change, and running this plugin shows that there is a difference between what's on disk and what's in memory: the top output is disk, the bottom is memory, and instead of 90 it reads 91. Trivial here, but if other things are modified it becomes a real problem. I was planning to do a live demo, but unfortunately I don't have enough time for that, I apologize.

Just to summarize what it takes to secure your pipeline: you want to focus on four main topics, platform security, content security, access controls, and monitoring and response. For platform security, you obviously want to isolate your resources and containers, harden them, make sure they're secure and not running unpatched vulnerable code, follow the best practices recommended by your container provider, especially Docker, and do your vulnerability scans. For content security, you're making sure that what you sign and provide to your devs and to production is accounted for, that you're not pushing or pulling tampered images, and that you're using what's provided by the ecosystem to track all these things, such as the Docker registry, Docker Notary, and so on. Access control is really important: you want to know who is acting in your environment and make sure all your transactions are happening over TLS or SSL; this raises the bar for an attacker to compromise your environment much more than you would expect. For monitoring and response, you always have to plan ahead and make sure you're ready to act if something happens. You need an incident response plan, tested regularly; a lot of people have a plan on paper but never run through it and don't know who's supposed to be included in it. The roles need to get updated, the people acting in different capacities need to update it, and it should be tested quarterly at least. This is something that gets missed a lot, but it's being pushed by the compliance frameworks nowadays, so it's getting hard to avoid, yet it still happens. Vulnerability management isn't only about patching; it's about making sure things happen in a timely manner, that there's attribution, and that patterns are detected for why vulnerabilities keep getting into your images, whether through the code or through the base image creation. And you want to know what's going on in your environment: the network is a must, because everything that gets compromised will talk outbound, whether frequently or infrequently. In one case there was a compromised host that only emitted traffic every two months; if you're not really looking for that kind of infrequent traffic, you won't see it, and your logs alone won't help you. But having logs really does help with the low-hanging fruit, with baselines, and with being able to see the delta. Forensics will help you with root cause analysis and with gauging the amount of damage done by the attacker, and it will definitely provide good lessons learned for the devs, the infrastructure groups, and the executive-level people, so they can allocate resources, for example giving devs more time to write secure code instead of just pushing them to deliver in X days. With that, I'd like to thank everyone who attended, and I'd like to thank my colleagues from Salesforce, they've been great support, as has the company. If you have any questions, feel free; I have the references for this talk listed on my slide deck. Thank you.
[Host] There isn't actually another session directly after this one. We are out of our time slot, but because there's nothing directly after, we can maybe take five minutes if people do have questions. There is a microphone in the aisle, and we are recording, so please put all questions into the mic. When five minutes are up I'm going to have to call it.

Q: I'll be quick, thank you, I learned a lot from this. I'm currently using Iron.io and deploying lots of apps in containers. One thing you didn't address is that I'm pulling stuff from third-party APIs; what can I do about security when pulling from third-party APIs? I'm also sharing information using a cache, Iron.io has a caching solution, so other containers can pull off that cache. Just curious about those two things.

A: That's an interesting question, not directly related to containers, obviously, but from an API standpoint you want to make sure all the traffic goes over secure channels, with authentication using keys or credentials, and logging becomes really important as to what's being queried and when. As for your caches, hopefully not everybody has access to those, only the containers that need them.

Q: So if I have a cache that's shared across containers, is that considered a vulnerability?

A: If only the containers that have to process the cache have access to it, and no other container does, that's fine. Otherwise it's a problem; you don't want all the containers to have access to that. You could enforce it at the network level with firewalling, depending on how you do your network plumbing, or you can do it at the resource level.
Docker provides a good interface for limiting who can talk to whom and on what port, so you can leverage that as well. Obviously, when you have a large number of containers running that becomes hard to manage, hence automation, and I'm sure there's a vendor out there with a solution for you.

Q: Thank you. Hi, in the talk you mentioned keeping an eye on third-party libraries. If the developers have access to the Dockerfile, it seems pretty easy for them to add things, not intentionally maliciously, but how do you do that in practice? How do you know if someone adds something new, what you have out there, how many copies of it, and what versions?

A: First off, you wouldn't actually let people download random base images and push them into, say, the dev registry; we have tight control over that. If a dev downloads a public image, modifies it, adds their own code, and tries to push it, it's not going to work, because of the signing that's going on.

Q: It's more like adding apt-get install lib-something, that kind of thing, where they roll their own Dockerfile. Are your devs allowed to do that, and if so, how do you know when they do?

A: The base images are controlled by the infrastructure team, so the devs don't have access to those and can't add anything to the base image. They can add their own layers of code, but only based on the capabilities provided by the base image, and release engineering makes sure the code that needs to run has access to only the things it needs, for example Java and nothing else. So you can't really roll your own base image and push it; there are checks along the way to prevent that from happening. You always need a gatekeeper.

Q: OK, thanks. A quick question on the networking part: you mentioned the firewall, the IDS, and all of that. Are these appliances container-aware, or are they just inspecting all the traffic coming out of the host?

A: It's tough to scroll back to that slide, but network monitoring happens both at the regular physical layer and on the container bridge that the containers use to talk to each other, so we're monitoring pretty much everything transpiring within the environment, both in the physical realm and between containers talking to each other, possibly on the same host.

Q: How are you inserting a firewall in that path?

A: For containers talking to each other, it always depends on your plumbing. In specific environments you can make the traffic actually go outside of the host and come back, and then your firewalling can provide different security zones; that's one approach. If you want to be more granular you can do it in your container environment as well, leveraging Docker, in which case you're using iptables rules; that's possible too.
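As a small sketch of the per-container restriction discussed in this exchange, here is how an isolated, internal-only network for a shared cache might look with the Docker SDK for Python: only containers explicitly attached to the network can reach the cache. The network, container, and image names are placeholders, and this is one possible approach rather than the setup described by the speaker.

```python
import docker  # Docker SDK for Python; assumed available

client = docker.from_env()

# Put the cache on its own internal bridge network: containers not attached
# to "cache-net" cannot reach it, and the network has no external egress.
# Image names and container names are illustrative.
net = client.networks.create("cache-net", driver="bridge", internal=True)

cache = client.containers.run("redis:7-alpine", name="shared-cache",
                              detach=True, network="cache-net")

# Only the worker that needs the cache gets attached to the network.
worker = client.containers.run("registry.example.com/worker:1.0",
                               name="cache-worker", detach=True)
net.connect(worker)

# Any other container left on the default bridge cannot talk to "shared-cache".
```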
Q: Yes, but I was interested in your particular example at Salesforce: what approach are you taking there?

A: We can take that question offline; I can only speak about so much.

[Host] Last question.

Q: It's kind of a continuation of what he was saying. I talked to David Lawrence at Docker, he's on the security team, and he had this view that applications should have SSL connectivity all the way into the application. You had a diagram out there showing your network infrastructure doing monitoring before the traffic gets to the container. Do you have an opinion on that view? It's more of a philosophical question.

A: I couldn't really hear your question, I'm sorry, just the last part.

Q: Basically, if you have end-to-end encryption into the application, which might be a philosophical view, what's your stance on that? I noticed you had some network monitoring where you watch the traffic as it comes into your network.

A: Sure. As far as encrypted network traffic goes, if you're not leveraging your private key to decrypt the traffic on the fly, what you can do is look at the NetFlow behavioral side of it and look for patterns of traffic that aren't regular within your environment: unusual ports, traffic spikes, or endpoints that don't usually talk to each other suddenly talking. That's a giveaway as well. So you might not have visibility into the payload, but you have visibility into what's going across the network.

Q: And in your view is that a good way of doing it, or is offloading the SSL certificates a good way? Should application-to-application traffic inside your network use SSL certificates?

A: Application to application it should be encrypted. And if you want to see what's transpiring, most of the time it's a lot of traffic to look at; if you wanted to inspect, say, your Hadoop traffic, it would be close to impossible to store all that information, you'd end up with a copy of your Hadoop cluster sitting somewhere. So you really want to look at the patterns of network traffic rather than what's inside it, and in case of an incident you'll have it stored somewhere, maybe in a full network packet capture, and then you can do a deep dive, using your SSL certs to decrypt it. It all depends on the level of contingencies, how deep you want to dig, your SLAs, your root-cause-analysis responsibilities, and how much information you need to provide to your end users about the level of compromise. Different institutions have different responsibilities and different stories to tell their people.

Q: Thank you.

[Host] Thank you, thanks Cem, thanks everybody.