Deal pipelines for Life Sciences

Transform your Life Sciences deals with airSlate SignNow's user-friendly platform. Experience great ROI and superior support for your business.

airSlate SignNow regularly wins awards for ease of use and setup

See airSlate SignNow eSignatures in action

Create secure and intuitive e-signature workflows on any device, track the status of documents right in your account, build online fillable forms – all within a single solution.

  • Collect signatures 24x faster
  • Reduce costs by $30 per document
  • Save up to 40 hours per employee per month

Our user reviews speak for themselves

Kodi-Marie Evans
Director of NetSuite Operations at Xerox
airSlate SignNow provides us with the flexibility needed to get the right signatures on the right documents, in the right formats, based on our integration with NetSuite.
Samantha Jo
Enterprise Client Partner at Yelp
airSlate SignNow has made life easier for me. It has been huge to have the ability to sign contracts on-the-go! It is now less stressful to get things done efficiently and promptly.
Megan Bond
Digital marketing management at Electrolux
This software has added to our business value. I have gotten rid of repetitive tasks. I am able to create mobile-native web forms. Now I can easily make payment contracts through a fair channel, and their management is very easy.
Walmart
ExxonMobil
Apple
Comcast
Facebook
FedEx

Why choose airSlate SignNow

  • Free 7-day trial. Choose the plan you need and try it risk-free.
  • Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
  • Enterprise-grade security. airSlate SignNow helps you comply with global security standards.

Deal pipelines for Life Sciences

In the rapidly evolving field of Life Sciences, managing deal pipelines efficiently is crucial for success. airSlate SignNow offers a seamless solution for streamlining document signing processes, saving time, and improving workflow productivity.

With airSlate SignNow, businesses in Life Sciences can easily create, send, and eSign important documents, ensuring compliance and security. The platform's user-friendly interface and cost-effective pricing make it a valuable tool for managing deal pipelines effectively.

Streamline your document signing processes with airSlate SignNow and take your deal pipelines to new heights!

airSlate SignNow features that users love

Speed up your paper-based processes with an easy-to-use eSignature solution.

Edit PDFs online
Generate templates of your most used documents for signing and completion.
Create a signing link
Share a document via a link without the need to add recipient emails.
Assign roles to signers
Organize complex signing workflows by adding multiple signers and assigning roles.
Create a document template
Create teams to collaborate on documents and templates in real time.
Add Signature fields
Get accurate signatures exactly where you need them using signature fields.
Archive documents in bulk
Save time by archiving multiple documents at once.

Get legally-binding signatures now!

Online signature FAQs

Here is a list of the most common customer questions. If you can’t find an answer to your question, please don’t hesitate to reach out to us.

Need help? Contact support

Trusted e-signature solution — what our customers are saying

Explore how the airSlate SignNow e-signature platform helps businesses succeed. Hear from real users and what they like most about electronic signing.

Electronic signature for business
5
Judy D

What do you like best?

Much easier to have electronic copies of sales contracts - no more paper. My products are often shipped, so many times I do not see clients face to face. This enables me to still have a valid signed contract.

Read full review
4 years of great experience
5
Tiffany J

What do you like best?

The platform is extremely user friendly. I’ve been easily able to navigate the app with no issues.

Read full review
airSlate SignNow has made the transition of our Executive Director to a remote worker smooth
5
Terry S

What do you like best?

The user-friendliness of the software. It makes it easy to attach forms to be signed and get a quick response and approval.

Read full review

How to create outlook signature

Thank you so much for that wonderful introduction. Look, I can assure you I'm no unicorn; anyone can do what I'm doing. In fact, I think a lot of people here in the audience can do exactly that, and will be doing exactly that in a couple of months' time. So with that, let's kick off "life science at scale". I want to tell you three stories: the first is about the organization I work for, CSIRO in Australia; the second is about the research we do, the disease genes we're finding and how we're finding them using Apache Spark; and the third is about whether, once we identify the disease genes, we can correct them, and doing that with a serverless architecture.

So let's jump straight into the first one. CSIRO is Australia's government research agency, and we're in the top 1% of global research agencies. We are really passionate about translating research into products that people can use in their everyday lives. Probably one of the most famous products we developed is fast Wi-Fi, which is now used in billions of devices all over the world and contributes to our healthy research budget of 1 billion dollars annually. We also developed a vaccine for the Hendra virus, a virus that is three times more deadly than Ebola. On a lighter note, we also developed the Total Wellbeing Diet, a book filled with healthy, delicious recipes that is rivaling Harry Potter and The Da Vinci Code on the best-seller lists. So I think it's a fairly nice balance between stuff that people use and stuff that people enjoy.

I work for the eHealth Research Centre, which is quite a unique digital health agency in the world in that it covers the full spectrum: from basic research, to developing technology that can be used in clinical practice today, all the way up to measuring the impact these technologies have had on improving people's lives. The vision we have at the centre is to improve healthcare research through digital technology and services. Our Wi-Fi equivalent is a cardiac rehabilitation app, the first clinically accredited mobile app that helps people through rehabilitation after a heart attack. You would think that having a heart attack is a life-changing experience and that afterwards you would rethink your lifestyle choices; it turns out it's not quite as good an incentive as you would think. Having this app, which makes rehabilitation more convenient and gamifies the whole process, has increased uptake by 30% and the completion rate by 70%, which is quite staggering. So this little app has already saved lives.

Now, jumping straight into my research area, which is finding disease genes. As you might know, the genome holds the blueprint for every cell in our body. It therefore affects the way we look, the disease risks we have, and even our behavior. I usually do a little exercise here: there is a particular gene that determines whether the last digit of your thumb is straight, which is what I have, or bends all the way back. I have a normal, boring thumb; what about you in here? Yes, I can see some really impressive specimens in the audience. Similarly with coriander: there is a gene that alters the way you perceive the taste of coriander, and there is usually one coriander hater in six in any audience. I think that's a little bit reduced here in India, but can I see a show of hands, who hates coriander? Yes, it's not your fault, it's your genome.
With all of that, of course, there is also a more sinister side to it, in that your genome also holds your future disease risks. For example, cystic fibrosis is one mutation in the three billion letters that we have, and it causes a devastating lung disease. With this, it's no wonder that genomics is used more and more in clinical practice. In fact, by 2025, 50% of the world's population will have been sequenced, at least as estimated by Frost & Sullivan. That means genomics will produce more data than the typical big data disciplines; in fact it is producing more than YouTube, astronomy and Twitter combined, which would amount to around 20 exabytes of new data generated each year. I think that's quite exciting.

The reason I know that analyzing that kind of data is quite challenging is that we are part of Project MinE, an international consortium that looks at the origin of a motor neuron disease called ALS. You might be familiar with it because Stephen Hawking suffered from it, or from the ice bucket challenge. This consortium, with all that publicity, was one of the first, if not the only one, with the power to generate large volumes of genomic data: it will generate 22,000 whole-genome data sets in order to find out what the origin is, which disease gene causes ALS, and ultimately what could be a treatment for it. The process is that all these patients and healthy volunteer controls spit in a tube or have blood taken, from there the genome is unlocked, and together this large cohort of 22,000 individuals will help identify the cause and then, ultimately, the treatment.

So how do you actually find disease genes? As I was saying, we need to accumulate a lot of data to compare individuals. Each line here represents an individual. We then identify the differences between this individual and a reference genome. On average, between you and the person sitting next to you, there are two million differences. Some of them are very good in that they define who you are; others might be less good in that in some individuals they lead to disease. Each box here represents a difference between individuals; then, as I said, we have cases, the ones that have ALS, versus controls, the healthy individuals, and we just spot the difference, in this case where these lines line up. The reality is that complex disease is not as easy as that. Typically it is not one location contributing to the disease but a set of locations: there might be some drivers in there and then some modulating factors, your genetic background. For example, in ALS the time from diagnosis to death is usually three years, but some people manage to hang on longer; Stephen Hawking managed to delay the progression for 40 years, so there must be something in the genome that is protective, and identifying this is part of our mission.

What I'm saying here is that we need to build models over this whole feature set of the three billion letters in the genome, and we do not just want to identify a single feature that contributes; we want the set of features that jointly contributes. Therefore there needs to be some machine learning involved, and in particular for us it was random forests.
But doing a machine learning task on this amount of data is quite challenging. Just to put it back in our heads: we have 22,000 individuals and we have 80 million features, because the two million differences on average, across 22,000 individuals, amount to 80 million features. So the matrix that we do the machine learning over is 22,000 by 80 million, which is roughly 1.7 trillion data points, and that is by no means an easy feat. Again, our task is to identify the features, the columns, that correspond to the truth label, our disease status.

At this stage, because I know you are not a biological audience, I would like to take a moment to think about what other use cases might experience that kind of data, maybe not today but going forward. For example, you might want to predict churn rate, or the occurrence of failure in an industrial plant, or even do fraud or attack detection. Instead of 80 million genomic variants you might have time-series data, concatenated data for multiple events, or sensor data; the IoT community here will probably attest that the number of automatically collected data points is easily heading towards millions of features. Or it might be log files. The task then, rather than detecting disease genes, is to find predictive markers: for example, in a plant you want to predict the failure rate two weeks out, and you want to identify which sensors in the plant can forecast that catastrophic event.

So what do we generally need in order to analyze this kind of wide data, be it genomic data or the data sets you might have to deal with going forward? Bear with me while I tell you how I think about this ecosystem. We're all familiar with the desktop computer, which is really geared towards small data: the convenience of running your analysis then and there, but limited to the compute you have available, typically one node with a couple of CPUs. The next step up, in my mind, is high-performance compute, which is basically a set of these nodes strung together so you can compute things in parallel. The use case here is compute-intensive tasks where each individual calculation can be done independently of the rest; if you have to share information it gets a bit complicated, but you can do it by writing bespoke code, with OpenMPI for example. The problem is that this sharing of information between nodes is cumbersome and not automated, so it does not cater for data-intensive tasks. For data-intensive applications, the ideal method in my mind is Hadoop/Spark, because the way I think about it, it dissolves the boundaries between those nodes by having a standardized way of transacting between them. You can use all the CPUs on your Spark/Hadoop cluster rather than being siloed into the different nodes, if that makes sense.

So when we developed our algorithm, we used Hadoop/Spark to do so. The tool we developed is called VariantSpark; as I said, it's a random forest approach, and we benchmarked it against the machine learning technologies out there that are typically used for random forests: one is Spark, which we use as well, namely Spark ML; the others are an R implementation, a C++ implementation, and I think H2O is a C implementation.
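As a rough illustration only of what a random-forest run over a wide variant matrix looks like, here is a minimal sketch using the stock spark.ml API on a tiny synthetic data set. VariantSpark itself is a purpose-built implementation; every name, number and column below is invented for demonstration.

```python
# Minimal sketch only: spark.ml's stock random forest on a tiny synthetic
# "wide" matrix (genotypes coded 0/1/2). Real cohorts are ~22,000 x 80,000,000.
import random
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("wide-rf-sketch").getOrCreate()

n_samples, n_features = 200, 1000
variant_cols = [f"v{i}" for i in range(n_features)]   # one column per variant

rows = [
    tuple([random.randint(0, 2) for _ in range(n_features)]  # genotype calls
          + [random.randint(0, 1)])                          # case/control label
    for _ in range(n_samples)
]
df = spark.createDataFrame(rows, variant_cols + ["label"])
df = df.withColumn("label", df["label"].cast("double"))

assembled = VectorAssembler(inputCols=variant_cols, outputCol="features").transform(df)
model = RandomForestClassifier(labelCol="label", numTrees=50).fit(assembled)

# Feature importances hint at which variants jointly predict disease status.
print(model.featureImportances)
```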
What I'm plotting here is the accuracy, so how well the tools did on our data set, against the speed. As you can see, VariantSpark in this particular example exceeds the other technologies in both accuracy and speed. The speed is probably to be expected, because we designed VariantSpark exactly for that application case, but the accuracy was actually quite interesting, because underlying all of this is the same sort of algorithm, a random forest, so you would not expect a real difference in accuracy. The interesting thing is that the other technologies were not able to cater for the full data set, so we had to subset the data and compare the tools on those subsets; what I'm plotting is the largest subset each tool was able to run successfully. Spark ML was only able to use 80% of the data set, and the other tools even less; H2O, for example, I think could only handle 50% of the data set. That clearly shows that using the full data set to make your decision is a better approach than doing feature selection beforehand. When you do the typical thing of selecting features first and then building a beautiful, complex model on top, you subset the data to features that are individually predictive, but those might not be the set that is actually the most predictive: there can be features that have no strong association with the truth label on their own but jointly make the difference. Going in completely unbiased and picking, from the whole data set, the features that together predict the disease, in our case, was the best approach.
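To make that feature-selection point concrete, here is a small synthetic illustration (not from the talk): two features that are individually uninformative but jointly predictive, an XOR-style interaction, score near noise level under univariate selection yet stand out in a random forest trained on the full feature set.

```python
# Synthetic illustration: univariate selection misses an XOR-style interaction
# that a random forest trained on all features picks up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
n = 2000
X = rng.integers(0, 2, size=(n, 10)).astype(float)    # 10 binary "variants"
y = X[:, 0].astype(int) ^ X[:, 1].astype(int)          # label depends on both jointly

univariate = SelectKBest(f_classif, k=2).fit(X, y)
print("univariate F-scores of the two causal features:", univariate.scores_[:2])

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("random-forest importances of the two causal features:",
      rf.feature_importances_[:2])   # typically well above the eight noise features
```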
This slide just quickly shows how VariantSpark scales with the number of samples, which is the traditional way of thinking about big data, more and more samples; it also scales equally linearly in the other dimension, with the number of features we add to the data set.

VariantSpark is already used, as I said, by Project MinE and by a couple of other universities in Australia, most notably Macquarie University, but it has also been picked up by commercial partners; Databricks, for example, partnered with us to generate a notebook, and I'm going to show you that in a minute. But let's take a step back and think about a cloud application, or the typical workflow in a data science project. You start with a business case; in our case that was predicting disease genes, in yours it might be something else. You then curate the data in order to make it computable, and arguably this is the most challenging bit of the whole thing, because data is noisy, it's missing, and you have to consolidate it from different silos; some people would say this in itself is black magic, but there are tools, practices and skill sets that help you do it. Once we have a clean data set, we build the actual technology to predict something, which I call the minimum viable product; here we need to scope the technology, decide what kind of language to use, Python, R or whatnot, develop the prototype, and then iterate, because the first thing we put together is probably not going to be the best approach. Once we have that minimum viable product, in order for it to be used for the business case it probably has to be brought to a stage that is production-ready, and for that you need to provide an endpoint and test at scale.

On premise, going through all of this is quite easy, apart from the challenges we discussed; technologically it's straightforward. The only problem with on premise is that it's expensive, you have to put money into maintaining it rather than computing with it, and it's potentially not scalable. A cloud-based solution might therefore, in the majority of cases, be the preferred way, but at this stage it's still quite challenging to put something on the cloud that covers the full spectrum, from doing experimental work to get to the minimum viable product, tearing down solutions and coming up with new ones, through to having a stable endpoint that is easy to maintain. Databricks, for example, can in my mind cover the first two boxes, curating the data and building the minimum viable product; it's probably not the right fit for the endpoint, but let's start with Databricks in the first instance and see how we go.

Specifically, VariantSpark is set up on Databricks, and if you're not familiar with Databricks, basically what it does is let you spin up a Spark or Hadoop cluster from a Databricks notebook and put in your code as you would in a Jupyter notebook. How many of you are familiar with notebooks? It's exactly the same thing, where you can have blocks of code, annotations, and some graphics. The other nice thing about Databricks is that it has Amazon and Microsoft Azure as endpoints, so depending on which accounts you have, you can use either.

Obviously we wanted to put something out there that people can use and play with, but putting genomic data in the cloud is not a good idea, so we came up with a synthetic data set and wanted to make it a bit of fun: the hipster index. We score people on whether they're a hipster or not, which is the truth label, and from there we predict the genes that make you a hipster or a non-hipster. Looking at the audience, there are some traits of a hipster here: beautiful hair, coffee consumption. If you're interested in playing around with this really fun data set, I encourage you to go to our Databricks notebook and download it; in fact that's what we're going to do on Sunday in the workshop.

With Databricks being nice and easy for building the minimum viable product but not so nice and easy for providing the endpoint, we were wondering: can we do this whole thing without Databricks, can we set something up directly on AWS? Let's walk through the steps actually involved in doing that. We first need to put VariantSpark in a Docker container. From that, we need something to provision the Elastic Kubernetes Service on AWS, to stand up the master nodes, and then spawn the worker nodes in the cluster. With that beautiful infrastructure in place, we need to connect from the outside in order to monitor it, connecting to the Elastic Kubernetes Service for monitoring, and finally, to keep the nice data science approach, we want to connect a Jupyter notebook instance to all of this in order to trigger runs and collect the information back.
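As a hedged sketch of one of those provisioning steps, this is roughly what standing up an EKS control plane looks like with boto3; the cluster name, role ARN, subnets and security group are placeholders, not values from the talk.

```python
# Hedged sketch: provisioning an EKS control plane with boto3. All identifiers
# below are placeholders; worker nodes and the Jupyter connection come later.
import boto3

eks = boto3.client("eks", region_name="ap-southeast-2")

eks.create_cluster(
    name="variantspark-demo",                                  # hypothetical name
    roleArn="arn:aws:iam::123456789012:role/eksServiceRole",   # placeholder role
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-cccc3333"],
    },
)

# Wait until the control plane is ACTIVE before spawning worker nodes.
eks.get_waiter("cluster_active").wait(name="variantspark-demo")
print(eks.describe_cluster(name="variantspark-demo")["cluster"]["status"])
```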
Sounds relatively trivial; at least, that's what I thought. So I asked Lynn Langit to look into it. You might know her: she's a very famous cloud evangelist, really at the cutting edge, so I thought this might tickle her fancy, and thankfully it did. She came up with this beast of an infrastructure, which stands up exactly the complicated workflow we just discussed for VariantSpark. Now, I don't expect anyone to have the skill set that Lynn has to be able to create that, so we went one step further and put all of it in a convenient infrastructure-as-code template. How many of you are familiar with IaC, infrastructure as code? Let me quickly explain how I think about it. You have these beautiful architectures in the cloud that you might have put there manually or through the command-line interface. In order to replicate that, maybe in a different availability zone, or to share it with your friends, you don't want to go through the painful process of setting it up a second time when you already know what you want. Infrastructure as code provides a template, a flat text file in JSON or YAML, that describes everything in your infrastructure: the permissions, the services, the connections between them, the S3 buckets, all in one flat file. That flat file is given to an interpreter, in this case CloudFormation from AWS, and CloudFormation spins up the whole infrastructure from it. What Lynn managed to do is put all of that into such a flat file, so we can now share it with each one of you: you just press a button, feed it to CloudFormation, and it stands up your complex Kubernetes-based machine learning setup with VariantSpark, connected to an S3 bucket, with the whole complicated mess done for you.
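For readers who have not driven CloudFormation programmatically, this is a minimal, hedged sketch of launching a stack from such a template with boto3; the stack and file names are illustrative, not the actual VariantSpark template.

```python
# Minimal sketch: deploying an infrastructure-as-code template via CloudFormation.
# "variantspark-eks.yaml" is a hypothetical local template file.
import boto3

cfn = boto3.client("cloudformation")

with open("variantspark-eks.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="variantspark-eks-demo",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],   # needed if the template creates IAM roles
)

cfn.get_waiter("stack_create_complete").wait(StackName="variantspark-eks-demo")
print("stack created")
```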
If this tickles your fancy, and you think "I would like to help find disease genes" or "I would like to build these cloud infrastructures myself", and you're wondering whether you can help, the clear answer is yes. Just like Lynn did, there are lots of little things people can contribute in order to build this ecosystem together. In fact, if this even slightly interests you, get in contact with me right now and say you would like to volunteer.

Good. With this, let's jump into the last story, which is about whether we can correct the disease genes we identified. You might have heard of a technology called CRISPR, which in my mind revolutionizes the way medicine will be done going forward, because it enables you to edit the genome of a living cell in order to remove a disease gene, for example. In fact, there was a paper last year that managed to do exactly that in embryos that would have suffered from a heart disease called hypertrophic cardiomyopathy, which makes the heart muscle thicken until eventually the heart stops working. They managed to correct that disease in seven out of ten embryos, which is great, but it also means that in three out of ten it did not work, and if this is your unborn child, a three-in-ten failure rate is just not good enough. We want to come in and make this process more efficient, so that it ideally works the first time, every time.

So we developed what we think of as a search engine for the genome. Researchers can type in the gene they want to edit and how they want to edit it, and the tool ranks all the possible ways of doing that, so researchers know straight away which is their best option and where to put their precious resources, for example an embryo, without putting it in danger. In the user interface, each line represents a location where the genome can be edited; the good sites are shown in green and the not-so-good ones in black, and as you can see they are quite close to each other and hard to differentiate unless you have a computer run over them.

Why is this difficult? The way I think about it, it's like finding a grain of sand on a beach. It needs to have the right properties, the right color, size and shape, for the editing mechanism to interact with it. But once you have all your candidates, a bucket of sand, you then need to ensure that each grain is actually unique on the beach, because the equivalent in the genome is making sure you are editing this particular gene and not another, healthy gene. So you need to compare each grain of sand with all the other grains of sand on the beach, and that makes it a complicated and compute-intensive task. This workload doesn't fall into the typical categories we just discussed: it's not compute-intensive all the time, but it's certainly not data-intensive either, so the previous two solutions are not quite what we're after.

Thankfully, there is a new technology that just came out, called serverless. Serverless is really geared towards being agile: the way I think about it, you recruit free-floating CPUs, as many as you need, when you need them, instantaneously. For us, the search engine for the genome, which is a web application, is exactly that use case: people might want to search one gene, or they might want to search hundreds of thousands of genes or locations, so the task can be quite small or enormous, and you don't want an enormous Spark cluster running all the time. Serverless compute was exactly what we needed. This is the architecture; I'm not going to go into detail, suffice to say that it is a web service the user interacts with, connected to an API Gateway, and from there all of the tasks are triggered using different AWS services.
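To give a flavor of that request path, here is a hedged sketch of a Python Lambda handler sitting behind API Gateway; the field names and the canned scoring are invented placeholders, not the actual GT-Scan code.

```python
# Hedged sketch of a Lambda handler behind API Gateway. In a real deployment
# the candidate target sites would be fanned out to other functions and stored
# in DynamoDB; this stub just returns a canned ranking.
import json

def handler(event, context):
    params = event.get("queryStringParameters") or {}
    gene = params.get("gene", "UNKNOWN")

    candidates = [
        {"site": f"{gene}:+{offset}", "score": round(1.0 - 0.1 * i, 2)}
        for i, offset in enumerate((12, 97, 211))      # hypothetical offsets
    ]

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"gene": gene, "ranked_targets": candidates}),
    }
```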
As such, it was one of the first serverless applications that went beyond Alexa skills to demonstrate that you can stand up this complicated infrastructure and cater for something as sophisticated as a research tool, and that's probably why it received a lot of attention. It is now also available on Alibaba Cloud, thanks to two engineers from their serverless team who made that happen, and as you can see it looks strikingly similar, with similar components on both sides. That was really important to me: being cloud agnostic, having a technology that runs on AWS as well as on Alibaba.

So, a quick comparison between Alibaba and AWS that I thought you might find interesting. The database we use to collect the buckets of sand, the buckets of potential targets, is a serverless NoSQL store; on Alibaba it's called Table Store, on AWS it's DynamoDB, and the difference is that Table Store can hold slightly larger volumes in each cell, which for genomic research is actually a plus. Similarly, for the actual compute, one is called Function Compute and the other is Lambda, and the difference is that Alibaba allows functions to invoke other functions, which is great for spawning tasks and collecting the results, in other words doing parallel serverless processing; AWS has a workaround that we used through the SNS service, which gave us a similar result. Then there's Log Service versus CloudWatch, which in my mind are the same thing, and for the CloudFormation template, the infrastructure-as-code piece, Alibaba has a similar tool called Fun. It doesn't even have a logo, so it's not very advanced yet, but it does what it needs to do; GT-Scan is available as a Fun template, although CloudFormation is for sure more mature than Fun.
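The SNS workaround mentioned above can be sketched as a simple fan-out: one function publishes a message per work item and SNS triggers a worker Lambda for each. The topic ARN and payload fields below are placeholders, not production GT-Scan values.

```python
# Hedged sketch of SNS fan-out between Lambda functions. The topic ARN and the
# work-item structure are placeholders.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:target-work-items"

def dispatch(work_items):
    """Publish one message per candidate target site; SNS invokes the workers."""
    for item in work_items:
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(item))

def worker_handler(event, context):
    """Worker Lambda subscribed to the topic; scores a single item."""
    for record in event["Records"]:
        item = json.loads(record["Sns"]["Message"])
        # ...score the item and write the result to DynamoDB...
        print("processed", item)
```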
With this, in my mind, once you go serverless you never go back, because it's so easy, so convenient, so economical. Innovation is not slowed down by having to think about what kind of EC2 instance you need and whether you can afford it; you just write your function and let the infrastructure, whether Alibaba, AWS or Azure, do the rest for you. It caters for bursty workloads: you can do auto-scaling in a traditional way, but it's quite slow, not as instantaneous as serverless, so for bursty workloads serverless was the way to go. And innovation becomes easily affordable: you can stand up a minimum viable product quite cheaply on a serverless architecture, and because everything is modularized you can exchange individual components. I'm going to show you how easy that is right now.

With all this ease of use, what is difficult about it? Because it's distributed infrastructure, the hard part is optimizing it, finding the bottlenecks. In our case, for example, we knew there was one component of GT-Scan that was quite slow. But before we go into that, let me introduce something we call hypothesis-driven architecture. If you've seen James Lewis talk about evolving architectures at a conference, it's in the same spirit. You start from your infrastructure as code, the JSON or YAML file that defines a specific architecture. You then evolve it, making small changes, like replacing a particular function with another function. You deploy this updated architecture on your provider of choice, and then you evaluate the runtime of each component, ideally with a method that automatically detects the infrastructure you just uploaded. We used to do that with X-Ray, the AWS-native service, which worked fairly nicely, but now we use Epsagon, a startup from Israel that specializes in detecting the infrastructure, evaluating each component, and giving you a really nice visual interface. Once you collect your measurements, you can evaluate whether the small change you made is actually a good idea, and then the cycle iterates. Through this you can do DevOps, in my mind, more securely and more easily, and we published a quite controversial blog article on DevOps.com titled DevOps 2.0 about a new way in which we think DevOps should be done: you have your production environment running, and in the same availability zone and location you deploy your new experimental infrastructure, evaluate both against each other, and then swap over to the new one, and the cycle iterates. Again, that's something we'll be doing in the Sunday workshop: standing up an infrastructure and evaluating it.

Coming back to the use case where we wanted to improve GT-Scan and find the bottlenecks: we recorded the runtime of all the functions in the system, which is what I'm showing in the different bar charts, and as you can see there are two offenders that stand out and really suck up all the runtime. They are the two Lambda functions in the middle of the architecture, the orange boxes, that bring in information from a DynamoDB database, compute over it, and write the result into another DynamoDB table. Those were sort of academic tools that we were simply wrapping in a Lambda function, but being a machine learning team ourselves, we thought maybe we can do slightly better with machine learning, and that's exactly what we did: we replaced those two functions with a new function that does the same analysis, but with machine learning, again a random forest approach. We don't need to go into detail; what I want to show you is that the runtime dropped dramatically, and we were able to evaluate and quantify the improvement we made to our architecture. The business case is that by replacing those two Lambda functions with one new Lambda function, we reduced the runtime by 80 percent, and that is probably a business case anyone can get behind.
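A hedged sketch of that pattern, a model trained offline and served from a single Lambda, might look like the following; the artifact path, feature encoding and field names are invented for illustration.

```python
# Hedged sketch: serving an offline-trained random forest from one Lambda.
# The model artifact and feature encoding are hypothetical placeholders.
import json
import joblib

# Loaded once per container, so warm invocations reuse the model.
MODEL = joblib.load("/opt/ml/target_site_rf.joblib")

def handler(event, context):
    features = json.loads(event["body"])["features"]   # e.g. per-site encodings
    score = MODEL.predict_proba([features])[0][1]
    return {
        "statusCode": 200,
        "body": json.dumps({"on_target_score": float(score)}),
    }
```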
Let's quickly walk through the rest and recap the use cases. Again, on one side there is the business case; from that we curate and collect the data we need in order to act on it; we build a minimum viable product; and then we prepare for production. For VariantSpark, the use case is finding disease genes. The curated data is genomic data, which we pre-process using Python, R and SQL. The minimum viable product is still VariantSpark; yes, it's mature, but it's not yet a production-ready environment. VariantSpark builds on Apache Spark, using Elastic MapReduce or Databricks, or now Lynn's Elastic Kubernetes Service setup. Preparing for production means offering that Kubernetes setup as infrastructure as code, so people can press a button and it spins up automatically, and testing at scale, which we will be doing on the Project MinE data of 25,000 individuals. The other thing I showed you was GT-Scan. Here the business problem was building a search engine for the genome; the genomic data sits in an S3 bucket and we access it through NoSQL; the minimum viable product is GT-Scan on the serverless ecosystem in AWS, now also available on Alibaba, so the research community can access it through an API Gateway; and testing will be done with a research facility in Australia that has around 10,000 mice coming through each day, getting edited and going out the door.

So where to from here? We want to find disease genes for a range of different diseases, ideally the ones that really affect the healthcare system, like stroke and heart attack. From that we want to be able to correct, or at least replicate, the findings in the clinic or in the laboratory in order to identify new ways of finding drug treatments, and this is where GT-Scan comes in. But all of this is still firmly in the research space; we want to go into clinical practice and really have impact there, and this is a tool I haven't shown you yet, one that brings the genome together with the phenome, the medical data, to generate insights. Remember I said that once you go serverless you never go back: this one is a serverless technology too, so next time you invite me I can showcase it to you.

So, three things to remember. First, the datafication of everything will make all data sets grow wider; there is no doubt in my mind that in IoT, where information about events is collected automatically, the number of features we deal with will grow into the millions if not billions, so while genomics may have to face it today, you will probably have to face it tomorrow and going forward. Second, this in my mind represents a paradigm shift in machine learning, and we will need new ways of dealing with this imbalance between samples and features; VariantSpark is one option, one solution capable of dealing with it. Third, serverless architectures can handle real application cases, not just Alexa skills or individual components; they can provide a whole ecosystem that caters for something as complicated as a research application, so I would highly encourage you to investigate this area. In fact, Forbes was saying that 50% of the companies they interviewed are seriously thinking about moving to serverless infrastructure, so this is coming, and it's predicted to be a seven-billion-dollar market going forward; if you want to jump in, now is probably the time. But the main take-home message from my talk is that business and life sciences are not that different: the tools we develop in one can be used in other areas as well. So let's build a healthier future together. With that, thank you very much. [Applause]

Perfect timing, two minutes for questions.

Audience: Hi, thank you for the session, it was very nice. Just a curious question about the two Lambda functions: when you reduced them to one, the time was less. Was it the case that the Lambda functions had some interaction and were doing something redundant? Otherwise it looks pretty odd that merging two Lambda functions into one made such a drastic difference.
They were just doing things inefficiently. I don't mean to trash the academic community; their task is to come up with new ideas and build demonstrators, but they're not known for implementing things efficiently. The statistical analysis they were doing in their functions could easily be replaced with a machine learning model that is trained offline and then just does the classification on the fly, which of course reduces the time drastically.

Audience: You mentioned that serverless is the future, or rather the present. How did you solve the monitoring of your applications in production? Serverless makes it really difficult to monitor compared to a server architecture; we faced that problem in production.

I feel your pain, and none of the cloud providers has a good solution for that. The one I'm intimately familiar with is X-Ray on AWS, which lets you, to some extent, label the functions and then monitor whether they are down, whether they time out, and what kind of resources they use, but it's painful. So, not to be too marketing here, and I have no stake in Epsagon whatsoever, but Epsagon was the savior for us: they take care of all of this. You just point it to a new architecture in the cloud, it automatically surveys the connections between the individual components, and then monitors them in a dashboard. For us the runtime was the main thing we were after: the end-to-end runtime, where most of the time is being drained, and what kinds of processes run over and over again, so we know where to focus our optimization efforts. All of this Epsagon gave us.

Thank you, Dr Denis, this was very helpful. People have been asking for more tech content, and this gives a glimpse of some of the tech work going on in the data science community, so thank you so much.


Get legally-binding signatures now!

Sign up with Google