How Can I Sign Iowa Banking Presentation

How Can I use Sign Iowa Banking Presentation online. Get ready-made or create custom templates. Fill out, edit and send them safely. Add signatures and gather them from others. Easily track your documents' status.

Contact Sales

Asterisk denotes mandatory fields (*)
By clicking "Request a demo" I agree to receive marketing communications from airSlate SignNow in accordance with the Terms of Service and Privacy Notice

Make the most out of your eSignature workflows with airSlate SignNow

Extensive suite of eSignature tools

Discover the easiest way to Sign Iowa Banking Presentation with our powerful tools that go beyond eSignature. Sign documents and collect data, signatures, and payments from other parties from a single solution.

Robust integration and API capabilities

Enable the airSlate SignNow API and supercharge your workspace systems with eSignature tools. Streamline data routing and record updates with out-of-the-box integrations.

Advanced security and compliance

Set up your eSignature workflows while staying compliant with major eSignature, data protection, and eCommerce laws. Use airSlate SignNow to make every interaction with a document secure and compliant.

Various collaboration tools

Make communication and interaction within your team more transparent and effective. Accomplish more with minimal effort on your side and add value to the business.

Enjoyable and stress-free signing experience

Delight your partners and employees with a straightforward way of signing documents. Make document approval flexible and precise.

Extensive support

Explore a range of video tutorials and guides on how to Sign Iowa Banking Presentation. Get all the help you need from our dedicated support team.

How can i industry sign banking iowa presentation computer

Good afternoon, everyone. I'm Stuart Oberman. Quickly, about me: I was a graduate student here at Stanford as well; I did my PhD research with Mike Flynn and others, right here in the architecture and arithmetic group, and I used to frequently attend EE 380 while I was a student. So it's really great to be back on campus to talk with you, here in the EE 380 colloquium, about NVIDIA GPU computing, as Andy was alluding to. I'm currently Vice President of GPU ASIC Engineering at NVIDIA, and I've been at NVIDIA now for 15 years.

Today I want to talk about the journey that NVIDIA and the GPU have taken from PC gaming to deep learning. In this talk we're going to explore what a GPU is today, how it evolved, many of the technical decisions that were made along the way, and an overview of some of the major applications that rely on GPUs. I'm going to give you some of my own perspectives on how GPUs have evolved during this time frame, and I'll include some videos and demos throughout the presentation to help motivate and illustrate certain topics. After all, NVIDIA is a graphics company, so no such presentation would be complete without some graphics. So sit back and strap in; there's going to be a lot of information, and I'm going to try to cover this journey as efficiently as I can.

Some background about NVIDIA first. I presume, based on Andy's introduction, that people are generally aware of who and what NVIDIA is. I'm curious, are there any gamers amongst the crowd as well? Good, some hands.

Our primary business at NVIDIA is providing world-class accelerated computing solutions, and accelerated computing at NVIDIA means GPU computing. The GPU was created by NVIDIA to accelerate, originally, graphics and games, but today the "G" in GPU is really just another way of saying massively parallel throughput computing, not just for graphics. As Andy was alluding to, it's general-purpose GPU computing. GPU computing has evolved over time from gaming, to game physics, to professional visualization, to general-purpose computing with a focus on high-performance computing, data centers and deep learning, AI, cloud computing, and automotive.

For each of these markets, each of these businesses (gaming, pro visualization, data center, and auto), we offer a platform of processors, software, systems, and services. We develop a new GPU architecture generally every 12 to 18 months; the architectures are named after different physicists, and we'll talk about them here. If you think about it from an object-oriented programming perspective, each architecture is basically a class: it's a blueprint for how we're going to design our GPUs for that entire generation. Then we instance objects, in this case GPUs, to target each of these markets: big to little GPUs for gaming, and specific ones for pro visualization, for the data center, and for automotive. Since I'm going to use some of the naming throughout the talk, let me introduce the brands.
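To make that class-and-instance analogy concrete, here is a loose host-C++ sketch (entirely illustrative; the struct and field names are hypothetical, though the SM counts match two publicly documented Pascal-generation parts discussed later in the talk):

```cuda
// Plain C++ host code: the "architecture is a class, each GPU is an instance"
// analogy. Instances of one architecture differ mainly in scale and configuration.
#include <cstdio>
#include <string>

struct PascalArchitecture {        // the blueprint: SM design, ISA, features
    std::string product;
    int sm_count;                  // instances are sized for their target market
};

int main() {
    PascalArchitecture gp100{"Tesla P100 (data center)", 56};
    PascalArchitecture gp102{"GeForce GTX 1080 Ti (gaming)", 28};
    std::printf("%s: %d SMs\n", gp100.product.c_str(), gp100.sm_count);
    std::printf("%s: %d SMs\n", gp102.product.c_str(), gp102.sm_count);
}
```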
Our GeForce business is our gaming brand; when you think of GeForce, that's PC gaming. There's Quadro for pro visualization, there's our Tesla business that targets the data center, and our Tegra business for automotive. And it's a branded platform business model, not a component model: we don't just sell chips, we sell complete solutions for these markets.

With that, let's start with a little review of the two primary businesses I was just alluding to: gaming and high-performance computing. NVIDIA started as a gaming company. I don't know if you know how big the gaming market is; it's a hundred-billion-dollar industry. Gaming is the world's largest entertainment industry, bigger than Hollywood and bigger than pro sports, and today PC gaming is a huge portion of that hundred-billion-dollar market, bigger than the two consoles combined. This has been driven by the improved image quality and performance of the PC, backwards compatibility, free-to-play games, and more recently eSports. GeForce is in fact the predominant brand in the PC gaming space, and the world's largest PC gaming platform, with over 200 million users. So when I say GeForce, the connotation is PC gaming.

As a quick side note, NVIDIA has also been involved in console gaming, to the extent that we're primarily focused on PCs: we were in the original Xbox and the PlayStation 3, and now the new Nintendo Switch. Maybe some of you have one; they still tend to be a little difficult to acquire. It's a unique gaming device that can switch between being a TV-based console and a portable handheld device, and it's powered by an NVIDIA Tegra SoC.

And for those who say, "GeForce GTX, great, but they're expensive, and I don't want to invest in putting together a whole PC," we actually have a great solution today: cloud gaming. You can effectively get access to a high-end GeForce GPU on demand from the cloud. You're probably all aware of how cloud computing is revolutionizing most applications out there, from music and video to storage and compute. GeForce NOW is our cloud gaming service; it's been likened, in some respects, to Netflix for games. You can instantly stream the games you want; it's basically a game console in the sky. It's there today: you can stream to your SHIELD Android TV, and we have a free Mac beta program right now, so you can go online and try that.

Then, quickly, in the supercomputing field: GPUs are used for many different functions, but most can be divided into two groups, simulation and visualization. There are many different applications, which I'm sure many of you are aware of, where GPUs accelerate high-performance computing solutions: simulations from drug design to seismic to automotive, across the board, with fairly dramatic speedups when GPU computing is applied.

At this point I'm going to start with the end in mind. We're going to go on a bit of a journey, but first: what does a GPU look like today, in 2017?
This is the Tesla V100, the first chip with the Volta architecture. It was announced earlier this year, and more details were presented in August at Hot Chips; I don't know if any of you were there and saw it. It's quite the beast: 21 billion transistors, 815 square millimeters in 16-nanometer FinFET, with an extraordinary amount of inter-GPU bandwidth, memory bandwidth, and compute bandwidth. [Audience question: how many cores?] Good question; I'm going to come back to that once I've developed some of the terminology. But just to understand what a chip like this means: you get seven and a half teraflops of double-precision performance, fifteen teraflops of single precision, and a mind-boggling 120 teraflops of deep-learning acceleration performance in the form of our tensor cores, which were just being discussed.

So the question arises: how did we get here? Our CEO, Jensen Huang, describes NVIDIA's performance and position today as "destiny meets serendipity." There are people who think this company was an overnight success, that it just happened to be in the right place. In fact, as in most such situations, "overnight" takes years, and that's what I want to talk through. To describe it, I want to take you, as I said, on a bit of a Back to the Future ride: an accelerated view of some GPU advances in NVIDIA's history. I could talk through these things, but the common saying is that a picture is worth a thousand words, so maybe a video is worth even more. [Video plays.]

The intent was to show you what we've developed over 25 years. The GPU was first introduced in 1999, and that's what's being described here. The soul of the GPU is to accelerate computationally intensive applications; fundamentally, as I showed on my initial slides, the intent is to be an accelerator. The GPU was initially introduced to accelerate PC gaming and 3D graphics, because that was the computationally challenging problem at the time. The goal was to approach the image quality of movie-studio offline rendering farms, but in real time: instead of hours per frame, how can we deliver essentially photorealistic quality at greater than 60 frames per second? Millions of pixels per frame can in fact all be operated on in parallel; people refer to the 3D graphics problem as almost embarrassingly parallel. So we use large arrays of floating-point units to exploit this wide and deep parallelism.

Now, some classic GPUs. I said I would give some of my own perspectives, and I want to start with one of the very first NVIDIA GPUs I worked on. Go back 13 years: there were the GeForce 6000 and 7000 series; an example is the 7900 GTX. To place it: 278 million transistors, a 650 MHz clock, 196 square millimeters in 90 nanometer, and 300 gigaflops of single precision. Can we even remember when dies looked like that?
But this is what we had. In that GPU, we render an image as a collection of triangles. I'm not sure everyone is familiar with the 3D graphics implementation, so here's a quick summary of how the polygons that make up the image get rendered. We start with triangles coming in as commands from the host processor. We take the vertices of the triangles and put them through our vertex processors. We then do scan conversion, where we create per-triangle data, plane equations, perhaps, for how different attributes vary across the triangle, and do the conversion to pixels. We may then do some amount of pixel shading using the pixel shaders, possibly with conditional texture mapping, then the final raster operations, and eventually we write out to memory. That's classic, vanilla GPU rendering.

Another brief aside, because we're going to come back to this, even for the earlier question: numeric representations in a GPU. Across those pipeline functions there are all sorts of numeric formats: different fixed-point and integer formats, and a variety of floating-point formats, the classic 16-bit and 32-bit, the less classic 24-bit. There are also block floating-point formats, where we treat multiple operands as having a common exponent. So there were many different numeric representations and lots of different processing happening in this classic GPU.

If we look at how it was built: we had a vertex fetch engine, eight vertex shaders, the conversion to pixels, 24 pixel shaders, hardware to redistribute the pixels to the various pixel engines, and finally the memory partitions. That's what things looked like about 13 years ago. [The demo machine doesn't want to cooperate, so in the interest of time we'll keep going; we can come back to what graphics looked like on those parts.]

So, 13 years ago we had pixel shaders, we had vertex shaders, we had independent logic. You'll notice I didn't talk about general-purpose computation: we just had these collections of engines working on these different formats, with the goal of accelerating the really hard, computationally difficult problem of PC graphics.

But G80, about 11 years ago, really redefined what the GPU was, and Andy alluded to this too; he said his timing was just about right. In 2006, NVIDIA released the G80, the GeForce 8800. It was the first GPU with a unified shader processor architecture, and it also introduced the SM, the streaming multiprocessor, an array of simple streaming processors which at the time were referred to as SPs but today are more commonly called CUDA cores. We'll get to the terminology; there is some science involved in the definition. Now that we were unified, all of the shader stages used the same instruction set, and all of the shaders executed on the same hardware.
This allowed for better sharing of SM hardware resources. It was based on the recognition that hardware was sitting underutilized, and we could get much greater utilization if, instead of having separate fixed-function shading engines for shader types A, B, C, and D, we shared one common array: one unified shader core that all of the shader types would run on.

Some commentary on this chip: from 278 million transistors 13 years ago, we were now up to 681 million transistors and 470 square millimeters in 90 nanometer. The G80 was the first GPU to support Microsoft DirectX 10. And now to one of the key reasons I wanted to bring this GPU up. I know that for some of you, 11 years ago is a long time, and you may not even know what a GPU or a unified shader was then. What we did was invest a little extra hardware, we referred to it as epsilon, in the SM, to also support general-purpose throughput computing. This was the beginning of CUDA everywhere: we explicitly decided to make an investment in this GPU to enable general-purpose GPU computing in hardware.

One other interesting thing we did: the functional units in G80 were designed to run at twice the core clock frequency, with the goal of area efficiency. We wanted to conserve as much as possible of what was, at the time, precious die space. This led to 576 gigaflops, with the functional units operating at one and a half gigahertz, and for the first time we had IEEE-compliant floating-point add and floating-point multiply in those unified shaders. It dissipated 155 watts. So, at least from my perspective, this was the beginning of GPU computing.

Just to differentiate throughput computing from latency-oriented computing: with traditional latency-optimized computing, we're trying to minimize the time per unit of work. This is the traditional CPU perspective: a few large cores with caches as large as we can make them, to reduce memory access time, plus anything that reduces control-hazard penalties, large branch predictors, speculative execution, all to reduce time per unit of work. In contrast, with throughput computing, GPU computing, the goal is to maximize the amount of work per unit time. Rather than investing explicitly in latency-reduction hardware, the goal is to have as many simple compute cores, with hardware scheduling, as possible, maximizing the amount of math available for the throughput computation.

So that was G80, and as I said, it was also the start of CUDA everywhere. Are people aware of CUDA, what it is? Somewhat familiar, used it? This was really the first time C and C++ were available for throughput computing. It was first made available back in 2006 with G80, and it brings throughput computing to a very accessible interface.
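To give a flavor of what "C and C++ for throughput computing" means in practice, here is a minimal CUDA C++ sketch (illustrative, not from the talk; it uses modern conveniences such as unified memory, which postdate G80, but the programming model, a grid of thousands of lightweight threads each handling one element, is the one CUDA introduced):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY: y = a*x + y, one array element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch a grid of 256-thread blocks
    cudaDeviceSynchronize();

    std::printf("y[0] = %f\n", y[0]);                // expect 4.0
    cudaFree(x); cudaFree(y);
}
```

The kernel itself is ordinary C++; the hardware scheduler, not the programmer, decides which of the million threads run when, which is the throughput-computing trade described above.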
So if we look at the G80 architecture: unlike the 7900 I showed you, there's now just one array of processors, known as streaming multiprocessors, or SMs, and this is a blow-up of one of them. Inside are the SPs, the functional units I mentioned, also known as CUDA cores. We still have vertex work distribution hardware to dispatch vertex work to the array, where before we had independent vertex shaders, and pixel work distribution hardware to distribute pixel work to the streaming processor array. The key investment was that additional epsilon of hardware to support compute work distribution, so we could explicitly launch compute shaders and have them use the streaming processor array for general-purpose computing. Does that make sense to everyone? Any questions on that?

I mentioned that we have architectures, classes, that effectively define the blueprints for our GPUs, generally named after physicists. There were the 6000 and 7000 series; there was G80, the first intended from the start to also enable general-purpose computing. The first with significant further investment in what it takes to do throughput computing was the Fermi architecture, the GF100. We're up to three billion transistors now, in 2011, six years ago: 529 square millimeters in 40 nanometer, at 1150 megahertz. (And yes, this was the third-generation SM.)

One of the key contributions here was the addition of a new IEEE 754-2008 compliant fused multiply-add unit. Also, something I didn't explicitly call out for G80: there, the only memory access from the SM, from the CUDA cores, was through explicitly managed shared memory; there was no cache available directly to the processors. With Fermi we added a configurable L1 cache that could double as shared memory. We were now at right around a teraflop of FP32 and 515 gigaflops of FP64, but you'll notice we were also up to almost 250 watts.

The observation was that things had gotten power-hungry in multiple respects. We recognized that, and over the next two years we developed the Kepler architecture. The Tesla K40, from the Tesla business, was one of its instances; GK110 was the GPU, released in 2013. Now we're even larger: over seven billion transistors, 550 square millimeters in 28 nanometer. (And no, the cache didn't go away; the cache is still there in Kepler.)

Here I want to provide a perspective. We recognized the challenges: there are a number of limits we've been facing in processor design. There are Moore's law challenges: we're getting more transistors, but they're not getting faster. So where do we get our performance from? With throughput computing we can put more math units down, but at the limit we're going to be power-constrained, and up until then we'd been increasing power along the way. So we took a step back and focused intensely on power efficiency. Remember how I said we had our functional units running at twice the core clock frequency? I would ask the simple question, which I would hope many people get asked: what is the equation for dynamic power? [Audience answer.] Great summary: roughly, switching activity times capacitance times voltage squared times frequency.
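In symbols, that is the standard first-order model for dynamic (switching) power:

\[
P_{\text{dynamic}} \;\approx\; \alpha \, C \, V^{2} f ,
\]

where \(\alpha\) is the switching activity, \(C\) the switched capacitance, \(V\) the supply voltage, and \(f\) the clock frequency. The Kepler trade-off described next follows directly: replacing one functional unit clocked at \(2f\) with two units clocked at \(f\) keeps throughput constant while roughly doubling \(C\) and halving \(f\), and the slower units can tolerate a lower \(V\); since power scales with \(V^{2}\), the net result is a power win bought with die area.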
So here's the challenge: yes, we were reducing area by double-clocking, but we couldn't do it perfectly linearly, and the f term was definitely doubling, so we were paying a lot of power for that area savings. This was a trade-off of area efficiency versus power efficiency, and we explicitly decided to go wider and a bit slower to get better power efficiency. Kepler was the first to do this, with an extreme focus on trading area efficiency for power efficiency. As a result, we ended up at over four teraflops of single precision and 1.4 teraflops of double precision at 235 watts. [Audience: why that single-to-double ratio? It's now one to three.] Good question; time permitting I can give the full answer at the end, but in short it was simply a balance, an investment decision about where we wanted to spend our die area.

The investment was again in performance per watt; the intent was to be as fast and as efficient as we could. In Fermi we had a certain number of cores and a certain ratio of control to datapath; in Kepler we put down many more cores, again wider and slower on the functional units, and that gained us over three times in performance per watt. Now, the astute observer will note that Kepler was in 28 nanometer and Fermi in 40, so there's more than one variable in play, but together, between design efficiency and process, we got over a 3x improvement.

That brings us to the Titan supercomputer at Oak Ridge, which in 2012, I think that's the date, was the world's number one open-science supercomputer. That machine was the flagship system: over 18,000 nodes, each with a K20, a version of the chip I described, and together over 20 petaflops. [Audience question about access.] This is Oak Ridge, so it's our federal government, and it's open science; I believe you can apply for grants to get access to the machine. The classified work isn't on it; there might well be a bigger secret computer someplace. And there's the Top500, which we could discuss, but not everyone submits to the Top500; it's something you actively benchmark for and submit results to. So that was 2012 and 2013.

We've had GPUs instanced from Fermi and from Kepler, and I'm not going to take you through Maxwell, from 2014 and 2015. Let's jump to recent days, to 2016. Have people heard of the Pascal architecture? This was launched just last year: over 15 billion transistors, 610 square millimeters, now in 16-nanometer FinFET. On the classic throughput numbers, it's over 10 teraflops of single precision and 5 teraflops of double precision, but for the first time we also included FP16 as a native computing data type, with over 21 teraflops of FP16, with the intent of accelerating deep learning training and inference.
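As a sketch of what native FP16 looks like to a CUDA programmer (illustrative, not from the talk; requires compute capability 5.3 or later), the half2 intrinsics pack two FP16 values per register, so each instruction performs two operations, which is how FP16 reaches roughly twice the FP32 rate:

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// Fill packed-FP16 buffers on the device (the conversion intrinsics are device code).
__global__ void fill(int n2, __half2 *x, __half2 *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) { x[i] = __float2half2_rn(1.0f); y[i] = __float2half2_rn(2.0f); }
}

// y = a*x + y: each __hfma2 issues two FP16 fused multiply-adds at once.
__global__ void fp16_axpy(int n2, const __half2 *x, __half2 *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    __half2 a = __float2half2_rn(2.0f);
    if (i < n2) {
        y[i] = __hfma2(a, x[i], y[i]);
        if (i == 0) printf("y[0].lo = %f\n", __low2float(y[i]));  // expect 4.0
    }
}

int main() {
    const int n2 = 1 << 19;                  // 2^20 FP16 values, packed in pairs
    __half2 *x, *y;
    cudaMalloc(&x, n2 * sizeof(__half2));
    cudaMalloc(&y, n2 * sizeof(__half2));
    fill<<<(n2 + 255) / 256, 256>>>(n2, x, y);
    fp16_axpy<<<(n2 + 255) / 256, 256>>>(n2, x, y);
    cudaDeviceSynchronize();
    cudaFree(x); cudaFree(y);
}
```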
Two other major advancements came with this GPU: we introduced NVLink, a new high-bandwidth GPU interconnect, and we started using HBM2, high-bandwidth stacked memory. Summarizing, there were really three advances: 3x the compute versus the K40 I just showed you, a 5x increase in GPU-to-GPU interconnect bandwidth with NVLink, and a 3x increase in memory bandwidth with HBM2. Questions on that?

As I said, we instance GPUs from the class. With Pascal there was the GP100, which is what I just described, and we then instance GPUs for gaming and other markets from the same class. There's the GP102, the next largest GPU in that family, which is the gaming GPU inside the GeForce GTX 1080 Ti, today the highest-end graphics card you can buy from GeForce. [Fiddling with the demo machine.] What I wanted to show you is: if you put all of those twelve billion transistors together, what does gaming look like in 2017? This is the GTX 1080 Ti with the GP102; this is what you can get today. [Game demo plays; the title is due out early next year.] You can see the different types of effects that are possible: we're really approaching photorealism, approaching what you previously could only do offline, in real time at well over 60 frames per second.

[Audience question about consistency.] Interesting. First of all, with this level of computing there's no herky-jerkiness; with the GP102's twelve billion transistors, it just works. That said, we do have a separate technology, with new monitor types, known as G-Sync, which I wasn't going to cover here: rather than forcing the GPU to a fixed display refresh, it keeps frame delivery and display in step, so we can go much faster and ensure a very smooth playing experience. We can talk about that more offline, perhaps.

OK, so I started this talk with the end in mind, and that demo was 2016 hardware. Here we are now in 2017, just one year later, and we're back to Volta, which we launched earlier this year: the GV100. Now we're up to 21 billion transistors and 815 square millimeters in 16 nanometer, and to be specific, you can't build a bigger die in this process, because we max out the reticle; if we could have gone bigger, we would have. Again, the theory is to go as large as you can, maximize the compute density, and optimize for power: pack as much compute as possible into the chip. We have 80 SMs, and now we get to the terminology: 5120 CUDA cores and 640 tensor cores, which we'll talk through, along with 16 gigabytes of HBM2 at 900 gigabytes per second of bandwidth. And this V100 is a 300-watt product.
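Those headline numbers can be sanity-checked from the stated configuration. Each tensor core performs 64 multiply-adds, 128 FLOPs, per clock (as detailed shortly); assuming a boost clock near 1.5 GHz, which is my assumption rather than a figure from the talk:

\[
640 \;\text{tensor cores} \times 128 \,\tfrac{\text{FLOPs}}{\text{clock}} \times 1.5\times10^{9}\,\tfrac{\text{clocks}}{\text{s}} \;\approx\; 1.2\times10^{14}\ \tfrac{\text{FLOPs}}{\text{s}} \;\approx\; 120\ \text{TFLOPS},
\]

which matches the quoted deep-learning throughput.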
If we compare this against what we had just last year: for training, I told you that in FP32 the P100 was at ten teraflops; with our mixed-precision tensor cores we're up to 120 teraflops, for a 12x improvement in training acceleration capability. For inferencing, remember we had FP16 on GP100; Volta is also at 120 teraflops there, for a 6x increase. We increased both single and double precision, we took HBM2 bandwidth up by 20 percent to 900 gigabytes per second, and we took NVLink bandwidth up by almost a factor of two, to 300 gigabytes per second. So it's quite the machine.

I want to talk about the tensor core. [The video won't play, so we'll talk it through.] The intent of the tensor core is to accelerate sums of products, matrix multiplication. Generally speaking, it's a 4x4 matrix processing array, and that operation is used in many aspects of our computation: GEMM kernels appear in many, many places in deep learning, and we can talk through how a neural network in many cases reduces to a GEMM. The idea is to greatly improve what's possible in that computation.

Here's what the tensor core accelerates. Take two FP16 inputs; to keep it general, call them A and B. We compute their product at full precision, then accumulate with an existing accumulator in either FP16 or FP32, and write the accumulation out in either FP16 or FP32. A tensor core is defined as having 64 of these, think of them as 64 multiply-add units per tensor core, so it's 128 floating-point operations per clock within one tensor core. So: FP16 inputs, full-precision products, summed into an FP32 accumulator, together with however many more products there are, and then the result is converted to FP16 or FP32.

[Audience question on rounding.] Right, a dot-product operation like this is not an IEEE operation. This is not an independent multiply-add: IEEE 754 formalizes the FMA, and this isn't a single FMA, it's a sum of products, a dot product. What I'm saying is that the final result is then produced as a standard-compliant 754 FP32 value. [Audience question on latency.] How do you want to measure it? This is a throughput machine, so the simple answer is that it's complicated, and I'm not trying to obfuscate: it depends on how things are batched up so that you can achieve the throughput. The functional-unit latency is not particularly longer than single or double precision; it's just how things are grouped and arranged.
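CUDA 9 exposes exactly this mixed-precision operation through the warp-level WMMA API. Here is a minimal sketch in which one warp computes a single 16x16x16 tile of D = A x B + C with FP16 inputs and an FP32 accumulator (requires Volta, sm_70; the tile size and test values are just for illustration):

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <mma.h>
using namespace nvcuda;

// One warp cooperatively computes a 16x16x16 tile on the tensor cores:
// FP16 inputs, FP32 accumulation, as described above.
__global__ void tensor_tile(const half *a, const half *b, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);             // C = 0 for this sketch
    wmma::load_matrix_sync(a_frag, a, 16);      // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc, a_frag, b_frag, acc);   // D = A*B + C on the tensor cores
    wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
}

int main() {
    half *a, *b; float *d;
    cudaMallocManaged(&a, 256 * sizeof(half));
    cudaMallocManaged(&b, 256 * sizeof(half));
    cudaMallocManaged(&d, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }

    tensor_tile<<<1, 32>>>(a, b, d);            // a single warp drives the WMMA ops
    cudaDeviceSynchronize();
    std::printf("d[0] = %f\n", d[0]);           // expect 16.0: a 16-term dot product of ones
    cudaFree(a); cudaFree(b); cudaFree(d);
}
```

Every element of D comes out as 16.0, a 16-term dot product of ones accumulated in FP32, which is the sum-of-products behavior described above.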
For completeness, I also want to cover the interconnect, which was increased here. If you think about it, increasing teraflops by itself isn't helpful if you can't feed this big eating machine; it comes down to bytes per flop. We want to be able to feed it from GPU to GPU, so that as you scale a problem across multiple GPUs there's enough bandwidth for all of them to interact. NVLink on V100 increases that, and you can see the possible arrangements here. One product is the DGX-1 arrangement, which I'll talk about: eight Voltas put together. There are also nodes where we connect with IBM POWER9 processors, which allow a high-performance interconnect both with the POWER processors and with other V100s.

So we've increased the tensor compute and the NVLink bandwidth, and, as we discussed earlier, the caches are present all the way through here. In fact, in V100 we've created a new, more CPU-style L1 cache: there's still the flexibility of shared memory, but you also have the ability to use a larger L1 cache directly. This now looks very much like a recognizable computing engine, with a proper cache and proper compute units.

I mentioned Titan earlier. Due later this year, I think they're starting to be set up, and generally available next year, Oak Ridge is upgrading through CORAL, the Collaboration of Oak Ridge, Argonne, and Livermore: two new supercomputers, Summit at Oak Ridge and Sierra at Lawrence Livermore. They have large numbers of POWER9s plus Volta GPUs hooked together in the configurations I was just showing, enabling 40 teraflops per node and between 150 and 300 petaflops for each machine. This will be a major step forward; they're expected in 2018 and should be near the top of the Top500, if not at the top. And Tesla GPUs today, Pascal and soon Volta, already power the top 15 most energy-efficient supercomputers on the Green500. Any questions on Volta and on these supercomputers?

[Audience question on yield.] It's a very mature process, but there are fractions of dies that will have failures. We take a pretty standard approach for any massively parallel machine: we make sure we can accommodate failures and understand how we can still use the silicon for different instances of the class, if you will. It's engineered so that we can still ship it. In the interest of time, I'll keep going.

Andy said at the beginning that he wanted me to talk about Moore's law and how that's playing out here. So far I've taken you to V100 and high-performance computing; this is where we're at today. So where do we go from here, in the last fifteen minutes? We've seen that with Moore's law, the gray line, we continue to get increasing numbers of transistors, but the deliverable single-threaded performance has been topping out at up to 10 percent per year. As I told you, back in 2006 we started an investment in GPU computing, and following that trend we've been getting about 1.5x per year. So things are actually quite alive.
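To put those two growth rates side by side (my arithmetic, compounding the rates the slide quotes over a decade):

\[
1.5^{10} \approx 58\times \qquad \text{versus} \qquad 1.1^{10} \approx 2.6\times ,
\]

that is, a decade of 1.5x per year delivers roughly 58x, while 10 percent per year delivers less than 3x over the same period.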
So what do we need to do? There is a solution here, and we refer to it as the combination of the availability of algorithms, plus GPU computing, plus big data; that combination is what led to the viability, the bang, of deep learning. The question is: why now? Was it some fluke? The answer is no; it's the conglomeration of those three things: big data, algorithms, and massive computing capability. Together they are why deep learning is such a big topic and is really being put to use today, and we're working hard to sustain that trend.

Deep learning is everywhere. People here are probably very aware of what it is and where it's deployed: the internet, medicine, media, security, defense, autonomous machines. These are the ubiquitous slides: what is a deep neural network, what is a densely connected layer. Are people sufficiently familiar with the general topic? OK. On the right is a generalized construction of a node in a neural network, and if we look at a fully connected layer, computing a neuron at the next layer comes down to a dot product. If we combine those dot products in the natural way, then when we put it all together, it looks like matrix multiplication. So we can collapse the general operations of traversing and computing a deep neural network into some form of matrix multiplication, which is what we've demonstrated heretofore: we've made sure the GPU can really accelerate those operations. (Is the tensor core part of that? Yes, it's part of that representation; we can talk about it later.)
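In symbols: for a fully connected layer with weights \(W\), input \(x\), bias \(b\), and activation \(\sigma\), each output neuron is a dot product, and batching \(B\) inputs turns the whole layer into one matrix multiplication:

\[
y_j = \sigma\!\Big(\sum_i w_{ji}\, x_i + b_j\Big)
\qquad\Longrightarrow\qquad
Y = \sigma\!\big(X W^{\top} + \mathbf{1}\, b^{\top}\big),
\]

where \(X\) is the \(B \times N\) batch of inputs. That GEMM is exactly the operation the tensor cores are built to accelerate.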
We've talked a lot about the hardware; I also want to talk about how you actually use deep learning. [Audience question on graphics versus compute hardware.] We're trying to blur that line, because really everything is GPU computing now; we increasingly view graphics itself as an aspect of general-purpose computing. Everything is converging on SM-centric programmability, and the shaders are general-purpose compute. There are certainly still fixed-function pieces, but it has really gone the other way: we have this big general-purpose GPU computing engine with some amount of fixed-function hardware that's still necessary to adequately accelerate raster graphics, and we watch how that balance works out over time. So the answer is that most of the hardware is there to support GPU computing.

Within this paradigm, how do you actually deploy? Hardware is great, but as I opened with, we're not just selling GPUs; it's a platform. You need an entire system where you can do training to create the networks, implement the networks with inference, put them on a device, perhaps get telemetry back from the device suggesting new data, refine the model, retrain to get a better, more accurate model, and repeat. The challenge is to have that entire ecosystem and, to the software engineering question, a way to write to it.

Just to reiterate what we probably all know: since 2012 we've seen massive increases in ImageNet accuracy with deep learning, and likewise in speech recognition accuracy. I presume many of you have seen slides like this: this is when the GPU was first applied to ImageNet, and this is where speech recognition got to in 2016. [Audience: that was done in big data centers with massive memory; how far are we from doing it on a desktop?] Actually, that big increase in ImageNet accuracy was done on off-the-shelf GPUs: Alex Krizhevsky did it with the two available GPUs he had, a very engineered solution with the hardware available in 2012. As for what a single NVIDIA chip can do now: it depends on the task. ImageNet specifically, classifying the dog, the cat, the elephant, is very much tractable, I won't say solved; with simple ImageNet and the simplest models, training now takes very short periods of time with GPUs. It's generally not even considered a representative network for benchmarking anymore, because it's almost too simple. The hard cases we'll get to in just a second.

Look at what's happened over the past few years. Atari in 2015; we talked about ImageNet; AlphaGo, which people watched in March of last year, when AlphaGo from DeepMind beat the world champion, Lee Sedol, over five games; conversational speech recognition, where in 2016 Microsoft Research basically achieved human parity; and AI has been reported to demonstrate lip-reading performance that beats professional lip readers. So for tasks of certain forms that humans do, machines are already at superhuman capability.

Now to your question. On the left is an advanced CNN for image recognition; look at how complex the models are and how much computation is needed to train them. In 2015 it was about 60 million parameters and 7 exaflops. Baidu's Deep Speech model is about three times more computing-intensive, at 300 million parameters, and Google's neural machine translation, the really hard natural-language problem, runs up to 105 exaflops with over 8 billion parameters. Models are exploding in their requirements: these superhuman achievements require massive computing and massive bandwidth, and NVIDIA is in the business of accelerating these hard computational problems, as I opened with.

So let's look at the platform that's actually provided. Again, it's not just a GPU, it's a system: a whole infrastructure for providing data, testing data, and training data, then conveying the result to a runtime environment where it can be deployed in the form of inference. It's an end-to-end platform. Let's talk about training: what if you wanted one of these systems today?
This is where it gets interesting. OK, I'm all in, I want a V100, and I want to get going on training. The first thing you can do is buy what's known as an NVIDIA DGX Station, a personal DGX. It has four V100s, fully connected with NVLink; it consumes 1500 watts, and it's water-cooled. [Laughter.] Not too bad; a hairdryer is 1500 watts. And you get almost half a petaflop: 480 tensor teraflops. To put it in perspective, that's $69,000 for big AI.

If you want the essential instrument of AI research, we have the DGX-1, which is eight V100s connected via NVLink in the topology I showed you earlier. That gets you just a hair short of a petaflop in a single system; we call it the equivalent of 400 servers in a box, and it's about this big. The key thing with these is that it's literally deep learning in a box: all of the frameworks are in there, all of the software. The only things you supply are power, 120 or 240 volts depending, and your data; you're looking for cats, and the frameworks where you write your networks are all there. [Audience: which frameworks?] TensorFlow is a common one, for example, and Caffe, and at a higher level there's Keras and other abstractions above those. How to deploy a deep network we can talk about later; that's the subject of an even longer discussion, and I can send references by email.

So what can a DGX-1 actually do? Now we're at one of the bigger ImageNet questions I think you were asking: you can train a ResNet-50 on one of these systems in a little over seven hours, for example. And this isn't a simple AlexNet; it's a full ResNet-50. Even compared to the GP100 system from last year, it's over a factor of two and a half to three times faster than what was available just one year ago.

OK, to answer the question about TensorRT, inference, and how you use it: TensorRT is our runtime inference environment. Once you've trained this network really fast and done all of that, what do you do with it? You want to use the network: is it driving my car, what is it doing? TensorRT is the platform, the system by which we take that network, apply all sorts of optimizations, and reduce it into a form that can actually be deployed for inference. As an example, on the left might be a big network as it comes in from the framework, perhaps with many levels and layers that aren't strictly needed at the end; TensorRT analyzes it and optimizes it down to what is absolutely necessary for an efficient implementation. You might ask why that matters. Here's a V100: if you implement inference straight with TensorFlow as-is, you get 305 images per second. When you apply the optimizations through TensorRT to produce an optimal implementation, you get over 5,700 images per second. So it's basically 18 times faster than a raw TensorFlow implementation, and 40 times faster than a CPU alone.
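One concrete example of the kind of optimization being described is kernel (layer) fusion. The sketch below is illustrative, not TensorRT's actual code: an unfused bias-add followed by ReLU walks all of device memory twice, while the fused version does one read and one write per element, which is where much of the speedup in memory-bound layers comes from:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Unfused: two kernels, two full passes over x in device memory.
__global__ void bias_kernel(int n, float *x, const float *bias, int c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += bias[i % c];                     // pass 1: read + write x
}
__global__ void relu_kernel(int n, float *x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = fmaxf(x[i], 0.0f);                // pass 2: read + write x again
}

// Fused: same math in one kernel, one pass, half the memory traffic.
__global__ void bias_relu_fused(int n, float *x, const float *bias, int c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = fmaxf(x[i] + bias[i % c], 0.0f);
}

int main() {
    const int n = 1 << 20, c = 64;
    float *x, *bias;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&bias, c * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = -1.0f + i % 3;   // mix of negative and positive values
    for (int i = 0; i < c; ++i) bias[i] = 0.5f;

    bias_relu_fused<<<(n + 255) / 256, 256>>>(n, x, bias, c);
    cudaDeviceSynchronize();
    std::printf("x[0] = %f\n", x[0]);                   // max(-1.0 + 0.5, 0) = 0.0
    cudaFree(x); cudaFree(bias);
}
```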
This was ResNet-50, with batching; I have all the details, which I can make available. Time is getting a little long, so, time check: I have a few more slides, is that doable? OK, because there's another hot topic I alluded to. Remember, there was gaming with GeForce and the data center with Tesla, which we've covered; there was one more: automotive. That's another area people might be interested in hearing a little about.

As you'd expect, we believe AI is the solution to self-driving cars, for everything that's necessary: for perception, for reasoning, for driving, for HD mapping, for AI computing. There's a ton of technology and a ton of discussion here; we gave a tutorial at Hot Chips just two months ago on this topic alone, so I won't have time to give it all it's due. But let me tell you a few things. In addition to the big GPUs we've discussed, we also have SoCs targeting next-generation AI for automotive. Parker is the example in the market today. It's based on that Pascal architecture: remember, we had the big GP100 and the GP102 in GeForce, and we have Parker for automotive, with one and a half teraflops and NVIDIA's own ARM64 Denver CPU. We put these together to form an entire compute complex called DRIVE PX 2, which comprises a Pascal discrete GPU (an even smaller version of the GPU in the GTX 1080 Ti) combined with the SoC. Together this is an entire system, and you can put some number of them directly into a car, with massive amounts of I/O bandwidth. It's well beyond the scope of what I can cover here, but basically all of the telemetry that needs to come in from all of the sensors in a car goes into this compute complex, which gives you the capability for building autonomous vehicle functionality.

[Audience question on safety.] Yes: to implement an ASIL safety level, there are specific control features that have to be implemented to watch over the system, independent of the actual ISO 26262 design capability that's necessary. There are whole standards on how the safety mechanisms work within a system that's qualified for automotive.

Briefly, as we go from today, through level 2, level 3, level 4, and on to level 5 autonomy, one architecture is driving this. We have DRIVE PX 2 today, which is two Parkers and two Pascals, and going forward we've announced Xavier, at 30 TOPS in 30 watts. As the level of autonomy increases, the need for more TOPS, tera-operations per second, goes up: we believe it takes well over a hundred TOPS to support level 5. Xavier is the next-generation processor for autonomous machines, and you can see it has a collection of domain-specific accelerators plus general-purpose parts: there's a CPU, there's a CUDA GPU, and, since there was a request to talk about it, there's the DLA, NVIDIA's hardware inference engine targeting the direct acceleration of neural networks, which is on this SoC.
We announced the DLA earlier this year, and it's actually available on GitHub now: we've open-sourced it, and I can provide references. So if you want to go build an inference engine, and a number of people are doing this, we've accelerated the process: the RTL is there, and you can actually go build one.

In summary, DRIVE PX 2 is an end-to-end self-driving car platform; we've tried to pull all of the pieces together. You have all of the frameworks on that DGX-1, the big box; you take the telemetry, the data you've acquired on your DRIVE PX 2 in your vehicle, feed it into a network on the DGX-1, train, train, train, deploy, then lather, rinse, repeat. That's how the cycle goes for the self-driving car platform.

And just to make sure we understand why this is even required, I want to give some examples of the imaging required for autonomous vehicles: detecting people and other cars. Using a trained network is critical to these tasks. Consider current driver assist, what we generally have in cars today, pre-autonomous vehicles: a traditional ADAS system can react to simple events, like approaching another car too fast. You can determine that the road in front of you is unobstructed and free to proceed. But if there's a vehicle, there are special cases to consider. We see here what can be done: can we proceed? But what if it's a school bus? The situation is different; now you can't. What if the driver opens the door? What if there's a pedestrian, and are they in the street or on the sidewalk? These are just some very simple thought-experiment problems. Fundamentally, a car's job is going to be: don't hit anything. Seriously, that's the job; but how do you make that determination without just standing still? You have to make progress while avoiding any collisions. That's the challenge.

Current driver-assist systems aren't sophisticated enough to handle all of this, and to the earlier point, trying to program it directly would take forever. What are you going to do, sit down and say, "today we've thought of all the cases"? You can't. That's why deep learning comes into play. Instead, you put a DNN in there, and it can be trained, literally, through hundreds and thousands of hours of actual driving; you can train the network to handle everything. If you think about it, a 15-year-old trying to get a driver's license does exactly that: they go out on the road, and in fact they only need 50 hours in the state of California; the neural network up here gets trained to drive. We can do at least as well with DNNs.
I think I'm pretty much out of time, so I want to make a final call here; we can talk more, and even do demos, afterwards. Working at NVIDIA is possible; in fact, we're open to it. Like the machines themselves, we as a company consider ourselves a learning machine, reinventing ourselves: in '96, in 2006 as I talked about, and again just last year in 2016. We're trying to evolve and adapt to the new opportunities that really matter, not just to us but to the world. It's a great place to work, and for those who are interested, we'd love to have interns and new grads. We have some folks here you can talk to afterwards, and next Wednesday there's the career fair on campus; please come out and see us. If any of this sounds exciting, we have openings in hardware, software, systems, and deep learning, opportunities in all of these areas. So I welcome you to come talk with me and my colleagues afterwards, or come to the Career Fair next week. Thank you.

Keep your eSignature workflows on track

Make the signing process more streamlined and uniform
Take control of every aspect of the document execution process. eSign, send out for signature, manage, route, and save your documents in a single secure solution.
Add and collect signatures from anywhere
Let your customers and your team stay connected even when offline. Access airSlate SignNow to Sign Iowa Banking Presentation from any platform or device: your laptop, mobile phone, or tablet.
Ensure error-free results with reusable templates
Templatize frequently used documents to save time and reduce the risk of common errors when sending out copies for signing.
Stay compliant and secure when eSigning
Use airSlate SignNow to Sign Iowa Banking Presentation and ensure the integrity and security of your data at every step of the document execution cycle.
Enjoy the ease of setup and onboarding process
Have your eSignature workflow up and running in minutes. Take advantage of numerous detailed guides and tutorials, or contact our dedicated support team to make the most out of the airSlate SignNow functionality.
Benefit from integrations and API for maximum efficiency
Integrate with a rich selection of productivity and data storage tools. Create a more secure and seamless signing experience with the airSlate SignNow API.
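For developers, an API integration like this typically boils down to uploading a document and inviting a signer. The sketch below is a hypothetical Python illustration: the endpoint paths, payload fields, and file names are assumptions, so consult the official airSlate SignNow API documentation for the exact contract.

```python
# Illustrative sketch of an eSignature API round trip. The endpoint paths
# and payload shapes below are assumptions for illustration only; check the
# official airSlate SignNow API docs for the real contract.
import requests

API_BASE = "https://api.signnow.com"   # assumed base URL
TOKEN = "YOUR_ACCESS_TOKEN"            # obtained via the API's OAuth2 flow
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Upload the document to be signed.
with open("presentation.pdf", "rb") as f:
    upload = requests.post(f"{API_BASE}/document",
                           headers=HEADERS, files={"file": f})
upload.raise_for_status()
document_id = upload.json()["id"]

# 2. Invite a signer by email (hypothetical payload shape).
invite = {
    "from": "sender@example.com",
    "to": [{"email": "signer@example.com", "role": "Signer", "order": 1}],
}
resp = requests.post(f"{API_BASE}/document/{document_id}/invite",
                     headers=HEADERS, json=invite)
resp.raise_for_status()
print("Signature invite sent for document", document_id)
```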
  • Collect signatures 24x faster
  • Reduce costs by $30 per document
  • Save up to 40h per employee / month

Our user reviews speak for themselves

Kodi-Marie Evans
Director of NetSuite Operations at Xerox
airSlate SignNow provides us with the flexibility needed to get the right signatures on the right documents, in the right formats, based on our integration with NetSuite.
Samantha Jo
Enterprise Client Partner at Yelp
airSlate SignNow has made life easier for me. It has been huge to have the ability to sign contracts on-the-go! It is now less stressful to get things done efficiently and promptly.
Megan Bond
Digital marketing management at Electrolux
This software has added to our business value. I have got rid of the repetitive tasks. I am capable of creating the mobile native web forms. Now I can easily make payment contracts through a fair channel and their management is very easy.
Trusted by Walmart, ExxonMobil, Apple, Comcast, Facebook, and FedEx.

Award-winning eSignature solution

Be ready to get more

Get legally-binding signatures now!

  • Best ROI. Our customers achieve an average 7x ROI within the first six months.
  • Scales with your use cases. From SMBs to mid-market, airSlate SignNow delivers results for businesses of all sizes.
  • Intuitive UI and API. Sign and send documents from your apps in minutes.

A smarter way to work: how to industry sign banking integrate

Make your signing experience more convenient and hassle-free. Boost your workflow with a smart eSignature solution.


How to eSign and fill out a document online

Document management isn't an easy task. The only thing that makes working with documents simple in today's world is a comprehensive workflow solution. Signing and editing documents and filling out forms is a simple task for those who utilize eSignature services. Businesses that have found reliable solutions to how can i industry sign banking iowa presentation computer don't need to spend their valuable time and effort on routine and monotonous actions.

Use airSlate SignNow and how can i industry sign banking iowa presentation computer online hassle-free today:

  1. Create your airSlate SignNow profile or use your Google account to sign up.
  2. Upload a document.
  3. Work on it; sign it, edit it and add fillable fields to it.
  4. Select Done and export the sample: send it or save it to your device.

As you can see, there is nothing complicated about filling out and signing documents when you have the right tool. Our advanced editor is great for getting forms and contracts exactly the way you want them. It has a user-friendly interface and gives you complete control and visibility. Register right now and start enhancing your electronic signature workflows with efficient tools to how can i industry sign banking iowa presentation computer on the web.


How to eSign and complete documents in Google Chrome

Google Chrome can solve more problems than you can even imagine using powerful tools called 'extensions'. There are thousands of them you can add right to your browser, and each has a unique ability to enhance your workflow. For example, how can i industry sign banking iowa presentation computer and edit docs with airSlate SignNow.

To add the airSlate SignNow extension for Google Chrome, follow the next steps:

  1. Go to Chrome Web Store, type in 'airSlate SignNow' and press enter. Then, hit the Add to Chrome button and wait a few seconds while it installs.
  2. Find a document that you need to sign, right-click it and select airSlate SignNow.
  3. Edit and sign your document.
  4. Save your new file to your profile, the cloud or your device.

By using this extension, you avoid wasting time and effort on monotonous assignments like saving the file and importing it into a digital signature solution's library. Everything is easily accessible, so you can quickly and conveniently how can i industry sign banking iowa presentation computer.


How to digitally sign documents in Gmail

Gmail is probably the most popular mail service, utilized by millions of people all across the world. Most likely, you and your clients also use it for personal and business communication. However, the question on a lot of people's minds is: how can i industry sign banking iowa presentation computer a document that was emailed to me in Gmail? Something amazing has happened that is changing the way business is done. airSlate SignNow and Google have created an impactful add-on that lets you how can i industry sign banking iowa presentation computer, edit, set signing orders and much more without leaving your inbox.

Boost your workflow with a revolutionary Gmail add-on from airSlate SignNow:

  1. Find the airSlate SignNow extension for Gmail from the Chrome Web Store and install it.
  2. Go to your inbox and open the email that contains the attachment that needs signing.
  3. Click the airSlate SignNow icon found in the right-hand toolbar.
  4. Work on your document; edit it, add fillable fields and even sign it yourself.
  5. Click Done and email the executed document to the respective parties.

With helpful extensions, manipulations to how can i industry sign banking iowa presentation computer various forms are easy. The less time you spend switching browser windows, opening multiple accounts and scrolling through your internal files in search of a template, the more time and energy you have for other crucial assignments.


How to safely sign documents in a mobile browser

Are you one of the business professionals who've decided to go 100% mobile in 2020? If yes, then you really need to make sure you have an effective solution for managing your document workflows from your phone, e.g., how can i industry sign banking iowa presentation computer, and edit forms in real time. airSlate SignNow has one of the most exciting tools for mobile users: a web-based application that lets you how can i industry sign banking iowa presentation computer instantly from anywhere.

How to securely sign documents in a mobile browser

  1. Create an airSlate SignNow profile or log in using any web browser on your smartphone or tablet.
  2. Upload a document from the cloud or internal storage.
  3. Fill out and sign the sample.
  4. Tap Done.
  5. Do anything you need right from your account.

airSlate SignNow takes pride in protecting customer data. Be confident that anything you upload to your account is protected with industry-leading encryption, and intelligent logging out will shield your user profile from unauthorized entry. how can i industry sign banking iowa presentation computer from your phone or even a friend's mobile phone. Protection is vital to our success, and to yours, across mobile workflows.


How to digitally sign a PDF document on an iPhone

The iPhone and iPad are powerful gadgets that allow you to work not only from the office but from anywhere in the world. For example, you can finalize and sign documents or how can i industry sign banking iowa presentation computer directly on your phone or tablet at the office, at home or even on the beach. iOS offers native features like the Markup tool, but it's limited and doesn't offer any automation. The airSlate SignNow application for Apple, on the other hand, is packed with everything you need for upgrading your document workflow. how can i industry sign banking iowa presentation computer, fill out and sign forms on your phone in minutes.

How to sign a PDF on an iPhone

  1. Go to the AppStore, find the airSlate SignNow app and download it.
  2. Open the application, log in or create a profile.
  3. Select + to upload a document from your device or import it from the cloud.
  4. Fill out the sample and create your electronic signature.
  5. Click Done to finish the editing and signing session.

When you have this application installed, you don't need to upload a file each time you get it for signing. Just open the document on your iPhone, click the Share icon and select the Sign with airSlate SignNow option. Your sample will open in the application, where you can how can i industry sign banking iowa presentation computer anything you need. Plus, by using one service for all of your document management requirements, everything becomes easier, smoother and cheaper. Download the application today!


How to electronically sign a PDF on an Android

What's the number one rule for handling document workflows in 2020? Avoid paper chaos. Get rid of the printers, scanners, binders and couriers. All of it! Take a new approach and manage, how can i industry sign banking iowa presentation computer, and organize your records 100% paperless and 100% mobile. You only need three things: a phone or tablet, an internet connection and the airSlate SignNow app for Android. Using the app, create, how can i industry sign banking iowa presentation computer and execute documents right from your smartphone or tablet.

How to sign a PDF on an Android

  1. In the Google Play Market, search for and install the airSlate SignNow application.
  2. Open the program and log into your account or make one if you don’t have one already.
  3. Upload a document from the cloud or your device.
  4. Click on the opened document and start working on it. Edit it, add fillable fields and signature fields.
  5. Once you’ve finished, click Done and send the document to the other parties involved or download it to the cloud or your device.

airSlate SignNow allows you to sign documents and manage tasks like how can i industry sign banking iowa presentation computer with ease. In addition, the safety of your information is a top priority. Encryption and private web servers are used to implement the latest data compliance measures. Get the airSlate SignNow mobile experience and work better.

Trusted eSignature solution: what our customers are saying

Explore how the airSlate SignNow eSignature platform helps businesses succeed. Hear from real users and what they like most about electronic signing.

Sign now for business
5
Alex Harris

What do you like best?

I like the ability to send contracts to my clients. I can upload the contract and send it for signature quickly.

Read full review
Love airSlate SignNow
5
Michael Glenn

What do you like best?

Customer support is lightning fast and actually can answer my questions.

Read full review
Great Way To Get Documents Signed
5
Joyce Paul

What do you like best?

I've been using airSlate SignNow for the last four years. It's a great way to get documents signed while also protecting them. It's easy to use and user-friendly for those you ask to sign. I would recommend it to all businesses. It's easier than some of the other products out there. I am always getting transcript requests or needing signatures for attendance records, report cards, etc.

Read full review

Frequently asked questions

Learn everything you need to know to use airSlate SignNow eSignatures like a pro.

How do I add an electronic signature to a Word document?

Upload your Word document to airSlate SignNow, open it in the editor, and add a signature field where the signature should appear. Click the field to sign, then select Done and send the executed document to the other parties or save it to your device.

How do I sign a document in a PDF viewer?

Most basic PDF viewers only let you annotate a file; they don't provide an eSignature workflow. Instead, open the PDF with airSlate SignNow: upload it from your device or the cloud, fill it out and create your electronic signature, then download the signed copy or send it to the other parties.

How do you sign a PDF document on a phone?

Install the airSlate SignNow app for iPhone or Android, or log in through any mobile browser. Upload the PDF from your device or the cloud, fill out the sample and create your electronic signature, then tap Done to save the document or send it to the other parties.