AWS re:Invent 2018 – Keynote with Andy Jassy

welcome to the seventh annual AWS re:Invent. It is awesome to be here with you. This is our favorite week of the year, when we get to spend the whole week with our customers. You're here with 50,000 of your peers, and I think it will be about 53,000 by the time we're done here, and there are another hundred thousand or so listening to the live stream of the various keynotes. As always, AWS re:Invent is first and foremost a learning and education conference, and what people enjoy most about the conference every year are the breakout sessions, where you can get into depth on the platform. There will be over 2,100 sessions again, and more than half of them will involve customers and partners, so you can get the unfettered scoop on the platform. But while this is a learning and education conference, I usually have a few things I want to share with you, and I have a bunch of them today, so I'm going to get right to it.

Let me start with a quick business update on AWS. At this point, twelve and a half years into launching the business, we have millions of active customers, and we think of an active customer as a non-Amazon entity that has used the platform in the last 30 days. It's a very diverse, large, and fast-growing customer base. It ranges from most of the big tech startups of the last 12 years, who built their businesses from scratch on top of AWS, companies like Airbnb and Pinterest and Slack and Robinhood and Grail and Stripe and Intercom, to what's happened over the last several years, which is very dramatic growth in AWS and the cloud in the enterprise and in the public sector space. You see it across every imaginable vertical business segment. In financial services, you see it with Goldman Sachs; you see Capital One reinventing their digital banking platform on top of AWS; Barclays; banks in the UK that have moved everything to AWS; RBC; HSBC. You see it in
healthcare, where it's Johnson & Johnson and Merck and Pfizer and Bristol-Myers Squibb and Novartis. In manufacturing, you see it with GE and Schneider Electric and Siemens and Philips. In oil and gas, you see it with Shell and BP and Hess and Halliburton. In media, Netflix and Disney and Fox and HBO and Turner and Discovery. Even in travel and accommodations, you see it with Expedia moving everything to AWS, and Singapore Airlines and Ryanair and Korean Air, and you see it with Choice Hotels and Hilton Hotels. Every imaginable vertical business segment in the enterprise is using AWS in a very meaningful way. You also see it in the public sector, where we have about 4,000 government agencies worldwide using AWS, 9,000 academic institutions, and about 27,000 nonprofits. A very broad, diverse, and fast-growing customer base.

Now, since the beginning of AWS, our partner ecosystem has always been very strategic to us, and that's because we knew our customers would want to move to the cloud with the same consulting partners and ISVs they'd been using, just being able to do it on top of AWS as the technology infrastructure platform. It's not just that we have thousands of SIs who have built practices on top of AWS, companies like Accenture and Deloitte and DXC and Infosys and Slalom and 2nd Watch and Cloudreach, but also the many thousands of ISVs and SaaS providers that run on top of AWS, companies like Acquia and Adobe and Atlassian and C3 and Databricks and Infor and Informatica and Pegasystems and Salesforce and SAP and Splunk and Workday. Most ISVs will adapt their software to work on one technology infrastructure platform; some will do two; very few will do three. They all start with AWS, given the significant market segment leadership position we have.

So in the last financials that we released, which was Q3 of this year, AWS is a twenty-seven-billion-dollar revenue run rate business, and by the way, that's real usage, real consumption, real revenue, no EA credits mixed in there, and we're growing forty-six
percent year over year. Now, growing forty-six percent year over year on a base as large as twenty-seven billion dollars is unusual, and it sometimes confuses people. I'll get to that in a second, but let's start with what I think is more straightforward, which is market segment share. I'm going to show you in a second Gartner's latest metrics on infrastructure-as-a-service market segment share, which they released a few months ago, and what you'll be able to see is that AWS continues to have a very significant leadership position, at 52 percent, more than four times the size of the next four providers combined. Now, there are some providers who don't have enough revenue to show up here, and they only get attention when they pop their heads up themselves. [Applause] But I digress.

So let's go back to the 46 percent year over year. Part of what you have to think about when you look at relative growth rates is that the percentages only matter as they relate to the absolute base those percentages apply to, and sometimes you can be fooled by that. Let me give you an example. This is a little bit harder to do, because we're the only ones who break our cloud numbers out in a clear way, but I'll use triangulated analyst estimates to try to do it. If you look at the provider most people think is the second-place provider in this space, in their last financials they grew 76 percent year over year, and you can look at that and say, oh, 76 percent is more than 46 percent. But in reality, that 76 percent represents about a billion dollars of growth year over year, while the 46 percent growth of AWS, on that much larger base, represents 2.1 billion dollars of growth year over year, so more than double that. AWS not only has a significant market segment leadership position in share, but on an absolute revenue basis it is also growing meaningfully faster than anybody else. And by the way, the second-place provider is growing about
double the year-over-year rate of the third-place provider in absolute revenue.

Every year, when we think about what to talk about in this keynote, we think about what builders want, and we think of builders as not just engineers and developers but engineering managers and operations managers and CIOs and chief digital officers and chief information security officers, a very distributed set of builders. In the past we've talked about cloud being the new normal, we've talked about the superpowers that the cloud gives builders, and we've talked about the freedoms that builders deserve. This year, when we thought about what we would talk about, we decided we would share the five sentiments that we're hearing most frequently from our builders.

It turns out that having the right tool for the right job not only makes it much easier to migrate all of your existing workloads to the cloud, it also unleashes builders to build anything they can imagine. And because of the cloud, you don't have to pay for the entire platform up front; you only pay for what you use. People don't want to sit and tolerate a fraction of the functionality that the leaders have, and when you look at the capabilities in these infrastructure technology platforms, nobody has close to the capabilities that AWS has. When we first started showing this marchitecture slide at the first re:Invent seven years ago, it all fit on one screen, and you can see now that it's spread out a little bit; it's 140 services. And it's not just how many regions we have, and how many Availability Zones, and how many flavors of compute and storage and database and analytics and machine learning and messaging and more, and this vast marketplace. It's also what you can't see on these slides: how much more depth and how many more features there are within each of these services.

Let me tell you a true story that happened a few weeks ago. We had a leader in AWS who was
flying out of Seattle somewhere, and it turns out he was seated next to a senior person at a competitor, and that senior person was working on a PowerPoint presentation for their senior leadership team, doing it in a way that made it relatively easy for the person from AWS to see it. By the way, that is a PR person's worst nightmare; I have two words for you: privacy screen. But in any event, what this presentation said was: here's our product strategy. We look at everything that AWS launches, and then we move as fast as possible to launch something in that area. It doesn't matter if it has the same capability, it doesn't matter if it has the same features; we're going to get there so that people can just check the box, and analysts will fall for it. Now, it's possible that for some short period of time some people may fall for that, but builders aren't going to fall for it, because it's so inexpensive to try these services in the cloud. Because it's pay-for-what-you-use, it doesn't take long for builders to know the difference in the depth of these platforms, and there's a huge difference. Let me give you some examples.

Take security. People say, well, I have certifications, I have encryption, I have key management, check, check, check, right? But there's a big difference in the security capabilities. If you look at AWS, we have 203 significant security, compliance, and governance services and certifications; that's about 40 more than the next largest provider. Or take encryption: we've got 117 services that have encryption, which is 3x more than the next closest provider and 47 more than the three providers after that. Or take key management, which is important for working with encryption, to encrypt and decrypt your various objects. We have a service we've built called our Key Management Service, or KMS, which we have tightly integrated with a lot of our services, and we have 52 services at this point that KMS is integrated with. That is 4x larger than the next closest provider and 3x larger than the next three
providers combined. So yes, others have security, but not close to the depth of security that AWS has.

Let's look at databases. Companies say, well, I have a relational database, I have a non-relational database, check, check. But if you look at the details of these database offerings, they're quite different. AWS has 11 relational and non-relational databases, which is much more than you'll find anywhere else; nobody has close to half of that. Or, AWS is the only place where you have a database migration service that allows you to switch between SQL and NoSQL, or actually migrate your data warehouse. So again, a much deeper level of capability on the database side.

Let's look at compute. Companies say, I have instances, I have containers, I have serverless. But again, if you look at the details, it's not just that nobody has the number of instance families that AWS has. It's also that AWS has by far the most powerful machine learning and GPU instances in the P3 and P3dn family, and with that, 100-gigabit-per-second networking, which allows you to scale out your models much more linearly, and across a lot more instances, much more easily. It's also that we have the largest in-memory instances, where you can run SAP applications at 12 terabytes, with even more coming. We're the only ones that have an Arm instance family, for those scale-out workloads, at up to forty-five percent savings. We're the only ones that have an FPGA instance family. We're the only ones that have a truly robust Spot market, where you can take any of the capacity we're not using at a given time and run on it at up to ninety percent savings. Very different instances.

Take containers. Companies say, I have a container service, and virtually every company has a container service or a managed Kubernetes service. We have three container services. For people who want the container service that is most tightly integrated with the cloud and with AWS, they use our ECS service, and that's
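For anyone who hasn't used ECS, its basic unit is the task definition. A minimal sketch of one, shown here as the payload you would hand to boto3's `register_task_definition`, looks roughly like this; all names and values are illustrative, not from the talk.

```python
# A minimal ECS task definition payload -- purely illustrative names.
task_def = {
    "family": "demo-web",             # hypothetical task family name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",  # any container image works here
            "memory": 512,            # hard memory limit in MiB
            "essential": True,        # task stops if this container stops
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
# import boto3
# boto3.client("ecs").register_task_definition(**task_def)
```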
because, since we control ECS, we're able to launch every feature making sure it works really closely with ECS. If you want to use Kubernetes, our managed Kubernetes service, EKS, is growing unbelievably fast; you use EKS, and EKS is tightly integrated with the platform as well. It will never be integrated quite as quickly, because we're dealing with a broader community, but you'll have integration. And then, if you don't want to deal with managing at the server or cluster level, you can use Fargate, which allows you to manage containers at the task level and is completely serverless. Again, nobody has close to those capabilities in containers.

Or look at serverless itself. We pioneered this category of event-driven serverless compute with Lambda a few years ago, and it's incredible how many hundreds of thousands of customers are using it for virtually everything you can imagine. But if you really want to enable people to run true serverless apps, you have to make it work with all the other services, so you can actually trigger functions to run compute. We've integrated Lambda at this point with 47 different services; the next closest provider has only done it with 17. So yes, other folks have compute, but not close to the capability we have in AWS.

Let's take storage, and this is the one I'll go into the most depth on, across four dimensions: block storage, object storage, file storage, and data transfer, which are the four areas we think most about. Let's start with block storage and data transfer. It's not just that AWS has the most volume types and options in EBS, so you can tune your volumes for whatever your workload needs; it also gives you the ability to elastically change the size of your volume without any disruption to the running volume. Or take data transfer. I often think people forget how important data transfer is. Enterprises and companies have so much data on premises that they want to move to the cloud, but it's not simple. AWS has eleven different ways
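The event-driven pattern described above, a service like S3 triggering a Lambda function, can be sketched as an ordinary Python handler. The event shape below follows the standard S3 notification format; the bucket and key names are illustrative.

```python
# Sketch of a Lambda handler fired by an S3 "ObjectCreated" notification.
def handler(event, context=None):
    """Return (bucket, key) pairs for every S3 record in the event."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append((s3["bucket"]["name"], s3["object"]["key"]))
    return results

# A trimmed-down sample event of the kind S3 delivers to Lambda:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "demo-bucket"},
                "object": {"key": "uploads/report.csv"}}}
    ]
}
```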
to get your data into the cloud, depending on the nature of your data and your application; nobody else has a little more than half of that. So, very different. And it's not just things like Direct Connect, which is a private connection between your data center and our regions. If you want to send streaming data in, you can use Kinesis Data Firehose. If you want to send data in bulk, you can use Snowball or Snowball Edge or Snowmobile. Or look at the two new transfer services we launched a couple of days ago: AWS DataSync, which is a data transfer service that automates transferring and synchronizing data over the network to S3 and Amazon EFS, at speeds up to 10x faster than open-source tools like rsync and Robocopy. And also look at how much data is still stuck at companies that need to use FTP to get it somewhere, so we built a secure FTP transfer service, which we also just launched a couple of days ago, to make it easier to move that data in. So you see significant differences in block storage and data transfer.

Let's look at object storage. S3 is without a doubt the largest and most popular object store for unstructured data, and in the nearly 13 years that we've been operating S3 it has grown to exabytes of data. It's also become the clear number one choice for data lakes; we have over 10,000 data lakes being run on top of S3, and there are a few reasons for it. First, it's the most secure object store. It's the only object store that allows you to audit every access to an object: who did it, where they did it from, what they tried to do. It's the only one that gives you a daily report of all your objects, so you can check things like, are all my objects encrypted? It's the only one that allows you to block public access to all of your buckets at an account level, with S3 Block Public Access, which we launched a couple of weeks ago. It's also the only one that has a capability like Macie, which allows you to look at sensitive data and see if there
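The Block Public Access setting mentioned above maps to a four-flag configuration in the S3 API. A minimal sketch with boto3, with a hypothetical bucket name:

```python
# The four flags S3 Block Public Access accepts; all on = nothing public.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # limit access to AWS principals only
}
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="demo-bucket",  # hypothetical bucket
#     PublicAccessBlockConfiguration=public_access_block,
# )
```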
are any anomalies in the access patterns.

So it's not only more secure, it's also the most operationally performant object store, and there are several things we do to make that so, but I'll mention a couple right now. First, we replicate all your objects across multiple Availability Zones, typically at least three, and those Availability Zones are separated by several miles, though usually no more than 100. This is an important point, because it means you get fault tolerance, but you can still use that data in your applications, because the latency is low enough, given how far apart they are, that you can actually make the application work for end users. If you compare that to what other providers do, either they don't really have multiple Availability Zones in their regions and are scrambling really quickly to try to build that construct, or, for those that do, they typically have those Availability Zones in the same building, or next door to each other, or across the street. And that means that if there's some kind of event in a building or on a street, it blows up, sometimes literally, your entire durability and availability story. So with S3, a very different story there. Also, we have a lot of customers who want to replicate objects across regions, and S3 is the only object store that allows you to do cross-region replication without having to create a separate storage class, and to pick whichever regions you want, as many as you want. So you get a lot more cross-region replication in S3.

The third thing is that S3 gives you much more comprehensive flexibility in being able to operate at the object level. It turns out, when you get into the details of managing your objects, having to manage at a level as high as the bucket isn't super useful. So S3 allows you to do things like replicating objects by tag, or lifecycle tiering by tag, or setting access control policies by object, or retention policies by object. And then, just a couple of days ago, we launched S3 Batch Operations,
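Lifecycle tiering by tag, as just described, is expressed through lifecycle rules with tag filters. A sketch of such a configuration follows; the rule name, tag, and transition schedule are illustrative, not from the talk.

```python
# Lifecycle rule that tiers only objects tagged tier=cold: to Standard-IA
# after 30 days, then to Glacier after 90.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-cold-by-tag",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Tag": {"Key": "tier", "Value": "cold"}},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="demo-bucket", LifecycleConfiguration=lifecycle)
```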
which allows you to take all these operations and actions against objects, but across your thousands and millions and billions of objects, in a much easier way through these APIs, much faster than you ever could before. Again, these are all capabilities you'll only find in S3.

Then, most customers are continually looking for ways to save money on storage, and nobody gives you more ways to save money on an object store than S3. We have a number of storage classes. We have S3 Standard, which we've had since the start, and S3 Standard-Infrequent Access for objects that are accessed less frequently. For customers who are willing to trade a little bit of durability for lower cost, who are willing to store their data in a single Availability Zone, we have S3 One Zone-IA. For archival and backup we have Glacier. Now, a couple of days ago we launched a brand-new storage class called Intelligent-Tiering, S3 Intelligent-Tiering, and this is the world's first machine-learning-driven object storage class in the cloud that automatically saves you money. What it does is use the 13 years of experience we've had with trillions and trillions of objects and their various access patterns, from which we've built a model, and then it looks at your own unique access patterns for each of those objects. When it looks like you should move an object to a colder tier, it automatically does that and saves you the money, and when it looks like an object is being accessed more frequently, it moves it to the warmer tier. So this is significant savings for you, where you don't have to do anything once you put objects in the Intelligent-Tiering class. If you don't want to rely on Intelligent-Tiering and machine learning, we also have S3 storage class analytics, which is a unique capability that only we have, which shows you the various patterns in your objects and makes recommendations on where to lifecycle-tier them. So we have a lot of storage classes, but we're not close to being
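Opting an object into Intelligent-Tiering is just a storage-class choice at upload time. A minimal sketch with boto3; the bucket and key are hypothetical.

```python
# Upload an object directly into S3 Intelligent-Tiering by naming the
# storage class on the put -- bucket and key are hypothetical.
put_args = {
    "Bucket": "demo-bucket",
    "Key": "datasets/clickstream.parquet",
    "Body": b"example bytes",                # the object contents
    "StorageClass": "INTELLIGENT_TIERING",   # let S3 move it between tiers
}
# import boto3
# boto3.client("s3").put_object(**put_args)
```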
done, and let me give you an example. We have a lot of customers who have gobs of data, pieces of data that are accessed even less frequently than what people access in Glacier, and today the way they're managing it is with tape, either on-premises tape or off-premises tape. Now, if you've ever had the joy of managing tape, it is no picnic. It is hard to do, it degrades fast, it's expensive, the maintenance is a nightmare. You can move all that tape off premises if you want, but apart from the adventure of getting it there, that data is then not close to the rest of your data if you want to do analytics or machine learning on it. This is something we've had a lot of builders ask us about, and I'm excited to announce, coming in early 2019, a new storage class, Glacier Deep Archive, which is the lowest-cost storage in the cloud, even lower than what you can find in on-premises tape. This means you no longer have to manage tape. It's got the same design for eleven nines of durability that we have in the other S3 storage classes. And if you need to actually recover some of these objects occasionally, which most people won't, but if you need to, you can do it in hours, as opposed to days or weeks if you have the tape off premises. And then the kicker: it's really cost effective. It will cost less than one tenth of one cent per gigabyte per month, [Applause] which translates to about $1 per terabyte per month. So this is really cost effective. Glacier was the best-value archival storage offering before, and this is one quarter of the cost of Glacier, one tenth of one cent per gigabyte per month. You'd have to be out of your mind to manage your own tape going forward, and this will be here for you in 2019.

So let's talk about file storage. There is so much data in enterprises today being stored in file systems, and they want to move it to the cloud, but they need the right file systems to be
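The arithmetic behind the Deep Archive price quoted above is worth making explicit: a tenth of a cent per gigabyte-month is a dollar per terabyte-month.

```python
# Check the pricing quoted on stage: one tenth of one cent per GB-month.
price_per_gb_month = 0.001        # dollars per GB-month
gb_per_tb = 1000                  # decimal units, as storage pricing uses
price_per_tb_month = price_per_gb_month * gb_per_tb   # about $1/TB-month
petabyte_year = price_per_tb_month * 1000 * 12        # about $12,000/PB-year
```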
able to do it. That's why back in 2016 we launched Amazon Elastic File System, or EFS, which is the easiest way to use file systems in the cloud, particularly for Linux workloads using the NFS protocol, and this service has been a huge success since we launched it. It gives you all kinds of capabilities. You have four different modes to operate in, so you can tune your file system to whatever your workload constraints are. The data is stored redundantly across three Availability Zones. It scales up or scales down as you need it; you don't have to do any of that at all. And then, just a couple of days ago, we launched an infrequent access storage class for EFS, because we had a lot of customers with file systems that weren't accessed that often, who didn't want to pay the full price of EFS and wanted a lower storage fee. This will save you up to 85 percent on the file systems you don't access very often. We have tens of thousands of customers at this point using EFS, and it's a really broad customer base, ranging from T-Mobile and Philips and the BBC and Autodesk and Aetna and BMW and HBO and Disney, really a broad group of customers, and EFS has been very successful.

However, we have a lot of customers who say, well, it's great that you've got EFS, and it's optimized for Linux-based workloads and the NFS protocol, but what about Windows workloads? Now, if you look at the market segment share for operating systems, Windows has been losing share to Linux pretty consistently; IDC has it going from 46 percent to 32 percent over the last few years. And even though the vast majority of workloads in the cloud today are Linux based, it's also true that there is still a very significant number of Windows workloads, and those workloads want a Windows file system. It's also interesting, by the way, that if you just look at this Gartner slide for market segment share in infrastructure as a service for Windows, even in Windows, AWS has a
really significant market segment leadership position, at 58 percent. But our Windows customers say, if I'm going to move these workloads to the cloud, I need a Windows file system. So this is something we thought a lot about, and I'm excited to announce today the launch of Amazon FSx for Windows File Server. [Applause]

When we first started thinking about this, what we were hoping to do, what we were planning on doing, was making this Windows file system work as part of EFS. It would have been much easier for us to just layer on another file system, and in fact this is what most third-party companies providing a Windows file system capability do, because it's much easier, if you're trying to build a business with scale and leverage, to use a general file system store and not have to build natively for each individual file system. What they do is take SMB and try to tack it onto a general store. But the problem is, if you talk to customers, and if you're guided by customers like we are, and we talked to many, many customers, they want a native Windows file system. They want it to be fully compatible with things like AD and Windows Explorer and the Windows access controls they use. And the more we talked to customers, and the more we tried to see how flexible they were on this, the more we realized that what they wanted was something native. So we changed our approach, and we actually started thinking about it a little like we think about our Relational Database Service, or RDS, where we have a managed service and control plane with real fidelity to database engines like MySQL and PostgreSQL and MariaDB and Oracle and SQL Server. So we went this different route, which was to build natively on Windows File Server, and you can see it's fully compatible with Windows File Server. You don't have to worry at all about hardware or software; it's a managed service, like most of the things you see from AWS. You can get tens of gigabytes
per second of throughput, with sub-millisecond latency, and right from the get-go it launches with PCI, HIPAA, and ISO security and compliance certifications. So we think this is pretty exciting for our customers.

Now, with the launch of FSx for Windows, we have a file system that will work for the vast majority of customer use cases. But as we were feeling pretty good about this and talking to customers privately, a number of them said, well, that's really great, but there are other types of workloads, often with unusual demands, that you don't really have a file system for. Take HPC, or machine learning, or media processing. These are really unusual types of workloads: very high scale, very high throughput, very low latency needs, massively parallel scale-out. There's nothing you have that can work as a file system for these types of workloads. So we thought a bit about that, and we looked at a lot of different options to help our customers. Probably, arguably, the most popular HPC or high-performance-computing file system is an open-source system called Lustre, and we said to customers, why don't you just use Lustre, and then you can use it with the rest of AWS? And they said, well, that's great, but have you ever tried to manage Lustre? It's painful. It's hard to manage, and again you have to handle all the software and all the hardware. People said, just make it easier for us. So, presto: I give you the launch of Amazon FSx for Lustre, which is a fully managed file system for high-performance computing workloads. FSx for Lustre handles that very demanding set of performance characteristics you need: very high throughput, low latency, hundreds of gigabytes per second, and millions of IOPS. It also has seamless integration with S3, so you can have your data stored in S3 and easily move it to FSx for Lustre, or you can point FSx for Lustre at S3 and it'll automatically move it over, and
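The S3-linked workflow just described corresponds to import and export paths on the FSx API. A sketch of the parameters for a create call follows; the subnet ID, bucket names, and capacity are hypothetical.

```python
# Parameters for an S3-linked FSx for Lustre file system -- IDs and
# bucket names are hypothetical.
fsx_request = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 3600,                    # capacity in GiB
    "SubnetIds": ["subnet-0123456789abcdef0"],  # hypothetical subnet
    "LustreConfiguration": {
        "ImportPath": "s3://demo-bucket/input",   # lazy-load objects from S3
        "ExportPath": "s3://demo-bucket/output",  # write results back to S3
    },
}
# import boto3
# boto3.client("fsx").create_file_system(**fsx_request)
```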
then, when you're done with your processing, you can write that data back to S3 and just shut down the Lustre file system and save the money. And again, just like FSx for Windows, it has HIPAA, PCI, and ISO compliance right out of the gate.

So this collection of features and capabilities across block storage, object storage, file storage, and data transfer, this is the bar for storage. It's not just about having a checkbox in each of these areas. It's about having block storage with the most volume types and the ability to resize those volumes without disrupting the running volume. It's about having data transfer where you have eleven different ways to get your data into the cloud, with unique capabilities depending on what type of data you're trying to move, what your setup is, and what your situation is. It's about having the object store with the most security, the most operational performance, the most features, the most flexibility, and the most ways to save you money. And it's not about just having a file system; it's about having multiple file systems that allow you to optimally run Linux workloads, Windows workloads, or high-performance computing workloads. That is the bar for storage. That's what builders want. Having the right tool for the right job saves builders time and money; it's what they expect. And because the platform is not something you have to pay for up front, builders don't want to tolerate a fraction of the capability that the most capable provider has. They want it all, and they want it now, and there's nobody who gives builders close to the same capabilities as AWS does, which is why the vast majority of companies continue to choose AWS as their infrastructure technology platform.

Now, an example of a company that understands the value of the breadth and depth of the platform, and that's also making a very significant move into the cloud and to AWS, is an incredible, venerable company called Guardian Insurance. It's my pleasure to introduce to the stage their CIO, Dean Del
Vecchio. [Music]

Thanks, Andy. It's exciting to be here to talk about Guardian and share our story with everybody today. I'm Dean, the CIO of Guardian. It's a life insurance company that's been around for 158 years, a mutual company, a Fortune 250. I'm responsible for all of the technology, setting the direction and delivery, and a host of shared services: real estate, facilities, sourcing, just to name a few. I joined Guardian five years ago because of its mission, its values, its commitment to its employees and its customers, and its overall financial strength. But I also chose it for a professional challenge, and to help make sure it would be around another 158 years. However, like many insurance and financial companies, I inherited a lot of technical debt and legacy systems and platforms that had been around for a very long time. In 1967, before we put a man on the moon, we implemented our first policy admin system. I like to say the good news is it's still running, but I also like to say the bad news is it's still running. I saw this as an opportunity to take a legacy insurance company into the new digital age and become an enterprise digital-facing company in a highly regulated industry.

So I'm going to start in a place that may not be expected: I'm going to talk about our workplace strategy. A transformation like this requires a very innovative culture, but in order to have that, you need an environment that supports innovation. So we took on a multi-year journey to replace all of our legacy buildings, with their gray walls and high cubes, with new, bright, and airy open-space designs, with both formal and informal collaboration space. We did this so it would foster collaboration, teamwork, and an agile operating model. We think this is unique in the industry, and probably not what you would expect to see from a 158-year-old insurance company. That sent a clear message to our employees that we were willing to make an investment not
only in our workplaces but in them as well. We invested in technologies to make their jobs easier on a daily basis. We invested in upskilling and training them so they could operate and develop in the cloud; we trained over 2,500 of our employees on the new Agile SAFe operating model. At the same time, we kicked off our technology transformation. We took a year to prepare our environment, because we operate in a very highly regulated industry, and we wanted to make sure it was enterprise-ready before we moved any workloads over to it. We performed a gap analysis between what it's like to run in our own hosted environment versus AWS, and believe me, we found gaps. But when we did, the AWS team was there with us every step of the way to make sure we could fill those gaps. GuardDuty and Macie, for example: we worked closely with them on those to make sure we were going to be in a compliant environment. This allowed us to think about our cloud-first strategy. But what I think is more important and more unique about the approach we took: we took a production-first approach, which is quite unique, we think, in our industry. What we gained from this was quite a few things. One is that it provided us with a highly scalable, available, and secure, but more importantly a more efficient, way to run our operations. Guardian has over 40 SaaS providers, but only one AWS, and AWS has helped us, as you just heard, maintain a robust security posture. We took a very aggressive approach as well: we migrated over 200 applications in about a 12-month period. And because of this, on November 5th we were able to do something that not many companies, if any, in our industry have been able to do: we shut down our last owned-and-operated data center. On November 5th, lights out. It was an awesome feeling pressing that power button to shut it down for good. We reduced our data center space by 80%, and now we're truly cloud-first for all things new. This gave us some really key benefits. Our staff no longer
worries about racking and stacking servers and infrastructure; we're focused on new development, on growing our business and driving business value. The other thing, too: I personally don't have to worry about managing a data center anymore, and all the environmentals that come along with that. We can quickly scale up and scale down with business demands and needs. We can invent, we can test and learn, we can fail quickly, we can break new ground. We see ourselves as an innovative company now; it's part of our DNA. With $1 billion going into the insurtech space, we're ready to partner and participate in accelerated innovation programs. The AWS platform has made it much easier for us to work effectively with startups, with the investments we make, and with our partners. On the M&A front, for example, we no longer take on the technical debt that comes along with an acquisition; as part of our integration plan, we now migrate any new acquisition directly over to the AWS platform, which saves a lot of time and money. So the payoff has been great. We just recently launched our all-digital platform, guardiandirect.com. It allows consumers to research, purchase, and self-service Guardian products and a set of third-party products in the insurance sector. So for all of you out there, take a look: you'll probably find a product you'd like, and if you don't, post for a job. I'm happy to say that AWS is our preferred cloud provider, and over the next few years we'll migrate the remaining workloads, the majority of them anyway, over to AWS. Looking ahead, we're excited about how we can modernize our remaining legacy core platforms. We'll continue to expand our digital experience platforms on AWS. We're going to leverage the data capabilities, gaining new insights into our clients and our customers, and, more importantly, continue to improve our fraud detection and protection. We'll continue
to explore ways to improve our customer experience capabilities with AI, AR and VR, and natural language processing. This is going to allow us to better serve our customers when, where, and how they prefer. A highly digital transformation like this has allowed us to be a very innovative company in a highly regulated industry; we see AWS as a clear competitive advantage for us. And we've done this in a way that supports our three core values: we do the right thing, people count, and we hold ourselves to very high standards. Although much has changed in our culture and the technology we've been using, these values have not and will not, because even in the cloud, everyone deserves a Guardian. Thank you. [Applause] Thank you, Dean. We are honored to be partnering with you on this journey to the cloud. As you could tell from listening to Dean, it turns out that having the most depth and the most breadth of capability is often the needle tipper in which infrastructure platform you're going to build on top of. When we launched AWS in 2006, one of our observations was that developers were largely being ignored by the large technology companies. What was happening was they were being constrained from having access to the building blocks to be creative and to build however they wanted, and all the decisions were being made for them by those technology companies. A lot of builders and developers are tinkerers: they like having access to those low-level, flexible building blocks so they can compose them and stitch them together in any way they can imagine, and that's one of the reasons I believe AWS resonated so quickly with them and grew as fast as it did. But what we've started to see over the last few years, as more and more mainstream enterprises have been planning and managing their approach to the cloud, is that a second macro type of builder has emerged, and that builder isn't as interested in getting into
the details of all the services and stitching them together. They're willing to trade some of that flexibility in exchange for more prescriptive guidance that allows them to get started faster, and they've been asking us for services here. We launched some things over the last few years that address some of these needs, things like Elastic Beanstalk, which is really a container for web apps, or SageMaker, which we'll talk about a little later, which is a managed service for machine learning. But this second group of builders keeps asking us for more of these abstractions that give them prescriptive guidance and help them get going even faster than they can today, and there are a lot of them that people have asked us for. We have spent a lot of time working with enterprises as they've been moving to AWS over the last several years and learned some of the pain points, and I'm going to share some of them with you. Let's start at the very beginning, with a landing zone. If you are making your approach into the cloud, you're going to have multiple teams in multiple locations, across multiple services, across many accounts within your enterprise, using the cloud. In AWS, we give you fine-grained controls and capabilities that allow you to thoughtfully and creatively set up any kind of multi-account secure environment you want. But again, the second type of builder has said: why are you forcing me to figure out what the best practices are? Can't you find a way to make it easier for me? I want to know things like: what are the best practices and blueprints for setting up a multi-account environment? How do I maintain security and compliance as more and more of my teams move to the cloud? How can I set up and enforce policies across all my workloads? You have all these tools in AWS; which are the right ones to use? Customers really want more prescriptive guidance here, and so I'm excited to announce the launch of a new service called AWS
Control Tower, which is the easiest way to set up and govern a secure, compliant multi-account environment, or landing zone, in AWS. Control Tower gives you a few things. It gives you a number of best-practice blueprints, which allow you to do things like best set up AWS Organizations so you have multiple accounts in the right hierarchy; manage identities and federate your access with AWS Single Sign-On or Microsoft AD; do centralized logging using CloudTrail and Config; and set up cross-account access using AWS Identity and Access Management. It gives you prescriptive guidance on how to best set up your VPC and the network pieces around it. And it gives you an easy way to configure an account factory, so that all the employees in your organization know how to set up accounts the way you want. So there's a set of robust blueprints that are prescriptive and just clicks that you can choose from. Then we make it really easy for you to set up guardrails. You should think of guardrails as prepackaged rules that give you the security, operational control, and compliance you want. You can pick things like: don't allow internet access for these specified accounts; disallow publicly readable storage; or prevent any S3 object from being uploaded to an account where the object is not encrypted. These are all guardrails you can set up; we have a large number of them, plus we give you the capability to set up your own guardrails in the UI yourself. Once you enable these guardrails, Control Tower automatically translates them into granular AWS policies, like IAM or S3 bucket policies, and implements them on your behalf under the covers. So: blueprints, guardrails, and then you have visibility through a dashboard that shows you all the accounts, all the guardrails, what the compliance status is, whether there are any outliers, and if so, which they are and how you can take action.
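As a rough sketch of the kind of granular policy a guardrail like "prevent unencrypted S3 uploads" could translate into (this is an illustrative bucket-policy pattern, not Control Tower's actual output; the bucket name is made up):

```python
import json

def deny_unencrypted_uploads_policy(bucket_name: str) -> dict:
    """Build an S3 bucket policy that denies PutObject requests lacking
    a server-side-encryption header (one common guardrail pattern)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
            # The request is denied when the encryption header is absent.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }],
    }

policy = deny_unencrypted_uploads_policy("example-bucket")
print(json.dumps(policy, indent=2))
```

The value of the abstraction is that you pick the rule in a GUI and this JSON is generated and attached for you.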
So this is a much easier way, with just a few clicks and a GUI, for you to manage your multi-account secure environment, or landing zone, in AWS, and it saves people a lot of time. Now, chief information security officers, when we talk to them about Control Tower, say: this is awesome, thank you for providing this, it saves me a lot of time too. But there are a lot of times where I don't need to see all of those landing zone pieces, all the multi-account pieces; I just want one place where I can make sense of all the security findings from the different software I use. One of the challenges here is that most companies use lots of different security software: third-party software, some AWS services. All these findings are in different data formats, in different services and systems, so people are forced to constantly pivot between the consoles of different services, or to aggregate all that data and try to normalize it into something coherent, and that's a lot of work on their side. They said, again: make it easier for us, please. And so I'm excited to announce the launch of AWS Security Hub, which is a place where you can centrally manage security and compliance across your whole AWS environment. What Security Hub does is give you a GUI that saves you a lot of time. It takes all of your findings, whether you're using AWS security services like Inspector for vulnerability scanning, GuardDuty for network intrusion, or Macie for anomalous data patterns, or whether you're using one of the very large number of third-party security software services in our ecosystem. It takes all that data, aggregates it, and normalizes it, and then makes it easy and coherent to see and take action on in a single GUI. In Security Hub, you'll be able to see all those findings prioritized in whichever way you want to display them.
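The normalization step described above is the heart of it: Security Hub's real normalized schema is the AWS Security Finding Format (ASFF), but as a loose sketch of the idea, with entirely hypothetical vendor payload shapes, it looks like mapping each source into one common record:

```python
# Sketch of normalizing heterogeneous security findings into one schema.
# The vendor payload shapes and field names here are hypothetical; the
# real normalized schema is the AWS Security Finding Format (ASFF).

def normalize(source: str, raw: dict) -> dict:
    if source == "vendor_a":   # hypothetical shape: {"sev": 0-10, "msg": ..., "host": ...}
        return {"severity": raw["sev"] * 10, "title": raw["msg"], "resource": raw["host"]}
    if source == "vendor_b":   # hypothetical shape: {"level": ..., "summary": ..., "arn": ...}
        level = {"LOW": 30, "MEDIUM": 50, "HIGH": 80}[raw["level"]]
        return {"severity": level, "title": raw["summary"], "resource": raw["arn"]}
    raise ValueError(f"unknown source: {source}")

findings = [
    normalize("vendor_a", {"sev": 8, "msg": "Open SSH port", "host": "i-12345"}),
    normalize("vendor_b", {"level": "HIGH", "summary": "Public bucket", "arn": "arn:aws:s3:::logs"}),
]

# Once normalized, one dashboard can prioritize across every source.
findings.sort(key=lambda f: f["severity"], reverse=True)
print(findings)
```

Without a common schema, each console shows only its own slice; with one, a single prioritized list becomes possible.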
You also get the ability to get down into the details of which EC2 instances, S3 buckets, or objects are violating some of these policies or have findings against them, and to take action quickly. This is going to pretty radically change how easy it is to look at what's happening, security-wise, across your estate in AWS. Now, this service only works, in my opinion, if it has a robust third-party partner ecosystem, because so many of our customers are using these third-party security services: companies like Symantec, Palo Alto Networks, Qualys, Splunk, Alert Logic, and Rapid7. It's a very broad group; a large number are integrated initially, and we expect the rest of our community will be excited to do so as well. So how about data lakes? Everybody's excited about data lakes; this is maybe this year's very-in-vogue concept, like we've had with machine learning and edge and big data and cloud. People realize there is significant value in moving all that disparate data that lives in your company, in different silos, into a data lake, making it much easier, by consolidating it, to run analytics and machine learning, which the cloud allows you to do in a way that's never been possible before. So everybody wants a data lake, and as I mentioned earlier, we have over 10,000 data lakes built on top of S3. But if you try to build a data lake, it's hard. As I said, we have a lot of experience with this, and there are a lot of things you have to do. First, you have to ready your storage and configure the S3 buckets, and then you have to actually move that data from all the disparate places. In the process, you have to crawl the data to extract the schemas, and you've got to add metadata tags to catalog the data so you can find it, and put it in a catalog. Then you have to go through the step of cleaning and preparing the data, where you have to
carefully partition, index, and transform the data to optimize the performance and the cost associated with finding that data and running analytics. And then the hardest part, if that isn't enough, is actually setting up the right security policies. This is some of the most sensitive data in your entire enterprise, so you have to create data access rules at the table, column, and row levels, you have to figure out how to encrypt that data, and you have to have the right access control from each of the analytics or machine learning services you want to use. It's just a lot of work. And then, of course, you've got to find a way to make this data accessible and trusted for your data analysts who want to do the analytics at a later stage. This is a lot of work, and for most companies it takes several months to set up their data lake, which is frustrating for them. So we've tried to take the experience we've had working with so many enterprises building their data lakes and build an abstraction that makes it much easier for all of you. I'm excited to announce the launch of AWS Lake Formation, which is a service that allows you to set up a data lake in days instead of months. Lake Formation solves a lot of the problems and challenges I was mentioning earlier, and it lets you do it from a dashboard with just a few clicks. You point Lake Formation at the data sources you want to move into the lake, and it moves them cleanly, taking care of crawling the schemas and setting up the right metadata tags. Then you can choose from a pretty significant list of prescriptive security policies that you can apply at any level in your data lake, which takes a lot of the guesswork and heavy lifting out of it. It also does the encryption. And you can set up access control policies per analytics service if you want, or for all your analytics services, or for segments of them; your choice.
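To see why the partitioning step above matters so much for cost and performance, here is a minimal sketch (the bucket, table, and record contents are made up) of laying records out under Hive-style partition prefixes, the convention many analytics engines can prune on instead of scanning everything:

```python
from collections import defaultdict
from datetime import date

def partition_prefix(bucket: str, table: str, day: date) -> str:
    """Hive-style partition path: engines that understand this layout can
    skip whole prefixes that fall outside the query's date range."""
    return f"s3://{bucket}/{table}/year={day.year}/month={day.month:02d}/day={day.day:02d}/"

# Made-up clickstream records.
records = [
    {"ts": date(2018, 11, 28), "event": "login"},
    {"ts": date(2018, 11, 28), "event": "purchase"},
    {"ts": date(2018, 11, 29), "event": "login"},
]

# Group records by partition so each prefix holds only one day's data.
partitions = defaultdict(list)
for r in records:
    partitions[partition_prefix("example-lake", "clickstream", r["ts"])].append(r)

for prefix, rows in sorted(partitions.items()):
    print(prefix, len(rows))
```

A query for November 28th then touches one prefix instead of the whole table, which is exactly the heavy lifting Lake Formation is doing on your behalf.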
That makes it much easier. Then we do the heavy lifting of cleaning, partitioning, indexing, and deduping the data, so that you can store and access it cost-effectively and quickly. And then we put it in a catalog in a much easier-to-manage way for your analysts and data scientists as they do analytics and machine learning. This is a step-level change in how easy it's going to be for all of you to set up data lakes; I think folks are going to be pretty excited about it. So for this second type of builder, who has been asking for more of this prescriptive guidance and more of these abstractions to let them get going even faster: I know you've been waiting, and I know you want them, and they're here for you. I think being able to set up a landing zone and a data lake, and to have much easier visibility into the security posture of all your findings across AWS, is a huge step forward in your ability to manage what you're running, get going quickly, and be secure and compliant in the cloud. [Applause] Freedom. We've talked about freedom for builders a lot over the last few years, and if you think about it, freedom for builders is not just about having all the tools you need to build whatever you want at your fingertips; it's also about being free of abusive and constraining relationships. I can assure you that enterprises are singing in the dead of night, and in the afternoon, and in the morning too, and that's because the world of the old-guard, commercial-grade databases has been a miserable world for most enterprises for the last couple of decades. That's because these old-guard databases, like Oracle and like SQL Server, are expensive, they have high lock-in, they're proprietary, and they're not customer-focused. These companies, forget the fact that both of them will constantly be auditing you and fining you for some license violation they're able to find, they also make capricious decisions
overnight that are good for them but not so good for you. Overnight, Oracle decides they want to double the cost of Oracle software to run on AWS or Azure; that's what they do. Or Microsoft: you buy your licenses, you've paid for your SQL Server licenses, you're running them in RDS, and they decide they don't really want to let you take those licenses you've paid for and run them in RDS anymore; they want you to run on Microsoft. It's good for them; it's not so good for you. And people are sick of it. They are sick of it, and now they have choice. This is why companies have been moving as fast as possible to open engines like MySQL and PostgreSQL and MariaDB. If you want to get the performance in these open engines that you can get in the commercial-grade databases, you can do it, but it's hard; it takes tuning. We have a lot of experience doing this at Amazon, and what our customers kept asking us was: could you guys figure out a way to give us the best of both worlds? We want the open engines, with the customer-friendly policies and the portability, with the performance of the commercial-grade, old-guard databases. That's why we built Amazon Aurora, which continues to be the fastest-growing service, at this point of its evolution, in the history of AWS. Aurora has both MySQL- and PostgreSQL-compatible editions. It's about five times faster than the highest-end implementations of these open source engines. It's at least as available, durable, and fault-tolerant as the commercial-grade databases, at one-tenth the cost. This is why you see so many thousands and thousands of customers using Aurora; at this point we have tens of thousands of customers using it. This is the third year in a row I've been able to show this slide and say that the number of customers has more than doubled, and you can see it across lots of different examples. Verizon is making a huge
shift to Aurora from the Oracle, SQL Server, and Db2 databases they have, and you can see it with Expedia, Capital One, AstraZeneca, Dow Jones, Bristol-Myers Squibb, Samsung, and Ticketmaster; tens of thousands of customers are moving. Now, there are a lot of reasons, as I was mentioning earlier, why people are excited about moving to Aurora, but one of them is that the team continues to innovate and iterate on behalf of customers really quickly. They have launched about 35 significant features in the last year alone. There are too many to mention, but I'll mention a few that people are excited about. When we launched Aurora Serverless, you no longer had to provision Aurora at all: you just say you want Aurora Serverless, it scales you up seamlessly when you need it and scales you back down so you don't waste money, and you pay per request. We had customers who were really excited about Parallel Query, which speeds up your queries by two orders of magnitude. We had customers who said, I really want Backtrack, which is almost like an undo button in Aurora that brings you back to a previous point in time. And just a couple of days ago we launched Aurora Global Database, which allows you to have a multi-region Aurora database: when you write to one spot, it replicates that data across multiple regions, with a lag time typically of less than a second, which gives you even better disaster recovery and lower-latency reads all over the world. So Aurora is continuing to iterate quickly, continuing to innovate on your behalf, and growing really, really fast. But I want to talk about a different database trend we're seeing that is becoming more and more significant and more and more pervasive. If you look at the last 20 to 30 years, most companies have run most of their workloads on relational databases, and that made sense back in those days, when those applications typically were
gigabytes of data and occasionally terabytes, where you needed complex joins and ad hoc queries, and the data volumes were at the levels I just mentioned: gigabytes, sometimes terabytes. A number of things have changed over the last few years that are affecting people's receptivity to that idea. The first is that people have woken up to how useful data is, at the same time that the cost of compute and storage has come way down, in large part because of the cloud, which means that most applications today are storing lots of terabytes and petabytes of data instead of gigabytes and occasionally terabytes. Then, the expectations of builders, as well as of the end users of those applications, are really different: the latency requirements are much lower, and they expect to handle millions of transactions per second with many millions of people using the app simultaneously. And what you've seen over the last few years is that more and more companies are hiring technical talent in-house to take advantage of this huge wave of technology innovation the cloud is providing, and these builders are building not in the monolithic ways of the past but with microservices, where they compose the different elements together using the right tool for the right job. All of this has led people away from using relational databases for all of their workloads. Let me give you a few examples. Take a company like Lyft, or take Fortnite; those of you who have kids probably know what Fortnite is. Lyft has millions and millions of passengers, and lots of GPS coordinates for where their passengers and drivers are; Fortnite has millions of gamers, and millions of gamer profiles. This is really simple data that can be stored as a simple key-value pair, where the key is the user and the value is either the GPS coordinates or the gamer profile.
So we built a really scalable database optimized for running key-value pairs at single-digit-millisecond latency and at very large scale. That's what we built with DynamoDB, and that's why many companies, like Epic with Fortnite, or Lyft, use DynamoDB. Now let's say you can't even stand single-digit-millisecond latency; you want something even shorter, like microsecond latency. Airbnb is an example: for the single sign-on for their guests and hosts, they want all the applications to work with microsecond latency, so what they want is an in-memory database, or a cache. That's why we built ElastiCache, which is managed Redis and managed Memcached, and that's what Airbnb uses. Now let's say you have datasets that are really large and have a lot of interconnectedness. Take Nike as an example: they built an app on top of AWS that looks at their athletes, connects them with their followers, and compares all of their relative interests. Those are big datasets, if you think about all the athletes and all the followers and all the interests, and they have a lot of interconnectedness. If you tried to build that application using a relational database, it would slow to a crawl, and that's why people are excited about graph databases and why we built Amazon Neptune, which we launched here a year ago and which is off to a roaring start. So people want the right tool for the right job, and they want the right database for whatever their workload is. Let me come back to DynamoDB for a second. As I mentioned earlier, we have many thousands of customers running DynamoDB, which is a very scalable, low-latency, key-value database; you see companies like Samsung, Snapchat, Lyft, Epic, Nike, Capital One, and GE using it. And like you saw with Aurora, that team is continuing to iterate at a really fast clip, with another 30 significant features added in just the
last year or so. Again, there are too many to mention, but some of the ones people are excited about: last year here we launched Global Tables, which was the world's first multi-master, multi-region database; online backups, which allow you, while the application is running and the database is being written to, to do backups of hundreds of terabytes without disrupting the database; and point-in-time recovery, which was also very exciting for people. But when we talk to DynamoDB customers, the thing they probably still struggle with most is how to think about provisioning read and write capacity. If you're a business that has been using DynamoDB for a while and has a large table or a large database, you kind of know how much read and write capacity you need: you use our provisioned functionality and bolt on an auto-scaling policy, so that if it turns out you have an unexpected spike, you can scale, but you don't have to pay for that peak when you don't consistently live at that level. But those same customers, as well as many others, have lots of tables and lots of databases where they can't predict how much capacity they need, either because they have seasonality or spikiness, or just because the tables are new or small. So what they tend to do is guess at provisioning, and what do you think they do? They provision at the peak, so they make sure the application will function no matter what, and they usually don't attach an auto-scaling policy, and that's a waste of money. We decided a long time ago at AWS that, whenever we can, we're always going to try to do the right thing for our customers over a long period of time, even if it cannibalizes revenue for us. So we tried to think about how we could build a capability that solves some of this waste for people, and I'm excited to announce the launch of DynamoDB read/write capacity on-demand.
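The provision-at-peak waste described above comes down to simple arithmetic. Here is a back-of-the-envelope sketch; the prices are invented placeholders, not actual DynamoDB pricing, and only the shape of the trade-off matters:

```python
# Back-of-the-envelope comparison of provisioning at peak vs. paying per
# request. All prices below are invented placeholders, not real DynamoDB
# pricing; the point is the shape of the trade-off.

PRICE_PER_PROVISIONED_WCU_HOUR = 0.00065   # placeholder
PRICE_PER_ON_DEMAND_WRITE = 0.00000125     # placeholder

hours_in_month = 730
peak_writes_per_sec = 1_000                # rare spike you must survive
actual_writes_in_month = 50_000_000        # spiky workload, mostly idle

# Provisioned at peak: you pay for the capacity whether or not it's used.
provisioned_cost = peak_writes_per_sec * PRICE_PER_PROVISIONED_WCU_HOUR * hours_in_month

# On-demand: you pay only for the requests you actually make.
on_demand_cost = actual_writes_in_month * PRICE_PER_ON_DEMAND_WRITE

print(f"provisioned at peak: ${provisioned_cost:,.2f}")
print(f"on-demand:           ${on_demand_cost:,.2f}")
```

For a table that sits far below its peak most of the month, paying per request wins; for a table that consistently runs near its known capacity, provisioned stays cheaper, which is exactly the guidance that follows.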
What that means is that you can just set up a table in DynamoDB, say you want to run it on-demand, and we will automatically scale it for you: if you need more, it will scale up, and if it turns out you don't need as much, we stop charging you; you pay by the request. When you know the capacity you need and you've been running something at scale, it will still be most cost-effective to use provisioned capacity; but for all those other tables, and for customers who don't know, you'll be able to let DynamoDB manage it for you and save a significant amount of money. Now, we've talked about these purpose-built databases: key-value, in-memory, and graph. One of the things we have seen is that a new pattern of database need is emerging, driven by the millions and millions of sensors and edge devices that are everywhere: in our homes, in offices and factories, in planes and ships and cars, in oil fields and agricultural fields. They're everywhere, and they are collecting large amounts of data, and people have become very interested in understanding what's happening with those assets and how things are changing over time. So people are interested in what we call time series data. With time series data, each data point consists of a timestamp and one or more attributes; it measures how things change over time and helps drive real-time decisions. You can imagine some asset where all of a sudden the temperature has changed significantly; you might want to take some action on that asset. And you see it across lots of things: clickstream data, all kinds of IoT sensor readings, even DevOps data. The problem is that, as more and more companies have this need and this desire to collect and analyze time series data, there aren't good solutions for how to use it in a database. If you try to do it with a relational database, it's quite difficult: you have to build
these indexes, which are really large and clunky and slow to query; the schemas in relational databases are rigid and aren't flexible enough as you keep adding more and more sensors; and relational databases don't have the analytics pieces you want with time series, like smoothing and interpolation and approximation. Those are all things you don't have. Then, if you look at the existing time series options, either the open source pieces or the limited number of commercial services, they're either really hard to manage, or they just don't perform and scale well; they have all kinds of limits. With some of these limited commercial services, when you reach the data storage limits, they just start purging data; who knows which data they're purging, whether you need it or not. It's just not a good solution for people who need to deal with time series. So we've been asked lots of times by our customers, because we have a really large and fast-growing IoT and edge business, if we would help here. I'm excited to announce the launch of a new database called Amazon Timestream, which is a fast, scalable, fully managed time series database. [Applause] Timestream is going to change the performance of your time series database by several orders of magnitude; it's a very different equation. The reason is that we built it from the ground up to be a time series database. What keeps happening is that people take these general-purpose stores and try to retrofit them to serve whatever the emerging needs are, but, as you saw with FSx for Windows File Server and FSx for Lustre, people want the right tool for the right job. So we built this from the ground up, with an architecture that organizes data by time intervals and enables time-series-specific data compression, which leads to less scanning and faster performance. We have separate data processing layers for data ingestion, storage, sharing, and queries, and we have an adaptive query engine.
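As a small illustration of the time series shape described above (a timestamp plus one or more attributes) and of one of the analytics people want, smoothing, here is a moving-average sketch over made-up sensor readings:

```python
# Each time series point is a timestamp plus one or more attributes.
# The readings below are made-up temperature samples from one sensor.
points = [
    {"ts": 0,   "sensor": "pump-7", "temp_c": 61.0},
    {"ts": 60,  "sensor": "pump-7", "temp_c": 62.5},
    {"ts": 120, "sensor": "pump-7", "temp_c": 95.0},   # sudden spike
    {"ts": 180, "sensor": "pump-7", "temp_c": 63.0},
]

def moving_average(values: list[float], window: int) -> list[float]:
    """Simple smoothing: average of the trailing `window` samples."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

temps = [p["temp_c"] for p in points]
smoothed = moving_average(temps, window=2)

# A raw point far above the smoothed trend might trigger an action on
# the asset, as in the temperature example in the talk.
spikes = [p for p, s in zip(points, smoothed) if p["temp_c"] - s > 10]
print(spikes)
```

Doing this at trillions of events per day is what makes purpose-built storage, organized by time interval, worth having.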
That engine understands the location, the resolution, and the format of the time series data. If you look at Timestream, it will be a thousand times faster, at a tenth of the cost, than using a relational database to handle this time series data. It handles trillions of events daily, so it's highly scalable. It has all the analytics capabilities you want right in the service: interpolation, approximation, and smoothing. And it's serverless: you don't have to worry about capacity, we scale it up and scale it down for you. So, pretty exciting. Now I'm going to take a semi-rare detour, if you'll indulge me, and give you an idea of how we're thinking about blockchain. This was interesting: a year ago, a lot of us got asked why AWS didn't announce a blockchain service at re:Invent, even though we have a lot of customers who run blockchain services on top of AWS and we have lots of tools for it. People were curious why, and what we shared was that, in talking to customers, we just hadn't seen that many blockchain examples in production that couldn't pretty easily be solved by a database. I think people assumed that meant we didn't think blockchain was important, or that we weren't going to build a blockchain service, which was not true; we just genuinely didn't understand what the real customer need was. And, unlike maybe some other folks, the culture inside AWS is that we don't build things for optics. We only spend the resources to build things where we understand what problem you're really trying to solve and we believe we're going to solve it for you. So we spent the last part of 2017 and the first half of 2018 talking to hundreds of customers about what it is they really want when they say they like the idea of blockchain, and what we found was that there were two jobs they were trying to solve, and they were each a little bit different. The first was that we had a significant number of customers who effectively wanted a ledger with a
centralized trusted entity, but where that ledger served as a transparent, immutable, cryptographically verifiable transaction log for all the parties they needed to deal with. If you think about it, this is something a lot of companies need. Think about all the supply chains, and wanting all your supply-chain partners to be aware; you can see this in almost every industry. On the slide you can see healthcare, manufacturing, government with the DMV, and HR, and think about how many of these types of use cases there are. The problem is that solving this really well, and really scalably, is not so easy today. If you try to solve it with a relational database, it’s not really built to be immutable, so you’d have to do a bunch of wonky things to try to make that happen and then maintain it, and there’s no way to cryptographically verify the changes. The other way people think about doing it is to say, well, maybe I’ll use the ledger in one of these blockchain frameworks. But the problem is that you have to wade through so much muck, so much functionality you don’t need for this first use case, just to use the ledger: you have to set up a network, multiple nodes, and configure all the certificates, all the members, and so on. And the reality is that that ledger isn’t that performant, because it’s built for a use case where it needs to get consensus on transactions from all the parties. So that was the first problem we heard, and those were the challenges people were having in really solving it.

The second problem we heard customers wanting to solve was a little bit different. These were typically peer organizations that wanted to do business together, where they didn’t want any centralized trusted entity; they wanted completely decentralized trust, so that everybody would see all the transactions and interactions, and everybody would get to approve them by consensus before they happen. Again, this was an interesting problem. Most of them were trying to solve it with these blockchain frameworks, but I ask you: have you tried using these blockchain frameworks? It’s not easy; it’s a lot of muck. We had some of our very best developers and builders inside AWS spend several days trying to get something real done, and it was awfully difficult for them. That’s because you have to wade through all this functionality: you have to set up all the networks, you have to provision hardware and software, you have to set up the certificates, and each member has to do their own part.

So these are two distinct problems that we heard, and if you think about the way we operate, the way we’ve provided building blocks and capabilities to you over the last 12 years, we’re always going to give you what we think is the best tool for each job, and these are two pretty different problems people are looking to solve. On the first one, when we were thinking about what we could do for people, we had an epiphany, which in retrospect was fairly obvious, but at the time it was an epiphany: we actually had to build something like this ourselves in AWS a few years ago. As we ran services like EC2 and S3, and a bunch of others with giant scale, what those teams really wanted was a transaction log of every single data-plane change being made, because it makes things like operations and billing much easier. We thought initially to build that on a relational database, but of course that doesn’t scale, for all the reasons we mentioned. So we built a service we called QLDB inside Amazon to be an append-only, immutable, transparent ledger, and we realized we could externalize it. And that’s what we’ve done: I’m excited to announce the launch of the Amazon Quantum Ledger Database, or QLDB, which is a fully managed ledger database with a central trusted authority.
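The "append-only, immutable, cryptographically verifiable" idea behind QLDB can be illustrated with a toy hash-chained ledger. This is a conceptual sketch only, not QLDB's actual design or API; all names here are made up.

```python
import hashlib
import json


class MiniLedger:
    """Toy append-only ledger: each entry stores the hash of the previous
    entry, so changing any historical record breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        # Hash covers the previous entry's hash plus this record's contents.
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self):
        # Recompute every hash from the start; any tampering is detected.
        prev = "0" * 64
        for entry in self.entries:
            body = json.dumps(entry["record"], sort_keys=True)
            if entry["prev"] != prev:
                return False
            if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Because each entry's hash covers the previous entry's hash, modifying any historical record invalidates every hash after it, which is what makes tampering detectable without trusting any single copy of a row.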
And so what QLDB gives you is that ledger where you’ve got a central trusted authority, like in a supply chain. All the entries are immutable and cryptographically verified, and the ledger is transparent to everybody you grant permissions to. It’s much more performant than the ledgers in these blockchain frameworks, because we don’t have to wait for consensus; it’ll be two to three times faster. It’ll be really scalable, it’ll have a much more flexible and robust set of APIs for you to make any kind of change or adjustment to the ledger database, and it’ll be easy to use, with SQL-like properties that make it easy to operate. So that’s the solution to the first problem.

The second problem, where you want decentralized trust across a group of parties, needs to be solved with blockchain, and so I’m excited to announce the launch of Amazon Managed Blockchain, which is a fully managed blockchain service supporting both Hyperledger Fabric and Ethereum. This service is going to make it much easier for you to use the two most popular blockchain frameworks. Companies that know the number of members they want in their blockchain network, and want robust private operations and capabilities, typically choose Hyperledger Fabric; those that don’t know the number of members, or want to allow any number of members to join, where it’s largely public, usually choose Ethereum. Hyperledger Fabric is available for you to start using today; Ethereum will be available in a couple of months. It scales to support thousands of applications running millions of transactions. Really, the most exciting part to me is just how much easier it is to get started and get a blockchain operating, with a few clicks: in the console you choose your preferred open-source framework, you add the network members, you configure the members’ nodes, and you deploy applications to the member nodes. It just saves a lot of time and is much more efficient.

So when we heard people saying "blockchain," we felt there was this weird convoluting and conflating of what they really wanted, and as we spent time working with customers and figuring out the jobs you were really trying to solve, this is what we think people are trying to do with blockchain, and we’re really excited to give you both QLDB and the Managed Blockchain service.

When you look at this collection of database services, this is what we consider database freedom. It’s not just the ability to use a performant relational database that’s free from abusive and constraining relationships; it’s also easy access to the right database tool for the right job. Modern technology companies are not vanilla. Their workloads are diverse, and they vary depending on how much data they’re holding, what the latency requirements are, how complicated the joins across data sets are, whether you’re using time series, or whether you want a ledger. They’re different. You can use a single relational database, an all-singing, all-dancing solution that somebody tells you will solve everything, about as easily as you can use a hammer to build your house and fix every room, but I would be very skeptical of that rhetoric, very suspicious. The reality is that having the right tool for the right job saves you time and money. This is now your right, and nobody gives it to you the way AWS does, where we have far more selection of databases and the right tools for the right job, and I think you’re going to be excited with the new services that we’ve launched as well. [Applause]

Now let’s switch gears a little. Have you ever had something you were excited about, that you were really anticipating, and you talked about it, and you talked about it, and you talked about it, and at a certain point you said, well, it’s really fun to talk about it, but actually I’d like more action to happen? That sentiment is one we hear
a lot from builders. There is so much buzz and so much talk about machine learning, and people are making progress, but not at the rate they really want, and there are a few things missing. We need more education, we need more training, but the biggest thing is that we just keep needing to provide more capabilities that make it easier for builders. Now, even though we all wish it were going faster, if you look at the last year it’s pretty remarkable how much progress has been made. There is a lot of machine learning being done in the cloud, people are making great strides, and most of it is being done on AWS, where we have tens of thousands of customers running machine learning, with twice as many references and customers as you’ll find anywhere else. It’s a pretty broad set of customers, too: companies like Liberty Mutual, Slack, C-SPAN, Intuit, Pinterest, Capital One, the American Heart Association, Yelp, FINRA, and NBC. I won’t read all of those, but it is a very broad and fast-growing group.

Now, we get asked a lot how we think about machine learning, and I’ll explain it. We think about machine learning as having three macro layers of the stack. The bottom layer is for expert machine learning practitioners: people who are comfortable building, training, tuning, and deploying models, and who are comfortable operating at the framework and infrastructure level. So let’s look at this bottom layer a little, starting with the infrastructure. The vast majority of deep learning and machine learning being done in the cloud is being done on top of the P3 instances in AWS, and we just announced, a couple of days ago, the P3dn instances, which are the most powerful GPU instances for machine learning that you’ll find anywhere in the world. They have 100 gigabits per second of networking, which changes how you can scale out, parallelize, and save costs and money on these models. If you look at them, they have three times the networking throughput of anything else out there, twice as much GPU memory as anything out there, and 100-plus gigabytes more system memory than anything out there. They are really powerful, and this is where you see customers starting to do machine learning at large scale.

Now, of course, customers use lots of different frameworks, and we support all the major frameworks customers want equally well, but the one with the most resonance right now in the community continues to be TensorFlow. If you look at the cloud, 85 percent of the TensorFlow being run is run on top of AWS, and you see it with lots of different types of customers, like Expedia, Siemens, and Snapchat. But for our customers who run TensorFlow there are some challenges they talk to us about, particularly scaling challenges, and what they tell us is that it’s really difficult to actually consume very much of the GPU with TensorFlow. That’s because there’s a lot of complexity in distributing the weights of the neural network efficiently across the GPUs. If you look at a typical workload, one that runs on 256 GPUs, usually with TensorFlow it only uses about 65 percent of the GPU. I think everybody knows that GPUs are really expensive, so that’s wasteful, and people don’t like it. In AWS we have separate teams who work on each framework, so we have a team that works on TensorFlow, and we challenged that team. We said, look, most of the TensorFlow in the cloud right now is being run on AWS, and yet look at the problems our customers have, look at how inefficient it is, how much money they’re wasting; find a way to invent and solve that. And so that team went away and innovated and made some pretty significant improvements to the
TensorFlow framework. What they did was invent a way to distribute those weights much more efficiently across the neural network, such that now, on that same 256-GPU workload, it uses 90 percent of the GPU, with close to linear scaling. That is a huge improvement in efficiency and in the cost equation. To give you an idea of what that means, let’s look at an example: ResNet-50, which is a common computer vision model. Before, the fastest training time on it came from a company in the Bay Area that wrote a proprietary algorithm on proprietary hardware that isn’t available to you at all, and they were able to do it in 30 minutes. With this optimized TensorFlow that the machine learning team at AWS built, running on our P3 instances, we were able to do that same ResNet-50 workload in less than half the time, in 14 minutes, which is pretty cool. And by the way, what’s most cool about it, in my opinion, is that it’s not some kind of optical benchmark achieved in proprietary ways that aren’t available to you. All the changes we made, all the ways we did this, are available to all of you: you can use those P3 instances, which are available in virtually every AWS region, and the optimized TensorFlow this team built is available to you either in SageMaker, when you use TensorFlow, or in the AWS Deep Learning AMI. So all of that can be put to use right now for your TensorFlow workloads.

Now, as I mentioned, that’s a lot of interesting and innovative work the TensorFlow team at AWS did, but TensorFlow is one of several frameworks that customers use, and I think we have a different approach here than most of the other providers. Most of them are trying to funnel all the workloads into TensorFlow, and as you can tell, we don’t believe in one tool to rule the world; we want you to use the right tool for the right job. It turns out that if you’re doing things like video analytics or recommendations or natural language processing, MXNet is a great solution and scales the best; if you’re doing computer vision, Caffe2 is great; and there’s all kinds of incredibly innovative research being done on top of PyTorch. More than half of our customers who do machine learning in AWS are using more than two frameworks in their everyday machine learning work. We will always make sure that all the frameworks you care about are supported equally well, so you have the right tool for the right job. The one constant we’ve seen in the very fluid world of machine learning is change, and I’m pretty confident that in the next couple of years there will be other frameworks to care about, too, which we will support as well.

Now, we’ve been talking about training, and AWS has made the ability to build, train, and deploy models much easier than ever before. But when you think about machine learning, there are two big pieces: there’s the training, and then there’s the making of predictions, which is commonly called inference. If you think about the cost equation, even though people spend a lot of time talking about training, in part because of the stage we’re at in machine learning right now, the vast majority of the cost, probably about 90 percent of it for big, scalable machine learning workloads in production, is inference. And it makes sense. Take an application like Alexa: we train a very substantial model a few times a week, but think about how many questions, and then predictions, or inferences, and answers are happening every minute across the world. That’s inference, and about 90 percent of the cost in machine learning is on the inference side, yet there hasn’t been much optimization and help there; it’s largely been focused on the training side. There are two things about inference that make it complicated, inefficient, and more costly than people want. The first is that it is not one-size-fits-all: if you’re looking at something
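As a quick aside, the GPU-utilization improvement described a moment ago, from 65 to 90 percent on a 256-GPU TensorFlow job, translates directly into cost. A back-of-the-envelope sketch, where the arithmetic is mine and only the two utilization figures come from the talk:

```python
def effective_gpus(num_gpus, utilization):
    """How many GPUs' worth of useful work you actually get: you pay for
    all of them, but only the utilized fraction is doing work."""
    return num_gpus * utilization


# The 256-GPU TensorFlow workload from the talk:
before = effective_gpus(256, 0.65)  # stock TensorFlow, ~65% utilization
after = effective_gpus(256, 0.90)   # AWS-optimized TensorFlow, ~90% utilization
extra_work = after / before         # ~1.38x more useful work for the same spend
```

In other words, the same cluster bill buys roughly 38 percent more training throughput after the optimization, which is where the "huge improvement in the cost equation" comes from.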
like a simple classifier, it only takes a few TOPS, but if you’re taking an advanced computer vision algorithm, that often takes hundreds of TOPS, so they’re not all the same. The second problem is that inference runs best on GPUs, GPUs are really expensive, and you have to figure out how much GPU to provision. It’s like the old infrastructure days before AWS: you’d have to guess how much to provision, you’re always going to provision for the peak, because you don’t want to be stalled or impact the customer experience, and so you’re sitting on all this wasted money, and people don’t like having no elasticity. It took a long time before AWS came around; fortunately for all of you, it won’t take so long to get that elasticity in GPUs. I’m excited to announce the launch of Amazon Elastic Inference, which will let you add GPU acceleration to any Amazon EC2 instance, for faster inference at much lower cost, with up to 75 percent savings.

Here’s how it works. Typically you’re running inference, like you’re running training, on these big, beefy P3 instances, and what we typically see is that the average utilization of those P3 instances’ GPUs is about 10 to 30 percent, which is pretty wasteful. With Elastic Inference you don’t have to waste all that cost and all that GPU. Instead, you can take any EC2 instance, in this example I’ll use an M5 large, and provision Elastic Inference for it right when you create the instance. You create the instance, you say you want Elastic Inference, and you can start at one teraflop or go up to 32 teraflops. You effectively deploy it in a way where you attach that Elastic Inference accelerator to an instance much like you attach an EBS volume, and it lives in the same VPC. What we’ve done is that Elastic Inference can detect when one of the major frameworks is running on that EC2 instance, and it looks at the pieces of the neural network that would most benefit from acceleration and moves those over to Elastic Inference, to give you the performance and acceleration you want, and you only need to provision the amount of Elastic Inference that you want.

Let me give you an idea of how this changes the cost. Look again at the ResNet-50 workload we were talking about earlier, running 360,000 images an hour: using just a general M5 instance with the smallest piece of Elastic Inference you can provision, that costs 22 cents an hour, which is about 75 percent less than what you’d pay on a P2 or P3 instance. That is huge, huge savings, so this is a pretty significant game-changer in being able to run inference much more cost-effectively.

Now, we have customers with large machine learning models in production, spitting out lots of inferences, who say, actually, I can consume entire GPUs and I want to, or the latency requirement is such that I need it in the hardware, and the way you solve something like that is in a chip. We have a fair bit of experience at this point with chips: we acquired a company in Israel called Annapurna a few years ago, and they have been designing all kinds of cards and NICs for us to do EC2 network acceleration, which has completely changed the performance for our customers in EC2. Just a couple of days ago, in our Monday Night Live presentation, in Peter’s keynote, you heard that they built a purpose-built chip based on the Arm architecture, called Graviton, that lets you run scale-out workloads, a lot of generalized workloads, and save up to 45 percent in the process. So we asked that team if they would think about trying to build a chip, a processor, in this space, but focused on changing the big driver of cost for our customers doing machine learning, which is inference. I’m excited to announce, coming next year, a new processor called AWS
Inferentia, which is a high-performance machine learning inference chip, custom designed by AWS. Inferentia will be a very high-throughput, low-latency, sustained-performance, very cost-effective processor for inference. You’ll be able to have hundreds of TOPS on each of those chips, and you can band them together to get thousands of TOPS if you want. It’ll support multiple data types, like INT8 or FP16 with mixed precision, it’ll support all the major frameworks, TensorFlow, MXNet, and PyTorch, and it’ll be available for you on all the EC2 instance types as well as in SageMaker, and you’ll be able to use it with Elastic Inference. We think that, on top of the 75 percent savings you can get with Elastic Inference, if you layer Inferentia on top of it, it’s another 10x improvement in cost. So these two launches across inference are a big game-changer for our customers.

Let’s talk about the middle layer of the stack. That bottom layer is for expert machine learning practitioners who are comfortable at the infrastructure and framework layer, but the reality is that there just aren’t that many of those people in the world. More and more are being trained at universities, but there still aren’t that many, and they mostly hang out at the big tech companies. If we want machine learning to be used much more expansively, to move from a little more conversation to a little more action, please, in most enterprises, we have got to make it much easier and more accessible for everyday developers and data scientists, and that’s why we launched SageMaker last year at re:Invent. Just as a quick recap, SageMaker is a managed machine learning service that makes it much easier to build, train, tune, and deploy machine learning models. The first thing you have to do is visualize your data and see what’s interesting, and you have a hosted Jupyter notebook right at your fingertips. Then we have all these algorithms and frameworks built into SageMaker to make it easy for you: we have separate teams who’ve taken the most popular algorithms and worked to make them 10x more performant and cost-effective than you’ll find anywhere else, and you can just deploy them, with all the frameworks and drivers taken care of under the covers for you. You can of course choose whatever framework you want; we have native integrations with PyTorch, MXNet, and TensorFlow, and you can also bring any algorithm you want. Then we make it much easier to train your models: you have one-click setup of a cluster, we’ll auto-scale it for you, we’ll train the model, and we’ll tear the cluster down for you when you’re done. We give you the ability to do automatic tuning with hyperparameter optimization, which makes it much faster to train your models. And then you won’t find anyplace that makes it close to as simple to deploy those models at scale in production: you have one-click deployment across multiple Availability Zones for fault tolerance, we’ll auto-scale it and manage it for you, and going forward we’ll maintain it, not just the auto-scaling, but all the security patches, the maintenance, and things of that sort. So it has never been easier to build a machine learning model than it is today, and it’s much more approachable for everyday builders, everyday developers, and everyday data scientists. It’s part of why you see so many customers using SageMaker, over 10,000 customers in just a year, and a large number of them standardizing on it. You heard GE Healthcare on Monday night say they’re running their machine learning on it, and there’s Intuit, Cox Automotive, Formula 1, the NFL, Korean Air, Expedia, Major League Baseball, Edmunds, Ryanair, Shutterfly, GoDaddy, and FICO. It’s incredible, after just a year, how many companies are choosing to standardize their machine learning on SageMaker and AWS, really remarkable. Now, part of the reason people are so excited about SageMaker is
the speed with which it allows everyday developers to actually get started and use machine learning, and speed has been a recurring theme for AWS over the years: trying to give you speed, to get what you want done. Somebody who knows a little bit about speed is the next speaker I’m going to bring to the stage. He started off as a mechanic and rose, and built his way, all the way up to managing Formula 1 teams that won 22 championships. He’s currently the Managing Director of Motorsports for Formula 1, and it’s my privilege to bring to the stage Ross Brawn. [Music]

Thank you, Andy. It’s great to be here and hopefully give you an insight into my world of Formula 1 and the role that AWS is playing. I like to think of Formula 1 as a gladiatorial sport between the drivers and a virtual war between the engineers and technicians to produce the best car. It’s a complete team contest; neither can win without the other. These are the fastest racing cars on the planet, 230 miles an hour, but what’s really impressive is that we pull 5g in cornering and braking. It’s a big business and a big sport, and it’s grown: we race in 21 different countries, we have more than half a billion fans, and we’re generating multi-billion-dollar revenues for both the teams and our business. It’s about going fast in every part of the competition, and the races aren’t only won out on the track: a fast pit stop can lose or win a race, and every millisecond counts. So take a look at this, but look carefully or you might miss it. One point six seconds to change four wheels and tires, and you can imagine the training that goes into that; that’s just typical of Formula 1. It’s also a contest of innovative minds, the virtual war that I mentioned: every team has hundreds of engineers all trying to produce the best car, the best aerodynamics, the best chassis, the best engine, and undoubtedly standing still is going backwards in Formula 1. To my mind, these are the most technologically advanced racing cars in the world.

After three decades of working in teams, I joined Formula 1 Motorsports in 2016 with a mission: make sure we have even better racing cars, make sure we have the best action on the track, and develop ways that our fans can engage with the sport like never before. We’re the most data-rich sport in the world, and data fuels our performance. Each car has more than 120 sensors producing thousands of streams of data, and more than a million data points per second are transmitted between the car and the pits during a race. We chose AWS to be our partner to help unlock all of this data for the benefit of the lifeblood of our sport, the fans. We’ve focused on two initiatives so far: using high-performance computing to develop better racing cars, and using machine learning to increase fan engagement, and I’d like to take a closer look at both.

Formula 1 helps to develop the rules and regulations of the sport, and we’ve now put a team together specifically to do that task; amazingly, it had never been done before. Currently the cars suffer badly when they’re following each other: the airflow from the car in front disturbs the car behind too much, and we want to make the aerodynamics of the cars more benign and much less sensitive, to encourage wheel-to-wheel action. So we’re developing designs using two cars, one following another, and we’re doing it mainly through CFD. As you can imagine, this is a massively complex problem that has never been done before, and AWS high-performance computing is enabling us to experiment faster and better than ever.

We also want to give our fans a better insight into what’s happening on the track. Using Amazon SageMaker, we can build models that help us understand how a car is being driven: is the driver attacking, or is he in conservation mode? We’re training machine learning models using the huge amount of data that we have in Formula 1, and we’re using those models to make predictions about what’s going to happen in the race. [Music]

We call these F1 Insights, and for the 2018 season we’ve started the process. We’re digging deeper to show you where the performance is coming from: when is a car faster, and why is it faster? I’m about to show you an example from a recent race in Mexico. On the screen you’ll see a graphic that compares two drivers, Ricciardo and Hartley. Look at the graphic on the lower right side: you can compare Ricciardo’s corner speed as he follows Hartley, and this comparison helps a fan see where and how Ricciardo is gaining time, and it’s not always on the straights. We’re not stopping there. For next season we’re expanding F1 Insights for our viewers by further integrating the telemetry data, such as the car position, the tire condition, even the weather, and we can use SageMaker to predict car performance, pit stops, and race strategy. I’m going to give you a world-first preview of some prospective and exciting new AI integrations into next year’s F1 TV broadcast, and I’ve selected three use cases to give you an insight.

Look at the box on the right: we know that somebody’s in trouble, his rear tires are overheating. But we can look at the history of the tires, how they’ve worked, and where he is in the race, and machine learning can help us apply a proper analysis of the situation and bring that information to the fans, helping them understand whether their guy is in trouble or whether he can manage the situation. These are insights the teams have always had; we’re going to bring them out to the fans and show them what’s happening.

Here’s another fascinating element for our fans: overtaking. Wheel-to-wheel racing is the essence and a critical aspect of the sport, and now, with machine learning, using live data and historical data, we can make predictions about what’s going to happen. The graphic on the right shows what we expect is going to happen in this event. What’s great about this is that the teams don’t have all this data; we have the data from both cars, and we can make this comparison, and that’s never been done before.

The pit stop is a major strategic element of the race; in F1, one stop is mandatory, and stopping at the right time and fitting the right tire can win or lose a race. We’re going to take all the data and give the fans an insight into why they stopped and when they stopped: did the team and driver make the right call? The info box you see on the bottom is giving the fans that insight, which we can build using machine learning.

Further down the road, what’s really exciting is that we’re going to investigate the influence of the tracks and the racing formats on the quality of the racing. Can we create tracks that achieve better racing and better overtaking? Can we build models to allow us to do that? Can we change the format of racing to make it more exciting and less predictable, for instance, what happens if we change the formation of the starting grid so that instead of being spread out it’s bunched up? We believe that with machine learning, AWS is enabling us to do these things. I hope I’ve been able to give you an F1 insight into the work we’re doing. I’m incredibly excited about this new phase, and the partnership with AWS is bringing so many opportunities. I hope you’ve been able to get an insight into what we’re doing; thank you for your time. [Applause] [Music]

Thank you, Ross. Ross is a legend in motorsports, and it is a privilege and an honor for us to be working and partnering so closely with F1; it’s super cool what they’re doing on top of the platform and on top of SageMaker. So, as I mentioned, a lot of customers are excited about, and having success with, SageMaker. But what happens when you take a group of people who have been constrained for so long and give them hope, and the ability to get started, is that it whets their appetite, and they have lots of things they want us to build, which, by the way, is great; keep them coming. So what else are people asking us to do with SageMaker? Well, people say, it’s
awesome that you give us hosted notebooks where we can visualize what data is interesting, but a lot of times I can't even get started, because I don't even know what the objects are; I can't label anything. And if you think about it, you typically have to label what objects are to train the models, so they know what they are, to actually get results, especially when we're talking about things like computer vision or speech or language. So if you take this example on the screen, you need to know what's a stop sign, what's a traffic light, or, importantly, what's a pedestrian. The way this works in things like video and training is that it requires thousands of hours of these video recordings, consisting of hundreds of millions of video frames, and you have to label everything. And the way this is done today, when it is done, is that it's typically distributed across thousands of human beings, which is obviously expensive and slow and hard to achieve. And if those humans make mistakes, you have the wrong labels, and you're training models the wrong way. But typically, just because it's so hard to get so many people to do it, most companies just don't bother, and that makes it much, much harder to really build these types of computer vision models. So we wanted to help with that in SageMaker; we view that as the whole point: enabling everyday developers and data scientists to build these models. So I'm excited to bring to you the launch of Amazon SageMaker Ground Truth, which is a highly accurate training data set labeling service. This should reduce the cost of labeling, for those that even engage in doing it, by up to 70%, and it's really interesting how it works. What you do is you take all the data that you want help labeling and you point Ground Truth at those S3 buckets, and then you decide: do you want Ground Truth to try to auto-label, or do you want it to use human labelers? In the case where you choose auto-labeling, you also specify the confidence threshold that you want the model to be over for each of those labels; if it's not, the item is sent to human labelers. And you can choose from three different big pools of human labelers: either the five-hundred-thousand-plus global Mechanical Turk workers, who, by the way, do a lot of this work every day for companies who use Mechanical Turk; or a number of third parties that we have vetted who do this type of work, if you want some kind of SLA on performance; or a third group, if you just want private workers, you know, people at your company, you can choose that. Then, once you've chosen auto-labeling, what we do is we pull a small and diverse set of those objects from S3 and we send them to the human labelers of your choice, and we start building a private labeling model for you, using something called active learning, that's constantly learning from all the inputs and adjusting the algorithm. And then what we'll do is we'll take that auto-labeling private algorithm that we've built for you and we'll run the rest of your objects through it; those that we can auto-label at the confidence level you specified or higher are done, and those that can't, we send out to human labelers. Now, one of the cool things, too, is that every label that's sent to humans, when it comes back, feeds into that active learning algorithm, so it keeps getting better and better, and as you have future objects to label, you can do more and more through auto-labeling. Of course, if you want to choose one hundred percent human labelers, you can do that too; we have three very large pools of eager human labelers who'd like to do that. So this is a total game changer in being able to label your data, so you can build those types of models that before were really difficult, or nearly impossible, and too expensive to do. So that's Ground Truth. Now, as I mentioned earlier, when I was talking about how SageMaker worked, I mentioned that we have all these algorithms built into SageMaker, where all you effectively have to do is click and deploy them, and the frameworks and the drivers just work for you. And since we launched SageMaker, we've added about 40 percent more of these algorithms that are built into SageMaker. But there are so many new algorithms coming out all the time, from academia, from machine learning expert practitioners, from companies, and our builders say, I'd like more of these things that are just built into SageMaker, where I can choose them and they just run. And so we thought about this a bit, and what we decided to do was launch for you a new AWS Marketplace for Machine Learning, with over 150 algorithms and models from the get-go that can be applied directly to SageMaker. The Marketplace for Machine Learning works very much like the very broadly used and very popular AWS Marketplace, where you have a bunch of categories, things like speaker identification, speech recognition, video classification, and handwriting recognition, a large number of these categories. You browse and choose what you want, you subscribe with a single click, and it's available to you through SageMaker, where you can deploy it and just run it, because we've set up the frameworks and the drivers underneath to just run them, just like the other algorithms in SageMaker. And if you're a seller, it's also really simple: you just package your algorithm and your model and configuration, and then you register it with the Marketplace, and you automatically validate the algorithm or model with a test on SageMaker, and if it checks out, in a self-service fashion, that algorithm shows up in the Marketplace that same day. This is a huge game changer, not just for consumers of machine learning algorithms, but also for the sellers who want to actually make some money from the things they've built, or get more usage out of what they're doing. Now, if you think about algorithms, they generally fall into two broad categories, and you can see these on these
axes: on the x-axis is the amount of training data required, and on the y-axis is the sophistication of the machine learning models. If you look at supervised learning models, in the top right, these are typically ones that require a bunch of labeling and ground truth, and those labels will train the algorithm. If you think about Rekognition as an example, which is our computer vision service in AWS, it's trained on looking at millions and millions of objects; the same thing is true with Polly, where we've trained on listening to thousands and thousands of voices. In the lower left quadrant is unsupervised learning, and these approaches are usually used for things like anomaly detection, where they're trying to find hidden structure in the unlabeled data; so if you can see anomalies in some of that data, you want to know about them so you can take action on them. And this is no worse a methodology than supervised learning; it's complementary, it's just different; it doesn't require training or labeling data. Over the last year or so, a third methodology has emerged that's also complementary, called reinforcement learning. If you think about what the superpower of reinforcement learning is, it's that you can build really sophisticated, complex machine learning models with no training data. What you do here is you give it a reward function, or an outcome that you want, and then the reinforcement learning algorithm just iterates and iterates through simulations until it finds the right answer. And so if you're building a model where there is a right answer, like, is this a stop sign, or is this a pedestrian, you need to have labeling and training data and a supervised learning model. But for problems where there is no right answer, where you don't know the right answer, reinforcement learning is incredibly valuable. Just think about it: think about if you're trying to optimize your supply chain; think about if you're trying to figure out the best treatment for cancer
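That reward-and-iterate loop can be sketched in a few lines of plain Python. This is a toy tabular Q-learning example on a one-dimensional corridor, not the SageMaker RL API; every name in it is illustrative. The agent is never told the "right answer," only a reward for reaching the goal, and it discovers the optimal behavior by repeated simulation:

```python
import random

def train_corridor(n_states=6, episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Toy tabular Q-learning: the agent starts at state 0 and is rewarded
    only for reaching the rightmost state. No labeled training data, just
    a reward function and many iterations through the simulation."""
    actions = (-1, +1)                       # step left or step right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), n_states - 1)
            reward = 1.0 if s_next == n_states - 1 else 0.0   # the reward function
            best_next = max(q[(s_next, b)] for b in actions)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s_next
    # the greedy policy the agent settled on, one action per non-terminal state
    return [max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)]
```

After a couple of thousand simulated episodes, the greedy policy converges to "always step right" from every state; DeepRacer's simulator applies the same idea at vastly larger scale, with neural networks instead of a lookup table.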
think about a game. Let me give you a real example. If you spent some of your youth like I did playing Pac-Man, and not very well, I might mention, you know that there is no right way to clear a board in Pac-Man, although I'm sure some of you will argue there is, but there is no right way. And so with reinforcement learning, what you can do is define the reward function, which is to clear the board without getting eaten by the monsters, and all the reinforcement learning model needs to know is that you can go up or down or left or right, and then it will iterate and iterate until it finds the optimal way to do it. And so, if you look at reinforcement learning and you think of the promise, think about how many problems in the world exist where there is no right answer, where you actually just need the ability to simulate and iterate; it can solve so many problems. But reinforcement learning is largely inaccessible to most mortals: it's not supported anywhere, it's complicated to use, and there are no tools that make it easy to use. And if you think about the history of AWS, from the very start, we have always had as our mission that we want to enable every builder, large or small, in their dorm room or at a big company, the ability to have the same access to services and cost structure as the largest companies in the world. And so we want you to be able to take advantage of reinforcement learning. So I'm excited to announce the launch of Amazon SageMaker RL, which is a new set of machine learning capabilities in SageMaker that let you build, train, and deploy reinforcement learning models. This will provide you a number of reinforcement learning models right in SageMaker, integrated the same way as all those other algorithms, where you don't have to worry about the frameworks and the drivers underneath; it's fully managed through SageMaker. It'll support all the major frameworks, so TensorFlow and MXNet and PyTorch, but also the reinforcement learning frameworks
like Intel Coach or Ray RL. We built both a 2D and a 3D set of simulation environments, and anything that's compatible with the OpenAI Gym protocol you'll be able to use to actually do your iterating and to train the model. We've also integrated SageMaker RL with the simulation environments both in RoboMaker, which is our new robotics service that we launched at Midnight Madness on Sunday night, and with Sumerian, which is our AR and VR service, so you have lots of different types of simulation environments to optimize your RL model. And then, because it's new, we've got a bunch of notebooks and tutorials to make it easier for you to use. So we're incredibly excited about bringing this capability, in a much more accessible way, to our builders. Now, when we thought about this, we ended up having a pretty similar conversation as we had last year. Last year, as we were building SageMaker and making machine learning much more approachable for everyday builders, we said, well, it's great to learn about things, but what are some things we can provide that give people hands-on experience? Because the best way of learning is to actually try it. And so we thought about the same thing here in reinforcement learning. We said, what can we do to allow people to get real hands-on? We had a long brainstorm about this, and I have two related announcements to make that I think you'll find useful and interesting. The first is the launch of AWS DeepRacer, which is a fully autonomous 1/18th-scale race car driven by reinforcement learning. These are actually pretty cool vehicles: they're about the size of a shoebox, and they have an HD video camera mounted on top to have a view of the road. It's got a dual-core Intel Atom processor; it's got all-wheel drive mounted on a monster truck chassis; it's got suspension; it has two batteries, one for the compute and one to drive; it's got an accelerometer for measuring change in speed and a gyroscope, too, for detection of direction and orientation. And this car will be something that I think you'll enjoy using, and that you'll actually be able to experience reinforcement learning with. We have a bunch of them here at re:Invent, and you'll also be able to order one from Amazon. Now, here's the way DeepRacer will work. You'll have a fully configured 3D physics simulator available in the cloud. It'll have a virtual track and a virtual car that are ready for you; you can start racing right now, right from the get-go. All you need to do is supply a simple or a complex reward function, which you can do with a Python script if you want, and then you send it to SageMaker, and SageMaker will start training your reinforcement learning algorithm. You can watch it train and check in on where it is at any point. Of course, you're able to tweak it and make it more complicated and more performant, which I'm sure many of you will do. And then, once that reinforcement learning algorithm is actually trained, you can take it through SageMaker and deploy it to your physical DeepRacer car. This is pretty cool. Now, one of the things that was really interesting, as we watched a lot of our internal teams play around with DeepRacer, first in the virtual sense and then ultimately with the car, was that people started forming races, and they started competing against each other, and then they started building teams and factions. At first it seemed kind of funny, but it started getting pretty competitive; we had to remind people that we were actually trying to build this and launch this for customers. But it was actually kind of interesting and educational for us that people got so into building the optimal RL algorithm to be successful and be first in racing. And so we thought about whether there might be something broader here, and I'm excited to announce the launch of a new sports league, the AWS DeepRacer League. This is the world's first global autonomous racing league, open to everyone, and let me tell you how it's gonna
work. We will have 20 DeepRacer League races at AWS Summits around the world in 2019, and you can go to as many Summits as you want to participate in the DeepRacer championships there. The winner of every DeepRacer League race, plus the top ten point getters from those races, qualify for the DeepRacer Championship Cup, which will be here in Las Vegas at re:Invent in 2019. We will also have a number of virtual races, where the winners and the top ten point getters will also be invited to participate in the Championship Cup for DeepRacer at re:Invent next year. Now, this year, we're doing something a little bit different, because we just announced this today and re:Invent is over in just a couple of days: we're gonna have an accelerated version of the DeepRacer Championship Cup. So you should consider yourselves all under starter's orders, and that's because the very first DeepRacer championship time trials will start 30 minutes after this keynote is done, on a track that we set up at the MGM Grand. All of you can build simple reinforcement learning algorithms, deploy them to cars, which we will have there at the MGM track, and run laps. We will post those laps, and the top three finishers between 30 minutes after this keynote and 10:30 tonight will participate in the finals of the Championship Cup at 8:00 a.m. tomorrow morning, here in this keynote hall, before Werner's keynote. So, very exciting. [Applause] I'm very curious who's gonna win. To share with you how this all works, as well as provide a demo, it is my pleasure to welcome to the stage, as I do every year, the one and only, the inimitable Dr. Matt Wood. [Music] [Applause] [Music] Good morning, everybody, and thank you, Andy. AWS DeepRacer is a fully autonomous 1/18th-scale race car designed to get you rolling with reinforcement learning. You can train your racing models in the cloud and race them in a physical car against other developers in the AWS DeepRacer League. You can train your DeepRacer models using a simulated car and track running in the cloud; you can see here the track and the simulated view from the car's camera. This is a full 3D physics model of how the car interacts with its environment, all the way down to the tire pressure and the friction on the texture of the track. DeepRacer learns by experimenting in the simulator. Under the hood, there are two neural networks being trained. The first detects features on the track; the heat map overlay shows what the reinforcement learning model is paying attention to, with red being the most important in making driving decisions. The second is the policy network; this is what makes decisions about when to steer left, steer right, or even accelerate. Once it's learned the basics, the algorithm starts to incorporate your reward function. This is just some Python code which tells the algorithm what behavior to reward while optimizing for the fastest lap time. Finding the right rewards, like staying on the track and close to the centerline, is where the skill is in autonomous racing. Now, I won't spoil all the fun and tell you all the tricks to rack up a fast lap time, except to say that speed is not as important as you might think. So let's take a look at this on our test track. We have here the DeepRacer on the start line of our official 2018 racing season track. It has a dotted line in the center of the track to help guide the car, but DeepRacer will navigate around this track completely autonomously. The view here is from the camera on the car, and we've added the same heat map overlay to show where the model is looking as it's driving. And we're going to run two models
on the same car, each of which has spent a different amount of time training in the simulator and uses a different reward function. What we'll see is that with more track time and better rewards, the algorithm can learn more and more sophisticated driving behavior. Let's start up the first model. This is like a baby model; it's only spent about 40 minutes in the simulator. You can see that it's very erratic, and it's not very fast. The colors flickering in the heat map show that the model hasn't yet learned what to pay attention to; it just hasn't had enough time on the track. The model also uses a very simplistic scoring function. Okay, so that was the baby model; now let's see what the pro racer version looks like. Our second model has been trained with a more sophisticated reward function, which rewards correct track positioning and cornering, with several hours of simulated track time to learn from. Let's fire her up. You can see this is much less erratic on the track; DeepRacer is now driving more smoothly and deliberately. If you look at the heat map in the camera feed, you can see that the attention is much more focused. The reinforcement learning algorithm has been able to discover human-like driving behaviors, such as taking wide corners and aiming at the apex of the curve, which all result in faster lap times. Not too shabby. So before we move on, let's give a quick round of applause to our autonomous racer. So how can you get started with DeepRacer? Well, starting today, you can pre-order DeepRacer on Amazon. It's priced at $399, but for a limited time we're making it available for just $249. The first DeepRacer League kicks off in the MGM Grand Garden Arena right after this keynote, and we have tracks and cars ready and waiting for you to race. The fastest lap each hour will receive a free DeepRacer car, and you can win prizes throughout the conference, including a chance to enter the Championship Cup final, which is happening here tomorrow. We're so excited, and we'll see you on the track.
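As a concrete illustration of the kind of reward function being described, here is a minimal centerline-following sketch in Python. The parameter names (`track_width`, `distance_from_center`, `all_wheels_on_track`) follow the DeepRacer console's documented input dictionary, but treat the exact interface and the band thresholds here as illustrative, not as tuned values for a real track:

```python
def reward_function(params):
    """Toy DeepRacer-style reward: pay more for staying near the centerline.
    The keys read from `params` follow the documented DeepRacer input
    dictionary; the band thresholds are illustrative, not tuned."""
    if not params.get("all_wheels_on_track", True):
        return 1e-3                      # near-zero reward for leaving the track
    track_width = params["track_width"]
    distance = params["distance_from_center"]
    # three widening bands around the centerline, rewarded progressively less
    if distance <= 0.1 * track_width:
        return 1.0
    elif distance <= 0.25 * track_width:
        return 0.5
    elif distance <= 0.5 * track_width:
        return 0.1
    return 1e-3                          # effectively off the track
```

Because training maximizes cumulative reward over a lap, small per-step shaping like this is enough to steer the policy toward smooth, centered driving without ever specifying "the right answer" for any individual frame.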
With that, I'll hand it back to Andy. Thanks a lot. [Applause] [Music] [Applause] [Music] Thank you, Dr. Wood, always illuminating, appreciate it. So let's talk about the top layer of the machine learning stack. We talked about the bottom layer for expert machine learning practitioners, and the middle layer for everyday developers and data scientists. This top layer is for companies and builders who don't want to mess with the models at all; they just want to plug into built models, effectively, through an API. And we have built a number of these services over the last couple of years. So if you want to know, here's an object, what's in the object, or here's a video, tell me what's in the video, or is this a face, or does this face match a set of faces that I, the customer, have given you, those are all part of our computer vision service called Amazon Rekognition. Or some customers say, here's text, turn it to speech, and that's what we built Polly for. Or they say, here's all this audio, transcribe it to text, and that's why we built Amazon Transcribe. Here's this transcribed text, translate it into lots of languages, and that's what Amazon Translate does. Here are these corpora of transcribed, translated text; tell me what the heck is in them so I don't have to look or have humans do it, and that's what you use natural language processing, or Amazon Comprehend, for. So, a broad set of these top-layer services that mimic human cognition, what people often call AI. But if you think about it, so much of the world's data is still locked away in documents, and dealing with documents is actually painful. What most people do is one of a few things, and they all have issues. Either they try to use thousands of humans who sit there with these documents and a terminal and type it in, which is obviously slow and expensive and doesn't get you nearly the amount of documents you want; or some people use OCR, but the problem is that OCR is kind of a dumb protocol
it just takes all the language and kind of ingests it as one blob; sometimes you get text that you can use in a digital format, and sometimes you don't. Or sometimes people build templates for certain forms, but the problem is there is an endless number of these forms, and the forms themselves change every few years, so they're very rigid and very fragile. And so our customers are frustrated that they can't get more of all this text and data that are in documents into the cloud, so they can actually do machine learning on top of it. So we worked with our customers, we thought about what might solve these problems, and I'm excited to announce the launch of Amazon Textract, which is an OCR-plus-plus service to easily extract text and data from virtually any document, no machine learning experience required. This is important: you don't need to have any machine learning experience to be able to use Textract. Let me tell you how it generally works. Here's a pretty typical document; it's got a couple of columns, and it's got a table in the middle of the left column. When you use OCR, it just basically captures all that information in a row, and so what you end up with is the gobbledygook you see in that box there, which is completely useless. That's typically what happens. Now let's look at what Textract does. Textract is intelligent. Textract is able to tell that there are two columns here, so when you actually get the data, the language reads like it's supposed to be read. Textract is able to identify that there's a table there, and is able to lay out for you what that table should look like, so you can actually read and use that data in whatever you're trying to do on the analytics and machine learning side. That's a very different equation. Take forms. What happens with most of these forms is that most OCR can't really read the forms or actually make them coherent at all, or sometimes these templates will kind of effectively memorize: in this box is this piece of data. Well, first of all, as I said, there are thousands of different forms, and Textract is going to work across legal forms and financial forms and tax forms and healthcare forms, and we keep adding more and more of these. But also, these forms will change every couple of years, and when they do, something that you thought was a Social Security number in this box turns out now not to be a Social Security number; it's a date of birth. What we have built Textract to do is to recognize what certain data items or objects are, so it's able to tell: this set of characters is a Social Security number, this set of characters is a date of birth, this set of characters is an address. So not only can we apply it to many more forms, but also, if those forms change, Textract doesn't miss a beat. So that is a pretty significant change in your capability in being able to extract and digitally use the data that's in documents. Now, in the case of something like extracting data from documents, there is a right answer, and there is a master algorithm, if you can build it. But there are a bunch of other services where there is no master algorithm, and personalization is a good example of that. There is no right answer, although my daughter Emma thinks there is, to what is the best artist. To know what recommendations to make, you have to know what that person likes: what songs they listen to, what artists they buy, what albums they have, what their other interests are, what like customers who have bought similar things have liked. And the same is true for film recommendations and article recommendations, or any product recommendation of any sort. When we launched the slew of AI services that we did at re:Invent last year, and we were talking to customers in the early part of this year, we said, what else can we do for you that would be helpful? They all said, why don't you provide as models for us some of the things that Amazon's had to get good at over the last 20 years, that
you are pioneers in. And one of those things that people asked for over and over again is the next service I'm excited to announce, which is Amazon Personalize: real-time personalization and recommendations, based on the same technology used at Amazon, no machine learning experience required. At a high level, the way this works is that you set up a web or a mobile SDK that you attach to your app, and you start streaming data in. You tell us things like views and conversions and what products people were buying, and you give us the list of inventory across the array of items you want us to make recommendations on. You can give us demographic information, because that often will help the model. By the way, these models are private models; they're only your models; nobody else has access to them. And then Amazon Personalize uses all the techniques and algorithms we've built over the last 20 years of doing personalization in our retail business at Amazon, and effectively gives you recommendations via an API. Now, the way this works behind the scenes is that, as you're streaming that data in, we set up an EMR cluster and we inspect the data, and we look for the interesting parts of the data that give us some kind of predictor. We'll also deal with the sparse data at the long tail of your catalog, which Amazon's had a lot of experience doing. And then we'll select from up to six algorithms that we have built ourselves over the years to do personalization in our retail business, and we'll mix and match; we'll set up hyperparameters to train on the data, we'll train the models, and then we'll host the models for you. A lot of the recommendations that we see most frequently we'll keep in a cache for you, and then you get them on the other side of an API. If this model, by the way, and these steps look a lot like SageMaker, it's because it is similar to SageMaker, except at this layer of the stack we're taking care of all those steps for you, so all you have to submit are the inputs, and
out of it you get the outputs, which are recommendations. So this is exciting; I think this will help a lot of companies. Most of us have the need to make recommendations to our customers along a lot of different dimensions. Now, we built personalization at Amazon over the last 20 years because one of the core tenets of our retail business is that, with no physical boundaries, we were gonna have millions and millions of items available to customers, and we worried that if we didn't find a way to give people signal through the noise and make things personally relevant, the catalog size might just be overwhelming for people. So we built personalization. Another good example, where necessity was the mother of invention for us, is that we have to order a lot of items in our business. Think about the millions of SKUs we have in our retail business. We have the same issue in AWS: think about all the SKUs we have across all the Availability Zones and Regions, where having the wrong capacity is a problem. And forecasting is actually pretty difficult. What most companies do is they have a data point or two that they'll base that forecast on. If you guess too low, it turns out that you run out of inventory, and you provide a bad customer experience and leave money on the table. If you guess too high, you sit on wasted product, and often obsolescence, and you waste a lot of money, and sometimes you have to make it up by charging people higher prices. And the problem in forecasting, which is pretty complicated if you have any kind of business at scale, is that it's not usually one or two data points that impact the forecast; it's typically lots of these data points. If you think about retail, it's things like weather, things like seasonality, things like shipping times, things like who the author or the artist is, or things like did a bad review get written. There are tens, sometimes hundreds, of these variables that you need to analyze, and it's pretty complicated to do. And again, our customers say, look,
you do this at scale, you've been doing it for a long time, you've built a lot of these models; can you find a way to make the models available to us? And so I'm excited to announce another service, which is called Amazon Forecast: accurate time series forecasting based on the same technology used at Amazon to do our forecasting, and again, no machine learning experience required. Forecast works very much the same way that Personalize does. You start streaming historical data into us, all kinds of information on supply chain and inventory levels, and then you'll give us all the possible variables that could have an impact on the forecast, what we call causals. Then Forecast uses the same algorithms that we've built to do forecasting at Amazon over the last 20 years, and gives you time series forecasts out of an API on the other end. What's happening behind the scenes is, again, very, very similar: we do the analytics, we look for the signal, we'll then choose from eight proprietary algorithms that we've built to do our forecasts over the years, we'll select the hyperparameters, train the models, host the models, cache the ones that you need, and then spit out for you time series forecasts. And what it means for you is that you can give us any historical time series for a forecast. We've made it so you can integrate Amazon Forecast with your more traditional supply chain software, like SAP and Oracle Supply Chain. It also integrates with Amazon Timestream, our time series database from earlier, if you want to use the data in there as part of your forecast. With just three clicks, you can actually give us the information and get a forecast, so it's super simple. And when we benchmarked with customers in the private beta, and ourselves, it's providing up to 50% more accurate forecasts than what people were doing on their own before, at one-tenth of the cost of traditional supply chain software. So these are a set of new AI services that should allow you to be even more
effective in your everyday activities, in building a better customer experience and a better business for your customers. So, in the "a little less conversation, a little more action, please" realm, I think this is probably the second year in a row where you've seen just this plethora of services that we've launched for builders at every layer of that stack, and it's never been easier, faster, and more cost-effective for everyday builders to build, train, and deploy machine learning models than it is today. Now, as we've said a couple of times, it's not just machine learning models and services that allow you to do machine learning at scale the way you need to. They're really useful, and there's a huge number of these in AWS at this point, but to really do machine learning the right way, it starts with having the right secure, operationally performant, fully featured, cost-effective data lake or data store, with the right access controls on some of your most valuable data, and the broadest array of analytics services, and that broad array of machine learning services at every layer of the stack. There's nobody that, across those dimensions, has a set of capabilities like AWS, which is why the vast majority of companies are using AWS for machine learning. I'm gonna switch gears again, and I'm going to talk about the fifth sentiment that we hear from builders. Have you ever had something that you know you need to do, you know it's important to do, you know it's gonna be hard to do, you know it's gonna have challenges, but you know you need to do it, because the longer you wait, the harder it is? That's what a lot of enterprises are feeling right now as they think about this transition from on-premises infrastructure to the cloud. They know that there's work required to move from on-premises to the cloud, and they know there are risks associated with it, and there are things that they don't know about yet. But they also know that the longer they wait, the deeper the ditch gets; it gets harder
over time so they need to get going and so we have been spending a lot of time over the last six to eight years building capabilities that allow our enterprise customers to be making this transition at the pace they want to make and so he saw over several years we built capabilities like virtual private cloud or V PC or a Direct Connect or storage gateway and make it as seamless as possible to run your on-premises infrastructure alongside eight yes but what a lot of our customers said was that’s great and that’s helpful but because most of the world is virtualized using VMware it would really be helpful if I could use the same software and tools that I’ve invested all this time resource into to manage my infrastructure on-premises the last number of years but be able to use it to run my infrastructure on AWS is technology infrastructure platform and that’s why we worked really hard to build the partnership that we have with VMware to launch VMware cloud an AWS where companies can now use the same VMware tools they’ve used to run their on premises infrastructure but use it to run their presence on top of AWS and so customers are pretty excited about this we have a fair bit of momentum at this point to share what he’s seeing in the market it’s my pleasure to introduce to the stage my partner and my friend the CEO of VMware pack L singer [Music] Oh Andy it’s great to be here and to join your little party this is tremendous that’s great you know we’re doing this so often you’re like we’re like Jimmy Fallon and Justin Timberlake except not as funny or music but just as talented I may be indifferent well Pat as you know we’ve been at this for a while and people are pretty excited I share with us what you’re seeing in the marketplace right now yeah well you know since we announced our partnership Andy this idea of the leader in private coming together with the leader in public and delivering this unique hybrid cloud experience you know we’ve seen globally you know an 
incredible response from customers and partners, and we've been working to deliver the key certifications and the major industry standards, and, you know, we're now rolling out globally at a very aggressive pace. We're committed that by the end of next year, every place that Amazon is, a VMware Cloud instance will be in those Amazon zones. As we've seen, enterprises are widely embracing this capability, and in fact every time we open up a new region there's already pent-up demand to go for it. And we're seeing, because of this 20-year experience that customers have with the VMware environment, the cloud APIs and services, that customers quickly come on board. You know, maybe a couple of examples, Andy: we have customers like Fiserv, right, data center consolidation and DR; you know, maybe a customer like Brinks, who's been all about disaster recovery; you know, one of my favorites, when we launched in Europe, was Playtika, right, a gaming company. They did a very rapid migration, hundreds of VMs in a few days, but now the elastic capacity that we've built together has been a huge factor for them. And maybe the last one has been TEPCO, right, the largest energy company in Japan, now up and running on a VMware cloud in the zone that we announced there. But you know, as we've done this, and as they're accelerating their cloud strategies, you know, what we've seen is they've said, wow, this is compelling, but what about my database environment? And that was so exciting, because, you know, as we've seen, RDS is one of your most successful services, right; customers are using that to manage their cloud databases, but how do they manage their on-premises database environments? And, you know, it just seems like yesterday that you and I were on stage announcing at VMworld the availability of Amazon RDS on the VMware environment, and that's gotten a great deal of interest now, because they have a consistent operational environment, you know, the ability to have cloud-hosted backups
and copies, being able to do clones, so that environment has been super successful. And it showed that we're not just taking workloads to the cloud, we're also bringing services from the cloud and really executing on this hybrid environment. It's really been spectacular, Andy. Yeah, it's exciting, and customers are excited. We're seeing the same thing across our customers when we talk to them about their hybrid situation and their move forward as they're planning and working on that transition. So what else are customers asking for? So people are very excited, as Pat just shared, about VMware Cloud on AWS, but we have some customers who also say, well, that's great, but I have a number of workloads that are going to live on-premises for a long time. And there are lots of different reasons for that, but oftentimes an example is they need really low latency to something that sits on premises, like a factory or something like that. And what they have asked us for is, they've said, hey, is there a way that you can provide AWS services, like compute, like storage, on-premises, but in a way that really seamlessly and consistently interacts with the rest of my applications in AWS and the rest of the services I might be able to use natively? And we thought a lot about this, because the thing that customers said consistently when we asked them was: I need it to be the same, and I need it to be consistent. I want the same APIs, the same control plane, the same tools, the same hardware, the same functionality. And if you look at the options out there today that are trying to solve this problem, they're not the same. It's a different control plane, different APIs, different custom tools; the functionality is always different in the cloud than what's on premises, because you can deploy quicker to the cloud. It's why those options aren't getting much traction. You know, "the same" is simple to say, but "the same" is hard. And so we thought about this, we've thought about this for a
while, because we haven't liked the models that have been out there. But we had a breakthrough a few months ago, where we were working with a customer who was trying to get compute and storage from AWS on premises and then connect seamlessly with the rest of their AWS presence in the region closest to them, and this idea we were calling "far zones." And we thought, actually, that could be a more generalized idea, where we could uniquely solve this problem for customers. And so I'm excited to announce a new capability coming next year called AWS Outposts, which will allow you to run AWS infrastructure on-premises for a consistent presence in the cloud and on-premises. So, Andy, you're bringing the AWS cloud hardware on-premises? Yeah, so let me tell you how it works. Customers will order racks that will have the same hardware that AWS uses in all of our regions, with AWS services on them, like compute and storage, and then you'll be able to order it in two different variants. For those that want to use, again, those same tools and control plane that they've been using on VMware forever, and that they use with VMware Cloud on AWS, they'll be able to have VMware Cloud on AWS on those Outposts. And then for those that want to use the same exact AWS APIs that they use in the AWS cloud, but run them on-premises on Outposts, you'll have an AWS-native Outpost option, and on that you'll be able to run services like compute, whether it's EC2 instances or containers with ECS or EKS, or the Relational Database Service, or EMR, or SageMaker; we expect to have those at launch or shortly thereafter. And also you'll be able to run software on that, initially starting with the really interesting, really important, really broadly used VMware software that allows you to manage your presence on-premises and across the cloud. And so, Pat, this is another area where we've continued to innovate together and provide more capability for those running hybrid. You know, and
we're excited, you know, we're seeing this opportunity to continue to innovate together, like VMC, like RDS, and now with Outposts. And, you know, we're committed to this partnership and we're seeing it expand, because we're seeing customers really say this hybrid value proposition is very significant. So with that, what we're doing is announcing two new capabilities to complement what you're doing. You know, first is the VMware Cloud on AWS Outposts offering, so not just having VMware Cloud on AWS, but bringing it on-premises with Outposts, and that's going to give the full software-defined data center capability on-premises. It'll have compute, networking, storage, management, automation, the same VMC experience, right back on premises, so it's going to be to the cloud and from the cloud in a consistent way. And, you know, this fully integrated solution is going to give enterprises all of those capabilities they expect: data protection, management, storage, in a consistent environment across those worlds. We're also building on our foundational capabilities like vMotion and HCX, so that customers will have the seamlessness of this dynamic environment across those two worlds. But we're also announcing the second variant, right: VMware Cloud Foundation for EC2. So we're also bringing many of our technologies, that complete software-defined data center, and offering them for the Outposts native variant as well. And in particular, part of that is really the networking piece, because how do I connect my network to this new environment I'm bringing on-premises? And NSX, you know, is already the standard for software-defined networking; 80-plus percent of the Fortune 100 are using it already today. You know, we're seeing these ideas like micro-segmentation, common networking, bridging between worlds, as foundational for NSX. So that, as well as cloud management, AppDefense for security, cloud automation, all of these enterprise capabilities, you know, we believe is just going to be this perfect wrapper around the AWS Outposts native
offering as well. So between VMware Cloud Foundation and VMware Cloud on AWS Outposts, these two offerings are just going to continue to extend the rich innovation that VMware and Amazon are bringing to the broad customer base that we have here. Yeah, it's very exciting, and, you know, one thing I would say as we close this segment is that we're excited about all the offerings that we've collaborated on together, but I would also say that the partnership has really been a terrific partnership, where the teams continue to work very closely together, and, most importantly, try to do the right things for all of you. We're trying to listen to what you most want, trying to listen to what's working for you and what's not working for you, either in our own offerings or others, and then continue to innovate quickly together to let you run hybrid the way you want. It's a great partnership; I appreciate it. Andy, thank you so much, and congratulations on the announcement. Thanks. [Applause] [Music]

So we're really excited about Outposts, and we're really excited about the collaboration with VMware. I'll mention two quick things. First, regardless of which variant you choose, it'll run the same hardware that we run in our regions in AWS. And second, we will deliver the racks for you; we'll install them if you want (they're pretty easy, they just plug in, but if you want us to install them, we will), and then we'll do all the maintenance and repair on them. So I think it's a pretty exciting opportunity for you. So when you look now at the family of hybrid capabilities that over the years we have built for you, as you're making this transition from on-premises to the cloud, it's pretty expansive. As I mentioned earlier, you have the ability to run VPCs and Direct Connect and Storage Gateways to run seamlessly with your on-premises infrastructure and AWS. If you want to use the same tools to manage your VMware that you've been using to manage
your on-premises environment through VMware, to be able to run those on the AWS cloud, you can do so with VMware Cloud on AWS. Outposts will allow you to have AWS services and hardware on-premises for those workloads that need to stay on premises, where you want the same connection and consistency to the rest of your workloads in AWS and the rest of the services. And then for those that have environments with little to no connectivity, or that are rugged, or where you need a ruggedized device, we just launched a couple of days ago our latest version of Snowball Edge, which is compute-optimized, which allows you to run those workloads in perpetuity, or for as long as you want, at the edge where you have no connectivity. So, a really broad array of hybrid options as you're making this transition over the next few years.

I'm going to close with one final thought. You know, I think over the last several decades it has been a little bit of a dreary world for builders. They have been constrained in a lot of these companies for a long time by the on-premises infrastructure they've had, and they frequently had to choose: when they've had three good ideas, the answer is, you can do that idea, or that idea, or that idea, but only one of them. That is frustrating. Most builders didn't join a business so they could do the same thing over and over again, or so they could work creatively with a team to come up with ten ideas and only do one of them. Those types of "or" choices are demoralizing, and what's happened at those companies over several years is that they've trained builders not to spend their free time or energy trying to invent or innovate on the customer experience, because why bother, you're just not going to be able to try anything, even if you have a great idea. Now, what's happened for enterprises and for companies who have adopted AWS and the cloud over the last several years is that it's completely changed. When you have a hundred forty plus services, with the depth of features we have in those services, in
AWS, where you can deploy servers in minutes, and you don't have to build any of the infrastructure software yourself, you get from idea to implementation several orders of magnitude faster than before. That's a very different world. It turns an "or" world into an "and" world: I have five ideas, and I can experiment and try all five ideas and see which ones work. And one of the things that we've seen in enterprises who have made a big move to the cloud is that it has totally changed the culture, such that builders at every level of the organization are spending their free cycles thinking about new customer experience ideas, because they know that if they come up with them, they can try them and see them, and that's why they come to work. If you think about how many companies in the history of business have been able to build long-standing, sustainable businesses, it is a vanishingly small number that is getting smaller and smaller by the day. And if you think about what FDR said, which is that the only thing we have to fear is fear itself: a lot of times people get focused on the wrong things when they think about how to build sustainable businesses. The best way to build a sustainable business is not to worry about your competitors, or focus on your competitors, or any of the little small things that we all can sometimes get distracted with every day. For any of us that are trying to build long-standing, sustainable businesses, which is hard, the most important thing by far is to listen really carefully to what your customers want from you and how they want the experience to improve, and then to be able to experiment and innovate at a really rapid clip to keep improving the customer experience. That is the only way that you, that any of us, will be able to survive over a long period of time. That's what you should care about: giving your builders the most capable platform to allow them to keep iterating on that customer experience. There is no platform that gives your builders capabilities
anything like AWS. We'll be here every step of the way to help. And so I appreciate you listening, I hope you have a good week, and I'll see you soon. Thank you. [Music] [Applause] [Music]
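As a hedged aside on the Outposts model described in the keynote: since the same EC2 APIs are promised on-premises, launching onto an Outpost is expected to look like an ordinary EC2 launch into a subnet that lives on the Outpost rack. A minimal sketch of what that call might look like with boto3; every ID below is a placeholder, not a real resource:

```python
# Hypothetical sketch of an EC2 launch targeting an Outpost. With Outposts,
# placement is implied by the subnet: a subnet created on the Outpost causes
# instances launched into it to run on the on-premises rack, while the API
# and control plane stay the same as in the region. All IDs are placeholders.

def run_instances_params(subnet_id, instance_type="m5.large", count=1):
    """Build the keyword arguments for an ec2.run_instances() call."""
    return {
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "SubnetId": subnet_id,               # an Outpost-homed subnet
    }

params = run_instances_params("subnet-0abc0abc0abc0abc0")
# import boto3
# boto3.client("ec2").run_instances(**params)  # needs real credentials and IDs
print(params["SubnetId"])
```

The point of the sketch is the consistency claim from the talk: the caller's code does not change between region and Outpost, only the subnet it targets.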



75 thoughts on “AWS re:Invent 2018 – Keynote with Andy Jassy”

  1. Yes, it is true that AWS is a clear winner in this cloud war… but AWS should not stop inventing. It is nice to see that Amazon is still doing well in the field of invention and new features.

  2. 'Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.' – Jurassic Park

  3. It is awesome to see FSx coming up; I requested it 3 years ago at re:Invent Las Vegas. I am waiting for when AWS launches its own operating system, like Windows has dominated for so long.

  4. @Ross Brawn fans want close and no bullshit racing, not boring real time data, unless you tell us when the Mercs activate party mode.

  5. Great to see Amazon joining the blockchain trend! I wonder what type of privacy it will offer to end users, something similar to Monero or DeepOnion?

  6. Did they cut the music tracks out of Queen, The Clash, and The Beatles, et al? I keep seeing recaps mentioning these but cannot find them anywhere.

  7. Yeah, let's make the F1 even more boring with computer predictions so now I don't even need to be excited. The computer will tell me if there will be any action or not. WTF? WHY?

  8. Andy's shoes remind me of people who go to mosque for praying (you have to take off your shoes at the door), and when they come out, realize that shoes are stolen! It's a usual scene, men with fancy formal attire while wearing slippers…

  9. Ross Brawn's comments (although a little dry) are a preview into how analytics will improve both the sport and the viewing experience. He is a true visionary.

  10. Ross Brawn "changes to make F1 less predictable".. yet one graphic shows the % likelihood of an overtake happening and on WHICH SIDE! too much info.. i just want to watch the race 🙁

  11. I am a big fan of AWS but I am getting sick of Andy year after year doing the same bag the competition keynote speech. The analysts revenue numbers are pure guesswork, as other vendors don't break down IaaS vs PaaS and SaaS numbers. He always manages to find the analyst with the IaaS estimate numbers that makes AWS look the best, and goes on to bag other companies that don't even do IaaS. No SalesForce or Oracle in Andy's world. I have seen other analyst estimates that paint a very different story. He also bags other companies for doing what AWS has always done, which is launch new services with minimal features and add features over time.
    I think it is time to move on from this theme. Most customers live in a multi-cloud world, and deal with multiple cloud providers. Customers are increasingly looking at ease of use, value for money and innovative services that solve business problems. AWS is great at all this stuff; you don't need to spend half a keynote at your major annual conference bagging other vendors.

  12. I'm practicing my listening skills. Did anyone catch Andy's words at 2:42:03?
    That's a very different world. it turns what into what?
    Thank you so much.

  13. AWS security has the potential to be very strong, but poor configurations have led to more than one serious security breach. When implementing your security infrastructure, be sure to create different identity access management (IAM) users for each service and only provide access to the resources each user requires. If you need to process client data, create a storage bucket just for client data. If your machines need to pull configuration data from the cloud, move this data to its own S3 bucket and create a separate IAM account just to access that data. Not doing so could potentially lead to a vertical escalation as we mentioned in the attack above. Lastly, it is imperative to analyze the trust relationships between external services you use, and perform regular penetration testing against your AWS environment. Whenever you choose a service to supplement your business, you must understand the default configurations used by the third party and how they must be changed to fit your environment.
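    A minimal sketch of the per-bucket scoping this comment describes, as an IAM policy document you could attach to a dedicated IAM user; the bucket name and the read-only action set are illustrative examples, not from the talk:

    ```python
    import json

    # Illustrative least-privilege IAM policy scoped to a single S3 bucket,
    # along the lines the comment above suggests. Names are examples only.

    def bucket_scoped_policy(bucket):
        """Return an IAM policy document granting read-only access to one bucket."""
        return {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:GetObject", "s3:ListBucket"],
                    "Resource": [
                        f"arn:aws:s3:::{bucket}",    # the bucket itself (ListBucket)
                        f"arn:aws:s3:::{bucket}/*",  # the objects in it (GetObject)
                    ],
                }
            ],
        }

    print(json.dumps(bucket_scoped_policy("client-data-example"), indent=2))
    ```

    A separate IAM user or role per service, each attached to a policy like this one, keeps a compromise of one credential from escalating across resources.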

  14. Thank you for sharing this entire session to public. I enjoyed most of AWS re:Invent 2018 keynote session from Seoul.

  15. Excited to see Ross Brawn pitching F1 on here. And then I tried f1tv. What a disappointment. I hope that F1 eventually uses the power of AWS effectively.

  16. Andy, I would like to see James Hamilton coming up on stage at this year's re:Invent for any one of the sessions.

  17. all tax scam… money is scam…..and getting gov money while hiding openbsd replaces cisco and all of this…biges tmiddleman in history and the stock market funnels moeny to it as they pay no tax USSURY folks
