Director of Product Management, MapR
Container-based strategies are increasingly being deployed, and Kubernetes is rapidly surfacing as the de facto standard for delivering microservices in today's Big Data marketplace. Watch this webinar as we dive into containers, Kubernetes and the critical dependencies for a successful container-based strategy.
Containers are lightweight and easy and quick to deploy. They break complex monolithic applications into smaller, modular microservices, which aligns exceptionally well with modern DevOps practices and dynamically evolving applications. But there are some critical dependencies for leveraging containers at scale, in production. Cloud-scale storage and data persistence, along with business-critical capabilities such as security and HA, are vital for a container-based strategy to be successful.
Central to this story is Kubernetes as the leading container orchestration tool. Suzy describes how, through Kubernetes' storage APIs, MapR is able to deliver scale, performance, security, tiered storage, analytics in place and the ability to deploy containers anywhere - on-premises, at the Edge and across multiple Clouds - all while retaining a global view of data, even as containers are created, deleted or moved across nodes.
Suzy V: 00:01 Hello. Welcome to the session. I'm Suzy Visvanathan, Director of Product Management at MapR. In today's session we are going to take a look at containers and Kubernetes, and what some of the best practices are if you are starting your journey with either of these. So in the agenda I will be covering containers and Kubernetes, we'll see a little bit about the use cases, talk about some best practices on how to get started with containers and Kubernetes, and then finish off with what MapR's approach to all of this is.
Suzy V: 00:38 Okay. So by now containers and Kubernetes are something every organization has heard of. Many already have at least a small to minimal investment or investigation into them. Based on all the information that is out there today, what do we know about containers so far? Well, there is enough information out there to tell you that they are lightweight. The way a container is packaged, and how few dependencies it needs for its application, is what makes it lightweight. We also know that containers are stateless. Because of their lightweight and stateless nature, we know that containers are easily portable. They can easily be moved from one location to another.
Suzy V: 01:33 Containers started off as a very good medium for developing and testing, but they are definitely starting to move towards production. The company Docker made them popular, of course, even though the concept of Linux containers has existed for quite a while. There are still a few of us who think that this is going to be a short-lived phase. But as we proceed through the slides, there is enough information for you to understand what it means to take containers into production, and what some of the benefits are.
Suzy V: 02:11 So what do containers enable? So we talked about it being lightweight. Lightweight means that it is a lot easier and quicker to deploy. Usually the comparison is drawn with VMs. But, this is true in terms of being compared with a bare metal deployment as well.
Suzy V: 02:36 The ease and quickness allow an organization to quickly test an environment, test a new product, and time the product's launch to market accordingly. That is one of the biggest benefits of containers in the dev and test space especially. The portability of containers allows you to not have one set environment at all times. Many organizations today are struggling with the fact that they need to keep investing in hardware, and keep investing in maintaining a data center. Because of the portable nature of containers, organizations can now simply deploy almost just about everything in, say, the public cloud, or even in an existing environment, without having to purchase any special hardware just for running applications in containers.
Suzy V: 03:34 The third main aspect of containers is the fact that they allow you to take these huge, large applications and break them into smaller modules. What is the benefit of breaking applications into smaller modules? This concept is what is called microservices. The point of the application being in small, tangible entities is that if you have to make a change to one aspect of the application, you only have to change that small entity, without having to touch the entire application.
Suzy V: 04:14 So let's take an example where you may want to change a piece of the application, recompile it, retest it, and then redeploy it. If it is one single large application, you'd end up recompiling the entire application, which may take hours. Testing may have to involve all other aspects of that application, even though you have not touched them. When you break this into smaller modules, then recompiling, rebuilding, and retesting are all restricted to that small module, and thereby you can make changes independent of the rest of the application.
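As a concrete sketch of this idea, here is a hypothetical "pricing" microservice kept as a tiny, self-contained module. All names here are illustrative, not from the talk; the point is that a change to this module only forces this one service to be rebuilt, retested and redeployed:

```python
import json

# Hypothetical core logic of a small "pricing" microservice. Because it is a
# small, self-contained module, a change here only requires recompiling,
# retesting and redeploying this one service, not the whole application.
def apply_discount(price_cents: int, percent: int) -> int:
    """Pure business logic: easy to retest in isolation."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

def handle_request(body: str) -> str:
    """Minimal request handler the container would expose (e.g. behind HTTP)."""
    req = json.loads(body)
    return json.dumps({"price_cents": apply_discount(req["price_cents"],
                                                     req["percent"])})
```

Packaged into its own container image, this module can ship on its own release cadence while the rest of the application stays untouched.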
Suzy V: 04:54 You can be quick and fast about it, and you can have the product that you want out the door in no time. So these are some of the essential characteristics of containers that allow you to maintain a very elastic environment, if you will, so that it helps you build a product quickly.
Suzy V: 05:19 So let's take a look at why containers are called stateless. Since a container basically carries only the bare minimal dependencies that an application needs, all the other infrastructure resources, like CPU, memory and disk, are shared among the containers. The same concept, if you read about it, is also mentioned around the fact that the [inaudible 00:05:55] itself is shared among the containers. But the application, and the dependent libraries and files that the application requires, are the ones that are contained in the container.
Suzy V: 06:08 And that's exactly true, and that's exactly what this picture depicts. Since the infrastructure and the [inaudible 00:06:16] are shared, each container need not carry the burden of a whole OS image within itself, and thereby it is lightweight. Now, what makes it stateless is that because the application and its required libraries are the only things contained inside a container, the data that the app requires lives in the shared resources, the disks.
Suzy V: 06:48 So if a container is deleted, there is no way for it to retain or remember its data. That is one of the biggest reasons containers are called stateless. It allows a container to be smaller in size, and allows it to be very easily portable. But later on you will see that this very same aspect of being stateless also leads to a different problem: if you want to run containers in production, how do you retain data? How do you retain or remember the data? And there are several solutions for that as well.
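This data-loss behavior can be sketched with a toy simulation in pure Python (no real container runtime involved): anything written into the container's own ephemeral filesystem disappears when the container is deleted, while data written to a shared, host-side volume survives.

```python
import pathlib
import shutil
import tempfile

# Toy model of container statelessness: writes into the container's own
# filesystem layer vanish when the container is deleted; writes to a shared
# volume mounted from outside survive. Purely illustrative.
class ToyContainer:
    def __init__(self, volume: pathlib.Path):
        self.fs = pathlib.Path(tempfile.mkdtemp())  # ephemeral layer
        self.volume = volume                        # persistent, shared mount

    def write(self, name: str, data: str, persistent: bool = False) -> None:
        target = self.volume if persistent else self.fs
        (target / name).write_text(data)

    def delete(self) -> None:
        shutil.rmtree(self.fs)  # the ephemeral layer is gone for good

volume = pathlib.Path(tempfile.mkdtemp())
c = ToyContainer(volume)
c.write("scratch.txt", "lost on delete")
c.write("state.txt", "kept", persistent=True)
c.delete()
```

After `delete()`, only `state.txt` on the shared volume remains, which is exactly why production deployments need a persistence story outside the container.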
Suzy V: 07:34 So what is the impact of containers being stateless? We already saw that. Because a container is very self-contained and carries only the absolute necessary minimum that the application requires, it opens up the possibility of running your application as a service. Anything can now be packaged into an as-a-service model as long as you're using containers.
Suzy V: 08:04 That gives you the ability to serve your customers in a very proactive, fast manner. You can have tangible entities around how the service is being consumed by customers. You can also then add metering and monitoring to these smaller application modules, so that you get better feedback on what it is that your customers will need in the future. Updating a container, or the concept of updating a container, is not really applicable here, because your image itself is shared. So if there is an update, or if there is a change that needs to be applied, it need only be applied once, to that one image that is shared among all the containers. Thereby maintenance and management are also made very simple.
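The "apply the change once to the shared image" point can be sketched with a toy Python analogy. This is purely illustrative of the speaker's claim (in practice a real image update goes through a registry and a rolling restart of containers): several containers reference one shared image object, so patching that object once is visible to all of them.

```python
# Toy illustration only: containers hold a reference to one shared image
# record, so a single update to the image is seen by every container that
# references it. Real container runtimes do this via an image registry and
# restarts, not in-place mutation.
shared_image = {"name": "app-base", "version": "1.0"}
containers = [{"id": i, "image": shared_image} for i in range(3)]

shared_image["version"] = "1.1"  # one update, applied once
```

Every container in the list now reports image version 1.1 without being touched individually.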
Suzy V: 09:01 The concept of backing up a container is not really required here either, mainly because what is inside a container is very much for that instant in time. If you really want another container, it is a matter of just a few seconds to spin one up, so backing up containers is not really a requirement in such an environment.
Suzy V: 09:34 So we talked about containers allowing your applications to be broken down into smaller modules, and I also mentioned that that is the fundamental concept behind [inaudible 00:09:45] and microservices. So if you really want to carry this forward and come up with a microservices architecture in your environment, what are some of the points to consider?
Suzy V: 09:58 So there are several aspects, but I have narrowed it down to the main high-level ones. The first and foremost thing you need to do is understand your application very well and decide whether it can be broken down into components or not. That is very key, so that when you're actually reformatting that application into smaller modules, you know exactly how, and how many, components there should be.
Suzy V: 10:34 The second aspect is to decide at this juncture whether you want to run these containers, run this application, in your own on-premises data center, or in the cloud, or across both. Because containers are very tangible, and because they enable you to break down the application, they are a very natural way to run your applications in the cloud. So if you have not been sure about whether you want to deploy in the cloud or not, this is the right time to actually consider that and do the analysis and the research.
Suzy V: 11:20 Decoupling is another aspect that you want to take into account. In the previous slide we saw that the container being stateless also means that it does not carry, store or remember the data for the application, which is what gives the container its lightweight aspect. So when you're breaking down the application, make sure that the application does not depend on the data being right there in the container.
Suzy V: 11:55 Now, for instance, you may have written an application for a VM environment. In those cases, decoupling the application from the data was not a requirement, so it may very well be that you've written an application like that. If that is the case, and if you're moving it into a container environment, decoupling is something you want to keep in mind and design for upfront.
Suzy V: 12:21 Once you have made these three decisions, of course, test extensively. Do a trial-and-error method if required to determine whether the particular application you're trying to reformat will work once it's been broken down into microservices, whether each module will be self-sufficient, and whether you can deploy each of the modules as a service to your customers.
Suzy V: 12:47 After all of that, there may be applications that are simply not going to fit into a microservices environment or a container architecture, and that is perfectly fine. You do not necessarily need to refactor every old application into this environment. You may be better off rewriting, or starting to write, an application from scratch with containers and microservices in mind.
Suzy V: 13:14 Okay. Alright. So I'll make this special segue from microservices to containers and VMs. If you're contemplating which one is better, this will give a fairly high-level comparison between the two. Like I mentioned, containers virtualize the OS. That means the OS image itself is shared among the containers. The container then focuses only on running the application and the dependencies of that application. We also saw the benefit of that: it is lightweight, portable, and so on.
Suzy V: 13:59 VMs, on the other hand, came right after the phase of all environments being bare metal and physical. So VMs attempted to address the infrastructure problem rather than the application deployment problem. VMs said, we'll virtualize the hardware so that on a single server you can run multiple instances of an OS, multiple instances of a virtual server. Running multiple instances of an OS gives you one main advantage over containers, which is that you're not tied to a single OS.
Suzy V: 14:41 You can have a combination, a mix and match of OSes, so that you're not tied to one single environment. Thereby you can have different kinds of applications that require different OSes all running on the same server.
Suzy V: 15:00 And on the containers side, like I mentioned, containers are meant to solve the application problem: how do you deploy your application, your development environment? Whereas VMs address the infrastructure and the infrastructure admin environment. VMs are more focused on how to make it easy for the admin to provision resources, and how to make it easy for the admin to maintain different applications all running on the same infrastructure. Okay.
Suzy V: 15:35 Security is another aspect where the two are quite diverse in nature. Since the operating system is shared among the containers, containers often end up running with root access, and that opens up a risk and a vulnerability to the data. This is as of today, but I do envision a lot of investment and technology trends happening in this area, so this problem is going to be addressed very soon. VMs, on the other hand, have had a lot of security work done and offered; there are definitely different vendors who build security features just for virtual environments. In that aspect, VMs definitely have a better story than containers, at least as of today. Containers assist in easy development. Again, to reiterate, easy development means you can finish your project on time or early, and thereby your time to market is quicker, easier and faster, okay? VMs, again, on the other hand, have had enough time in the market and enough maturity that there is a huge plethora of an ecosystem of networking and storage and security vendors all offering all kinds of added services and benefits along with VMs.
Suzy V: 17:12 So, in that aspect, if there are any additional adjacent areas that are of interest to your organization, you can get a full-blown solution already pre-packaged from either the same vendor or different vendors for VMs. Containers, on the other hand, by their very nature make your deployment into the Cloud quite easy and more adaptive than VMs, whereas VMs have predominantly stayed in the On-Premises environment. While this comparison doesn't really say which one is better than the other, these are some of the considerations you may want to keep in mind as you understand these better. Now, I strongly believe that containers and VMs can and will co-exist. There may be applications that are better off being run as VMs, maybe because they are highly sensitive, mission-critical applications that require a bunch of security compliances to be met. At the same time, containers are easier and make deployment fluid, so I don't see a reason why both cannot co-exist in the same environment.
Suzy V: 18:36 Okay, so now that leaves us with Cloud and containers. So, why is it that containers are more attractive to run in a Cloud environment than in an On-Premises environment? Aside from the fact that they are easier to deploy and they are portable, there is a fundamental aspect where their fine-grained nature allows you to run multiple instances in a single environment, meaning you can have multiple clusters of containers all running on the Cloud, and once a job is done you can tear them down, and all you would then end up having to do is adjust your subscription costs. This leads to a reduction in [inaudible 00:19:32] usage, and thereby you literally use only what you need, when you need it. This was a slogan that was introduced by the Cloud vendors early on, when they started with the hype of "grow as your business grows."
Suzy V: 19:52 On-demand consumption: containers truly seal that particular adage, where you really use your resources only as and when you require them. Now, as you run multiple instances, all in containers in the same environment, your subscription, or the number of Cloud instances you use, goes down as well. Thereby you can see that your Cloud computing costs using containers are a lot lower than if you were to do it otherwise. So this is a side of the business benefit of running containers in the Cloud. The technical aspects we have already covered extensively in the previous slides. Okay. Now, we briefly looked into legacy applications, and I'll reiterate it here. Not all your legacy applications will fit well into a container methodology. It may require a lot of work to re-format them; a lot of time, skillset and cost investment may need to be made. So you may be better off leaving those applications in those environments and starting off with new applications on containers.
Suzy V: 21:21 Some of the legacy applications may have been built specifically for bare metal and virtual environments. They may, again, be better off in those environments, and more importantly, existing tools may not be available to repurpose the legacy applications. That is very key, because there would be enough legwork that needs to be done in order to rewrite the legacy applications. So this is one of the things that I reiterate when I talk to customers who are looking to do a digital transformation, or who have been told by their executives that they are going to change everything into containers. If you do have one such directive, keep in mind that maybe some of these legacy applications are best left as-is, and you are probably better off writing new ones.
Suzy V: 22:19 So those were some of the business nuances and best practices to keep in mind if you are thinking about deploying containers. Now let's talk a little bit about Kubernetes. For those of you who don't know what Kubernetes is, Kubernetes is container management and orchestration software. Of all the container management software available today, Kubernetes is the most popular. So, why is it popular? As you can see from this survey that was released by the Cloud Native Computing Foundation, CNCF, Kubernetes has by far the largest install base compared to all of the other competitors here. Of course, there are many other software vendors that are not highlighted here, but the main reason Kubernetes is popular is the fact that there's an entire community working to improve it, and the simplicity with which you can deploy Kubernetes today.
Suzy V: 23:37 So, some of the key features of Kubernetes are worth understanding, to also give you a fair idea of why it is popular. The first important thing that Kubernetes introduced is the concept of a pod. A pod is nothing but a collection of containers, but the pod allows end users to maintain or manage multiple containers as a single entity. Especially for organizations who have already understood the benefits of containers and are already thinking about thousands and thousands of containers, managing them as pods is much easier than doing it at the individual container level. The second aspect is scheduling. Kubernetes offers a way to automatically schedule these containers or pods on the same host, or on any particular host the end user may require. For instance, if you have an application that has been broken down into several different modules, you may be better off keeping them all in the same pod so that they can share the same resources. If you have a requirement like that, the scheduling aspect of Kubernetes will combine all of that together in a pod and make sure it is run on the same server. Kubernetes also allows you to auto-scale, replicate and recover. This means that if you have, say, a thousand containers, and you have a requirement to keep adding a thousand more every month, you can create a template and have Kubernetes schedule adding a thousand more containers every month at the appropriate time. Kubernetes also addresses the containers being stateless. Earlier in this session we saw that the very aspect of containers being stateless actually has a disadvantage, which is that if you're going to be running containers in production, you need some way to retain the data that those applications need. Containers themselves don't have the ability to store data. However, Kubernetes offers robust persistent storage volume management.
Suzy V: 26:21 This is nothing but a plug-in: many storage vendors can choose to interface with the plug-in and offer persistent storage and enterprise storage features. Kubernetes also has a monitoring module, which monitors your usage of resources at a periodic pace and offers insights into what your consumption rate has been, so this will give you an idea of how to plan additional containers if you need them. Much as it has persistent storage volume management, it also offers a pluggable network architecture. If you're looking for an entirely different network interface for containers, Kubernetes offers you choices there as well. Kubernetes also does other things like load balancing, making sure that all the pods are spread evenly across the cluster, and periodic health checks, so you know at any given time that your entire Kubernetes environment is well maintained.
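To make the pod and persistent-volume ideas concrete, here is a minimal sketch of a Pod plus a PersistentVolumeClaim, written as Python dicts rather than the YAML manifests you would normally feed to `kubectl apply`. All names, images and the storage class name are illustrative assumptions, not from the talk:

```python
# A PersistentVolumeClaim: the application asks for storage abstractly, and a
# storage plug-in / provisioner behind the named storage class satisfies it.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "storageClassName": "standard",  # assumption: cluster-defined class
    },
}

# A two-container Pod: both containers are scheduled onto the same host and
# share the claimed volume, so the data outlives any one container.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-pod", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.25",
             "ports": [{"containerPort": 80}],
             "volumeMounts": [{"name": "data", "mountPath": "/data"}]},
            {"name": "log-agent", "image": "busybox:1.36",
             "command": ["sh", "-c", "tail -F /data/access.log"],
             "volumeMounts": [{"name": "data", "mountPath": "/data"}]},
        ],
        "volumes": [
            {"name": "data",
             "persistentVolumeClaim": {"claimName": "app-data"}},
        ],
    },
}
```

Serialized to YAML, these two manifests are exactly the shape a container orchestration deployment starts from: the claim decouples the application from the storage backend, which is the "plug-in" point storage vendors integrate with.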
Suzy V: 27:33 So, combined with the big, rich developer community and ecosystem, these are some of the main reasons why Kubernetes is quite popular and continuing to be popular. Now, that may lead you to the question of whether you need Kubernetes, or any orchestration, to begin with. So, here are some guidelines. If you are in an experimental phase, have a few containers at hand and simply want to understand if containers are the way to go, or if you are simply using them in a POC environment, then the need for a container orchestration tool is very much reduced. You're probably going to benefit more by creating and managing these manually, so that it gives you a better idea of how containers are deployed, how they run and what the environment is like. However, if you have already made the transition from an experimental phase to a production environment, say, and you are already dealing with containers at large scale - thousands of containers, hundreds of thousands of containers - then it is more imperative that you use Kubernetes or any of the container orchestration tools that you have seen.
Suzy V: 28:58 If you have an environment where, even without thousands of containers, there is a sizable number of containers that are constantly changing - you're constantly creating them, deleting them, moving them - then that is another use case where you may want to consider a container orchestration tool. So, what are some of the use cases for containers and Kubernetes? By now, the information that I've given should already set the stage for which use cases may be better served by containers and Kubernetes. I want to lay down a ground statement: any enterprise application that you deem can be run in containers, following some of the best practices that we just saw earlier in this slide set, can be run in containers. However, there are three distinctive areas that do stand out. One is microservices. We already saw that in detail. Because of the inherent way containers allow you to run applications as small modules, you can design and deploy applications in a distributed manner as a microservices architecture.
Suzy V: 30:25 We also saw that containers started off as an ideal way to create, deploy, develop, test and integrate your applications. So, if you have a CI/CD environment, containers are most definitely the best use case there, and if you have not already considered them, this is probably a good time to do so. The third aspect is machine learning. Machine learning pipelines are growing as we speak. There are many customers who are already doing machine learning, and many customers who are contemplating creating machine learning pipelines. If you fall into either of those categories, then running these machine learning applications in containers will make it really easy, especially given the real-time nature of these machine learning environments. So, we talked about some of the differences between containers and existing environments. We talked about application best practices. We talked about if and when is the best time to use Kubernetes or any container orchestration software.
Suzy V: 31:48 So what are some of the considerations before deploying in production? Where to start? The first and foremost is a mindset change. Not everybody in the organization may see and realize the benefits of containers and Kubernetes. If that is the case, then evangelizing the benefits is quite important, especially articulating that if you need to stay competitive, this is something the organization will have to think about and invest in. Start small. I also mentioned this earlier. If this is something that you want to undertake in your environment - you want your applications to be running in containers - then validating it would be the next best step. Starting small will always lead to better results in the end, so that is something to keep in mind. DevOps: moving your development framework to containers is always a very good starting point for moving to containers. We ourselves use containers internally in our environment for our development.
Suzy V: 33:16 This is a much easier starting point than just jumping into production without having gone through this learning curve. Invest in standardizing on containers. If you are already in the midst of building new applications, next-generation applications, build them with containers in mind. After you have done all this groundwork and you are about to deploy all of your applications in containers, splitting the rollout into different phases helps as well, just like what I mentioned about starting small. There was an excellent article the other day about a customer who said that they did their entire digital transformation from bare metal to containers split into three to five phases, which took over a year and a half, but it was very methodical and it helped them save cost compared to doing it all in one go. Okay? All right. Where and how to deploy?
Suzy V: 34:26 We already saw that pretty much any application can run in containers anywhere, and by anywhere I mean On-Premises, the Cloud, or even Edge environments. Almost all of the public Cloud vendors today are either offering a container service or a Kubernetes service, or are already talking about launching something to that effect this year. So, if you are considering containers, then running them in a public Cloud vendor environment is a choice as well. You can run containers on Edge nodes. In fact, because Edge environments are very small, both in footprint and in their workloads, containers may be the ideal use case for them, and like I've mentioned earlier, I don't see why containers cannot co-exist with physical and virtual applications. Okay so, coming back to the stateless aspect of containers and persistent storage: Kubernetes does a very good job of offering a storage plug-in. This will require that you approach a storage vendor and ask them what kind of plug-in they have for Kubernetes and what some of the unique benefits are that you get.
Suzy V: 35:58 Docker offers Docker volumes, which are another way of offering persistent storage. So if you are deploying or going with Docker, then that is an option to consider. One best practice here: you may want to choose a data storage platform that does more than one thing, because just having storage for containers is not going to be tenable in the long run. Persistent storage is only the first step. As you run containers in production, all the other aspects of storage, such as high availability, security, mirroring and snapshots, will immediately start coming up as requirements for these applications, because these are the same applications you could run in a VM or a physical environment, and the requirements are the same. So, choosing a platform that says "we are meant just for running containers" may not be feasible in the long run, because I do envision containers, VMs and physical environments co-existing.
Suzy V: 37:19 So if that is the case, then choosing something that is elastic and lets you do multiple things is a good option to go with. Network: I have briefly touched on this. If you are looking to run Kubernetes or containers in a highly scalable environment, choosing a Container Network Interface (CNI)-compatible solution is not a bad idea. The Kubernetes pod, by design, comes with a lot of unique capabilities; you might want to consider that. Docker itself has a lot of pluggable interfaces, and there are also several third-party vendors, like Calico and Flannel, who have targeted this niche market and invested in building CNI networking just for containers. So if you're looking at a large-scale environment, then investing in a CNI is a good thing to do. Okay, so where does MapR fit into all of this? First and foremost, our flagship product is our MapR Converged Data Platform.
Suzy V: 38:36 So, what is in our MapR Converged Data Platform? We are built on the fundamental concept that it needs to cater to a scalable environment. Our foundation, our bread and butter basically, even though it has long been known as an analytics environment, is nothing but a distributed file system. This distributed, scalable file system gives us the base to run very varied, diverse applications. Hadoop environments can be easily run on us because we support HDFS. Now, because of our distributed, scalable file system, you can use the MapR Converged Data Platform as a persistent storage layer, and you can make the connection here: since it can be a store, you can use it as a store for your containerized applications as well. Once the data is brought in, you can use all kinds of machine learning and analytic tools on top of it to work on that data.
Suzy V: 39:56 If you have applications that need to run as a database, we allow you to run an operational database, to operationalize your data, on our platform as well. Now, because of the real-time nature of many of the applications that are a good fit on MapR, we give you Streams, which is an excellent tool for streaming real-time, high-volume data into an environment. So, this was the crux of the MapR Converged Data Platform. This makes it ideal not just for traditional environments but for applications that are and will be run in containers. Scale is something that we do very well. There are customers who have 100 petabytes on MapR running on a single cluster. So if you are thinking of starting small and scaling, adding more and growing your environment, MapR gives you the ideal platform for it. MapR can be run On-Premises, at the Edge and in the Cloud as well.
Suzy V: 41:18 It gives you all the capabilities of an enterprise-grade environment: high availability, data protection and intelligent, automated data placement. All of this combined addresses the many use cases we saw for containers earlier in this session. It gives you an excellent framework or platform to persistently store your container data. It allows you to run in any environment, be it Cloud or On-Premises, and in addition to all of that, it gives you all the capabilities that are required to run a containerized application in a production environment. Many, many customers have had success with our MapR data fabric. These are just a few of the logos here to give an idea of how many have seen success on our platform and are continuing to adopt us. We are extending the data fabric to containers based on all the capabilities and features that we just saw.
Suzy V: 42:31 Containers are definitely here to stay. If you were one of those who thought containers were just hype, then hopefully, by listening to the details I just provided, you leave with more information. There are applications being written with the intent to be deployed only in containers. This makes it ideal for MapR to be part of your journey. We hear things from customers such as, "Does MapR do containers? What is MapR doing with Kubernetes?" "I am thinking of moving all of my applications to containers, can I do that on MapR?" "What advantages and differentiators can MapR give us to run containers in Kubernetes?" Many of these questions and demands from our customers, combined with the fact that we have a really good Converged Data Platform, make us an ideal environment in which to also run containers in Kubernetes. We are extending this data fabric, this converged data platform, for containers as well. The first thing that we have done is integrate with Kubernetes.
Suzy V: 44:04 We have a volume driver plug-in that we have developed in conjunction with Kubernetes; it is available today, and we have packaged it along with our POSIX client. This gives you not only the scale but the performance that you need to run containers in Kubernetes in production. Now, MapR volumes that are created in the MapR environment can be mounted so that the containers in these pods can talk to them directly, and I/O into these volumes can be done directly from the applications running as containers. This addresses so many aspects that are key for containers: you have scalable persistent storage, you have performance, and along with that comes the high availability, reliability and security that we always promise our customers, which can now be carried forward into the container environment as well.
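To sketch what mounting a storage volume into a pod looks like in Kubernetes terms (my illustration, not MapR's documented manifest): a pod declares a volume backed by a vendor driver and the container mounts it at a path. The driver string, option names and paths below are assumptions chosen for the example; consult the vendor's plug-in documentation for the real values.

```python
import json

def flexvolume_pod(pod_name, driver, volume_path):
    """Sketch a Pod manifest mounting a vendor FlexVolume at /data.

    The apiVersion/kind/spec shape follows the Kubernetes Pod schema;
    the driver name and its options dict are hypothetical placeholders.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "containers": [{
                "name": "app",
                "image": "busybox",
                "volumeMounts": [{"name": "datavol", "mountPath": "/data"}],
            }],
            "volumes": [{
                "name": "datavol",
                "flexVolume": {
                    "driver": driver,                      # assumed name
                    "options": {"volumePath": volume_path} # assumed option
                },
            }],
        },
    }

manifest = flexvolume_pod("demo", "example.com/demofs", "/mapr/my.cluster/apps")
print(json.dumps(manifest, indent=2))
```

Once such a manifest is applied, the application inside the container simply reads and writes `/data`; the driver handles attaching the distributed volume, which is what lets I/O go directly from the container to the storage layer.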
Suzy V: 45:19 So, the benefits of this are many. If you recall some of the differences between containers and VMs, we saw that VMs were definitely the way to go if security is one of the biggest requirements in your environment. MapR makes it easy for you to run those applications in containers instead: you benefit from MapR's end-to-end security, which is easily carried forward into a container environment. Performance and high availability are not an issue either. We already have a distributed environment, and we have a global namespace that allows you to run your app anywhere while giving you a unified view of, and access to, the files on MapR. You can deploy containers On-Premises or in the Cloud and the data can persist on MapR, since MapR can be deployed in any kind of environment as well. MapR offers this as part of our Enterprise Premier tier, so this is a matter of very easy licensing.
Suzy V: 46:33 If this is something you're interested in, it is just a matter of talking to our sales team or your account rep, and you should easily be able to get more details on Enterprise Premier and the fact that the volume driver plug-in is included as part of it. If you want more information, we have a plethora of content on our website. I highly encourage each one of you to look further into this, read about it, and if there are any questions, reach out to your MapR rep, who will be more than glad to do a one-on-one deep dive or have further conversations with you. So, thank you so much for listening to this webinar. Hopefully, this gave you a better idea and more information around containers and Kubernetes, leaving you completely comfortable deploying them in your environment. Thank you.