Show 3.3: Kubernetes – What it is, Why it Matters, and How to Get Started - Part 3

Podcast transcript:

Hi. My name's Jim Scott, the host of Data Talks. Today we're back for our third and final session with Michael Hausenblas from Red Hat and Sebastien Goasguen from Bitnami. We'll be wrapping up all this Kubernetes talk by discussing social media influencers and role models, foundations for creating a roadmap to ensure success with these new technologies, and how to make the most (or the least) of the cloud, as well as our wrap-up rapid-fire segment.

I have a question and I really genuinely do not know the answer to this. I'm pretty sure the answer is no, and it's a straightforward question ... I haven't really seen anything to do this, but is there any way to plug Hadoop HDFS into Kubernetes as a persistent volume?

My assumption is no, because it's not a random read/write file system. I just want to make sure 'cause there's a lot of crazy things that get done out in the community. Have either of you guys seen anything like that?

Sebastien G.:

I haven't seen it specifically and I haven't tried to do it, but I don't see why it couldn't be done. Well, the only thing in my mind that would prevent it is if you can't do random read/write, then it's an append-only log. Software that's expecting to get a volume for persistence is likely going to expect to be able to perform read and write operations simultaneously, so random read/write activity wouldn't be possible on an HDFS persistent volume.

Michael H.:

Right, what maybe helps our listeners to understand the situation completely, if you're not that familiar with it, with respect to persistent volumes at least, is that there is this idea of decoupling. You have shared responsibility, but you also have clear roles for who takes care of what part.

On one hand you have cluster administrators, or storage administrators, whoever takes care of essentially saying, alright, in this environment, for example, on AWS or on premises, or wherever, these are the kinds of volumes we want ... our developers to use. There are storage classes ... many, many ways to characterize them.

On the other hand, you have developers who are not storage experts, who are not ... they might not know all the details about the storage, they just say well, I need ten gigs with these and these characteristics. And to some extent, and increasingly so, Kubernetes automatically matches these ... here are the offerings, which are managed by the cluster storage ops, with the requests that developers put in via persistent volume claims. Right.
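To make that hand-off a bit more concrete, here is a minimal sketch of the developer side of it using the official Kubernetes Python client. The claim name and the "fast-ssd" storage class are placeholder assumptions, not anything from the show; the real class names come from whoever administers storage in your cluster.

```python
# Minimal sketch: a developer asking for "ten gigs with these characteristics"
# as a PersistentVolumeClaim, using the official client (pip install kubernetes).
# The "fast-ssd" storage class name is a placeholder for whatever classes the
# cluster/storage admins actually offer.
from kubernetes import client, config

config.load_kube_config()  # reuse the same credentials kubectl uses

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="my-app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-ssd",  # defined by the storage admin
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}  # "I need ten gigs"
        ),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Kubernetes then matches that claim against the storage classes and volumes the administrators have made available, which is exactly the decoupling Michael describes.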

Michael H.:

So you are decoupling that and you make sure that each and every one has a defined role in that process. I'm not sure if that helps, but in terms of thinking about it. Yeah, I think that does and I think it confirms what I said and it's very good that you covered that for anyone who doesn't understand the concept of persistent volume yet. Yeah, in this use case, it sounds like there is ... I haven't found anything out there because there is nothing out there because it's just highly unlikely that you could make this work for that type of scenario.

Sebastien G.:

Actually, with the ... I haven't looked deeply at the recent work by the folks at Databricks who did Spark on Kubernetes and are now shipping ... I believe they are shipping Spark where the default deployment platform is Kubernetes, something like this. I would be surprised if they are not running HDFS on Kubernetes. As far as I know this has been done. It's not to say you couldn't run a Hadoop cluster with Kubernetes, because I know that can be done, because you can run anything with Kubernetes technically speaking, but it's actually exposing the HDFS file storage as a persistent volume for other stuff to run on that would be unfeasible.

Sebastien G.:

Yeah, I am not a file system expert, so we should have the folks from Databricks ... maybe you should bring them on in another podcast and discuss HDFS on Kubernetes. Well, I would venture that running ... 'cause I have seen the installation instructions for being able to run a pure Hadoop installation on Kubernetes, but that wouldn't be what I would be working to tackle. It's the separation of actual real business applications, which would depend on a persistent volume, right? And that persistent volume is the key abstraction for Kubernetes that I would be concerned with for any enterprise customers out there that say hey, I want to put my software on Kubernetes. They're going to be dependent on that persistent volume construct.

Spark on Kubernetes using HDFS as an implementation, I'm fundamentally not very concerned with, but it is a great suggestion that we do a talk with the Databricks guys because I haven't talked to them in at least five, six months. So, let's move on and ... Michael, you mentioned actually reading Twitter, so let's start off with Sebastien.

Who on Twitter do you follow that you think our listeners should follow? Who's impactful to you that you read and you're like, yeah, this is someone that other people should follow on these topics?

Sebastien G.:

On Kubernetes, definitely ... and I would say, follow Joe Beda. Joe was one of the creators of Kubernetes. He used to be at Google for a long time; he was the brain behind GCE, Google Compute Engine, so he knows [Mark 00:05:50] very well of course, and then together with Brian Grant and Brendan Burns he created Kubernetes. Of course a few other guys as well from Google, lots of amazing brains there, but Joe is pretty active on Twitter. Every Friday he does a Thank God It's Kubernetes, TGIK, so it's a live stream of him tackling and investigating new software. So it's great to see him read a README for the first time and then try to deploy the software and understand it, so definitely follow Joe Beda.

And then of course there is Kelsey Hightower, who is the best among us. First, an amazing speaker, totally amazing speaker, and then also a great inspiration in terms of tech and where things are going. And for the listeners, we will drop the Twitter links to these folks to make it easier for you to find and follow them.

Sebastien G.:

Sure

Michael H.:

Yes, definitely, definitely. Michael, do you have any personal suggestions of people on Twitter who are impactful to you that you think our listeners should follow?

Michael H.:

Just to reinforce that, definitely Joe. What impressed me, I think, is not only that he has an awesome brain, but he's really an awesome human being. If you tune into that TGIK, he does live coding there. There's a chat on the side where people are asking questions. He interacts with people. He really is very approachable and very accessible and very happy to help different folks at different stages, and I think that's a really, really good suggestion by Sebastien there. I totally subscribe to that. Is there anybody else?

Michael H.:

I thought about that and there are two ladies. One is Charity Majors; she used to work at Parse and Facebook and is now the CEO of Honeycomb, and she's awesome. She sometimes has strong language but makes really, really good points, awesome presentations, awesome tweets, like small takeaways in whatever 280 characters we currently have, that really make you think and want to learn more, especially around monitoring and observability and so on.

The second suggestion that I would have is another lady, and that's Liz Rice. She is now with Aqua Security and she is, again, as Charity and Joe are, an awesome human being and an awesome live coder. I learned a lot in terms of container security from her.

And whenever she speaks and tweets and shares her wisdom, I definitely listen and I definitely make mental notes. Fantastic. Well, I'm going to throw one of my own plugs in here, and I think this is actually someone that Michael may follow on Twitter, because I believe Michael, on Twitter, you have a gopher, the Golang mascot, as a little caricature for your face icon, correct?

Michael H.:

That's right, too. Yeah, so, Ashley McNamara is who I'm going to call out, because she has, I would have to say, a larger percentage of hilarious, thoughtful, and thought-provoking tweets than anyone else I follow on Twitter. And I follow a pretty good number of really smart and really mind-bending Twitter activists. And she's fantastic. I'm going to read one of her tweets that she recently posted, which just kind of made me chuckle, because she said someone asked her, "you have a lot of women on your team. Is it a diversity thing?" And her response was perfect, which was, "let me be very clear here. We hire the best in the industry and the best are women. The end. Don't insult us by assuming it's a diversity thing. It's a skill thing."

Sebastien G.:

I remember that. And it's just, it's great to me because she just says it like it is. She's not going to beat around the bush for anybody. So these are five people that we've just mentioned that I would strongly suggest all of our listeners go ahead and follow.

Michael H.:

I can only highlight what you said. I remember her talking at GopherCon in Denver last year, where she essentially told her story, where she came from and how she got into that position. And to be honest, since it's just you and Sebastien here and no one else is listening, I can confess that I was actually crying. When she told the story I was so moved by it and how she got into this position that, you know, I really struggled with the tears. It's very moving. Alright, well, we're not going to be editing that out of the podcast, just to be clear. So there will now forever be a record that you cried at a conference. Good, good to know. Thank you.

Michael H.:

It was dark. It was dark. Yeah, yeah, and I'm sure no one saw. I can attest to that, because I have actually read a number of tweets from people who basically tagged her and said she made them cry, but Michael's the only one I personally know.

Sebastien G.:

But Michael was probably not the only one. So, let's go ahead and get back onto another topic. You both talk a lot to people about adopting new technologies like Kubernetes. Some people might look at this as part of a transition to modern data platforms. If you're an enterprise architect in one of these companies out there who recently discovered Kubernetes, what specific advice would you give them to get started? For example, now that they're getting started, what are some of the processes, the working relationships, development team, administration ... what is it that they can do to help ensure their success? Michael, do you want to start?

Michael H.:

Right, just chuckling here a little bit, because when you started off with "you guys, you two, you talk a lot," it's like, where is he going with that? Oh, you talk a lot about this. Right. It's not just that you talk a lot.

Michael H.:

Well, so, again, I think that technology is the easy bit and that a lot of work has to go into the group dynamics, into making people work together and accept and respect each other. And if you have that environment, take any Kubernetes distribution [inaudible 00:12:46] obviously, but pick any Kubernetes distribution, pick any container runtime, pick any CI/CD pipeline you like and you will be successful. And if you don't have your ducks lined up, if you don't have the people on board mentally, and, you know, management backing them and everything in place there, then no technology on earth will make that happen for you. That's my strong belief. That's good. What do you think, Sebastien?

Sebastien G.:

Yeah, very practical thing, I would say when you start with Kubernetes, try to spend some time understanding imperative versus declarative and the impact that it's going to have on the way that you manage your applications, the way you develop your applications, and also on the culture that you have in your teams. Because to me, this mindset of declarative application management was quite a big shift in Kubernetes. And I think it's extremely powerful. Great. Are there any key performance indicators that you think will help them know that they're headed in the right direction?
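To illustrate the contrast Sebastien is pointing at, here is a minimal sketch using the official Kubernetes Python client. The deployment names and the nginx image are placeholder assumptions; in practice the declarative loop is usually owned by kubectl apply or a GitOps tool rather than hand-written code.

```python
# Sketch of the imperative vs. declarative contrast (pip install kubernetes).
# Names and images below are placeholders for illustration only.
from kubernetes import client, config
from kubernetes.utils import create_from_dict

config.load_kube_config()

# Imperative style: tell the cluster what to DO, step by step.
apps = client.AppsV1Api()
apps.create_namespaced_deployment(
    namespace="default",
    body=client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    ),
)
# ...and later, another explicit command to change it:
apps.patch_namespaced_deployment(
    name="web", namespace="default", body={"spec": {"replicas": 3}}
)

# Declarative style: describe the desired END STATE in one document and hand
# it to the cluster; controllers reconcile toward it. (kubectl apply -f or a
# GitOps tool would normally own this loop.)
desired = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-declarative"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web-declarative"}},
        "template": {
            "metadata": {"labels": {"app": "web-declarative"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}
create_from_dict(client.ApiClient(), desired, namespace="default")
```

The imperative half issues commands step by step; the declarative half states the desired end state once and lets the controllers reconcile toward it, which is the mindset shift Sebastien is describing.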

Sebastien G.:

Well, is your app working? Can you run your app? How easily can you deploy your applications? Are they up and running?

There was a recent case study by the folks from BlackRock, the asset management company, that was just published a couple of days ago, 100 days for Kubernetes in production. So it's a great read. And you see there, a traditional enterprise, financial, they decided to go with Kubernetes. And in 100 days, three months roughly, they went from nothing to having Kubernetes running an app in prod and integrated with their existing processes and monitoring and management systems. Excellent. So normally, this is where I like to do the tip of the day, except I'm going to change up the tip of the day a little bit. So, instead of me offering up a tip, I want to put something out here for you guys and I want your opinion on whether people should do this or not.

So, within Kubernetes, how do you feel about ... and Michael, please go ahead and go first on this one ... how do you feel about setting up a default namespace for the Kube control to use?

Michael H.:

I'm not entirely sure I'm following, to be honest. Well, so when people run Kube control and they have to give the dash-n parameter for the namespace that they're doing the work in-

Michael H.:

Yeah. How do you feel about setting up a default namespace to use? So, when you run the command, always run it in this namespace unless you've specified otherwise.

Michael H.:

Right, so, now I think I get it. You're talking about Kube cuddle, right? Yeah, Kube control. You call it cuddle?

Michael H.:

There we go. See when I see CTL, I think control. Is this because you live in Ireland?

Michael H.:

It's an ongoing debate in the [Kubernetes 00:15:53] community and I think your best it probably has ... there are also some strong thoughts about that, how you pronounce it. There are essentially three-

Sebastien G.:

Kube control. Oh, so it's two to one right now. You lose.

Michael H.:

I can back up my [crosstalk 00:16:09]. Tim Hockin, many, many others say cube ... actually, I will show you that. I'm going to be giving a talk about Kube cuddle next week in Brussels [inaudible 00:16:22] again. And I printed off t-shirts that actually have Kube cuddle written on them, just to make it very, very clear how you pronounce it.

Well, anyways, coming back to your question. Good insight. I can't wait to hear what the third way is.

Michael H.:

Kube cuddle, Kube control, or Kube CTL. Okay, that's fair.

Michael H.:

And the thing we have ... at the last KubeCon in Austin, one of Kubernetes' initial contributors said he pronounces it Kube control, and that's why you pronounce it Kube control, and Tim Hockin was just early on, on record, on Twitter and YouTube, whatever. Actually there's a YouTube video; if you put Kube control, Kube cuddle, or Kube CTL into YouTube along with Tim Hockin, you will find how to pronounce it there, right there on YouTube. So it's up on YouTube. Okay, but if we stay on track and we go back to my original question.

Michael H.:

Back to your question, yes sir. So by default, and certain Kubernetes distributions might override that, there is the default namespace always, right? For example, in OpenShift Online, our hosted offering, that's not the case; the default namespace is not available. You have to actually create a project, which is essentially a namespace on steroids.

So yes, obviously, you want to use the default namespace, right. I'm having a hard time understanding where that is not the case.

Sebastien do you have any example where that is not the case?

Sebastien G.:

So it's defined in your Kube control context, in your profiles, so by default, you go into the default namespace. But if the user has RBAC privileges where it says that you can only create objects in a specific namespace, then you can set up that specific namespace in your Kube control context so that when you do Kube control get pods, it will actually get the pods in that namespace and not the default namespace.

So in that sense, you created a default namespace for that user, which is not the default namespace for the cluster. Does that make sense?

Michael H.:

Right. Now we've totally confused pretty much everyone out there. Long story short, there is a [inaudible 00:18:43] space; in any sensible setup I'm aware of, there is some default namespace of some type ... most often in vanilla Kubernetes, it is actually called the default namespace, it's called default. Exactly, which is why I wanted to bring it up, because it is able to be configured and changed. And, of course, as you pointed out, depending on distribution and where it's running. But I just wanted to bring it up as a tip of the day for people to be aware of, because as you get into a larger and larger distribution within your own enterprise, it could potentially have ramifications.

Michael H.:

Good point.

Sebastien G.:

There's an interesting follow-on to that tip, which is to actually configure your PS1 to show the namespace you're talking to in your prompt. Oh, that'd be ... do you think you could share that with me and we'll post it with the podcast for people to have a reference to it?
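Sebastien's actual snippet isn't reproduced here, but as a rough stand-in, here is a minimal sketch of a tiny helper a shell prompt could call to show the current context and namespace. The file name is hypothetical, and in practice shell tools such as kube-ps1 do this natively.

```python
#!/usr/bin/env python3
# Minimal stand-in for the prompt tip: print "context/namespace" so a shell
# prompt can embed it, e.g.  PS1='$(kube_prompt.py) \$ '
# This is a sketch, not the snippet Sebastien mentions.
# Requires: pip install kubernetes
from kubernetes import config

contexts, active = config.list_kube_config_contexts()
context_name = active["name"]
namespace = active["context"].get("namespace", "default")
print(f"{context_name}/{namespace}")
```

Putting something like this (or kube-ps1 itself) in your prompt makes it much harder to fire a command at the wrong namespace.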

Sebastien G.:

Sure. Alright, fantastic. So, back to regular hard questions. Something that I find to be of the utmost importance when it comes to general agility in the enterprise is the ability to place data and jobs, right? Put the software and the data where they belong. Oftentimes, this will include placing data or jobs on specific pieces of hardware. So, this could be for something as simple as ultra-fast storage like SSDs and NVMe, or even something like use cases which require maybe an NVIDIA GPU. Is there something that Kubernetes offers to be able to ensure that your software is running on the physical server that you want it to run on, like maybe labeling servers?

Michael H.:

There are many, many options, affinity just being one of them. I believe the [inaudible 00:20:36] the special interest group ... is it resource management or is it node? One of those SIGs is actually working on something to make that even more powerful and flexible, but maybe Sebastien has some more on that topic.

But out of the box you can use affinity to do that yes.

Sebastien G.:

Yes, and we shouldn't forget that at the core, Kubernetes is really a scheduler, so it has an API, you talk to the API. You say hey, run that container, but then there is a scheduling algorithm that's running predicates and priorities to decide where to actually land that pod.

So you can totally configure and customize that scheduling process. You can write your own scheduling policy file. But more and more, those scheduling customizations are exposed through the API. So you have node affinity, node anti-affinity, pod affinity, pod anti-affinity, node selectors, taints, tolerations. So you've got a bunch of scheduling capabilities so that you can make a workload land on a very specific node.

So definitely, if you have GPUs you could tag your nodes saying, hey, GPU equals blah, blah, and then when you submit some CUDA C code, containerized, you can say, hey, boom, go to that node.

And there is very strong support now for GPU-type workloads in Kubernetes. That's great. And I bring this up because I find this really to be a straightforward use case for MapR. And it's one of the opportunities to help tear down some walls, which I've personally seen: over the last ten years we've seen the advent of distributed data processing, but what's been around for quite a while is the concept of high performance computing workloads. And where we've seen some pretty significant pickup is in tearing down the walls between those two environments and making them able to run together. And I personally see Kubernetes as being able to significantly help in this scenario because, exactly like you pointed out, at the core it's a scheduler. And if you have different workloads and you have your data appropriately separated across these servers, you now have the ability to run either of these types of workloads on the same type of hardware, just based on the scheduling constructs that you've defined for your business.
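As a concrete illustration of the node-labeling approach Sebastien describes, here is a minimal sketch using the Kubernetes Python client. The label key, the taint, and the container image are assumptions for illustration, and the nvidia.com/gpu resource name assumes the NVIDIA device plugin is installed on the cluster.

```python
# Sketch of "tag your nodes, then make the workload land there": a Pod pinned
# to GPU-labeled nodes with a nodeSelector, tolerating a GPU taint if ops set
# one. Label key, taint, and image are placeholder assumptions.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cuda-train"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        node_selector={"accelerator": "nvidia-gpu"},  # node label set by ops
        tolerations=[client.V1Toleration(
            key="nvidia.com/gpu", operator="Exists", effect="NoSchedule"
        )],
        containers=[client.V1Container(
            name="train",
            image="my-registry/cuda-train:latest",  # placeholder image
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"}  # ask for one GPU device
            ),
        )],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Affinity rules, taints, and tolerations cover the more expressive cases Sebastien lists; a plain nodeSelector is simply the easiest way to pin a workload to labeled hardware.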

Sebastien G.:

Yep, exactly. So say a person is doing something like machine learning. Let's say they're doing something maybe around image or video processing and they want tens or hundreds of servers to access the same exact data without having to replicate the data everywhere, thus preventing tens or hundreds of copies. What features of Kubernetes, in this case, would you use to help them solve their problem?

The silence tells me this must be Michael who wants to answer this.

Michael H.:

Can you repeat the question? What's the use case, machine learning? So if I was doing machine learning on images and I want these images to be on a bunch of different servers, or at least accessible from a lot of different servers, what would I do to enable these machine learning implementations to have access to the data without copying the data to all the different locations? Right? Thus, as I scale out my environment, I don't have to worry about copying the data to every node that has the machine learning libraries running on it just so it can get a copy of all the images it's going to process.

Sebastien G.:

So I would give ... and Michael can, I'm sure, comment, but I would give a super practical answer right now. I would tell you to use TensorFlow and go check out a project called Kubeflow, which is an effort that just started to be able to run TensorFlow training jobs and then to actually serve the models that you've trained. And if you look at the manifests for how they actually deploy TensorFlow training workers, then you'll be able to see how the data is made accessible.

Michael H.:

Kubeflow.

Sebastien G.:

Kubeflow.

Michael H.:

Right. Funny thing. I was actually talking until I realized, when you said, Michael wants to answer, or whatever, that I'm muted. I muted myself, it's my fault, so I'm very sorry for that. I was actually talking.

Yeah, what Sebastien said is probably the best way to start, and just to give you a little bit of context, Kubeflow is really, it's very, very young. It was essentially launched by Google in December last year around KubeCon. I remember we had that OpenShift [inaudible 00:25:30] the day before where it was already kind of semi-announced. It was essentially, just for the time being, just TensorFlow plus JupyterHub and Jupyter notebooks, so you know, you can try it out in Katacoda, for example. No install required. You just use it with any modern browser, you just go to katacoda.com/kubeflow and can try it out, and that's probably the best way to do that.

And there is also the Slack community if you want to interact with the folks there, many from Google. We, at Red Hat, started to contribute once we became aware of it.

Yeah, and I guess, Jim, you also are interested in the storage aspect a little bit, not just in the compute aspect, and for that, I'm not that familiar, to be honest, with how TensorFlow handles data locality. I know how, for example, Spark does it. I don't really know how you can optimize or influence that with TensorFlow. The basic idea is essentially that you don't copy big amounts of data, data sets, across the network; you're launching or scheduling the compute tasks to where the data sits. So I have no idea how that actually would work out with TensorFlow, but for the simple case, you just do that on a single machine or a single node and you should be good. Great. Yeah, and I think this is a place where, like within MapR, that we would play more in. But I was really curious to hear what else is out there that I personally might not even have thought of, and if I wasn't thinking of it, most other people out there probably weren't thinking of it either.
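One common Kubernetes-level pattern for Jim's "many workers, one copy of the data" question is a single ReadWriteMany volume mounted read-only into every worker pod. The sketch below assumes a storage class that supports RWX; the "shared-nfs" class name, the claim size, and the image are placeholders, and this is not a description of how Kubeflow itself wires up data.

```python
# Sketch of the "many workers, one copy of the data" pattern: one
# ReadWriteMany claim mounted read-only into every worker pod, so nothing is
# copied per node. Class name and image are placeholder assumptions; RWX
# support depends on the storage backend your cluster offers.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_namespaced_persistent_volume_claim(
    namespace="default",
    body=client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="training-images"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],  # many nodes, one dataset
            storage_class_name="shared-nfs",  # placeholder class name
            resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
        ),
    ),
)

def worker(i: int) -> client.V1Pod:
    # Every worker pod mounts the same claim; scale-out adds readers, not copies.
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=f"trainer-{i}"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            volumes=[client.V1Volume(
                name="dataset",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="training-images", read_only=True
                ),
            )],
            containers=[client.V1Container(
                name="trainer",
                image="my-registry/trainer:latest",  # placeholder image
                volume_mounts=[client.V1VolumeMount(
                    name="dataset", mount_path="/data", read_only=True
                )],
            )],
        ),
    )

for i in range(3):
    core.create_namespaced_pod(namespace="default", body=worker(i))
```

Whether ReadWriteMany is actually available depends on the storage backend; when it isn't, a shared data platform or object store is the usual alternative.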

So, now, we're about to move into rapid fire. So these are very straightforward questions. There's not anything super complicated in here; they don't require really long answers. And this is an opportunity for you guys to plug frameworks and other things like that out there that people should understand and know about, but we're not going to be giving lots of details on this.

So what tool, technology, framework, etc., do you feel best fits the following statement: most underappreciated tool for automation. Michael?

Michael H.:

Underappreciated. Now you've caught me off guard. Underappreciated. Sebastien, do you have one?

Sebastien G.:

NCBO. Okay, what about most underappreciated framework for testing?

Michael H.:

Bats.

Sebastien G.:

Gatling. Bats and Gatling? Did I get that right?

Sebastien G.:

Yeah. Okay, what about most underappreciated tool for monitoring?

Michael H.:

Prometheus. Bats.

Sebastien G.:

collectd. Oh really, collectd? Man, it's like you're trying to start an argument here. Michael, do you have one?

Michael H.:

I think [crosstalk 00:28:51] because it's already very appreciated, so I will go with that as well; yeah, I'll go with collectd as well. Okay. Alright, what about most over-hyped tool for automation?

Michael H.:

Most over-hyped.

Sebastien G.:

Pass. Pass. We pass. Oh, come on. Is that too political for you guys?

Michael H.:

I'm trying to figure out, does Bitcoin count? I'm pretty sure you can ... since it's just a distributed database with shared consensus, you can pretty surely run some automation on top of that, right? Okay, okay, okay, how about we move on. Most over-hyped framework for testing.

Michael H.:

Over-hyped. Hmm.

Sebastien G.:

I don't know. Oh my goodness. I've caught you guys speechless. This is a real change of pace.

Michael H.:

These are really hard questions. Over-hyped, under-hyped, it's very much in the eye of the beholder there, right? It is, it's political, that's why I'm asking you guys.

Okay, how about one more. Most over-hyped tool for monitoring.

Sebastien G.:

Prometheus definitely. That's actually pretty funny.

Michael H.:

I would disagree. So I would say Datadog, just to make a few enemies. That's a good one too.

Michael H.:

Right. Yeah, I like that. See, now we're being political.

What would you say is the most pivotal person/role that drives the success of a Kubernetes-based infrastructure? Michael?

Michael H.:

Person/role. Person or role.

Michael H.:

I would say the one person that manages to get developers and administrators/ops into the same room. Okay. Sebastien.

Sebastien G.:

Yeah, to me, I would say it depends on the size of the company, but I think the CTO or whoever is perceived in the company as the one having good vision and technical judgment. Okay, that's actually a pretty good answer. I like that.

So regardless of the title, the person who does that, the person who is more forward-looking, is probably that critical person to be able to get the people in the room talking about it.

Sebastien G.:

Yeah, and it's something that I've discovered. I didn't really know about those roles, but sometimes you call them tech leads. It's what they have at Google, you know, people who are extremely technically savvy. They're definitely at the top in terms of technical abilities but they haven't gone into management. And basically, they are really respected by everybody for their technical vision but they don't manage anybody. Alright. So we've talked a little bit about cloud at this point. Now, I personally love the ability to run in the cloud. But I personally feel that the real benefits of the cloud are when people use the cloud as infrastructure as a service. The cloud providers, I think, are eager to lock clients in on their APIs, which again, we talked about a little bit. How do you guys feel about the cloud, in just a big overview for people? What's your stance on the cloud?

Sebastien G.:

The cloud, the public cloud you mean? Yeah.

Sebastien G.:

Yeah, I think the public cloud is great, and actually, as much as I love on-prem development, and I think that on-prem is still going to be here for a long, long time, these days when you look at the number of APIs that you find on public clouds, it's becoming very, very difficult to resist using cloud services. Last week I was doing a ... I spent some time doing some machine learning and exposing a model, and I was writing some Python code and I was writing a bot on Twitter and everything. And then after a while, during the whole day of coding, everything was working and I was like, really, why am I doing this? I could just use the Cloud ML from Google and I'd be done. And I think that the sheer ease of use of the cloud services is going to be very, very difficult to withstand. And making the case for on-prem, even though I love Red Hat, is going to be tougher and tougher. Michael, any thoughts?

Michael H.:

Yeah, so, no doubt about it, cloud increasingly dominates everything we see. We've seen it already with the [socres 00:33:54] service for many years, and increasingly so for IaaS and PaaS and SaaS and whatever else as a service. Hey, watch your language, huh?

Michael H.:

XaaS. Something as a service. Yeah, not necessarily, I'm not sure if I agree with everything Sebastien said there, in terms of like, it's not only whether many, many vendors at the current point in time are pushing towards having more in the cloud, knowing that this public cloud piece will grow rapidly over time. I don't really know whether it will be in 5 years or 10 years or 20 years, and it's maybe, obviously, a trust issue. But I see, obviously, the big benefits of the clouds, and less and less do regulations and national or whatever laws kind of hold people back from doing something with workloads or data in clouds, so that's a good thing I guess. Alright, so Michael, can you offer one actionable piece of advice that our audience can do in order to successfully audit their current infrastructure to drive a success plan for implementing a Kubernetes strategy?

Michael H.:

I would say go for the low-hanging fruit, where the scope is very, very small and potentially something that you're writing from scratch and not something where you're trying to break up a monolith. So start with something small and greenfield-ish. Sebastien, same question.

Sebastien G.:

So I would say, start with the Bitnami Sandbox that's available on all the cloud providers, and then deploy Kubeapps, which is the amazing Bitnami product to deploy Kubernetes applications. And then move on to functions as a service using Kubeless, which is the Bitnami project for serverless.

Michael H.:

Are you sure that you didn't forget some of the Bitnami offerings on [inaudible 00:36:14] Wow, that was shameless.

Sebastien G.:

And use Bitnami containers, of course. What about Bitnami secret management? Should we use that?

Sebastien G.:

Yeah, you should definitely use Bitnami sealed secrets.

Michael H.:

Have you got us t-shirts as well? I think that would be awesome.

Sebastien G.:

We have t-shirts. And we do Bitnami Kubernetes training. Fantastic. Oh my goodness, this is ... wow. This has turned into a blatant advertisement for all of our companies. Let me think of one to throw out for MapR.

Sebastien G.:

At the end of a two-hour podcast I think we can put a shameless plug there. Alright, so, one more question. What can our listeners do to enable architects, engineers, and data scientists to work together more efficiently and effectively, Sebastien?

Sebastien G.:

Oh, wow. I mean, don't forget DevOps, and that it's a culture thing. So don't be married to tools. If Kubernetes doesn't work for you, Kubernetes doesn't have to be the solution for you; containers don't have to be the solution for you, okay? It's not because it's the latest technology out there with tons of traction that you should be using it. So assess your problems. What are you trying to achieve? What is the business problem that you're trying to solve with an IT solution? And everybody get together at the table, you know, developers, admins, architects, even the business guy. What is the business guy trying to do? So get together, and then, once you truly understand what problem you're trying to solve, figure out what's the easiest.

Sometimes a Bash script may be totally legit. Take your time and don't embrace the latest tech just because. Fantastic. That is extremely helpful, I think, to many people. Michael, did you have any thoughts to add?

Michael H.:

Yeah so, plus one to what Sebastien said there, no doubt about that. And on top of that just to make it a little bit stronger, if you have a budget, just get your entire team to an offsite meeting. Dance together, have fun together and then you are probably in a good place to adopt whatever technology and if it happens to be Kubernetes and containers, then yeah, you're probably in a good place. Make it happen. Make people talk to each other, respect each other, and then everything is possible. So get people into a room and be nice. Is that what I'm hearing you say?

Michael H.:

Yeah. Right. Respect and niceness. Yes. Fantastic. So this is all the time that we have for today. From all of us here at Data Talks, I'd like to thank all of you for listening. I'd like to give a very, very heartfelt thank you to both Michael and Sebastien for being my guests on Data Talks. This is quite possibly one of the best podcasts I've ever recorded, and I can't wait to hear it again in playback because there's just an abundance of information that's been shared here.

There are a couple of additional resources available that I'd like to mention. I'll point out Kubeapps again, because Sebastien did a great job of advertising for it, but that's one I really want to highlight, because I think Kubeapps really is going to help people be able to pick things up and get running faster when it comes to implementing Kubernetes. And then I'd also like to point out that there's a book that I've written titled A Practical Guide to Microservices and Containers, which can be found at MapR.com/ebooks.

For all of us here at Data Talks, I'd like to thank you for listening. I'm Jim Scott, on Twitter at kingmeasel. Be sure to tell all of your friends, maybe even your closest family members, that they might want to consider listening to the Data Talks podcast. Before we sign off, I'd like to leave you with one final thought.
