Solving for Information Security with Containers and Kubernetes


The value that containers and orchestrators like Kubernetes bring to an organization is quickly becoming apparent. Faster deployment, built-in fault tolerance, freedom from dependency hell: there is a reason containers and Kubernetes are trending in IT. These tools also bring value to the world of information security. Here is an example, based on a true story (with names changed to protect the innocent).

Containers and Kubernetes

A long time ago, in a network far, far away…

Mr. X is an information security practitioner. He watches logs, keeps up to date on vulnerabilities, and implements blocks based on the latest threats. Mr. X protects his company’s technology assets. Through a network of security practitioners, Mr. X learns that his organization will likely be targeted by Distributed Denial of Service (DDoS) actors in the near future, and that they will use a rented botnet to make his organization’s website inaccessible by oversaturating its web resources (CPU, network, etc.).

Mr. X is prepared! He has been working with industry trust groups, has samples of the traffic from previous targets, and has worked with his team and internal IT to implement detailed logging on the organization’s web properties. He also has good relationships with the web firewall and traditional firewall teams and knows this is his moment to shine. Time to go to work: using threat intelligence, previous traffic, and an understanding of both the HTTP protocol and his organization's specific web traffic, Mr. X dives into the data.

Mr. X not only works well with the data science team, he considers himself a data scientist. After all, as a security practitioner, he looks at data, makes sense of that data, and then delivers business value, just like a data scientist. Mr. X just happens to add value by protecting the IT infrastructure and customers of his organization. He is well researched: having reviewed resources like this SANS paper on the importance of HTTP headers, he has already dived headlong into the web logs and knows the answer to his DDoS problem is in there somewhere.

Mr. X has all the pieces for success and, using them, finds the answer in historical data. He works with his team to create a tool that allows his organization to identify attacks in real time: a pipeline of analytic queries and open source software libraries for analyzing headers, tested against sample attack traffic and historic “known good” traffic. Mr. X and his team settle on this solution, confident in its efficacy.
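To make this concrete, here is a minimal sketch of the kind of header-based scoring such a pipeline might apply. The header checks, client signatures, and threshold below are illustrative assumptions, not the actual rules Mr. X's team derived from their traffic:

```python
# A minimal, hypothetical sketch of header-based request scoring.
# Rented botnet clients often send sparse or inconsistent HTTP headers,
# so each missing or suspicious header adds to an anomaly score.

SUSPICIOUS_USER_AGENTS = ("python-requests", "curl", "libwww")

def score_request(headers: dict) -> int:
    """Return an anomaly score for one request's headers (higher = more suspect)."""
    score = 0
    ua = headers.get("User-Agent", "").lower()
    if not ua:
        score += 3  # real browsers virtually always send a User-Agent
    elif any(sig in ua for sig in SUSPICIOUS_USER_AGENTS):
        score += 2  # common scripting/tooling signatures
    if "Accept-Language" not in headers:
        score += 1  # typical browsers send this; many bots do not
    if "Referer" not in headers and "Cookie" not in headers:
        score += 1  # no session context at all
    return score

# Example: flag requests whose score crosses a (hypothetical) threshold.
request = {"User-Agent": "python-requests/2.18.4"}
if score_request(request) >= 2:
    print("candidate DDoS request:", request)
```

In a real pipeline, the rules and threshold would come from the historical attack samples and “known good” traffic described above, not from hand-picked constants.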

But here is where our narrative becomes a “Choose Your Own Adventure” story. The following two scenarios present choices that many organizations, prior to tools like containers and Kubernetes, would have had to make.

Option 1: An immature change management process

Mr. X’s network is the Wild West. Change controls and project management don’t apply to Mr. X’s teams. Mr. X is able to distribute his code and libraries to his security data lake cluster nodes without delay. He implements code that allows results of his model to be applied as blocks directly on his firewall, web firewall, and even with his internet provider. This works great: the DDoS attack is blocked, and Mr. X and his team feel proud of the job they’ve done.

However, Mr. X didn’t realize that one of the libraries used in his model was also used by a fraud detection model running in production. When Mr. X installed his library, he overwrote the existing one. It was the same library but a different version, and its slightly different behavior caused the fraud detection code to miss multiple fraud events. There were no errors, just a marked increase in fraud against his organization’s customers. This insidious result of Mr. X and his team doing the right thing was not noticed until much later.

Option 2: Change management rules IT

Mr. X goes to implement the detection model on the organization’s big data system and is told, “Change Management meetings are only once a week, and to distribute your code and libraries to all the nodes, you need to go through that process to ensure there are no conflicts with already running jobs.” In this scenario, due to the risk of affecting production jobs and systems, a review must be done and the deployment carefully managed and tested.

This review finds that there are library conflicts on the nodes, specifically with a library that production fraud jobs use to save the company’s customers millions in fraud losses. This allows Mr. X to go back, retool his model, and bring it in line with the organization's standards. But in the time that took, the first attack wave hit. The attack caused downtime on the organization’s web portal, customers were angry about being unable to access it, and the DDoS actors were able to use a video of the company’s website showing “page cannot be displayed” in their propaganda campaign.

A Better Way: Containers and Kubernetes

The problems encountered in both options could have been avoided, or the risk greatly reduced, if the organization had implemented Kubernetes and made extensive use of containers in its big data environment. In that case, when creating his model, Mr. X would have developed directly in containers. It would be easy for him to put in all the code and libraries he needed to detect the DDoS traffic. He would also be able to create components that allowed him to easily interface with APIs on the firewall and web firewall running in discrete containers. All the pieces would be put together in a simple way that is easy to assess, easy to monitor, and easy to run on the organization’s security data platform.
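As a sketch of what one of those components might look like, here is a small Python client that a container could use to push a block to a firewall's REST API. The endpoint path, payload shape, and environment variable names are hypothetical assumptions, not a specific vendor's API:

```python
# Hypothetical sketch: a containerized component that asks a firewall's
# REST API to drop traffic from a source IP. The URL, payload, and env
# var names are illustrative, not a real vendor interface.
import os
import requests

FIREWALL_API = os.environ.get("FIREWALL_API", "https://firewall.example.internal/api/v1")
API_TOKEN = os.environ.get("FIREWALL_TOKEN", "dev-token")  # injected as a container secret

def block_source(ip_address: str, reason: str) -> None:
    """Submit a drop rule for a source IP to the firewall API."""
    resp = requests.post(
        f"{FIREWALL_API}/blocks",
        json={"source_ip": ip_address, "action": "drop", "reason": reason},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()

block_source("203.0.113.7", "header anomaly score exceeded DDoS threshold")
```

Because the endpoint and credentials arrive through environment variables, the same image runs unchanged in development and production, which is exactly the portability containers are meant to provide.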

In this environment, there are few concerns over library conflicts with production jobs. Those jobs also run in containers, and the libraries they use are specific to each application. Mr. X’s model did use a different version of the same library as a fraud job, but it caused no conflict, because everything was packaged inside the container Mr. X built for his model.
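One way to see why, sketched below under the assumption of a pip-installed dependency: each image pins its own library versions, so a job can even assert at startup that it is running against exactly the version it was tested with. The package name and version are placeholders:

```python
# Sketch: each container image pins its own dependencies, so a job can
# verify at startup that it got exactly the library version it was built
# and tested against. Package name and version are placeholders.
from importlib.metadata import version

EXPECTED = "2.4.1"  # the version this container's image was built with

installed = version("some-header-parsing-lib")  # hypothetical package name
if installed != EXPECTED:
    raise RuntimeError(
        f"expected some-header-parsing-lib=={EXPECTED}, found {installed}"
    )
```

Mr. X's container and the fraud job's container can each pass this check with different versions of the same library, because neither can overwrite the other's packages.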

Mr. X was able to go to the IT administrators with an emergency containerized model to stop the DDoS because IT knew there would be no library conflicts, and since they knew exactly what was running and could rely on Kubernetes APIs, they were able to monitor it manually to ensure there was no production impact. Because of his organization’s platform choice, Mr. X quickly put protections in place that not only mitigated the DDoS attacks but also worked seamlessly alongside production jobs.
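For example, here is a minimal sketch, using the official Kubernetes Python client, of the kind of manual check the administrators could run to watch the emergency job alongside production pods; the namespace name is an assumption:

```python
# Sketch: because everything runs as pods, administrators can observe the
# emergency job and production jobs through the same Kubernetes API.
# Uses the official Python client; the namespace is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("security-analytics").items:
    print(pod.metadata.name, pod.status.phase)
```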

A Better Future, Now

Unfortunately, when the events that inform this story unfolded, Mr. X did not have containers or Kubernetes as an option, and he was forced to pick between two bad choices. He had the data he needed in a MapR-based security data warehouse, but not the flexibility or tools required to make a safe, effective choice for protecting the network.

Today would be a different story. Containers and Kubernetes resolve the ongoing battle between process and speed of execution: instead of being in conflict, the two can work together. With container-native tools like the MapR Persistent Application Client Container (PACC) and the MapR Data Fabric for Kubernetes, access to the data required to secure the enterprise can keep pace with the speed at which threats evolve, without compromising the safety of production jobs. Containers and Kubernetes need to be an integral part of your security data platform.

This blog post was published April 25, 2018.