Editor’s Note: This is an excerpt from the book, "A Practical Guide to Microservices and Containers: Mastering the Cloud, Data, and Digital Transformation" – you can download the book for free here.
Quantium is an early technology adopter that is not afraid to try new things. This approach has helped Quantium establish a reputation as an innovator with a strong record of enabling both its internal teams and its external customers. Quantium has embraced many of the principles and technologies associated with next-generation applications and enterprise architecture, and its development and IT teams are seeing the payoff in improved business agility, better decision-making, and faster results.
A few years ago, they made the decision to move much of their development work from a Microsoft SQL Server environment to a MapR big data platform using containers. The idea was to give developers more control over their own environments while enabling innovations to be easily shared.
"The old way of coding would be going to the IT department and asking them to spin up a VM. You’d have to wait a week for that," said Gerard Paulke, a platform architect at Quantium. "If you used up all the RAM, you’d have to ask for another VM. Managing resources was very difficult."
Shared infrastructure had other shortcomings as well. If a VM went down, so did all the processes running on it, and there was no guarantee that other VMs would be able to pick up the load. Version control was a chore. Developers couldn’t use the latest versions of their favorite tools until they had been installed and tested by the IT organization. And upgrades could break software that had been created to work with earlier versions of those tools.
Containers now provide much of the functionality that was formerly served by virtual machines. Developers have the freedom to not only launch their own environments whenever they want but also to work with their preferred tools without interfering with others. "For example, we can have multiple versions of the Hive metastore in different containers," without causing conflicts, Paulke said. "It’s agile and resilient."
Quantium created a common base Docker image that has all the components needed for secure development. Developers can use this image as a template for constructing their own environments.
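A common base image along these lines might look something like the following minimal Dockerfile sketch. The base OS, package list, and non-root user here are illustrative assumptions, not Quantium's actual image:

```dockerfile
# Hypothetical common base image for development containers.
FROM ubuntu:22.04

# Shared tooling every development environment needs (illustrative choices).
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates curl openjdk-11-jre-headless python3 \
    && rm -rf /var/lib/apt/lists/*

# Run as a non-root user so containers built from this image
# do not run workloads as root by default.
RUN useradd --create-home dev
USER dev
WORKDIR /home/dev
```

Teams then extend this image with `FROM` in their own Dockerfiles, so security baselines are inherited rather than reinvented per project.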
"If developers want to try something new, they can just spin up an edge node," Paulke said. "If they like it, we can containerize it and put it into our app store for anyone to launch." Sharing containers enables everyone to benefit from each other’s innovations.

Automation and orchestration tools have taken over from human administrators the work of deploying, scaling, and managing containerized applications. Apache Mesos and Marathon, orchestration tools that predate Kubernetes, provide isolation and resource allocation so that containers don’t conflict with each other. If a VM fails, any running containers are automatically shifted to another VM. The orchestration software also automatically finds the resources a given container needs, so that it launches smoothly. "From the user’s perspective, their services are always available," Paulke said. "We’re running hundreds of applications, and literally one person can manage the whole infrastructure."
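In Marathon, that resilience is expressed declaratively: instance counts, resource limits, and health checks live in an application definition, and Marathon restarts or relocates instances to satisfy it. A sketch, with all names and values illustrative rather than taken from Quantium's setup:

```json
{
  "id": "/analytics/hive-metastore",
  "instances": 2,
  "cpus": 1.0,
  "mem": 2048,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "registry.example.com/base/hive-metastore:2.1" }
  },
  "healthChecks": [
    {
      "protocol": "TCP",
      "portIndex": 0,
      "gracePeriodSeconds": 120,
      "intervalSeconds": 30,
      "maxConsecutiveFailures": 3
    }
  ]
}
```

If a node dies or a health check fails three times in a row, Marathon launches replacement instances elsewhere in the cluster, which is what keeps services "always available" from the user's perspective.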
For others who are interested in adopting containers, Quantium advises designing the platform to be self-service from the ground up. Use a basic set of common container images that developers can check out of a library, and expose as much as possible through well-documented APIs.
Bare-metal servers should be identically configured to the greatest degree possible, so that automation can be handled smoothly in software, using tools like Puppet and Ansible. Use playbooks to track changes to the environment and enable backtracking to earlier versions, if necessary.
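As a sketch of that idea, an Ansible playbook that converges every node to the same configuration might look like this; the host group, package name, and sysctl value are illustrative assumptions:

```yaml
# playbook.yml - hypothetical example: converge all nodes to one identical config
- hosts: cluster_nodes
  become: true
  tasks:
    - name: Install the same container runtime on every node
      ansible.builtin.package:
        name: docker.io
        state: present

    - name: Apply identical kernel settings cluster-wide
      ansible.builtin.sysctl:
        name: vm.swappiness
        value: "1"
        state: present
```

Because the playbook lives in version control, "backtracking to an earlier version" of the environment is just checking out and re-running an earlier revision.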
Finally, talk a lot, and listen even more. In moving to an agile environment, "I’ve found people issues are the biggest issues," Paulke said. Developers need to get comfortable with the idea of self-service, and IT administrators must learn to give up some control. Once they see how fast and flexible the new world of development can be, however, they won’t want to go back.
Q: How did you get started with this "digital transformation?"
A: We had to tear down all the data silos and put all of our data in one linearly scalable platform. The benefits that came along with the elimination of the data silos were simplified scalability, enhanced tool-sharing across the organization, and the ability to accommodate multiple workload types, such as batch, real-time, and ad hoc reporting.
Q: If a friend asked you what she should focus on in order to succeed with such a massive overhaul or "digital transformation," what would you tell her?
A: Start with the data architecture. Start with a small project, pick a workload that has demonstrable business value, and see the project through to production. This helps your team to build confidence with the new technologies. Be sure to involve everyone. Set milestones, and talk about what you learn at each stage.
Q: Microservices is a pretty hot topic right now. What are some of the gotchas that people should watch out for?
A: Open source platforms that aren’t fully mature may contain bugs that you will have to fix yourself. Be willing to dig into the source code. Microservices are best developed, tested, deployed, and managed in containers, so be sure to maximize container security. Ensure that access is restricted to administrators only. Regularly review containers for security holes before moving into production.
There are lots of moving parts in microservices- and container-based environments. That makes monitoring not just a critical success factor but a hard requirement.
Continually educate business users on the benefits of a microservice architecture. Use a hands-on approach, so users have something tangible to work with.
Q: This new approach to enterprise architecture opens the door to a simplified management approach and an easier path to a highly available system, not only for the data platform but for the microservices. What needs to be done to support this within your business?
A: Run at least three masters, and preferably five, to ensure resiliency in the face of server outages. Multi-master is the only way to go: the more masters you have, the more time you have to resolve a failed master before the cluster is at risk. Staffing and skills development are critical to creating a modern data architecture: support and train existing staff on how to use the new platforms, and hire or contract people with relevant experience where necessary. Necessary skills will likely include Linux, Hadoop, network design and management, systems architecture design, Kubernetes, Docker, Python, Java, Scala, and Apache Spark.
Train people on how to best utilize the new distributed platform. Among the skills you will need are data partitioning, data replication, snapshots, mirroring, Spark and SQL query design, and knowing how to utilize YARN and orchestration tools like Kubernetes (k8s).
Q: Once all of this data is brought together into a converged data platform, how do you enable the users to leverage the data?
A: A number of tools enable broader data access: Apache Spark as a general-purpose, distributed data processing and machine learning engine; Apache Zeppelin for notebook-style development; distributed machine learning engines, such as H2O; distributed query engines, such as Apache Drill; and libraries, such as XGBoost.
Q: You mentioned earlier that monitoring is a mandated requirement of this enterprise architecture. What are some tips for monitoring large numbers of applications deployed in containers?
A: Log application data for all containers to a path on the distributed file system. If you can write the logs directly to your distributed file system, you will not need to build and manage log shipping infrastructure, which is a major time saver and allows your team to focus on the real problems. Once the logs are on the distributed file system, feed them to Elasticsearch so that you can build monitoring jobs around log events. Grafana works well for creating visualizations and setting alerting thresholds. Use container health checks in Kubernetes to automatically redeploy containers that become unhealthy. Export Kubernetes metrics to monitor things such as container restart rates and the number of applications pending launch.
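The health-check behavior described here maps to a Kubernetes liveness probe. A minimal sketch, where the image name, port, and health endpoint are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-service
spec:
  containers:
    - name: app
      image: registry.example.com/example-service:1.0
      ports:
        - containerPort: 8080
      # If /healthz stops responding, the kubelet restarts the container.
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
        failureThreshold: 3
```

The resulting restarts show up in the container restart-rate metrics mentioned above, so a flapping service is visible in monitoring even though recovery is automatic.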
Q: Containers are pretty awesome and a great way to create isolation from the underlying hardware infrastructure. How do you manage all those container images you create?
A: When moving to containers, use an image repository like Artifactory or Nexus. Keep your container definitions in Git; on a commit to master, Jenkins builds a new image and pushes it to the repository. For container deployment, take an app store approach: users can self-provision applications onto the platform from the app store, as well as destroy and restart them.
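That commit-to-master flow could be sketched as a declarative Jenkins pipeline like the following; the registry URL, image name, and credentials ID are placeholders, not details from Quantium's setup:

```groovy
// Jenkinsfile - hypothetical build-and-push pipeline
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t registry.example.com/apps/myservice:${BUILD_NUMBER} .'
            }
        }
        stage('Push to repository') {
            // Only images built from master reach the shared repository.
            when { branch 'master' }
            steps {
                withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                                  usernameVariable: 'REG_USER',
                                                  passwordVariable: 'REG_PASS')]) {
                    sh 'docker login registry.example.com -u $REG_USER -p $REG_PASS'
                    sh 'docker push registry.example.com/apps/myservice:${BUILD_NUMBER}'
                }
            }
        }
    }
}
```

Gating the push on the master branch means the app store only ever offers images that went through the full review-and-merge process.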
Use Kubernetes as the orchestrator of your container/microservice architecture. Although newer than some alternatives, it has better functionality and a larger community supporting and enhancing it.
Users who can launch containers effectively have root access. Use container templates that provide a strict level of security. Build a framework that limits the actions a user can perform on applications. For example, you can control which containers users can deploy onto the platform through the use of an app store. Users must be given explicit permission to deploy containerized applications onto the platform.
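In Kubernetes terms, that kind of restriction can be expressed with RBAC. A sketch of a Role that lets app-store users manage approved Deployments in one namespace but nothing more; the namespace and role names are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-store
  name: app-store-user
rules:
  # Users may launch, restart, and destroy approved applications...
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "delete"]
  # ...but may only read pods, not exec into them or create arbitrary ones.
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
```

Bound to a group via a RoleBinding, this is one way to make "explicit permission to deploy" enforceable rather than a matter of convention.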
Q: Microservices sound a lot like service-oriented architecture. They have been attempted for years and always failed to reach their potential. What’s different this time?
A: Microservices are self-contained and easy to deploy, scale, and move. Containers enable you to isolate resources, such as CPU and storage, and therefore achieve better resource utilization than statically partitioned services. Containers also isolate software dependencies, giving developers and operations more flexibility.

That brings the Q&A to an end. I would like to thank Quantium for taking the time to share such great information with all of the technologists reading this blog. If you have any questions, please put them in the comments section below.