Organizations today face many challenges as they begin or continue their digital transformation journeys and the vital replatforming of their IT infrastructure. Whatever specific problems you and your organization may be facing, rest assured: you are not alone. These once-in-a-generation infrastructure overhauls are never easy. IT management must not only deal with traditional and legacy infrastructure that is inadequate for the very high scale and low latency requirements of emerging technologies, but also come to terms with legacy ‘thinking’ and legacy skill sets. It is a multifaceted challenge, but one that must be overcome for the sake of the long-term viability of the enterprise.
Three emerging technologies merit particular attention:
One of the most compelling advantages of cloud computing is developer productivity. As noted above, developers can quickly spin up their own cloud instances, provision the tools they want, and scale up and down easily. When considering cloud computing, there are two important terms to be acquainted with: data gravity and cloud neutrality. The former refers to how difficult and costly data becomes to move as it grows. Transferring a gigabyte of data from one cloud to another is no big deal; try it with a terabyte, then a petabyte, and start adding up the egress costs of moving data out of one cloud and into another. The latter term strikes at the core of vendor lock-in: if you want to be nimble and run your software on multiple clouds, you don’t want to get stuck on an API provided by a single cloud vendor. Use caution when choosing a point solution on which to build your software. It may look appealing at the onset of the project, but it can cause a lot of pain later.
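To make data gravity concrete, a quick back-of-the-envelope calculation shows how egress charges scale with data size. The flat per-gigabyte rate below is an assumed placeholder for illustration, not a quote from any particular cloud provider:

```python
# Illustration of "data gravity": egress fees grow linearly with data
# size, so moving large datasets between clouds gets expensive fast.
# The $0.09/GB rate is an assumed placeholder, not a real provider price.

EGRESS_RATE_PER_GB = 0.09  # assumed flat egress price, USD per GB

def egress_cost_usd(size_gb: float, rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    """Estimated cost to move `size_gb` of data out of one cloud."""
    return size_gb * rate_per_gb

for label, gb in [("1 GB", 1), ("1 TB", 1_000), ("1 PB", 1_000_000)]:
    print(f"{label}: ${egress_cost_usd(gb):,.2f}")
```

At an assumed nine cents per gigabyte, a petabyte costs on the order of $90,000 to move once, before any re-architecture work, which is why large datasets tend to pull applications toward wherever the data already lives.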
One of the biggest challenges organizations face today with big data, which includes IoT data, is that it is increasingly semi-structured or unstructured. Legacy RDBMSs simply cannot aggregate, store, and process this data efficiently or effectively, certainly not at high volumes. The data volumes are unprecedented and growing rapidly, in some cases doubling annually by some estimates. IT is under tremendous pressure to deal with the volume, variety, and velocity of new data, while at the same time being pressed to deliver more personalization and better service to the customer base. The complexity of big data environments compounds the problem, producing systems that are fragile and difficult to maintain.
This environment itself is highly dynamic, with frequent new releases of the many different components that make up the big data world. Other challenges abound:
Containers are an ideal tool for developers to use when shifting between on-premises, private cloud, and public cloud architectures. Because containers package an application together with its dependencies and are largely independent of the underlying infrastructure, they can be moved quickly and with minimal disruption. Some organizations even use multiple public clouds, and shift workloads back and forth depending upon price and special offers from the service providers. Containers make this process simple.
You can think of containers as operating system virtualization, in which workloads share operating system (OS) resources. Though they have been around for just a few years, their adoption rate and acceptance are impressive, to say the least. In one major global study of 1,100 senior IT and business executives, 40% of respondents said they are already using containers in production, half of those with mission-critical workloads. Only 13% said they have no plans to use containers this year, with the remainder all making plans for container usage.
Containers can do a lot of what virtual machines (VMs) cannot in a development environment. They can be launched or discarded almost instantly, which VMs cannot. Unlike VMs, containers carry no per-instance guest OS overhead. And containers are destined to play a major role in facilitating the seamless transfer of development work from one environment or platform to another.
However, containers come with issues and challenges of their own. For one, most containerized applications today are stateless. That can be a problem for stateful applications, but there are workarounds, such as leveraging a converged data platform that provides the reliable, persistent storage stateful applications need.
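One common pattern for keeping a container disposable while its data survives is to mount state into an external volume. The sketch below uses Docker Compose syntax; the image, password, and volume name are illustrative placeholders, not a recommendation for any specific data platform:

```yaml
# Illustrative docker-compose.yml: a named volume keeps database state
# outside the container, so the container itself remains disposable.
services:
  db:
    image: postgres:16                      # example stateful workload
    environment:
      POSTGRES_PASSWORD: example            # placeholder only
    volumes:
      - db-data:/var/lib/postgresql/data    # state lives in the volume

volumes:
  db-data:    # survives container restarts and re-creation
```

With this layout, the `db` container can be destroyed and recreated at will, while `db-data` persists independently on the host or on whatever storage backend the volume driver provides.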
As with big data platforms, containers do provoke data security concerns, but this is not unusual for relatively new technologies deployed in critical areas such as application development. The simplest example is two different containerized applications, say a web server and a financial application, launched on the same physical server and sharing the local server storage. With sensitive information involved, extra effort must be made to ensure the data is properly secured.
Using a persistent data store with security built in can minimize the risk. The data store should offer a pre-built, certified container image with predefined permissions, security tickets, secure authentication at the container level, encryption, and support for configuration via Dockerfile scripts. Don’t be afraid of getting into these technologies; just keep yourself well educated on their current state and on the hurdles others have faced while implementing their solutions, and share as much of your own experience as possible to help others.
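A few of the hardening measures mentioned above can be expressed directly in a Dockerfile. The base image name, user, and paths below are hypothetical placeholders, not a vendor-certified configuration:

```dockerfile
# Illustrative Dockerfile hardening; the image and paths are placeholders.
# A certified image from a data-platform vendor would typically ship with
# steps like these already applied.
FROM some-certified-datastore:latest

# Create a dedicated non-root user and lock down the data directory
RUN useradd --system --no-create-home appuser \
 && mkdir -p /data \
 && chown appuser:appuser /data \
 && chmod 700 /data

# Run the container as the unprivileged user, with state in a volume
USER appuser
VOLUME /data
```

Running as a non-root user and restricting filesystem permissions limits the blast radius if one container on a shared host is compromised, which addresses the co-located web server and financial application scenario described earlier.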