BMC Control-M for Hadoop is the market-leading Hadoop workload automation solution: it replaces Oozie, simplifies Hadoop batch processing, and connects Hadoop with your enterprise workflows. Native integration with Hadoop and enterprise applications, along with a simple and intuitive interface, makes it easy for staff to create jobs, collaborate, and schedule and manage workflows. Integration with Pig, Hive, Sqoop, MapReduce, HDFS, and other Apache projects simplifies the development of workflows for Hadoop applications, accelerating application implementation. The ability to monitor and manage Hadoop workflows with predictive analytics and automated alerts enables early problem detection and quick resolution, improving on-time service delivery of your big data analytics.
BMC Software helps leading companies around the world put technology at the forefront of business transformation, improving the delivery and consumption of digital services. From mainframe to cloud to mobile, BMC delivers innovative IT management solutions that have enabled more than 15,000 customers to turn complex technology into extraordinary business performance, increasing their agility and exceeding anything they previously thought possible. As Apache Hadoop deployments grow and expand within organizations, BMC solutions make it easier for those organizations to deploy and manage their big data environments.
Batch processes deliver a significant portion of the insights from Hadoop and big data applications. BMC Control-M automates the building and delivery of Hadoop batch services by connecting Hadoop to traditional applications and technologies such as file transfers, relational databases, Informatica, SAP, and others. As the leader in enterprise workload automation, Control-M delivers ease of use through graphical and mobile interfaces, the highest levels of service quality through service-level management, and auditing and reporting for comprehensive governance.
BMC Control-M for Hadoop key benefits:
Native support for HDFS, MapReduce, Pig, Hive and Sqoop
Reduce scripting required to develop, test and run Hadoop applications
Manage entire business processes that include Hadoop, Business Intelligence and Analytics, ERPs, File Transfers, Relational Databases and other “traditional” IT components
Gain sophisticated SLA Management, Forecasting, Auditing and Reporting
Provide simple and intuitive access to Hadoop and enterprise workflows via a graphical web application and mobile app (iOS and Android) with Control-M Self Service
Another key requirement for managing Hadoop technology in the enterprise is the need to identify all components and to understand how they mesh with other applications and infrastructure to deliver services. BMC Atrium Discovery and Dependency Mapping helps organizations identify traditional technologies and infrastructure as well as Hadoop clusters, and then map the relationships among all these components. This is essential for organizations that adhere to ITIL or any other IT best practice.
BMC Atrium Discovery and Dependency Mapping key benefits:
Restore service faster by replacing dependence on tribal knowledge with reliable configuration and relationship data
Minimize change risks by empowering your Change Advisory Board with trusted dependency data to evaluate change impact
Reduce software audit prep time and easily prove inventory accuracy to vendors, reducing risk of non-compliance penalties
Prevent outages when moving data center assets for consolidation, cloud, and virtualization projects
Control-M for Hadoop is installed on a Control-M/Agent running on Linux and on the Control-M/EM client. After installation, define a connection profile for each Hadoop cluster, all through an intuitive user interface. Please contact BMC Support to obtain the Control-M for Hadoop Administrator Guide.
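Alongside the graphical interface, Control-M also supports a jobs-as-code approach through its Automation API, in which a connection profile can be expressed as JSON. The fragment below is an illustrative sketch only: the field names (`Type`, `TargetAgent`, `TargetCTM`) and all values are assumptions for this example, not confirmed syntax; consult the Administrator Guide for the exact schema.

```json
{
    "SALES_CLUSTER": {
        "Type": "ConnectionProfile:Hadoop",
        "TargetAgent": "edgenode01.example.com",
        "TargetCTM": "ctm-server"
    }
}
```

In this sketch, the agent named in `TargetAgent` would be the Linux Control-M/Agent host that can reach the Hadoop cluster, so one profile is defined per cluster as described above.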
Once you have successfully validated your connection profile, you can define your first Hadoop job or workflow, such as Pig, Hive, and many others. Each job type prompts for its relevant parameters, all through the same intuitive user interface. For example, in the case of Hive: browse to and select your Hive script, provide the script's parameters, and optionally define environment properties as well as pre/post commands such as an HDFS put. You can then order and execute your job, or the entire workflow, either immediately or according to your scheduling definitions.
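As an illustration of the kind of definition these steps produce, a Hive job with a pre-command HDFS put might look roughly like the following in the Automation API's JSON jobs-as-code form. All folder, job, profile, and path names here are hypothetical, and the field spellings (`HiveScript`, `Parameters`, `PreCommands`) are assumptions from the general Automation API style rather than confirmed syntax; the Administrator Guide documents the exact schema.

```json
{
    "DailyReports": {
        "Type": "Folder",
        "HiveDailyReport": {
            "Type": "Job:Hadoop:Hive",
            "ConnectionProfile": "SALES_CLUSTER",
            "Host": "edgenode01.example.com",
            "HiveScript": "/home/etl/scripts/daily_report.hql",
            "Parameters": [
                { "run_date": "%%ODATE" }
            ],
            "PreCommands": {
                "Commands": [
                    { "put": "/local/data/sales.csv /user/etl/input/" }
                ]
            }
        }
    }
}
```

The pre-command stages a local file into HDFS before the Hive script runs, which is the "HDFS put" case mentioned above; scheduling is then handled by ordering the folder or attaching scheduling definitions to it.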
For any assistance, please contact BMC Support.