Like most commercial cloud platforms, Google Cloud offers a range of storage options. The most common are persistent disk volumes attached to Virtual Machine instances and object store buckets accessed via the Google Cloud Storage APIs. Until recently, disk volumes were the only supported storage for Hadoop deployments on Google Cloud. That situation changed for the better with the release of the Google Cloud Storage Connector for Hadoop.
The connector enables the Hadoop cluster to access Google Storage buckets via the standard Hadoop File System interface. Users can then access their data in Google Storage buckets just as they would access data ingested directly into a Hadoop cluster.
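To illustrate how the connector plugs into Hadoop's pluggable File System mechanism, a minimal core-site.xml fragment might look like the following. The property names shown are the connector's configuration keys; the project ID value is a placeholder, and your deployment may need additional authentication settings:

```xml
<!-- core-site.xml: map the gs:// URI scheme to the GCS connector.
     The project ID below is an illustrative placeholder. -->
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.gs.project.id</name>
  <value>your-gcp-project-id</value>
</property>
```

With this in place, any path of the form gs://bucket/path is resolved through the connector, just like an hdfs:// or maprfs:// path.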
Integrating the connector with the MapR Distribution for Apache Hadoop follows the standard procedure, while bringing with it all the operational and performance advantages of the top-ranked Hadoop distribution (see the Forrester Wave Report and Google MinuteSort record).
Here are some steps to get you started quickly with the connector:
For basic cluster deployment, use the MapR setup scripts available on GitHub (https://github.com/mapr/gce). The deployed instances will be authorized to access the Google Cloud Storage buckets within your account. Then use the configure-gcs.sh script, available in the same repository, to configure the cluster nodes; the script must be executed on every node in the cluster.
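The deployment flow above can be sketched as a shell session. The configure-gcs.sh script name comes from the repository; the node list and the ssh/scp distribution loop are assumptions for illustration (a real MapR cluster might use clush or another parallel shell instead):

```shell
# Fetch the MapR GCE setup scripts (repository cited in the article).
git clone https://github.com/mapr/gce.git
cd gce

# Run the connector configuration script on every cluster node.
# NODES is a hypothetical list of your instance hostnames.
NODES="mapr-node-0 mapr-node-1 mapr-node-2"
for node in $NODES; do
  scp configure-gcs.sh "$node":/tmp/
  ssh "$node" 'bash /tmp/configure-gcs.sh'
done
```

This is an operational sketch against a live cluster, not a turnkey script; adapt the hostnames and any script arguments to your deployment.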
After you complete these steps, your cluster will have full access to any data in your Google Cloud Storage buckets. You can access it via the Hadoop command line, a custom MapReduce job, or even as Hive tables for structured queries.
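For example, once the connector is configured, a bucket can be addressed by its gs:// URI from the usual tools. The bucket, path, jar, and table names below are placeholders:

```shell
# List and read objects in a bucket through the Hadoop CLI.
hadoop fs -ls gs://my-bucket/data/
hadoop fs -cat gs://my-bucket/data/sample.csv

# Use a bucket as job input and output (jar and class are placeholders).
hadoop jar my-job.jar com.example.MyJob \
  gs://my-bucket/data/ gs://my-bucket/output/

# Expose the same data to Hive as an external table.
hive -e "CREATE EXTERNAL TABLE logs (line STRING)
         LOCATION 'gs://my-bucket/data/';"
```

Because the bucket is just another Hadoop file system path, no data needs to be copied into the cluster before querying it.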