MapR Connector Import and Export Options

Import Options

Import Option Description
--as-avrodatafile, --as-textfile The format of the data files imported into MapR-FS. An 'hcat' or 'hive' job type supports the 'rcfile', 'orcfile', and 'textfile' file formats. To set the file format, use the -D command line option. For example:
sqoop import -D tdch.fileformat="orcfile" --connect jdbc:teradata://<hostname>/database=test --connection-manager org.apache.connectors.td.TeradataManager …
sqoop import --connect jdbc:teradata://<hostname>/database=test --connection-manager org.apache.connectors.td.TeradataManager --as-avrodatafile ...
--target-dir The MapR-FS destination directory.
--num-mappers The number of mappers for the import job. The default value is 4.
--query The SQL query to select data from a Teradata database. This option works only with the textfile and avrofile formats.
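For example, a free-form query import might look like the following sketch. The table, column names, and paths are placeholders; note that upstream Sqoop requires a $CONDITIONS token in free-form queries and typically pairs --query with --split-by and --target-dir, though connector behavior may vary:

```shell
# Illustrative free-form query import (placeholder hostname, credentials, and paths)
sqoop import --connect jdbc:teradata://<hostname>/database=test \
  --connection-manager org.apache.connectors.td.TeradataManager \
  --username test --password test \
  --query 'SELECT id, name FROM test WHERE $CONDITIONS' \
  --split-by id \
  --target-dir /user/mapr/test-query
```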
--table The name of the source table in a Teradata system from which the data is imported.
--columns The names of columns to import from the source table in a Teradata system, in comma-separated format. For example:
sqoop import  --connect jdbc:teradata://<hostname>/database=test --connection-manager org.apache.connectors.td.TeradataManager --username test --password test --table test  --columns id,name 
--hive-table The name of the target table in Hive or HCatalog.
--fields-terminated-by The field separator to use with the imported files. This parameter is only applicable with the textfile file format. The default value is \t.
--split-by The column of the table used to split work units.
--map-column-hive Override mapping from SQL to Hive type for configured columns.
--where WHERE clause to use during import.
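The table-based options above can be combined in a single import. A hedged sketch (the table name, predicate, separator, and target directory are illustrative, not taken from the source):

```shell
# Illustrative import of a filtered subset as comma-delimited text files
sqoop import --connect jdbc:teradata://<hostname>/database=test \
  --connection-manager org.apache.connectors.td.TeradataManager \
  --username test --password test \
  --table test \
  --where "id > 1000" \
  --split-by id \
  --fields-terminated-by ',' \
  --target-dir /user/mapr/test-subset
```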
--staging-table¹ The table for staging data before insertion into the destination table. Only applicable when using input-method split.by.partition.
--num-partitions-for-staging-table¹ The number of partitions to create when auto-creating the staging table. Only applicable when using input-method split.by.partition.
--staging-database¹ The database for creating the staging table. Only applicable when using input-method split.by.partition.
--staging-force¹ Option to force the connector to create a staging table, if supported. Only applicable when using input-method split.by.partition.
--input-method¹ The input method to use to transfer data from Teradata.
Supported values:
  • split.by.amp
  • split.by.value
  • split.by.partition
  • split.by.hash
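For example, a partition-based import with an explicit staging table might be written as the following sketch (these options require Sqoop-1.4.6-1707 or later per the footnote below; the staging table name and target directory are illustrative):

```shell
# Illustrative split.by.partition import with a forced staging table
sqoop import --connect jdbc:teradata://<hostname>/database=test \
  --connection-manager org.apache.connectors.td.TeradataManager \
  --username test --password test \
  --table test \
  --input-method split.by.partition \
  --staging-table test_staging \
  --staging-force \
  --target-dir /user/mapr/test-partitioned
```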
--batch-size¹ The number of rows processed per batch.
--access-lock¹ Option to apply an access lock on the database.
--query-band¹ The arbitrary query bands to be set for all queries the connector runs. Specify the query bands using a semicolon-separated key=value format.
--skip-xviews¹ Option to switch to the non-X version of system views to obtain metadata.
--date-format¹ Custom format for date columns.
--time-format¹ Custom format for time columns.
--timestamp-format¹ Custom format for timestamp columns.
Use the -D command line option to set the file format. For example:
-D tdch.fileformat="fileformat"
The following Sqoop import options are unsupported:
  • --append
  • --compression-codec
  • --direct
  • --direct-split-size
  • --compress, -z
  • --check-column
  • --incremental
  • --last-value
  • --mysql-delimiters
  • --optionally-enclosed-by
  • --hive-delims-replacement
  • --hive-drop-import-delims
  • --hive-partition-key
  • --hive-partition-value
  • --column-family
  • --hbase-create-table
  • --hbase-row-key
  • --hbase-table
  • --map-column-java
  • --fetch-size
  • --as-sequencefile

Export Options

Export Option Description
--table The name of the target table in a Teradata system.
--export-dir The MapR-FS directory containing the source files to export.
--num-mappers The number of mappers for the export job. The default value is 4.
--columns The names of fields to export to the target table in a Teradata system, in comma-separated format. For export from MapR-FS, you can only use this option with the avrofile format.
--staging-table The table in which data will be staged before being inserted into the destination table.
--keep-staging-table¹ Option to retain the staging table after a failure.
--staging-database¹ The database for creating the staging table.
--staging-force¹ Option to force the connector to create a staging table, if supported.
--output-method¹ The output method to use to transfer data to Teradata.
Supported values:
  • batch.insert
  • internal.fastload
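For example, an export using the fastload path might look like the following sketch (these options require Sqoop-1.4.6-1707 or later per the footnote below; the error-table prefix and export directory are illustrative):

```shell
# Illustrative export via internal.fastload with an error-table prefix
sqoop export --connect jdbc:teradata://<hostname>/database=test \
  --connection-manager org.apache.connectors.td.TeradataManager \
  --username test --password test \
  --table test \
  --export-dir /user/mapr/test-export \
  --output-method internal.fastload \
  --error-table test_err
```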
--query-band¹ The arbitrary query bands to be set for all queries that the connector runs. Specify the query bands using a semicolon-separated key=value format.
--error-table¹ Prefix name for error tables. Only applicable when using output-method internal.fastload.
--error-database¹ Override for the default error database name. Only applicable when using output-method internal.fastload.
--fastload-socket-hostname¹ Hostname or IP address of the host on which Sqoop runs. If not set, the connector auto-detects the host. Only applicable when using output-method internal.fastload.
--fastload-socket-port¹ The host port that fastload tasks use to synchronize state. Only applicable when using output-method internal.fastload.
--fastload-socket-timeout¹ The timeout value the server socket uses for fastload task connections. Only applicable when using output-method internal.fastload.
--skip-xviews¹ Option to switch to the non-X version of system views to obtain metadata.
--date-format¹ Custom format for date columns.
--time-format¹ Custom format for time columns.
--timestamp-format¹ Custom format for timestamp columns.
Use the -D command line option to set the file format. For example:
-D tdch.fileformat="fileformat"
Supported export file format values are:
  • textfile (default format)
  • avrofile
  • orcfile
  • rcfile
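For example, an export of Avro source files might set the file format and an explicit column list as in the following sketch (per the --columns description above, a column list on export is only usable with the avrofile format; the directory and columns here are illustrative):

```shell
# Illustrative export of avrofile data with an explicit column list
sqoop export -D tdch.fileformat="avrofile" \
  --connect jdbc:teradata://<hostname>/database=test \
  --connection-manager org.apache.connectors.td.TeradataManager \
  --username test --password test \
  --table test \
  --export-dir /user/mapr/test-avro \
  --columns id,name
```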
The following Sqoop export options are unsupported:
  • --batch
  • --clear-staging-table
  • --direct
  • --update-key
  • --update-mode
  • --input-lines-terminated-by
  • --input-optionally-enclosed-by
  • --map-column-java
  • --as-sequencefile
¹ Only available starting in Sqoop-1.4.6-1707.