Java Code Examples for org.apache.hadoop.hive.metastore.api.Table#getPartitionKeysSize()

The following examples show how to use org.apache.hadoop.hive.metastore.api.Table#getPartitionKeysSize(). Each example is taken from an open-source project; the source file and license are noted above it.
Example 1
Source File: HiveClientWrapper.java    From pxf with Apache License 2.0
/**
 * Populates the given metadata object with the given table's fields and partitions.
 * The partition fields are added at the end of the table schema.
 * Throws an exception if the table contains unsupported field types.
 * Supported HCatalog types: TINYINT,
 * SMALLINT, INT, BIGINT, BOOLEAN, FLOAT, DOUBLE, STRING, BINARY, TIMESTAMP,
 * DATE, DECIMAL, VARCHAR, CHAR.
 *
 * @param tbl      Hive table
 * @param metadata schema of given table
 */
public void getSchema(Table tbl, Metadata metadata) {

    int hiveColumnsSize = tbl.getSd().getColsSize();
    int hivePartitionsSize = tbl.getPartitionKeysSize();

    LOG.debug("Hive table: {} fields. {} partitions.", hiveColumnsSize, hivePartitionsSize);

    // check hive fields
    try {
        List<FieldSchema> hiveColumns = tbl.getSd().getCols();
        for (FieldSchema hiveCol : hiveColumns) {
            metadata.addField(HiveUtilities.mapHiveType(hiveCol));
        }
        // check partition fields
        List<FieldSchema> hivePartitions = tbl.getPartitionKeys();
        for (FieldSchema hivePart : hivePartitions) {
            metadata.addField(HiveUtilities.mapHiveType(hivePart));
        }
    } catch (UnsupportedTypeException e) {
        String errorMsg = "Failed to retrieve metadata for table " + metadata.getItem() + ". " +
                e.getMessage();
        throw new UnsupportedTypeException(errorMsg);
    }
}
 
Example 2
Source File: DropTableService.java    From circus-train with Apache License 2.0
/**
 * Drops the table and its associated data. If the table is unpartitioned the table location is used. If the table is
 * partitioned then the data will be dropped from each partition location.
 * 
 * @throws Exception if the table or its data can't be deleted.
 */
public void dropTableAndData(
    CloseableMetaStoreClient client,
    String databaseName,
    String tableName,
    DataManipulator dataManipulator)
  throws Exception {
  Table table = getTable(client, databaseName, tableName);
  if (table != null) {
    String replicaLocation = table.getSd().getLocation();
    if (table.getPartitionKeysSize() == 0) {
      deleteData(dataManipulator, replicaLocation);
    } else {
      deletePartitionData(client, table, dataManipulator);
    }
    removeTableParamsAndDrop(client, table, databaseName, tableName);
  }
}
 
Example 3
Source File: HiveCatalog.java    From flink with Apache License 2.0
private boolean isTablePartitioned(Table hiveTable) {
	return hiveTable.getPartitionKeysSize() != 0;
}
 
Example 4
Source File: Replica.java    From circus-train with Apache License 2.0
private boolean isUnpartitioned(Table table) {
  return table.getPartitionKeysSize() == 0;
}
 
Example 5
Source File: HiveCatalog.java    From flink with Apache License 2.0
private static boolean isTablePartitioned(Table hiveTable) {
	return hiveTable.getPartitionKeysSize() != 0;
}