Merged branch 'v116'. Following changes:
 * Methods to check whether row key exists
 * Method to 'enable' table.
 * Removed deprecated rowkey annotation
 * Fixed generic type for persist and delete methods
m-manu committed May 26, 2020
1 parent 9f3ea66 commit e1e3431
Showing 9 changed files with 165 additions and 82 deletions.
45 changes: 26 additions & 19 deletions README.md
@@ -2,16 +2,15 @@

[![Build Status](https://api.travis-ci.org/flipkart-incubator/hbase-orm.svg?branch=master&status=passed)](https://travis-ci.org/github/flipkart-incubator/hbase-orm)
[![Coverage Status](https://coveralls.io/repos/github/flipkart-incubator/hbase-orm/badge.svg?branch=master)](https://coveralls.io/github/flipkart-incubator/hbase-orm?branch=master)
-[![Maven Central](https://img.shields.io/badge/sonatype-1.15-orange.svg)](https://oss.sonatype.org/content/repositories/releases/com/flipkart/hbase-object-mapper/1.15/)
+[![Maven Central](https://img.shields.io/badge/sonatype-1.16-orange.svg)](https://oss.sonatype.org/content/repositories/releases/com/flipkart/hbase-object-mapper/1.16/)
[![License](https://img.shields.io/badge/License-Apache%202-blue.svg)](./LICENSE.txt)

## Introduction
-An ultra-light-weight HBase ORM library that enables:
+HBase ORM is a light-weight, thread-safe and performant library that enables:

1. object-oriented access of HBase rows (Data Access Object) with minimal code and good testability
2. reading from and/or writing to HBase tables in Hadoop MapReduce jobs


## Usage
Let's say you have an HBase table `citizens` with row-key format `country_code#UID`. Now, let's say this table is created with three column families `main`, `optional` and `tracked`, which may have columns (qualifiers) `uid`, `name`, `salary` etc.
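A bean-like class for such a table maps each column to a field and composes/parses the row key. Below is a minimal sketch; the field set and the `salary` mapping are illustrative assumptions, not necessarily the bundled example (see the linked source files further down):

```java
import com.flipkart.hbaseobjectmapper.*;

// Minimal sketch of a bean-like class for the 'citizens' table
@HBTable(name = "citizens", families = {@Family(name = "main"), @Family(name = "optional")})
public class Citizen implements HBRecord<String> {
    private String countryCode;
    private Integer uid;

    @HBColumn(family = "main", column = "name")
    private String name;

    @HBColumn(family = "optional", column = "salary")
    private Integer salary;

    @Override
    public String composeRowKey() {
        return countryCode + "#" + uid; // row-key format: country_code#UID
    }

    @Override
    public void parseRowKey(String rowKey) {
        String[] parts = rowKey.split("#");
        this.countryCode = parts[0];
        this.uid = Integer.parseInt(parts[1]);
    }
}
```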

@@ -115,14 +114,15 @@ See source files [Citizen.java](./src/test/java/com/flipkart/hbaseobjectmapper/t

### Serialization / Deserialization mechanism

* Serialization and deserialization are handled through 'codecs'.
* The default codec (called [BestSuitCodec](./src/main/java/com/flipkart/hbaseobjectmapper/codec/BestSuitCodec.java)) included in this library has the following behavior:
  * uses HBase's native methods to serialize objects of data types `Boolean`, `Short`, `Integer`, `Long`, `Float`, `Double`, `String` and `BigDecimal` (see: [Bytes](https://hbase.apache.org/2.0/devapidocs/org/apache/hadoop/hbase/util/Bytes.html))
  * uses [Jackson's JSON serializer](https://en.wikipedia.org/wiki/Jackson_(API)) for all other data types
  * serializes `null` as `null`
* To customize serialization/deserialization behavior, you may define your own codec (by implementing the [Codec](./src/main/java/com/flipkart/hbaseobjectmapper/codec/Codec.java) interface) or you may extend the default codec.
* The optional parameter `codecFlags` (supported by both `@HBColumn` and `@HBColumnMultiVersion` annotations) can be used to pass custom flags to the underlying codec (e.g. you may want your codec to serialize field `Integer id` in `Citizen` class differently from field `Integer id` in `Employee` class; see the sketch after this list).
  * The default codec class `BestSuitCodec` takes a flag `BestSuitCodec.SERIALIZE_AS_STRING`, whose value is "serializeAsString" (as in the above `Citizen` class example). When this flag is set to `true` on a field, the default codec serializes that field (even numerical fields) as a string.
-  * Your custom codec may take other such flags to customize serialization/deserialization behavior at a **class field level**.
+  * Your custom codec may take other such flags as inputs to customize serialization/deserialization behavior at a **class field level**.
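A minimal sketch of such a field-level flag, using the default codec's `SERIALIZE_AS_STRING` flag (the `pincode` field here is an assumed example, not necessarily from the bundled `Citizen` class):

```java
// Hypothetical field inside a bean-like class such as Citizen:
// the flag below makes BestSuitCodec store this Integer as the bytes of
// its string form (e.g. "560034") instead of HBase's native int bytes
@HBColumn(family = "optional", column = "pincode",
        codecFlags = {@Flag(name = BestSuitCodec.SERIALIZE_AS_STRING, value = "true")})
private Integer pincode;
```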

## Using this library for database access (DAO)
This library provides an abstract class to define your own [data access object](https://en.wikipedia.org/wiki/Data_access_object). For example, you can create one for the `Citizen` class in the above example as follows:
@@ -132,7 +132,7 @@ import org.apache.hadoop.hbase.client.Connection;
import java.io.IOException;

public class CitizenDAO extends AbstractHBDAO<String, Citizen> {
-    // in above, String is the row type of Citizen
+    // in above, String is the 'row type' of Citizen

public CitizenDAO(Connection connection) throws IOException {
super(connection); // if you need to customize your codec, you may use super(connection, codec)
@@ -235,10 +235,10 @@ Read data from HBase using HBase's native `Get`:

```java
Get get1 = counterDAO.getGet("IND#2"); // returns an HBase Get object corresponding to row key "IND#2", to enable advanced read patterns
-counterDAO.getOnGets(get1);
+Counter counter1 = counterDAO.getOnGet(get1);

Get get2 = counterDAO.getGet("IND#2").setTimeRange(1, 5).setMaxVersions(2); // advanced HBase row fetch
-counterDAO.getOnGets(get2);
+Counter counter2 = counterDAO.getOnGet(get2);
```
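The DAO also gains row-existence checks in this release (see the `AbstractHBDAO` diff below); a usage sketch with the `citizenDao` from earlier (row keys are illustrative):

```java
// Sketch: check whether a row exists before fetching it (exists() is new in this version)
if (citizenDao.exists("IND#2")) {
    Citizen citizen = citizenDao.get("IND#2");
}

// bulk variant: one true/false per row key
boolean[] present = citizenDao.exists(new String[]{"IND#1", "IND#2"});
```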

Manipulate and persist an object back to HBase:
@@ -307,15 +307,15 @@ Once instantiated, you may do the following DDL operations:
hbAdmin.createTable(Citizen.class);
// Above statement creates table with name and column families specification as per the @HBTable annotation on the Citizen class

-hbAdmin.tableExists(Citizen.class); // returns true
+hbAdmin.tableExists(Citizen.class); // returns true/false

hbAdmin.disableTable(Citizen.class);

hbAdmin.deleteTable(Citizen.class);

```

-Note that **all** of the above are very heavy and time-consuming operations.
+Note that DDL operations on HBase are typically heavy and time-consuming.
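This release also adds a method to enable a table (per the change list in the commit message); a guarded-setup sketch, assuming `enableTable` mirrors `disableTable`:

```java
// Sketch: create the table only if it doesn't already exist
if (!hbAdmin.tableExists(Citizen.class)) {
    hbAdmin.createTable(Citizen.class);
}

// a table disabled for maintenance can be brought back online:
hbAdmin.disableTable(Citizen.class);
hbAdmin.enableTable(Citizen.class);
```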

## Using this library in MapReduce jobs

@@ -325,6 +325,7 @@ If your MapReduce job is reading from an HBase table, in your `map()` method, HB
```java
T readValue(ImmutableBytesWritable rowKey, Result result, Class<T> clazz)
```
+where `T` is your bean-like class that implements this library's `HBRecord` interface (e.g. the `Citizen` class above).

For example:

@@ -336,11 +337,13 @@ Citizen e = hbObjectMapper.readValue(key, value, Citizen.class);
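A fuller `map()` sketch along these lines (the mapper's output types and the `getCountryCode()` accessor are assumptions for illustration):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import com.flipkart.hbaseobjectmapper.HBObjectMapper;

public class CitizenMapper extends TableMapper<Text, LongWritable> {
    private final HBObjectMapper hbObjectMapper = new HBObjectMapper();

    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
            throws IOException, InterruptedException {
        // deserialize the HBase row into a Citizen object
        Citizen citizen = hbObjectMapper.readValue(key, value, Citizen.class);
        context.write(new Text(citizen.getCountryCode()), new LongWritable(1L));
    }
}
```
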
If your MapReduce job is writing to an HBase table, in your `reduce()` method, an object of your bean-like class can be converted to HBase's `Put` (for row contents) and `ImmutableBytesWritable` (for row key) using the below methods:

```java
-ImmutableBytesWritable getRowKey(HBRecord<R> obj)
+ImmutableBytesWritable getRowKey(T record)
```
```java
-Put writeValueAsPut(HBRecord<R> obj)
+Put writeValueAsPut(T record)
```
+where `T` is your bean-like class that implements this library's `HBRecord` interface (e.g. the `Citizen` class above).

For example, the below code in a Reducer writes your object as one HBase row with appropriate column families and columns:

```java
@@ -353,11 +356,13 @@ If your MapReduce job is reading from an HBase table, you would want to unit-tes
An object of your bean-like class can be converted to HBase's `Result` (for row contents) and `ImmutableBytesWritable` (for row key) using the below methods:

```java
-ImmutableBytesWritable getRowKey(HBRecord<R> obj)
+ImmutableBytesWritable getRowKey(T record)
```
```java
-Result writeValueAsResult(HBRecord<R> obj)
+Result writeValueAsResult(T record)
```
+where `T` is your bean-like class that implements this library's `HBRecord` interface (e.g. the `Citizen` class above).
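In a test, the Mapper's input pair can therefore be fabricated from a bean (a sketch; `Citizen` field population is elided):

```java
// Sketch: build the (row key, Result) pair a Mapper under test would receive
Citizen citizen = new Citizen(); // populate fields as needed
ImmutableBytesWritable rowKey = hbObjectMapper.getRowKey(citizen);
Result result = hbObjectMapper.writeValueAsResult(citizen);
// feed (rowKey, result) to the Mapper under test (e.g. via MRUnit's withInput)
```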

Below is an example of a unit test of a Mapper using [MRUnit](https://attic.apache.org/projects/mrunit.html):

```java
@@ -382,6 +387,8 @@ HBase's `Put` object can be converted to your object of your bean-like class usin
```java
T readValue(ImmutableBytesWritable rowKey, Put put, Class<T> clazz)
```
+where `T` is your bean-like class that implements this library's `HBRecord` interface (e.g. the `Citizen` class above).


Below is an example of a unit test of a Reducer using [MRUnit](https://attic.apache.org/projects/mrunit.html):

@@ -398,11 +405,11 @@ CitizenSummary citizenSummary = hbObjectMapper.readValue(
```

## Advantages
-* Your application code will be clean and minimal.
+* Your application code will be **clean** and **minimal**.
* Your code need not worry about HBase methods or serialization/deserialization at all, thereby helping you maintain clear [separation of concerns](https://en.wikipedia.org/wiki/Separation_of_concerns).
-* Classes are **thread-safe**. You just have to instantiate your DAO classes once at the start of your application and use them anywhere!
-* Light weight: This library depends on just HBase Client and few other small libraries. It has very low overhead and hence is very fast.
-* Customizability/Extensibility: Want to use HBase native methods directly in some cases? No problem. Want to customize ser/deser in general or for a given class field? No problem. This library is high flexible.
+* Classes are **thread-safe**. You just have to instantiate your DAO classes once at the start of your application and use them anywhere throughout the life-cycle of your application!
+* **Light weight**: This library depends on just [hbase-client](https://mvnrepository.com/artifact/org.apache.hbase/hbase-client) and a few other small libraries. It has very low overhead and hence is very fast.
+* Customizability/Extensibility: Want to use HBase's native methods directly in some cases? You can do that. Want to customize serialization/deserialization for a given type or for a specific class field? You can do that too. This library is highly flexible.

## Limitations
Being an *object mapper*, this library works for pre-defined columns only. For example, this library doesn't provide ways to fetch:
@@ -417,7 +424,7 @@ Add below entry within the `dependencies` section of your `pom.xml`:
<dependency>
<groupId>com.flipkart</groupId>
<artifactId>hbase-object-mapper</artifactId>
-    <version>1.15</version>
+    <version>1.16</version>
</dependency>
```

@@ -428,7 +435,7 @@ See artifact details: [com.flipkart:hbase-object-mapper on **Maven Central**](ht
To build this project, follow the simple steps below:

1. Do a `git clone` of this repository
-2. Checkout latest stable version `git checkout v1.15`
+2. Checkout latest stable version `git checkout v1.16`
3. Execute `mvn clean install` from shell

### Please note:
16 changes: 5 additions & 11 deletions pom.xml
@@ -4,14 +4,14 @@
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<name>HBase ORM</name>
<description>
-        An ultra-light-weight HBase ORM library that enables:
+        HBase ORM is a light-weight, thread-safe and performant library that enables:
[1] object-oriented access of HBase rows (Data Access Object) with minimal code and good testability
[2] reading from and/or writing to HBase tables in Hadoop MapReduce jobs
</description>
<modelVersion>4.0.0</modelVersion>
<groupId>com.flipkart</groupId>
<artifactId>hbase-object-mapper</artifactId>
-    <version>1.15</version>
+    <version>1.16</version>
<url>https://github.com/flipkart-incubator/hbase-orm</url>
<scm>
<url>https://github.com/flipkart-incubator/hbase-orm</url>
@@ -84,12 +84,6 @@
<version>${version.hbase}</version>
<scope>test</scope>
</dependency>
-        <dependency>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-mapreduce-client-core</artifactId>
-            <version>${version.hadoop}</version>
-            <scope>test</scope>
-        </dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
@@ -105,13 +99,13 @@
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
-            <version>5.5.2</version>
+            <version>5.6.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
-            <version>2.23.4</version>
+            <version>3.3.3</version>
<scope>test</scope>
</dependency>
</dependencies>
@@ -129,7 +123,7 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
-                <version>3.1.1</version>
+                <version>3.2.0</version>
<executions>
<execution>
<id>attach-javadocs</id>
65 changes: 48 additions & 17 deletions src/main/java/com/flipkart/hbaseobjectmapper/AbstractHBDAO.java
@@ -170,7 +170,7 @@ public Get getGet(R rowKey) {
/**
* Fetch an HBase row for a given {@link Get} object
*
-     * @param get HBase's Get object, typically formed using the {@link #getGet(Serializable) getGet} method
+     * @param get HBase's Get object, typically formed using the {@link #getGet(Serializable) getGet(R)} method
* @return HBase row, deserialized as object of your bean-like class (that implements {@link HBRecord})
* @throws IOException When HBase call fails
*/
@@ -507,7 +507,7 @@ public Increment getIncrement(R rowKey) {
* Performs HBase {@link Table#increment} on the given {@link Increment} object <br>
* <br>
* <b>Note</b>: <ul>
-     * <li>You may construct {@link Increment} object using the {@link #getIncrement(Serializable) getIncrement} method</li>
+     * <li>You may construct {@link Increment} object using the {@link #getIncrement(Serializable) getIncrement(R)} method</li>
* <li>Unlike the {@link #increment(Serializable, String, long)} methods, this method skips some validations (hence, be cautious)</li>
* </ul>
*
@@ -533,7 +533,7 @@ public T increment(Increment increment) throws IOException {
* @return <b>Partial object</b> containing (only) value of field that was appended
* @throws IOException When HBase call fails
* @see Table#append(Append)
-     * @see #append(Serializable, Map)
+     * @see #append(Serializable, Map) append(R, Map)
*/
public T append(R rowKey, String fieldName, Object valueToAppend) throws IOException {
Map<String, Object> one = new HashMap<>(1);
@@ -551,10 +551,10 @@ public T append(R rowKey, String fieldName, Object valueToAppend) throws IOExcep
* @return <b>Partial object</b> containing (only) values of fields that were appended
* @throws IOException When HBase call fails
* @see Table#append(Append)
-     * @see #append(Serializable, String, Object)
+     * @see #append(Serializable, String, Object) append(R, String, Object)
*/
public T append(R rowKey, Map<String, Object> valuesToAppend) throws IOException {
-        Append append = new Append(toBytes(rowKey));
+        Append append = getAppend(rowKey);
for (Map.Entry<String, Object> e : valuesToAppend.entrySet()) {
String fieldName = e.getKey();
Field field = getField(fieldName);
@@ -567,10 +567,7 @@ public T append(R rowKey, Map<String, Object> valuesToAppend) throws IOException
hbObjectMapper.valueToByteArray((Serializable) value, hbColumn.codecFlags())
);
}
-        try (Table table = getHBaseTable()) {
-            Result result = table.append(append);
-            return hbObjectMapper.readValueFromResult(result, hbRecordClass);
-        }
+        return append(append);
}


@@ -588,8 +585,8 @@ public Append getAppend(R rowKey) {
* Performs HBase's {@link Table#append} on the given {@link Append} object <br>
* <br>
* <b>Note</b>: <ul>
-     * <li>You may construct {@link Append} object using the {@link #getAppend(Serializable) getAppend} method</li>
-     * <li>Unlike the {@link #append(Serializable, String, Object)} and related methods, this method skips some validations. So, use this only if you need access to HBase's native methods.</li>
+     * <li>You may construct {@link Append} object using the {@link #getAppend(Serializable) getAppend(R)} method</li>
+     * <li>Unlike the {@link #append(Serializable, String, Object) append(R, String, Object)} and related methods, this method skips some validations. So, use this only if you need access to HBase's native methods.</li>
* </ul>
*
* @param append HBase's {@link Append} object
@@ -622,7 +619,7 @@ public List<T> get(R startRowKey, R endRowKey) throws IOException {
* @return Row key of the persisted object, represented as a {@link String}
* @throws IOException When HBase call fails
*/
-    public R persist(HBRecord<R> record) throws IOException {
+    public R persist(T record) throws IOException {
Put put = hbObjectMapper.writeValueAsPut0(record);
try (Table table = getHBaseTable()) {
table.put(put);
@@ -640,7 +637,7 @@ public R persist(HBRecord<R> record) throws IOException {
public List<R> persist(List<T> records) throws IOException {
List<Put> puts = new ArrayList<>(records.size());
List<R> rowKeys = new ArrayList<>(records.size());
-        for (HBRecord<R> record : records) {
+        for (T record : records) {
puts.add(hbObjectMapper.writeValueAsPut0(record));
rowKeys.add(record.composeRowKey());
}
@@ -670,7 +667,7 @@ public void delete(R rowKey) throws IOException {
* @param record Object to delete
* @throws IOException When HBase call fails
*/
-    public void delete(HBRecord<R> record) throws IOException {
+    public void delete(T record) throws IOException {
this.delete(record.composeRowKey());
}

@@ -698,7 +695,7 @@ public void delete(R[] rowKeys) throws IOException {
*/
public void delete(List<T> records) throws IOException {
List<Delete> deletes = new ArrayList<>(records.size());
-        for (HBRecord<R> record : records) {
+        for (T record : records) {
deletes.add(new Delete(toBytes(record.composeRowKey())));
}
try (Table table = getHBaseTable()) {
@@ -851,7 +848,7 @@ public NavigableMap<R, NavigableMap<Long, Object>> fetchFieldValues(R startRowKe
}

/**
-     * Fetch column values for a given array of row keys (bulk variant of method {@link #fetchFieldValue(Serializable, String)})
+     * Fetch column values for a given array of row keys (bulk variant of method {@link #fetchFieldValue(Serializable, String) fetchFieldValue(R, String)})
*
* @param rowKeys Array of row keys to fetch
* @param fieldName Name of the private variable of your bean-like object (of a class that implements {@link HBRecord}) whose corresponding column needs to be fetched
@@ -882,7 +879,7 @@ public Map<R, NavigableMap<Long, Object>> fetchFieldValues(R[] rowKeys, String f
get.addColumn(hbColumn.familyBytes(), hbColumn.columnBytes());
gets.add(get);
}
-        Map<R, NavigableMap<Long, Object>> map = new HashMap<>(rowKeys.length, 1.0f);
+        Map<R, NavigableMap<Long, Object>> map = new LinkedHashMap<>(rowKeys.length, 1.0f);
try (Table table = getHBaseTable()) {
Result[] results = table.get(gets);
for (Result result : results) {
@@ -901,4 +898,38 @@ public Map<R, NavigableMap<Long, Object>> fetchFieldValues(R[] rowKeys, String f
public byte[] toBytes(R rowKey) {
return hbObjectMapper.rowKeyToBytes(rowKey, hbTable.getCodecFlags());
}

+    /**
+     * Check whether a row exists or not
+     *
+     * @param rowKey Row key
+     * @return <code>true</code> if row with given row key exists
+     * @throws IOException When HBase call fails
+     */
+    public boolean exists(R rowKey) throws IOException {
+        try (Table table = getHBaseTable()) {
+            return table.exists(new Get(
+                    toBytes(rowKey)
+            ));
+        }
+    }
+
+    /**
+     * Check whether the specified rows exist or not
+     *
+     * @param rowKeys Row keys
+     * @return Array of <code>true</code>/<code>false</code> values indicating whether rows with the given row keys exist
+     * @throws IOException When HBase call fails
+     */
+    public boolean[] exists(R[] rowKeys) throws IOException {
+        List<Get> gets = new ArrayList<>(rowKeys.length);
+        for (R rowKey : rowKeys) {
+            gets.add(new Get(
+                    toBytes(rowKey)
+            ));
+        }
+        try (Table table = getHBaseTable()) {
+            return table.exists(gets);
+        }
+    }
}
