Microservices News

Interview with Rick Hightower about QBit Microservices and Java Reactive programming with Reakt

This is a short interview with Rick Hightower about Reakt, a Java reactive programming lib for promises and streams.

Why did you create Reakt?

Good question. Many of the ideas in Reakt were already in QBit. The problem is that QBit is a full microservices lib with health, monitoring, a typed actor system, service discovery, etc. QBit’s focus is on reactive microservices with Java.

Read the rest of the interview here: Interview about Reakt Reactive Java Lib with Promises and Streams and QBit Reactive Microservices Lib. 

Kafka Clustering, Consumer Failover, Broker Failover

posted May 16, 2017, 7:59 AM by Rick Hightower

This tutorial covers Kafka clustering and replicated topics. It demonstrates consumer failover and broker failover, as well as load balancing across Kafka consumers. The article shows how, with many consumer groups, Kafka acts like a publish/subscribe MOM. But when you put all of the consumers in the same group, Kafka load-shares the messages among the consumers in that group like a MOM queue. This Kafka tutorial demonstrates how Kafka consumer failover and Kafka broker failover work.
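The queue-versus-pub/sub behavior described above is controlled entirely by the consumer's `group.id` configuration. As a sketch (the broker address and group name here are made-up example values, not from the tutorial), the relevant consumer configuration looks like this:

```java
import java.util.Properties;

public class ConsumerGroupConfig {

    // Builds the configuration a Kafka consumer would use. Consumers that
    // share the same group.id split a topic's partitions between them
    // (queue semantics); consumers in different groups each receive every
    // message (publish/subscribe semantics).
    public static Properties consumerConfig(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", groupId);                   // the load-sharing knob
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // Give every consumer this same group id to get MOM-queue behavior.
        Properties queueStyle = consumerConfig("same-group-for-all");
        System.out.println(queueStyle.getProperty("group.id")); // prints same-group-for-all
    }
}
```

Passing a distinct group id per consumer instead would give each of them the full message stream.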

Apache Avro Basics

posted May 9, 2017, 10:31 AM by Rick Hightower

Avro is a popular data format that is used in the Hadoop/BigData world and with Spark. Increasingly, Kafka is used to capture real-time data for analytics as well as becoming a backbone for microservice-to-microservice communication.

The Kafka Schema Registry provides a repository for record metadata and schemas. With the registry, you can post and get Avro schemas. The Kafka Schema Registry "stores a versioned history of all schemas, provides multiple compatibility settings and allows the evolution of schemas according to the configured compatibility setting". The Kafka Schema Registry enables Kafka clients to handle schema evolution of Kafka records using the Avro format.

Apache Avro Basics

Apache Avro™ is a data serialization system. Avro provides data structures, a binary data format, a container file format for storing persistent data, and RPC capabilities. Avro does not require code generation. Avro is polyglot, as you would expect, and integrates well with JavaScript, Python, Ruby, C, C#, C++ and Java. Avro gets used in the Hadoop ecosystem as well as by Kafka.

Avro is similar to Thrift, Protocol Buffers, JSON, etc. Unlike Thrift and Protocol Buffers, Avro does not require code generation. Avro needs less encoding as part of the data, since it stores names and types in the schema, reducing duplication. Avro supports the evolution of schemas.

Apache Avro Schema

Avro files store data together with its schema. Avro RPC is also based on schemas and IDL; as part of the RPC protocol, schemas are exchanged during the handshake. Avro schemas and IDL are written in JSON.

The Avro data format (wire format and file format) is defined by Avro schemas. When deserializing data, the schema is used. Data is serialized based on the schema, and the schema is sent with the data, or, in the case of files, stored with the data. Avro data plus schema is a fully self-describing data format.

Let’s take a look at an example Avro schema.


Example schema for an Employee record

{"namespace": "com.cloudurable.phonebook",
  "type": "record",
  "name": "Employee",
    "fields": [
        {"name": "firstName", "type": "string"},
        {"name": "lastName", "type": "string"},
        {"name": "age",  "type": "int"},
        {"name": "phoneNumber",  "type": "string"}
    ]
}

Avro schema generation tools

Avro comes with a set of tools for generating Java classes for the Avro types that you define in an Avro schema. There are Maven and Gradle plugins that use the Avro tools to generate code from Avro schemas and integrate that generation with your build.

The gradle-avro-plugin is a Gradle plugin that uses the Avro tools to do Java code generation for Apache Avro. The plugin supports Avro schema files (.avsc) and Avro RPC IDL files (.avdl). For Kafka you only need the .avsc schema files.

build.gradle - example using gradle-avro-plugin

plugins {
    id "com.commercehub.gradle.plugin.avro" version "0.9.0"
}

group 'cloudurable'
version '1.0-SNAPSHOT'
apply plugin: 'java'
sourceCompatibility = 1.8

dependencies {
    compile "org.apache.avro:avro:1.8.1"
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

repositories {
    jcenter()
    mavenCentral()
}

avro {
    createSetters = false
    fieldVisibility = "PRIVATE"
}

Notice that we did not generate setter methods, and we made the fields private. This makes the instances somewhat immutable.

Running gradle build will generate the Employee.java.


Generated Avro code

package com.cloudurable.phonebook;

import org.apache.avro.specific.SpecificData;

public class Employee extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
  private static final long serialVersionUID = -6112285611684054927L;
  public static final org.apache.avro.Schema SCHEMA$ = new
      org.apache.avro.Schema.Parser().parse("{\"type\":\"record\",\"name\":\"Employee\"...");
  public static org.apache.avro.Schema getClassSchema() { return SCHEMA$; }
  private java.lang.String firstName;
  private java.lang.String lastName;
  private int age;
  private java.lang.String phoneNumber;

The Gradle plugin calls the Avro utilities, which generate the files and put them under build/generated-main-avro-java.

Let’s use the generated class as follows to construct an Employee instance.

Using the new Employee class

Employee bob = Employee.newBuilder().setAge(35)
        .setFirstName("Bob")
        .setLastName("Smith")
        .setPhoneNumber("555-555-1212")
        .build();

assertEquals("Bob", bob.getFirstName());

The Employee class has a constructor and has a builder. We can use the builder to build a new Employee instance.

Next we want to write the Employees to disk.

Writing a list of employees to an Avro file

final List<Employee> employeeList = ...
final DatumWriter<Employee> datumWriter = new SpecificDatumWriter<>(Employee.class);
final DataFileWriter<Employee> dataFileWriter = new DataFileWriter<>(datumWriter);

try {
    dataFileWriter.create(employeeList.get(0).getSchema(),
            new File("employees.avro"));
    employeeList.forEach(employee -> {
        try {
            dataFileWriter.append(employee);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    });
} finally {
    dataFileWriter.close();
}

The above demonstrates serializing an Employee list to disk. In Kafka, we will not be writing to disk directly; we are just showing how, so you have a way to test Avro serialization, which is helpful when debugging schema incompatibilities. Note we create a DatumWriter, which converts a Java instance into an in-memory serialized format. SpecificDatumWriter is used with generated classes like Employee. DataFileWriter writes the serialized records to the employees.avro file.

Now let’s demonstrate how to read data from an Avro file.

Reading a list of employees from an avro file

final File file = new File("employees.avro");
final List<Employee> employeeList = new ArrayList<>();
final DatumReader<Employee> empReader = new SpecificDatumReader<>(Employee.class);
final DataFileReader<Employee> dataFileReader = new DataFileReader<>(file, empReader);

while (dataFileReader.hasNext()) {
    employeeList.add(dataFileReader.next(new Employee()));
}

The above deserializes employees from the employees.avro file into a java.util.List of Employee instances. Deserializing is similar to serializing but in reverse. We create a SpecificDatumReader, which converts in-memory serialized items into instances of our generated Employee class. The DataFileReader reads records from the file when we call next. Another way to read is using forEach as follows:

Reading a list of employees from an avro file using forEach

final DataFileReader<Employee> dataFileReader = new DataFileReader<>(file, empReader);
dataFileReader.forEach(employeeList::add);


You can use a GenericRecord instead of generating an Employee class as follows.

Using GenericRecord to create an Employee record

final String schemaLoc = "src/main/avro/com/cloudurable/phonebook/Employee.avsc";
final File schemaFile = new File(schemaLoc);
final Schema schema = new Schema.Parser().parse(schemaFile);

GenericRecord bob = new GenericData.Record(schema);
bob.put("firstName", "Bob");
bob.put("lastName", "Smith");
bob.put("age", 35);
assertEquals("Bob", bob.get("firstName"));

You can write to Avro files using GenericRecords as well.

Writing GenericRecords to an Avro file

final List<GenericRecord> employeeList = new ArrayList<>();

final DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<>(schema);
final DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(datumWriter);

try {
    dataFileWriter.create(schema,
            new File("employees2.avro"));
    employeeList.forEach(employee -> {
        try {
            dataFileWriter.append(employee);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    });
} finally {
    dataFileWriter.close();
}

You can read from Avro files using GenericRecords as well.

Reading GenericRecords from an Avro file

final File file = new File("employees2.avro");
final List<GenericRecord> employeeList = new ArrayList<>();
final DatumReader<GenericRecord> empReader = new GenericDatumReader<>();
final DataFileReader<GenericRecord> dataFileReader = new DataFileReader<>(file, empReader);

while (dataFileReader.hasNext()) {
    employeeList.add(dataFileReader.next());
}

Avro will validate the data types when it serializes and deserializes the data.

Using the wrong type

GenericRecord employee = new GenericData.Record(schema);
employee.put("firstName", "Bob" + index);
employee.put("lastName", "Smith"+ index);
//employee.put("age", index % 35 + 25);
employee.put("age", "OLD");

Stack trace from above

org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.ClassCastException:
java.lang.String cannot be cast to java.lang.Number

 at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
 at com.cloudurable.phonebook.EmployeeTestNoGen.lambda$testWrite$1(EmployeeTestNoGen.java:71)
 at java.util.ArrayList.forEach(ArrayList.java:1249)
 at com.cloudurable.phonebook.EmployeeTestNoGen.testWrite(EmployeeTestNoGen.java:69)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Number
 at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:117)
 at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
 at org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
 at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
 at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
 at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
 at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
 at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)

If you left out a required field like firstName, then you would get this.

Stack trace from leaving out firstName

Caused by: java.lang.NullPointerException: null of string in field firstName of com.cloudurable.phonebook.Employee
 at org.apache.avro.generic.GenericDatumWriter.npe(GenericDatumWriter.java:132)
 at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:126)
 at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
 at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)

The Avro schema and IDL specification document describes all of the supported types.

With an Avro schema, you can define records, arrays, enums, unions, and maps, and you can use primitive types like string, int, and boolean as well as logical types like decimal, timestamp, and date.

Next, let’s add to the Employee schema and show some of the different types that Avro supports.

Employee Schema

{"namespace": "com.cloudurable.phonebook",
  "type": "record",
  "name": "Employee",
  "fields": [
    {"name": "firstName", "type": "string"},
    {"name": "nickName", "type": ["null", "string"], "default" : null},
    {"name": "lastName", "type": "string"},
    {"name": "age",  "type": "int"},
    {"name": "emails", "default":[], "type":{"type": "array", "items": "string"}},
    {"name": "phoneNumber",  "type":
      [ "null",
        { "type": "record",   "name": "PhoneNumber",
          "fields": [
            {"name": "areaCode", "type": "string"},
            {"name": "countryCode", "type": "string", "default" : ""},
            {"name": "prefix", "type": "string"},
            {"name": "number", "type": "string"}
          ]
        }
      ]
    },
    {"name":"status", "default" :"SALARY", "type": { "type": "enum", "name": "Status",
              "symbols" : ["RETIRED", "SALARY", "HOURLY", "PART_TIME"]}
    }
  ]
}

The Employee schema uses default values, arrays, primitive types, records within records, enums, and more. It also uses a union type to represent a value that is optional.

What follows are some classes that are generated from the above schema. 

PhoneNumber record

package com.cloudurable.phonebook;

import org.apache.avro.specific.SpecificData;

public class PhoneNumber extends org.apache.avro.specific.SpecificRecordBase ...{
  private static final long serialVersionUID = -3138777939618426199L;
  public static final org.apache.avro.Schema SCHEMA$ =
                   new org.apache.avro.Schema.Parser().parse("{\"type\":\"record\",\"name\":...
  public static org.apache.avro.Schema getClassSchema() { return SCHEMA$; }
   private java.lang.String areaCode;
   private java.lang.String countryCode;
   private java.lang.String prefix;
   private java.lang.String number;

Status enum

package com.cloudurable.phonebook;
public enum Status {
    RETIRED, SALARY, HOURLY, PART_TIME
}

Avro provides fast data serialization. It supports data structures like records, maps, arrays, and basic types. You can use it directly or with code generation. Avro adds schema support to Kafka, which we will demonstrate in another article. Kafka uses Avro with its Schema Registry.

Enjoy this slide deck about Avro or this SlideShare by Jean-Paul on Avro/Kafka. If you like this article, check out my friend's Kafka training course.

High-speed, Reactive Microservices 2017

posted May 3, 2017, 10:51 AM by Rick Hightower   [ updated May 4, 2017, 2:52 PM ]

This talk covers how we built a set of high-speed reactive microservices and minimized cloud/hardware costs while meeting objectives in resilience and scalability. It talks about Akka, Kafka, QBit, and in-memory computing from a practitioner's point of view. Based on the talks delivered by Geoff Chandler, Jason Daniel, and Rick Hightower at JavaOne 2016 and FinTech at Scale 2017, but updated.

Reactive Java using Reakt by example

posted May 5, 2016, 9:57 PM by Rick Hightower

Reactive Java code examples using Reakt (with links to docs for further info)

This is a presentation of different Reakt features in the context of a real application. I have renamed the classnames and such, but this is from an in-progress microservice. It demonstrates where you would use the Reakt pieces to build a reactive Java application.
This covers usage of Reakt:
  • blocking promises
  • Promises.all
  • invokable promise
  • Expected values
  • Circuit breakers
  • Working with the Reactor
  • Working with streams
  • Using AsyncSuppliers to create downstream services
  • Reakt Guava integration
  • Using promise.thenMap

Please read the full article here: Reactive Java using Reakt by example

QBit Microservices Lib 1.5.0 was Released

posted Apr 22, 2016, 10:21 PM by Rick Hightower   [ updated Apr 22, 2016, 10:25 PM ]

QBit Microservices Lib 1.5.0.RELEASE

QBit Microservices Lib now supports Reakt invokable promises for local and remote client proxies. 
This gives a nice fluent API for async programming.

Invokable promise

employeeService.lookupEmployee(id)
               .then((employee)-> {...}).catchError(...).invoke();

QBit callbacks are now also Reakt Callbacks without breaking the QBit contract for Callbacks.

See Reakt Invokable Promises for more details.

A full write-up on QBit invokable promises is pending, but the curious can see ReaktInterfacesTest for Service Queue and ServiceBundle details, and the remote WebSocket Reakt interfaces for remote access proxies.

  • 683 Use Metrik for metrics system
  • 682 Support Reakt with Websocket RPC proxies
  • 680 Support Invokable promises on Service Queues
  • 679 Testing for Invokable promises on proxies
  • 678 Fix health check logging
  • 676 Remote proxies support Reakt Callbacks and promises
  • 675 Local proxies support Reakt Invokable promises
  • 674 Local proxies support Reakt callbacks
  • 673 Remote proxies support callback
  • 672 Get rid of boiler plate code for Reactor, StatsCollector and Health Check

Reakt Reactive Java - Invokable Promise - Fluent Reactive Java

posted Apr 22, 2016, 10:19 PM by Rick Hightower

Reakt, the Java reactive programming lib, supports invokable promises so that service methods can return a Promise.

Invokable Promise

employeeService.lookupEmployee(id)
               .then((employee)-> {...}).catchError(...).invoke();

NOTE: QBit Microservices Lib version 1.4 (coming soon) will support generation of client stubs (local and remote) that return Reakt Promises, having the client proxy method return the promise instead of taking a callback. With QBit you will be able to use Callbacks on the service side and Promises on the client proxy side. Callbacks are natural for the service, and Promises are natural for the client.

But you do not have to use QBit to use Reakt's invokable promises.

Let's walk through an example of using Reakt invokable promises.

Let's say we are developing a service discovery service with this interface.

ServiceDiscovery interface (Example service that does lookup)

    interface ServiceDiscovery {
        Promise<URI> lookupService(URI uri);

        default void shutdown() {}
        default void start() {}
    }

Note that ServiceDiscovery.lookupService is just an example, like EmployeeService.lookupEmployee earlier.

Notice that lookupService returns a Promise (Promise<URI> lookupService(URI uri)). To call this, we can use an invokable promise. The service side will return an invokable promise, and the client of the service will use that Promise to register its then, thenExpect, thenMap, and/or catchError handlers.

Here is the service side of the invokable promise example.

Service Side of invokable promise

    class ServiceDiscoveryImpl implements ServiceDiscovery {

        public Promise<URI> lookupService(URI uri) {
            return invokablePromise(promise -> {
                if (uri == null) {
                    promise.reject("URI was null");
                } else {
                    promise.resolve(URI.create("http://localhost:8080/employeeService/"));
                }
            });
        }
    }

Notice that to create the invokable promise we use Promises.invokablePromise, which takes a Promise Consumer (static <T> Promise<T> invokablePromise(Consumer<Promise<T>> promiseConsumer)).

The lookupService example returns a Promise, and it does not execute its body until the invoke (Promise.invoke) on the client side of the equation is called.
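To make the "nothing runs until invoke" idea concrete, here is a toy sketch using only the JDK. This is not Reakt's implementation; the TinyInvokablePromise class and its members are made up purely to illustrate deferred execution:

```java
import java.util.function.Consumer;

// A toy illustration (not Reakt's actual code) of why an invokable promise
// defers work: the body passed to the factory only runs when invoke() is
// called by the client.
public class DeferredDemo {

    static class TinyInvokablePromise<T> {
        private final Consumer<TinyInvokablePromise<T>> body;
        private Consumer<T> thenHandler = v -> { };

        TinyInvokablePromise(Consumer<TinyInvokablePromise<T>> body) {
            this.body = body; // nothing executes yet
        }

        TinyInvokablePromise<T> then(Consumer<T> handler) {
            this.thenHandler = handler; // register the success handler
            return this;
        }

        void resolve(T value) {
            thenHandler.accept(value); // called by the "service side"
        }

        void invoke() {
            body.accept(this); // the service-side body runs only now
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        TinyInvokablePromise<String> promise =
                new TinyInvokablePromise<>(p -> p.resolve("result"));
        promise.then(v -> log.append("got ").append(v));
        // Handlers are registered, but the body has not run yet.
        System.out.println(log.length() == 0); // true
        promise.invoke();
        System.out.println(log);               // got result
    }
}
```

The real Reakt Promise adds rejection, once-only invocation checks, and more, but the control flow is the same: registration first, execution on invoke.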

Let's look at the client side.

Client side of using lookupService.

serviceDiscovery.lookupService(empURI)
        .then(this::handleSuccess)
        .catchError(this::handleError)
        .invoke();
The syntax this::handleSuccess is a method reference, which can be used where a Java 8 lambda expression is expected. We use method references to make the example shorter.

Here is a complete example with two versions of our example service.

Complete example from unit tests

package io.advantageous.reakt.promise;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.net.URI;
import java.util.Queue;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

import static io.advantageous.reakt.promise.Promises.*;
import static org.junit.Assert.*;

public class InvokablePromise {

    final URI successResult = URI.create("http://localhost:8080/employeeService/");
    ServiceDiscovery serviceDiscovery;
    ServiceDiscovery asyncServiceDiscovery;
    URI empURI;
    CountDownLatch latch;
    AtomicReference<URI> returnValue;
    AtomicReference<Throwable> errorRef;

    @Before
    public void before() {
        latch = new CountDownLatch(1);
        returnValue = new AtomicReference<>();
        errorRef = new AtomicReference<>();
        serviceDiscovery = new ServiceDiscoveryImpl();
        asyncServiceDiscovery = new ServiceDiscoveryAsyncImpl();
        asyncServiceDiscovery.start();
        empURI = URI.create("marathon://default/employeeService?env=staging");
    }

    @After
    public void after() {
        asyncServiceDiscovery.shutdown();
    }

    public void await() {
        try {
            latch.await(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    @Test
    public void testServiceWithReturnPromiseSuccess() {
        serviceDiscovery.lookupService(empURI)
                .then(this::handleSuccess)
                .catchError(this::handleError)
                .invoke();
        await();
        assertNotNull("We have a return", returnValue.get());
        assertNull("There were no errors", errorRef.get());
        assertEquals("The result is the expected result", successResult, returnValue.get());
    }

    @Test
    public void testServiceWithReturnPromiseFail() {
        serviceDiscovery.lookupService(null)
                .then(this::handleSuccess)
                .catchError(this::handleError)
                .invoke();
        await();
        assertNull("We do not have a return", returnValue.get());
        assertNotNull("There were errors", errorRef.get());
    }

    @Test
    public void testAsyncServiceWithReturnPromiseSuccess() {
        asyncServiceDiscovery.lookupService(empURI)
                .then(this::handleSuccess)
                .catchError(this::handleError)
                .invoke();
        await();
        assertNotNull("We have a return from async", returnValue.get());
        assertNull("There were no errors from async", errorRef.get());
        assertEquals("The result is the expected result from async", successResult, returnValue.get());
    }

    @Test
    public void testAsyncServiceWithReturnPromiseFail() {
        asyncServiceDiscovery.lookupService(null)
                .then(this::handleSuccess)
                .catchError(this::handleError)
                .invoke();
        await();
        assertNull("We do not have a return from async", returnValue.get());
        assertNotNull("There were errors from async", errorRef.get());
    }

    @Test (expected = IllegalStateException.class)
    public void testServiceWithReturnPromiseSuccessInvokeTwice() {
        final Promise<URI> promise = serviceDiscovery.lookupService(empURI).then(this::handleSuccess)
                .catchError(this::handleError);
        promise.invoke();
        promise.invoke();
    }

    @Test
    public void testIsInvokable() {
        final Promise<URI> promise = serviceDiscovery.lookupService(empURI).then(this::handleSuccess)
                .catchError(this::handleError);

        assertTrue("Is this an invokable promise", promise.isInvokable());
    }

    private void handleError(Throwable error) {
        errorRef.set(error);
        latch.countDown();
    }

    private void handleSuccess(URI uri) {
        returnValue.set(uri);
        latch.countDown();
    }

    interface ServiceDiscovery {
        Promise<URI> lookupService(URI uri);

        default void shutdown() {
        }

        default void start() {
        }
    }

    class ServiceDiscoveryImpl implements ServiceDiscovery {

        public Promise<URI> lookupService(URI uri) {
            return invokablePromise(promise -> {

                if (uri == null) {
                    promise.reject("URI was null");
                } else {
                    promise.resolve(successResult);
                }
            });
        }
    }

    class ServiceDiscoveryAsyncImpl implements ServiceDiscovery {

        final ExecutorService executorService;

        final Queue<Runnable> runnables;

        final AtomicBoolean stop;

        public ServiceDiscoveryAsyncImpl() {
            executorService = Executors.newSingleThreadExecutor();
            runnables = new LinkedTransferQueue<>();
            stop = new AtomicBoolean();
        }

        public Promise<URI> lookupService(URI uri) {
            return invokablePromise(promise -> {
                runnables.offer(() -> {
                    if (uri == null) {
                        promise.reject("URI was null");
                    } else {
                        promise.resolve(successResult);
                    }
                });
            });
        }

        public void shutdown() {
            stop.set(true);
            executorService.shutdown();
        }

        public void start() {
            executorService.submit((Runnable) () -> {
                while (true) {
                    if (stop.get()) break;
                    Runnable runnable = runnables.poll();
                    while (runnable != null) {
                        runnable.run();
                        runnable = runnables.poll();
                    }
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            });
        }
    }
}

Reakt website

Related Projects

Further reading

What is Microservices Architecture?

QBit Java Microservices lib tutorials

The Java microservices lib. QBit is a reactive programming lib for building microservices - JSON, HTTP, WebSocket, and REST. QBit uses reactive programming to build elastic REST and WebSocket based, cloud-friendly web services: SOA evolved for mobile and cloud. Service discovery, health, reactive StatService, events, and Java-idiomatic reactive programming for microservices.

Find more tutorials on QBit.

Reactive Programming, Java Microservices, Rick Hightower

High-speed microservices consulting firm and authors of QBit with lots of experience with Vertx - Mammatus Technology

Highly recommended consulting and training firm who specializes in microservices architecture and mobile development that are already very familiar with QBit and Vertx as well as iOS and Android - About Objects

Java Microservices Architecture

Microservice Service Discovery with Consul

Microservices Service Discovery Tutorial with Consul

Reactive Microservices

High Speed Microservices

Java Microservices Consulting

Microservices Training

Reactive Microservices Tutorial, using the Reactor

QBit is mentioned in the Restlet blog

Reakt 2.0.0 was released - Reakt is reactive interfaces for Java.

posted Apr 13, 2016, 12:27 AM by Rick Hightower   [ updated Apr 13, 2016, 12:28 AM ]

Reakt is a set of reactive interfaces for Java, including promises, streams, callbacks, and results.

The emphasis is on defining interfaces that enable lambda expressions, and fluent APIs for asynchronous programming for Java.

Note: This mostly just provides the interfaces, not the implementations. There are some starter implementations, but the idea is that anyone can implement this. It is all about interfaces. There will be adapters for Vertx, RxJava, Reactive Streams, etc. There is support for Guava Async (used by Cassandra) and the QBit microservices lib. Czar Maker uses Reakt for its reactive leadership election.

Have a question?

Reakt Mailing List

Getting started

Using from maven

<dependency>
    <groupId>io.advantageous</groupId>
    <artifactId>reakt</artifactId>
    <version>2.0.0.RELEASE</version>
</dependency>
Using from gradle

compile 'io.advantageous:reakt:2.0.0.RELEASE'

Fluent Promise API

  Promise<Employee> promise = promise()
                .then(e -> saveEmployee(e))
                .catchError(error -> 
                     logger.error("Unable to lookup employee", error));

  employeeService.lookupEmployee(33, promise);

Or you can handle it in one line.

Fluent Promise API example 2

        employeeService.lookupEmployee(33,
                promise().then(e -> saveEmployee(e))
                         .catchError(error -> logger.error(
                                                   "Unable to lookup ", error)));

Promises are both a callback and a Result; however, you can work with Callbacks directly.

Using Result and callback directly

        employeeService.lookupEmployee(33, result -> {
            result.then(e -> saveEmployee(e))
                  .catchError(error -> {
                      logger.error("Unable to lookup", error);
                  });
        });
In both of these examples, lookupEmployee would look like:

Using Result and callback directly

   public void lookupEmployee(long employeeId, Callback<Employee> callback){...}

You can use Promises to transform into other promises.

Transforming into another type of promise using thenMap

        Promise<Employee> employeePromise = Promises.<Employee>blockingPromise();

        Promise<Sheep> sheepPromise = employeePromise
                .thenMap(employee1 -> new Sheep(employee1.getId()));

The thenMap will return a new type of Promise.

You can find more examples in the reakt wiki.

We also support working with streams.

Promise concepts

This has been adapted from this article on ES6 promises. A promise can be:

  • fulfilled The callback/action relating to the promise succeeded
  • rejected The callback/action relating to the promise failed
  • pending The callback/action has not been fulfilled or rejected yet
  • completed The callback/action has been fulfilled/resolved or rejected

Java is not single threaded, meaning that two bits of code can run at the same time, so the design of this promise and streaming library takes that into account.
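Reakt's own types aside, the same concern can be shown with the JDK's CompletableFuture: one thread completes a promise while another thread waits on it, so the promise implementation must be thread-safe. This is an analogy using standard-library classes, not Reakt code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Illustration using the JDK's CompletableFuture (not Reakt): because Java is
// multi-threaded, a promise can be completed on one thread while another
// thread waits on it, so the promise library must account for concurrency.
public class TwoThreads {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        CompletableFuture<String> promise = new CompletableFuture<>();

        Thread worker = new Thread(() -> promise.complete("done on " +
                Thread.currentThread().getName()));
        worker.setName("worker-thread");
        worker.start();

        // get() safely waits for the completion performed by the other thread.
        String result = promise.get();
        System.out.println(result); // done on worker-thread
        worker.join();
    }
}
```

A JS promise never faces this situation; in Java, handler registration and completion can genuinely race, which shapes the design of the library.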

There are three types of promises:

  • Callback promises
  • Blocking promises (for testing and legacy integration)
  • Replay promises (allow promises to be handled on the same thread as caller)

Replay promises are the most like their JS cousins. Replay promises are usually managed by the Reakt Reactor and support environments like Vert.x and QBit. See the wiki for more details on Replay promises.

It is common to make async calls to store data in a NoSQL store or to call a remote REST interface or deal with a distributed cache or queue. Also Java is strongly typed so the library that mimics JS promises is going to look a bit different. We tried to use similar terminology where it makes sense.

Events and streams are great for things that can happen multiple times on the same object: keyup, touchstart, or even a user-action stream from Kafka, etc.

With those events you don't really care about what happened before when you attached the listener.

But often times when dealing with services and data repositories, you want to handle a response with a specific next action, and a different action if there was an error or timeout from the responses. You essentially want to call and handle a response asynchronously and that is what promises allow.

This is not our first time at bat with Promises. QBit has had Promises for a few years now; we just called them CallbackBuilders. We wanted to use more standard terminology, and to use the same terminology and modeling on projects that do not use QBit, like Conekt, Vert.x, RxJava, and Reactive Streams.

At their most basic level, promises are like event listeners except:

A promise can only succeed or fail once. A promise cannot succeed or fail twice, nor can it switch from success to failure. Once it enters its completed state, it is done.
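The once-only rule also holds for the JDK's CompletableFuture, shown here as an analogy rather than Reakt code: the first completion wins, and later attempts are simply ignored.

```java
import java.util.concurrent.CompletableFuture;

// Demonstrates the once-only completion rule with the JDK's CompletableFuture:
// after the first completion, further complete/fail attempts return false
// and do not change the stored value.
public class OnceOnly {
    public static void main(String[] args) {
        CompletableFuture<String> promise = new CompletableFuture<>();

        boolean first = promise.complete("success");         // transitions to completed
        boolean second = promise.complete("something else"); // ignored: already completed
        boolean failed = promise.completeExceptionally(new RuntimeException("too late"));

        System.out.println(first);          // true
        System.out.println(second);         // false
        System.out.println(failed);         // false
        System.out.println(promise.join()); // success
    }
}
```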


The Reakt Guava bridge allows libs that use Guava async support to have a modern Java feel.

Cassandra Reakt example

register(session.executeAsync("SELECT release_version FROM system.local"),
  promise().thenExpect(expected ->
     gui.setMessage("Cassandra version is " +
         expected.get().one().getString("release_version"))
  ).catchError(error ->
     gui.setMessage("Error while reading Cassandra version: "
     + error.getMessage())
  ));
QBit 1 ships with a bridge, and QBit 2 will use Reakt as its primary reactive callback mechanism.

Conekt, a slimmed down fork of Vert.x, will also use Reakt.

See QBit microservices lib for more details.

See our wiki for more details on Reakt.

Further reading

What is Microservices Architecture?

QBit Java Microservices lib tutorials

Czar Maker Consul

posted Apr 13, 2016, 12:24 AM by Rick Hightower

Czar Maker Consul - Reactive Java Leadership Election

Czar Maker is a nice set of interfaces for Leader Election.

There is one Czar Maker Consul implementation of this interface, which uses Consul. You could use the interface to implement leader election with ZooKeeper or etcd. Consul and etcd use the Raft algorithm to present reliable KV storage (ZooKeeper uses a similar technique to Consul and etcd).
Czar uses Reakt, a Java reactive streaming API with callbacks and promises that is Java 8 and lambda friendly.
Czar also uses the QBit microservices lib as its HTTP/IO lib.

Czar Maker Consul is a Java lib that uses Consul to do leadership election.

Getting Started

compile 'io.advantageous.czarmaker:czar-maker-consul:0.1.0.RELEASE'

Sample usage

import io.advantageous.consul.Consul;
import io.advantageous.czarmaker.Endpoint;
import io.advantageous.czarmaker.consul.*;
import io.advantageous.qbit.util.TestTimer;
import io.advantageous.reakt.promise.Promise;
import io.advantageous.reakt.promise.Promises;
import io.advantageous.reakt.reactor.Reactor;
import io.advantageous.reakt.reactor.TimeSource;


// Test harness class (the class name and JUnit annotations are illustrative).
public class ConsulLeadershipElectorTest {

    private final long sessionTTL = 10;
    private final long newLeaderCheckInterval = 5;
    private ConsulLeadershipElector leadershipElector;
    private Reactor reactor;
    private TestTimer testTimer;
    private Consul consul;

    @Before
    public void setup() {
        consul = Consul.consul();
        testTimer = new TestTimer();
        reactor = Reactor.reactor(Duration.ofSeconds(30), new TestTimeSource(testTimer));
        final String serviceName = "foo";

        ConsulLeadershipProvider provider = new ConsulLeadershipProvider(serviceName, consul, TimeUnit.SECONDS, sessionTTL);

        leadershipElector = new ConsulLeadershipElector(provider, serviceName, reactor, TimeUnit.SECONDS,
                sessionTTL, newLeaderCheckInterval);
    }

    @Test
    public void test() {

        /** Get the current leader. */
        Promise<Endpoint> promise = Promises.<Endpoint>blockingPromise();
        leadershipElector.getLeader(promise);

        /** Elect this endpoint as the current leader. */
        Promise<Boolean> selfElectPromise = Promises.<Boolean>blockingPromise();
        leadershipElector.selfElect(new Endpoint("foo.com", 9091), selfElectPromise);

        assertTrue("We are now the leader", selfElectPromise.get());

        /** Get the current leader again. */
        Promise<Endpoint> getLeaderPromise = Promises.<Endpoint>blockingPromise();
        leadershipElector.getLeader(getLeaderPromise);

        /** See if it is present. */
        assertTrue(getLeaderPromise.expect().isPresent());

        /** See if it has the host foo.com. */
        assertEquals("foo.com", getLeaderPromise.get().getHost());

        /** See if the port is 9091. */
        assertEquals(9091, getLeaderPromise.get().getPort());

        /** Elect a new leader. */
        leadershipElector.selfElect(new Endpoint("foo2.com", 9092), selfElectPromise);
    }
}

QBit 1.0.0 is now released

posted Apr 13, 2016, 12:22 AM by Rick Hightower

Support was added for Consul Session to support leader election.

All of the add-on libs were moved into QBit Extensions, and now the core is a lot smaller.

There was a flurry of new releases to improve the core RPC WebSocket lib.

How to Identify you are not using Microservice Architecture

posted Nov 28, 2015, 1:24 PM by Rick Hightower   [ updated Nov 28, 2015, 1:26 PM ]

How to Identify you are not using Microservice Architecture.

Assume everyone starts with 100 points.

Everyone loses ten points for each assertion that is true. 

Not using Microservices hit list

  1. You don’t deploy to production more than once a month
  2. You always have to deploy a release chain of services
  3. You deploy multiple EAR or WAR files to the same Java EE container
  4. You make a blocking call to another service while servicing requests to your service 
  5. You make a blocking call to a database while servicing requests to your service
  6. Your service has more than 20 classes
  7. You share a database between more than one set of services or applications 
  8. You don’t deploy to a containerized or virtualized environment
  9. You don’t utilize service discovery
  10. You can have cascading failures
  11. You don’t implement circuit breakers and are not fault tolerant
  12. You don’t use async calls 
  13. You use more than 200 threads in a single service
  14. You don’t monitor health status of your services 
  15. You don't monitor statistics of your services
  16. You don’t have distributed logging
  17. You don’t use HTTP and JSON
  18. You don't use messaging
  19. Documentation for your services does not include curl commands
  20. You use WSDL
  21. You use XML as a transport
  22. You use an ESB 
  23. You use BPEL
  24. You use EAR and WAR files
  25. You use a Java EE container

If your score is below 70%, you are not using Microservices Architecture.
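The scoring rule above can be sketched in a few lines of code (the class and method names here are illustrative, not from the article): start at 100 points and subtract 10 for every assertion that is true for your system.

```java
// A small sketch of the scoring rubric: start at 100 points and subtract
// 10 for every assertion in the hit list that is true for your system.
public class MicroserviceScore {

    public static int score(boolean[] assertionsTrue) {
        int score = 100;
        for (boolean hit : assertionsTrue) {
            if (hit) {
                score -= 10;
            }
        }
        return score;
    }

    public static void main(String[] args) {
        // Example: 4 of the 25 assertions are true -> 60 points, which is
        // below 70, so by this rubric you are not using Microservices Architecture.
        boolean[] answers = new boolean[25];
        answers[0] = answers[1] = answers[2] = answers[3] = true;
        int result = score(answers);
        System.out.println(result);      // 60
        System.out.println(result < 70); // true
    }
}
```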

Microservices should be:

  1. Quickly deployable
  2. Deployed often
  3. Small
  4. Monitored: health, distributed logging, statistics
  5. Use Service Discovery to find other services
  6. Use async communication
  7. Prevent cascading failure and be fault tolerant
  8. Use web technologies like JSON, HTTP, WebSocket or messaging

Being async, using service discovery, preventing cascading failures, and being fault tolerant are all part of Reactive Microservices.

Read more about the history of microservices, how it is different from SOA, and just what are Microservices.
