Monday, September 15, 2014

Netflix Public API - Rest in peace

Netflix's decision to retire its Public API may be driven by its own business and IT strategy; however, since Netflix is a front-runner in the Web API trend, the decision needs to be assessed in a broader sense: specifically, what it means for enterprise API programs and their vision of increasing API exposure.

Web APIs are a rapidly growing trend in which enterprises offer programmatic access to their data, services, and resources to developers: internal teams, external partners, or public third-party developers. Web APIs expose the data and functionality that is typically available through an enterprise's webapps (websites) to other consumers such as internal webapps, portals, mobile apps, and B2B partners.

Having realized the value of APIs through reuse, where a single resource endpoint can serve various consumers, enterprises are now looking for ways to expand their API programs to a wider consumer base. I think this expansion needs careful thought, and it can learn a great deal from the Netflix API program, which had to shut off one of its API consumer groups (the third-party developers). Netflix may get away with its decision by upsetting a handful of developers; mainstream enterprises cannot do that without hurting their business relationships and bottom lines.

Every API consumer brings some cost and complexity that impacts API design and manageability. This sounds counterintuitive: API design is supposed to be consumer-agnostic, and a well-designed API should serve any consumer. Yet, looking at Netflix, and based on my experience with API design over the last few years, this expectation cannot be met easily.

In my experience, an API design inadvertently gets influenced by the consumers it is initially developed for: a browser-based webapp, a mobile app, external partners, and so on. The optimizations needed for different consumers have to be handled at the API design level. On one hand, the API needs strict security and policy controls for external users; on the other hand, it needs coarse-grained responses and auto-discovery for an internal webapp. To serve all types of consumers, the API has to stay fine-grained, but that may make the consuming webapp too chatty and cause performance issues. In reality, designing a single API that handles all optimizations for all its consumers gets cumbersome.
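To make the trade-off concrete, here is a minimal JAX-RS sketch; the resources, paths, and payloads are hypothetical, not taken from any real API. Fine-grained endpoints stay consumer-agnostic but force a mobile client into extra round trips, while a coarse-grained endpoint serves one consumer well at the cost of tying the API to that consumer.

import java.util.Arrays;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Fine-grained, consumer-agnostic endpoints: any consumer can use them,
// but a mobile screen showing customer + orders needs two round trips.
@Path("/customers")
public class CustomerResource {

    @GET
    @Path("/{id}")
    @Produces("application/json")
    public String getCustomer(@PathParam("id") String id) {
        return "{\"id\":\"" + id + "\"}";           // stubbed payload
    }

    @GET
    @Path("/{id}/orders")
    @Produces("application/json")
    public List<String> getOrders(@PathParam("id") String id) {
        return Arrays.asList("order-1", "order-2"); // stubbed payload
    }
}

// Coarse-grained, consumer-specific endpoint: one call renders the mobile screen,
// but now the API carries knowledge of that particular consumer.
@Path("/mobile/customers")
class MobileCustomerResource {

    @GET
    @Path("/{id}/summary")
    @Produces("application/json")
    public String getSummary(@PathParam("id") String id) {
        return "{\"id\":\"" + id + "\",\"orders\":[\"order-1\",\"order-2\"]}";
    }
}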


Netflix's VP of Edge Engineering, Daniel Jacobson, in his blog post "Why REST keeps me up at night", points out similar complexity in trying to design one-size-fits-all APIs. Here are a few extracts from his post:

Our REST API, while very capable of handling the requests from our devices in a generic way, is optimized for none of them.

That means that each device potentially has to work a little harder (or sometimes a lot harder) to get the data needed to create great user experiences because devices are different from each other.

Because of the differences in these devices, Netflix UI teams would often have to do a range of things to get around our REST API to better serve the users of the device. Sometimes, the API team would be required to extend the base service to handle special cases, often resulting in spaghetti code or undocumented features.

The design solutions for handling consumer-specific complexities are expensive: either the consumer has to do extra work, or the service carries consumer-specific rules, or an extra proxy (intermediary) is required to handle consumer-specific features.


In the end, adding an API consumer has cost implications that need to be assessed against the business value of the API expansion.

While Daniel Jacobson may have solved his problem by shutting down the Public API and is having a good night's sleep, some of us still need to find a better way to rest.

Thursday, June 19, 2014

REST API documentation - HTMLWadlGenerator for CXF

In Apache CXF, one can generate a WADL for any registered resource by appending ?_wadl to the resource URL. This WADL is an excellent source of real-time REST API documentation, but the output format is not reader-friendly.

I extended the default CXF WadlGenerator to support the text/html media type using a WADL XSL stylesheet.

Here is the code for HTMLWadlGenerator, which can be registered as a jaxrs:provider with the jaxrs:server. To see the output, append ?_wadl&_type=text/html to the resource URL.

This would give a nice looking HTML page of REST API documentation for the registered resources.
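For reference, here is a minimal sketch of the registration step done programmatically with CXF's JAXRSServerFactoryBean, as an alternative to the Spring jaxrs:server configuration; BookResource and the address are hypothetical, and HTMLWadlGenerator is the provider described above.

import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;

public class DocServer {
    public static void main(String[] args) {
        JAXRSServerFactoryBean sf = new JAXRSServerFactoryBean();
        sf.setResourceClasses(BookResource.class);  // hypothetical JAX-RS resource
        sf.setProvider(new HTMLWadlGenerator());    // register the extended WADL generator
        sf.setAddress("http://localhost:9000/");
        sf.create();
        // GET http://localhost:9000/books?_wadl&_type=text/html should now render the HTML page
    }
}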

Sunday, June 8, 2014

Configure Solr - Suggester

Solr includes an autosuggest component, the Suggester. From Solr 4.7 onwards, its implementation has changed: the old SpellChecker-based search component is replaced with a new suggester that uses the Lucene suggester module. The latest Solr download is preconfigured with this new suggester, but the documentation on the Solr wiki still describes the previous SpellCheck version.

It took me some time to understand the new suggester and get it working.

There are two configuration pieces for the suggester, a search component and a request handler:
<searchComponent name="suggest" class="solr.SuggestComponent">
            <lst name="suggester">
      <str name="name">mySuggester</str>
      <str name="lookupImpl">FuzzyLookupFactory</str>      <!-- org.apache.solr.spelling.suggest.fst -->
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>     <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
      <str name="field">cat</str>
      <str name="weightField">price</str>
      <str name="suggestAnalyzerFieldType">string</str>
    </lst>
  </searchComponent>

  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="suggest">true</str>
      <str name="suggest.count">10</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
  </requestHandler>

To check the suggester, index a few documents with good test values for the cat field, which is configured as the suggestion field.

The URL for getting suggestions looks something like the following; the core name and query term are just examples from my setup, and suggest.build=true is needed the first time so that the suggester index gets built:

http://localhost:8983/solr/<core>/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.q=A&suggest.build=true

In my case, this returns:
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">13</int>
  </lst>
  <str name="command">build</str>
  <lst name="suggest">
    <lst name="mySuggester">
      <lst name="A">
        <int name="numFound">2</int>
        <arr name="suggestions">
          <lst>
            <str name="term">A Clash of Kings</str>
            <long name="weight">0</long>
            <str name="payload"/>
          </lst>
          <lst>
            <str name="term">A Game of Thrones</str>
            <long name="weight">0</long>
            <str name="payload"/>
          </lst>
        </arr>
      </lst>
    </lst>
  </lst>
</response>

Since a default suggester is not configured, the suggest.dictionary parameter is required; without it, you will get an exception: No suggester named default was configured.

You can configure a default suggester in solrconfig.xml:
  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="suggest">true</str>
      <str name="suggest.count">10</str>
                  <str name="suggest.dictionary">mySuggester</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
  </requestHandler>

Now you should be able to get suggestions without having to specify the dictionary in the URL.



Sunday, April 13, 2014

Akka Java for large-scale event processing

We are designing a large-scale distributed event-driven system for real-time data replication across transactional databases. The data (messages) from the source system undergoes a series of transformations and routing logic before reaching its destination. These transformations are multi-process and multi-threaded operations, comprising smaller stateless steps and tasks that can be performed concurrently. There is no shared state across processes; instead, the state transformations are persisted in the database, and each process pulls its work queue directly from the database.

Based on this, we needed a technology that supports distributed event processing, routing, and concurrency on the Java + Spring platform. The three options considered were a message broker (RabbitMQ), Spring Integration, and Akka.

RabbitMQ: An MQ was the first choice because it is the traditional, proven solution for messaging/event processing, and RabbitMQ specifically because it is a popular, lightweight open-source option with commercial support from a vendor we already use. I was pretty impressed with RabbitMQ: it was easy to use and lean, yet supported advanced distribution and messaging features. The only thing it lacked for us was the ability to persist messages in Oracle.

Even though RabbitMQ is open source (free), there is a substantial cost factor to it for enterprise use. Since an MQ is an additional component in the middleware stack, it requires dedicated staff for administration and maintenance, plus commercial support for the product. Also, setting up and configuring a message broker has its own complexity and involves cross-team coordination.

MQs are primarily EAI products that provide cross-platform (multi-language, multi-protocol) support. They might be too bulky and expensive when used just as an asynchronous concurrency and parallelism solution.

Spring Integration: Spring has a few modules that provide scalable asynchronous execution.
Spring TaskExecutor provides asynchronous processing with lightweight thread pool options.
Spring Batch allows distributed asynchronous processing via the Job Launcher and Job Repository.
Spring Integration extends these further with EAI features: messaging, routing, and mediation capabilities.

While all three Spring modules have some of the required features, it was difficult to get everything working together. Like this user, I was expecting Spring Integration to have RMI-like remoting capability.

Akka Java: Akka is a toolkit and runtime for building highly concurrent, distributed, and fault-tolerant event-driven applications on the JVM. It has a Java API, so I decided to give it a try.

Akka was easy to get started with; I found Activator quite helpful. Akka is based on the Actor Model, a message-passing paradigm for achieving concurrency without shared objects and blocking. In Akka, rather than invoking an object directly, you construct a message and send it to the object (called an actor) by way of an actor reference. This design greatly simplifies concurrency management.
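Here is a minimal Akka Java sketch of this idea (2014-era UntypedActor API); the actor, system, and message names are hypothetical and simply mirror the transformation-step use case described above.

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

// A small, stateless transformation step modeled as an actor.
public class TransformActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            // do the transformation work for this message
            System.out.println("transformed: " + ((String) message).toUpperCase());
        } else {
            unhandled(message);
        }
    }

    public static void main(String[] args) throws Exception {
        ActorSystem system = ActorSystem.create("replication");
        ActorRef transformer = system.actorOf(Props.create(TransformActor.class), "transformer");
        // tell() returns immediately; the actor processes its mailbox one message at a time
        transformer.tell("source-row-42", ActorRef.noSender());
        Thread.sleep(500);   // demo only: give the actor a moment before shutting down
        system.shutdown();
    }
}

Because each actor handles one message at a time from its mailbox, no locks or synchronized blocks are needed inside onReceive.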

However, the simplicity does not mean that a traditional lock-based concurrent program (threads/synchronization) can be converted to Akka with a few code changes. You need to design your actor system by defining smaller tasks, messages, and the communication between them. There is a learning curve for Akka's concepts and the Actor Model paradigm, but it is comparatively small given the complexity of concurrency and parallelism that it abstracts away.

Akka offers the right level of abstraction: you do not have to worry about threads and synchronization of shared state, yet you get full flexibility and control to write your own custom concurrency solution.

Besides simplicity, I think the real power of Akka is remoting and its ability to distribute actors across multiple nodes for high scalability. Akka's location transparency and fault tolerance make it easy to scale and distribute an application without code changes.
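For instance, assuming Akka remoting is enabled in the configuration of both nodes, sending to an actor on another node looks just like the local case; only the actor path changes (the host, port, and path below are hypothetical):

import akka.actor.ActorRef;
import akka.actor.ActorSelection;
import akka.actor.ActorSystem;

public class RemoteSend {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("replication");
        // same tell(...) call as for a local actor; only the path is different
        ActorSelection remote =
            system.actorSelection("akka.tcp://replication@host2:2552/user/transformer");
        remote.tell("source-row-42", ActorRef.noSender());
    }
}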

I was able to build a PoC for my multi-process and multi-threading use case fairly easily. I still need to work out Spring injection in actors.
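One commonly suggested approach for the Spring piece, which I have not tried yet, is an akka.actor.IndirectActorProducer that pulls actor instances out of the Spring ApplicationContext so their dependencies are injected; the bean name below is hypothetical, and the actor bean should be prototype-scoped so each actor incarnation gets a fresh instance.

import akka.actor.Actor;
import akka.actor.IndirectActorProducer;
import org.springframework.context.ApplicationContext;

// Creates actor instances from the Spring context instead of 'new',
// so dependencies wired into the actor bean are available inside it.
public class SpringActorProducer implements IndirectActorProducer {
    private final ApplicationContext ctx;
    private final String actorBeanName;

    public SpringActorProducer(ApplicationContext ctx, String actorBeanName) {
        this.ctx = ctx;
        this.actorBeanName = actorBeanName;
    }

    @Override
    public Actor produce() {
        return (Actor) ctx.getBean(actorBeanName);
    }

    @Override
    @SuppressWarnings("unchecked")
    public Class<? extends Actor> actorClass() {
        return (Class<? extends Actor>) ctx.getType(actorBeanName);
    }
}

// Usage sketch: system.actorOf(Props.create(SpringActorProducer.class, applicationContext, "transformActor"))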

A few words of caution: Akka's Java code involves a lot of typecasting due to Scala's type system, and dealing with object mutability can be tricky, since messages are supposed to be immutable; I am tempted to reuse my existing JPA entities (mutable) as messages to reduce database calls.
Also, the Akka community is geared towards Scala, and there is less material on Akka Java.

In spite of all this, Akka Java seems the cheapest, fastest, and most efficient option of the three.

Thursday, February 13, 2014

Setup RabbitMQ and Pika Python client on MacOS

There is an excellent tutorial on RabbitMQ; however, I thought it lacked detailed steps for installing and setting up the RabbitMQ server and the Pika Python client. I would like to share the steps for MacOS.


Installing and Running RabbitMQ


RabbitMQ is available in Homebrew:

brew install rabbitmq

Once installed, run the server:

sudo /usr/local/sbin/rabbitmq-server

Verify that RabbitMQ is running:

http://localhost:15672/ should give a login prompt; log in using guest/guest.


Installing Pika

Install Python if you don't already have it:

brew install python

The Python formula comes with pip and setuptools; check

/Library/Python/<yourversion>/site-packages

Install Pika:

sudo pip install pika==0.9.8

I also ran

sudo easy_install pika

which pulls in additional packages for Pika.

That is it; you should be able to use Pika in your Python programs.

You should be all set now and can start with the RabbitMQ tutorial.