Apr 18, 2015

TDD, Code review and Economics of Software Quality

To understand the value of JUnit tests (developer tests), try maintaining, or worse, refactoring a code base that has none. The cost of maintaining such code is so high that in most cases it gets replaced instead of being improved or enhanced. Developer tests ease maintenance and thus enable change. They are now a critical part of software development; most enterprises have adopted them and have moved from "no" tests to "some" tests, but the road beyond that is unclear. The industry-prescribed techniques (Uncle Bob's TDD rules and 100% code coverage) are difficult to adopt for large enterprises with massive code bases and globally distributed teams. Enterprises need a way to standardize testing practices that can be easily implemented and enforced across internal development teams and external outsourced development partners.

The code coverage metric provides the ability to define a specific coverage target and measure it in an automated way, but it has its own limitations.
Developer tests are not cheap: being "developer" tests, they take the developer's time and effort, which would otherwise be spent on adding features and functions. A large test suite also increases development cost through longer test execution times and its own maintenance overhead.

Whenever tests are written merely to attain high coverage, they lead to excessive tests for trivial and obvious functionality, and insufficient tests for critical or change-prone code. Also, not all code needs the same coverage: framework and boilerplate code may not require extensive coverage, whereas some code may need more than 100% coverage, such as tests that exercise the same code against a wide range of datasets. Other project-specific attributes influence test coverage too. Therefore, a flat coverage target may not work in all situations.

The other issue with writing tests for coverage is that the tests are retrofitted, as compared to the test-first approach of TDD. Not only is it challenging to write tests for code that was not designed for testability, but the benefits of TDD (test first) are also not realized.

TDD is a code design process that produces testable, high-quality code. In TDD, the developer is not just implementing the feature; by writing tests, he is also designing modular, decoupled and testable code. A developer would find it hard to test a unit that is doing too much or is tightly coupled with other units, and would be forced to refine the code. The multiple iterations of writing code and tests and refactoring also lead to better self-review. The developer invests a lot more thought in the code design and finds issues early that would otherwise go undetected.
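As a hypothetical illustration (all names here are invented), writing the test first forces the unit's dependency to be injectable rather than hard-wired, which is exactly the decoupling described above:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InvoiceCalculatorTest {

        // The dependency is an interface, so the test can supply a stub;
        // writing the test first is what forces this seam to exist.
        interface PriceLookup {
            double priceFor(String sku);
        }

        static class InvoiceCalculator {
            private final PriceLookup lookup;

            InvoiceCalculator(PriceLookup lookup) {
                this.lookup = lookup;
            }

            double priceWithDiscount(String sku, double discount) {
                return lookup.priceFor(sku) * (1 - discount);
            }
        }

        @Test
        public void appliesDiscountToLookedUpPrice() {
            // Stub lookup instead of a real pricing service
            InvoiceCalculator calculator = new InvoiceCalculator(sku -> 100.0);
            assertEquals(90.0, calculator.priceWithDiscount("SKU-1", 0.10), 0.001);
        }
    }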

When tests are retrofitted, these benefits are not realized. Retrofitting tests is about documenting what the code does, rather than using tests as a code design tool. But doing TDD for all the code, all the time, can slow down development. Not all code is critical enough to need TDD, and some tests can be retrofitted, like integration tests. To expedite development, the application can be released for integration (UI) and QA, and integration tests can be added later to document the system behavior.

So how does one verify that TDD is practiced, and that there is sufficient coverage, when and where required? Instead of relying on automated tools, I think the code review process can be expanded to review tests for quality, coverage and TDD practices.

Coverage tools and TDD cannot check the quality of tests, i.e., whether the tests properly assert and verify the code. Only a manual review can catch such test quality issues.
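For example, a test like this hypothetical one earns full line coverage yet verifies nothing; a coverage tool reports it green, and only a reviewer will notice the missing assertions:

    import org.junit.Test;

    public class OrderServiceTest {

        static class Order {
            final int quantity;
            Order(int quantity) { this.quantity = quantity; }
        }

        static class OrderService {
            double calculateTotal(Order order) { return order.quantity * 9.99; }
        }

        // Exercises calculateTotal() and lifts the coverage number,
        // but asserts nothing: it passes even if the total is wrong.
        @Test
        public void testCalculateTotal() {
            new OrderService().calculateTotal(new Order(2)); // result ignored
        }
    }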

The review process would also promote TDD. If the code submitted for review has no tests, it suggests that the developer did not consider testing during development, that the code might not be testable (maintainable), and that no self-review occurred. The reviewer can reject such code: if the code is important enough to be peer-reviewed, it is important enough to be self-reviewed. The reviewer can also check whether the coverage for the code is sufficient or unnecessary.

This would also increase the efficiency of the code review itself. The reviewer would read the code in the context of its tests and gain a better understanding of it, thus providing better feedback.

The cost-effective way to achieve software testability is to promote TDD and, instead of relying on automated tools, piggyback on the existing code review process to promote and ensure TDD.

Sep 15, 2014

Netflix Public API - Rest in peace

Netflix's decision to retire its public API may be based on its own business and IT strategy; however, since Netflix is a front-runner in the Web API trend, the decision needs to be assessed in a broader sense, specifically, what it means for enterprise API programs and their vision of increasing API exposure.

Web API is a rapidly growing trend in which enterprises offer programmatic access to their data, services and resources to developers: internal teams, external partners, or public third-party developers. Web APIs expose the data and functionality that is typically available via the enterprise's webapps (websites) to other consumers such as internal webapps, portals, mobile apps and B2B partners.

Having realized the value of APIs through reuse, where a single resource endpoint can service various consumers, enterprises are now looking at ways to expand their API programs to a wider consumer base. I think this expansion needs careful thought, and it can learn a great deal from the Netflix API program, which had to shut off one of its API consumers (the third-party developers). Netflix may get away with its decision by upsetting only a handful of developers; mainstream enterprises cannot do that without negatively affecting their business relationships and bottom lines.

Every API consumer brings in some cost and complexity that impacts API design and manageability. This sounds counterintuitive: API design is supposed to be consumer-agnostic, and a well-designed API should serve any consumer. But looking at Netflix, and based on my experience with API design over the last few years, this expectation cannot be met easily.

In my experience, an API design inadvertently gets influenced by the consumers it is initially developed for: a browser-based webapp, mobile, external partners, etc. The optimizations needed for different consumers have to be handled at the API design level. On one hand, the API needs strict security and policy controls for external users; on the other, it needs to handle coarse-grained responses and auto-discovery for an internal webapp. To serve all types of consumers, the API needs to stay fine-grained, but that may make the consuming webapp too chatty and cause performance issues. In reality, designing a single API that handles all optimizations for all its consumers gets cumbersome.
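To make the chattiness trade-off concrete, here is a hypothetical JAX-RS sketch (the resource and payloads are invented): the fine-grained endpoint stays generic, while the aggregate endpoint added for one webapp is exactly the kind of consumer-specific rule described above:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    // Fine-grained, consumer-agnostic endpoints: each returns one resource,
    // so a UI page showing an order plus customer plus shipment makes three calls.
    @Path("/orders")
    @Produces("application/json")
    public class OrderResource {

        @GET
        @Path("/{id}")
        public String order(@PathParam("id") String id) {
            return "{\"orderId\":\"" + id + "\"}";
        }

        // Aggregate added so one webapp can avoid three round trips;
        // convenient, but the API now carries a rule for a single consumer.
        @GET
        @Path("/{id}/summary")
        public String orderSummaryForWebapp(@PathParam("id") String id) {
            return "{\"orderId\":\"" + id + "\",\"customer\":{},\"shipment\":{}}";
        }
    }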

Netflix's VP of Edge Engineering, Daniel Jacobson, in his blog post "Why REST Keeps Me Up At Night", points out similar complexity when trying to design one-size-fits-all APIs. Here are a few extracts from his post:

"Our REST API, while very capable of handling the requests from our devices in a generic way, is optimized for none of them."

"That means that each device potentially has to work a little harder (or sometimes a lot harder) to get the data needed to create great user experiences because devices are different from each other."

"Because of the differences in these devices, Netflix UI teams would often have to do a range of things to get around our REST API to better serve the users of the device. Sometimes, the API team would be required to extend the base service to handle special cases, often resulting in spaghetti code or undocumented features."

The design solutions that handle consumer-specific complexity are expensive: either the consumer has to do extra work, or the services carry consumer-specific rules, or an extra proxy (intermediary) is required to handle consumer-specific features.

In the end, adding an API consumer has cost implications that need to be assessed against the business value of API expansion.

While Daniel Jacobson may have solved his problem by shutting down the public API and is having a good night's sleep, some of us still need to find a better way to rest.

Jun 19, 2014

REST API documentation - HTMLWadlGenerator for CXF

In Apache CXF, one can generate a WADL for any registered resource by appending ?_wadl to the resource URL. This WADL provides an excellent source of real-time REST API documentation, but the output format is not reader-friendly.

I extended the default CXF WadlGenerator to support the text/html media type using a WADL XSL stylesheet.

Here is the code for HTMLWadlGenerator, which can be registered as a jaxrs:provider with the jaxrs:server. To see the output, append ?_wadl&_type=text/html to the resource URL.
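If you are not using the Spring jaxrs:server namespace, the same registration can be done programmatically; a minimal sketch with CXF's JAXRSServerFactoryBean (HelloResource is a placeholder resource, HTMLWadlGenerator is the custom generator from this post):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;

    public class ServerStartup {

        // Trivial placeholder resource so the sketch is self-contained
        @Path("/hello")
        public static class HelloResource {
            @GET
            public String hello() { return "hello"; }
        }

        public static void main(String[] args) {
            JAXRSServerFactoryBean sf = new JAXRSServerFactoryBean();
            sf.setResourceClasses(HelloResource.class);
            sf.setProvider(new HTMLWadlGenerator()); // the generator from this post
            sf.setAddress("http://localhost:9000/");
            sf.create();
            // http://localhost:9000/hello?_wadl&_type=text/html should now render HTML
        }
    }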

This gives a nice-looking HTML page of REST API documentation for the registered resources.

Jun 8, 2014

Configure Solr Suggester

Solr includes an autosuggest component, the Suggester. From Solr 4.7 onwards, the implementation of this Suggester has changed: the old SpellChecker-based search component is replaced with a new suggester that utilizes the Lucene suggester module. The latest Solr download is preconfigured with this new suggester, but the documentation on the Solr wiki still describes the previous SpellChecker version.

It took me some time to understand the new suggester and get it working.

There are two configurations for the suggester, a search component and a request handler:

    <searchComponent name="suggest" class="solr.SuggestComponent">
      <lst name="suggester">
        <str name="name">mySuggester</str>
        <str name="lookupImpl">FuzzyLookupFactory</str>      <!-- org.apache.solr.spelling.suggest.fst -->
        <str name="dictionaryImpl">DocumentDictionaryFactory</str>     <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
        <str name="field">cat</str>
        <str name="weightField">price</str>
        <str name="suggestAnalyzerFieldType">string</str>
      </lst>
    </searchComponent>

    <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
      <lst name="defaults">
        <str name="suggest">true</str>
        <str name="suggest.count">10</str>
      </lst>
      <arr name="components">
        <str>suggest</str>
      </arr>
    </requestHandler>

To check the suggester, index a few documents with good test values for the cat field, which is set as the suggestion field.

The URL for getting suggestions looks like this (pass suggest.build=true the first time, to build the suggestion dictionary):

    http://localhost:8983/solr/collection1/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.build=true&suggest.q=A

In my case, this returns:

    <lst name="responseHeader">
      <int name="status">0</int>
      <int name="QTime">13</int>
    </lst>
    <str name="command">build</str>
    <lst name="suggest">
      <lst name="mySuggester">
        <lst name="A">
          <int name="numFound">2</int>
          <arr name="suggestions">
            <lst>
              <str name="term">A Clash of Kings</str>
              <long name="weight">0</long>
              <str name="payload"/>
            </lst>
            <lst>
              <str name="term">A Game of Thrones</str>
              <long name="weight">0</long>
              <str name="payload"/>
            </lst>
          </arr>
        </lst>
      </lst>
    </lst>

Since a default suggester is not configured, the suggest.dictionary parameter is required; without it, you will get an exception: "No suggester named default was configured".

You can configure a default suggester in solrconfig.xml:
    <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
      <lst name="defaults">
        <str name="suggest">true</str>
        <str name="suggest.count">10</str>
        <str name="suggest.dictionary">mySuggester</str>
      </lst>
      <arr name="components">
        <str>suggest</str>
      </arr>
    </requestHandler>

Now you should be able to get suggestions without having to specify the dictionary in the URL.
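To fetch the suggestions from Java, a minimal SolrJ sketch along these lines should work (the core name collection1 assumes the stock Solr 4.x example setup):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class SuggestClient {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
            SolrQuery query = new SolrQuery();
            query.setRequestHandler("/suggest");            // route to the suggest handler
            query.set("suggest.dictionary", "mySuggester"); // not needed once a default is set
            query.set("suggest.q", "A");                    // prefix to complete
            query.set("suggest.build", true);               // build the dictionary on first call
            QueryResponse response = solr.query(query);
            System.out.println(response.getResponse());     // raw NamedList with the suggestions
            solr.shutdown();
        }
    }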

Apr 13, 2014

Akka Java for large-scale event processing

We are designing a large-scale distributed event-driven system for real-time data replication across transactional databases. The data (messages) from the source system undergoes a series of transformations and routing logic before reaching its destination. These transformations are multi-process and multi-threaded operations, comprising smaller stateless steps and tasks that can be performed concurrently. There is no shared state across processes; instead, the state transformations are persisted in the database, and each process pulls its work queue directly from the database.

Based on this, we needed a technology that supported distributed event processing, routing and concurrency on the Java + Spring platform. The three options considered were a message broker (RabbitMQ), Spring Integration and Akka.

RabbitMQ: A message broker was the first choice because it is the traditional and proven solution for messaging/event processing; RabbitMQ, because it is a popular, lightweight open-source option with commercial support from a vendor we already use. I was pretty impressed with RabbitMQ: it was easy to use and lean, yet supported advanced distribution and messaging features. The only thing it lacked for us was the ability to persist messages in Oracle.

Even though RabbitMQ is open source (free), for enterprise use there is a substantial cost factor to it. As an MQ is an additional component in the middleware stack, it requires dedicated staff for administration and maintenance, and commercial support for the product. Also, the setup and configuration of a message broker has its own complexity and involves cross-team coordination.

MQs are primarily EAI products and provide cross-platform (multi-language, multi-protocol) support. They might be too bulky and expensive when used just as an asynchronous concurrency and parallelism solution.

Spring Integration: Spring has a few modules that provide scalable asynchronous execution:
Spring TaskExecutor provides asynchronous processing with lightweight thread pool options (a minimal sketch follows below).
Spring Batch allows distributed asynchronous processing via the Job Launcher and Job Repository.
Spring Integration extends these further by providing EAI features: messaging, routing and mediation capabilities.
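As a minimal sketch of the first option (assuming spring-context on the classpath), ThreadPoolTaskExecutor wraps a thread pool behind Spring's TaskExecutor abstraction:

    import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

    public class AsyncRunner {
        public static void main(String[] args) {
            // Lightweight thread-pool-backed executor from spring-context
            ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
            executor.setCorePoolSize(4);
            executor.setMaxPoolSize(8);
            executor.initialize();

            executor.execute(new Runnable() {
                public void run() {
                    System.out.println("async task on " + Thread.currentThread().getName());
                }
            });
            executor.shutdown();
        }
    }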

While all three Spring modules have some of the required features, it was difficult to get everything working together. Like this user, I was expecting Spring Integration to have RMI-like remoting capability.

Akka Java: Akka is a toolkit and runtime for building highly concurrent, distributed, and fault-tolerant event-driven applications on the JVM. It has a Java API, so I decided to give it a try.

Akka was easy to get started with; I found the Activator quite helpful. Akka is based on the Actor Model, a message-passing paradigm that achieves concurrency without shared objects and blocking. In Akka, rather than invoking an object directly, you construct a message and send it to the object (called an actor) by way of an actor reference. This design greatly simplifies concurrency management.
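For illustration, here is a minimal sketch with the Akka 2.x Java API (actor and message names are invented): instead of calling a method on an object, you tell a message to an ActorRef, and Akka delivers it to onReceive one message at a time:

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    // Each actor processes its mailbox one message at a time,
    // so no explicit locking is needed around its internal state.
    public class TransformerActor extends UntypedActor {

        @Override
        public void onReceive(Object message) {
            if (message instanceof String) {
                System.out.println("transforming: " + message);
            } else {
                unhandled(message);
            }
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("replication");
            ActorRef transformer = system.actorOf(Props.create(TransformerActor.class), "transformer");
            transformer.tell("record-42", ActorRef.noSender()); // message passing, no direct call
        }
    }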

However, this simplicity does not mean that a traditional lock-based concurrent program (threads/synchronization) can be converted to Akka with a few code changes. One needs to design an actor system by defining smaller tasks, messages and the communication between them. There is a learning curve for Akka's concepts and the Actor Model paradigm, but it is comparatively small given the complexity of concurrency and parallelism that it abstracts.

Akka offers the right level of abstraction: you do not have to worry about threads and synchronization of shared state, yet you get full flexibility and control to write your custom concurrency solution.

Besides simplicity, I thought the real power of Akka is remoting and its ability to distribute actors across multiple nodes for high scalability. Akka's location transparency and fault tolerance make it easy to scale and distribute the application without code changes.

I was able to build a PoC for my multi-process and multi-threading use case fairly easily. I still need to work out Spring injection in actors.

A few words of caution: Akka's Java code has a lot of typecasting due to Scala's type system, and achieving object immutability can be tricky; I am tempted to reuse my existing JPA entities (mutable) as messages to reduce database calls.
Also, the Akka community is geared towards Scala, and there is less material on Akka Java.

In spite of all this, Akka Java seems the cheapest, fastest and most efficient option of the three.