Android: Dagger 1 and 2 living together

In this article I will give a quick explanation of how we can have Dagger 1 and 2 sitting and working together. Let me just clear this up: I will not talk about the benefits of either dependency injection or Dagger, since I have done that in the past. Also, the title of this post may sound a little misleading up front, but the idea is that hopefully, by the end of it, you will understand the reason for it.


This is the first question that comes to my mind at the moment of this writing. Well… let me give you some more context here: you have a big android/java codebase using Dagger 1 and want to reorganize your dependency injection approach and migrate to Dagger 2.

You have a couple of options:

  • Do it in one shot, for instance, send a big PR (the team is not going to be very happy reviewing it, and it might lead to a lot of conflicts with existing development branches and features).
  • Do it gradually in small pieces (divide and conquer: always break down a big problem into smaller ones, especially in big codebases with many people contributing to them).


Just for the record, the picture reflects our SoundCloud Listeners Android app main module, which has approximately 240k lines of code (not counting internally developed libraries) and at least 20 contributors to the main codebase.

Solving the puzzle

When trying to use both Dagger versions together, you might run into different situations like classpath clashes and conflicts or transitive dependency issues. So, in order to avoid them, we have to somehow relocate Dagger 2 packages: this sounds scary, but don’t give up and continue reading, I promise there is light at the end of the tunnel.

With that being said, we are going to use a Gradle plugin called “Shadow”, created by John Engelman.
Basically, the idea is to use the shadow functionality to achieve our goal, as described in the plugin’s official documentation:

“Dependency bundling and relocation is the main use case for library authors. The goal of a bundled library is to create a pre-packaged dependency for other libraries or applications to utilize. Often in these scenarios, a library may contain a dependency that a downstream library or application also uses. In some cases, different versions of this common dependency can cause an issue in either the upstream library or the downstream application. These issues often manifest themselves as binary incompatibilities in either the library or application code. By utilizing Shadow’s ability to relocate the package names for dependencies, a library author can ensure that the library’s dependencies will not conflict with the same dependency being declared by the downstream application.”

Putting all pieces together

Now we are ready to go: given our Android application project, the first step is to add two plain Java library modules which are going to represent the Dagger 2 main library and its compiler, as you can see in the following picture:


The idea is simple: we relocate Dagger 2 packages and you can see the magic happening if you sneak into the build.gradle file (shadowJar configuration block) of any of the recently added projects:
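The exact script lives in the sample project; as an illustration, a shadowJar configuration of roughly this shape (module layout, coordinates and target package names here are my assumptions, not the sample's verbatim script) is what performs the relocation:

```groovy
// build.gradle of the plain Java "dagger2" module (illustrative sketch)
apply plugin: 'java'
apply plugin: 'com.github.johnrengelman.shadow'

dependencies {
    compile 'com.google.dagger:dagger:2.0'
}

shadowJar {
    // Move Dagger 2's packages (and its copy of javax.inject) into new
    // package names, so its classes can never clash with Dagger 1's
    // classes on the same classpath.
    relocate 'dagger', 'dagger2'
    relocate 'javax.inject', 'javax.inject2'
}
```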

The script speaks for itself here, and now we have our “shadowed” Dagger 2 version ready to be used in our project. Something to keep in mind is that, in order to differentiate what is being injected by Dagger 1 and what by Dagger 2, we will have to use different annotations: this is necessary to avoid clashes. This brings both upsides and downsides, but continue reading, we will get back to it later on.

Now it is time to set up our main application dependencies:

Dagger 1 remains the same, but we have our new friend coming from the compiled “relocated” projects. This should work out of the box.

Alternative setup

If you do not want to deal with adding 2 extra library components to your project, another way is to use the 2 jars generated by my sample project: place them into /libs or any other folder (or use a repository manager, like a Nexus server for example) and set up the dependencies like this:
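A sketch of what such a dependencies block could look like (the jar file names and the use of the android-apt plugin for the compiler are my assumptions, not taken from the sample project):

```groovy
// app/build.gradle (illustrative; jars placed under app/libs)
dependencies {
    compile files('libs/dagger2-shadowed.jar')
    apt     files('libs/dagger2-compiler-shadowed.jar') // via the android-apt plugin
}
```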

Migration process

Before you jump on this journey, here is a little reminder: this is worth it only if you really have a big codebase and you want to do the migration gradually; otherwise, always keep it simple, and neither over-engineer nor reinvent the wheel. You have been warned ;).

Now that you have (hopefully) decided to continue, here is an example of a class that can be injected by both Daggers (more code in my sample project on github):
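As a self-contained illustration of the idea, here is a hypothetical sketch: the two annotations below are stand-ins for the real javax.inject.Singleton (read by Dagger 1) and the relocated @Singleton2 (read by the shadowed Dagger 2), defined locally so the snippet compiles on its own:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class DualDagger {
    // Stand-ins for the real annotations, purely so this sketch is
    // self-contained: javax.inject.Singleton (Dagger 1) and the
    // @Singleton2 coming out of the relocated Dagger 2 jar.
    @Retention(RetentionPolicy.RUNTIME) @interface Singleton {}
    @Retention(RetentionPolicy.RUNTIME) @interface Singleton2 {}

    // The same class can be provided by both graphs: Dagger 1 reads
    // @Singleton, the shadowed Dagger 2 reads @Singleton2.
    @Singleton
    @Singleton2
    static class ApiClient {}

    static boolean annotatedForBoth() {
        return ApiClient.class.isAnnotationPresent(Singleton.class)
                && ApiClient.class.isAnnotationPresent(Singleton2.class);
    }

    public static void main(String[] args) {
        System.out.println(annotatedForBoth()); // prints "true"
    }
}
```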

By looking at this class you can tell that it is a singleton (@Singleton annotation for Dagger 1 and @Singleton2 annotation for Dagger 2). In this case we are not saving any global state (which is good), but there might be cases where you cannot avoid it and might run into weird and unexpected behavior, due to the fact that you have 2 different instances of the same class being injected, sitting in different dependency graphs. In such a case it is no longer a singleton, which is definitely dangerous, so make sure you take responsibility for this and that your tests cover all these cases.

When the migration is over, you can remove Dagger 1 and set up the Dagger 2 dependency as you would do with any other one. Afterwards, you will have to get rid of all Dagger 1 related annotations, plus the ones that Dagger 2 does not understand anymore, and replace them:

  • @Inject2 …with… @Inject
  • @Singleton2 …with… @Singleton
  • @Component2 …with… @Component
  • @Provider2 …with… @Provider
  • @Qualifier2 …with… @Qualifier
  • @Scope2 …with… @Scope
  • @Lazy2 …with… @Lazy

Hopefully at this point you will have everything up and running, with the job done little by little and without much suffering and pain, so happy migration!


In this article we have learned about and prepared the terrain to make Dagger 1 and 2 live together in a friendly way. Keep in mind that this solution applies not only to Dagger but to any other similar migration process you might face in the future.

Finally I hope you find this article useful, and as usual any feedback is very welcome and important. You can find me on twitter: @fernando_cejas.


How to use Optional values on Java and Android

First of all, this is not a new topic and a lot has already been discussed about it.
With that being said, in this article I want to explain what Optional<T> is, expose a few use case scenarios, compare different alternatives (in other languages) and, finally, show you how we can effectively make use of the (nonexistent for now) Optional<T> API on Android (although this can be applied to any Java project, especially those targeting Java 7).

To get started, let me quote this (retrieved from the Official Java 8 documentation):
“A wise man once said you are not a real Java programmer until you’ve dealt with a null pointer exception, which is the source of many problems because it is often used to denote the absence of a value”.

Although this statement is true and Java 8 documentation refers to the use of Optional<T> as NullPointerException saver, in my opinion, it is not only useful to minimize the impact of NPE, but to create more meaningful and readable APIs.

Additionally, it is well known that not being careful when using null values can lead to a variety of bugs. Moreover, null is ambiguous, and we don’t always have a clear meaning for it: is it an inexistent value? For example, when a Map.get() method returns null, it can mean the value is absent or that the value is present and is null.

We will try to answer these questions in this little journey. Let’s get our hands dirty then!

What is an Optional?

First definition from Java 8 documentation:
“Optional object is used to represent null with absent value. It provides various utility methods to facilitate code to handle values as ‘available’ or ‘not available’ instead of checking null values”.

This is another similar definition from the Official Guava documentation:
“An immutable object that may contain a non-null reference to another object. Each instance of this type either contains a non-null reference, or contains nothing (in which case we say that the reference is “absent”); it is never said to “contain null”. A non-null Optional<T> reference can be used as a replacement for a nullable T reference. It allows you to represent “a T that must be present” and a “a T that might be absent” as two distinct types in your program, which can aid clarity”.

In a nutshell, the Optional Type API provides a container object which is used to contain not-null objects. Let’s see a quick example so you get a better understanding of what I am talking about:
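The article’s snippets use Guava’s Optional<T>; purely so this sketch runs without Guava on the classpath, it is written with Java 8’s java.util.Optional, whose API mirrors Guava’s (empty() for absent(), ofNullable() for fromNullable(), orElse() for or()):

```java
import java.util.Optional;

public class OptionalIntro {
    // Wrap a possibly-null value so callers must deal with absence explicitly.
    static Optional<String> findNickname(String raw) {
        return Optional.ofNullable(raw); // empty when raw is null
    }

    public static void main(String[] args) {
        Optional<String> nickname = findNickname(null);
        // The caller is forced to check presence (or supply a default)
        // before touching the value.
        String display = nickname.isPresent() ? nickname.get() : "anonymous";
        System.out.println(display); // prints "anonymous"
    }
}
```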

As you can see we are wrapping an object of type <T> inside an Optional<T> so we can check its existence later. In other words, Optional<T> forces you to care about the value, since in order to retrieve it, you have to call the get() method (as a good practice always check the presence of it first or return a default value). Just to be clear, we are using Guava‘s Optional<T> here.
Don’t worry much if you still don’t understand, we will explore more afterwards.

Java 8, Scala, Groovy and Kotlin Optional/Option APIs

As I mentioned above, in this article we will focus on Guava’s Optional<T>, although it is worth taking a quick look at what other programming languages have to offer.

Let’s have a look at what Groovy and Kotlin bring up. These 2 languages offer a similar approach to null safety: the ‘Elvis Operator’. They have added some syntactic sugar, and the syntax looks similar in both of them. Let’s check this piece of Kotlin code: when we have a nullable reference r, we can say “if r is not null, use it, otherwise use some non-null value x”:

Instead of writing the complete if-expression, this can be expressed with the Elvis operator, written ?::

If the expression to the left of ?: is not null, the elvis operator returns it, otherwise it returns the expression to the right. Note that the right-hand side expression is evaluated only if the left-hand side is null. For the record, Kotlin also has a check for null conditions at compilation time.
You can dive deeper by checking the official documentation and, by the way, I’m neither a Groovy nor a Kotlin guy, so I will leave this to the experts :).

On both the Java 8 and Scala sides we find a monadic approach for Optional<T> (Java) and Option[T] (Scala), allowing us to use flatMap(), map(), etc. This means we can compose data streams using Optional<T> in a functional programming style. Kotlin also offers an OptionIF<T> monad with the same purpose.
Let’s have a quick look at this Scala example from Sean Parsons for a better understanding:

Last but not least, we have Optional<T> from Guava. In its favor, let’s say that its simplified API fits the Java 7 model perfectly: there is only one way to use it, the imperative one, since in fact it was developed for a Java that lacks first-class functions.

I guess so far so good, but there is no Android Java 7 sample code… Ok, you are right, but you will have to keep on reading, so be patient, there is more coming up. Also, if you are wondering whether, in order to use it on Android, you will have to compile Guava and its 20k methods, the answer is NO: there is an alternative to bring Optional<T> into the game.

How can we use Optional<T> in Android?

The first point to raise here is that we are stuck with Java 7, so there is no built-in Optional<T> and, unfortunately, we have to ask 3rd-party libraries for help…
Our first player is Guava, which for Android might not be a good catch, especially (as mentioned above) because of the 20k methods it brings to your .apk (I’m pretty sure you have heard of the 65k method limit issue ;)).

The second option is to use Arrow, a lightweight open source library I created by gathering and including useful stuff I use in my day-to-day Android development, plus some other utilities I wrote myself, such as annotations for code decoration, etc. You can check the project, documentation and features on Github. One thing to remark and shout loudly is that ALL CREDITS GO TO THE CREATORS OF THESE AWESOME APIs.

How do we create Optional<T>?

The Optional<T> API is pretty straightforward:

Here are the Optional<T> query methods:
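As a runnable stand-in (again written with java.util.Optional, whose creation and query methods mirror Guava’s of()/absent()/fromNullable() and isPresent()/get()/or()):

```java
import java.util.Optional;

public class OptionalApi {
    public static void main(String[] args) {
        // Creating instances:
        Optional<String> present = Optional.of("value");      // value must be non-null
        Optional<String> absent  = Optional.empty();          // no value at all
        Optional<String> maybe   = Optional.ofNullable(null); // empty when given null

        // Querying them:
        System.out.println(present.isPresent());      // prints "true"
        System.out.println(present.get());            // prints "value"
        System.out.println(absent.orElse("default")); // prints "default"
        System.out.println(maybe.isPresent());        // prints "false"
    }
}
```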


It is time for code samples and use cases so don’t leave the room yet.

Case scenario #1

This is a well known historical Tony Hoare‘s phrase when he created the null reference:
“I call it my billion-dollar mistake. It was the invention of the null reference in 1965. I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement.”

The main issue with the following code is that it relies on a null reference to indicate the absence of a registration number (a bad practice), so we can fix this by using Optional<T> and printing according to whether or not the value is present:
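A minimal sketch of the fixed version (the Car class and its fields are illustrative, and java.util.Optional stands in for Guava’s here):

```java
import java.util.Optional;

public class Car {
    private final String registrationNumber; // may legitimately be absent

    public Car(String registrationNumber) {
        this.registrationNumber = registrationNumber;
    }

    // Instead of returning a null reference, expose the absence in the type.
    public Optional<String> getRegistrationNumber() {
        return Optional.ofNullable(registrationNumber);
    }

    // Print according to whether or not the value is present.
    public String describe() {
        Optional<String> reg = getRegistrationNumber();
        return reg.isPresent() ? "Registered: " + reg.get()
                               : "No registration number";
    }
}
```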

The most obvious use case is avoiding meaningless nulls. Check the full class implementation on Github.
Let’s move forward to the next scenario.

Case scenario #2

Let’s say we need to parse a JSON file coming from an API response (something very common in mobile development). In this case we can use Optional<T> in our entity in order to force the client to care about the existence of the value before using it or doing anything with it.
Check out “nickname” field and getter in the following sample code:
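A sketch of such an entity (field names are illustrative; the deserialization wiring is omitted, and java.util.Optional stands in for Guava’s):

```java
import java.util.Optional;

public class UserEntity {
    private final String username; // required field in the payload
    private final String nickname; // optional field in the JSON response

    public UserEntity(String username, String nickname) {
        this.username = username;
        this.nickname = nickname;
    }

    public String getUsername() {
        return username;
    }

    // Wrapping the nullable field forces the client to care about the
    // existence of the value before doing anything with it.
    public Optional<String> getNickname() {
        return Optional.ofNullable(nickname);
    }
}
```

A client can then fall back safely, e.g. `user.getNickname().orElse(user.getUsername())`.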

Complete sample class on Github.

Case scenario #3

This is another use case we usually stumble upon at @SoundCloud in our Android application.
When we need to construct our feed or any list of items and show them at UI level (presentation models), we have items coming from different data sources, and some of them might be Optional<T>, like for example, a Facebook invitation, a promoted track, etc.

Check this little example, which tries to emulate the above situation in a very simplified way (for learning purpose) using RxJava:

The most important part here is that when we combine both Observables<T> (the tracks() and ads() methods), we use the flatMap() and filter() operators to determine whether or not we are gonna emit ads and, in turn, display them at the UI level (I’m using Java 8 lambdas here to make the code more readable):
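As a dependency-free sketch of the same idea (names and shapes are illustrative, and java.util.stream plays the role the Rx flatMap()/filter() operators play in the real example): an absent Optional ad is simply dropped from the combined feed.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FeedBuilder {
    // Combine always-present tracks with a possibly-absent promoted ad;
    // when the ad is absent it never reaches the feed.
    static List<String> feed(List<String> tracks, Optional<String> promotedAd) {
        return Stream.concat(
                tracks.stream(),
                promotedAd.map(Stream::of).orElseGet(Stream::empty))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tracks = Arrays.asList("track1", "track2");
        System.out.println(feed(tracks, Optional.empty()));  // [track1, track2]
        System.out.println(feed(tracks, Optional.of("ad"))); // [track1, track2, ad]
    }
}
```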

Check out the full implementation on Github.


To wrap up: in software development there are no silver bullets, and as programmers we tend to overthink and overuse things, so don’t pollute your code with Optional<T> everywhere; use it carefully where it makes sense.

Also let me quote Joshua Bloch in his talk ‘How to Design a Good API and Why it Matters‘:
“APIs should be easy to use and hard to misuse: It should be easy to do simple things; possible to do complex things; and impossible, or at least difficult, to do wrong things.”
I completely agree with this and, from an API design standpoint, Optional<T> is a good example of a well-designed API: it will help you address and protect against NullPointerException issues (although not fully eliminate them), write concise and readable code, and it will additionally give you a more meaningful codebase.

Sample Code

You can find all the sample code in a Github repo I created for this purpose, and you can visit the Arrow project repo to make use of Optional<T> in Android:


Debugging RxJava on Android

Debugging is the process of finding and resolving bugs or defects that prevent correct operation of computer software (Wikipedia).

Nowadays debugging is not an easy task, especially with all the complexity around current systems: Android is no exception to this rule, and since we are dealing with asynchronous execution, it becomes way harder.

As you might know, at @SoundCloud, we are heavily using RxJava as one of our core components for Android Development, so in this article I am gonna walk you through the way we debug Rx Observables and Subscribers.

Give a warm welcome to Frodo

Let me get started by introducing Frodo, but first, if you already watched Matthias Käppler’s talk at GOTO Conference (if you haven’t yet, I strongly recommend it), you may have noticed that he talks about someone called Gandalf (minute 41:15). All right, I have to say that in the beginning Gandalf was my failed attempt to create an Aspect Oriented Library for Android; fortunately, after working hard and receiving useful feedback, it became an Android Development Kit we use at @SoundCloud. However, I wanted to have something smaller that solves only one problem, so I decided to extract the RxJava logging specifics I had been working on, and give life to Frodo.

Frodo is no more than an Android Library for Logging RxJava Observables and Subscribers (for now), let’s say Gandalf’s little son or brother. It was actually inspired by Jake Wharton’s Hugo Library.


Debugging RxJava

First of all, I assume that you have basic knowledge about RxJava and its core components: Observables and Subscribers.

Debugging is a cross-cutting concern, and we know how frustrating and painful it can be. Additionally, many times you have to write code (that is not part of your business logic) in order to debug stuff, which makes things even more complicated, especially when it comes to asynchronous code execution.

Frodo was born to avoid having to write code for debugging RxJava objects. It is based on Java annotations and relies on a Gradle plugin that detects when the Debug build type of your application is compiled, and weaves code which prints RxJava object logging information to the Android logcat output. Thus, it is safe to keep Frodo annotations in your codebase even when you are generating release versions of your Android app. So now, let’s get our hands dirty and have a taste of it.

Using Frodo

To use Frodo the first thing we need to do is to simply apply a Gradle Plugin to our Android Project like this:
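A rough sketch of that setup, using the plugin coordinates mentioned in this article (the repository block is an assumption; adjust it to wherever the artifact is hosted):

```groovy
// Root build.gradle (sketch)
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath "com.fernandocejas.frodo:frodo-plugin:0.8.1"
    }
}

apply plugin: 'com.fernandocejas.frodo'
```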

As you can see, we add “com.fernandocejas.frodo:frodo-plugin:0.8.1” to the classpath and afterwards we apply the plugin ‘com.fernandocejas.frodo’.
That should be enough to have access to the Java annotations provided by the Library.

Inspecting @RxLogObservable

The first core functionality of Frodo is to log RxJava Observables through @RxLogObservable Java annotation. Let’s say we have a method that returns an Observable which will emit a list of some sort of DummyClass:
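A sketch of such an annotated method (assuming RxJava 1.x on the classpath; DummyClass and the buildDummyList() helper are hypothetical):

```java
public class ObservableSample {

    @RxLogObservable
    public Observable<List<DummyClass>> list() {
        // Frodo weaves logging code around the Observable returned here.
        return Observable.just(buildDummyList());
    }
}
```

Subscribing to it, e.g. `new ObservableSample().list().subscribe(subscriber)`, then produces the logcat output described next.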

Then we subscribe to our sample observable:

When compiling and running our application, this is the information we are gonna see on the logcat:

Basically this means that we subscribed to an Observable returned by the list() method in ObservableSample class. Then we get information about the emitted items, schedulers and events triggered by the annotated Observable.

Inspecting @RxLogSubscriber

Let’s now explore what @RxLogSubscriber is capable of.
To put an example, let’s create a RxJava dummy Subscriber and annotate it with @RxLogSubscriber.
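A sketch of such a subscriber (assuming RxJava 1.x; the class name and no-op bodies match the behavior described next):

```java
@RxLogSubscriber
public class MySubscriberBackpressure extends Subscriber<Integer> {

    @Override public void onStart() {
        request(16); // only request 16 elements
    }

    @Override public void onNext(Integer value) {
        // deliberately does nothing with the received item
    }

    @Override public void onCompleted() { /* no-op */ }

    @Override public void onError(Throwable throwable) { /* no-op */ }
}
```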

Forget about the backpressure name of this Subscriber for now, since that topic deserves a whole article. Just know that this Subscriber will only request 16 elements and will do nothing with the items it receives in the onNext() method. Even so, we still wanna see what is going on when it subscribes to any Observable which emits Integer values:

Here is when we subscribe to our SampleObservable:

Again when we compile and run our application, this is what we get from the logcat output:

Information here includes each of the items received, number of elements, schedulers, execution time and events triggered.

As you can see, this information is useful in cases of backpressure, to see in which thread the items are being emitted, or when we wanna see if our Subscriber has subscribed successfully, thus avoiding memory leaks for example.

Frodo under the hood

In this article I’m not gonna explain in detail how the library works internally; however, if you are curious about it, you can check an article I wrote last year which includes an example with the same approach I am using for Frodo.

You can also look into a presentation I prepared as an introduction for both AOP and the Library or even better, dive into the source code.

Disclaimer: Early stage

Frodo was just born and there is a long way ahead of it. It is still in a very early stage, so you might find issues or things to improve.

Actually, one of the main reasons why it was open sourced was to receive feedback/input from the community in order to improve it and make it better and more useful. I have to say that I’m very excited, and I have already used it in 3 different projects without many problems (check the known issues section below for more information). Of course, pull requests are very welcome too.

Known issues

So far, there is one well known issue: since Frodo relies on a Gradle plugin (as explained earlier) to detect the Android Debug build variant and weave code, if you make use of Android Library Projects, when you build your application (even the debug build type), the official Android Gradle Plugin will always generate release versions of all the Android Library projects included in your solution, which stops Frodo from injecting generated code into annotated methods/classes. Of course this is not gonna make your app crash, but you won’t see any output on logcat. There is a workaround for this, but be careful if you use it, since you do not wanna ship a release version of your app with business objects being logged all over the place and exposing critical information.
Just add this flag to the android section in the build.gradle file of your Android Library Project:
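If I recall correctly, the flag in question was defaultPublishConfig; treat this as an assumption and double-check the Frodo documentation before relying on it:

```groovy
// build.gradle of the Android Library Project (assumed workaround)
android {
    defaultPublishConfig "debug"
}
```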

Frodo Example Application

The repository includes a sample app where you can see different use cases, such as Observable errors and other logging information. I have also enabled Frodo in my Android Clean Architecture repo if you wanna have a look into it.

Wrapping up

This is pretty much I have to offer in this article, and I hope you have found Frodo useful.
The first version is out and you can find the repository of the project here:
As always, any feedback is welcome. PRs as well if you wanna contribute. See you soon.

Useful links

Architecting Android…The evolution

Hey there! After a while (and a lot of feedback received) I decided it was a good time to get back to this topic and give you another taste of what I consider a good approach when it comes to architecting modern mobile applications (android in this case).

Before getting started, I assume that you already read my previous post about Architecting Android…The clean way? If not, this is a good opportunity to get in touch with it in order to have a better understanding of the story I’m going to tell you right here:


Architecture evolution

Evolution stands for a gradual process in which something changes into a different and usually more complex or better form.

That said, software evolves and changes over time, and so does an architecture. Actually, a good software design must help us grow and extend our solution by keeping it healthy, without having to rewrite everything (although there are cases where that approach is better, but that is a topic for another article, so let’s focus on what I pointed out earlier, trust me).

In this article, I am going to walk you through key points I consider necessary and important, to keep the sanity of our android codebase. Keep in mind this picture and let’s get started.


Reactive approach: RxJava

I’m not going to talk about the benefits of RxJava here (I assume you already had a taste of it), since there are a lot of articles and badasses of this technology doing an excellent job out there. However, I will point out what makes it interesting in regard to android application development, and how it has helped me evolve my first approach to clean architecture.

First, I opted for a reactive pattern by converting use cases (called interactors in the clean architecture naming convention) to return Observables<T> which means all the lower layers will follow the chain and return Observables<T> too.

As you can see here, all use cases inherit from this abstract class and implement the abstract method buildUseCaseObservable() which will setup an Observable<T> that is going to do the hard job and return the needed data.

Something to highlight is the fact that in the execute() method we make sure our Observable<T> executes itself in a separate thread, thus minimizing how much we block the android main thread. The result is pushed back onto the Android main thread through the Android main thread scheduler.
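The abstract class looks roughly like this (a sketch assuming RxJava 1.x and the sample project's ThreadExecutor/PostExecutionThread abstractions, which wrap a background Executor and the Android main thread scheduler):

```java
public abstract class UseCase {

    private final ThreadExecutor threadExecutor;
    private final PostExecutionThread postExecutionThread;

    private Subscription subscription = Subscriptions.empty();

    protected UseCase(ThreadExecutor threadExecutor,
                      PostExecutionThread postExecutionThread) {
        this.threadExecutor = threadExecutor;
        this.postExecutionThread = postExecutionThread;
    }

    // Each concrete use case builds the Observable doing the hard job.
    protected abstract Observable buildUseCaseObservable();

    public void execute(Subscriber useCaseSubscriber) {
        this.subscription = this.buildUseCaseObservable()
                .subscribeOn(Schedulers.from(threadExecutor))      // background work
                .observeOn(postExecutionThread.getScheduler())     // results on main thread
                .subscribe(useCaseSubscriber);
    }

    public void unsubscribe() {
        if (!subscription.isUnsubscribed()) {
            subscription.unsubscribe();
        }
    }
}
```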

So far, we have our Observable<T> up and running, but, as you know, someone has to observe the data sequence it emits. To achieve this, I evolved presenters (part of MVP in the presentation layer) into Subscribers which “react” to the items emitted by use cases, in order to update the user interface.

Here is how the subscriber looks:

Every subscriber is an inner class inside each presenter and implements a DefaultSubscriber<T> created basically for default error handling.
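The base class is essentially a set of overridable no-op defaults (a sketch assuming RxJava 1.x):

```java
public class DefaultSubscriber<T> extends rx.Subscriber<T> {

    @Override public void onCompleted() { /* no-op by default */ }

    @Override public void onError(Throwable e) {
        // default error handling goes here; presenters override as needed
    }

    @Override public void onNext(T t) { /* no-op by default */ }
}
```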

After putting all pieces in place, you can get the whole idea by having a look at the following picture:


Let’s enumerate a bunch of benefits we get out of this RxJava based approach:

  • Decoupling between Observables and Subscribers: makes maintainability and testing easier.
  • Simplified asynchronous tasks: java threads and futures are complex to manipulate and synchronize if more than one single level of asynchronous execution is required, so by using schedulers we can jump between background and main thread in an easy way (with no extra effort), especially when we need to update the UI. We also avoid what we call a “callback hell”, which makes our code unreadable and hard to follow up.
  • Data transformation/composition: we can combine multiple Observables<T>  without affecting the client, which makes our solution more scalable.
  • Error handling: a signal is emitted to the consumer when an error has occurred within any Observable<T>.

From my point of view there is one drawback, and indeed a price to pay, which has to do with the learning curve for developers who are not familiar with the concept. However, you get very valuable stuff out of it. Reactive for the win!

Dependency Injection: Dagger 2

I’m not going to talk much of dependency injection cause I have already written a whole article, which I strongly recommend you to read, so we can stay on the same page here.

That said, it is worth mentioning that by implementing a dependency injection framework like Dagger 2 we gain:

  • Components reuse, since dependencies can be injected and configured externally.
  • When injecting abstractions as collaborators, we can just change the implementation of any object without having to make a lot of changes in our codebase, since that object instantiation resides in one place isolated and decoupled.
  • Dependencies can be injected into a component: it is possible to inject mock implementations of these dependencies which makes testing easier.

Lambda expressions: Retrolambda

No one will complain about making use of Java 8 lambdas in our code, even more so when they simplify it and get rid of a lot of boilerplate, as you can see in this piece of code:
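The original snippet is Rx-specific; as a generic, self-contained illustration of the boilerplate lambdas remove, compare an anonymous class with its lambda equivalent:

```java
public class LambdaExample {
    // Pre-Java-8 style: a full anonymous class just to run one statement.
    static String runOldStyle() {
        final StringBuilder out = new StringBuilder();
        Runnable r = new Runnable() {
            @Override public void run() {
                out.append("done");
            }
        };
        r.run();
        return out.toString();
    }

    // The same thing expressed as a lambda.
    static String runWithLambda() {
        StringBuilder out = new StringBuilder();
        Runnable r = () -> out.append("done");
        r.run();
        return out.toString();
    }
}
```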

However, I have mixed feelings here and will explain why. It turns out that at @SoundCloud we had a discussion around Retrolambda, mainly whether or not to use it and the outcome was:

  1. Pros:
    • Lambdas and method references.
    • Try with resources.
    • Dev karma.
  2. Cons:
    • Accidental use of Java 8 APIs.
    • 3rd part lib, quite intrusive.
    • 3rd part gradle plugin to make it work with Android.

Finally, we decided it was not something that would solve any problems for us: your code looks better and more readable, but it was something we could live without, since nowadays all the most powerful IDEs have code folding options which cover this need, at least in an acceptable manner.

Honestly, the main reason why I used it here was more to play around with it and have a taste of lambdas on Android, although I would probably use it again for a spare-time project. I will leave the decision up to you. I am just exposing my field of vision here. Of course, the author of this library deserves kudos for such an amazing job.

Testing approach

In terms of testing, there are no big changes in relation to the first version of the example:

  • Presentation layer: UI tests with Espresso 2 and Android Instrumentation.
  • Domain layer: JUnit + Mockito since it is a regular Java module.
  • Data layer: Migrated the test battery to use Robolectric 3 + JUnit + Mockito. Tests for this layer used to live in a separate Android module, since back then (at the moment of the first version of the example) there was no built-in unit test support, and setting up a framework like Robolectric was complicated and required a series of hacks to make it work properly.

Fortunately, that is part of the past, and now everything works out of the box, so I could relocate the tests inside the data module, specifically into their default location: the src/test/java folder.

Package organization

I consider code/package organization one of the key factors of a good architecture: package structure is the very first thing encountered by a programmer when browsing source code. Everything flows from it. Everything depends on it.

We can distinguish between 2 paths you can take to divide up your application into packages:

  • Package by layer: Each package contains items that usually aren’t closely related to each other. This results in packages with low cohesion and low modularity, with high coupling between packages. As a result, editing a feature involves editing files across different packages. In addition, deleting a feature can almost never be performed in a single operation.
  • Package by feature: It uses packages to reflect the feature set. It tries to place all items related to a single feature (and only that feature) into a single package. This results in packages with high cohesion and high modularity, and with minimal coupling between packages. Items that work closely together are placed next to each other. They aren’t spread out all over the application.

My recommendation is to go with packages by features, which bring these main benefits:

  • Higher Modularity
  • Easier Code Navigation
  • Minimizes Scope

It is also interesting to add that if you are working with feature teams (as we do at @SoundCloud), code ownership will be easier to organize and more modularized, which is a win in a growing organization where many developers work on the same codebase.


As you can see, my approach looks like packages organized by layer: I might have gotten it wrong here (and grouped everything under ‘users’, for example), but I will forgive myself in this case, because this sample is for learning purposes and what I wanted to expose were the main concepts of the clean architecture approach. DO AS I SAY, NOT AS I DO :).

Extra ball: organizing your build logic

We all know that you build a house from the foundations up. The same happens with software development, and here I want to remark that, from my perspective, the build system (and its organization) is an important piece of a software architecture.

On Android we use Gradle, which is a platform-agnostic and indeed very powerful build system. The idea here is to go through a bunch of tips and tricks that can simplify your life when it comes to organizing the way you build your application:

  • Group stuff by functionality in separate gradle build files.


Thus, you can use “apply from: ‘buildsystem/ci.gradle’” to plug that configuration into any gradle build file. Do not put everything in only one build.gradle file, otherwise you will start creating a monster. Lesson learned.
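In practice, the root build.gradle ends up as little more than a list of such includes (the second file name here is illustrative):

```groovy
// Root build.gradle (sketch): each concern lives in its own file
apply from: 'buildsystem/ci.gradle'
apply from: 'buildsystem/dependencies.gradle'
```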

  • Create maps of dependencies

This is very useful if you wanna reuse the same artifact version across different modules in your project, or maybe the other way around, where you have to apply different dependency versions to different modules. Another plus is that you control the dependencies in one place and, thus, bumping an artifact version is pretty straightforward.
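Such a dependency map can be sketched like this (file name, map keys and versions are illustrative):

```groovy
// buildsystem/dependencies.gradle (illustrative names and versions)
ext {
    daggerVersion = '2.0'
    rxJavaVersion = '1.0.14'

    // One place to declare every artifact used across modules.
    appDependencies = [
        dagger        : "com.google.dagger:dagger:${daggerVersion}",
        daggerCompiler: "com.google.dagger:dagger-compiler:${daggerVersion}",
        rxJava        : "io.reactivex:rxjava:${rxJavaVersion}",
    ]
}
```

A module’s build.gradle can then pick them up with, for example, `compile appDependencies.dagger`, so bumping a version happens in exactly one place.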

Wrapping up

That is pretty much all I have for now and, as a conclusion, keep in mind there are no silver bullets. However, a good software architecture will help us keep our code clean and healthy, as well as scalable and easy to maintain.

There are a few more things I would like to point out, and they have to do with attitudes you should adopt when facing a software problem:

  • Respect SOLID principles.
  • Do not overthink (avoid over-engineering).
  • Be pragmatic.
  • Minimize framework (android) dependencies in your project as much as you can.

Source code

  1. Clean architecture github repository – master branch
  2. Clean architecture github repository – releases

Further reading:

  1. Architecting Android..the clean way
  2. Tasting Dagger 2 on Android
  3. The Mayans Lost Guide to RxJava on Android
  4. It is about philosophy: Culture of a good programmer



Links and Resources

  1. RxJava wiki by Netflix
  2. Framework bound by Uncle Bob
  3. Gradle user guide
  4. Package by feature, not layer

Tasting Dagger 2 on Android

Hey! I finally decided it was a good time to get back to the blog and share what I have been dealing with for the last few weeks. On this occasion I would like to talk a bit about my experience with Dagger 2, but first I think it is really worth a quick explanation of why I believe dependency injection is important and why we should definitely use it in our android applications.

By the way, I assume that you have basic knowledge of dependency injection in general and tools like Dagger/Guice, otherwise I would suggest you check out some of the very good tutorials out there. Let’s get our hands dirty then!

Why dependency injection?

The first (and indeed most important) thing we should know about dependency injection is that it has been around for a long time and is based on the Inversion of Control principle, which basically states that the flow of your application depends on the object graph that is built up during program execution, and such a dynamic flow is made possible by object interactions being defined through abstractions. This run-time binding is achieved by mechanisms such as dependency injection or a service locator.

With that said, we can conclude that dependency injection brings us important benefits:

  • Since dependencies can be injected and configured externally we can reuse those components.
  • When injecting abstractions as collaborators, we can just change the implementation of any object without having to make a lot of changes in our codebase, since that object instantiation resides in one place isolated and decoupled.
  • Dependencies can be injected into a component: it is possible to inject mock implementations of these dependencies which makes testing easier.

One thing that we will see is that we can manage the scope of the instances we create, which is really cool. From my point of view, no object or collaborator in your app should know anything about instance creation and lifecycle: this should be managed by our dependency injection framework.
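To make those benefits concrete, here is a minimal, hand-rolled sketch in plain Java (no framework involved; the names UserRepository, GetUserDetails, etc. are made up for illustration): the collaborator is injected through the constructor, so swapping the real implementation for a mock requires no change to the class under test.

```java
// A collaborator hidden behind an abstraction.
interface UserRepository {
    String userName(int id);
}

class CloudUserRepository implements UserRepository {
    @Override public String userName(int id) { return "cloud-user-" + id; }
}

// The dependency is injected through the constructor instead of being
// instantiated inside the class, so it can be configured externally.
class GetUserDetails {
    private final UserRepository repository;

    GetUserDetails(UserRepository repository) {
        this.repository = repository;
    }

    String execute(int userId) {
        return repository.userName(userId);
    }
}

class InjectionDemo {
    public static void main(String[] args) {
        // Production wiring...
        GetUserDetails real = new GetUserDetails(new CloudUserRepository());
        System.out.println(real.execute(1)); // cloud-user-1

        // ...and a mock implementation injected for a test, with no
        // change to GetUserDetails itself.
        GetUserDetails mocked = new GetUserDetails(id -> "mock-user-" + id);
        System.out.println(mocked.execute(1)); // mock-user-1
    }
}
```

A dependency injection framework automates exactly this wiring; the design benefit comes from the constructor-injection shape itself.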

What is JSR-330?

Basically, dependency injection for Java defines a standard set of annotations (and one interface) for use on injectable classes in order to maximize the reusability, testability and maintainability of Java code.
Both Dagger 1 and 2 (and also Guice) are based on this standard, which brings consistency and a standard way to do dependency injection.
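As a toy illustration of what an injector does with a JSR-330-style @Inject constructor, here is a naive reflective sketch. Note that I define a local stand-in annotation instead of pulling in the real javax.inject artifact, and that this is nothing like a production injector (no caching, scoping or validation); JSR-330 itself only standardizes the annotations, not the injector.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Constructor;

// Local stand-in for javax.inject.Inject, defined here only for this sketch.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.CONSTRUCTOR)
@interface Inject {}

class Engine {
    @Inject Engine() {}
    String start() { return "vroom"; }
}

class Car {
    final Engine engine;
    @Inject Car(Engine engine) { this.engine = engine; }
}

// A (very) naive injector: find the @Inject constructor, recursively
// build its parameters, then instantiate the class.
class ToyInjector {
    static <T> T getInstance(Class<T> type) {
        try {
            for (Constructor<?> c : type.getDeclaredConstructors()) {
                if (c.isAnnotationPresent(Inject.class)) {
                    Class<?>[] paramTypes = c.getParameterTypes();
                    Object[] params = new Object[paramTypes.length];
                    for (int i = 0; i < paramTypes.length; i++) {
                        params[i] = getInstance(paramTypes[i]);
                    }
                    return type.cast(c.newInstance(params));
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        throw new IllegalArgumentException("No @Inject constructor in " + type);
    }

    public static void main(String[] args) {
        Car car = getInstance(Car.class);
        System.out.println(car.engine.start()); // vroom
    }
}
```

Guice and Dagger 1 resolve the same annotations reflectively at runtime (roughly like this toy), while Dagger 2 resolves them at compile time by generating code.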

Dagger 1

I will be very quick here because this version is out of the scope of this article. Anyway, Dagger 1 has a lot to offer and I would say that nowadays it is the most popular dependency injector used on Android. It was created by Square, inspired by Guice.

Its fundamentals are:

  • Multiple injection points: dependencies, being injected.
  • Multiple bindings: dependencies, being provided.
  • Multiple modules: a collection of bindings that implement a feature.
  • Multiple object graphs: a collection of modules that implement a scope.

Dagger 1 figures out bindings at compile time but also relies on reflection: although reflection is not used to instantiate objects, it is used for graph composition. This whole process happens at runtime, where Dagger tries to figure out how everything fits together, so there is a price to pay: occasional inefficiency and difficulties when debugging.

Dagger 2

Dagger 2 is a fork of Dagger 1 under heavy development by Google, currently at version 2.0. It was inspired by the AutoValue project (useful if you are tired of writing equals() and hashCode() methods everywhere).
From the beginning, the basic idea behind Dagger 2 was to make problems solvable with generated, hand-written-looking code, as if we had written all the code that creates and provides our dependencies ourselves.

If we compare this version with its predecessor, both are quite similar in many aspects, but there are also important differences worth mentioning:

  • No reflection at all: graph validation, configurations and preconditions at compile time.
  • Easy debugging and fully traceable: entirely concrete call stack for provision and creation.
  • More performance: according to Google, it gains up to 13% in processor performance.
  • Code obfuscation: it uses method dispatch, like hand-written code.

Of course all these cool features come with a price, which makes it less flexible: for instance, there is no runtime dynamism due to the lack of reflection.

Diving deeper

To understand Dagger 2 it is important (and probably a bit hard in the beginning) to know about the fundamentals of dependency injection and the concepts of each one of these guys (do not worry if you do not understand them yet, we will see examples):

  • @Inject: Basically with this annotation we request dependencies. In other words, you use it to tell Dagger that the annotated class or field wants to participate in dependency injection. Thus, Dagger will construct instances of these annotated classes and satisfy their dependencies.
  • @Module: Modules are classes whose methods provide dependencies, so we define a class and annotate it with @Module; thus, Dagger will know where to find the dependencies needed when constructing class instances. One important feature of modules is that they have been designed to be partitioned and composed together (for instance, we will see that in our apps we can have multiple composed modules).
  • @Provides: Inside modules we define methods with this annotation, which tells Dagger how we want to construct and provide those mentioned dependencies.
  • @Component: Components are basically injectors, let’s say a bridge between @Inject and @Module, whose main responsibility is to put both together. They just give you instances of all the types you defined: for example, we must annotate an interface with @Component and list all the @Modules that will compose that component, and if any of them is missing, we get errors at compile time. All the components are aware of the scope of the dependencies they provide through their modules.
  • @Scope: Scopes are very useful, and Dagger 2 has a more concrete way to do scoping through custom annotations. We will see an example later, but this is a very powerful feature, because as pointed out earlier, there is no need for every object to know how to manage its own instances. A scope example would be a class with a custom @PerActivity annotation, so this object will live as long as our Activity is alive. In other words, we can define the granularity of our scopes (@PerFragment, @PerUser, etc.).
  • @Qualifier: We use this annotation when the type of a class is insufficient to identify a dependency. For example, on Android we often need different types of Context, so we might define the qualifier annotations “@ForApplication” and “@ForActivity”; thus, when injecting a Context we can use those qualifiers to tell Dagger which type of Context we want to be provided.
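If it helps, here is roughly what a module/component pair boils down to if we write it by hand in plain Java (no Dagger involved; Navigator comes from the example app, the rest of the names are illustrative of the pattern, not Dagger’s actual generated code):

```java
// A dependency from the example app.
class Navigator {
    String navigateTo(String screen) { return "navigating to " + screen; }
}

// Hand-written analogue of a @Module: methods that know how to
// build ("provide") dependencies.
class ApplicationModule {
    Navigator provideNavigator() { return new Navigator(); }
}

// Hand-written analogue of a @Component: the bridge that uses modules
// to satisfy dependency requests. Dagger generates code of this shape.
class ApplicationComponent {
    private final ApplicationModule module;
    private Navigator navigator; // memoized: a singleton-style scope

    ApplicationComponent(ApplicationModule module) { this.module = module; }

    Navigator navigator() {
        if (navigator == null) {
            navigator = module.provideNavigator();
        }
        return navigator;
    }
}

class ComponentDemo {
    public static void main(String[] args) {
        ApplicationComponent component =
                new ApplicationComponent(new ApplicationModule());
        // Same instance every time: the component controls the scope.
        System.out.println(component.navigator() == component.navigator()); // true
        System.out.println(component.navigator().navigateTo("user-details"));
    }
}
```

Dagger’s job is to generate and validate this kind of boilerplate for us at compile time.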

Shut up and show me the code!

I guess that is too much theory for now, so let’s see Dagger 2 in action, although it is a good idea to first set it up by adding the dependencies to our build.gradle file:

As you can see, we are adding the javax annotations, the compiler, the runtime library and the apt plugin, which is necessary, otherwise the Dagger annotation processor might not work properly: I especially encountered problems on Android Studio.

Our example

A few months ago I wrote an article about how to implement Uncle Bob’s clean architecture on Android, which I strongly recommend reading so you get a better understanding of what we are gonna do here. Back then, I faced a problem when constructing and providing dependencies for most of the objects involved in my solution, which looked something like this (check out the comments):

As you can see, the way to address this problem is to use a dependency injection framework. We basically get rid of that boilerplate code (which is hardly readable or understandable): this class must not know anything about object creation and dependency provision.

So how do we do it? Of course we use Dagger 2 features… Let me picture the structure of my dependency injection graph:

Let’s break down this graphic and explain its parts plus some code.

Application Component: A component whose lifetime is the life of the application. It injects both AndroidApplication and BaseActivity classes.

As you can see, I use the @Singleton annotation for this component, which constrains it to one per application. You might be wondering why I am exposing the Context and the rest of the classes. This is actually an important property of how components work in Dagger: they do not expose types from their modules unless you explicitly make them available. In this particular case I just exposed those elements to subgraphs, and if you try to remove any of them, a compilation error will be triggered.

Application Module: This module provides objects that will live during the application lifecycle; that is the reason why all the @Provides methods use a @Singleton scope.

Activity Component: A component which will live during the lifetime of an activity.

@PerActivity is a custom scoping annotation that permits objects whose lifetime should conform to the life of the activity to be memoized in the correct component. I really encourage you to do this as a good practice, since we get these advantages:

  • The ability to inject objects where an activity is required to be constructed.
  • The use of singletons on a per-activity basis.
  • The global object graph is kept clear of things that can be used only in activities.

You can see the code below:

Activity Module: This module exposes the activity to dependents in the graph. The reason behind this is basically to be able to use the activity context in a fragment, for example.

User Component: A scoped @PerActivity component that extends ActivityComponent. Basically I use it in order to inject user-specific fragments. Since ActivityModule exposes the activity to the graph (as mentioned earlier), whenever an activity context is needed to satisfy a dependency, Dagger will get it from there and inject it: there is no need to redefine it in sub-modules.

User Module: A module that provides user related collaborators. Based on the example, it will provide user use cases basically.

Putting everything together

Now that we have our dependency injection graph implementation, how do we inject dependencies? Something we need to know is that Dagger gives us a bunch of options to inject dependencies:

  1. Constructor injection: by annotating the constructor of our class with @Inject.
  2. Field injection: by annotating a (non private) field of our class with @Inject.
  3. Method injection: by annotating a method with @Inject.

This is also the order used by Dagger when binding dependencies, and it is important because otherwise you might run into strange behavior or even NullPointerExceptions, meaning that your dependencies had not been initialized at the moment of object creation. This is common on Android when using field injection in Activities or Fragments, since we do not have access to their constructors.
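The pitfall can be sketched in plain Java (FakeActivity and Injector below are made-up stand-ins, not Android or Dagger types): the injected field stays null until inject() runs, so touching it before that point blows up.

```java
class Navigator {
    String open(String screen) { return "open " + screen; }
}

// Stand-in for the component's inject() method: it populates the
// accessible fields of the target object.
class Injector {
    static void inject(FakeActivity activity) {
        activity.navigator = new Navigator();
    }
}

class FakeActivity {
    Navigator navigator; // field injection target: must not be private

    FakeActivity() {
        // navigator is still null here: field injection cannot have
        // happened yet, which is the source of those NullPointerExceptions.
    }

    void onCreate() {
        Injector.inject(this);          // inject first...
        navigator.open("user-details"); // ...then it is safe to use the field
    }
}
```

This is why, on Android, field injection is typically triggered at the very beginning of onCreate()/onAttach(), before anything else touches the injected members.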

Getting back to our example, let’s see how we can inject a member into our BaseActivity. In this case we do it with a class called Navigator, which is responsible for managing the navigation flow in our app:

Since Navigator is bound by field injection, it must be provided explicitly in our ApplicationModule using the @Provides annotation. Finally, we initialize our component and call the inject() method in order to inject our members. We do this in the onCreate() method of our Activity by calling getApplicationComponent(). This method has been added here for reusability, and its main purpose is to retrieve the ApplicationComponent which was initialized in the Application object.

Let’s do the same with a presenter in a Fragment. In this case the approach is a bit different, since we are using a per-activity scoped component. The UserComponent that will inject UserDetailsFragment will reside in our UserDetailsActivity:

We have to initialize it this way in the onCreate() method of the activity:

As you can see, when Dagger processes our annotations, it creates implementations of our components and renames them by adding a “Dagger” prefix. Since this is a composed component, when constructing it we must pass in all its dependencies (both components and modules). Now that our component is ready, we just make it accessible in order to satisfy the fragment dependencies:

We bind UserDetailsFragment dependencies by getting the created component and calling the inject() method passing the Fragment as a parameter:

For the complete example, check the repository on github. There is also some refactoring happening, and I can tell you that one of the main ideas (taken from the official examples) is to have an interface as a contract to be implemented by every class that has a component. Something like this:

Thus, the client (for example a Fragment) can get the component (from the Activity) and use it:

The use of generics here makes the casting mandatory, but at least it is gonna fail fast if the client cannot get a component to use. Just ping me if you have any thoughts/ideas on how to solve this in a better way.

Dagger 2 code generation

After having a taste of Dagger’s main features, let’s see how it does its job under the hood. To illustrate this, we are gonna take the Navigator class again and see how it is created and injected.
First let’s have a look at our DaggerApplicationComponent which is an implementation of our ApplicationComponent:

Two important things: the first one is that since we are gonna inject our activity, we have a members injector (which Dagger translates to BaseActivity_MembersInjector):

Basically, this guy contains providers for all the injectable members of our Activity, so when we call inject() it will take the accessible fields and bind the dependencies.

The second thing, regarding our DaggerApplicationComponent, is that we have a Provider<Navigator>, which is nothing more than an interface that provides instances of our class, and it is constructed by a ScopedProvider (in the initialize() method) which will memoize the created instance within its scope.

Dagger also generated a Factory called ApplicationModule_ProvideNavigatorFactory for our Navigator which is passed as a parameter to the mentioned ScopedProvider in order to get scoped instances of our class.

This class is actually very simple: it delegates the creation of our Navigator class to our ApplicationModule (which contains our @Provides method).
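A stripped-down, hand-written version of this provider/factory machinery could look like the following (a simplification for illustration, not Dagger’s actual generated source):

```java
// Dagger's Provider: something that hands out instances.
interface Provider<T> {
    T get();
}

// Simplified ScopedProvider: wraps a factory and memoizes the first
// instance it produces, using double-checked locking.
class ScopedProvider<T> implements Provider<T> {
    private final Provider<T> factory;
    private volatile Object instance;

    ScopedProvider(Provider<T> factory) { this.factory = factory; }

    @SuppressWarnings("unchecked")
    @Override public T get() {
        Object result = instance;
        if (result == null) {
            synchronized (this) {
                result = instance;
                if (result == null) {
                    instance = result = factory.get(); // delegate creation once
                }
            }
        }
        return (T) result;
    }
}

class ScopedProviderDemo {
    public static void main(String[] args) {
        final int[] factoryCalls = {0};
        // The inner provider plays the role of the generated Factory,
        // which in turn delegates to the module's @Provides method.
        Provider<String> navigatorProvider = new ScopedProvider<>(new Provider<String>() {
            @Override public String get() {
                factoryCalls[0]++;
                return "navigator";
            }
        });
        navigatorProvider.get();
        navigatorProvider.get();
        System.out.println("factory invoked " + factoryCalls[0] + " time(s)"); // 1
    }
}
```

The component simply wires such providers together: unscoped bindings call the factory every time, while scoped ones go through the memoizing wrapper above.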

In conclusion, this really looks like hand-written code and is very easy to understand, which makes it easy to debug. There is still much to explore here, and a good idea is to start debugging and see how Dagger deals with dependency binding.



Honestly, there is not too much to say here: for unit tests I do not think it is necessary to create any injector, so I do not use Dagger, and injecting mock collaborators manually has worked fine so far. When it comes to end-to-end integration tests, Dagger could make more sense: you can replace modules with others that provide mocks.
I would appreciate any experience here so I can add it as part of the article.

Wrapping up

So far we have had a taste of what Dagger is capable of, but there is still a long way ahead of us, so I strongly recommend reading the documentation, watching videos and having a look at the examples. This was actually a small sample for learning purposes and I hope you have found it useful. Remember that any feedback is always welcome.

Source code:


Further reading:

  1. Architecting Android..the evolution
  2. Architecting Android..the clean way?
  3. The Mayans Lost Guide to RxJava on Android
  4. It is about philosophy: Culture of a good programmer


Architecting Android…The clean way?
Dagger 2, A New Type of Dependency Injection.
Dependency Injection with Dagger 2.
Dagger 2 has Components.
Dagger 2 Official Documentation.

RxJava Observable tranformation: concatMap() vs flatMap()

After a while I decided it was time to get back to some writing. As you may know, at @SoundCloud we make heavy use of the reactive approach, but to be honest, I am not here to talk about RxJava itself because there are great articles out there to read about it (here and here) and great people to follow as well, such as Ben Christensen, Matthias Käppler and many others.
I also consider myself a ‘newbie’ in reactive programming, and I am now at that stage where you start seeing the benefits of this approach and want to make every single object reactive, which is very dangerous. So if you are at the same level as me, just keep an eye on it and use it wherever it makes sense: you have been advised.
Let’s get started with the article then…

Observable transformation

There are times where you have an Observable which you are subscribed to and you want to transform the results (remember that everything is a stream in Reactive Programming).
When it comes to observable transformation, the values from the sequences we consume are not always in the format or shape we need, or each value needs to be expanded either into a richer object or into more values. We can achieve this by applying a function to each element emitted by the observable, which will convert all of the emitted items into Observables and merge the results. Do not worry if you do not understand it yet (it took me a while to start thinking in reactive), we will see an example in a bit.

The problem

I was retrieving a set of values from the database and applying a function to each of them that was supposed to both transform them into other objects asynchronously and also preserve their order. The last step was to convert them into a list needed by the UI to display the results. The behavior I got was not the expected one, and here is why: I was using Observable.flatMap(), which does not preserve the order of the elements.

A simple example

Let me use a simple example to demonstrate the mentioned behavior. Let’s say we have an Observable emitting a set of Integers and we want to calculate the square of each of those values:

Here our DataManager class has a method that returns an Observable which emits numbers from 2 to 10. Then we want to calculate the square of those values, so here is the function we apply to each of them:

This will take an Integer as input, generate an Observable<Integer>, merge them and emit the results. As you can see, we are calling the dataManager.squareOf() method, which is asynchronous (for demonstration purposes) and looks something like this:

Of course this works, but not as expected (at least not the way I wanted): the order of the elements is not preserved (logcat output):


Observable flatMap() vs concatMap()

Both methods look pretty much the same, but there is a difference: the operator used when merging the final results. Here is some material from the official documentation:


The flatMap() method creates a new Observable by applying a function that you supply to each item emitted by the original Observable, where that function is itself an Observable that emits items, and then merges the results of that function applied to every item emitted by the original Observable, emitting these merged results. Note that flatMap() may interleave the items emitted by the Observables that result from transforming the items emitted by the source Observable. If it is important that these items not be interleaved, you can instead use the similar concatMap() method.


As you can see, the two functions are very similar, and the subtle difference is how the output is created (after the mapping function is applied). flatMap() uses the merge operator while concatMap() uses the concat operator, meaning that the latter cares about the order of the elements, so keep an eye on that if you need ordering :).
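To see the difference without RxJava at all, here is a plain-Java analogy using CompletableFuture (the names and delays are made up for demonstration; this mimics the semantics, not RxJava’s API). The "merged" version starts every async computation at once and collects results as they complete, while the "concatenated" version waits for each result before starting the next, so only the latter guarantees emission order.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;

class MergeVsConcat {
    // Asynchronous "squareOf": deliberately slower for smaller numbers,
    // so completion order differs from emission order.
    static CompletableFuture<Integer> squareOf(int n) {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep((5 - n) * 200L); } catch (InterruptedException e) { }
            return n * n;
        });
    }

    // flatMap-like: start everything at once, collect as results complete.
    static List<Integer> merged(List<Integer> input) {
        List<Integer> out = Collections.synchronizedList(new ArrayList<>());
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        for (int n : input) {
            futures.add(squareOf(n).thenAccept(out::add));
        }
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        return out;
    }

    // concatMap-like: do not start the next computation until the
    // previous one has completed, preserving the input order.
    static List<Integer> concatenated(List<Integer> input) {
        List<Integer> out = new ArrayList<>();
        for (int n : input) {
            out.add(squareOf(n).join());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> input = Arrays.asList(2, 3, 4);
        System.out.println("merged: " + merged(input));       // order not guaranteed
        System.out.println("concat: " + concatenated(input)); // [4, 9, 16]
    }
}
```

The trade-off is the same one RxJava documents: concat preserves order but gives up the concurrency that makes merge fast.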

Merge operator

Combine multiple Observables into one.

Concat operator

Concatenate two or more Observables sequentially.

Problem solved

Observable concatMap() to the rescue! The problem was easily solved by just switching to the concatMap() method. I know you may ask why I did not read the documentation first, which is very well written by the way (kudos to the RxJava contributors!!!), but sometimes we are lazy or that is the last place we look. Here is a picture with the final results and some tests I did (you can find the sample code below):



That is my two cents and I hope it helps. As always, here is the sample code of the sample app and other useful information that is worth reading.

  1. Source code:
  2. Functional Reactive Programming on Android With RxJava
  3. Grokking RxJava
  4. Top 7 Tips for RxJava on Android
  5. Mastering Observables
  6. React Conference London

Remember that any feedback is very welcome, such as better ways of addressing this problem or any issue you may find.

Architecting Android…The clean way?

Over the last months, and after having friendly discussions at Tuenti with colleagues like @pedro_g_s and @flipper83 (by the way, two badasses of Android development), I decided it was a good time to write an article about architecting Android applications.
The purpose of it is to show you a little approach I have had in mind for the last few months, plus all the stuff I have learnt from investigating and implementing it.

Getting Started

We know that writing quality software is hard and complex: it is not only about satisfying requirements; it should also be robust, maintainable, testable, and flexible enough to adapt to growth and change. This is where “the clean architecture” comes in, and it could be a good approach to use when developing any software application.
The idea is simple: clean architecture stands for a group of practices that produce systems that are:

  • Independent of Frameworks.
  • Testable.
  • Independent of UI.
  • Independent of Database.
  • Independent of any external agency.

It is not a must to use only 4 circles (as you can see in the picture), because they are only schematic, but you should always take into consideration the Dependency Rule: source code dependencies can only point inwards, and nothing in an inner circle can know anything at all about something in an outer circle.

Here is some vocabulary that is relevant for getting familiar and understanding this approach in a better way:

  • Entities: These are the business objects of the application.
  • Use Cases: These orchestrate the flow of data to and from the entities. They are also called Interactors.
  • Interface Adapters: This set of adapters converts data between the format most convenient for the use cases and entities and the format most convenient for external elements. Presenters and Controllers belong here.
  • Frameworks and Drivers: This is where all the details go: UI, tools, frameworks, etc.

For a better and more extensive explanation, refer to this article or this video.

Our Scenario

I will start with a simple scenario to get things going: simply create a small app that shows a list of friends or users retrieved from the cloud and, when clicking on any of them, a new screen will be opened showing more details of that user.
I leave you a video so you can get the big picture of what I am talking about:

Android Architecture

The objective is separation of concerns: by keeping the business rules ignorant of the outside world, they can be tested without any dependency on any external element.
To achieve this, my proposal is about breaking up the project into 3 different layers, in which each one has its own purpose and works separately from the others.
It is worth mentioning that each layer uses its own data model so this independence can be reached (you will see in code that a data mapper is needed in order to accomplish data transformation, a price to be paid if you do not want to cross the use of your models over the entire application).
Here is a schema so you can see what it looks like:


NOTE: I did not use any external libraries (except gson for parsing json data, and junit, mockito, robolectric and espresso for testing). The reason is that it made the example a bit clearer. Anyway, do not hesitate to add ORMs for storing disk data, any dependency injection framework, or whatever tool or library you are familiar with that could make your life easier. (Remember that reinventing the wheel is not a good practice.)

Presentation Layer

This is where the logic related to views and animations happens. It uses nothing more than a Model View Presenter (MVP from now on), but you can use any other pattern like MVC or MVVM. I will not get into details here, but fragments and activities are only views: there is no logic inside them other than UI logic, and this is where all the rendering stuff takes place.
Presenters in this layer are composed of interactors (use cases) that perform the job in a new thread outside the Android UI thread, and come back using a callback with the data that will be rendered in the view.


If you want a cool example about Effective Android UI that uses MVP and MVVM, take a look at what my friend Pedro Gómez has done.

Domain Layer

Business rules live here: all the logic happens in this layer. Regarding the Android project, you will also see all the interactor (use case) implementations here.
This layer is a pure java module without any android dependencies. All the external components use interfaces when connecting to the business objects.


Data Layer

All data needed for the application comes from this layer through a UserRepository implementation (the interface is in the domain layer) that uses a Repository Pattern with a strategy that, through a factory, picks different data sources depending on certain conditions.
For instance, when getting a user by id, the disk cache data source will be selected if the user already exists in cache, otherwise the cloud will be queried to retrieve the data and later save it to the disk cache.
The idea behind all this is that the data origin is transparent to the client, which does not care whether the data is coming from memory, disk or the cloud: the only truth is that the data will arrive.
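A minimal sketch of that strategy in plain Java (the names mirror the repository example but are heavily simplified for illustration; no real network or disk access here):

```java
import java.util.HashMap;
import java.util.Map;

// Common abstraction: the client never knows which source answers.
interface UserDataSource {
    String userById(int id);
}

// "Disk" source backed by an in-memory map standing in for a cache.
class DiskDataSource implements UserDataSource {
    final Map<Integer, String> cache = new HashMap<>();
    @Override public String userById(int id) { return cache.get(id); }
}

// "Cloud" source: pretends to hit the network, then saves to the cache.
class CloudDataSource implements UserDataSource {
    private final DiskDataSource disk;
    CloudDataSource(DiskDataSource disk) { this.disk = disk; }

    @Override public String userById(int id) {
        String user = "user-" + id; // pretend network call
        disk.cache.put(id, user);   // save to the disk cache for next time
        return user;
    }
}

// The factory encodes the strategy: disk if cached, otherwise cloud.
class DataSourceFactory {
    private final DiskDataSource disk = new DiskDataSource();
    UserDataSource create(int id) {
        return disk.userById(id) != null ? disk : new CloudDataSource(disk);
    }
}

// Repository implementation: the data origin is transparent to callers.
class UserDataRepository {
    private final DataSourceFactory factory = new DataSourceFactory();
    String user(int id) { return factory.create(id).userById(id); }
}

class RepositoryDemo {
    public static void main(String[] args) {
        UserDataRepository repository = new UserDataRepository();
        System.out.println(repository.user(7)); // first call goes to the "cloud"
        System.out.println(repository.user(7)); // second call is served from cache
    }
}
```

The client of UserDataRepository gets the same answer either way; only the factory knows which source was used.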


NOTE: In terms of code I have implemented a very simple and primitive disk cache using the file system and android preferences, it was for learning purpose. Remember again that you SHOULD NOT REINVENT THE WHEEL if there are existing libraries that perform these jobs in a better way.

Error Handling

This is always a topic for discussion and could be great if you share your solutions here.
My strategy was to use callbacks: thus, if something happens in the data repository, for example, the callback has 2 methods, onResponse() and onError(). The latter encapsulates exceptions in a wrapper class called “ErrorBundle”. This approach brings some difficulties because there is a chain of callbacks, one after the other, until the error reaches the presentation layer to be rendered, so code readability can be a bit compromised.
On the other hand, I could have implemented an event bus system that fires events when something goes wrong, but this kind of solution is like using a GOTO and, in my opinion, you can sometimes get lost when you are subscribed to several events if you do not control them closely.
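For reference, the callback approach could be sketched like this (a plain-Java simplification; only ErrorBundle, onResponse() and onError() come from the description above, the rest of the names are illustrative):

```java
// Wrapper that carries the exception across layer boundaries.
class ErrorBundle {
    private final Exception exception;
    ErrorBundle(Exception exception) { this.exception = exception; }
    String getErrorMessage() { return exception.getMessage(); }
}

// The two-method callback contract described above.
interface UserCallback {
    void onResponse(String user);
    void onError(ErrorBundle errorBundle);
}

class UserDataRepository {
    void getUser(int id, UserCallback callback) {
        if (id > 0) {
            callback.onResponse("user-" + id);
        } else {
            // Exceptions never cross the boundary directly: they are wrapped.
            callback.onError(new ErrorBundle(
                    new IllegalArgumentException("invalid id " + id)));
        }
    }
}

class CallbackDemo {
    public static void main(String[] args) {
        new UserDataRepository().getUser(-1, new UserCallback() {
            @Override public void onResponse(String user) {
                System.out.println("render " + user);
            }
            @Override public void onError(ErrorBundle errorBundle) {
                // In the real app this would be forwarded up the chain of
                // callbacks until the presentation layer renders it.
                System.out.println("error: " + errorBundle.getErrorMessage());
            }
        });
    }
}
```

The readability cost mentioned above is visible even here: every layer that sits between the repository and the view needs its own callback forwarding both methods.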


Regarding testing, I opted for several solutions depending on the layer:

  • Presentation Layer: used android instrumentation and espresso for integration and functional testing.
  • Domain Layer: JUnit plus mockito for unit tests was used here.
  • Data Layer: Robolectric (since this layer has android dependencies) plus junit plus mockito for integration and unit tests.

Show me the code

I know that you may be wondering where the code is, right? Well, here is the github link where you will find what I have done. Regarding the folder structure, something to mention is that the different layers are represented using modules:

  • presentation: It is an android module that represents the presentation layer.
  • domain: A java module without android dependencies.
  • data: An android module from where all the data is retrieved.
  • data-test: Tests for the data layer. Due to some limitations when using Robolectric I had to use it in a separate java module.


As Uncle Bob says, “Architecture is About Intent, not Frameworks” and I totally agree with this statement. Of course there are a lot of different ways of doing things (different implementations) and I’m pretty sure that you (like me) face a lot of challenges every day, but by using this technique, you make sure that your application will be:

  • Easy to maintain.
  • Easy to test.
  • Very cohesive.
  • Decoupled.

As a conclusion, I strongly recommend you give it a try and share your results and experiences, as well as any other approach you have found that works better: we do know that continuous improvement is always a very good and positive thing.
I hope you have found this article useful and, as always, any feedback is very welcome.

Source code

  1. Clean architecture github repository – master branch
  2. Clean architecture github repository – releases

Further reading:

  1. Architecting Android..the evolution
  2. Tasting Dagger 2 on Android
  3. The Mayans Lost Guide to RxJava on Android
  4. It is about philosophy: Culture of a good programmer

Links and Resources

  1. The clean architecture by Uncle Bob
  2. Architecture is about Intent, not Frameworks
  3. Model View Presenter
  4. Repository Pattern by Martin Fowler
  5. Android Design Patterns Presentation

Aspect Oriented Programming in Android

Aspect-oriented programming entails breaking down program logic into “concerns” (cohesive areas of functionality). This means that, with AOP, we can add executable blocks to some source code without explicitly changing it. The intent of this programming paradigm is that “cross-cutting concerns” (logic needed in many places, without a single class where it could be implemented) should be implemented once and injected many times into those places.

Code injection is a very important part of AOP: it is useful for dealing with the mentioned “concerns” that cut across the whole application, such as logging or performance monitoring. Used in this way, it should not be something rare, as you might think; quite the contrary: every programmer will come into a situation where this ability to inject code could prevent a lot of pain and frustration.

AOP is a paradigm that has been with us for many years, and I have found it very useful to apply to Android. After some investigation, I consider that we can get a lot of advantages and very useful things from making use of it.

Terminology (Mini glossary)

Before we get started, let’s have a look at some vocabulary that we should take into account:

  • Cross-cutting concerns: Even though most classes in an OO model will perform a single, specific function, they often share common, secondary requirements with other classes. For example, we may want to add logging to classes within the data-access layer and also to classes in the UI layer whenever a thread enters or exits a method. Even though each class has a very different primary functionality, the code needed to perform the secondary functionality is often identical.
  • Advice: The code that is injected into a class file. Typically we talk about before, after, and around advice, which is executed before, after, or instead of a target method. It is also possible to make other changes than injecting code into methods, e.g. adding fields or interfaces to a class.
  • Join point: A particular point in a program that might be the target of code injection, e.g. a method call or method entry.
  • Pointcut: An expression which tells a code injection tool where to inject a particular piece of code, i.e. to which join points to apply a particular advice. It could select only a single such point – e.g. execution of a single method – or many similar points – e.g. executions of all methods marked with a custom annotation such as @DebugTrace.
  • Aspect: The combination of the pointcut and the advice is termed an aspect. For instance, we add a logging aspect to our application by defining a pointcut and giving the correct advice.
  • Weaving: The process of injecting code – advices – into the target places – join points.

This picture summarizes a few of these concepts:

Aspect Oriented Programming

So…where and when can we apply AOP?

Some examples of cross-cutting concerns are:

  • Logging
  • Persistence
  • Performance monitoring
  • Data Validation
  • Caching
  • Many others

And in relation with “when the magic happens”, the code can be injected at different points in time:

  • At run-time: your code has to explicitly ask for the enhanced code, e.g. by using a Dynamic Proxy (this is arguably not true code injection). Anyway, here is an example I created to test it.
  • At load-time: the modifications are performed when the target classes are being loaded by Dalvik or ART. Byte-code or dex-code weaving.
  • At build-time: you add an extra step to your build process to modify the compiled classes before packaging and deploying your application. Source-code weaving.

Depending on the situation, you will choose one or the other :).
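To give a taste of the run-time flavour, here is a hedged sketch of a logging “aspect” built with java.lang.reflect.Proxy (the Greeter interface, SimpleGreeter and LoggingProxy names are made up for illustration):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
  String greet(String name);
}

class SimpleGreeter implements Greeter {
  @Override
  public String greet(String name) {
    return "Hello " + name;
  }
}

// Run-time "weaving": wrap any interface implementation in a dynamic proxy
// that logs around every method call (our "advice") on the real object.
class LoggingProxy {
  @SuppressWarnings("unchecked")
  static <T> T wrap(final T target, Class<T> type) {
    return (T) Proxy.newProxyInstance(
        type.getClassLoader(),
        new Class<?>[] {type},
        new InvocationHandler() {
          @Override
          public Object invoke(Object proxy, Method method, Object[] args)
              throws Throwable {
            System.out.println("Entering " + method.getName());
            Object result = method.invoke(target, args); // the join point
            System.out.println("Exiting " + method.getName());
            return result;
          }
        });
  }
}
```

Note that the caller has to ask for the wrapped object explicitly, which is why this is arguably not true code injection.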

Tools and Libraries

There are a few tools and libraries out there that help us use AOP:

  • AspectJ: A seamless aspect-oriented extension to the Java programming language (works with Android).
  • Javassist for Android: An Android port of the well-known Java library Javassist for bytecode manipulation.
  • DexMaker: A Java-language API for doing compile time or runtime code generation targeting the Dalvik VM.
  • ASMDEX: A bytecode manipulation library like ASM, but one that handles the DEX bytecode used by Android executables.

Why AspectJ?

For our example below I have chosen AspectJ for the following reasons:

  • Very powerful.
  • Supports build time and load time code injection.
  • Easy to use.


Let’s say we want to measure the performance of a method (how long its execution takes). To do this, we mark our method with a @DebugTrace annotation and see the results in logcat, transparently, without having to write logging code in each annotated method. Our approach is to use AspectJ for this purpose.
This is what is going to happen under the hood:

  • The annotation will be processed in a new step we are adding to our compilation phase.
  • The necessary boilerplate code will be generated and injected into the annotated method.

I have to say that while I was researching, I found Jake Wharton’s Hugo library, which is supposed to do the same thing, so I refactored my code to look similar to it, although mine is a more primitive and simpler version (I have learnt a lot by looking at its code, by the way).


Project structure

We will break up our sample application into 2 modules: the first will contain our Android app, and the second will be an Android library that makes use of the AspectJ library for weaving (code injection).
You may be wondering why we are using an Android library module instead of a pure Java library: the reason is that, for AspectJ to work on Android, we have to hook into the compilation of our app, and this is only possible using the android-library gradle plugin. (Do not worry about this yet, as I will give some more details later.)

Creating our annotation

We first create our Java annotation. This annotation will be persisted in the compiled class file (RetentionPolicy.CLASS), and we will be able to annotate any constructor or method with it (ElementType.CONSTRUCTOR and ElementType.METHOD). So our file will look like this:
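A minimal sketch of what that file could contain (shown without its package declaration; in the sample project it lives in org.android10.gintonic.annotation, as the aspect below references it):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Persisted in the compiled .class file (not needed at runtime),
// applicable to constructors and methods.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.CONSTRUCTOR, ElementType.METHOD})
@interface DebugTrace {
}
```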

Our StopWatch for performance monitoring

I have created a simple class that encapsulates time start/stop. Here is our class:
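A sketch of a minimal version (the exact fields and method names are illustrative; it derives milliseconds from System.nanoTime measurements):

```java
import java.util.concurrent.TimeUnit;

// Encapsulates start/stop time measurement for performance monitoring.
class StopWatch {
  private long startTime;
  private long endTime;
  private long elapsedTime;

  public void start() {
    reset();
    startTime = System.nanoTime();
  }

  public void stop() {
    if (startTime != 0) {
      endTime = System.nanoTime();
      elapsedTime = endTime - startTime;
    }
  }

  public void reset() {
    startTime = 0;
    endTime = 0;
    elapsedTime = 0;
  }

  public long getTotalTimeMillis() {
    return (elapsedTime != 0) ? TimeUnit.NANOSECONDS.toMillis(elapsedTime) : 0;
  }
}
```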

DebugLog Class

I simply decorated “android.util.Log”, because my first idea was to add some more functionality to the Android log. Here it is:
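A sketch of it (a thin wrapper over android.util.Log; the method name here is an assumption, and of course it only compiles against the Android SDK):

```java
import android.util.Log;

// Simple decorator around android.util.Log, so we can later add extra
// behavior (prefixes, toggling, formatting, etc.) in a single place.
public class DebugLog {

  private DebugLog() {}

  public static void log(String tag, String message) {
    Log.d(tag, message);
  }
}
```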

Our Aspect

Now it is time to create our aspect class, which will be in charge of managing the annotation processing and source-code weaving.

Some important points to mention here:

  • We declare 2 public methods with 2 pointcuts that will filter all methods and constructors annotated with “org.android10.gintonic.annotation.DebugTrace”.
  • We define the “weaveJointPoint(ProceedingJoinPoint joinPoint)” method annotated with “@Around”, which means that our code injection will happen around any method annotated with “@DebugTrace”.
  • The line “Object result = joinPoint.proceed();” is where the annotated method’s execution happens, so before it we start our StopWatch to begin measuring time, and after it we stop it.
  • Finally we build our message and print it using the Android Log.
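Putting those points together, the aspect could look roughly like this (a sketch reconstructed from the description above; it assumes the AspectJ runtime plus the DebugTrace, StopWatch and DebugLog classes from the previous sections, and the log message format is illustrative):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;

@Aspect
public class TraceAspect {

  private static final String POINTCUT_METHOD =
      "execution(@org.android10.gintonic.annotation.DebugTrace * *(..))";
  private static final String POINTCUT_CONSTRUCTOR =
      "execution(@org.android10.gintonic.annotation.DebugTrace *.new(..))";

  // Pointcut filtering all methods annotated with @DebugTrace.
  @Pointcut(POINTCUT_METHOD)
  public void methodAnnotatedWithDebugTrace() {}

  // Pointcut filtering all constructors annotated with @DebugTrace.
  @Pointcut(POINTCUT_CONSTRUCTOR)
  public void constructorAnnotatedWithDebugTrace() {}

  // The advice: runs around every join point selected by the pointcuts above.
  @Around("methodAnnotatedWithDebugTrace() || constructorAnnotatedWithDebugTrace()")
  public Object weaveJointPoint(ProceedingJoinPoint joinPoint) throws Throwable {
    MethodSignature methodSignature = (MethodSignature) joinPoint.getSignature();
    String className = methodSignature.getDeclaringType().getSimpleName();
    String methodName = methodSignature.getName();

    StopWatch stopWatch = new StopWatch();
    stopWatch.start();
    Object result = joinPoint.proceed(); // the annotated method executes here
    stopWatch.stop();

    DebugLog.log(className,
        methodName + " --> [" + stopWatch.getTotalTimeMillis() + "ms]");
    return result;
  }
}
```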

Making AspectJ work with Android

Now everything should be working, but if we compile our sample, we will see that nothing happens.
The reason is that we have to use the AspectJ compiler (ajc, an extension of the Java compiler) to weave all classes that are affected by an aspect. That is why, as I mentioned before, we need to add some extra configuration to our gradle build to make it work.
This is what our build.gradle looks like:
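A hedged sketch of the relevant part (the exact task names and the aspectjrt version are illustrative and vary between Android Gradle plugin versions; the idea is simply to run the ajc compiler over the output of javac for every build variant):

```groovy
import org.aspectj.bridge.IMessage
import org.aspectj.bridge.MessageHandler
import org.aspectj.tools.ajc.Main

dependencies {
    compile 'org.aspectj:aspectjrt:1.8.5' // version is illustrative
}

android.libraryVariants.all { variant ->
    def javaCompile = variant.javaCompile
    javaCompile.doLast {
        // Run the AspectJ compiler (ajc) over the classes javac just
        // produced, weaving our aspects into them in place.
        String[] args = ["-showWeaveInfo",
                         "-1.5",
                         "-inpath", javaCompile.destinationDir.toString(),
                         "-aspectpath", javaCompile.classpath.asPath,
                         "-d", javaCompile.destinationDir.toString(),
                         "-classpath", javaCompile.classpath.asPath,
                         "-bootclasspath", android.bootClasspath.join(File.pathSeparator)]

        MessageHandler handler = new MessageHandler(true)
        new Main().run(args, handler)
    }
}
```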

Our test method

Let’s use our cool aspect annotation by adding it to a test method. I have created a method inside the main activity for testing purposes. Let’s have a look at it:
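A self-contained sketch of such a method (the annotation is re-declared inline here so the snippet compiles on its own; in the project it would be the @DebugTrace from earlier, and the method would live in MainActivity):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Minimal stand-in for the DebugTrace annotation defined earlier,
// included only so this snippet is self-contained.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.CONSTRUCTOR, ElementType.METHOD})
@interface DebugTrace {}

class MainActivitySample {
  // The aspect will weave timing code around this method at build time.
  @DebugTrace
  public void testAnnotatedMethod() {
    try {
      Thread.sleep(10); // simulate some work so there is something to measure
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
```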

Executing our application

We build and install our app on an Android device/emulator by executing the gradle command:

If we open the logcat and execute our sample, we will see a debug log with:

Our first android application using AOP worked!
You can use the Dex Dump Android application (from your phone), or any other reverse engineering tool, to decompile the apk and see the generated and injected source code.


So to recap and summarize:

  • We have had a taste of the Aspect Oriented Programming paradigm.
  • Code Injection becomes a very important part of this approach (AOP).
  • AspectJ is a very powerful and easy to use tool for source code weaving in Android applications.
  • We have created a working example using AOP capabilities.


Aspect Oriented Programming is very powerful. Using it the right way, you can avoid duplicating a lot of code when you have “cross-cutting concerns” in your Android apps, like performance monitoring, as we have seen in our example. I do encourage you to give it a try, you will find it very useful.
I hope you liked the article; its purpose was to share what I’ve learnt so far, so feel free to comment and give feedback, or even better, fork the code and play a bit with it.
I’m sure we can add very interesting stuff to our AOP module in the sample app. Ideas are very welcome ;).

Source Code

You can check the example app here (using AspectJ):
I also have another AOP example for Java (you can use it for Android as well) using a Dynamic Proxy:


Unit testing asynchronous methods with Mockito

After promising (and not keeping my promise) that I would be writing and maintaining my blog, here I go again (attempt number 3289423987). But let’s forget about that…
So on this occasion I wanted to write about Mockito… yes, this mocking framework that is a ‘must have’ when writing your unit tests ;).


This article assumes that you know what a unit test is and why you should write tests.
Also, I strongly recommend this famous article from Martin Fowler about test doubles; it is a must-read to understand them.

Common case scenario

Sometimes we have to test methods that use callbacks, meaning that they are asynchronous by definition. These methods are not easy to test, and using the Thread.sleep(milliseconds) method to wait for the response is not a good practice and can turn your tests into non-deterministic ones (I have seen this many times, to be honest).
So how do we do this? Mockito to the rescue!

Let’s put an example

Suppose we have a class called DummyCaller that implements a DummyCallback and has a method doSomethingAsynchronously() which delegates its functionality to a collaborator of the class, called DummyCollaborator, that also has a doSomethingAsynchronously(DummyCallback callback) method, but receives a callback as a parameter (in this case our DummyCallback). This method creates a new thread to run its job and then gives us a result through the callback when it is done.
Here is the code to understand this scenario in a better way:
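A sketch of those classes (the names come from the description above; the failure path and error code are illustrative details, and in the real project each type would live in its own file):

```java
import java.util.Collections;
import java.util.List;

// Callback through which DummyCollaborator reports its result.
interface DummyCallback {
  void onSuccess(List<String> result);
  void onFail(int code);
}

// The collaborator: does its job on a new thread, then reports back
// through the callback it received as a parameter.
class DummyCollaborator {
  public static final int ERROR_CODE = 1;

  private boolean shouldFail = false;

  public void setShouldFail(boolean shouldFail) {
    this.shouldFail = shouldFail;
  }

  public void doSomethingAsynchronously(final DummyCallback callback) {
    new Thread(new Runnable() {
      @Override
      public void run() {
        if (shouldFail) {
          callback.onFail(ERROR_CODE);
        } else {
          callback.onSuccess(Collections.<String>emptyList());
        }
      }
    }).start();
  }
}

// The class under test: implements the callback and delegates to its collaborator.
class DummyCaller implements DummyCallback {
  private final DummyCollaborator dummyCollaborator;

  // volatile so a test thread reliably sees the value written by the worker thread
  private volatile List<String> result;

  public DummyCaller(DummyCollaborator dummyCollaborator) {
    this.dummyCollaborator = dummyCollaborator;
  }

  public void doSomethingAsynchronously() {
    dummyCollaborator.doSomethingAsynchronously(this);
  }

  public List<String> getResult() {
    return result;
  }

  @Override
  public void onSuccess(List<String> result) {
    this.result = result;
  }

  @Override
  public void onFail(int code) {
    System.out.println("Failed with error code: " + code);
  }
}
```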

Creating our test class

We have 2 options to test our asynchronous method, but first we will create our test class DummyCollaboratorCallerTest (by convention, we just add Test at the end of the class name).

Here we are using MockitoAnnotations to initialize both the Mock and the ArgumentCaptor, but do not worry about them yet, because this is what we will be seeing next.
The only thing to take into account here is that both the mock and the class under test are initialized before each test is executed, in the setUp() method (using the @Before annotation).
Remember that for unit testing all the collaborators for a CUT (class under test) must be test doubles.
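A sketch of that skeleton (it assumes JUnit 4 and Mockito on the classpath, and the DummyCaller/DummyCollaborator classes from before):

```java
import org.junit.Before;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class DummyCollaboratorCallerTest {

  // Class under test.
  private DummyCaller dummyCaller;

  // Test double for the collaborator.
  @Mock
  private DummyCollaborator mockDummyCollaborator;

  // Captures the callback passed to the collaborator (used in the second solution).
  @Captor
  private ArgumentCaptor<DummyCallback> dummyCallbackArgumentCaptor;

  @Before
  public void setUp() {
    // Initializes the fields annotated with @Mock and @Captor before each test.
    MockitoAnnotations.initMocks(this);
    dummyCaller = new DummyCaller(mockDummyCollaborator);
  }
}
```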
Let’s take a look at our 2 test solutions.

Setting up an answer for our callback

This is our test case using doAnswer() for stubbing a method with a generic Answer. Since we need the callback to return immediately (synchronously), we set up an answer so that when the method under test is called, the callback will be executed right away with the data we tell it to return.
Finally we call our real method and verify state and interaction.
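A sketch of that test (it assumes the test class skeleton above, plus java.util.Arrays/List and static imports from org.mockito.Mockito, org.junit.Assert and the Hamcrest matchers):

```java
@Test
public void testDoSomethingAsynchronouslyUsingDoAnswer() {
  final List<String> results = Arrays.asList("One", "Two", "Three");

  // Stub the collaborator: whenever it is invoked, execute the callback
  // right away (synchronously) with the data we want it to return.
  doAnswer(new Answer<Object>() {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
      ((DummyCallback) invocation.getArguments()[0]).onSuccess(results);
      return null;
    }
  }).when(mockDummyCollaborator).doSomethingAsynchronously(any(DummyCallback.class));

  // Exercise the real method, then verify state and interaction.
  dummyCaller.doSomethingAsynchronously();

  assertThat(dummyCaller.getResult(), is(equalTo(results)));
  verify(mockDummyCollaborator).doSomethingAsynchronously(any(DummyCallback.class));
}
```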

Using an ArgumentCaptor

Our second option is to use an ArgumentCaptor. Here we treat our callback asynchronously: we capture the DummyCallback object passed to our DummyCollaborator using an ArgumentCaptor.
Finally we can make all our assertions at the test method level and call onSuccess() when we want to verify state and interaction.
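And a sketch of the captor-based version (same assumptions as the previous test):

```java
@Test
public void testDoSomethingAsynchronouslyUsingArgumentCaptor() {
  // Call the real method first; the callback has now been handed
  // to the (mocked) collaborator but not executed yet.
  dummyCaller.doSomethingAsynchronously();

  final List<String> results = Arrays.asList("One", "Two", "Three");

  // Capture the DummyCallback object passed to DummyCollaborator...
  verify(mockDummyCollaborator)
      .doSomethingAsynchronously(dummyCallbackArgumentCaptor.capture());

  // ...and trigger it ourselves, whenever we choose.
  dummyCallbackArgumentCaptor.getValue().onSuccess(results);

  assertThat(dummyCaller.getResult(), is(equalTo(results)));
}
```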


The main difference between the two solutions is that when using doAnswer() we are creating an anonymous inner class and casting (in an unsafe way) the elements from invocation.getArguments()[n] to the data type we want; but if we modify our parameters, the test will ‘fail fast’, letting us know that something has happened. On the other hand, when using an ArgumentCaptor we probably have more control, because we can call the callbacks in the order we want, should we need to.
When it comes to unit testing, this is a common case that we sometimes do not know how to deal with, so in my experience, using both solutions has helped me build a robust approach to testing asynchronous methods.
I hope you find this article useful, and as always, remember that any feedback is very welcome, as well as other ways of doing this. Of course if you have any doubt do not hesitate to contact me.

Code Sample

Here is the link where you can find this example and others. Most of them are related to Java and Android, because they come from a talk I gave a couple of months ago.
The presentation is in English, but the video is in Spanish (sorry for those who do not understand my Argentinian accent… haha… BTW, I will try to upload an English version as soon as possible…).

Further Reading

I highly recommend taking a look at the Mockito documentation to get a better understanding of the framework. The documentation is very clear and has great examples. See you!

NFC on Android

Hi all! I’m so proud that I have finally created my new blog. Since this is my attempt number 12372983 at having a development blog, I can only promise that I will put all my effort into keeping it up to date. So welcome!

And now that this is my first post, I just wanted to share with you a talk about Near Field Communication that I gave last year, which was requested by a lot of colleagues. Here it is, guys! (Take into account that the video is in Spanish; anyway, I promise to explain some examples and use cases soon.)

Enjoy, and feel free to comment and give feedback.

You can find the source code of the example application of both the presentation and video here:

Here is the presentation (english):


Here is the video (spanish):

See you!!!