Last time we looked in more depth at CDI and how we can define beans and inject them into other beans. This time we are going to look at how we can use events to decouple the handling of actions in the system.

Read Part 1
Read Part 2

As a refresher, in the last part we had an application that obtained a list of items, validated them, and took a specific action when an invalid item was found. Let's say that in the future we want to expand our system to do all sorts of things when we find an invalid item. This could range from sending an email, to changing other data (e.g. canceling an order), to storing a list of rejections in a file or database table. To completely decouple the implementation we can use events. Events are raised by an event producer and subscribed to by event observers. Like most of CDI, event production and observation is type safe, and qualifiers can be used to determine which events an observer will receive.

Luckily we don't have to change our application much to implement this; we just provide an implementation of ItemErrorHandler that raises an event when it handles an item. We will create an implementation called EventItemHandler and a @Notify qualifier to select it for injection.

@Retention(RetentionPolicy.RUNTIME)
@Target({FIELD,METHOD,PARAMETER,TYPE})
@Qualifier
public @interface Notify {}

@Notify
public class EventItemHandler implements ItemErrorHandler {

    @Inject
    private Event<Item> itemEvent;

    public void handleItem(Item item) {
        System.out.println("Firing Event");
        itemEvent.fire(item);
    }
}

In this item handler, we inject an instance of Event where the event payload will be an Item. The event payload is the state data passed from the event producer to the event observers; in this case it is the rejected Item. When the invalid item is handled, we fire the event and pass in the invalid item we received. This event-based item handler is injected the same way as any other item handler, so we can swap it in and out whenever we need to and substitute it during testing. We created the @Notify qualifier annotation to identify this error handler for injection, and we use it in our item processor by adding it to the injection point.

    @Inject
    @Notify
    private ItemErrorHandler itemErrorHandler;

If we deploy this now, we will see that when the item processor hits invalid items, the event-based item error handler fires the event. Currently, though, we don't have anything observing the event. We can fix this by creating an observer method, which is the only thing needed to observe an event. We can still re-use our existing item error handlers by adding an observer method to the implementations that simply calls the error handler method. For example, to make our FileErrorReporter respond to the event, we add the following observer method.

public class FileErrorReporter implements ItemErrorHandler, Serializable {

    public void eventFired(@Observes Item item) {
        handleItem(item);
    }

    ....
}

If you run the application now, you will see that an event is fired for each invalid item and that the item information is saved when the event is observed. You will also note from the Creating and Closing messages that the reporter bean's lifecycle callbacks are being invoked.

INFO: Firing Event
INFO: Creating file error reporter
INFO: Saving eedemo.Item@1f84b46 [Value=34, Limit=7] to file
INFO: Closing file error reporter
INFO: Firing Event
INFO: Creating file error reporter
INFO: Saving eedemo.Item@2656da [Value=89, Limit=32] to file
INFO: Closing file error reporter
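
For reference, the Creating and Closing lines come from lifecycle callback methods on the FileErrorReporter, defined in the earlier parts, that open and close the output file. They look roughly like this sketch (the method names here are assumptions):

    @PostConstruct
    public void open() {
        // invoked when CDI creates the bean instance; open the output file here
        System.out.println("Creating file error reporter");
    }

    @PreDestroy
    public void close() {
        // invoked when CDI destroys the bean instance; close the output file here
        System.out.println("Closing file error reporter");
    }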

Note that the file error reporter bean is created each time the event is raised, which may or may not be what we want. In this case we don't want to create a new bean each time, because we don't want to open and close the file for every item; we still want to open the file at the start of the process and close it once the process is completed. This means we need to consider the scope of the FileErrorReporter bean, which currently has no scope defined. When no scope is defined, CDI defaults to the dependent pseudo-scope (@Dependent), which in practice means the bean is created and destroyed over a very short space of time, typically a single method call. In our case, the bean is created and destroyed for the duration of the event being fired. To fix this, we lengthen the scope of the bean by adding a scope annotation. We will make this bean @RequestScoped so that once it is created the first time the event is fired, it remains for the duration of the request, and the same bean instance is injected at any injection point the bean qualifies for. The change and the resulting log are shown below; note that the bean is still only created when the event is first fired.
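The change itself is just a scope annotation on the bean class, something like:

@RequestScoped
public class FileErrorReporter implements ItemErrorHandler, Serializable {
    ....
}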

INFO: Firing Event
INFO: Creating file error reporter
INFO: Saving eedemo.Item@1380c08 [Value=34, Limit=7] to file
INFO: Firing Event
INFO: Saving eedemo.Item@1b44f96 [Value=89, Limit=32] to file
INFO: Closing file error reporter

Let's take the events example a little further. Right now we are observing any event whose payload is an Item, but chances are there will be different types of events for items in the system, and we want to be specific about which observers subscribe to which events. Luckily, CDI provides for this in a similar manner to how it determines suitable beans for injection points: using types and qualifiers.

First, let's create a problem for ourselves by firing an event for every item we process, adding the following code to the item processor.

    @Inject
    private Event<Item> processorEvent;

    public void execute() {
        List<Item> items = itemDao.fetchItems();
        for (Item item : items) {
            processorEvent.fire(item);
            if (!itemValidator.isValid(item)) {
                itemErrorHandler.handleItem(item);
            }
        }
    }

This will fire an event for each item we process, but we don't want the invalid item handling to be triggered by every one of those events. If it is, we will see output like the following:

INFO: Creating eedemo.EventItemHandler_$$_javassist_748
INFO: Creating file error reporter
INFO: Saving eedemo.Item@6d0baf [Value=34, Limit=7] to file
INFO: Creating eedemo.EventItemHandler
INFO: Firing Event
INFO: Saving eedemo.Item@6d0baf [Value=34, Limit=7] to file
INFO: Saving eedemo.Item@1087300 [Value=4, Limit=37] to file
INFO: Saving eedemo.Item@11674c9 [Value=24, Limit=19] to file
INFO: Saving eedemo.Item@de163a [Value=89, Limit=32] to file
INFO: Firing Event
INFO: Saving eedemo.Item@de163a [Value=89, Limit=32] to file
INFO: Closing file error reporter

The file item handler's observer is being called for every item because we aren't distinguishing between the kind of event fired when an item is processed and the kind fired when an invalid item is found. To differentiate between them we will create an @Invalidated qualifier to qualify the events for invalid items. This annotation is placed on the event injection point and also on the observer method.
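
Following the same pattern as the @Notify qualifier earlier, the @Invalidated qualifier can be defined like this:

@Retention(RetentionPolicy.RUNTIME)
@Target({FIELD,METHOD,PARAMETER,TYPE})
@Qualifier
public @interface Invalidated {}

With the qualifier defined, we add it to the event injection point in EventItemHandler and to the observer method in FileErrorReporter.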

@Notify
@RequestScoped
public class EventItemHandler implements ItemErrorHandler {

    @Inject @Invalidated
    private Event<Item> itemEvent;

    public void handleItem(Item item) {
        System.out.println("Firing Event");
        itemEvent.fire(item);
    }
}
@Save
@RequestScoped
public class FileErrorReporter implements ItemErrorHandler, Serializable {

    public void eventFired(@Observes @Invalidated Item item) {
        handleItem(item);
    }

    ....

}

If you run the code now, you will see that we no longer call the file save handler for every item, even though we are still firing an event for each item processed.

INFO: Firing Event
INFO: Creating file error reporter
INFO: Saving eedemo.Item@1e3aa22 [Value=34, Limit=7] to file
INFO: Firing Event
INFO: Saving eedemo.Item@172940f [Value=89, Limit=32] to file
INFO: Closing file error reporter

You can add an event observer method to the item processor just to confirm that the per-item events are still being raised and can be observed.

@Named("itemProcessor")
@RequestScoped
public class ItemProcessor {

    ....

    public void observeItemEvent(@Observes Item item) {
        System.out.println("Item event observed for item "+item);
    }
}

This will print a message whenever an item event is fired, so you can see the difference between the two event observers: the unqualified observer sees all events related to the item, including the qualified ones, while the @Invalidated observer only sees the events fired for invalid items.

Events are a great way to decouple parts of the system in a modular fashion: you can add pieces that subscribe to events while the event producer remains unaware of the observers, rather than the producer having to call each observer manually as it would without events. For example, if someone updates an order status, you could add events to email the sales rep, or to notify an account manager if a tech support issue has been open for more than a week. These kinds of rules can be implemented without events, but events make it easier to decouple the business logic. There is no compile-time or build-time dependency; you can just add modules to your application and they will automatically start observing and producing events. Observers and producers know nothing about each other, nor do they require any configuration to work together.

Scheduling the Processing

One last tidbit before we move on to creating CDI-driven JSF applications: right now we run our code from a button on a JSF page, but typically this is something that would be fired off on a regular basis. With the power of the EJB 3.1 scheduler we can do just that in a new class with a few lines of code. We'll create a new stateless EJB, inject our ItemProcessor into it, and add a method that calls execute() at a scheduled time.

@Stateless
public class ScheduledProcessor {

    @Inject
    private ItemProcessor itemProcessor;

    @Schedule(hour="07",minute = "00")
    public void execute() {
        System.out.println("executing scheduled job!"); 
        itemProcessor.execute();       
    }
}  

This makes use of the new EJB 3.1 @Schedule annotation and schedules the method to be called every morning at 7 a.m. You could make it run on the hour every hour, or use the EJB timer service to implement your own timeouts. We inject the same ItemProcessor implementation we have been using all along and call its execute() method. That is a pretty powerful transformation: we re-use our existing code and incorporate EJB services alongside our POJO managed beans, needing only a few lines of code.
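
For example, a schedule that fires at the top of every hour would look something like this (the method name here is illustrative):

    // "*" for the hour means every hour; minute "0" means on the hour
    @Schedule(hour = "*", minute = "0")
    public void executeHourly() {
        itemProcessor.execute();
    }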

Note: if you actually implement this and run it, it won't work because of a Weld 1.0 bug whereby the request scope isn't active during calls to EJB timeouts (see here for details), so you'll have to wait until Weld 1.0.1, which should be out fairly soon.