CQS and atomicity

I’d summarise Command Query Separation as:

  • divide methods into queries and commands
  • only mutate state in commands
  • never mutate state in queries
  • make it easy to tell which is which

All of which I agree with.  The part I think is silly is:

  • command methods should always return void

The argument is that this makes it easy to identify which methods have side effects.  The downside is that if you want to get some information on how your command fared, you have to make a second call.  That’s not an issue in itself.  The issue is that the object has to keep the information about the last command in case you want it.  You’ve taken an operation that you think should be atomic, and in order to honour CQS you’ve made it into two coupled operations.  This is the worst form of coupling, because it’s hidden.

There are ways to restore atomicity to our newly dual operation.  You can make the command and the status request transactional in some way via locking or a transaction manager.  This implies that the status request is either undefined or unreliable outside of the transaction.  Things are getting worse, not better.

In practice, everybody gives themselves an out.  Tim Curry and Martin Fowler write in support of CQS yet both convince themselves that returning a value isn’t that bad as long as you do it for the right reasons.  As do many others.

Let’s look at what Bertrand Meyer himself said:  “Asking a question should not change the answer”.

That’s a pretty succinct argument against writing query methods with side effects.  It doesn’t have much to say about writing command methods with return values.  Yes, always returning void from command methods makes it easy to see which methods have side effects.   But there are other ways to do that (naming conventions anyone?) which avoid the serious problems that the “always return void” rule creates.

In conclusion, I’d replace that last rule with these ones:

  • use a naming convention to identify command methods
  • the return value of a command method should only contain information about the command, not about the system state
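
Here’s a quick sketch of what those two replacement rules might look like in Java. All the names here are illustrative, not from any real codebase: the “create” prefix marks the command by convention, and the return value describes only how the command fared.

```java
import java.util.HashSet;
import java.util.Set;

// Describes the outcome of a command - not the system state.
final class CommandResult {
	final boolean created;
	final String message;

	CommandResult(boolean created, String message) {
		this.created = created;
		this.message = message;
	}
}

final class AccountService {
	private final Set<String> accounts = new HashSet<>();

	// Command: the "create" prefix signals a side effect by convention.
	// The result says only what the command did.
	public CommandResult createAccount(String name) {
		boolean added = accounts.add(name);
		return new CommandResult(added, added ? "created" : "already exists");
	}

	// Query: no side effects, no naming-convention prefix.
	public boolean hasAccount(String name) {
		return accounts.contains(name);
	}
}
```

The point is that the caller learns how the command fared from the single atomic call, without a second coupled status request.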

It might be claimed that the difficulties introduced by strict adherence to CQS are a symptom of deep problems with OO as a paradigm, not of any issue with CQS as a principle.  I’m sympathetic to that view; functional and reactive programming both address the objections raised above, and do so at a paradigm level.  However, this article is aimed at developers sitting squarely in an OO paradigm and wondering if they should feel guilty for returning a status code from a command function.

Software engineering reading list

Gang of Four Design patterns


This book must be read and understood in detail by every developer.  Don’t learn the patterns.  Learn the thought process.


Eric Evans DDD


Although Evans does lay out a methodology in this book, that’s not where the book’s real value lies (as Evans himself now says).  The real message is about the role of good design, with an emphasis on particular design styles, in managing software complexity.


Scott Ambler Database refactoring


Read this to cure yourself of “don’t touch the database” disease.


Fowler PoEAA


Fowler IMHO is the only true successor to the GoF, in that his pattern catalog is invariably interesting in detail.  In particular, Fowler’s set of ORM patterns are essential reading for anyone using an ORM.


Fowler refactoring



Larman Applying UML and patterns


This is the best and clearest demonstration of how the concepts of OOP and design patterns actually play out in a project.


Jim Highsmith Agile ecosystems


This book and the next one are IMHO all you ever need to read about agile methodologies.


Cockburn Crystal Clear



Beck TDD


TDD is another concept that many developers get weird ideas about.  Some people think that the point of TDD is to end up with lots of tests.  The guy who invented the concept sets the record straight.


Linda Rising Fearless Change


This isn’t really a technical book, but it’s one of the best demonstrations of the generality of the design pattern concept that I’ve seen.  The idea of design patterns is one that many developers find hard to grasp (and in fact it’s common to get the concepts completely backwards).  Seeing how the concepts apply to a related but dissimilar field is very useful.  And it’s also a great book on change management.


Kerievsky Refactoring to Patterns


This is a bit of a bonus read.  It’s not as important as the core patterns books and Fowler’s refactoring, but it is an excellent example of applying a higher-level thought process to detailed program structures.


Adele Goldberg Smalltalk-80


OK, really nobody is going to read a 45-year-old book about a dead programming language.  But this is a sentimental favourite from the most prolific group of visionaries ever to grace computer science.  It’s refreshing to look back past all the decades of nonsense that has been written about OOP and realize that in 1975 these guys really got it.

AspectJ for generating custom compiler errors

One of my favourite uses of AspectJ is to generate compile-time error messages.  This allows you to provide guidance in the IDE for developers writing new code within a framework or library. 

Here’s a quick example. BaseDTO is a base class that developers will extend. It’s used with a framework that requires a no-arg constructor (Jackson in this case, but it’s a common requirement), but when constructed explicitly, the UriInfo parameter is mandatory.

	// No-arg constructor for unmarshalling, but otherwise don't call this one
	public BaseDTO() {}

	public BaseDTO(UriInfo uriInfo) {
		this._links = new Links(uriInfo);
	}

We can’t express that requirement in normal Java. As a result, developers can waste a lot of time debugging a new subclass. AspectJ to the rescue!

public aspect DTOChecker {
	pointcut dtoConstructor(): call(BaseDTO+.new(..))
		&& !call(BaseDTO+.new(javax.ws.rs.core.UriInfo, ..));

	declare error : dtoConstructor()
		: "DTOChecker: Constructors for subclasses of BaseDTO must include a UriInfo parameter.";
}


Then, if we try to call a constructor for any subclass of BaseDTO without including a parameter of type UriInfo, we get a compile error, which Eclipse displays on the offending line just like any other compiler error.

What’s a domain model for?

Intro from 2018: I wrote this article in 2009 and it’s been sitting in  my drafts ever since.  But I was inspired to look it up again by https://dzone.com/articles/the-secret-life-of-objects-information-hiding, and negatively inspired by https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53.  So here it is all these years later.

(Prompted by a discussion with Ben Nadel)

There’s a bit of a debate in the CF OO community. OO is good. OK, what’s it good for? You can have an OO domain model to capture all your business logic. What business logic? All I’m doing is inserting and updating records. Etc.

Then there’s all the discussion about the “anemic domain model” antipattern. I want to make my beans less anemic, but I just can’t find anything to put in them!

Maybe domain models are only useful for sophisticated, simulation-based apps? CRUD apps don’t have enough business logic. Right?

Maybe not so right. My CRUD apps have lots of business logic. If I trawl through my database schema and pull out all of the constraints, defaults, foreign keys etc, that adds up to a lot of business logic. If I went the whole hog and added triggers to enforce all the more complex invariants, I would have a complex, rich domain model implemented in my database schema. And that’s without any of the personified simulation-style objects that we think of as being the sweet spot for complex domain models.

Some of the data modelling people insist that this is the only way to implement a domain model. Use database constructs for invariants, and put all the calculation logic into stored procedures. Maybe that’s the way to go for a pure CRUD application. The database will throw an exception if I violate any constraint, so my CRUD application just needs to catch those and react. However, any SQL database is such a miserable development environment that I really don’t want to lock myself into that scenario.

Let’s go to the other extreme and implement all of these invariants in our OO application. In practice we would duplicate some of the constraints in the schema, but we’ll say that our app doesn’t rely on that. In the OO world, we have a much richer programming model, so we should be able to do better than just throwing exceptions. We should be able to design our model so that many invalid operations simply aren’t available, and others return sensible defaults, nulls or result codes.

Here’s an example. I need to be able to create and update user records. My invariant is that usernames must be unique.

In a SQL domain model, I would put a uniqueness constraint on my username column. Any attempt to INSERT or UPDATE with an existing username would throw an exception. In theory this should be enough. In practice we tend to write application code to predict whether or not we are going to get an SQL exception. Not quite sure why we do this extra work, but the end result is the same.

In an OO domain model, I can constrain the available operations to make violation of the constraint impossible. First, I create a Users object that represents the set of all users. Then I make the constructor for the User object private. I can’t actually create a new user. If I want a new user, I have to ask the Users object for it.

// me = new User("jmetcher") <--- operation does not exist!!
me = Users.create("jmetcher");

This gives the Users object a chance to enforce the invariant. If there is already a user with username “jmetcher”, it can return that object, or return a null object, or return false, or even throw an exception. Probably I’d return the existing object. So that takes care of the INSERT.

What about the UPDATE? I require that the User object does not have a setter for “username”. Username is part of the logical identity of the User object, so it must be immutable. I may provide a utility method (say, on Users) to change a username, but that will be a maintenance activity – low-level, stop the world, reorganize my data kind of thing. It’s not part of the defined behaviour of a User.

// me.setUsername("notjmetcher"); <--- operation does not exist!!
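
A minimal sketch of that design, assuming User is nested inside Users so the private constructor stays reachable (the class structure and method names beyond create() are my own illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Users represents the set of all users and is the only way to obtain a User,
// so the uniqueness invariant is enforced by design rather than by validation.
final class Users {

	public static final class User {
		private final String username;   // no setter: username is immutable

		// Private: new User(...) is not available outside Users.
		private User(String username) {
			this.username = username;
		}

		public String getUsername() {
			return username;
		}
	}

	private static final Map<String, User> byName = new ConcurrentHashMap<>();

	// If the username is already taken, hand back the existing object
	// rather than creating a duplicate - one of the options suggested above.
	public static User create(String username) {
		return byName.computeIfAbsent(username, User::new);
	}
}
```

With this shape, asking for “jmetcher” twice gives you the same object both times, and there is simply no operation available that could violate the uniqueness invariant.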

The domain model’s main purpose is to enforce those invariants. The lightbulb realization is that

A good domain model enforces invariants as much by its design as by its code

In this example, I’ve made my User object “richer” by hiding the constructor and taking away a setter – not by adding stuff.

There’s also a lot of discussion about validation. This cycle is taken for granted:

  • load
  • manipulate
  • validate
  • save

and then we talk a lot about where to put these responsibilities. My assertion is that we should be able to just:

  • load
  • manipulate

Save should just be automatic. Save should be the default. You should do something extra if you don’t want to save. Like:

  • load
  • copy
  • manipulate the copy

But what happened to the “validate” step? I’ve got us automatically saving things that haven’t been validated! But see above – I’ve designed the domain model so that I can’t make invalid transformations. So:

A good domain model makes direct manipulation of the domain data a safe operation.

So, what’s a domain model for? A good domain model on top of a full-featured persistence layer will:

  • Enforce invariants using a rich programming model
  • Make manipulating your data safe – without you having to remember to validate before save, or copy before manipulate, or save before exit.

Manipulating data safely while obeying invariants sounds like bread-and-butter CRUD to me.

Footnote from 2018:

What Riccardo says in the first article I linked above is so clear, at least to me.  How is it that Charles in the second article doesn’t get it?  Maybe OOP is like all design thinking, like design patterns and agile methodologies. If you can’t tolerate living in a world of judgement calls, if you can’t code to a conceptual model instead of or as well as a spec, if you think a bunch of smart people making independent decisions sounds like chaos, it’s not for you.  If you just want to know the rules, pick another door.  These are paradigms to help you write the rules.  Does anybody really think that is or can be easy?

5 essential tools for choosing a buzzword for your next listicle

Technology teams are not immune to hype and trends. <Buzzword> isn’t necessarily a new thing. A long time ago in a galaxy far away, <cool anecdote>.
We didn’t always know why things were broken, we had to examine the data to reveal the answers. It isn’t about what you call it or what tools you use.
Start with the strategy and desired outcomes.
<nice troubleshooting story>
At this point, the data reveals what is occurring.
<more nice troubleshooting stuff>
The trend towards <buzzword> tools reminds me of the craze around <every other buzzword> <since the dawn of time>.
There is no easy fix or magic pixie dust for ensuring <anything>.

Thanks and apologies to Mehdi Daoudi.  The above is a palimpsest of his article https://dzone.com/articles/practicality-of-observability – which is a good article with only a tiny bit of product placement.  But aside from the useful content, I was amused and inspired by the very first sentence.  Also as always entertained by DZone’s tagline writers, who in this case managed to take an article that is pretty strongly anti-buzzword and anti-tools-fetish, and give it a tagline that uses the buzzword du jour twice and promises toolz.



Java method overriding and visibility

This post is about a little test I set up to get my head around one aspect of method overriding in Java. A method in a superclass can call either another superclass method, or a subclass method, depending on the visibility of the methods involved.

These are the demo classes:

public class SuperClass {
	public String a() {
		return b();
	}
	public String b() {
		return c();
	}
	public String c() {
		return "superclass";
	}
}


public class Subclass extends SuperClass {
	public String a() {
		return b() + b();
	}
	public String c() {
		return "subclass";
	}
}

new SuperClass().a() returns “superclass”. new Subclass().a() returns “subclasssubclass”.

If we change the visibility of method c() to private, however:

new SuperClass().a() returns “superclass”. new Subclass().a() returns “superclasssuperclass”.

In other words, superclass method b() will call the subclass implementation of c() if it is visible, or the superclass implementation if it is not.
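
The private-visibility variant can be sketched as one self-contained pair of classes (renamed here so they don’t collide with the originals):

```java
class SuperClass2 {
	public String a() {
		return b();
	}

	public String b() {
		// c() is private, so this call is bound at compile time to
		// SuperClass2.c() and never dynamically dispatched to a subclass.
		return c();
	}

	private String c() {
		return "superclass";
	}
}

class Subclass2 extends SuperClass2 {
	@Override
	public String a() {
		return b() + b();
	}

	// NOT an override: SuperClass2.c() is private, so this is an
	// unrelated new method that b() never sees.
	public String c() {
		return "subclass";
	}
}
```

Here new Subclass2().a() calls the inherited b() twice, and each call lands on the private SuperClass2.c(), giving “superclasssuperclass”.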

Of course, if we then override b() in the subclass as well, things change again. Then we will see new Subclass().a() returns “subclasssubclass” no matter whether c() is public or private.

Redux, selectors, and access to state

There are a couple of things I’ve struggled a lot with in working out best practices for React/Redux:

  1. How to actually implement the advice to use selectors everywhere
  2. How to get access to state when I need it

These two things are related, because selectors in general need access to the whole state tree (I think).

So I use three basic techniques:

1. To pass state to react components, I use react-redux, where the mapStateToProps function has access to the global state.
2. To provide state to reducers, I use redux-thunk, which lets me use state-aware action creators and thereby add all required state to the action payloads.
3. Alternatively, I use the third argument to react-redux’s connect() function, mergeProps, which lets me access both global state and component properties and pass them to action creators (and through actions, to the reducers).

Here’s a very basic sketch of how these three approaches look:

// A redux-thunk action creator that uses the getState()
// function to pass state to selectors
export function actionCreator1(someState, someProps) {
	return function (dispatch, getState) {
		const someMoreState = selector3(getState());
		dispatch(action1(someState, someMoreState));
	};
}

// A normal action creator that just gets precalculated state
export const actionCreator2 = (someState) => ({
	type: 'ACTION_2',
	payload: someState,
});

// A react-redux function that can use the global state tree to call selectors
const mapStateToProps = (state, ownProps) => ({
	state1: selector1(state),
	state2: selector2(state),
});

const mapDispatchToProps = (dispatch) => ({ dispatch });

const mergeProps = (stateProps, dispatchProps, ownProps) => {
	return {
		...stateProps,
		...ownProps,
		// using stateProps to pass state to action creators
		action1: () => dispatchProps.dispatch(actionCreator1(stateProps.state1, ownProps)),
		action2: () => dispatchProps.dispatch(actionCreator2(stateProps.state2)),
	};
};

export const Container = connect(mapStateToProps, mapDispatchToProps, mergeProps)(SomeComponent);

Using these approaches, I can get access to whatever state I need, and therefore use selectors all over the place. I suspect this also lets me get away with a pretty suboptimal state tree and just paper over the gaps with global state and heavy-weight selectors. But I suspect that even with a great state tree shape and great selector design, these techniques are still going to be necessary. Maybe just less so.

ES6 nested imports (Babel+react)

With the ES 6 module system, you have a choice of whether to use a single default export:

export default DefaultObject

or potentially many named exports:

export const NondefaultObject = {}

You import these slightly differently, but otherwise they work the same:

import DefaultObject from './DefaultObject'
import {NondefaultObject} from './NondefaultObject'

const App = () => (
    <NondefaultObject />
)

Where things go awry is where you want to aggregate up imports, as per Jack Hsu’s excellent article on Redux application structure.

import * as dflt from './DefaultObject'   // "do" is a reserved word, so use another name
import * as ndo from './NondefaultObject'

const App = () => (
    <div>
        <dflt.DefaultObject/>      {/* does NOT work */}
        <ndo.NondefaultObject />   {/* works */}
        <dflt.default/>            {/* works */}
    </div>
)

Why is it so? When you import a default export, the name of the object is actually “default”. Somewhere in the Babel/Redux/React magic factory, somebody is clever enough to use the module name as an alias for its own default export when you use that module name in a JSX tag. However, when you assign that same default export to another value and then try to use that value (as in the import * case), no such magic occurs.

AspectJ: using advised class fields

A short post to clarify something that was a little mysterious from the documentation.

AspectJ around advice typically looks something like:

	pointcut myPointCut() : execution(void my.package.myClass.myMethod());

	void around(): myPointCut() {
		// do some stuff
		proceed(); // call the advised method
		// do some other stuff
	}

What if I want to call other methods or use fields from myClass in the advice? There are a few moving parts here:

	pointcut myPointCut() : execution(void my.package.myClass.myMethod());

	void around(my.package.myClass myClass): target(myClass) && myPointCut() {
		myClass.method1(); // do some stuff
		proceed(myClass); // call the advised method
		myClass.publicField = null; // do some other stuff
	}

To break it down:

  1. Add a parameter to around() with the type of the advised class.
  2. Use the AspectJ target() method to populate that parameter.
  3. Use the parameter value within the advice however you like. But note that you’re limited to publicly accessible methods and members – despite what you might think, the advice is not within the lexical scope of the advised class.
  4. Add the parameter value as the first parameter to proceed().

This example is for an advised method with no parameters. If the method has parameters:

	pointcut myPointCut() : execution(void my.package.myClass.myMethod(my.package.ParamClass));

	void around(my.package.myClass myClass, my.package.ParamClass param): target(myClass)
			&& args(param) && myPointCut() {
		myClass.method1(); // do some stuff
		proceed(myClass, param); // call the advised method
		myClass.publicField = null; // do some other stuff
	}

Domain model integrity example

One of the primary design goals of a domain model is to maintain the integrity of the model data, and to do so at a higher level than simple database constraints. A good domain model should be able to guarantee semantic consistency with respect to the business domain.

Validation is an important tool for consistency guarantees, but something that is often overlooked is the role of object design. Many validation rules can be replaced by designing objects so as to make it impossible to get into an invalid state in the first place. This post is about a simple example of doing just that.

The section of the model we’re concerned with looks like this:

[Diagram: Company with references to Country, State, and Region, where Country, State and Region form a strict hierarchy]

We have a Company object, with references to Country, State, and Region objects. Country, State and Region are related in a strict hierarchy. If we knew that all countries had states and all states had regions, Company could just store a reference to Region and the rest would be implied. But we don’t have that luxury, so we need all three references. Obviously, there are some quite strong constraints on what can be considered consistent:

  1. A company’s state, if it exists, must belong to the company’s country
  2. A company’s region, if it exists, must belong to the company’s state

It’s simple to write validation rules to enforce these constraints, but we can more elegantly enforce them by embodying the rules in the behaviour of the domain objects. Here are the setters for country, state and region within the Company object:

	public void setCountry(Country country) {
		if (this.country == null || !country.equals(this.country)) {
			this.country = country;
			// We don't know which state or region was intended, so fall back
			// to consistent defaults. getDefaultState()/getDefaultRegion()
			// stand in for whatever "default" support the model provides.
			this.state = country.getDefaultState();
			this.region = this.state.getDefaultRegion();
		}
	}

	public void setState(State state) {
		if (this.state == null || !this.state.equals(state)) {
			this.country = state.getCountry();
			this.state = state;
			this.region = state.getDefaultRegion();
		}
	}

	public void setRegion(Region region) {
		if (this.region == null || !this.region.equals(region)) {
			this.region = region;
			if (region != null) {
				this.state = region.getState();
				this.country = this.state.getCountry();
			}
		}
	}

If we set the company’s region, that setter automatically takes care of setting the company’s state and country to match. If we change the company’s country, on the other hand, we don’t know what state or region were intended. However, we set them to defaults that are at least consistent. The calling module can make a more considered choice at its leisure.

So, with a little model support from the country and state – that is, the provision of a “default” option for state and region respectively – it is now completely impossible for our company to be in an inconsistent state, without ever needing to validate any inputs.

An aside about normalization

In this example, company.region is nullable, state and country are not. Obviously this example is a little denormalized – country is completely specified by specifying the state. But many models have this sort of wrinkle, especially when the underlying database can’t be refactored. We can reduce the impact of the denormalized database schema on the model by changing the setter for country to this:

	private void setCountry(Country country) {
		this.country = country;
	}

Now we can only set the country by specifying a state. This more nearly matches the conceptual model, while retaining a country field in the company object for ORM purposes.


This is a very trivial example, but the principle is extremely powerful. A domain model often can enforce complex domain constraints simply by its built-in behaviour, either by internally adjusting its state or by simply making invalid operations unavailable. When possible, this approach is greatly preferable to reactive validation, which can tend to require either complex dirty checking, or endless revalidation of unchanging data.