Redux, selectors, and access to state

There are a couple of things I’ve struggled with a lot while working out best practices for React/Redux:

  1. How to actually implement the advice to use selectors everywhere
  2. How to get access to state when I need it

These two things are related, because selectors in general need access to the whole state tree (I think).
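
To make this concrete, here’s a minimal sketch of what I mean by a selector – just a function that takes the whole state tree and derives a value from it. The state shape and names here are invented purely for illustration:

// selectors.js – assumes a hypothetical state shape like
// { todos: { items: [...], visibilityFilter: 'ACTIVE' } }
export const selectVisibilityFilter = (state) => state.todos.visibilityFilter;

export const selectVisibleTodos = (state) =>
	state.todos.items.filter((todo) => todo.status === selectVisibilityFilter(state));

Because both selectors take the root state as their argument, anything that calls them needs a way to get hold of that root state.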

So I use three basic techniques:

1. To pass state to react components, I use react-redux, where the mapStateToProps function has access to the global state.
2. To provide state to reducers, I use redux-thunk, which lets me use state-aware action creators and thereby add all required state to the action payloads.
3. Alternatively, I use the third argument to react-redux’s connect() function, mergeProps, which lets me access both global state and component props and pass them to action creators (and through actions, to the reducers).

Here’s a very basic sketch of how these three approaches look:

// A redux-thunk action creator that uses the getState()
// function to pass state to selectors
export function actionCreator1(someState, someProps) {
	return function (dispatch, getState) {
		const someMoreState = selector3(getState());

		// action1 is assumed to be a plain action creator (not shown here)
		dispatch(action1(someState, someMoreState));
	};
}

// A normal action creator that just gets precalculated state
export const actionCreator2 = (someState) => ({
	type: 'ACTION_2', // whatever action type your reducers expect
	someState,
});

// A react-redux mapStateToProps function that can use the global state tree to call selectors
const mapStateToProps = (state, ownProps) => ({
	state1: selector1(state),
	state2: selector2(state),
});

const mapDispatchToProps = (dispatch) => ({ dispatch });

const mergeProps = (stateProps, dispatchProps, ownProps) => {
	const { dispatch } = dispatchProps;
	return {
		...ownProps,
		...stateProps,
		// using stateProps to pass state to action creators
		action1: () => dispatch(actionCreator1(stateProps.state1, ownProps)),
		action2: () => dispatch(actionCreator2(stateProps.state2)),
	};
};

export const Container = connect(
  mapStateToProps,
  mapDispatchToProps,
  mergeProps
)(Component);

Using these approaches, I can get access to whatever state I need, and therefore use selectors all over the place. I suspect this also lets me get away with a pretty suboptimal state tree and just paper over the gaps with global state and heavy-weight selectors. But I suspect that even with a great state tree shape and great selector design, these techniques are still going to be necessary. Maybe just less so.

ES6 nested imports (Babel+react)

With the ES6 module system, you have a choice of whether to use a single default export:

export default DefaultObject

or potentially many named exports:

export const NondefaultObject = {}

You import these slightly differently, but otherwise they work the same:

import DefaultObject from './DefaultObject'
import {NondefaultObject} from './NondefaultObject'

const App = () => (
  <div>
    <DefaultObject/> 
    <NondefaultObject />
   </div>
)

Where things go awry is where you want to aggregate up imports, as per Jack Hsu’s excellent article on Redux application structure.

import * as dobj from './DefaultObject'   // note: "do" is a reserved word, so the namespace needs another name
import * as ndo from './NondefaultObject'

const App = () => (
  <div>
    <dobj.DefaultObject/>      {/* Does NOT work */}
    <ndo.NondefaultObject />   {/* works */}
    <dobj.default/>            {/* works */}
  </div>
)

Why is it so? When you import a default export, the exported binding is actually named “default”. The default-import syntax (import DefaultObject from './DefaultObject') simply binds that “default” export to whatever local name you give it, which is why using that name in a JSX tag just works. With a namespace import (import * as dobj), no such aliasing happens: the default export is only available as an ordinary property called “default” on the namespace object, so <dobj.default/> works but <dobj.DefaultObject/> does not.
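
For completeness, here’s a small sketch of one possible workaround – nothing magic, just assigning the namespace object’s default property to a local name before using it in JSX (module names as in the example above):

import * as dobj from './DefaultObject'
import { NondefaultObject } from './NondefaultObject'

const DefaultObject = dobj.default

const App = () => (
  <div>
    <DefaultObject />
    <NondefaultObject />
  </div>
)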

AspectJ: using advised class fields

A short post to clarify something that was a little mysterious from the documentation.

AspectJ around advice typically looks something like:

	pointcut myPointCut( ) : execution(void my.package.myClass.myMethod());
	
	void around(): myPointCut() {
		// do some stuff
		proceed(); // call the advised method
		// do some other stuff
       
	}

What if I want to call other methods or use fields from myClass in the advice? There are a few moving parts here:

	pointcut myPointCut( ) : execution(void my.package.myClass.myMethod());
	
	void around(my.package.myClass myClass): target(myClass) && myPointCut() {
		myClass.method1(); // do some stuff
		proceed(myClass); // call the advised method
		myClass.publicField = null; // do some other stuff
       
	}

To break it down:

  1. Add a parameter to around() with the type of the advised class.
  2. Use the AspectJ target() pointcut designator to bind that parameter.
  3. Use the parameter value within the advice however you like. But note that you’re limited to publicly accessible methods and fields – despite what you might think, the advice is not within the lexical scope of the advised class.
  4. Add the parameter value as the first parameter to proceed().

This example is for an advised method with no parameters. If the method has parameters:

	pointcut myPointCut( ) : execution(void my.package.myClass.myMethod(my.package.ParamClass));
	
	void around(my.package.myClass myClass, my.package.ParamClass param): target(myClass) 
			&& args(param) && myPointCut() {
		myClass.method1(); // do some stuff
		proceed(myClass, param); // call the advised method
		myClass.publicField = null; // do some other stuff
       
	}

Domain model integrity example

One of the primary design goals of a domain model is to maintain the integrity of the model data, and to do so at a higher level than simple database constraints. A good domain model should be able to guarantee semantic consistency with respect to the business domain.

Validation is an important tool for consistency guarantees, but something that is often overlooked is the role of object design. Many validation rules can be replaced by designing objects so as to make it impossible to get into an invalid state in the first place. This post is about a simple example of doing just that.

The section of the model we’re concerned with looks like this:

[Diagram: a Company object referencing Country, State, and Region, which form a strict hierarchy]

We have a Company object, with references to Country, State, and Region objects. Country, State and Region are related in a strict hierarchy. If we knew that all countries had states and all states had regions, Company could just store a reference to Region and the rest would be implied. But we don’t have that luxury, so we need all three references. Obviously, there are some quite strong constraints on what can be considered consistent:

  1. A company’s state, if it exists, must belong to the company’s country
  2. A company’s region, if it exists, must belong to the company’s state

It’s simple to write validation rules to enforce these constraints, but we can more elegantly enforce them by embodying the rules in the behaviour of the domain objects. Here are the setters for country, state and region within the Company object:

	public void setCountry(Country country) {
		if (this.country == null || !country.equals(this.country)) {
			this.country = country;
			setState(country.getStates().getDefault());
		}
	}

	public void setState(State state) {
		if (this.state == null || !this.state.equals(state)) {
			this.state = state;
			setCountry(state.getCountry());
			setRegion(state.getRegions().getDefault());
		}
	}

	public void setRegion(Region region) {
		if (this.region == null || !this.region.equals(region)) {
			this.region = region;
			if (region != null) {
				setState(region.getState());
			}
		}
	}

If we set the company’s region, that setter automatically takes care of setting the company’s state and country to match. If we change the company’s country, on the other hand, we don’t know what state or region were intended. However, we set them to defaults that are at least consistent. The calling module can make a more considered choice at its leisure.
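
As a quick illustration (assuming the obvious getters on Company – they aren’t shown above), setting just the region leaves the object fully consistent:

	// Usage sketch: the region setter cascades up the hierarchy
	company.setRegion(someRegion);
	assert company.getState().equals(someRegion.getState());
	assert company.getCountry().equals(someRegion.getState().getCountry());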

So, with a little model support from the country and state – that is, the provision of a “default” option for state and region respectively – it is now completely impossible for our company to be in an inconsistent state, without ever needing to validate any inputs.

An aside about normalization

In this example, company.region is nullable, state and country are not. Obviously this example is a little denormalized – country is completely specified by specifying the state. But many models have this sort of wrinkle, especially when the underlying database can’t be refactored. We can reduce the impact of the denormalized database schema on the model by changing the setter for country to this:

	private void setCountry(Country country) {
		this.country = country;
	}

Now we can only set the country by specifying a state. This more nearly matches the conceptual model, while retaining a country field in the company object for ORM purposes.

Conclusion

This is a very trivial example, but the principle is extremely powerful. A domain model often can enforce complex domain constraints simply by its built-in behaviour, either by internally adjusting its state or by simply making invalid operations unavailable. When possible, this approach is greatly preferable to reactive validation, which can tend to require either complex dirty checking, or endless revalidation of unchanging data.

The biggest challenge for older developers is…

This is a post in response to John Sonmez’s article on DZone.

The biggest issue for older developers is exactly this attitude that you have to “keep up with the trends”. Note the choice of words. We’re not saying “technical improvements”. We’re really just talking fashion. Herd mentality, if you will.

Now, one of the skills you learn as you go on is how to filter out fluff. In every other field (for rhetorical values of “every”, of course) the increasing discernment of older professionals is valued. In software development it’s too often seen as inflexibility.

The chasing of the bright shiny object has been elevated to a core value of the profession. There was a prominent article a few months back by the technical lead of a household-name internet business talking about their recent reinvention of their technology platform (sorry, reference to follow if I find it). On close reading, one thing jumped out – the part of the document on the rationale for change was packed with fluffy phrases like “old hat”, “past it”, “time for a change”, and even “we were bored with Java”. That’s right – these guys went public with the admission that they spent five-figure sums of shareholder money because they were “bored”. And the punchline? Nobody called them on it. This is seen as normal, even laudable. Possibly even “visionary”.

So what do you do when you realize that much of what people around you are talking about is fluff? When you realize you’ve seen the same hype cycle 3 or 4 times? Heaven forbid you should actually say it – that’s the quickest way to get labelled a dinosaur, and unwilling to change or learn. The best you can do, as an older developer, is to try to add value, point out the pitfalls (because you’ve seen them before), and try to gently nudge the herd away from the worst cliffs. Stay positive. Keep learning, of course, because it’s never all fluff. And avoid eyerolling and audible groans wherever possible.

That, to me, is the biggest challenge of being an older developer.

Multi-project AspectJ builds with Gradle and Eclipse

Using Gradle for build/CI and Eclipse for development is a nice ecosystem with reasonable integration, but things get a bit trickier when we add multi-project builds and AspectJ into the mix. This post steps through some of the manual steps required to get it all working together.

Environment

Note: I am using the built-in gradle eclipse plugin, but not the eclipse gradle plugin

The multi-project build

For reasons beyond the scope of this post, I’m using three projects, in order of dependency:

model – a rich domain model
persistence – a project which uses AspectJ to layer a set of generic persistence-aware superclasses on top of model
modelp – a project which takes the emitted classes from persistence and adds all the necessary persistence plumbing, such as hibernate mappings, optimized DAOs, etc.

Gradle configuration

Details irrelevant to the multi-project configuration are omitted.

persistence project:

dependencies {
	ajInpath project(path: ':model', transitive: false)
}

The persistence project will then emit all of the classes from the model project woven with the aspects from persistence. Note that the upstream dependencies of model are not woven, nor are they automatically available to the persistence project. We need to use the normal gradle dependency mechanisms if we want to do that.
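
For example, as a sketch only (adjust to whatever model actually requires), the persistence project can declare an ordinary compile dependency alongside the inpath entry, so that model and its upstream libraries are visible on the compile classpath:

dependencies {
	ajInpath project(path: ':model', transitive: false)
	// Ordinary dependency so that model's transitive libraries are
	// also visible to the persistence project's compiler
	compile project(':model')
}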

modelp project:

Similarly:

dependencies {
	ajInpath project(path: ':persistence', transitive: false)
}

Eclipse configuration

So far so good. Gradle is pretty clever about wiring up multi-project builds. Eclipse is a little less clever, or maybe just different. So after

gradle eclipse

we still have some manual steps to do to recreate this setup in Eclipse.

AspectJ setup

Edited 7th June 2016, thanks to Daniel’s very helpful comment

Here we come to the first difference between Eclipse and Gradle. If we add the upstream project to the inpath, AspectJ will try to weave all of that project’s referenced libraries as well. In effect, Eclipse is missing the “transitive: false” argument we used in Gradle. This is (mostly) harmless (probably), but it’s slow and can throw spurious errors. So instead of adding the whole upstream project to the inpath, we add the project’s emitted class folder.

Together with adding the AspectJ nature to the project, the gradle code to configure eclipse looks like this for the modelp project:

eclipse {

	project {
		natures = ['org.eclipse.ajdt.ui.ajnature','org.eclipse.jdt.core.javanature']
		
		buildCommand 'org.eclipse.ajdt.core.ajbuilder'
	}
	
	// Add the inpath entry to the classpath
	classpath {
		file {
			withXml {
				def node = it.asNode();
				node.appendNode("classpathentry",  [kind:"lib", path:"/model/bin"])
					.appendNode("attributes", [:])
					.appendNode("attribute", [name:"org.eclipse.ajdt.inpath", value:"org.eclipse.ajdt.inpath"]);
			}
		}

	}
}

Dependent project setup

We still need the upstream project and its libraries to be available to the Eclipse compiler. The gradle eclipse plugin will take care of this if we have a normal compile project dependency in our gradle build (e.g. compile project(":model")), but we don’t necessarily need that for our gradle build. If we only have the inpath dependency the gradle eclipse plugin will miss it, so in Eclipse we also need to add the upstream project as a required project in the Java Build Path, like so:

[Screenshot: the Java Build Path dialog, Projects tab, with the upstream project added as a required project]

Export exclusions

By default, adding the AspectJ nature to an Eclipse project causes it to export the AspectJ runtime (aspectjrt-x.x.x.jar). As all three of these projects are AspectJ projects, we end up with multiply defined runtimes, so we need to remove the runtime from the export list of the upstream projects.

Gradle is much better than Eclipse at dealing with complex dependency graphs. In particular, if an upstream project depends on an older version of a jar and a downstream project depends on a newer version of the same jar, the newer version will win. In Eclipse, both jars will be included in the classpath, with all the corresponding odd behaviour. So you might also need to tweak the export exclusions to avoid these situations.

Run configuration

Once you’ve cleaned up the exports from upstream projects, Eclipse will cheerfully ignore your exclusions when creating run or debug configurations, for example when running a JUnit test. This seems to be a legacy behaviour that has been kept for backward compatibility, but fortunately you can change it at a global level in the Eclipse preferences:

[Screenshot: the Eclipse launching preferences page]

Make sure the last item, “only include exported classpath entries when launching”, is checked. Note that this applies to Run configurations as well, not just Debug configurations.

Conclusion

The manual Eclipse configuration needs to be redone whenever you do a gradle cleanEclipse eclipse, but usually not after just a plain gradle eclipse. It only takes a few minutes to redo from scratch, but it can be a hassle if you forget a step. Hence this blog post.

Interfaces as Ball of Mud protection

A response to https://dzone.com/articles/is-your-code-too-concrete, where Edmund Kirwan hypothesizes that using interfaces delays the onset of Mud.

A few observations:

* Any well-factored system will have more direct dependencies than methods. More methods than direct dependencies indicates that code re-use is very low.

* For any well-structured system, the relationship between direct dependencies and indirect dependencies will be linear, not exponential. The buttons and string experimental result is not surprising, but would only apply to software systems where people really do create interconnections at random. The whole purpose of modular program structure is explicitly to prevent this.

* Abstract interfaces are in no way a necessary condition for modular programming.

* Finally, the notion that interfaces act as a termination point for dependencies seems a little odd. An interface merely represents a point at which a dependency chain becomes potentially undiscoverable by static analysis. Unquestionably the dependencies are still there, otherwise your call to that method at runtime wouldn’t do anything.

So I suspect that what Edmund has discovered is a correlation between the use of interfaces and modular program structure. But that is just a correlation. A few years back there was an unfortunate vogue for creating an interface for each and every class, a practice which turned out to be entirely compatible with a Big Ball of Mud architecture. The button and string experiment provides an interesting support for modular programming, but I don’t know that it says much about interfaces.

New improved placebo effect!

http://www.sciencealert.com/the-placebo-effect-is-somehow-getting-even-better-at-fooling-patients-study-finds

Doncha just love the medical research community’s hate-hate relationship with the placebo effect? There’s a rich vein here, but I’ll limit myself to two observations:

1. According to the URL the placebo effect is getting better at fooling patients. Not getting better at treating patients. Which it is also doing, of course. Pretty obvious which aspect we’re really interested in.

2. Then there’s this paragraph:

“In any case, it’s something the medical industry will want to get on top of, as the move to conducting longer and larger drug trials – ostensibly for the purposes of testing efficacy – seems to be backfiring when it comes to getting new therapeutic solutions onto the market.”

In other words – conducting larger trials is not serving the goal, which is to get therapeutic solutions onto the market whether they are actually more effective than placebo or not.

And on and on it goes – all the usual terminology contrasting real drugs with imaginary cures etc etc. Why are we not researching how to generate and enhance the placebo effect?

Blackboard and software complexity

A comment on Blackboard’s complexity problems.

If either the author of this article or the otherwise knowledgeable Feldstein has ever worked in software development, it’s not apparent from this article and the ensuing comments thread. The list of architectural scare factors – multiple deployment environments, wide use of 3rd party libraries, legacy code – is simply business as usual for any substantial software product. And the assertion that “few other companies support this sheer raw complexity of configuration combinations” is just plain wrong. Many, many companies deal with exactly this. Cross-platform release engineering is a demanding but well-understood discipline.

To pick on a couple more representative points: “All enterprise software ages poorly”. No, all software ages. Whether it ages poorly or well depends on whether it’s worth the vendor’s time to manage its aging. Go and ask the IBM shops running 1960s-vintage System/360 applications on modern virtualized environments whether they’re happy with 50 years of ROI on those applications. And then: “Microsoft control their entire ecosystem”. Please, please, go and talk to a Microsoft release test engineer about how controlled their release targets are. Make sure you have a very comfortable seat and lots of beer money, because you’ll be buying and you’ll be there for a looong time.

I don’t challenge the author’s underlying premise that Blackboard has mismanaged its software assets – I don’t have the inside knowledge to confirm or deny that. And the notion that Blackboard, like every software developer, needs to actively manage and reduce complexity is incontestable. But I don’t accept the notion that the architectural factors listed are any kind of indicator. I would bet that inside Blackboard there are some very frustrated developers who know exactly how to support that range of configurations, led by a management group who is telling them not to spend time refactoring and reducing technical debt, but rather to crack on with adding to the feature list smorgasbord. As if that’s an either/or choice.

Gradle – copy to multiple destinations

TL;DR (edited):

def deployTargets = ["my/dest/ination/path/1","my/other/desti/nation"]
def zipFile = file("${buildDir}/distributions/dist.zip")

task deploy (dependsOn: distZip) {
	inputs.file zipFile
	deployTargets.each { outputDir ->
		outputs.dir outputDir
	}
	
	doLast {
		deployTargets.each { outputDir ->
			copy {
				from zipTree(zipFile).files
				into outputDir
			}
		}
	}
}

My specific use case is to copy the jars from a java library distribution to tomcat web contexts, so you can see the distZip dependency in there, along with zip file manipulation.

The multiple-destination copy seems to be a bit of a FAQ for gradle newcomers like myself. Gradle has a cool copy task, and lots of options to specify how to copy multiple sources into one destination. What about copying one source into multiple destinations? There’s a fair bit of confusion around the fact that the copy task supports multiple “from” properties, but only one “into” property.
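
That one-destination shape looks something like this (a throwaway sketch – the task and path names are invented):

task collectDocs(type: Copy) {
	// many sources are fine...
	from 'docs/html'
	from 'docs/pdf'
	// ...but a Copy task has exactly one destination
	into "${buildDir}/allDocs"
}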

The answers I’ve found seem to fall into one of two classes. The first is to just do the copy imperatively, like so:

task justDoit << {
  destinations.each { dest ->
    copy {
      from 'src'
      into dest
    }
  }
}

which gives up up-to-date checking. The solution I’ve settled on fixes that by using the inputs and outputs properties. Unlike the copy task type’s “into” property, a generic task can have multiple outputs.

The other advice given is to create multiple copy tasks, one for each destination. That seems a little unsatisfactory and un-dynamic. What if I have 100 destinations? Must I really clutter up my build script with 100 copy tasks? The following is my attempt to handle it dynamically.

def deployTargets = ["my/dest/ination/path/1","my/other/desti/nation"]
def zipFile = file("${buildDir}/distributions/dist.zip")

task deploy

// Set up a copy task for each deployment target
deployTargets.eachWithIndex { outputDir, index ->
	task "deploy${index}" (type: Copy, dependsOn: distZip) {
		from zipTree(zipFile).files
		into outputDir
	}
	
	deploy.dependsOn tasks["deploy${index}"]
}

This one suffers from the problem that it will not execute on the same build when the zip file changes, but it will execute on the next build. So in sequence:

  • Change a source file
  • Run “gradle deploy”
  • Sources compile, distZip executes, zip file is produced, but deploy tasks do not execute
  • Run “gradle deploy” again
  • Deploy tasks execute

Why is this so? I don’t know. This thread seems to imply that there could be some race condition in gradle, but beyond that – *shrug*. The multiple copy task approach is recommended by a lot of smart people, so I assume there’s a better way to do it, but for now the single custom task is working for me.