Domain model integrity example

One of the primary design goals of a domain model is to maintain the integrity of the model data, and to do so at a higher level than simple database constraints. A good domain model should be able to guarantee semantic consistency with respect to the business domain.

Validation is an important tool for consistency guarantees, but something that is often overlooked is the role of object design. Many validation rules can be replaced by designing objects so as to make it impossible to get into an invalid state in the first place. This post is about a simple example of doing just that.

The section of the model we’re concerned with looks like this:

[Diagram: Company with references to Country, State, and Region]

We have a Company object, with references to Country, State, and Region objects. Country, State and Region are related in a strict hierarchy. If we knew that all countries had states and all states had regions, Company could just store a reference to Region and the rest would be implied. But we don’t have that luxury, so we need all three references. Obviously, there are some quite strong constraints on what can be considered consistent:

  1. A company's state, if it exists, must belong to the company's country
  2. A company’s region, if it exists, must belong to the company’s state

It’s simple to write validation rules to enforce these constraints, but we can more elegantly enforce them by embodying the rules in the behaviour of the domain objects. Here are the setters for country, state and region within the Company object:

	public void setCountry(Country country) {
		// The equality guards on these setters terminate the mutual recursion
		if (this.country == null || !this.country.equals(country)) {
			this.country = country;
			// Only fall back to the default state if the current state
			// doesn't belong to the new country
			if (this.state == null || !country.equals(this.state.getCountry())) {
				setState(country.getStates().getDefault());
			}
		}
	}

	public void setState(State state) {
		if (this.state == null || !this.state.equals(state)) {
			this.state = state;
			setCountry(state.getCountry());
			// Likewise, keep the current region if it already belongs
			// to the new state
			if (this.region == null || !state.equals(this.region.getState())) {
				setRegion(state.getRegions().getDefault());
			}
		}
	}

	public void setRegion(Region region) {
		if (this.region == null || !this.region.equals(region)) {
			this.region = region;
			if (region != null) {
				setState(region.getState());
			}
		}
	}

If we set the company’s region, that setter automatically takes care of setting the company’s state and country to match. If we change the company’s country, on the other hand, we don’t know what state or region were intended. However, we set them to defaults that are at least consistent. The calling module can make a more considered choice at its leisure.

So, with a little model support from the country and state – that is, the provision of a “default” option for state and region respectively – it is now completely impossible for our company to be in an inconsistent state, without ever needing to validate any inputs.
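For illustration, here's a minimal sketch of what that model support might look like. The shape of the States collection class and the way getDefault() picks a default are my assumptions, not code from the original project (State is the post's own domain class):

	import java.util.ArrayList;
	import java.util.List;

	class Country {
		private final List<State> states = new ArrayList<>();

		public States getStates() {
			return new States(states);
		}
	}

	// A small first-class collection wrapping a country's states
	class States {
		private final List<State> states;

		States(List<State> states) {
			this.states = states;
		}

		// Here the default is simply the first state; a real model might
		// instead flag one state per country as the default
		public State getDefault() {
			return states.isEmpty() ? null : states.get(0);
		}
	}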

An aside about normalization

In this example, company.region is nullable; state and country are not. Obviously this example is a little denormalized – country is completely specified by specifying the state. But many models have this sort of wrinkle, especially when the underlying database can't be refactored. We can reduce the impact of the denormalized database schema on the model by changing the setter for country to this:

	private void setCountry(Country country) {
		this.country = country;
	}

Now we can only set the country by specifying a state. This more nearly matches the conceptual model, while retaining a country field in the company object for ORM purposes.
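A quick usage sketch (the object names are hypothetical): setting the state is now the only way to establish the country, and the two stay consistent by construction:

	Company company = new Company();
	company.setState(newSouthWales);

	// The country was set implicitly by setState, and is guaranteed to match
	assert company.getCountry().equals(newSouthWales.getCountry());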

Conclusion

This is a very trivial example, but the principle is extremely powerful. A domain model can often enforce complex domain constraints simply through its built-in behaviour, either by internally adjusting its state or by making invalid operations unavailable. When possible, this approach is greatly preferable to reactive validation, which tends to require either complex dirty checking or endless revalidation of unchanging data.

The biggest challenge for older developers is…

This is a post in response to John Sonmez's article on DZone.

The biggest issue for older developers is exactly this attitude that you have to “keep up with the trends”. Note the choice of words. We’re not saying “technical improvements”. We’re really just talking fashion. Herd mentality, if you will.

Now, one of the skills you learn as you go on is how to filter out fluff. In every other field (for rhetorical values of “every”, of course) the increasing discernment of older professionals is valued. In software development it’s too often seen as inflexibility.

The chasing of the bright shiny object has been elevated to a core value of the profession. There was a prominent article a few months back by the technical lead of a household-name internet business talking about their recent reinvention of their technology platform (sorry, reference to follow if I find it). On close reading, one thing jumped out – the part of the document on the rationale for change was packed with fluffy phrases like “old hat”, “past it”, “time for a change”, and even “we were bored with Java”. That’s right – these guys went public with the admission that they spent five-figure sums of shareholder money because they were “bored”. And the punchline? Nobody called them on it. This is seen as normal, even laudable. Possibly even “visionary”.

So what do you do when you realize that much of what people around you are talking about is fluff? When you realize you’ve seen the same hype cycle 3 or 4 times? Heaven forbid you should actually say it – that’s the quickest way to get labelled a dinosaur, and unwilling to change or learn. The best you can do, as an older developer, is to try to add value, point out the pitfalls (because you’ve seen them before), and try to gently nudge the herd away from the worst cliffs. Stay positive. Keep learning, of course, because it’s never all fluff. And avoid eyerolling and audible groans wherever possible.

That, to me, is the biggest challenge of being an older developer.

Multi-project AspectJ builds with Gradle and Eclipse

Using Gradle for build/CI and Eclipse for development is a nice ecosystem with reasonable integration, but things get a bit trickier when we add multi-project builds and AspectJ into the mix. This post steps through some of the manual steps required to get it all working together.

Environment

Note: I am using the built-in gradle eclipse plugin, but not the eclipse gradle plugin.

The multi-project build

For reasons beyond the scope of this post, I’m using three projects, in order of dependency:

model – a rich domain model
persistence – a project which uses AspectJ to layer a set of generic persistence-aware superclasses on top of model
modelp – a project which takes the emitted classes from persistence and adds all the necessary persistence plumbing, such as hibernate mappings, optimized DAOs, etc.

Gradle configuration

Details irrelevant to the multi-project configuration are omitted.

persistence project:

dependencies {
	ajInpath project(path: ':model', transitive: false)
}

The persistence project will then emit all of the classes from the model project woven with the aspects from persistence. Note that the upstream dependencies of model are not woven, nor are they automatically available to the persistence project. We need to use the normal gradle dependency mechanisms if we want to do that.
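So if, say, the persistence code also needs to compile against model and its transitive dependencies, we declare that explicitly. A sketch (the configuration name will depend on your gradle version and plugins):

dependencies {
	ajInpath project(path: ':model', transitive: false)
	// Normal dependency mechanism: makes model and its transitive
	// dependencies visible on the persistence compile classpath
	compile project(':model')
}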

modelp project:

Similarly:

dependencies {
	ajInpath project(path: ':persistence', transitive: false)
}

Eclipse configuration

So far so good. Gradle is pretty clever about wiring up multi-project builds. Eclipse is a little less clever, or maybe just different. So after

gradle eclipse

we still have some manual steps to do to recreate this setup in Eclipse.

AspectJ setup

Gradle’s eclipse plugin does not integrate with the gradle aspectj plugin, and hence doesn’t apply the AspectJ nature to the eclipse project. So we have to do that manually:

right-click on project -> Configure -> Convert to AspectJ project

We then need to set the inpaths, via Build Path -> Configure Build Path... -> AspectJ Build.

Here we come to the first difference between Eclipse and Gradle. If we add the upstream project to the inpath, AspectJ will try to weave all of that project’s referenced libraries as well. In effect, Eclipse is missing the “transitive: false” argument we used in Gradle. This is (mostly) harmless (probably), but it’s slow and can throw spurious errors. So instead of adding the whole upstream project to the inpath, we add the project’s emitted class folder. For modelp, it’ll look like this:

[Screenshot: the AspectJ Build inpath settings, with the upstream project's class folder on the inpath]

Dependent project setup

We still need the upstream project and its libraries to be available to the Eclipse compiler. The gradle eclipse plugin will take care of this if we have a normal compile project dependency in our gradle build (e.g. compile project(":model")), but we don’t necessarily need that for our gradle build. If we only have the inpath dependency the gradle eclipse plugin will miss it, so in Eclipse we also need to add the upstream project as a required project in the Java Build Path, like so:

[Screenshot: the Java Build Path, with the upstream project added as a required project]

Export exclusions

By default, adding the AspectJ nature to an Eclipse project causes it to export the AspectJ runtime (aspectjrt-x.x.x.jar). As all three of these projects are AspectJ projects, we end up with multiply defined runtimes, so we need to remove the runtime from the export list of the upstream projects.

Gradle is much better than Eclipse at dealing with complex dependency graphs. In particular, if an upstream project depends on an older version of a jar and a downstream project depends on a newer version of the same jar, the newer version will win. In Eclipse, both jars will be included in the classpath, with all the corresponding odd behaviour. So you might also need to tweak the export exclusions to avoid these situations.

Run configuration

Once you’ve cleaned up the exports from upstream projects, Eclipse will cheerfully ignore your exclusions when creating run or debug configurations, for example when running a JUnit test. This seems to be a legacy behaviour that has been kept for backward compatibility, but fortunately you can change it at a global level in the Eclipse preferences:

[Screenshot: the Eclipse launching preferences]

Make sure the last item, “only include exported classpath entries when launching”, is checked. Note that this applies to Run configurations as well, not just Debug configurations.

Conclusion

The Eclipse configuration needs to be redone whenever you do a gradle cleanEclipse eclipse, but usually not after just a plain gradle eclipse. It only takes a few minutes to redo from scratch, but it can be a hassle if you forget a step. Hence this blog post.

Interfaces as Ball of Mud protection

A response to https://dzone.com/articles/is-your-code-too-concrete, where Edmund Kirwan hypothesizes that using interfaces delays the onset of Mud.

A few observations:

* Any well-factored system will have more direct dependencies than methods. More methods than direct dependencies indicates that code re-use is very low.

* For any well-structured system, the relationship between direct dependencies and indirect dependencies will be linear, not exponential. The buttons and string experimental result is not surprising, but would only apply to software systems where people really do create interconnections at random. The whole purpose of modular program structure is explicitly to prevent this.

* Abstract interfaces are in no way a necessary condition for modular programming.

* Finally, the notion that interfaces act as a termination point for dependencies seems a little odd. An interface merely represents a point at which a dependency chain becomes potentially undiscoverable by static analysis. Unquestionably the dependencies are still there, otherwise your call to that method at runtime wouldn’t do anything.

So I suspect that what Edmund has discovered is a correlation between the use of interfaces and modular program structure. But that is just a correlation. A few years back there was an unfortunate vogue for creating an interface for each and every class, a practice which turned out to be entirely compatible with a Big Ball of Mud architecture. The button and string experiment provides an interesting support for modular programming, but I don’t know that it says much about interfaces.

New improved placebo effect!

http://www.sciencealert.com/the-placebo-effect-is-somehow-getting-even-better-at-fooling-patients-study-finds

Doncha just love the medical research community’s hate-hate relationship with the placebo effect? There’s a rich vein here, but I’ll limit myself to two observations:

1. According to the URL, the placebo effect is getting better at fooling patients. Not getting better at treating patients. Which it is also doing, of course. Pretty obvious which aspect we're really interested in.

2. Then there’s this paragraph:

“In any case, it’s something the medical industry will want to get on top of, as the move to conducting longer and larger drug trails – ostensibly for the purposes of testing efficacy – seems to be backfiring when it comes to getting new therapeutic solutions onto the market.”

In other words – conducting larger trials is not serving the goal, which is to get therapeutic solutions onto the market whether they are actually more effective than placebo or not.

And on and on it goes – all the usual terminology contrasting real drugs with imaginary cures etc etc. Why are we not researching how to generate and enhance the placebo effect?

Blackboard and software complexity

A comment on Blackboard’s complexity problems.

If either the author of this article or the otherwise knowledgeable Feldstein has ever worked in software development, it's not apparent from this article and the ensuing comments thread. The list of architectural scare factors – multiple deployment environments, wide use of 3rd party libraries, legacy code – is simply business as usual for any substantial software product. And the assertion that "few other companies support this sheer raw complexity of configuration combinations" is just plain wrong. Many, many companies deal with exactly this. Cross-platform release engineering is a demanding but well-understood discipline.

To pick on a couple more representative points: "All enterprise software ages poorly". No, all software ages. Whether it ages poorly or well depends on whether it's worth the vendor's time to manage its aging. Go and ask the IBM shops running 1960s-vintage System/360 applications on modern virtualized environments whether they're happy with 50 years of ROI on those applications. And then: "Microsoft control their entire ecosystem". Please, please, go and talk to a Microsoft release test engineer about how controlled their release targets are. Make sure you have a very comfortable seat and lots of beer money, because you'll be buying and you'll be there for a looong time.

I don’t challenge the author’s underlying premise that Blackboard has mismanaged its software assets – I don’t have the inside knowledge to confirm or deny that. And the notion that Blackboard, like every software developer, needs to actively manage and reduce complexity is incontestable. But I don’t accept the notion that the architectural factors listed are any kind of indicator. I would bet that inside Blackboard there are some very frustrated developers who know exactly how to support that range of configurations, led by a management group who is telling them not to spend time refactoring and reducing technical debt, but rather to crack on with adding to the feature list smorgasbord. As if that’s an either/or choice.

Gradle – copy to multiple destinations

TL;DR (edited):

def deployTargets = ["my/dest/ination/path/1","my/other/desti/nation"]
def zipFile = file("${buildDir}/distributions/dist.zip")

task deploy (dependsOn: distZip) {
	// Declaring the zip as an input and each target directory as an
	// output gives the task proper up-to-date checking
	inputs.file zipFile
	deployTargets.each { outputDir ->
		outputs.dir outputDir
	}

	doLast {
		deployTargets.each { outputDir ->
			copy {
				from zipTree(zipFile).files
				into outputDir
			}
		}
	}
}

My specific use case is to copy the jars from a java library distribution to tomcat web contexts, so you can see the distZip dependency in there, along with zip file manipulation.

The multiple destination copy seems to be a bit of FAQ for gradle newcomers like myself. Gradle has a cool copy task, and lots of options to specify how to copy multiple sources into one destination. What about copying one source into multiple destinations? There’s a fair bit of confusion around the fact that the copy task supports multiple “from” properties, but only one “into” property.
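To illustrate the asymmetry (the paths here are made up): a single copy task will happily merge any number of sources, but accepts exactly one destination.

task copyDocs(type: Copy) {
	from 'docs/html'        // several "from"s are fine...
	from 'docs/pdf'
	into "${buildDir}/docs" // ...but only one "into"
}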

The answers I’ve found seem to fall into one of two classes. The first is to just do the copy imperatively, like so:

task justDoit << {
  destinations.each { dest ->
    copy {
      from 'src'
      into dest  // the copy spec's destination property is "into"
    }
  }
}

which forgoes up-to-date checking. The solution I've settled on fixes that by using the inputs and outputs properties. Unlike the copy task type's "into" property, a generic task can have multiple outputs.

The other advice given is to create multiple copy tasks, one for each destination. That seems a little unsatisfactory, and un-dynamic. What if I have 100 destinations? Must I really clutter up my build script with 100 copy tasks? The following is my attempt to handle it dynamically.

def deployTargets = ["my/dest/ination/path/1","my/other/desti/nation"]
def zipFile = file("${buildDir}/distributions/dist.zip")

task deploy

// Set up a copy task for each deployment target
deployTargets.eachWithIndex { outputDir, index ->
	task "deploy${index}" (type: Copy, dependsOn: distZip) {
		from zipTree(zipFile).files
		into outputDir
	}
	
	deploy.dependsOn tasks["deploy${index}"]
}

This one suffers from the problem that it will not execute on the same build when the zip file changes, but it will execute on the next build. So in sequence:

  • Change a source file
  • Run “gradle deploy”
  • Sources compile, distZip executes, zip file is produced, but deploy tasks do not execute
  • Run “gradle deploy” again
  • Deploy tasks execute

Why is this so? I don’t know. This thread seems to imply that there could be some race condition in gradle, but beyond that – *shrug*. The multiple copy task approach is recommended by a lot of smart people, so I assume there’s a better way to do it, but for now the single custom task is working for me.

Tech press bias

Will Windows 10 Win Developers Back To Microsoft?

This is a relatively balanced article on the issues facing Microsoft in growing developer mindshare, containing many balanced points. But I’m going to use it as a bit of a punching bag because I’m frustrated with the poor reporting and almost unbelievable levels of bias in the tech media. Sorry PW – there are way worse articles out there, you’re just in the firing line today.

There's a bit of confusion in this article between iOS and MacOS. Yes, Apple sells a ton of iDevices. No, there aren't "so many people on Mac". In the article, we see the supposedly disastrous Windows phone market share numbers (2.7% vs 18% for iPhone). Then we hear from a former .NET desktop developer bemoaning the flight of his audience to MacOS. I'm sorry, but MacOS market share in the desktop space is not even as high as that of Windows in the tablet space – and, despite tech-press rhetoric, desktops are still ahead of tablets in raw numbers (just), eyeball hours (by a bit more), and value as enablers (no contest).

This is the old switcheroo we’ve seen so many times in the last ten years. Apple (and now Google) has sold a bunch of phones to upgrade-happy consumers, and somehow that means Microsoft is in trouble. Except it’s not, we have to admit, when we really look at the numbers. But it could be. Soon. Maybe. Or maybe a bit later. Or maybe not. Anyway, that’s gotta be worth an article, right?

The work of the world today overwhelmingly takes place on desktop computers. And if Microsoft has “failed” in the mobile space (a thesis with which I disagree), then Apple, after 35 years of trying, has surely “failed” in the desktop space. Anyone beating up Google for lack of market share for ChromeOS? No, didn’t think so.

But mass market plays aren’t the only ones that matter. The last 30 years of desktop computing would have been vastly poorer without Apple. Today, in the tablet space, the boot is on the other foot – Apple is the marketing success story, Google has taken cheap and cheerful to the limit, and Microsoft occupies the quality niche. And with its strengths in productivity, Microsoft has a lot to bring to the mobile table. Each vendor gets appropriate credit for their respective role, right?

Wrong. The bias is almost laughable. When Apple was a niche purveyor to the graphics and music industries, design quality was what mattered. Then when Apple had a mass market success with the iPhone and then the iPad, market share was what mattered. Then the iPhone got knocked off by Android, and now high-end market share is what matters. Guys, if I want to read Apple sales brochures I know where to find them.

So why do I care? I can just ignore the tech press after all. But the scary thing is this: investors don’t. Year after year of relentless bad press is going to take its toll. It won’t be poor engineering quality, or inappropriate pricing, or the old red herring of “app ecosystem”, or lack of developer mindshare, or lack of market share that kills off mobile Windows. All those things are either furphies or eminently fixable. It’ll be plain old bad reporting. And that’s a shame. Like the Mac in the 80’s and 90’s, Microsoft’s mobile offerings have a lot to teach the major players.

Java logging fun facts

I finally bit the bullet. I pulled out all the nice System.out.println() calls from my slideshow app and set up proper Java logging. I didn’t expect it to be easy – in all the hours I’ve spent debugging Java frameworks, the hardest thing is always trying to work out how to get things to actually appear in logs – and, sure enough, I accumulated a little list of things that were non-obvious. I’ve used slf4j as the logging façade, and java.util.logging (aka JDK 1.4 logging) as the implementation.

Handler levels vs logger levels

If you have:

.level = SEVERE
au.id.lagod.slideshow.level = FINE
java.util.logging.FileHandler.level = INFO

what’s the actual log level? The first line gives the default log level. For packages in au.id.lagod.slideshow, this is overridden by the second line. The third line then gives the finest level that will be logged by the file handler. So in this case, the log file will actually only contain messages of level INFO and coarser. Another handler might accept the FINE messages that will be emitted by my app.
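To make that concrete, here's one way a second handler could pick up those FINE messages. The handler wiring below is my assumption, not part of the original config:

handlers = java.util.logging.FileHandler, java.util.logging.ConsoleHandler

.level = SEVERE
au.id.lagod.slideshow.level = FINE
java.util.logging.FileHandler.level = INFO

# The console handler passes through everything the loggers emit, so the
# app's FINE messages appear on the console even though the file only
# gets INFO and coarser
java.util.logging.ConsoleHandler.level = ALL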

Logger specifications don’t need wildcards

Line 2 in the above snippet sets the log level for all classes in package au.id.lagod.slideshow and all its subpackages. So, if I want to get FINE logging in au.id.lagod.slideshow.*:

# THIS IS CORRECT :)
au.id.lagod.slideshow.level = FINE

# OR EVEN THIS
au.id.lagod.level = FINE

# OR THIS (if I don't mind being that inclusive)
au.id.level = FINE

# WRONG!!
au.id.lagod.slideshow.*.level = FINE

logging.properties does NOT use slf4j level names

If you use SLF4J, you use calls like logger.debug(), logger.info(), etc., to send strings to the logger. If you use java.util.logging (JDK 1.4 logging) as the logging provider, you configure the logger using a logging.properties file.

These guys do NOT use the same log level names. Doing a bunch of logger.debug() calls? In the logging.properties file:

# This shows logger.debug() messages
.level = FINE

# This is an error
.level = DEBUG

See here for the full translation between slf4j and j.u.l. log levels.
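In brief, the slf4j-jdk14 binding maps them like so:

logger.trace()  ->  FINEST
logger.debug()  ->  FINE
logger.info()   ->  INFO
logger.warn()   ->  WARNING
logger.error()  ->  SEVERE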

Specifying logging.properties in Eclipse

When launching from the command line, I specify the properties file with a JVM argument, like this:

java -Djava.util.logging.config.file=./logging.properties -cp .:./*:lib/* au.id.lagod.slideshow.Runner

When launching in Eclipse, -Djava.util.logging.config.file goes into the VM arguments box of the Run configuration, NOT the application arguments box.

Where on earth is the log file?

OK, this is in the documentation, but I can tell you that if you google “java logging log file location” it will be many, many pages before you find an answer you can use. Before you get there, you’ll have to wade through gems of the documentation writer’s art such as:

java.util.logging.FileHandler.pattern: The log file name pattern.

So here’s a hot tip: java.util.logging.FileHandler.pattern actually is the log file name. It’s not a pattern, at least not in the regex sense. There are some handy placeholders for variables that can be interpolated into it, but you don’t have to use any of them. Just type the path you want. If you want to know about the placeholders, have a look at the Javadoc for FileHandler.
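For example, either of these would work (the paths are made up):

# A plain file path - no placeholders required
java.util.logging.FileHandler.pattern = logs/slideshow.log

# Or, with placeholders: %h is the user's home directory and %g the
# rotation generation number
# java.util.logging.FileHandler.pattern = %h/slideshow-%g.log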

What’s wrong with photo slideshow apps?

I’ve been dissatisfied with the photo slideshow applications I’ve been using. Like most people, I take a lot of photos, especially when I’m travelling. Unlike most people, I use a high-res camera with good lenses, not a camera phone. That means my photos have a lot of detail, and are worth looking at for a while (for me, anyway). And because there are a lot of them, I often find myself wondering exactly where and when an image was taken. So, here’s my feature wish list for a slideshow program:

  1. Recursive directory searching. I don’t have time to put together special collections. Even if I did, 10,000 files is too many for one folder. I just want to point the slideshow at a large folder tree and have it find everything.
  2. Configurable delay. I like to look at a photo for a while, focussing on different details. I took one photo of Dunedin Harbour with an entire penguin colony in one corner, which I didn't see until I'd looked at it for several minutes.
  3. Metadata display. I don’t have time to caption every photo, but I tag pretty much everything with at least the occasion (e.g. “Christmas 2005”) and the place (e.g. “Dunedin”). So when I see a ten-year-old photo pop up on my screen saver, I’d like to see that metadata so I have some clue as to what I’m looking at.
  4. Forward and back controls. How often do you catch a great photo out of the corner of your eye and think "Wow, what's that?" just as the slideshow transitions to the next photo? If you're on shuffle in a collection of tens of thousands of files, I guarantee you'll never find that photo again. Wouldn't it be nice to just hit a key and get it back? Or, conversely, if you've got a nice long delay time so you can savour every detail, you'll occasionally spend two minutes staring at a photo of a lens cap that you forgot to delete. Unless you can just hit a key and skip to the next photo.
  5. No fussy transitions. In fact, I really want to be able to turn transitions off. I pay a fair bit of attention to framing, so having my photos sliding and zooming around the place isn’t my cup of tea. I can live with a fade-in fade-out, but I really don’t need to see Grandma spinning off into space on the side of a cube. Slideshows that recrop 4:3 photos to 16:9 are a no-no as well.

Now, I have by no means done an exhaustive search of all slideshow applications. However, it’s not a crowded category. I suspect it’s one of those software categories where the tools bundled with the operating system, while inadequate, are still functional enough to take all the oxygen out of the market. For example, the Windows 8 lock screen slideshow is a pretty nice looking slideshow, but it doesn’t include a single one of my wishlist features. Still, given that it’s there, how many people have even gone looking for something better? I have, and I can tell you that the Windows Photo Gallery slideshow changes photos way too fast (non-configurable), Photo Slideshow has no forward/back and no metadata display, some other app I forget only lets you have one non-recursive photo folder – etc, etc.

So – what else? – I wrote my own. Here it is. Fair warning, though, it’s nothing like production quality code, and it’s a java program so you’ll need to have java installed. Check out the readme for more details, and stay tuned for a future post on the technical nitty gritty.