
Patterns Of Enterprise Application Architecture By Martin Fowler


I'm about 15 years behind the curve on this one, but I finally picked up a copy of Patterns of Enterprise Application Architecture by Martin Fowler (and others). Published in 2002, the book has certainly aged a bit - it talks about XML and SOAP as common data formats, with no mention of JSON (JavaScript Object Notation) as the de facto standard. But, the core concepts discussed in the book are timeless and just as relevant today as they were at the turn of the century. Unfortunately, half of this book goes a bit over my head; but, the remarkable thing about this book is that Martin Fowler doesn't make me feel like a doofus for having an incomplete mental model. And, I actually finished the book feeling better about myself as a programmer.

Patterns of Enterprise Application Architecture by Martin Fowler, review by Ben Nadel.

When I was 25, I didn't know thing-one about Object Oriented Programming (OOP) or Domain Models in software design. My approach to building software was roughly equivalent to the Page Controller pattern, where each View had a block of server-side code at the top that dictated how the page would load data and respond to user actions. It was a simple approach; but, I was building simple software.
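Just to make that concrete, here's a rough sketch of the kind of thing I mean - this isn't code from any of those old projects, just an illustrative Node.js / Express-style handler (with made-up route, table, and column names) in which data access, business rules, and presentation all live in the page's controller block:

```typescript
// A rough, hypothetical sketch of the Page Controller style: one route,
// one block of code that loads data, applies logic, and renders the view.
import express from "express";
import { Pool } from "pg";

const app = express();
const db = new Pool(); // Connection settings come from environment variables.

app.get("/invoice/:id", async (req, res) => {
	// Data access, business rules, and presentation all live right here.
	const result = await db.query(
		"SELECT id, customer_name, subtotal FROM invoice WHERE id = $1",
		[req.params.id]
	);

	if (result.rowCount === 0) {
		return res.status(404).send("Invoice not found.");
	}

	const invoice = result.rows[0];
	// "Business logic" inlined in the page: apply a flat 5% tax.
	const total = Number(invoice.subtotal) * 1.05;

	// Presentation inlined in the page as well.
	res.send(`
		<h1>Invoice ${invoice.id}</h1>
		<p>Customer: ${invoice.customer_name}</p>
		<p>Total (with tax): ${total.toFixed(2)}</p>
	`);
});

app.listen(3000);
```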

But, even though I was making my clients happy, I had the sense that the Page Controller pattern was a very primitive way to build applications. I was incredibly interested in learning more about application architecture and especially about OOP and Domain Models. I assumed that if I just kept my nose to the grindstone and continued to learn and experiment then, one day, I would finally understand OOP, and application architecture would become second nature.

Now, over a decade later, Patterns of Enterprise Application Architecture makes it abundantly clear that I still know basically nothing about Object Oriented Programming and Domain Models. Even the relatively simple example that Fowler uses - revenue recognition - looks overly complicated and confusing to me. I have trouble imagining a large web of interconnected objects scaling up to fulfill all business requirements. To my primitive brain, it would be a nightmare just trying to figure out how anything worked and which object fulfilled which responsibility.

But, I don't mean for this to become a pity-party. In fact, I bring this up to underscore the point that Fowler freely admits that creating and understanding Domain Models is a non-trivial task.

Yet the Domain Model has its faults. High on a list is the difficulty of learning how to use a domain model. Object bigots often look down their noses at people who just don't get objects, but the consequence is that a Domain Model requires skill if it's to be done well - done poorly it's a disaster. (Kindle, Location 2094)

.... If the "how" for a Domain Model is difficult because it's such a big subject, the "when" is hard because of both the vagueness and the simplicity of the advice. It all comes down to the complexity of the behavior in your system. If you have complicated and ever-changing business rules involving validation, calculations, and derivations, chances are that you'll want an object model to handle them. On the other hand, if you have simple not-null checks and a couple of sums to calculate, a Transaction Script is a better bet. (Kindle, Location 2505)

I can personally attest to the fact that a poorly designed Domain Model can be disastrous. Years ago, I did try to build an application using an object-oriented Domain Model. And, the last thing that I remember is that the project was two times over budget; and, making even the slightest change required updates to many different code files. Thankfully, I was working with a wonderful and laid-back client; and, I had an employer who gave me a lot of latitude to experiment.

That said, it was a low point in my career. The experience left me very hesitant to ever venture back into the world of Domain Models. And, since then, I've gravitated towards the Transaction Script pattern, which employs a more procedural, top-down approach to programming.

The Transaction Script pattern has worked out fairly well for me. But, I've always viewed it as a secret shame - something that I've felt guilty about using. And, that's because almost every time you read about the Transaction Script pattern, it's portrayed in a negative light: a sort of "fake object-oriented programming" that combines the "worst of both worlds."

So, for me, one of the most valuable aspects of "Patterns Of Enterprise Application Architecture" is that Fowler describes the Transaction Script pattern as being useful; and, oftentimes, the right tool for the job.

The simplest of the three patterns is Transaction Script. It fits with the procedural model that most people are still comfortable with. It nicely encapsulates the logic of each system transaction in a comprehensible script. And it's easy to build on top of a relational database. Its great failing is that it doesn't deal well with complex business logic, being particularly susceptible to duplicate code. If you have a simple catalog application with little more than a shopping cart running off a basic pricing structure, Transaction Script will fill the bill perfectly. However, as your logic gets more complicated your difficulties multiply exponentially. (Kindle, Location 2094)

.... You can organize your Transaction Script into classes in two ways. The most common is to have several Transaction Scripts in a single class, where each class defines a subject area of related Transaction Scripts. This is straightforward and the best bet for most cases. The other way is to have each Transaction Script in its own class, using the Command pattern. In this case you define a supertype for your commands that specifies some execute method in which Transaction Script logic fits. (Kindle, Location 2328)

.... As the business logic gets more complicated, however, it gets progressively harder to keep it in a well-designed state. One particular problem to watch for is its duplication between transactions. Since the whole point is to handle one transaction, any common code tends to be duplicated.

Careful factoring can alleviate many of these problems, but more complex business domains need to build a Domain Model. A Domain Model will give you many more options in structuring the code, increasing readability, and decreasing duplication.

It's hard to quantify the cutover level, especially when you're more familiar with one pattern than the other. You can refactor a Transaction Script design to a Domain Model design, but it's a harder change than it otherwise needs to be. Therefore, an early shot is often the best way to move forward. (Kindle, Location 2349)
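To ground this in something I can actually picture, here's a back-of-the-napkin sketch of a Transaction Script - not from the book, just my own TypeScript approximation with made-up table and class names - where each system transaction is one comprehensible script, and related scripts are grouped into a class by subject area:

```typescript
// A minimal, hypothetical sketch of the Transaction Script pattern: each
// system transaction is one comprehensible script, grouped into a class
// that covers a subject area (here, order processing).
import { Pool } from "pg";

export class OrderScripts {
	constructor(private db: Pool) {}

	// One transaction script: place an order and decrement inventory.
	public async placeOrder(productID: number, quantity: number): Promise<number> {
		const client = await this.db.connect();

		try {
			await client.query("BEGIN");

			const product = await client.query(
				"SELECT price, stock FROM product WHERE id = $1 FOR UPDATE",
				[productID]
			);

			if (product.rowCount === 0) {
				throw new Error("Product not found.");
			}
			if (product.rows[0].stock < quantity) {
				throw new Error("Insufficient stock.");
			}

			const order = await client.query(
				"INSERT INTO purchase_order ( product_id, quantity, total ) VALUES ( $1, $2, $3 ) RETURNING id",
				[productID, quantity, product.rows[0].price * quantity]
			);

			await client.query(
				"UPDATE product SET stock = stock - $1 WHERE id = $2",
				[quantity, productID]
			);

			await client.query("COMMIT");
			return order.rows[0].id;
		} catch (error) {
			await client.query("ROLLBACK");
			throw error;
		} finally {
			client.release();
		}
	}
}
```

Fowler's other option - giving each script its own class via the Command pattern - would just mean pulling something like placeOrder() out into its own class with an execute() method behind a shared command supertype.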

Seeing Fowler talk about the Transaction Script in a positive light has lifted a huge emotional burden off of my shoulders. The Transaction Script approach that I've come to use no longer has to be the private shame that it once was - it can be the "solution" that I use to provide value for my clients. This revelation alone made the book worth reading.

Now, while Fowler does say that the Transaction Script is a valuable pattern, he is also quick to point out that its Achilles' heel is dealing with complex business logic. But, I would hazard a guess that the vast majority of us don't really deal with complex business logic - the vast majority of us actually build CRUD (Create, Read, Update, Delete) applications. In some cases - like InVision App - these are incredibly large, robust, and distributed CRUD applications; but, CRUD applications nonetheless.

As I was reading this book, one thing that started to become clearer was the distinction between a "complex domain" and a "complex application." For a long time, I viewed these two concepts as going hand-in-hand. But, I think many of us can build very complex applications on top of very straightforward domains. A useful metaphor might be juggling. Going from juggling 3 balls to juggling 10 balls and 3 chainsaws while riding a unicycle is certainly a massive increase in difficulty; but, it's not really an increase in "complexity."

To this end, a large application does not equal a complex application. And, I think it still makes sense for a "large" but "simple" application to use the Transaction Script pattern.

Now, in my applications, I have historically been referring to these "Transaction Scripts" as my "service layer." But, I think that my "service layer" is actually a kind of wonky amalgamation of two distinct concepts: the Service Layer and the Transaction Script. Whereas the Transaction Script deals with "domain logic," the Service Layer deals with "application logic."

Service Layer is a pattern for organizing business logic. Many designers, including me [Randy Stafford], like to divide "Business logic" into two kinds: "domain logic," having to do purely with the problem domain (such as strategies for calculating revenue recognition on a contract), and "application logic," having to do with application responsibilities (such as notifying contract administrators, and integrated applications, of revenue recognition calculations). Application logic is sometimes referred to as "workflow logic," although different people have different interpretations of "workflow." (Kindle, Location 2785)

.... In the operation script approach a Service Layer is implemented as a set of thicker classes that directly implement application logic but delegate to encapsulated domain object classes for domain logic. The operations available to clients of a Service Layer are implemented as scripts, organized several to a class defining a subject area of related logic. Each such class forms an application "Service," and it's common for service type names to end with "Service." A Service Layer is comprised of these application service classes, which should extend a Layer Supertype, abstracting their responsibilities and common behaviors. (Kindle, Location 2799)

.... My experience is that there's almost always a one-to-one correspondence between CRUD use cases and Service Layer operations... The application's responsibilities in carrying out these use cases, however, may be anything but boring. Validation aside, the creation, update, or deletion of a domain object in an application increasingly requires notification of other people and other integrated applications. These responses must be coordinated, and transacted atomically, by Service Layer operations. (Kindle, Location 2827)
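Reading that, my mental picture of the "operation script" approach is something like the following sketch - again, this is just my own illustration in TypeScript (the class and collaborator names are made up, not from the book or from my apps) - where the Service Layer operation owns the application logic and delegates the domain logic:

```typescript
// A rough sketch of the "operation script" Service Layer style: the service
// owns application logic (coordination, notifications) and delegates the
// domain logic to other objects. All names here are illustrative.
interface ContractGateway {
	recordRevenueRecognitions(contractID: number): Promise<void>;
}

interface Emailer {
	notifyAdministrators(subject: string, body: string): Promise<void>;
}

export class RevenueRecognitionService {
	constructor(
		private contracts: ContractGateway,
		private emailer: Emailer
	) {}

	// One Service Layer operation, roughly one-to-one with a use case.
	public async calculateRevenueRecognitions(contractID: number): Promise<void> {
		// Domain logic is delegated...
		await this.contracts.recordRevenueRecognitions(contractID);

		// ... while application logic (workflow / notification) lives here.
		await this.emailer.notifyAdministrators(
			"Revenue recognized",
			`Revenue recognitions were calculated for contract ${contractID}.`
		);
	}
}
```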

For years, I've used "services" and "transaction scripts" to fulfill business requirements. But, I have often run into points of terrible friction along the way. I have long suspected that this friction was due to an incorrect organization of my application layers; but, I haven't had the mental model necessary to clearly identify the problems. This book has given me a lot to think about; and, I think I'm much closer now to being able to cleanly separate concerns within my application (even without a true "Domain Model").

Another hidden shame that this book washed away was my choice in database Primary Keys (PKey). When it comes to database PKeys, I generally use an auto-incrementing integer (or a UUID - Universally Unique Identifier - if the IDs are client-provided). And, to this end, I've often been made to feel guilty that I don't try to use some sort of "meaningful" PKey like an email address or a phone number - something that has significant meaning within the problem domain. Fowler, however, argues that, in all but the simplest cases, "meaningful" keys should be avoided:

The first concern is whether to use meaningful or meaningless keys. A meaningful key is something like a US Social Security number identifying a person. A meaningless key is essentially a random number the database dreams up that's never intended for human use. The danger with a meaningful key is that, while in theory they make good keys, in practice they don't. To work at all, keys need to be unique; to work well, they need to be immutable. While assigned numbers are supposed to be unique and immutable, human error often makes them neither. If you mistype my SSN for my wife's the resulting record is neither unique nor immutable (assuming you would like to fix the mistake). The database should detect the uniqueness problem, but it can only do that after my record goes into the system, and of course that might not happen until after the mistake. As a result, meaningful keys should be distrusted. For small systems and/or very stable cases you may get away with it, but usually you should take a rare stand on the side of meaninglessness. (Kindle, Location 4226)
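In practice, I take this to mean something like the following sketch - my own made-up table, not Fowler's - where the primary key is a meaningless surrogate and the meaningful identifier (the SSN) is just a uniquely-indexed attribute that can be corrected later without disturbing any foreign keys:

```typescript
// Sketch of the "meaningless key" idea: the primary key is a surrogate that
// nothing outside the database ever cares about, while the natural identifier
// (the SSN) is just an attribute with a unique constraint - it can be fixed
// later without touching any foreign keys. Table and column names are illustrative.
import { Pool } from "pg";

export async function createPersonTable(db: Pool): Promise<void> {
	await db.query(`
		CREATE TABLE person (
			id   BIGSERIAL PRIMARY KEY,    -- meaningless, immutable key
			ssn  CHAR(11) NOT NULL UNIQUE, -- meaningful, but correctable
			name VARCHAR(100) NOT NULL
		)
	`);
}
```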

I really appreciate how pragmatic Martin Fowler is. It's not all just ivory tower theory - it's value-add solutions. In addition to learning about programming, I came away from this book just feeling better about myself.

But, I don't want you to get the wrong idea - this book wasn't one giant therapy session for me; a good deal of it went over my head. And a good deal of it was new and fascinating. Of particular note, I really enjoyed the chapter on Locking. In the past, I've used the "Pessimistic Offline Lock" - locking early and often; but, I've never used an "Optimistic Offline Lock," which leans on versioning to catch conflicts.

With an RDBMS data store the verification is a matter of adding the version number to the criteria of any SQL statements used to update or delete a record. A single SQL statement can both acquire the lock and update the record data. The final step is for the business transaction to inspect the row count returned by the SQL execution. A row count of 1 indicates success; 0 indicates that the record has been changed or deleted. With a row count of 0 the business transaction must rollback the system transaction to prevent any changes from entering the record data. At this point the business transaction must either abort or attempt to resolve the conflict and retry. (Kindle, Location 7899)

.... As with all locking schemas, Optimistic Offline Lock by itself doesn't provide adequate solutions to some of the trickier concurrency and temporal issues in a business application. I can't stress enough that in a business application concurrency management is as much a domain issue as it is a technical one. (Kindle, Location 7930)

In retrospect, I suppose that's what PouchDB / CouchDB is using, insofar as you can't update or delete a PouchDB object without providing the version that you want to act on. But, that choice was foisted upon me by the persistence library - optimistic offline locking is not something that I've ever explicitly built into a database application. That said, I'm very eager to try it out!
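Based on Fowler's description, here's roughly what I imagine an Optimistic Offline Lock looking like in one of my own applications - this is just a hypothetical sketch (the table, column, and error names are mine), not something I've actually shipped:

```typescript
// A minimal sketch of the Optimistic Offline Lock described above: the UPDATE
// only succeeds if the row still has the version that the business transaction
// originally read; a row count of zero means someone else got there first.
// Table, column, and error names are illustrative.
import { Pool } from "pg";

export class ConcurrencyError extends Error {}

export async function saveDocument(
	db: Pool,
	id: number,
	expectedVersion: number,
	title: string
): Promise<void> {
	const result = await db.query(
		`UPDATE document
			SET title = $1, version = version + 1
			WHERE id = $2 AND version = $3`,
		[title, id, expectedVersion]
	);

	// rowCount of 1 means we acquired the lock and updated the record;
	// 0 means the record was changed (or deleted) since we read it.
	if (result.rowCount !== 1) {
		throw new ConcurrencyError(
			`Document ${id} was modified by another transaction.`
		);
	}
}
```

The calling code would read the record (and its version) up front, let the user make their changes, and then pass that original version back in; a conflict means the business transaction has to abort or resolve the conflict and retry.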

Published 15 years ago, Patterns Of Enterprise Application Architecture is in some ways dated; but, in many more ways it's just as relevant today as it was when it was released. We're still dealing with business problems; we're still dealing with large, sprawling applications; we're still dealing with data persistence; and we're still dealing with high-concurrency workflows. We're still dealing with many of the problems that this book seeks to address. And, while I'm still an object-oriented novice, many of the patterns in this book gave me a lot to think about.

Reader Comments

Jean

Your technical book reviews are so rich with essential goodness you should bottle and sell them. Like most developers, although I know this kind of book would be beneficial to read (I mean it's Fowler...) I find it really hard to follow along and when I buy one, I fall off the wagon by chapter 3... Thank you for taking the time to not only read these books but also sharing your point of view on them. It helps me quickly understand what the critical takeaways are and whether buying it is worth it (I hate spending $50 on paperweights).

As always, you da man and keep up the good work!

Charles

I really appreciate your honesty. When I read (mostly read, sometimes just skimmed) that book, much went over my head, too. But rarely do you see a developer admit as much in a public forum. It's very refreshing.

Also, I have long been a proponent of non-natural (meaningless) keys for RDBMS records, and even in very simple systems I will advocate for their use.

I've seen too many systems that experienced problems when the application became more complex over time (which they almost always tend to do). And making changes at that most fundamental level of the application after it is in production (or late in the development cycle) almost always leads to a lot of pain and a lot of cost.

Thanks!

John

Thank you for the book review.
There's some history on the whole meaningful/meaningless key concept that Fowler leaves out. Back in my database design class (late 80's), there was still an argument between the natural key vs meaningless key models. As Fowler points out -- there are issues with any value that _you don't control_. And, while it took some time for the industry to conclude this, I think that's the critical difference. When you are not the single point of truth, then mutability will follow and all CRUD operations have to be supported.
There are some other things that helped -- the concept of a "clustered" index, for example, and covering keys.
Some books and concepts are timeless. I read this book many years ago and your review has motivated me to pick it up again and re-read it.
Thanks :-)

Ben Nadel

@Jean,

Much appreciated sir! Well, hopefully I can continue to pass on some of the little tidbits that I'm actually able to understand :D And trust me, these books are a beast. This one was like 500 pages. And there's another book in the same series on "Enterprise Integration Patterns" and it's like 700 pages!!!! Insanity :D

Ben Nadel

@Charles,

Thank you for the kind words. I earnestly want to understand all this stuff. But, it just doesn't come easy. I don't think I'll ever really have a solid understanding of OOP principles unless I go work at a company that truly integrates it into their workflow. Otherwise, I'm just tinkering and it never seems to make sense.

Re: DB keys, what's your take on UUIDs? I know some people say that they should be used everywhere. But, I have trouble breaking away from the Integer :D

Ben Nadel

@John, @Charles,

... and, sometimes you end up having duplicate data because of contextual state. Like, take a "soft delete" where a table has an isDeleted or isArchived column. Suddenly, that meaningful key can't be "unique" because there might be a deleted one. To which, people will say, No problem, make the key:

KEY (SSN, isDeleted)

... to which you can say, Yeah, but what happens when the 2nd one is deleted and you now have two soft-deleted SSN values in the same table.

As an aside, though, I'm starting to like the idea of actually moving "soft delete" items to an "archive" table, like "user" and "user_archive". Then, you could have more constrained keys, if you so wanted. But, I still like the meaningless key, regardless.

Charles

@Ben,

If I have anything to say about it, we use an auto-generated integer key (Identity key in SQL Server). Simple and to the point, and easy to type when doing development and testing. I dislike using UUIDs (or GUIDs as I call them) for database keys. They are too long, too unwieldy, and take up too much space.

But that's my opinion, and I know there are those who prefer GUIDs over integers.

And I concur with your statements regarding natural keys, especially multiple-column keys. They have always led to problems, or at least made things more difficult, on projects that I have been a part of where those are used.

Ben Nadel

@Charles,

The one thing that we (at work) are starting to consider about UUID-based keys is that we are now building an infrastructure that has shared-nothing stacks. Meaning, we have complete copies of the stack for different sets of users (usually based on contract stuff). Part of that involves migrating those users from the main stack over to the isolated stack. As part of this migration, we have to generate new IDs for _all the things_ in case we ever need to migrate _back_ to the main stack. Because we use Integers, this is a massive pain, keeping all the new IDs and cross-entity relationships intact. However, if they were UUIDs, it would be easier since we could theoretically always move the data without messing with the IDs.

Not to say that this is reason enough to use UUIDs from the start; but, in the future, this is the kind of activity we'll have to take into account -- for this application specifically.

Ben Nadel

@Mark,

Thanks for the link, I'll take a look. I'm hoping to tackle the "Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions" book next (if I can man-up for another 700-page book). But, I'll take a look at the SOA one as well.

Charles

@Ben,

That is absolutely a situation where UUIDs would have made perfect sense. In the world that I program in, though, that kind of thing has never happened, and I've been doing business application development for nearly thirty years.

This might be the future of applications, though, so I would certainly want to think about these kinds of scenarios when gathering requirements for a new system.

Thanks

Ben Nadel

@Charles,

To be fair, this is the first time I've ever had to migrate data back and forth between systems. And this system is like 6 years old at this point. And the migrating "back" portion of this is mostly theoretical :D

I believe in love. I believe in compassion. I believe in human rights. I believe that we can afford to give more of these gifts to the world around us because it costs us nothing to be decent and kind and understanding. And, I want you to know that when you land on this site, you are accepted for who you are, no matter how you identify, what truths you live, or whatever kind of goofy shit makes you feel alive! Rock on with your bad self!
Ben Nadel