Tuesday, 21 July 2015

Using the Model-Store pattern in Inversion (part 5)

This post is the fifth in a series about using a Model-Store pattern in the Inversion behavioural micro-framework for .Net (see: https://github.com/guy-murphy/inversion-dev).
The first part can be found here.
The second part can be found here.
The third part can be found here.
The fourth part can be found here.


Looking back, I might as well have titled this essay - "I hate ORMs".

Firstly, because I really, really do - I think they are at the root of many evils in .Net software architecture. That's not to say that it's the ORM's fault, because it really isn't. However, time after time we are told to "just use" Entity Framework or NHibernate because that's what Microsoft or the big boys have blogged about, and time after time I see people getting into the same mess: a half-cocked Repository pattern, absurdly heavyweight models and bloody DTOs springing up all over the place. What I want is a clean, quick, safe and lightweight pattern of data access, and an ORM has never convinced me that it is any of these things; people tend to abuse the functionality it provides without thinking about the consequences, or considering that there might be other ways to approach data access.

Secondly, I didn't actually need to say that this was using Inversion, because Model-Store can be done in any .Net framework. However, it's been useful to write the foundations of the Inversion.Data namespace by considering the paths that writing this article has led me down.

The views in this set of articles almost feel heretical when looking at the current "best practice" of .Net application development. Having an immutable model POCO and a lightweight store (which eschews ORM-like functionality) shifts complexity away from the data access implementation, forcing any business logic back to the application where (in my opinion) it belongs. Data access should be dumb, MUST be dumb, if the application is to be agnostic about the storage technology to be used. This massively increases the platform and language options available for porting and the development of new features. I would counsel avoiding relying upon the features of a particular storage medium, because you never know when that medium could become suddenly unsuitable or unavailable.

Model objects are best designed very closely to the underlying table or document structures. This is for the simplicity of maintenance and the ease of introduction to new development staff. A Model should remain at a level of abstraction such that the properties would be the same across common types of database representation. Therefore, an application that uses this pattern can achieve granular control over the strategies used for reading, writing and updating individual data classes whilst also remaining agnostic to storage technology choices.
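To make that concrete, here is a minimal sketch of such a Model: an immutable object whose properties mirror the columns of a hypothetical "users" table. The class and field names are illustrative only (the post is about .Net, but the pattern is language-agnostic, so the sketch is in Java).

```java
// A minimal immutable Model, mirroring a hypothetical "users" table.
// All state is fixed at construction; there are no setters, so the
// object is safe to share across threads and caches.
final class User {
    private final String id;     // column: id
    private final String name;   // column: name
    private final String email;  // column: email

    User(String id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }

    String id() { return id; }
    String name() { return name; }
    String email() { return email; }
}
```

Because the properties stay at the level of the underlying columns, the same Model can be hydrated from a SQL row, a Redis hash or a Mongo document without change.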

When developing the MyOffers.co.uk data analytics engine, we tended to write Stores for at least two storage mediums, e.g. MS SQL Server plus Redis and/or MongoDB. This forced us into keeping data access straightforward while letting us concentrate on what the application actually needed to accomplish. These same patterns helped us again when writing a cloud-based quiz engine that would operate at scale, and the new version of the website application. It forms part of a discipline that assists what I believe to be a fast and clean way of considering modern system architecture.
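A deliberately "dumb" Store contract makes that kind of backend-swapping cheap. The sketch below (hypothetical names, in Java for illustration; these types are not part of Inversion.Data) shows a Store interface with an in-memory implementation standing in for a real backend - a SQL Server, Redis or MongoDB implementation would satisfy the same contract, and nothing upstream would change.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// An immutable Model carried by the Store.
record UserModel(String id, String name) {}

// A deliberately dumb Store contract: just the reads and writes the
// application actually needs - no change tracking, no query language.
interface UserStore {
    Optional<UserModel> get(String id);
    void put(UserModel user);
}

// An in-memory implementation stands in for a real backend
// (SQL Server, Redis, MongoDB); swapping it changes nothing upstream.
final class InMemoryUserStore implements UserStore {
    private final Map<String, UserModel> data = new HashMap<>();

    public Optional<UserModel> get(String id) {
        return Optional.ofNullable(data.get(id));
    }

    public void put(UserModel user) {
        data.put(user.id(), user);
    }
}
```

Writing the second backend is where the discipline pays off: any business logic that has leaked into the first implementation is immediately exposed, because it would have to be duplicated.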

Immutable Models and CQRS/Event-Sourcing

There are other benefits that immutable Models can bring. If you take the basic tenet that a Model is immutable, then it follows that any change required to a Model can be represented as a separate thing. This is the basis of an architecture which gives CQRS-like benefits: the application uses immutable Models in combination with read-only, optimised Model Stores, while changes are persisted in the form of Events to write-capable Stores. Those write Stores could use a completely different storage medium, optimised for writes, thus separating the reads (a Query) from the writes (a Command).

This helps support eventual consistency: any changes must be persisted explicitly, and the source that the read-optimised Store fetches from can be lazily updated by a background mechanism. With this architecture, concurrent threads are guaranteed to read the same value when referencing an object, and modifications can be made without having to worry about locking or making a local copy.


It is not necessary to use the whole of Model-Store to get these advantages. However, the Store part of the pattern provides an efficient way of packaging data access which can use the immutable Model objects, and shows us that when implementing a DAL in the .Net ecosystem there are options other than Repository patterns and ORMs like Entity Framework or NHibernate.

If you haven't agreed with me so far, then perhaps the most compelling thing I can offer is that, most of the time, an application need only read what the values of a Model object are; updates and writes are scarce by comparison. Why weigh your application down with the ability to track changes in an object when you don't use it in most operations?

If you design your application to make changes explicitly then you can take advantage of the benefits of lightweight data access.
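An explicit change on an immutable Model can be as simple as a copy method that returns a new instance (a sketch with hypothetical names, in Java): concurrent readers holding the old instance are never affected, and there is no change-tracking machinery to pay for on every read.

```java
// Hypothetical immutable Model. An update is a new instance produced
// by an explicit copy method - never a mutation of existing state.
record Offer(String id, int score) {
    Offer withScore(int newScore) {
        return new Offer(id, newScore);
    }
}
```

The new instance can then be handed to a write-capable Store (or turned into an Event), making every change a deliberate, visible act rather than a side effect.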

I've wanted to document some of my thoughts about this pattern for a while, and I do hope that some of you have stuck through to the end. There are many things that can be pointed to in this set of arguments which other developers may not agree with, and that's fine. As I said, this style of pattern may not be for you. However - I have seen it work in small, medium and production-scale systems, making them fly, whilst also providing a great deal of freedom as a developer.

At the very least - please check out the Inversion framework. Full disclosure: I'm a major contributor to it, and I am currently using it very happily in the production of a large, automatically scaling cloud-based system for a major institution, including the bits and pieces you will find in the Inversion.Data namespace.

Dream large.