I’ve been on a bit of a kick lately, rethinking how I build applications, particularly with the aim of reducing complexity. One thing that has been readily apparent (even in the bad old days when we called it service oriented architecture) is how breaking a system into small, discrete pieces that each do a single thing well makes the code much easier to maintain and reason about. This approach works really well for scaling, for deployment, and for quickly building out a piece of functionality without entangling it with other systems, and it matches the way I typically think about code these days. Breaking the system into smaller pieces also allows me to optimize my technology choices for the problem I’m trying to solve at a much more granular level, which often makes things simpler (in practice, I still typically stick with just C# code, but the ability to throw Node, Ruby, or Python into the mix can be handy at times).
However, upon a little further consideration, I realized that while the microservice/SOA architecture (hereafter just “microservice”, because SOA was a bit more painful at times owing to less-developed tooling) offers a number of very compelling advantages with regard to the structure of my C# code, it fundamentally breaks the way I work with databases. A DBA friend of mine actually pointed this out to me a few years ago, but I hadn’t quite made the cognitive leap to understand it. The typical assumption in many modern frameworks is that the database is part of the application; that is, it is owned by (and consequently only accessed by) your application. Unless your app is quite small, this is actually not the case, and it creates a fundamental problem in the design of microservices, especially when multiple services access the same database (more on why this is a problem in a minute). It can even be problematic when the database is accessed by other systems “owned” by the DBAs, such as data warehouses and the like, as the same problems still apply.
This has been a gradual transition for me, one for which I don’t think I’ve completely teased out all the implications. Over time, though, a better way to look at the database has emerged that seems to make a lot more sense. If the database in a microservices architecture is accessed by multiple applications (as opposed to being a transient data store for a single application), it may make more sense to treat it as a discrete service in its own right rather than as a second-class citizen. I would argue that a second-class citizen is exactly how we often treat the database, and I’d go further and say that this treatment creates a lot more pain than is absolutely necessary. I’m beginning to envision data storage in an application as simply yet another service in the architecture, written in a language (probably SQL) that suits the goals of the system. This unpacking into a service, while it seems like a simple semantic change, is actually deeper and more profound because of the requirements placed on a service so that it can participate in a larger system. Below are typical expectations of microservices, and how I think those expectations apply to a database… er… data storage service. I’m purposefully staying at the level of interaction between components, rather than underlying concerns such as the network, VM setup, containers, and the like. It’s not that I don’t care; it’s just that those things are somewhat driven by the way we treat our microservices as a whole. Further, with all the changes in Azure and other platforms of late, I simply haven’t caught up yet (seriously, Docker is in the mix now? Microsoft is doing amazing things that I never thought I’d see):
- Granularity of scaling. We typically do a fair job of scaling databases (or our apps die and we get the opportunity to peddle our services on the street), so in this particular case, many implementations seem up to the job. Most companies have at least a vague awareness that uptime and scaling matter, and can manage scaling (up to a point) without falling over and dying. Past that point, though, many cannot, and the option of actually breaking the database into separate pieces that can scale independently is rarely discussed (usually because it’s a pain, which is often because the possibility wasn’t considered early on).
- Service interface contracts and deployment. I believe this is the main area where our industry handles database interactions badly. With the recent prevalence of ORMs, we see a lot of application code with deep connections into the inner structure of databases, with little protecting the stability of the database beyond whatever the DBA can hack together to keep app developers from breaking things. When multiple applications access the same database, further shenanigans ensue. When an application needs a schema change for a new version and all the apps touching the database reach deep into its structure, deployment becomes painful. For instance, if Application A needs a required column added to a particular table (assuming values are defaulted), and Application B also uses the table, but does so in a dumb way (hello SELECT * and auto-generated gridview columns), the update for Application A might well go through without problems while causing a surprise issue in Application B (a sketch of this failure mode follows this list). These issues could range in severity from simple cosmetics to system failure, depending on the quality of the code in both applications. Careless use of ORMs is a great way to end up here, as are many lightweight approaches that directly build SQL to query the database.
- Disrespect of service interface contracts can mean that deploying a simple change for a single application requires deploying EVERY application that touches that database, and if such coding practices are widespread across a company, the blast radius grows from there. That means additional load for testing teams, more people on call in case the system falls over, and it can even force an organization to take a system offline in order to do updates (which, given that you have likely otherwise built a large, fault-tolerant system if you are heavily using microservices, is more than a bit of a bummer). When regulatory concerns such as HIPAA, Sarbanes-Oxley, PCI, and other laws (and these are just the US problem children; there are more overseas) get into the mix, you are potentially adding hours of work that provides no actual value, simply to get a small change out the door and into production.
- Applications with deep knowledge of the internal database structure also have a way of hog-tying even the best DBAs. When applications have direct access to tables (especially write access, but read access is bad enough), any change a DBA wants to make suddenly requires the input and participation of developers for EVERY single component that touches that table. As application developers, we wouldn’t want the DBA reaching into the innards of one of our web apps and making it impossible to refactor; it would probably be wise to extend the same courtesy to whoever is managing the database. Direct access also complicates logging and many other concerns that may be important for compliance purposes, and which are best managed as close as possible to the data to which they pertain.
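To make the Application A/Application B scenario above concrete, here is a minimal sketch (the table and column names are hypothetical) of the kind of code in Application B that turns someone else’s schema change into a production surprise:

```csharp
// Hypothetical Application B code reading a shared Orders table.
// Because it uses SELECT * and positional column access, a change to the
// table's column count or order (say, Application A adding a new defaulted
// column) silently changes what these ordinals mean.
using System;
using System.Data.SqlClient;

public static class OrderReader
{
    public static void PrintOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT * FROM dbo.Orders", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Brittle: assumes column 1 is CustomerName and column 2 is Total.
                    Console.WriteLine(reader.GetString(1) + ": " + reader.GetDecimal(2));
                }
            }
        }
    }
}
```

Naming the columns explicitly (or reading them by name) would survive the added column; the point is that nothing in the database forces Application B to be written that way.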
Now to the fun part. Given the problems above with a shared database in a microservices architecture, what are the solutions? Well, that’s where things get rather uncomfortable. There isn’t a really good answer that any application developer is going to like. I’ve pondered the following:
- Get rid of ORMs and only talk to the database via stored procedures and views, called explicitly through a lightweight interface in the same manner you would work with an external RESTful API. This is problematic for numerous reasons, not just because application developers love their ORMs. It also means you must be far more intentional about the code you write and how it will interact with the database. Many developers don’t like this, because it does hamper productivity and it’s difficult to point to a gain worth the cost. It does, however, give the DBA a layer of abstraction to work with when optimizing the database, so it might well be worth the effort. I’m currently following this approach on the main contract I’m working on, using T4 templates to build a lightweight interface to the database that reduces the pain of classic ADO.NET calls (a sketch of what such a wrapper looks like follows this list).
- Add one or more services in a layer around the database proper. These services abstract the calls to the database, reducing the risk of a database change by limiting how far out into the system the change can reach. This also brings things like caching, connection throttling, and the like into the picture, as well as completely hiding the location of the data from most application developers. These smaller services can offer a set of canned queries to the applications that consume them, while forcing ad-hoc queries onto a data warehouse or other system (this could also be accomplished transparently within the services, giving the DBA even more flexibility). Combined with API versioning (see the contract sketch after this list), this sort of setup could allow a much easier transition to newer and better database designs as the application evolves, without excessively involving other services in the system (which means faster iteration).
- Continue using an ORM, but use clever mapping techniques so as to only talk to views and stored procedures where possible (a sketch of this follows the list as well). This can be done to some degree with Entity Framework and many others, but it is a bit of a pain. From personal experience, I can tell you that the metadata available to cleanly wrap certain types of database objects is a bit… lacking. Particularly when trying to generate code rather than build it by hand, SQL Server is a bit of a nuisance when you need very basic information (for instance, I’ve not found a good query to tell me whether a parameter has a default value or not, which would be really handy to know when generating a wrapper around a stored procedure). ORMs can add additional problems, as they frequently only work with a subset of the tools a particular database engine provides, preferring instead to work with a broad range of systems. While this can help when you want to switch database engines, how many people actually do that (versus how many could profit from a vendor-specific feature that drastically improves performance)?
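To give a feel for the first option, here is a minimal sketch of the kind of thin stored-procedure wrapper I have in mind, whether written by hand or generated from a T4 template; the procedure name, parameters, and result shape here are made up:

```csharp
// A minimal sketch of a thin stored-procedure wrapper over classic ADO.NET.
// The procedure name, parameters, and result shape are hypothetical.
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class OrderSummary
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

public class CustomerOrdersQuery
{
    private readonly string _connectionString;

    public CustomerOrdersQuery(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IList<OrderSummary> Execute(int customerId)
    {
        var results = new List<OrderSummary>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("dbo.GetCustomerOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    results.Add(new OrderSummary
                    {
                        OrderId = (int)reader["OrderId"],
                        Total = (decimal)reader["Total"]
                    });
                }
            }
        }
        return results;
    }
}
```

The application only ever sees dbo.GetCustomerOrders; as long as the procedure keeps its signature, the DBA is free to rework whatever tables and indexes sit behind it.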
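For the second option, the key point is that consuming applications see only a small, versioned contract of canned queries rather than the database itself. A sketch of what such a contract might look like (the names are illustrative, and the transport behind it could be HTTP, a message queue, or anything else):

```csharp
using System.Collections.Generic;

// Hypothetical contract exposed by a small data service sitting in front of
// the database. Applications consume this interface (or its HTTP equivalent,
// e.g. GET /v1/customers/{id}/orders). Only the data service itself knows
// about tables, views, and stored procedures, so the DBA can reshape those
// freely, and a v2 contract can be introduced alongside v1 when the shape
// of the data needs to change.
public class OrderSummaryV1
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderDataServiceV1
{
    OrderSummaryV1 GetOrder(int orderId);
    IList<OrderSummaryV1> GetOrdersForCustomer(int customerId, int page, int pageSize);
}
```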
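And for the third option, the “clever mapping” mostly amounts to pointing the ORM at views (for reads) and stored procedures (for writes) instead of the underlying tables. Here is a sketch of the read side with Entity Framework code-first, treating a view like a table for query purposes; the view name and columns are made up:

```csharp
// Sketch of mapping an Entity Framework (EF6, code-first) entity onto a
// database view for read-only queries. The view and its columns are
// hypothetical; writes would be routed through stored procedures or a
// separate write model, since the view itself is treated as read-only.
using System.Data.Entity;
using System.Linq;

public class OrderSummaryRow
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class ReportingContext : DbContext
{
    public DbSet<OrderSummaryRow> OrderSummaries { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // EF will happily query a view as if it were a table.
        modelBuilder.Entity<OrderSummaryRow>()
            .HasKey(o => o.OrderId)
            .ToTable("vw_OrderSummaries");
    }
}

// Usage: new ReportingContext().OrderSummaries.Where(o => o.Total > 100m).ToList();
```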
Here’s the problem, though: I don’t have an answer. The items above will all either hurt performance, hurt developer productivity, or require a deep dive into things that aren’t exactly the primary concern of your application. I suspect there is a better way, but I’m not really sure what it is. What is clear is that the way I’ve been doing things is not optimal. Please leave a suggestion in the comments if you have one – I’d love to hear from someone with a better idea. This is one of those blog posts where I don’t tell you what I know; rather, I tell you what I don’t.