
How close to the data do you like to have your business logic operations?


I was discussing code design in business web apps with a friend of mine, and when we got to the topic of the logic tier (or business logic), there was a big gap between our preferences on where to implement it:

  • He liked controllers and kept his logic there, using the database as storage only.

  • I, on the other hand, have been coding the logic in stored procedures for as long as I can remember, so my controllers have been mostly thin.

My question is, is this a good practice? Are stored procedures an old thingy that just keeps giving us trouble?

I personally think that having logic in code is nice for testing, but when it comes to production data, it might have some performance issues.

Thoughts?

 

For a small in-house app that requires login, you can even rely on client-side validation. If users want to put in garbage by composing invalid requests, they'll get what they asked for.

Of course, most applications are not like that.

Some middle ground between the two scenarios you described would be to create Command objects or Services. In the controller you make it such that you can easily translate user input to command input, safely. Validation goes into the methods of your Service or your Command. You can return a Result object indicating success or any validation errors. Or throw exceptions if you so prefer.

Now your validation resides in the code, where you have better tooling to validate. But also it doesn't clutter the controller.
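A minimal sketch of that Command + Service + Result shape. The names (`CreateCustomerCommand`, `CustomerService`) are made up for illustration, and it's shown in Python to keep it language-neutral:

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    """Outcome of a command: success flag derived from validation errors."""
    errors: list[str] = field(default_factory=list)

    @property
    def ok(self) -> bool:
        return not self.errors

@dataclass
class CreateCustomerCommand:
    name: str
    email: str

class CustomerService:
    def create_customer(self, cmd: CreateCustomerCommand) -> Result:
        # Validation lives in the service, not the controller.
        errors = []
        if not cmd.name.strip():
            errors.append("name is required")
        if "@" not in cmd.email:
            errors.append("email is invalid")
        if errors:
            return Result(errors)
        # ... persist the customer here ...
        return Result()

# The controller's only job is translating raw input into a command:
service = CustomerService()
result = service.create_customer(CreateCustomerCommand(name="", email="nope"))
print(result.ok, result.errors)  # False ['name is required', 'email is invalid']
```

The controller stays thin: it builds the command from request input and maps the Result back to an HTTP response.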

To be honest I still struggle at times not to duplicate validation. I want it client side, but it should also be enforced in the domain. If anyone knows how to do a decent job avoiding that I'd be interested to learn.

 

I typically resign myself to duplicating validation. I think this is ok (but annoying), because really validation is for a different purpose. UI validation is to help the user make the right choices. API validation is to make sure that requests off the wire make sense. They are often so close that literally the same validation could be used, especially in CRUD scenarios. But I do sometimes find variances where if I had applied DRY it would be harder than just validating separately. It is a fine line. :/

 

Good to hear it's not uncommon to have duplicate validation :) Thanks for sharing your insight.

 

UI validation does not cover all the validations necessary, so it's pretty common to have them "redundantly" through deeper layers.
Example: "name" is mandatory, and the UI validates it to give the user quick feedback about the small mistake of leaving it empty, but the API must have this validation as well. But then, suddenly, a wild mobile app appears, and you need to implement this validation again for offline use!

Yep, it's one of those areas where there are significant trade-offs to set it up in a unified manner. It will be worth it in some cases (especially where you have a lot of different clients) but not in others.

 

Agreed! And validation gets harder when your client has offline capabilities D:

 

Offline capabilities... May the force be with you :)

 

I don't know what language and platform you use, but ASP.NET has a nice solution for that. They call it "data annotation validations". You simply put the rules in a ViewModel, and they will be checked both client side and server side. This way, you can simplify jquery.validate (client side) and, in your controller, check them with ModelState.IsValid (server side).

msdn.microsoft.com/en-us/library/e...
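The idea behind data annotations — declare the rules once and drive both client- and server-side checks from the same declaration — can be sketched outside .NET too. A minimal illustration (in Python rather than C#; the rule table and `validate` helper are made up, not the actual framework API):

```python
# Rules declared once, in data. In ASP.NET these would be attributes
# like [Required] and [StringLength] on the ViewModel; the same table
# could be serialized to the client for UI-side checks.
RULES = {
    "name": {"required": True, "max_length": 50},
    "email": {"required": True, "max_length": 100},
}

def validate(model: dict) -> list[str]:
    """Generic validator driven entirely by the declared rules."""
    errors = []
    for field_name, rule in RULES.items():
        value = model.get(field_name, "")
        if rule.get("required") and not value:
            errors.append(f"{field_name} is required")
        elif len(value) > rule.get("max_length", float("inf")):
            errors.append(f"{field_name} is too long")
    return errors

print(validate({"name": "", "email": "a@b.com"}))  # ['name is required']
```

The point is that neither the UI nor the server hard-codes the rules; both interpret one declaration, which is what keeps the duplication manageable.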

 

Indeed, but this is more of a backup for when the client has disabled JS.
What if you want to use Angular + a .NET API? Either way, you need to duplicate validations. :)

It's really hard to do a 100% business app, beyond CRUD, using plain ASP.NET MVC.

I second that. Data annotation validations work well for a CRUD type of feature, less so for relational constraints.

I'm working in a domain where computers are leased to customers. In order to record a dispatch, many fields have to be filled in. But there are rules about field values that limit the valid options for other fields. I implemented just the basic ones.

The real challenge would be to formalize the rules in such a way that you can get the most efficient decision tree in the UI while also being able to validate command input :)

 

Thanks for the advice, it's ASP.Net MVC indeed :)

 

Ross Tuck - Models and Service Layers; Hemoglobin and Hobgoblins (youtube.com/watch?v=ajhqScWECMo)

 

In my experience, logic in stored procedures is disastrous in the long term. Stored procedure programming is more limited and very side-effectful. It naturally tends to become a spaghetti mess as time goes on. Using logic in SPs also split-brains the logic, because there are some things which are really difficult/awkward for procedures to do but are easy in code. So you put that part in code, and have hidden complexity and dependency between the steps in code and those in the SP. SPs are generally a pain to keep in version control. You also don't have good organizational tools at your disposal for SPs. Common practice is segmented naming like "customer_create", "customer_update". Whereas in code you have more organizational tools like folders, namespaces, and classes.

Things I might use SPs for: required combinations of updates, complex queries which need temp tables, etc.

After writing this, I realized you didn't specify which database you were using. Some of them have access to full programming languages for stored procedures. However, I'd still recommend against this practice for most cases.

Side note, controllers are not the best place for logic either. :) Controllers basically organize side effects. Logic is best in its own module or library and kept as free as possible from side effects. After all, the business logic is the most important thing to the business, so it merits its own independent representation, not mixed in with all the incidental integrations required to accomplish the use case. It's a little extra work up front to separate it out, but its extra work I always end up doing anyway.

 

agree 100% - was going to write something, but you've already put it perfectly :)
Those separations of concerns are critical, otherwise you're well on your way to a monolith

 

Thanks! Separation of concerns is one of the largest factors in maintainability IMO. So much so that nowadays I will accept some level of duplication (mainly of data and data structures) across different subsystems in order to remove dependencies between them. It is quite freeing to be able to change a data structure and not worry about breaking somebody else.

Monoliths are not so bad. They are one way to package a system. Whether to choose monolith vs microservices vs something-in-between mainly depends on your organizational structure and problem domain.

Agree on the monolith - knew it would come back to bite me :) Horses for courses definitely, but I've found monoliths always outgrow their original purpose, and end up adopting different business needs in new and interesting ways, when separating them out would have made things easier in the long run.

Yeah, I think you are spot on there. Many successful apps started monolithic and as the business became successful and grew, that model no longer suited their needs. Here's a good talk in that vein.

youtube.com/watch?v=9vS7TbgirgY

 

I agree with you, especially on the source control part. The issue here is that some companies already implement this kind of design, and it's very hard to change.

Then, if we're using a relational database as storage only, might it be just as good to use a very fast NoSQL database, even if you lose the relational part?

About your side-note, yes, I know that. But you know what I meant :)

 

Yes, it is really common to see multiple legacy apps share the same database and use the shared data for different purposes. (This is called an Integration Database.) It creates a really unfortunate situation where it becomes increasingly difficult to make changes for fear of breaking one of the apps. In a system like this, new functionality usually means new tables because the risk of changing existing data is too great.

My solution for that is typically to insert an API (could be a service or a library) in between the database and the apps, and gradually migrate the apps to use the API. At least in the API you can have a better overview for how all the data is modified. Once apps are no longer accessing the database directly, you can make structural changes to the database with a decreased risk of breakage. As long as the API contract with the app remains the same.
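That migration can be sketched as a thin data-access API that apps switch to instead of querying shared tables directly. A minimal sketch (table, column, and class names are hypothetical; sqlite3 stands in for the real database):

```python
import sqlite3

class CustomerApi:
    """Facade between apps and the shared database: callers depend on
    this contract, never on the table layout underneath."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn

    def get_email(self, customer_id: int):
        # If the underlying schema changes (say the column is renamed),
        # only this method changes -- callers keep the same contract.
        row = self.conn.execute(
            "SELECT email FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'ada@example.com')")
api = CustomerApi(conn)
print(api.get_email(1))  # ada@example.com
```

Once every app goes through the facade, restructuring the tables is a change to one module rather than to every consumer.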

Stored procedure isn't a purely relational database feature. So skipping them doesn't imply you should use NoSQL. Relational still does set-based constraints (unique constraints, set membership) really well whereas NoSQL databases are generally not good at those. The place where NoSQL really shines from a coding perspective is when you have a hierarchical object. Splitting it into a lot of child relational tables and reconstructing that data back to objects is very painful and is the main reason ORMs exist. But it's super easy in NoSQL because you just load/store the whole thing as one object. Many relational DBs can be used as key-value storage too nowadays (e.g. Postgres has json/jsonb types, as well as hstore). The other main reason to use NoSQL is for scaling out. Most relational databases will only scale up. There are also certain types of NoSQL database better for certain jobs, like graph databases.
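To make the hierarchical-object point concrete, here is a tiny sketch: the whole aggregate round-trips as one document, the way a jsonb column or a document store would hold it, with no child tables or ORM reassembly involved:

```python
import json

# A hierarchical aggregate that would need several child tables
# (customers, order_lines, ...) in a normalized relational schema.
order = {
    "id": 42,
    "customer": {"name": "Ada", "email": "ada@example.com"},
    "lines": [
        {"sku": "KB-1", "qty": 2},
        {"sku": "MS-9", "qty": 1},
    ],
}

# Document-style storage: serialize and load the whole object at once.
stored = json.dumps(order)
loaded = json.loads(stored)
print(loaded["lines"][0]["sku"])  # KB-1
```

The trade-off is exactly as described above: loading and storing is trivial, but set-based constraints (uniqueness, membership) across those nested pieces are no longer the database's job.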

I wasn't trying to be rude with my side note. I literally wrote business logic in controllers and code-behind pages for many years. Over time it became really hard to untangle the business code from all the integration code. I still face this problem with some legacy apps that haven't justified a rewrite yet.

Yes, we are planning to do exactly that next year. It'll require lots of work, mind you: it's about 300 tables and 500 stored procedures.
My plan is similar to yours, which relaxes me a bit, thinking it's the way to go :)

I never found your side note rude in any way, quite the opposite.
You know, maybe someone who is doing that might read it and think about it. :) Side notes with good knowledge are a must in these kinds of conversations.

 

It depends absolutely on the type of project, the size, the technologies you use, the type of organization and, of course, the preferences of the programmers. I've been in both situations, and I will try to give you some advice:

Benefits of logic in DB:

  • Stored procedures are a lot faster, especially if you need to run many queries in a single action. If you are dealing with a controller and other layers, there is latency between the database and the web server itself. Things get worse if the web server and database are in different locations.
  • Better management of transactions. You write the BEGIN, the END, and maybe the COMMIT statements, and that's it. No need for a unit of work or other strange patterns that may confuse you about the scope.
  • In some organizations, it's faster to compile a PL/SQL package than to ask for a deployment of a .dll (or whatever type of component your web server uses).
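The transaction point above can be illustrated even outside PL/SQL. A sketch using Python's sqlite3 as a stand-in: a required pair of updates either both commit or both roll back, which is the same BEGIN/COMMIT discipline, just written in application code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Both updates succeed or neither does: sqlite3 opens an implicit
    transaction on the first UPDATE, and we commit or roll back as one."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()
        raise

transfer(conn, "alice", "bob", 30)
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('alice', 70), ('bob', 30)]
```

Inside a stored procedure the same guarantee comes for free from the surrounding BEGIN/COMMIT block; in application code you end up writing (or generating) this try/commit/rollback scaffolding yourself.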

Cons:

  • You lose object-oriented design (I mean, you have tables if you are working with a SQL database, but that's not the point of OOP).
  • You lose the single responsibility principle.
  • You are much more tied to the database technology. If tomorrow you must migrate from Oracle to MSSQL, it will be much more difficult if all your logic is in the database. If the logic were in the controller/BLL layer, you would only need to change the DB connector and that's it.
  • Generally, it's a lot more difficult to unit test stored procedures than a .dll. There are fewer tools, and the tests almost always depend on the data itself, which is not good.

As an anecdote, I'm currently working on access control software, which operates with an electronic device that validates the user's RFID card and checks whether it's authorized to open the door. We tried everything, but the only solution we found was to use a stored procedure. When you swipe the card, the device triggers a query, and if the query returns "true", the door opens. There was no way to do that on the web server side, so a stored procedure was the only way to go. We changed the default "select ..." query to "call sp_validateaccess(param1, param2, etc.)", and it works pretty well.

 

Could not agree more. If you check this post, almost all replies agree on the same ideas, pros and cons, yet there is so much variety out there! I think what happens is that "real life problems" hit you so hard that you need to throw away all your principles and just figure out a way to solve them as fast and efficiently as possible.

PS: that door probably opens really fast, thanks to the SPs!

 

I still deeply subscribe to the idea of MVC and OO, and I don't use stored procedures at all. All my business logic is on the model level. Even using Elixir now, we have a similar architecture where our business logic stays on the schema level. Our resolvers (GQL) take care of authorization, data entry, notifications, emails, etc. Validation is done in the UI, through GQL schemas, in Elixir and then in the DB for the most basic type of validation.
