Saturday, March 21, 2015

Why TDD is good for you


I must confess, I've realized that TDD is a must-have in a project. Though I don't even like TDD, right now there's no better way of developing software. Writing software is like solving a perverted traveling salesman problem where the destination keeps changing and somehow the paths you took start to make less sense as the software progresses. To know whether you ended up with the best solution, you'd have to build every different version of the solution and compare them, and that's not a viable idea.

Given that you have developers with different backgrounds, you need a shared discipline so that everyone knows what everybody else is producing, and that it will have the same structure and assumptions. And you'll have to keep at it throughout the project; if you stray off the methodology you'll end up with inconsistencies in your code. And really, in a project you don't want to spend too much time (or any) discussing technical details related to pure code.

What I don't like about TDD

One important aspect is that I think TDD gets credit for things which TDD cannot solve. One such thing is the claim that it somehow creates layers. I think this is deduced from the fact that TDD helps with creating better abstractions, and that is a really nice feature, but it's a side effect of practicing TDD. And it's not really TDD's fault for not being able to build layers, because abstractions cannot be totally abstract: if they could, that would mean you could send in any data (or none) and get back exactly the data you want, which is impossible. There's only one thing that could pull that off, and I doubt it actually exists, although there are many believers out there.

Also, TDD does different things for different languages, so you get more from TDD in certain languages while others benefit less from it (one could argue that the languages which need more TDD practice are bad languages).

TDD will influence your code and therefore your solution, and this will inevitably lead to “test induced design damage”. This means, much as Conway's law states, that your code will be shaped by TDD, and code written with TDD in mind will be easier to integrate into a TDD project. That also means that code which is NOT written according to TDD will be hard to fit into a TDD project (and no, it's not about a framework's ability to be integrated). It also means that a project which did not start as a TDD project will have a very hard time adopting the TDD practice. Most of the time TDD projects are not consistent, and this will hurt you in the end.

Also, I think TDD shows that your language is failing to describe what you really want, so you need to rely on something external to verify that you have written something which is correct according to your understanding. Instead of having TDD as some sort of “document”, I'd rather have all those assertions expressed in the code itself. I usually consider large tests a code smell, since they give away that the code either does too many things, is not expressing enough intention, or is not powerful enough.
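As a hedged sketch of what assertions expressed in the code could look like (the EmailAddress type here is made up for illustration), consider a value type whose invariant is enforced by its constructor instead of by an external test:

public final class EmailAddress {
    private final String value;

    public EmailAddress(String value) {
        // The assertion lives in the code itself; every holder of an
        // EmailAddress knows the invariant holds without re-verifying it.
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("Not an email address: " + value);
        }
        this.value = value;
    }

    public String value() { return value; }
}

A method taking an EmailAddress instead of a String no longer needs a test asserting that it rejects malformed input; the type has already said so.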

But most importantly, TDD creates a focus on tests and unit tests, where TDD is not really about those. It's about dealing with information and always conforming to that information; the test case merely verifies this, a sort of implementation of the TDD abstraction, and we should also be able to get rid of it.

If one considers that TDD is language and tech agnostic, meaning we need TDD as a framework to actually deliver working code, the amount of work needed to verify your code should say something about the chosen language. If the language requires a lot of test cases to verify that you did what you intended to do, that would mean the language is a poor choice. I'm not going to point at specific languages here; I leave this to some future discussion.

I really hope that we one day can get rid of TDD, but as for now, there are simply no better ways to write software.

Saturday, January 3, 2015

Good separation of concerns

Separation of concerns is really important when writing software. It's tightly coupled with working and correct code, and it might not be obvious at first glance. One of my personal views is that this is why untyped languages are not a good choice for anything long-term: types are crucial for good separation of concerns. I've worked in large projects with typed languages, and they showed that even with types it's hard to keep things separated. It's not impossible to achieve good separation in an untyped language, but most of the time it just breaks down. Just the fact that you need TDD to "verify" your code is a clear indicator of this.

An example of erroneous mixing of concerns:

var print_info = function(x){
    console.log('Variable x is of type "' + typeof(x) + '" and has the value "' + x + '"');
};

var x = "123"; // Variable x is of type string with value "123"

print_info(x);

x -= 0; // Variable x now has type number with value 123

print_info(x);

Output is:
Variable x is of type "string" and has the value "123"
Variable x is of type "number" and has the value "123"

The above example is really simple, but it shows an important aspect of mixing concerns; most of the time these things are more subtle and not so obvious.

There are several pitfalls when designing code, and it's hard to know whether you are making good decisions when building software. One good rule is the "Gun rule", which is quite simple:

A modern gun has very good separation of concerns (although it is a very despicable piece of technology). Most notably, the bullet is an example of excellent separation of concerns. You can manufacture bullets separately but still deliver functionality; there is even room for making modifications to the bullets without needing to change the gun. Obviously there are certain factors you can't change without changing the gun, such as size.

Another factor is that a gun is useless without a bullet and a bullet is equally useless without a gun, so in functionality they are tightly coupled. For the gun to work you need to deploy the bullet with the gun, and this is a good indicator of how they should be deployed: together. If you need different release cycles for them, you should separate them into two deployment artifacts, but they should share resources. This is really facilitated by the Java VM through dynamic class loading (a really good feature, but for some reason not very well understood); other technologies might have problems with this and might require a full restart.
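As a hedged sketch of what dynamic class loading looks like on the JVM (the jar path and class name below are made up for illustration), the "gun" can load a "bullet" artifact at runtime without a restart:

import java.net.URL;
import java.net.URLClassLoader;

public class BulletLoader {
    public static void main(String[] args) throws Exception {
        // Hypothetical separately-deployed artifact containing the bullet.
        URL bulletJar = new URL("file:bullets-1.1.jar");
        try (URLClassLoader loader = new URLClassLoader(new URL[]{ bulletJar })) {
            // Hypothetical class name; resolved at runtime, not at compile time.
            Class<?> bulletClass = loader.loadClass("com.example.StandardBullet");
            Object bullet = bulletClass.getDeclaredConstructor().newInstance();
            System.out.println("Loaded: " + bullet.getClass().getName());
        }
    }
}

The point is that the gun's code never names the bullet's class at compile time, so the bullet can be rebuilt and redeployed on its own release cycle.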

If you now equip the gun with a scope or perhaps a laser pointer, this surely makes the gun better, but it is not necessary for the operation of the gun. The gun will work with or without those additions, and they are well-separated concerns in themselves. These are candidates for deploying on their own.

One misconception is that just because you need a different release cycle, or you have identified a module with good separation of concerns, you need to deploy it on a separate instance. With the gun as an example: having the gun in one hand and the bullet in the other doesn't render it more useful or more modular, even though it may seem like a good idea and adheres to certain architectural ideas. Taken to the extreme, you should deploy each class in a separate runtime, but that doesn't make anything more modular or better.

You should look for the things which are possible to remove while still maintaining functionality. In fact, being able to remove whole blocks of functionality without impacting function is a good indicator of good separation of concerns (adding them is the same). If you have to tear something apart, that is an indicator it's not separated enough.

Another thing which is overlooked with separation of concerns is that too much effort is spent on making abstractions. Too much abstraction actually harms your separation of concerns: when everything is so abstract, you really have no idea what happens.

As an example: instead of using a specific object to "tunnel" data through layers, you decide to use a map like this:
import java.util.Map;

interface SomeInterface {
    void someMethod(Map<String, String> map);
}

This is convenient because you can now cut through anything, just because Map and String are both in a library which happens to be global. Now you can also bunch together things which, modularly, shouldn't be together, and more importantly there's nothing that stops you from adding more things which make no sense at all. Fortunately, in Java one could do this instead:
interface SomeInterface {
    void someMethod(Object map);
}
You can then cast it to the Map whenever you need the information, but that kind of defeats a lot of things. Not only do you lose the typing, you also lose the intention and the function of the data. And when you lose that information, you also lose separation of concerns, because now you don't know where your separations start and where they end.
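As a hedged sketch of the alternative this argues for (the CustomerInfo name and its fields are made up for illustration), a small purpose-built type keeps both the typing and the intention at the boundary:

// A purpose-built value type: the intention of the data is in the type itself.
public final class CustomerInfo {
    private final String name;
    private final String email;

    public CustomerInfo(String name, String email) {
        this.name = name;
        this.email = email;
    }

    public String name() { return name; }
    public String email() { return email; }
}

interface SomeInterface {
    // The signature now says exactly what crosses the boundary,
    // and nothing unrelated can be smuggled through it.
    void someMethod(CustomerInfo info);
}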

Thursday, December 11, 2014

My thoughts on the HATEOAS debate

So HATEOAS is a hot topic. The discussion is about whether it's a good way of actually consuming an API, or whether it should be the API. There are several things that concern me.

Firstly, there's no programming model for HATEOAS, such as functional programming, procedural programming or something SQL-like. It represents something for which there are no other mechanics, so why would it be suitable for APIs?

Secondly, the so-called APIs, which expose resources, require a client dynamic enough to actually understand all of that data and the simple links that describe their relations. I haven't yet seen a language which does this natively, so why would HATEOAS achieve it? SQL is pretty good at expressing relations and linking things together, yet it doesn't understand what the data is. Why would HATEOAS be so much better?

If HATEOAS is all you need to describe an API, why do I need a network to make use of it? Why isn't there a way of using it like a language? When there's a way of coding it and expressing it as logical relations, I'd say it's something worth looking at.

I think all of those who think they need HATEOAS to describe an API have no idea how to make one; it's as simple as that. Yes, you can create something which resembles an API, because you are exposing something a consumer could use for something. BUT that's still not an API, simply because an API is so much more than how you relate resources to each other.

Most of the arguments I see for HATEOAS concern the ability to extend an API without changing the client. But having to change code because you update your API is a technical problem, not an API problem.

If anyone out there can present me with a HATEOAS client which doesn't contain any code at all (!), is able to understand what it can do with the API it is exposed to, and does everything the API exposes in a comprehensible manner which is useful, then I'd start to think HATEOAS is a good thing.

Thing is, if you cannot show that you don't need a client which has to translate the resources into something useful, HATEOAS is just another way for hipsters to mess up APIs.

Some links to read more about it: Jeff Knupp, MuleSoft, API Evangelist




Sunday, November 16, 2014

Is REST the best thing there is?

This blog post is about what's important when creating an API. There are a few things in it which are sane, but there's a diagram which is, to be kind, misleading.
The diagram compares SOAP, RPC and REST like this:

SOAP
(1) Requires a SOAP library on the end of the client
(4) Not strongly supported by all languages
(7) Exposes operations/method calls
(10) Larger packets of data, XML format required
(13) All calls sent through POST
(16) Can be stateless or stateful
(19) WSDL - Web Service Definitions
(22) Most difficult for developers to use

RPC
(2) Tightly coupled
(5) Can return back any format although usually tightly coupled to the RPC type (i.e. JSON-RPC)
(8) Requires user to know procedure names
(11) Specific parameters and order
(14) Requires a separate URI/resource for each action/method
(17) Typically utilizes just GET/POST
(20) Requires extensive documentation
(23) Stateless
(24) More difficult for developers to use

REST
(3) No library support needed, typically used over HTTP
(6) Returns data without exposing methods
(9) Supports any content-type (XML and JSON used primarily)
(12) Single resource for multiple actions
(15) Typically uses explicit HTTP action verbs (CRUD)
(18) Documentation can be supplemented with hypermedia
(21) Stateless
(25) Easy for developers to get started
As for item (1), this is entirely true, but item (3) is just blatantly false; we tend to keep forgetting that you need a library to use HTTP.
Item (2) is not true: you are tightly coupled to the implementation of the RPC library (which you are in both SOAP and HTTP/REST too), so what's the difference?
(4) is valid, but a weak point. SOAP is basically XML + an XML parser + an XML validator + an HTTP client.
(5) is weird; what data isn't coupled to itself?
(6) This is the myth with REST about not exposing methods. Does it really matter what methods I use to make a call? Consider the following code:
import java.util.Map;

public class MyResource {
    public String GET(String parameter) {
        return null; // return something
    }
    public void POST(Map<String, String> data) {
        // do something with data
    }
    public void DELETE(String id) {
        // delete id
    }
    public void PUT(String id) {
        // update id
    }
    // The rest here... All pun intended :)
}
So what's the difference here? Does a method name matter that much? You still need the URL, just as you would for anything else. This also applies to items (7) and (8).
(9) I must admit is convenient with REST. It's still achievable by other means, but a lot uglier, so this I guess is a valid point. However, in a real API this is a moot point.
(10) is true, but in certain environments this is actually looser coupling than a REST call, since you don't have to know the content of the message, only that it's a SOAP envelope. The envelope creates loose coupling for a broker system, since it doesn't need to know the type of message or the endpoint. Another important thing is that using HATEOAS could actually result in more data being sent, because you have to manage the resource from the client, whereas with SOAP this is managed by the server.
(11) If this is a problem, then I guess you have larger problems.
(12) This just doesn't matter. Any large application will have loads of URLs, confusing in their own right.
(13) True, but a technical detail.
(14) Same as (12), just doesn't matter.
(15) If you build services as CRUDs then you are creating unmaintainable services/APIs.
(16) True, but a moot point.
(17) Again a moot point.
(18) Again, there are no CRUDs. Code which is based on this is automatically unmaintainable.
(19) True, but there's WADL for REST too. This is a nice feature which RPC usually lacks; however, using an interface for RPC is not bad either.
(20) Wrong. You need just as much as for a REST service, and if you don't think so, you don't know how to code. Sorry partner. Just because you use REST doesn't remove the need for context about what the resource does, what it's supposed to be, and what relations it has. If you build your application on this premise it will become buggy and error prone over time. It will also add unnecessary maintenance costs.
(21) This is just an irrelevant point.
(22) Well, with good tooling I'd say SOAP is the easiest, though I agree it can get unnecessarily complex. I also think SOAP tries to do too much which is not needed.
(23) Again an irrelevant point.
(24) Again, with good tooling this could be as hard or as easy as using SOAP.
(25) This depends on the RPC implementation. However, it's easy to hide it behind abstractions.

A protocol is NOT an API. REST is a deceptive technique. If you have a web application and you want some sort of database access without anything in between, REST could be a choice. But if you want to do anything else, use anything but REST. REST doesn't create an abstraction the way it's advertised to, because the mechanics of retrieving and sending data are now on both sides of the protocol boundary. This creates a mechanical coupling: stuff knows too much about other stuff. I bet we'll see a lot of "legacy" problems in the future where REST apps become a real problem. There are a lot of problems with the REST proclamation and its benefits. My view is that REST is the answer to another problem, one born out of the inherent inability to code correctly (and that differs by programming paradigm, language and framework).

My point is that it cannot and should not matter how you retrieve data, just that you actually did, and what that meant for that particular code. If the protocol imposes mechanics on meaning, you are mixing things which should not be mixed.

Friday, November 14, 2014

So what is "decoupling"?

A favorite argument when arguing for new technology, or when people argue over what design principles you should use, is the term "decoupling". My personal belief is that this word is extremely abused and more often than not just used to boost the credibility of your own idea. It's also used to give a technology "magic" properties, as in "if we use this, everything will be a lot easier". This is especially true for the arguments for using technologies like REST or microservices. These two definitely have their uses, but too often they are presented as some sort of silver bullet which solves everything you throw at them. And these two technologies are very often accompanied by the term "decoupling", or more correctly loose coupling. Well, how intriguing. So how do they do this?
Consider the following code:
public void process(String[] array){
   for(int i = 0 ; i < 3; i++) {
       // Do something with array[i]
   }
}

So we loop through an array which is 3 elements long. What happens if we do this?
public void process(String[] array){
   for(int i = 0 ; i < 5; i++) {
       // Do something with array[i]
   }
}

We have now changed the loop to 5 iterations. We haven't changed the method's signature, so we don't need to change the type or anything; in a REST service the URL wouldn't have changed. It still looks like the method in the previous example. However, it will fail if the client only provides an array containing three elements. So how does this relate to REST or microservices? Well, if this were a REST method, the REST technique wouldn't do shit about it. It would fail miserably, like any other protocol able to invoke a method. This change requires a change in the client, so it's not particularly "decoupled". The same is true for a microservice: all of its clients have to change. So where's the decoupling? In REST, is it the idea that you could add parameters without changing the API, using a JSON map? What's the point of that: sending in data the service doesn't use, or receiving data which is never used? Having an endpoint which swallows data that you then try to make sense of? Yay, now you've just created a ball-of-mud service by bypassing all of the abstraction mechanics you have available, because you are lazy and unprofessional.

With a microservice the argument is that just because you can deploy things separately, they are decoupled. How does that help with the above change?

There's no such thing as a free lunch when coding. The above code is extremely simple, but the length of the array is extremely important for the overall functionality of that piece of code. Actually, there's loads of information you have to deal with just when handling a string array of 3 elements. Cutting corners here will create code which is bound to be misunderstood (yes, with or without tests); it's just a matter of time. The above code has no description or information about what kind of strings are allowed as parameters. A string per se is an abstraction of information. A standalone string doesn't mean anything; it has to be put in a context which is meaningful for that particular data type. Just because it's a string, and it's "understood" by the Java runtime as a typed object, doesn't mean it's something which is relevant.
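As a hedged sketch of what describing those expectations could look like (the Coordinates name and its three fields are made up for illustration), the implicit "exactly three elements" contract can be lifted into the signature:

// A made-up type that states the expectation the raw String[] left implicit:
// the caller must supply exactly these three named values.
public final class Coordinates {
    private final String x;
    private final String y;
    private final String z;

    public Coordinates(String x, String y, String z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public String x() { return x; }
    public String y() { return y; }
    public String z() { return z; }
}

public void process(Coordinates c) {
    // Do something with c.x(), c.y() and c.z(); requiring a fourth value
    // is now a signature change the compiler forces every client to see.
}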

In today's HTTP world, where we don't have to deal with the underlying binary behavior because it's abstracted away from us by standards like UTF-8, it's really easy to miss that most of the things we are dealing with are just some sort of representation of something. Because we can read it and interpret it (as humans), we forget that whenever we're sending stuff (changing context), the information has to be checked again to make sure that the context hasn't changed. This is what we see in the above example. The context has changed: it's still the same type of data, but the requirements have changed.

Yes, the example is a simple one, but imagine a system containing millions of lines of code, and try to follow data through a system where the code does not describe its expectations of the data; it becomes extremely hard to reason about the outcome of that code. So think twice when using your damn maps. Just because you can give abstract properties to the relations of an object doesn't mean it will create better code. In fact it will create far worse code, since the intention of the data is not described. It's like converting everything into the type Object and typecasting it whenever you want the information.

Monday, November 10, 2014

The meaning of pragmatic

On the net there is an abundance of advice on how you should be doing this and that to be successful in delivering projects. You should be agile and develop agile systems, with no idea of what agile code is (no, it's not writing tests all over the place). You should adopt this or that workflow/process/mantra/technique/<insert the buzzword of the month here>, and you should use <insert the programming language of the month here> together with <insert the flavor of architecture of the month here>. Oh, shut up: pick a language and use that. Swapping between languages for solving things is not a good thing. You don't see a lot of professors of several languages for a reason; it's just too hard (of course there are a few who are good at it, but that's a few). The problem is that it's very hard to define someone as really good at a language; how do you measure it?

And also that everyone should be pragmatic by doing simple things in a simple way.
WTF...? If coding is simple and solving problems is simple, why do we have a job, and is a problem a problem if it's simple? No wonder things fail and code looks like shit. A newsflash for most people: if your code is simple, like a CRUD, you just failed programming 101. There's no such thing as a free lunch, particularly when writing code. There are frameworks which will make your life a bit easier, but most of the time they fall short. Also, if you are coding you ALWAYS have to look at the big picture (yes, TDD falls short here). If you don't, you will end up with an unmaintainable piece of crap code, because its content is just rubbish. It may look all "clean codeish" and dandy with tests and stuff, but it will still be a piece of crap, and bug ridden.

Oh hell, I just wrote my own "advice" (read: rant) on the net... So I'll give my opinion on what pragmatic means. Pragmatic is NOT: using small frameworks and simple code.
Pragmatic is: using frameworks that are known, and writing code which makes problem solving simple.

Sunday, November 2, 2014

EA is troublesome

This blog post is about 6 heresies you could commit with EA. In my opinion, this is why EA fails: it's too slow to actually do something useful. In any system where EA is needed, the details are hard to capture with EA, and its tools are too abstract. The post also says that EA practitioners usually fall in love with their framework, so much that it becomes a framework-produced model which is not modeling the system it's supposed to model, and that they should develop something that "works" for them.

It would be nice to have some framework to reason about the system; the problem is that the functionality of the system lives in the details of where the code is executed. Usually the models of a system are bound to the physical structure of the system, and assume that a process forms a module. This is simply not true, but to know that you have to dig into the code, where there are several details which make up the functionality. There are things like language constructs, code constructs and lack of information which are crucial for the model, but which are only found in how the code is built. And unless we are able to figure that out each time we take a "snapshot" of the system, EA and its tooling will fall short.

For EA to work it has to be a lot more code centric, and its tools need to be able to understand code and what it's built for; unless that is figured out, EA will not be able to provide information which decisions could be based on. I'd say that because of this, EA is responsible for actually making systems worse, because you are making decisions which reflect not the code but the process instances, which is not good.

There are several other issues with EA related to the inflation of patterns, but that's another post.

Edit: A good view is this video. Simon Brown has a good thing going on there.