This conversation comes up in every training I do about testing. I know exactly when it will happen. I know what I’m going to say, what they will say, and I know no one comes out completely convinced.

Then violence ensues.

We want to test some code. However, this code was first written in the age of the dinosaurs, its size resembles some of the big ones, and changing it carries the risk of feeding a Jurassic beast.

There’s no point ignoring it: we’ll need to modify the code in order to test it. What’s really bothering us is that some of the information in there needs to be accessible. I suggest changing the visibility of some members.

This escalates quickly.

Curious person: “What do you mean, make it public?”
Me: You know, make it public. Or add a method that will enable us to set up the class easily, so we can test it.
Curious person: “But that means breaking encapsulation.”
Me: Might be. Is that wrong?
Mildly suspicious person: “Well, the design is there for a reason. The member is private because it needs to be private, we can’t expose it”.
Me: Remember that tests are first-class users of the code? That means that the design can change to support them too.
Mildly angry person: “But you can’t expose everything. What if somebody calls it?”
Me: That’s the point, that the test can call it.
Angry person: “But you can’t ensure that only tests will call it. At some point, someone will call it and create hell on earth. We’ve all been there”.
Mildly angry Me: Yes, it might happen. We can’t predict everything and plan for everything; software is a risky business.
Very angry person: “THESE RULES WERE WRITTEN IN BLOOD!”

The design perspective

Let’s ignore the fact that you’re defending a horrible, untested design with the argument that changing the accessibility of one member will make it so much worse.

Encapsulation is a great concept. Having public and private members makes perfect sense. Some things we want to expose, others we want to hide.

But the water becomes murky very quickly. If we were talking black and white, only private and public, that would be ok. But for those special times, when private is just not enough…

There’s “friend” (C++).
Or “internal” (.net).
Or implicit package visibility (Java).
Not to mention “protected”.
Or reflection that bypasses everything.

It’s like we’re breaking encapsulation, but not really breaking it, so we feel better.
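
Here is a small sketch of that “not really breaking it” trick in Java, using the implicit package visibility from the list above. The class, field, and numbers are invented for illustration; the only point is that a JUnit test placed in the same package (typically under src/test/java) can read state that the rest of the codebase still can’t:

    // InvoiceCalculator.java - production code (hypothetical example)
    package billing;

    class InvoiceCalculator {
        // Package-private instead of private: invisible outside the billing
        // package, but reachable from a test that lives in the same package.
        int roundingErrorsSeen = 0;

        long totalInCents(long netCents, int taxPercent) {
            long gross = Math.round(netCents * (100 + taxPercent) / 100.0);
            if (gross * 100 != netCents * (100 + taxPercent)) {
                roundingErrorsSeen++;
            }
            return gross;
        }
    }

    // InvoiceCalculatorTest.java - same package, under src/test/java
    package billing;

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class InvoiceCalculatorTest {
        @Test
        void countsRoundingErrors() {
            InvoiceCalculator calc = new InvoiceCalculator();
            calc.totalInCents(999, 17);               // 999 * 1.17 rounds up
            assertEquals(1, calc.roundingErrorsSeen); // visible, but only here
        }
    }

Production code in any other package still can’t touch roundingErrorsSeen, which is exactly the “we feel better” part.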

Let’s face it, design can serve more than one purpose, and more than one client in different ways. The tools the language gives us don’t solve every problem. That’s the cool thing about software: it can be molded to fit what we need it to do.

That involves living with not-so-perfect designs. It also means “changing the code just for the tests” is ok, because that’s another good purpose.

BUT SOMEBODY WILL CALL IT AND PEOPLE WILL DIE

Yes, somebody might call it.

That somebody might also look at the code and decide to change it, or add an accessor themselves.
It’s silly, not to mention risky, to think that a language keyword is the only thing standing between us and disaster.

In C, everything is public. How do you prevent the “someone might call it” disaster waiting to happen there? Playing with .h files and custom linking can be a lot riskier.

We do that the way that’s always more effective than the code itself: processes that involve humans, rather than relying on tools.

We need to understand the purpose of code: to create value for someone. If the code is beautiful but cannot be tested, you don’t get points for using the “right” encapsulation.

The value comes when functionality works. In order to know it works, we need to check it. If we’ve discussed the risks involved and decided that value comes from testing, that value usually outweighs the risk of “someone might do something bad with the code” (after many have abused it already).

And if you feel this risk is not negligible, do something about it. Do code reviews, document and share the knowledge, create architecture guidelines.
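
One lightweight way to support those reviews and guidelines, sketched again in Java with made-up names, is to make the test-only intent visible at the declaration itself. A tiny marker annotation (similar ones exist in libraries such as Guava) enforces nothing by itself, but it gives reviewers something concrete to flag:

    // VisibleForTesting.java - home-grown marker annotation (hypothetical)
    import java.lang.annotation.Documented;

    @Documented
    @interface VisibleForTesting {
    }

    // LegacyReportGenerator.java - hypothetical legacy class under test
    class LegacyReportGenerator {
        private String connectionString = "prod-db";

        // Widened from private so tests can point the generator at a fake
        // target; the annotation records why it is exposed.
        @VisibleForTesting
        void overrideConnectionString(String value) {
            this.connectionString = value;
        }

        String describeTarget() {
            return "Reporting against " + connectionString;
        }
    }

The annotation doesn’t stop anyone by itself; a review guideline like “nothing marked @VisibleForTesting gets called from production code” does the actual work.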

But don’t rely on a language feature as the first resort.


5 Comments

David V. Corbin · April 11, 2017 at 7:12 pm

I am going to largely disagree. I would favor encapsulation integrity quite highly. If properly designed (and we agree the code in question is not), then it should never be an issue [and I do mean never in the absolute sense].

If you must gain access to things that are private, then I would lean towards reflection…but only after a long and hard session thinking about why the test can not be effectively accomplished via those items which are public….

    Standback · April 12, 2017 at 8:52 am

    I think the immediate question is, “Class X is currently untestable — what do we do about that RIGHT NOW?”.

    Obviously, the long-term solution is to refactor and redesign. And if doing that is relatively quick, then the question doesn’t really arise.

    But if it *isn’t* a quick fix, then you basically have two options — either you can leave the class untestable, or you can compromise encapsulation in order to gain testability.

    And I’d lean very, very strongly to favoring testability over encapsulation. The reason being, once you have tests in place, it’s much easier to reliably redesign for good encapsulation. Whereas the other way round, insisting on hermetic encapsulation until you have time for a redesign, often encourages the internal, untestable code to bloat further and further…

    Gil Zilberfeld · April 12, 2017 at 10:35 am

    David,

    If it were “properly” designed, there wouldn’t be an issue. But what is properly? And at what time does it become proper, and who decides? If testability == proper design, then there’s no problem, but I don’t think that’s always the case (I even wrote a post about it here). And if not, then why use reflection, which might cause buggy tests, as it’s a complex solution to the problem? Testability is as important as the “proper” design.

      David V. Corbin · April 12, 2017 at 1:51 pm

      I figured my comment would spark some response 🙂

      Let’s look at encapsulation boundaries… There is the class member accessibility, which is the focus so far, but there are also statement (or statement block) boundaries within a method, code blocks within a statement, heck, there is even a set of boundaries between the CIS exposed by the processor and the RISC implementation… At what points are we going to break those boundaries?

      As far as the “set it public temporarily” goes: I have seen this done, then at a later point in time the class was refactored, the member remained public (only because it was), and it ended up in use in production code. I do not like (and I admit this is subjective, but also based on decades as a consultant with nearly 100 different teams) that risk.

Dan Martin · April 12, 2017 at 12:26 am

I like this a lot for getting tests written around legacy code. Ideally I’d keep things encapsulated, but I’m fine with making it public for the time being if it makes the refactoring safer and easier to do.
