Remote Work Is The New Black – Part II

Remote work is the new black
All "Remote Work" Posts
IntroductionInformation IInformation II
GamblingPrioritiesAnarchy and Order

Early morning, off to do some remote work.
Just kidding. Early? Right. Anyway.

You’ve got your tasks ready for you. You know exactly what you’re going to work on today. You start coding, or writing a doc, or testing an app. And then you stop.

You have questions, or some great ideas. And there’s no one around to ask. Sure, there will be someone in a few minutes, when they come online, in their own sweet time. So you wait a few minutes, then have that conversation. Hopefully not over text or Slack; that would slow us down even more. And a few more minutes go by.

Then you continue to work. Until the next bump.

If you’re not used to remote work, the first thing you notice is lag in accessibility. We take availability of information and people for granted, but when everyone’s remote, things bog down. We slow down.

So here’s tip #1 for communication in remote work: Over-communicate.

You need to talk more with teammates. More than before. Set working times when everybody is available and working on the same things. And if you haven’t yet – set them as ground rules.

And notice the lag. Try to count “stuck moments”. See how they accumulate and affect your remote work.

We’ll get back to ground rules a lot. To be effective, the team has to agree to play by the same rules. As soon as next time.

In the meantime, get your remote team working again. Check out my First Aid Kit.

Remote Work Is The New Black – Part I

Remote work is the new black
All "Remote Work" Posts
IntroductionInformation IInformation II
GamblingPrioritiesAnarchy and Order

I looked around the office, and there was no one working. No typing. No staring confusedly at code. No arguments about where to eat lunch. Empty rooms.

There are more and more workplaces like that. People have started working remotely, or are on leave. Or worse.

But how does work continue? Slowly. Very slowly.

We believe we will live through this. No, not the virus; the period until victory, when life resumes.

But as a team, and as team leaders, we want to continue working with the processes we had success with before. Now it’s a whole new world of remote work we didn’t prepare for.

We don’t know how to make the jump. The result is work slowing down.

I’m going to write for a while about different problems I see happening, from the remote worker’s perspective, but mainly from the whole team’s perspective. It’s about how to keep the team’s development standards as high as possible in a remote world.

The next post is going to be about the first thing we notice: Lack of accessibility, and how to counter it.

In the meantime, if you want my help to get your team working again in the age of COVID-19 check out my First Aid Kit program.

The Zen Tour Continues – #TestIL Meetup Tel Aviv

Gil Zilberfeld at the Test IL meetup.

Once more unto the breach, my friends. Once more.

This time I took the Test Maintenance presentation to Tel Aviv. The presentation is right there below.

Test maintenance is something nobody loves doing, but everyone must. The presentation is about what we do when we’ve accumulated lots of tests: some of those tests are old, some are new tests we’re writing. How to organize those tests and add tests in a smart way, so people can find them. Then we looked at different types of tests, the smells in them, and how to fix them.
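To make the “smell” idea concrete, here’s a minimal, hypothetical sketch (the `Cart` class and all names are mine, not from the talk): the same behavior tested twice, once with an unfindable name and magic numbers, and once refactored so the next maintainer can actually read it.

```python
import unittest


class Cart:
    """A tiny, made-up shopping cart, here only to give the tests something to test."""

    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        return sum(self.items)


# Smelly version: meaningless name, magic number, nothing tells you what broke.
class TestCart(unittest.TestCase):
    def test_1(self):
        c = Cart()
        c.add(5)
        c.add(7)
        self.assertEqual(c.total(), 12)


# Refactored version: shared setup extracted, intention-revealing name,
# named values instead of a magic expected number.
class TestCartTotal(unittest.TestCase):
    def setUp(self):
        self.cart = Cart()

    def test_total_is_sum_of_item_prices(self):
        first_price, second_price = 5, 7
        self.cart.add(first_price)
        self.cart.add(second_price)
        self.assertEqual(self.cart.total(), first_price + second_price)
```

Both tests check the same thing; only the second one is maintainable, which is the whole point of the workshop.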

The one-hour presentation is a very compressed version of a full-day workshop on Test Maintenance. That’s right, you can have the “Test Maintenance” workshop experience, including many hands-on exercises in your workplace, on your code.

Check out the workshop page.

TestIL Presentation: Zen and the Art of Test Maintenance

Gil Zilberfeld and friends at the Test.IL Meetup on maintenance presentation

Making the “Zen and the Art of Test Maintenance” workshop into an hour-long presentation, I made my way north on a dark and stormy night. It was lovely to see old and new acquaintances. I saw a nice demo of a new-ish open source tool called Oxygen, presented by Nick Dimer. Thanks to the organizers of the meetup, Nitzan Goldenberg and the hosts at Bar Lev HiTech Park. And to Bar Gothertz for the photographs, like the one on top!

This will come to the center of Israel soon. And as always, if you want to experience the full workshop, test refactoring galore, let me know in the comments.

Here are the slides:

Let’s Test 2019: “Zen And Test Maintenance”

Art of Test Maintenance Workshop

Last week I came back from Let’s Test South Africa. As always, Let’s Test lived up to its name. It’s a small conference, with a lot of hands-on learning, and a focus on community. It’s about testers meeting other testers, learning, and feeling good about it.

My workshop “Zen And the Art of Test Maintenance” went great. (See the picture of testers working on refactoring tests.) Unfortunately, immediately after the session, the bug I apparently picked up on the plane hit me hard. I spent the next day either in bed, or trying to stay awake at sessions I didn’t want to miss (Hi Paul Holland!).

I’ve also re-met friends (Simon Berner, Gerald Mucke, Beren van Daele, Regina Martins, Joanne Perold, Elizabeth Zagroba and Paul Holland) and made new ones. I want to especially thank Alison Gitelson for drawing the poster for my workshop (the main picture on top). And Louise Perold and the rest of the crew for an awesome conference. I’ll be coming back. Healthier.

Zen and the Art of Test Maintenance Workshop with Gil Zilberfeld at Let's Test

After that, things got better. I got back to eating the awesome food provided by the conference. Then I became a tourist for a couple of days, plus going to a karate class in a local dojo. I felt a lot better then.

Anyway, that’s the 2019 testing schedule. I’ve got one more training on agile and estimation this week at the Sela Developer Conference.

Then it’s on to 2020. I have a good feeling about next year. Stay tuned.

Technical Debt Considered Harmful, Part II

Technical debt and code entropy - same same, or different?

Last time, I gave you the short-short version of thermodynamics. We talked about how entropy grows rampant, creating waste in the system, if it doesn’t get constant energy input.

Let’s get back to technical debt. The technical debt metaphor has two parts: First, it slows us down in the long term. The second part is that it can go away – for a price.

You can already see the resemblance to our energy system. The code has growing entropy. Chaos, or our inability to control the code, keeps growing. The chaos creates energy-wasting traps, and we deliver slower.

What are these traps?

If we don’t have tests, we tend not to refactor. If the code is not readable, we’re very cautious about adding new functionality. If the code is one giant method, we tend to add parts of the new functionality within the giant method, otherwise we risk breaking stuff. These are the time-sucking traps we need to avoid, instead of going forward fast.

So the code grows, laying more energy traps for us, which slow us even further the next time we go into the swamp.

We need to invest energy in the system to remove the traps. We need to take care of the code – spend time in refactoring, improving it, so the next time the swamp is easier to navigate through. In the long run we sustain velocity, instead of losing energy. The swamp gets clearer, but never gets clear.
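A hedged toy example of such a trap, and the refactoring that removes it (the checkout scenario, names and numbers are all mine, purely for illustration): new rules keep getting squeezed into one do-everything method, until we extract them into small functions we can change without fear.

```python
# The trap: one growing method that absorbs every new rule.
def checkout(prices, is_member, gift_wrap):
    total = 0
    for price in prices:
        total += price
    if is_member:
        total *= 0.9   # member discount, buried in the middle
    if gift_wrap:
        total += 3     # new rule squeezed in, afraid to touch the rest
    return total


# Investing the energy: small, named steps that are easy to test and extend.
def subtotal(prices):
    return sum(prices)


def apply_member_discount(total, is_member):
    return total * 0.9 if is_member else total


def add_gift_wrap_fee(total, gift_wrap):
    return total + 3 if gift_wrap else total


def checkout_refactored(prices, is_member, gift_wrap):
    total = subtotal(prices)
    total = apply_member_discount(total, is_member)
    return add_gift_wrap_fee(total, gift_wrap)
```

The behavior is identical; the difference is that the next rule has an obvious, safe place to go, instead of another line inside the swamp.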

The second part of the metaphor is where I think it fails: since it’s not really a loan, we’re never going to fully repay it. There’s no endpoint where it is repaid, no single point of repayment. There’s always something more important to do. And since we never stop and “repay it”, we’re stuck in a downward spiral.

In the real world, the best we can do is keep things under control. We do that with clean code, refactoring, adding tests, code reviews and design reviews. Those reduce the energy traps and maintain sanity. Entropy does *not* grow unchecked. But since entropy wants to grow, we keep doing the work, putting energy in, with no end in sight. Or an end at all.

In some situations we believe we can repay the loan fully. Later we discover we only got to a “good enough” point. And if we continue after that, working the way we did before, we see that the “good enough” point was just a temporary mirage.

I’ve stopped using technical debt as a metaphor, because it doesn’t correspond with reality. In real life, business people have no problem accepting long-term costs for short-term gain. The loan can never be repaid.

Instead, we should continuously lower the code entropy, get rid of the energy traps, and put energy into clearing the code swamp.

Technical Debt Considered Harmful, Part I

Gil Zilberfeld explains how technical debt is confusing, and instead uses thermodynamics to explain the benefits of clean code and refactoring

Goodhart’s Law states that when a metric becomes a goal, it is no longer a good metric.

I don’t know when technical debt became a metric. But SonarQube calculates and reports it as such. If this calculation is correct (you must be scratching your head right now), people may assume that once they eliminate it, according to the calculation, they are debt free.

Of course, they are not. The technical debt metaphor (yes, there’s no real metric) was supposed to help explain the idea of comparing technical tasks to feature tasks, which are considered more valuable. Technical tasks, like adding tests, refactoring and simplifying the code and architecture, don’t have an immediate value, compared to “real” features (I know, it’s perceived value, not necessarily an actual one). But not doing the technical stuff makes adding feature value slow in the long run. The deeper the debt, the slower you get with the new features.

Business people, however, don’t understand how “not working on features” helps features get built. Organizations that have evolved beyond that don’t need the technical debt metaphor, since they already understand the impact.

I don’t use technical debt in my vocabulary anymore. So I needed an alternative. When I studied engineering I took a couple of courses in thermodynamics. Little did I know then, how well they relate to coding.

Ready to learn a bit?

A system has different kinds of energy in it: kinetic, electrical, or potential energy, which can transform into other types of energy. Energy can be transformed into “work”. In physics, “work” is what happens when potential energy is put to use. In a closed system, energy is conserved, but as we all know, no system is really closed, and therefore no system can conserve all its energy, unless we add more energy into it: heat it, charge it, throw it real fast. You get the idea.

Next, there’s entropy. Entropy is basically the chaos in the system. Another way to look at entropy is from the perspective of energy: entropy is the amount of energy that we cannot use to do work. We can’t transform it into a useful kind of energy; it gets wasted.

Now, if left unattended, given enough time, chaos takes over. In energy terms, we lose the energy, since it cannot be conserved. Entropy grows, unless we put more energy into the system.
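For the formula-minded, the whole lesson compresses into the textbook statement of the second law (standard form, not from my old course notes): in an isolated system, entropy can only stay the same or grow.

```latex
% Second law of thermodynamics for an isolated system:
% entropy S never decreases over time.
\frac{dS}{dt} \geq 0
```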

Reminds you of something?

Part II is all about energy and entropy in code.

Where Do Bugs Come From? Part V

Gil Zilberfeld explains bugs appearing due to translation errors
This series is about the origin of bugs. Although they did not come from an apocalypse, they are sure leading us towards it.
Part I | Part II | Part III | Part IV | Part V

We’re getting there. Here’s the final nail in the coffin: We are bad at translation, and unfortunately, we do a lot of it.

In typical application development, we’ve got a user who has a problem:

“Why can’t I see the content when I open the app?”

The business person translates that into a vision:

“Of course, that makes sense. Every time the app starts, it should show the content”.

He then turns that into a requirement:

“The start screen should look like this, and the latest content should be shown”.

The architect takes the requirement and says:

“When the main module loads, it should get the latest data from the database. Oh, and let’s separate it into a new micro-service”.

The developer (I know I’m stereotyping, but role-playing is fun) gets that from the high level design, and writes a detailed design for the code saying:

“For performance reasons, when the new data gets there, it is also cached for other retrievals. Oh and let’s make that two micro-services”

On the other side of the table, our tester looks at the requirement and says:

“I’ll need two staging environments, one for testing in a standalone configuration and one distributed, I’ll load the data differently each time, and I expect every time to see the newest data when I reset”.

Just a short example. Of course, after all this is released, the user says:

“Well, it works most of the time, but not always. Why can’t I get my content?”

Communication is hard, especially when we’re working cross-domain on complex issues. We don’t speak the same language. Whenever there is a translation error, two bugs get their wings. That’s how the Tower of Babel started.

To summarize: Bugs happen because we are not good at what we do, we don’t admit it, we don’t communicate well, and after all that, we still rely on pure luck that the thing will work.

How do you fix a bug? You understand what the correct behavior should be, plug the hole upstream, and check that it works now, and again, and in two days – again.

How do you fix a buggy process? The same way.
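One way to picture “plug the hole upstream and check that it works now, and again” is a regression test that first reproduces the bug and then guards the fix forever. A sketch with made-up names, loosely based on the “latest content” example above:

```python
class ContentFeed:
    """A hypothetical content feed, standing in for the app from the story."""

    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def latest(self):
        # The fix: return the newest item, or None for an empty feed.
        # (The imagined bug was returning the oldest item instead.)
        return self._items[-1] if self._items else None


# The regression test: it reproduced the bug before the fix,
# and now it fails again if anyone reopens the hole.
def test_latest_returns_newest_item():
    feed = ContentFeed()
    feed.add("old post")
    feed.add("new post")
    assert feed.latest() == "new post"
```

The test stays in the suite, so “in two days – again” happens automatically on every run.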


Where Do Bugs Come From? Part IV

Gil Zilberfeld describes how bugs arrive from where we trusted the most: other people's hardware and software.

On to the next issue: We trust in the “others”. Way too much.

Have you ever considered how much of the whole application is made up of the code we actually write?

I mean, let’s start with hardware. Computers, wires, antennas, satellites. We use them as part of our solution, and they are not even code. Then there are servers and routers. There are runtimes and libraries. There’s open source (remember Heartbleed?) and closed code. There’s code that’s been running for fifteen years that nobody can read, but works in production. And then there’s our code that integrates with all of them.

Oh, and our code doesn’t really run. There are the compilers that turn it into runnable bits.

So how much of that whole thing does our code make up?

Let’s be generous and say 5%. Yeah, the “others” are not just something that runs between the pieces of our code; the application really IS the “others”. And we don’t only trust everything out there, we don’t even think about those things and how they work.

Until they don’t, but then, once we upgrade/reintegrate/replace them, we usually forget and move on to the new trendy framework that must be better than the old one because… trust?

Our view of testing is limited too, because of our optimistic assumptions that “these things work”. Bugs come out of the cracks and they surprise us every time, because we didn’t even get a flashlight.

Our concept of building software should be skeptical and cynical. The idea of standing on the shoulders of giants is a good one, as long as the giants are stable.

We’re a pretty optimistic bunch. And optimism gets us every time.

When were you last surprised (in a bad way)?

Where Do Bugs Come From? Part III

Gil Zilberfeld discusses where bugs come from when we don't do TDD, clean code, etc.

We’ve looked at a couple of excuses for where bugs come from; now let’s dig deeper.
And we’re going to start with something that’s hard to admit: We don’t know exactly what we’re doing.

The best of us already know that, and the rest of us haven’t figured it out yet. If you’re offended, I’m sorry, but hey, if we knew what we were doing, it wouldn’t be that hard, would it?

I used to think I knew everything. And then I understood I don’t, but thought that even so, I could learn everything. And if anything happened, I could control it.

I’m better now.

Our skill level plays a part in making mistakes, obviously. We are less prone to small stupid bugs. We think.

And it’s our hubris that tells us that bugs are caused by other, inferior people. We won’t make the mistakes that “they” do.

Nope. We still make mistakes. Big and small ones.

And I’m not talking about just developers here, if you haven’t figured that out yet. Anyone and everyone on the value stream is not as good as they should be. Nobody is exempt from contributing to the bug pool.

Misunderstanding, using the wrong tools, wasting time on Facebook, working on magnificent plans that never pan out – we still don’t do our job well. We are not focused, we are not wary of the risks, and we don’t do our best to remove those pesky bugs.

Funnily enough (or not), agile development (especially eXtreme programming) has a couple of cures for that. Even “old” methodologies had those.
Processes that focus on delivery, quality and feedback. TDD, clean code, design, exploratory testing, design reviews, code reviews. There’s a lot more we can do.

We’re still learning how to be better, and we should be better at that too.

What are the things you know you can improve right now?