Friday, April 21, 2023

Non-functional requirements

What about security? What about accessibility? What about visual design? What about usability? What about data analytics? Do we have to write tests?

A very effective way for a project to die before it even gets started is to load it down with a lot of "well, you can't start until you address X" for dozens of different topics X. Yet we've also seen the dangers of deferring everything until later. After a big system has been built, these concerns can be hard to retrofit (especially if we do not communicate what is missing, or carelessly use words like "done"). Can you just ignore the annoying topics unless you are sure they are biting you? Up to a point, yes, but I'm operating from the assumption that you want to protect your users' data, you don't want your users to be baffled, and the like.

Are these core or extras? A lot of judgement goes into this. It will depend on the specific topic, and we need a way to make those decisions, plus some idea of where they fit into the whole process of getting our software into the hands of users.

When do you address non-functional requirements? I'd say as you build the software. Not only is this more manageable than an up-front-focused process, it is also more effective. Making a lot of plans about how secure (or accessible, or operational) your software will be is only as good as your follow-through, so develop and revise your plans and techniques as you implement. Not only will this be more feasible, but the presence of running code will also give you a reality check and a degree of concreteness which will improve your ability to find the best ways to achieve your non-functional goals. Make it a habit, not a totally separate process.

One good technique for some topics is to check for desired behaviors in your test suite, which gets run regularly (typically on every pull request or commit). Open source linters exist for topics like accessibility and security, and in many cases you can write your own; they don't need to be perfect to be useful. A simple text search may be good enough to verify, for example, that you are calling the method which sends logging to your centralized logging service rather than the one which does not. Bring the people along too, because it is no fun to keep automated checks passing while giving no thought to whether those checks achieve their intended purpose. But the automated checks are fairly easy to implement and conducive to a situation where software is changing constantly.
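As a minimal sketch of that text-search idea, here is one way such a check could live in a test suite (Python, pytest-style). The directory layout, the notion that direct use of the stdlib logging module is discouraged in favor of a central_log wrapper, and the allow-list are all hypothetical choices you would adapt to your own codebase.

    # check_logging.py - a deliberately simple lint, run as part of the regular test suite.
    # Hypothetical assumptions: source lives under src/, the approved path is an
    # app.logging.central_log() wrapper, and importing the stdlib logging module
    # directly is what we want to flag. Adapt the patterns to your own codebase.
    import pathlib
    import re

    DISALLOWED = re.compile(r"^\s*import logging\b")   # bypasses the central logging service
    ALLOWED_FILES = {"src/app/logging.py"}             # the wrapper itself may use it

    def offending_lines():
        hits = []
        for path in pathlib.Path("src").rglob("*.py"):
            if path.as_posix() in ALLOWED_FILES:
                continue
            for lineno, line in enumerate(path.read_text().splitlines(), start=1):
                if DISALLOWED.search(line):
                    hits.append(f"{path.as_posix()}:{lineno}: {line.strip()}")
        return hits

    def test_only_central_logging_is_used():
        hits = offending_lines()
        assert not hits, (
            "Use app.logging.central_log instead of the stdlib logger directly:\n"
            + "\n".join(hits)
        )

Because it runs with the rest of the tests, a check like this fails the build on every pull request, and the failure message points at the exact offending lines.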

It can be daunting, especially if you have a small team. "I don't know anything about security!" "I'm just a graphic designer, not a UX expert!" "Why can't we just write our logs to disk?" "Who are we publishing these metrics for anyway?" Don't let this paralyze you, but do try to build awareness within your team as you can, and get help as you are able. For example, in the area of UX, having empathy for the users and simply asking what they are trying to accomplish is a good start. More broadly, think about how you'll know whether you are doing a good job (the role of penetration tests in the security landscape is a whole topic of its own, but the idea of a penetration test originated from a good impulse: trying to find out how secure your software actually is rather than operating on unverified assumptions).

Also, give some thought to your definition of done. In many contexts your compliance department has some rigid-sounding rules about characteristics your software must meet before some point (maybe before any user touches it, maybe before it is generally available, something like that). Try to set yourself rules which fit with those or exceed them, and apply them at the same level as you do other requirements (often before considering each user story done).

Tuesday, April 18, 2023

If you support it, you get to enhance/replace it

Are you a software creator or a software maintainer? If this sounds like a trick question, you might be on what I'm calling a build/operate team. I'm actually not sure whether there is a standard term for this. It is at least similar to a "product team" as opposed to a "project team".

So we are talking about a team which owns a particular sort of value and is staffed to provide it, including as much of product, design, engineering, testing, support, etc., as feasible.

One way to say this is "if you build it, you support it". In that case, a handoff from a build team to a maintenance team is an anti-pattern.

But when I showed an early draft of this essay to someone struggling with these patterns, they objected: "But we don't want everything to be owned by the last person/group that touched it! We don't want to make it impossible for someone to chip in without committing themselves unto generations to come!"

That's what made me think of flipping "if you build it, you support it" on its head. What if we formulate it as "if you support it, you get to enhance/replace it"? Especially if your organization always seems to neglect maintenance activities, putting them at the front of your mindset may be helpful, but this flip also helps address some of our paradoxes from before.

Is something owned by the last person who touched it? Not really; we're aiming for a world in which someone who jumps in temporarily is working with an owner who is engaged enough to understand what is being done, has the final say on how it is done, and knows how it ties into their ongoing responsibilities.

Can we reorg without damaging the principle that people build things and also support them? Yes, although to follow the rules a reorg has to assign the ongoing tasks which come up on a regular basis, as well as the glamorous new things which we are getting all excited about.

See also:

"Products Over Projects", by Sriram Narayan, https://martinfowler.com/articles/products-over-projects.html .