What about security? What about accessibility? What about visual design? What about usability? What about data analytics? Do we have to write tests?
A very effective way for a project to die before it even gets started is to load it down with a lot of "well, you can't start until you address X" for dozens of different topics X. Yet we've also seen the dangers of putting everything off until later: after a big system has been built, these concerns can be hard to retrofit (especially if we don't communicate what is missing, or carelessly use words like "done"). Can you just ignore the annoying topics unless you are sure they are biting you? Up to a point, yes, but I'm operating from the assumption that you want to protect your users' data, you don't want your users to be baffled, and so on.
Are these core or extras? There's a lot of judgement involved, and it will depend on the specific topic. We need a way to make those decisions, and some idea of where they fit into the whole process of getting our software into the hands of users.
When do you address non-functional requirements? I'd say as you build the software. Not only is this more manageable than an up-front-focused process, it is also more effective. Making a lot of plans about how secure (or accessible, or operational) your software will be is only as good as your follow-through, so develop and revise your plans and techniques as you implement. Not only is this more feasible, the presence of running code gives you a reality check and a degree of concreteness that will improve your ability to find the best ways to achieve your non-functional goals. Make it a habit, not a totally separate process.
One good technique for some topics is to check for the desired behaviors in your test suite, which gets run regularly (typically on every pull request or commit). Open source linters exist for topics like accessibility and security, and in many cases you can write your own checks. They don't need to be perfect to be useful; a simple text search may be good enough to verify, for example, that you are calling the method which sends logging to your centralized logging service rather than the one which does not (a minimal sketch of such a check appears below). Bring the people along too, because it is no fun to keep automated checks passing while giving no thought to whether those checks achieve their intended purpose. But automated checks are fairly easy to implement and well suited to a situation where the software is changing constantly.
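To make the "simple text search" idea concrete, here is a minimal sketch of such a check written as a test that could run on every pull request. It assumes Python and a pytest-style test suite; the src/ directory, the print() pattern, and the log_to_central_service() helper are hypothetical stand-ins for whatever your project actually uses.

```python
# A crude but useful automated check: fail the build if any source file logs
# by calling print() directly instead of the (hypothetical) centralized
# logging helper log_to_central_service().
import pathlib
import re

DISCOURAGED = re.compile(r"\bprint\(")   # pattern for the call we want to avoid
SOURCE_ROOT = pathlib.Path("src")        # hypothetical location of our code


def find_discouraged_calls():
    """Return (file, line number, line text) for every discouraged call found."""
    hits = []
    for path in SOURCE_ROOT.rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if DISCOURAGED.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits


def test_logging_goes_to_central_service():
    """Fail if any source file bypasses the central logging helper."""
    hits = find_discouraged_calls()
    assert not hits, f"Use log_to_central_service() instead of print(): {hits}"
```

It is deliberately crude (a real linter or AST-based check would have fewer false positives), but even this level of check catches regressions automatically instead of relying on someone remembering to look.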
It can be daunting, especially if you have a small team. "I don't know anything about security!" "I'm just a graphic designer, not a UX expert!" "Why can't we just write our logs to disk?" "Who are we publishing these metrics for anyway?" Don't let this paralyze you, but do build awareness within your team as you can, and get help as you are able. For example, in the area of UX, having empathy for the users and simply asking what they are trying to accomplish is a good start. More broadly, think about how you'll know whether you are doing a good job (the role of penetration tests in the security landscape is a whole topic of its own, but the idea of a penetration test originated from a good impulse: trying to find out how secure your software actually is rather than operating on unverified assumptions).
Also, give some thought to your definition of done. In many contexts your compliance department has some rigid-sounding rules about characteristics your software must meet by some point (maybe before any user uses it, maybe before it is generally available, something like that). Try to set rules for yourself which fit with those or exceed them, and apply them at the same level as you do other requirements (often before considering each user story done).