What does "good software" mean?

Reflecting on quality assurance in software

This is a fundamental question, one of those questions so open-ended that it is never fully answered - and there is a good reason for that.

Why is this important?

Asking yourself this question is paramount, whether you are a Quality Analyst, QA Engineer, Tester or Software Engineer, because it sets a guiding light for what we, software professionals, should pursue in our work. It may seem like a naive question, easily answered with something like "good software is software that works", but that is more of a starting point than an answer. Software that works is what both we, the makers, and the users want, but we must first switch sides and put ourselves in the user's shoes for a second to understand what "good software" means for them!

Software is not a toy that we get to play with; it's not meant (at least primarily!) to be a source of joy and amusement for the professionals involved in it. Software is frequently a service we offer (a model that has become pretty much the norm in cloud-based environments), along with its maintenance. It's a tool we sell clients under the promise that it will solve a particular problem the client is facing.

So this leads us to reflect on what good software should look like - it's a business matter, at the end of the day. I invite you to sit down, grab a coffee and explore this subject with me.

Thinking about good software from a customer's perspective

If the software is being developed for someone else to use, it seems obvious that we should consider their needs and put them before our own convenience. I know there are plenty of things that keep us from this ideal state, such as accommodating requests from sales teams and management. These things happen, and we need to work around them instead of just moaning about them.

I strongly advocate for a much closer relationship between development (especially the test and quality members) and the 1st and 2nd line support teams. They have a much clearer vision of what happens in the field, what the customers/users may be struggling with, what they informally complain about, and so on. In my experience, at least, the development team I was part of was the 3rd line of support, which means our visibility of everyday issues was very limited - we only dealt with big live issues and ended up not having good visibility of the minor issues users brought up when they called the helpdesk. A good strategy to cover this "blindness" is setting up a reporting routine from the helpdesk, quantifying the issues by their types. It's simple, easy to do, and opens the curtains to a vast richness of information from the people who actually use your product on a daily basis, providing parameters we can use to create test cases, for instance.
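
That reporting routine can be surprisingly simple. Here is a minimal sketch in Python; the ticket records and category names are hypothetical, and a real routine would pull this data from your helpdesk tool's export or API:

```python
# Minimal sketch of a helpdesk reporting routine: count tickets by issue type.
from collections import Counter

def summarise_tickets(tickets):
    """Quantify helpdesk tickets by their reported issue type,
    most frequent first."""
    counts = Counter(ticket["type"] for ticket in tickets)
    return counts.most_common()

# Hypothetical ticket data, as it might come from a helpdesk export.
tickets = [
    {"id": 1, "type": "login failure"},
    {"id": 2, "type": "slow report"},
    {"id": 3, "type": "login failure"},
    {"id": 4, "type": "ui confusion"},
    {"id": 5, "type": "login failure"},
]

for issue_type, count in summarise_tickets(tickets):
    print(f"{issue_type}: {count}")
```

Even a weekly report this crude tells the development team which issue types dominate the support queue - exactly the parameters you can turn into test cases.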

Also, these teams can tell us what they, as support agents, struggle with. This is easily overlooked, but it's worth keeping in touch with these professionals to understand their perspective as well - good software should, then, be easy to support. If your software product creates hardship for its very support agents, it may not be really good. Not only will this ultimately impact the end user, it can also become a source of distress and affect the harmony of teams that should be working closely together to address the client's needs.

Appearance DOES matter!

Let's imagine you have a behaviour in your application that's not ideal, but is also very laborious to fix. It doesn't affect the functionality at all, nor is security compromised. Still, the behaviour may send an ambiguous message to someone not acquainted with what's going on under the hood. Is that something we can call "good software"?

In my opinion, no, we can't. This situation happened to me not long ago: during testing, we found an ambiguity in the way the application 'talked' to the user, even though the workflow and security remained intact. It was the result of an issue not addressed in the past that was subsequently disseminated further. Because of the very nature of the underlying issue, fixing this, let's say, bad communication would have demanded a lot of work, and, with the delivery deadline looming, the development team chose to postpone it and fix it in a following version. That is totally understandable, even though it is not ideal.

This kind of situation is very common and requires deeper reflection. There could be something bigger at stake than that little, insignificant bug (from a developer's point of view). Let's remember again that the user is not necessarily computer literate; they don't really care about, or even understand, that the ambiguity poses no risk of a security breach or workflow breakage.

It doesn't matter how secure your application is, it must LOOK secure.

This is important to 1) manage your business reputation; and 2) reassure the user they can trust your software. Ambiguities don't help with either of those: if the user doesn't feel reassured, your reputation might go out of the window.

How easy is it to maintain your software?

We have all heard about developers' and testers' worst nightmare: legacy systems. Why are they so frightening? Usually, these are systems that were left behind for some reason, and they're a mess: no one knows what's happening there, there are no tests in place, there's spaghetti code everywhere, etc. Imagine you're taking over a project that's been left aside because, let's suppose, the people who created it from scratch and worked on it for years have left the company. The code is tightly coupled, and you find variables named 'a', 'b', 'x' and so on. How on earth are you supposed to maintain, test and ship new features for that system?
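
To make the naming point concrete, here is a small, entirely hypothetical before/after sketch - the same calculation written the "legacy" way and the maintainable way:

```python
# Legacy style: what do a, b and x mean? The reader has to reverse-engineer it.
def f(a, b, x):
    return a * b * (1 - x)

# The same logic with names that explain themselves and a docstring.
def order_total(unit_price, quantity, discount_rate):
    """Total for an order line after applying a percentage discount."""
    return unit_price * quantity * (1 - discount_rate)

# Both functions compute identical results; only one can be maintained
# by someone who has never seen the code before.
assert f(10.0, 3, 0.1) == order_total(10.0, 3, 0.1)
```

Nothing about the behaviour changed; the difference is entirely in how much the next person (possibly you, a year from now) has to guess.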

Quality is also an internal affair. We need to look at developers and testers as users of the software. At a different level than the end user, but definitely users. And we care about users.

I have no intention of prescribing best practices or anything like that here - every project is a living organism with its own needs and peculiarities - but an effort must be made, at team level, to ensure that everyone who touches, or will one day touch, that software is able to understand the code, and proper documentation plays a fundamental part in this.

As I said, there are no perfect recipes, but there are some things we can look for and be vigilant about:

  • Is there any documentation, and is it clear enough?

  • How are the classes in the code organised?

  • Is there a consistent naming convention for variables?

  • Are there any metrics on testing?

  • Is it easy to introduce new features?

  • When introducing new features, does it require deep changes in the code?
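
On the "metrics on testing" point, even a very basic metric is better than none. As a purely illustrative sketch (real projects would derive this from their CI results, and the thresholds are a matter of team judgement):

```python
# Hypothetical sketch of one simple testing metric: the pass rate of a
# test-suite run, as a percentage.
def pass_rate(results):
    """Percentage of tests that passed in a run; 0.0 for an empty run."""
    if not results:
        return 0.0
    passed = sum(1 for outcome in results if outcome == "pass")
    return 100.0 * passed / len(results)

run = ["pass", "pass", "fail", "pass"]
print(f"pass rate: {pass_rate(run):.1f}%")  # 3 of 4 passed -> 75.0%
```

Tracking a number like this over time says more about a codebase's health than any single run does.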

These are just a few things I can think of right now, but there are surely a lot more. I hope this helps you on your journey and provides some food for thought.

If you wish, feel free to get in touch, and if you're reading this from Brazil or any other Portuguese-speaking country, I invite you to have a look at a project I maintain - a virtual library of readings on QA, development and all things software: https://www.biblioteca-qa.org/.