Eugene Tkachenko

Testing Compatibility on the Web

It’s hard to imagine full testing of a software product without testing its compatibility. Of course, there are exceptions, such as embedded solutions, but since most of the outsourcing projects we work on have a web part, we can consider life without compatibility testing totally impossible.

Why impossible? Because, to our regret and to the joy of end users, those users have a certain choice of environments in which they can use our solution. And these environments, just like people, see the world around them a little differently. Where one environment interacts with our product easily, another might respond inadequately.

For us developers, there are only two ways to ensure quality under these circumstances: either we seize the world and force everyone to use one single environment, or we adapt to the tastes of the public. The second option is usually preferable, since a working ideology or a powerful army for the seizure is hardly ever at hand.

This article considers an example approach to organizing manual compatibility testing of a web product.

This is not a call to action, but rather a number of ideas to help you remember important things and to arrange your thoughts (but there is no guarantee).

«Chicken’s no bird, release without compatibility testing doesn't work»

Folk wisdom

«PM is dear to me, but dearer still is the truth»

Aristotle


Where to test

It’s great when the customers know what they want and from the very beginning provide a full list of detailed requirements, including non-functional ones that describe their wishes regarding compatibility. In this case, there is nothing to think about – you go ahead and do the testing. However, more common are the projects where we have to decide on the scope ourselves and then justify the proposed scope to the customer. How can we do it?

  1. Remember where our target audience lives.
  2. Go to http://gs.statcounter.com/ and examine the statistics for the countries we need. There are two options here:
  • we know that all users are found within one or two ethnically/economically related countries, as is often the case with nation-oriented products, like payment systems that are in demand only in the customer’s homeland. In this case, we look up statistics for the country in question and draw up the scope;
  • users live in several countries or throughout the world – in this case, we need to look at the worldwide statistics or the statistics for the specific region where these countries are located. We need to determine which “browser-OS-platform” combinations are the most popular. Unfortunately, the graph cannot show a combination of several parameters at a time, so you will have to look up statistics for each parameter separately and draw up the scope based on that (see the sketch right after this list).
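
For illustration only, here is a minimal sketch of turning such per-parameter shares into a rough ranking of combinations. All numbers and names are made up, and the joint shares are estimated under a crude independence assumption, since StatCounter shows one parameter at a time:

```typescript
// Hypothetical per-parameter shares (%) read manually from StatCounter.
const browserShare: Record<string, number> = {
  Chrome: 62, Safari: 19, Firefox: 6, Edge: 5,
};
const osShare: Record<string, number> = {
  Windows: 40, Android: 35, iOS: 15, macOS: 8,
};

// Pairs that cannot occur in practice, excluded by hand.
const impossible = new Set(["Safari/Windows", "Safari/Android"]);

// Estimate joint popularity assuming the two parameters are independent;
// a simplification forced by the one-parameter-at-a-time graphs.
const combos = Object.entries(browserShare)
  .flatMap(([browser, bShare]) =>
    Object.entries(osShare).map(([os, oShare]) => ({
      combo: `${browser}/${os}`,
      share: +((bShare * oShare) / 100).toFixed(1), // estimated share, %
    }))
  )
  .filter(({ combo }) => !impossible.has(combo))
  .sort((a, b) => b.share - a.share);

console.table(combos); // candidate scope, most popular first
```

The resulting list is a starting point for discussion with the customer, not a substitute for judgment.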

For this task, statistics are useful to us only if they are up-to-date. We don’t need data from times long gone – it’s enough to study the numbers for the past year. It’s also worth considering trends – if a certain browser has gained popularity over the last year, and this growth can be described as linear, it can be placed into a group with a higher priority (see the next point for priorities).

  3. At this point, we could draw up a list of environments and happily settle on it. But there is another option – to divide this list into groups according to priorities. Why divide browsers by priority? There may be a lot of them, and they may be more or less equally popular. We must understand where to start, what to pay more attention to, when to finish, and what can be dropped from the scope altogether if we run out of time.

Before sorting environments according to priorities, we must first understand what ranges of popularity percentage correspond to each priority. As a rule, three priorities are used: P1, P2, and P3.

  • P1 – environments without which there can be no release at all. We pay the most attention to them, and we perform functional testing in one of them before proceeding to compatibility.
  • P2 – something in between P1 and P3. They are still important for full compatibility, but not as important as P1, because fewer people use them.
  • P3 – environments that can be omitted if time runs short. They are also a useful bargaining chip in negotiations with the customer.
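
As a minimal sketch, the bucketing could look like this; the thresholds and shares are made up for illustration, not recommendations, and as discussed next, the boundaries should be chosen per project:

```typescript
type Priority = "P1" | "P2" | "P3" | "out of scope";

// Example boundaries in % of usage share. These are made-up values;
// choose your own per project (see the note on the P3 boundary below).
function priorityFor(share: number): Priority {
  if (share >= 15) return "P1";
  if (share >= 5) return "P2";
  if (share >= 2) return "P3";
  return "out of scope";
}

// Hypothetical estimated shares for the shortlisted combinations.
const shares: Record<string, number> = {
  "Chrome/Windows": 24.8,
  "Chrome/Android": 21.7,
  "Safari/iOS": 9.3,
  "Chrome/macOS": 5.0,
  "Firefox/Windows": 2.4,
  "Edge/Windows": 2.0,
};

for (const [combo, share] of Object.entries(shares)) {
  console.log(`${combo}: ${share}% -> ${priorityFor(share)}`);
}
```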

It’s not that simple with the bottom boundary for P3 – we could make it the standard 5% and leave out browsers with popularity below that threshold. However, situations like these may occur:

  • there may be 20 browsers, each with a popularity of about 5%;
  • the target audience might live in, say, China, where 5% is like 200% for Sweden.

Therefore, I recommend determining ranges and boundaries for each case separately, taking into account the demography and the big picture of popularity.
  4. We distribute browsers into groups based on priorities. At this stage, P1 can include a seemingly random browser as the customer’s special wish. For example, the product under development may be one the customer sells further (a white-label solution), and the potential buyers’ tastes in environments may differ greatly from the country average – some may use Apple devices only, while others may be loyal to Android and Windows. In this case, you need to forget the dry statistics and listen to what the money says.

By the way, leaving out browsers that do not reach a certain popularity threshold is only acceptable when we consider absolute popularity in the country alone. This approach requires less testing and support effort, but then our product’s behavior on some platforms is not guaranteed at all. If we do want to support all possible platforms, we must take this into account. For example, if some browser Z, which has only 3% of absolute popularity, is the only supported browser on a certain platform or OS, it makes sense to include it in the scope, and even give it a higher priority.


When to start

So, we have determined the scope for testing and are waiting for the moment when the product is ready for compatibility testing. How do we avoid missing this moment and starting too late?

Of course, it is best to wait for functional testing to be completed. When all the planned features have been built and tested functionally, shown to the customer at a demo, and have undergone stabilization, we can start compatibility testing. This gives some guarantee that there will be no further changes in the code and that you will not have to retest in environments you have already covered.

However, by this stage there is not always enough time left for full testing, so we can also start compatibility testing whenever a new feature is ready, bearing in mind the risk of further changes to those features.


What to test

Naturally, it isn’t worth performing a full set of functional tests in each browser. Functional testing makes sense only for the functions that run on the client side. If your product is an online store and the price calculation for the goods in the shopping cart is done in the backend, there is no point in testing it in every supported environment. In other cases, the emphasis should be on visual checks.
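
A small sketch of this distinction, with hypothetical function names and endpoint:

```typescript
// Client-side logic: runs in the user's browser, so its behavior can
// vary per browser/OS combination and belongs in the compatibility scope.
function validateEmail(value: string): boolean {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Server-side logic: the price calculation happens in the backend no
// matter which browser calls it, so one functional pass is enough.
// The endpoint below is hypothetical.
async function cartTotal(cartId: string): Promise<number> {
  const res = await fetch(`/api/cart/${cartId}/total`);
  const body = (await res.json()) as { total: number };
  return body.total;
}
```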

Special attention should be given to how elements are placed on a page (that is, to the layout), to controls, and to other things that depend on the browser and the platform. Take date pickers, for example: each platform should show its native date picker.
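
A plain date input is enough to see this in action – the same element gets a different native picker on each platform (a minimal sketch):

```typescript
// The same element, rendered differently everywhere: desktop Chrome,
// Safari on iOS, and Chrome on Android each show their own native picker.
const dateInput = document.createElement("input");
dateInput.type = "date";
document.body.appendChild(dateInput);
```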

  • A good practice is to make a smoke test mandatory in each browser.
  • It is convenient to track the work performed in an Excel table, where columns are browsers and rows are user stories or features. If you know which features require functional tests during compatibility testing and have this table at hand, the question of what to test answers itself (a toy sketch of generating such a table follows this list).
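
As a toy illustration with made-up feature and browser names, such a matrix is easy to generate as CSV and import into Excel:

```typescript
// Generate an empty tracking matrix as CSV: columns are browsers,
// rows are features or user stories.
const browsers = ["Chrome", "Firefox", "Safari", "Edge"];
const features = ["Login", "Product search", "Shopping cart", "Checkout"];

const header = ["Feature", ...browsers].join(",");
const rows = features.map((feature) =>
  [feature, ...browsers.map(() => "pending")].join(",")
);
console.log([header, ...rows].join("\n"));
```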


When to finish

Ideally, you finish when all environments from P1 to P3 are covered.

In reality, you finish after you have done what there is time for. At the planning stage with the customer, you should discuss situations when “we don’t have time for everything” and consider risks and assumptions. For example, you can agree that P1-P2 will be tested in any case, but P3 will be tested if there is time left for this.


What to remember

  • Browser versions. Generally, this parameter can be ignored, since any respectable browser has auto-update enabled by default, and mere mortals don’t usually disable it. Still, some customers ask for compatibility with the current and the major previous versions of certain browsers. By default, compatibility testing is done in the latest versions of browsers. If you want to be safe, look up the statistics for browser versions (fortunately, StatCounter allows this) and take the necessary measures. You might need to glance at how the result of your team’s common effort behaves in previous versions, to account for users who have not yet updated. Of course, this point does not apply to Internet Explorer.
  • Don’t forget that the screen resolution is a variable too. Statistics on the popularity of resolutions must also be studied. To avoid overloading the compatibility scope, you can include testing of different resolutions in the functional tests of individual features, as there are far more “browser-OS-platform-resolution” combinations than “browser-OS-platform” ones.
  • Keep in mind that fixes of visual defects must be verified in all browsers, not only in the one where the defect was detected. There is no guarantee that fixing a defect in one browser will not break something in another.
  • Some browsers have extended settings for cookies. For example, Internet Explorer 11 has as many as 7 options – from a complete ban on cookies to full permission. Whether the product works correctly should be tested with all possible settings. You haven’t forgotten that a proper error message should be shown when cookies are disabled, have you? (JavaScript is no exception; see the sketch after this list.)
  • Remember about platform-specific gestures and check how adequately your product responds to them. This is especially true for touch input on mobile devices.
  • There are rumors that, if browsers run on the same engine, tests for the functions executed on the client side can be performed in only one of them. Seems legit.
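
Returning to the cookie point above: here is a minimal client-side sketch of detecting disabled cookies before relying on them. The probe cookie name and warning text are made up, and the real UI would be product-specific; navigator.cookieEnabled alone is not fully reliable, so a write-read round trip serves as a fallback:

```typescript
// Detect whether cookies are actually usable before relying on them.
function cookiesEnabled(): boolean {
  if (!navigator.cookieEnabled) return false;
  try {
    document.cookie = "cookietest=1; SameSite=Lax";
    const ok = document.cookie.indexOf("cookietest=") !== -1;
    // Remove the probe cookie again.
    document.cookie = "cookietest=1; expires=Thu, 01 Jan 1970 00:00:00 GMT";
    return ok;
  } catch {
    return false;
  }
}

if (!cookiesEnabled()) {
  // The warning UI is product-specific; a plain alert stands in here.
  alert("This site requires cookies. Please enable them and reload the page.");
}
```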


Afterword

It might seem at first sight that the idea is very simple – you pick a bunch of the most popular browsers and test in them. But once you dig deeper, you uncover a number of details and nuances that could fill a thesis paper. I hope this article helps you pick up new ideas or see from a new perspective the things you haven’t questioned for a long time.

Eugene Tkachenko