Q. What would you say is the most interesting trend that you have observed about producing secure and safe software?
A. One thing that has become obvious to me over the years is that you can achieve the same aims in many different ways. Some people can produce software very quickly, whereas for others it can take a very long time. It all depends on the skills of the people involved and their experience. The first time takes a very long time, the second time you can do it much faster, and over the years it speeds up a lot more. When we first started we were very cautious because our guys were dealing with nuclear power, and so they had to be very careful. As we moved on to more commercial sectors such as aerospace, our teams found themselves under the same time pressures as everybody else. Now the drive for efficiency is paramount, but even so, we still have to be careful.
Q. What would you say are the elements that have affected this learning curve?
A. As time goes by, the care and attention to detail required has increased. That has been impacted not only by the ability of people to learn, but also by the input of regulatory authorities whose positions on many things have changed significantly over the years. More and more industries are becoming regulated and each industry has its own distinctive way of doing things.
Q. How would you say the way industries and regulatory authorities interact has changed?
A. When we began, no one knew the regulatory authorities well, and there was almost no communication between them. Each industry was answerable to itself. The nuclear industry’s software specific regulations have always been under scrutiny, so they just had to be right. Aviation was the first commercial sector to put in place regulations, and both I and the company were very involved in the setting up of those early standards. Fast forward to today, and we have a much more mature regulatory system in place.
Q. I saw a discussion from 2009 where a group of people were comparing the benefits of static analysis and dynamic analysis but didn’t really conclude either way. Could you share some insights on how test engineers today look at static and dynamic analysis?
A. Dynamic analysis implies the execution of code. If you open your testing with dynamic analysis, the tendency is that you will find a number of faults. As each one appears, you have a problem because you have to change the program before you can continue. However, once you do that, you have to start the testing all over again because it is a new program and you cannot be sure what other parts of the system have been impacted by your change.
By applying static analysis first, you will recognise and correct many of the problems before any code is even executed. The sensible way to go is therefore to perform static analysis, correct the problems it shows up, and then do the dynamic analysis. That way, with a bit of luck, you might get right through the dynamic testing without having to change the program. So the two are very complementary, and to my mind it’s utterly stupid not to do both.
Q. How would you recommend that a design or development team should look at this when there are significant cost constraints before their first prototype?
A. Excluding either static or dynamic analysis would be a gamble. Static analysis enables you to find a certain class of faults – basically the technical faults – where the language has been misused or people have made silly mistakes in their programming. But static analysis reveals nothing about application faults. Only when you use dynamic analysis to execute the code can you expose problems with how the application itself implements the system’s requirements. So again, there is a difference between the types of faults you can find with each technique, which makes it clear that it is always best to do both.
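The distinction between the two fault classes can be made concrete with a minimal Python sketch. The function names, the speed-limit requirement, and the specific bug are all illustrative assumptions, not taken from the interview:

```python
# Hypothetical sketch of the two fault classes described above.
#
# Technical fault: a static analyser or linter flags things like an
# undefined name or a misused operator without ever running the code.
#
# Application fault: the code below is legal and passes any static
# check, but it implements the requirement ("brake when speed is at
# or above the limit") wrongly -- only a dynamic test exposes it.

def should_brake_buggy(speed_kmh: float, limit_kmh: float) -> bool:
    # Statically clean, yet mishandles the boundary case:
    # '>' should be '>='.
    return speed_kmh > limit_kmh

def should_brake(speed_kmh: float, limit_kmh: float) -> bool:
    # Corrected after a dynamic test exposed the boundary fault.
    return speed_kmh >= limit_kmh
```

A dynamic test of the boundary case (speed exactly equal to the limit) fails against the first version and passes against the second; no static tool could know that requirement.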
Q. Could you give an example of how algorithmic errors might be found using dynamic analysis?
A. Suppose a car is under software control, and a circumstance arises where the system fails to apply the brakes when it should have done, potentially resulting in a collision. Static analysis could not foresee that eventuality. Dynamic analysis could.
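A scenario like this can be sketched as a dynamic test in Python. Everything here is an illustrative assumption (the controller, the 2-second time-to-collision threshold, the scenario values), not a description of any real braking system:

```python
# Illustrative braking controller: command the brakes when the
# time-to-collision with an obstacle drops below a threshold.
BRAKE_TTC_SECONDS = 2.0  # assumed requirement threshold

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    # Not on a collision course if the gap is not closing.
    if closing_speed_mps <= 0.0:
        return float("inf")
    return distance_m / closing_speed_mps

def brakes_commanded(distance_m: float, closing_speed_mps: float) -> bool:
    return time_to_collision(distance_m, closing_speed_mps) < BRAKE_TTC_SECONDS
```

Static analysis can confirm the code is well-formed (for example, that division by zero is guarded against), but only a dynamic test that replays the scenario (say, an obstacle 10 m away closing at 10 m/s) can check that the brakes are actually commanded when the requirement demands it.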