The Bug
The term bug is used colloquially to describe an error in a software program. A distinction is made between syntax errors, runtime errors, design errors, logic errors, and errors in the operating system. Special programs are used to detect bugs. The term "bug" probably originated in the 19th century, when faulty telephone lines were assumed to be infested with bugs; the usage was later carried over to faulty computers.
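To make the distinction between some of these classes concrete, here is a minimal Python sketch (the function and values are invented for illustration): a syntax error prevents the program from starting at all, a runtime error aborts it during execution, and a logic error lets it run to completion with a wrong result.

    # Syntax error: the interpreter rejects the file before anything runs, e.g.
    # print("hello"        <- missing closing parenthesis

    def average(values):
        # Logic error: dividing by a hard-coded 2 instead of len(values).
        # The program runs without complaint but returns wrong results.
        return sum(values) / 2

    print(average([1, 2, 3]))  # prints 3.0; the correct average is 2.0

    # Runtime error: the program starts normally but aborts here,
    # because the empty list leads to a division by zero.
    print(sum([]) / len([]))   # raises ZeroDivisionError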
Inevitability of bugs
Even when programmers work very conscientiously, bugs cannot be avoided entirely. A common rule of thumb assumes that out of 1,000 lines of code, two to three contain an error, either because the program's logic was not implemented correctly or because the program is embedded in an environment that is itself faulty. In general, the earlier a bug is introduced during development and the later it is discovered, the more damage it can potentially cause and the harder it is to fix. It is therefore helpful to plan all important steps of development carefully in advance. Development is divided into several phases: planning first, followed by analysis, design, programming, the test phase, and finally operation.
Test phases and troubleshooting
For the test phase, it makes sense to write the test program before the final implementation of the actual software, so that the tests are not tailored to the program but can check it independently. To track down a bug, developers use special programming tools, so-called "debuggers". These tools execute the application under analysis step by step and display the contents of all variables, which makes it possible to detect deviations between actual and expected values. As part of the test phase, some software vendors release beta versions of their programs. In this way, the programs are tested publicly by a large group of users under a wide range of technical and usage conditions, and the vendor receives feedback on malfunctions before the final release.
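As a sketch of this test-first idea, assume a hypothetical function count_words that is supposed to return the number of words in a string; the test below encodes that requirement independently of the implementation and exposes an invented bug.

    import unittest

    def count_words(text):
        # Buggy implementation: split(" ") keeps empty fragments
        # between consecutive spaces and therefore miscounts.
        return len(text.split(" "))

    class TestCountWords(unittest.TestCase):
        # These cases were written against the requirement, before and
        # independently of the implementation above.
        def test_simple_sentence(self):
            self.assertEqual(count_words("one two three"), 3)

        def test_double_spaces(self):
            # Fails: the empty fragment makes count_words return 4.
            self.assertEqual(count_words("one two  three"), 3)

    if __name__ == "__main__":
        unittest.main()

The failing test narrows the defect down; to observe it in detail, the script can be executed piece by piece under Python's standard debugger (python -m pdb script.py), inspecting text.split(" ") to compare actual and expected values. The fix here is to call text.split() without an argument, which collapses runs of whitespace.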
Control standards and verification
In some areas, particularly strict control standards are common, including the military, transportation, aerospace, medicine and pharmaceuticals, and security, because the financial, human, and economic consequences of a malfunction would be severe. Even so, it is almost impossible for software to be completely error-free. In areas with stricter control standards there is therefore a method called formal verification, in which the correctness of a program is proven mathematically. This method is very laborious, however, and is therefore not used as standard practice. In addition, there is practical verification in the sense of the quality management standard ISO 9000: according to it, a defect exists only if a requirement is not fulfilled. If a piece of software fulfills all of its requirements, which are checked in a series of tests, it is considered practically error-free.
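A minimal sketch of this requirement-based notion of a defect, using a hypothetical discount function and invented requirements: each requirement becomes one check, and in the ISO 9000 sense a defect exists exactly when one of these checks fails. (Formal verification would instead prove such properties mathematically for all possible inputs, which is far more laborious.)

    def discount(price, percent):
        # Implementation under test (hypothetical example).
        return price * (1 - percent / 100)

    # Each entry states one requirement: (price, percent, expected result).
    requirements = [
        (100.0, 0, 100.0),    # R1: 0 % discount leaves the price unchanged
        (100.0, 50, 50.0),    # R2: 50 % discount halves the price
        (100.0, 100, 0.0),    # R3: 100 % discount makes the item free
    ]

    # The software counts as practically error-free when every check passes.
    for price, percent, expected in requirements:
        actual = discount(price, percent)
        assert actual == expected, f"defect: requirement violated, got {actual}"
    print("all requirements fulfilled - practically error-free")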