Hackproof

The term hackproof has long been used to assert that a product or system is immune to unauthorized remote access and manipulation. On a networked general-purpose computer, this property is mathematically impossible to achieve. Computer security experts acknowledge that there are "many different ways of hacking into a computer". What they seldom explain is that the number of ways to be hacked is uncountable. A countable set has a one-to-one relationship with the natural numbers {1, 2, 3, 4, ...}. A recursively enumerable set has some algorithm that will produce all of its members. An uncountable set, however, is an infinite set with too many elements to be countable, and therefore in most cases it is not recursively enumerable either. The number of possible hacks is uncountable, as explained below. Computer security research addresses these questions in great detail under the rubric of monitor functions and runtime safety. This article explains why being hackproof on a networked general-purpose computer is, at present, known to be mathematically impossible. It also provides pointers to related research and development on alternative computer architectures that may be truly hackproof.
Why is Being Hackproof so Hard?
Computer security includes preventing hacks. Being hackproof would mean protecting not only against all of the known hacks catalogued, e.g., in the Common Vulnerabilities and Exposures (CVE) database, but also against all possible future hacks. If you could count all of the possible hacks, then you could assign each one a number in some logical sequence, and you could provide an algorithm that would enumerate all of them. You might have to wait a long time for a given hack, but eventually, or with a big enough computer, it would be exposed as a hack. The set of all possible hacks, H, could be infinite, just like the natural numbers; but if you could arrange all of the possible hacks in some logical order, then some algorithm h could examine a piece of suspicious code c, and if h(c) == True then the code is a hack. This includes code that is not yet a known vulnerability. Such an algorithm, if it existed, need not be very complex even if the list of possible hacks is infinite (but countable). For example, the algorithm natural(x) examines the characters of x: if all are between 0 and 9, then x is a natural number, regardless of how big it is. If h(code) existed, it would likewise examine the code and return True or False regardless of how big the code is. In that case, being hackproof would be possible.
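The natural(x) example above can be sketched in a few lines. The function names and representation (numbers as digit strings) are illustrative choices, not part of any standard library; the point is that a countable set admits both a membership test and an enumerator:

```python
def natural(x: str) -> bool:
    """Decide membership in a countable set: x names a natural
    number iff every character is a digit 0-9, however long x is."""
    return len(x) > 0 and all(c in "0123456789" for c in x)


def enumerate_naturals():
    """The same set is recursively enumerable: this generator
    eventually produces every natural number as a string."""
    n = 0
    while True:
        yield str(n)
        n += 1
```

If the set of hacks were countable like this, an analogous pair of functions for hacks would exist in principle, however slow they might be in practice.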
However, on a general-purpose computer, the set of possible hacks is uncountable, and therefore no algorithm h(code) can exist that returns True when the code is a hack and False when it is not. The proof of this fact is somewhat cumbersome; it is implied in the technical papers cited below, has been proven, and is in the process of being published. The key point, however, is that being hackproof on networked general-purpose computers is impossible.
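The full proof is beyond the scope of this article, but a toy illustration (not a proof) shows one face of the problem: any fixed, computable detector can be evaded by rewriting code so the same behavior no longer matches the check. The detector and signature below are hypothetical, far simpler than real scanners:

```python
def signature_detector(code: str) -> bool:
    """A toy scanner that flags source text containing one known-bad
    signature.  Any fixed, computable check faces the same evasion
    problem, however long its signature list grows."""
    return "os.system(" in code


# Source text containing the known-bad signature is caught ...
known_bad = "import os\nos.system('rm -rf /')"

# ... but the same behavior, trivially obfuscated so the dangerous
# call is assembled at runtime, never contains the static signature.
evasive = "import os\ngetattr(os, 'sys' + 'tem')('rm -rf /')"
```

Each new signature can be dodged by a new rewriting trick, which is why enumerating known hacks, as the CVE database does, can never close the door.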
In the search for solutions to this quandary, a hackproof cognitive radio was described in 1999. The principle of that approach was that a software-defined radio (SDR) does not need general-purpose computing in order to transform voice or text input into radio waves for transmission. Similarly, the SDR does not need general-purpose computing to receive radio waves and translate them into voice and data in real time. Since the radio has only a few milliseconds to perform the transmit and receive functions, the property of general-purpose computers called μ-recursive function capability is not needed. The μ-recursive function capability includes open-ended search via the μ operator. An SDR does not need open-ended search; it uses only finite search, so it needs only the primitive recursive function capability, which informally is a DO loop in FORTRAN. The real issue, however, is the Instruction set architecture (ISA). All von Neumann architecture machines store instructions and data in the same memory. Thus, malware sharing memory with the Operating system can transform data into instructions and good instructions into more malware. Computing architectures like the Dataflow architecture do not necessarily share data and instructions in the same memory and thus may have fewer attack surfaces than general-purpose computers, but this remains a research question being pursued by academic and industrial research organizations.
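The distinction between finite search and the open-ended μ operator can be sketched as follows. The function names are illustrative, and Python stands in for the formalism; the essential difference is that the first loop carries an explicit bound (like a FORTRAN DO loop) while the second may never terminate:

```python
def bounded_search(limit, pred):
    """Primitive-recursive style search: an explicit bound, like a
    FORTRAN DO loop.  It always terminates, which is why it suits a
    real-time transmit/receive path with a millisecond budget."""
    for n in range(limit):
        if pred(n):
            return n
    return None            # searched the whole bound, found nothing


def mu(pred):
    """The mu operator of the mu-recursive functions: unbounded
    search for the least n satisfying pred.  If no such n exists,
    this loops forever -- the open-endedness an SDR does not need."""
    n = 0
    while not pred(n):
        n += 1
    return n
```

A machine restricted to bounded search gives up Turing-equivalence, which is exactly the trade the hackproof-radio approach proposes.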
Why are General Purpose Computers Not Hackproof?
Some familiarity with the theory of computing and related computer architecture is needed in order to define hackproof accurately, yet in a way that is accessible to general readers.
The concept of being hackproof (per Wiktionary.org) refers to a computer that cannot be hacked. Such a computer could not be remotely accessed in an unauthorized way, e.g. via the Internet. While cyber hygiene raises the bar to malware, being hackproof would completely shut the door.
Today, however, it is impossible for most computers to be hackproof. This is because of a fundamental characteristic of the hardware: the chip-sets have a theoretical property called Turing-equivalence. Turing proposed the Turing Machine (TM), a finite state machine (FSM) with an associated infinite tape on which data can be recorded and from which data can be read. A true TM is programmed in a very arcane binary language, but a TM can compute anything that is conceivable to compute.
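The TM described above, an FSM plus a tape, can be sketched in a few lines. The rule format and the blank symbol "_" are illustrative choices; a dict stands in for the infinite tape, and a step limit keeps the sketch from looping forever:

```python
def run_tm(rules, tape_str, state="q0", max_steps=1000):
    """Minimal Turing-machine sketch: a finite state machine plus an
    unbounded tape.  rules maps (state, symbol) to
    (next_state, write_symbol, move), where move is "L" or "R"."""
    tape = dict(enumerate(tape_str))   # dict stands in for the infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")   # "_" is the blank symbol
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state, "".join(tape[i] for i in sorted(tape)).strip("_")


# A two-rule machine that inverts a binary string, then halts on blank:
invert = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
```

Running the invert machine on "1011" halts with "0100" on the tape. The arcane part of a true TM is that programs like `invert` are themselves encoded in binary on the tape.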
Operating Systems Enable Hacks
An operating system (OS) is system software that manages computer hardware and software resources. An OS is supposed to provide common services for computer programs; it also exposes significant attack surfaces that hackers exploit. The operating system is a component of the system software in a computer system that also includes the graphical user interface (GUI) and Input/output driver software. The GUI and IO drivers can also harbor malware. In commercial general-purpose computers, an Application program requires services from an operating system. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources. The Cuckoo's Egg describes the first Advanced persistent threat, discovered in 1986 and reported in 1988-89, which was uncovered because of an accounting discrepancy between the costs allocated to actual users and the total processing resources used.
Computer architecture Enables Hacks
Computer hardware functions include Input/output and memory allocation. Although the operating system acts as an intermediary between programs and the computer hardware, the Instruction set architecture (ISA) itself is the root cause of hacks, because the hardware cannot tell the difference between data and instructions. Ericsson's book spends 200 pages describing the details of the Intel ISA and then provides one page per hack for the last 100 pages of the book, with the mantra "bits is bits". The Intel ISA, MIPS, and all the rest share the property that they provide general recursion, i.e., infinite looping, on which the proof of the uncountability of hacks is based. Although application code (apps) is usually executed directly by the hardware, apps frequently make system calls to OS functions. APT malware often lives in the threads of the OS, so it may infect an app when the OS libraries are called or when the app is interrupted by the OS, e.g., for IO. Operating systems are found on myriad common non-computing devices such as cellular phones, video game consoles, web servers, and Industrial control systems.
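The "bits is bits" problem can be illustrated with a toy von Neumann machine. The instruction names (SET, HALT) and the list-as-memory model are invented for illustration; the one faithful feature is that instructions and data live in the same memory, so a write aimed at a program cell rewrites the program itself:

```python
def run(memory, max_steps=20):
    """Toy von Neumann machine: ONE list holds both instructions and
    data.  The machine cannot tell which cells are which."""
    pc = 0
    for _ in range(max_steps):
        op = memory[pc]
        if not isinstance(op, tuple):       # a data cell reached the
            return f"executed data as code: {op!r}"   # program counter
        if op[0] == "HALT":
            return "halted"
        if op[0] == "SET":                  # memory[addr] = value
            _, addr, value = op
            memory[addr] = value
        pc += 1
    return "step limit reached"


# A write into the data region (cell 2) is harmless ...
safe = [("SET", 2, "payload"), ("HALT",), None]

# ... but the SAME instruction aimed at a program cell (cell 1)
# overwrites the HALT, and the machine then executes attacker data.
unsafe = [("SET", 1, "payload"), ("HALT",)]
```

In a Harvard-style or dataflow design, the write in `unsafe` would target a separate data memory and could not reach the instruction stream, which is the attack-surface argument made above.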
Computing Researchers are Focusing on the problem of Hacks
The US National Science Foundation (NSF) supports research in the theory of computing and computer architecture, including on the question of hackproof computing. NSF does not use the informal term hackproof, however; "runtime enforcement" is the relevant technical term. Quoting that reference, "Runtime enforcement mechanisms work by monitoring untrusted applications, to ensure that those applications obey desired policies." Paraphrasing the introduction to this article, runtime mechanisms are often called security or program monitors. They are popular and can be seen in Operating Systems, web browsers, spam filters, intrusion-detection systems, firewalls, access-control systems, etc. Despite their popularity and some initial efforts at modeling monitors formally, we lack satisfactory models of monitors in general.
Not having general models of runtime mechanisms is problematic because it prevents us from developing an accurate and reflective theory of runtime enforcement. On the other hand, if we can model runtime mechanisms accurately and generally, we should be able to use those models to better understand how real security mechanisms operate and what their limitations are, in terms of policies they can and cannot enforce.
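The simplest monitor in the runtime-enforcement literature can be sketched as a truncation monitor: it watches each action an untrusted program emits and halts the run at the first policy violation. The class name, action format, and policy below are illustrative assumptions, not any real enforcement framework:

```python
class TruncationMonitor:
    """Toy runtime-enforcement monitor: it observes each action an
    untrusted program emits and truncates the run at the first
    action the policy rejects."""

    def __init__(self, policy):
        self.policy = policy              # predicate over one action

    def run(self, actions):
        allowed = []
        for action in actions:
            if not self.policy(action):   # violation: stop the program
                return "blocked", allowed
            allowed.append(action)        # action passes; let it through
        return "completed", allowed


# Hypothetical policy: reads are fine; writes under /etc are not.
no_etc_writes = lambda a: not (a[0] == "write" and a[1].startswith("/etc"))

monitor = TruncationMonitor(no_etc_writes)
```

Given the trace [("read", "/tmp/x"), ("write", "/etc/passwd"), ("read", "/tmp/y")], this monitor blocks at the second action and the third never runs. The open research question above is precisely which policies monitors of this kind can and cannot enforce.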
 