Distributed computing

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.[1]

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one computer.[2]

The word distributed in terms such as “distributed computing”, “distributed system”, “distributed programming”, and “distributed algorithm” originally referred to computer networks where individual computers were physically distributed within some geographical area.[3] The terms are nowadays used in a much wider sense, even when referring to autonomous processes that run on the same physical computer and interact with each other by message passing.[4]

While there is no single definition of a distributed system,[5] the following defining properties are commonly used:

  • There are several autonomous processes, each of which has its own local memory.[6]
  • The processes communicate with each other by message passing.[7]

The system may have a common goal, such as solving a large computational problem.[8] Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.[9]

Other typical properties of distributed systems include the following:

  • The system has to tolerate failures in individual computers.[10]
  • The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.[11]
  • Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.[12]

Concurrent, parallel, and distributed computing

(a)–(b) A distributed system.
(c) A parallel system.

The terms “concurrent computing”, “parallel computing”, and “distributed computing” have a lot of overlap, and no clear distinction exists between them.[13] The same system may be characterised both as “parallel” and “distributed”; the processors in a typical distributed system run concurrently in parallel.[14] Parallel computing may be seen as a particular tightly coupled form of distributed computing,[15] and distributed computing may be seen as a loosely coupled form of parallel computing.[16] Nevertheless, it is possible to roughly classify concurrent systems as “parallel” or “distributed” using the following criteria:

  • In parallel computing, all processors have access to a shared memory. Shared memory can be used to exchange information between processors.
  • In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.

The figure illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; as usual, the system is represented as a graph in which each node is a computer and each edge is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another over the available communication links. Figure (c) shows a parallel system in which each processor has direct access to a shared memory.

Theoretical foundations

Even though parallel and distributed systems have a lot in common, the two fields have traditionally studied algorithms and computational complexity using different models. The boundary is blurry: a Boolean circuit, a standard model of parallel computation, can itself be seen as a particular distributed system. Still, the two models take seemingly opposite directions:

  • In distributed computing, a distributed algorithm must solve the computational problem in any network topology.
  • In parallel computing, the designer of a parallel algorithm can choose the network topology.
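
The first criterion above — that a distributed algorithm must cope with whatever topology it is handed — is exemplified by flooding, the classic topology-agnostic broadcast. The sketch below simulates synchronous flooding rounds on a connected graph; the simulation harness and the example path graph are illustrative assumptions.

```python
def flood(adjacency, source):
    """Simulate synchronous flooding: in each round, every informed
    node forwards the message to all of its neighbours. Works on any
    connected topology; the round count equals the source's eccentricity."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adjacency):   # assumes a connected graph
        informed |= {nbr for node in informed for nbr in adjacency[node]}
        rounds += 1
    return rounds

# A 4-node path graph: 0 - 1 - 2 - 3
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(flood(path, 0))  # 3 rounds to reach the far end
print(flood(path, 1))  # 2 rounds from a more central node
```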

Nevertheless, many algorithms and results can be applied in both fields. An efficient distributed algorithm is usually also an efficient parallel algorithm. Conversely, many central results in distributed computing were originally presented as parallel algorithms; examples include the Cole–Vishkin algorithm for graph colouring.

There are also challenges that are specific to one of these fields. For example, the concept of speedup is central in parallel computing. Unique challenges in distributed computing include the following:

  • Challenges that are related to fault-tolerance, for example, consensus problems, Byzantine fault tolerance, and self-stabilisation.[18]
  • The asynchronous nature of a distributed system may necessitate clock synchronisation.
  • Information can only be transferred hop-by-hop from one node to another in the communication network, and reaching a distant node may require several transmissions. Therefore the design of an efficient distributed algorithm must take the network topology and the shortest-path distances between the nodes into account.[19]
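
The hop-by-hop constraint in the last point means the cost of reaching a node is governed by its shortest-path distance in the network graph. A minimal breadth-first-search sketch computing hop counts (the 4-node ring graph is an illustrative assumption):

```python
from collections import deque

def hop_distances(adjacency, source):
    """Breadth-first search: the first time a node is reached is via
    a fewest-hops path, so its distance is final when first recorded."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adjacency[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# A 4-node ring: 0 - 1 - 2 - 3 - 0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(hop_distances(ring, 0))  # {0: 0, 1: 1, 3: 1, 2: 2}
```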

Architectures

Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely-coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.

Distributed programming typically falls into one of several basic architectures or categories: client–server, 3-tier, n-tier, distributed objects, loose coupling, or tight coupling.

  • Client-server — Smart client code contacts the server for data, then formats and displays it to the user. Input at the client is committed back to the server when it represents a permanent change.
  • 3-tier architecture — Three tier systems move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are 3-Tier.
  • N-tier architecture — N-Tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
  • Tightly coupled (clustered) — refers typically to a cluster of machines that work closely together, running a shared process in parallel. The task is subdivided into parts, each of which is computed by an individual machine, and the partial results are then combined into the final result.
  • Peer-to-peer — an architecture where there is no special machine or machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and servers.
  • Space based — refers to an infrastructure that creates the illusion (virtualization) of a single address space. Data is transparently replicated according to application needs, achieving decoupling in time, space, and reference.
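
The client–server pattern at the top of the list can be sketched with plain sockets. This is a minimal illustration, not a production design: the one-shot server, the "committed:" acknowledgement protocol, and the use of a thread to host the server are illustrative assumptions.

```python
import socket
import threading

# Bind and listen before starting the client so the connection cannot race.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    data = conn.recv(1024)                 # the client's proposed change
    conn.sendall(b"committed: " + data)    # commit and acknowledge
    conn.close()

server = threading.Thread(target=serve_once)
server.start()

# The client submits a permanent change and waits for the server's reply.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"new record")
reply = cli.recv(1024)
cli.close()
server.join()
srv.close()
print(reply.decode())  # committed: new record
```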

Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a master/slave relationship. Alternatively, a “database-centric” architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database.[24]
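
The database-centric idea can be sketched as a shared task table: workers never exchange messages directly, they claim and complete work by reading and writing rows. The task-queue schema below is an illustrative assumption, and an in-memory SQLite database stands in for the shared server database a real deployment would use.

```python
import sqlite3

# The shared database is the only coordination channel between workers.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT, result INTEGER)")
db.execute("INSERT INTO tasks (status) VALUES ('pending'), ('pending')")
db.commit()

def worker(db):
    # Claim one pending task and publish the result back through the database.
    row = db.execute("SELECT id FROM tasks WHERE status = 'pending' LIMIT 1").fetchone()
    if row:
        db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
                   (row[0] * 10, row[0]))
        db.commit()

worker(db)
worker(db)
done = db.execute("SELECT count(*) FROM tasks WHERE status = 'done'").fetchone()[0]
print(done)  # 2
```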
