A long-lived technology often undergoes major changes during its ongoing development, building upon the standards of a previous generation and probably continuing to do so in the future. Over time, Geographic Information System (GIS) architecture has moved away from the early monolithic architecture to a client-server model, and then to a more distributed architecture.
In the early days of application development, monolithic applications were a bit like rocks (Greek monolithos, consisting of a single stone: monos, single, alone + lithos, stone). They consisted of one large, single program containing all the functionality. As the scope of programming tasks increased, so did the need for reusability. Therefore, the software's tasks were separated into well-encapsulated entities, which could be reused whenever the same problem came up, or replaced individually by a new software generation. Unfortunately, reuse mainly happened at the level of source code within an application, where it was not accessible from other applications. The next step was therefore to create software modules with specific tasks, such as an application or database server, whose functionality could be accessed by a client computer. Over time, database applications not only grew in complexity but also took over many processing rules and tasks (stored procedures), which were not really modular. This contradicted the idea of reusability, so finally the middleware architecture evolved: a special middle layer handles all processing between client and server, and its open and modular design allows communication between all kinds of applications and modules.
Data exchange on the internet is based on different protocols, which are generally handled by the operating system. One important protocol is TCP/IP (Transmission Control Protocol/Internet Protocol), actually a suite of layered protocols. This means that each protocol layer builds upon the layer below, adding new functionality. At the lowest level, there are protocols implemented directly in the network adapter, which are responsible for communication with the actual network hardware (e.g. an Ethernet card). Above that are protocols that handle the connection and routing, and on top are the application protocols designed for tasks such as transferring files or sending and receiving e-mail.
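The layering idea can be sketched in a few lines of Python: an application-level exchange is carried by TCP (the transport layer), which the operating system in turn carries over IP and the network hardware. The echo "protocol", the use of the loopback interface, and the OS-chosen port below are illustrative assumptions, not part of the text.

```python
# Sketch: application data riding on top of TCP. The application code only
# formulates and interprets messages; TCP underneath handles connection
# setup, ordering, and retransmission.
import socket
import threading

def run_echo_server(server_sock):
    # Accept one connection and answer with a trivial application-level reply.
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)          # bytes delivered by TCP, in order
        conn.sendall(b"ECHO: " + data)  # application-layer response

# Bind to an OS-chosen free port on the loopback interface (an assumption
# made so the example runs without real network access).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

# The client speaks only its application protocol; the layers below are
# invisible to it.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply.decode())  # ECHO: hello
```

Note that neither endpoint deals with IP routing or Ethernet frames; those layers are supplied by the operating system, exactly as the paragraph above describes.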
Internet addresses can be symbolic or numeric. The symbolic form is for humans, because it is easier to read, for example: http://www.geod.baug.ethz.ch. Its corresponding machine-readable form would be 129.132.26.4, which is used by the IP protocol. The mapping between the two is done by the Domain Name System (DNS).
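This mapping can be observed directly with Python's resolver interface. Resolving a real hostname such as www.geod.baug.ethz.ch requires a reachable DNS server, and the returned address may change over time, so this runnable sketch uses "localhost", which resolves locally without any network access.

```python
# Symbolic-to-numeric address mapping via the system resolver (DNS or,
# for "localhost", the local hosts database).
import socket

name = "localhost"                    # symbolic address (assumption: local-only example)
address = socket.gethostbyname(name)  # numeric IPv4 address as a dotted string
print(name, "->", address)            # typically: localhost -> 127.0.0.1
```

For a public hostname, the same call would query DNS and return the site's current IPv4 address.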
Another important protocol is HTTP (Hypertext Transfer Protocol). It is most frequently used by web browsers (Firefox, Internet Explorer, Netscape/Mozilla, Opera, Safari and others) to access documents, images, video and sound. It defines how pages are formatted and transmitted, and what actions web servers and browsers should take in response to various commands. For example, when you enter a URL (Uniform Resource Locator) in the form "Protocol://Domain Name:Port/Directory/Filename", this actually sends an HTTP command to the web server directing it to send the requested web page back. But before this happens, the web server might first parse the page. This is done by a server-side scripting technology (ASP, CGI, PHP, etc.), which might in turn query a database. Only the processed page is then sent back. The browser receives this page, parses it too, and creates the output according to the instructions it contains (HTML, XML, JavaScript, SVG).
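Before any HTTP command is sent, the browser splits the URL into the parts named above. A minimal sketch using Python's standard library shows this decomposition; the URL reuses the host from the text, while the explicit port and the path are illustrative assumptions.

```python
# Splitting a URL of the form "Protocol://Domain Name:Port/Directory/Filename"
# into its components, as a browser does before contacting the server.
from urllib.parse import urlsplit

url = "http://www.geod.baug.ethz.ch:80/teaching/index.html"  # port and path are assumed for illustration
parts = urlsplit(url)

print(parts.scheme)    # http  -> the protocol to speak
print(parts.hostname)  # www.geod.baug.ethz.ch -> resolved to a numeric address via DNS
print(parts.port)      # 80    -> the standard HTTP port
print(parts.path)      # /teaching/index.html -> directory and filename requested
```

The scheme selects the protocol, the hostname is handed to DNS, and the path ends up in the HTTP request line (e.g. "GET /teaching/index.html HTTP/1.1") that the browser sends to the server.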