1.0 Introduction
Today, it is widely recognized that the computer is essential to the entire world. During the 21st century, many people have realized that using a computer helps them accomplish many tasks more easily, whether for calculation or for management. The computer is therefore a very useful digital machine, but not everyone actually knows how it came to be. For that reason, this assignment discusses computer architecture.
This assignment is organized into four sections: introduction, content, conclusion and references, and each section goes into detail. First, the first question discusses several generations of the computer central processing unit (CPU), including the design and development of early CPUs. In that section, we compare them with the latest CPUs and consider how much faster the design and development of the latest CPU is compared with the earliest CPU inventions. After that, the second question of this assignment presents a diagram and discusses the bus system. In this part, we explain the bus system in more detail in terms of interconnection, transmission and architecture.
In summary, this assignment is about the function and structure of the computer. Its purpose is to present, as clearly and completely as possible, the characteristics and nature of modern-day computer systems. Although most of the sources for this assignment are taken from the internet and reference books, the objective is to present the material in a fashion that keeps new material in a clear context for readers.
2.0 Definition of question 1
Computer architecture can be defined as a specification detailing how a set of hardware and software technology standards interact to form a computer platform or system; it refers to how a computer system is designed and which technologies it is compatible with. Computer architecture can also refer to those attributes of a system that have a direct impact on the logical execution of a program. For example, architectural attributes include the instruction set and the number of bits used to represent various data types, such as numbers or characters.
Besides that, I/O mechanisms and techniques for addressing memory are also included among the architectural attributes. In addition, there are three types of computer architecture in daily use: system design, instruction set architecture (ISA) and computer organization (also known as microarchitecture). In short, computer architecture is mostly about determining what the user, system or technology needs and creating a logical design and standard based on those requirements.
(Techopedia.com, 2012-2013)
3.0 A brief history of the computer
The history of computer development is often described in terms of several different generations of computing devices. Each generation of computer is characterized by its technological development, the purpose of which was to create smaller, cheaper, more efficient, more powerful and more reliable devices.
3.1 First Generation (1940-1956) Vacuum Tubes
First-generation computers used vacuum tubes for circuitry and magnetic drums for memory. They were often enormous, taking up entire rooms, and they were very expensive to operate: they consumed a great deal of electricity and generated a lot of heat, which was a common cause of malfunctions.
First-generation computers relied on machine language, the lowest-level programming language, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The ENIAC and UNIVAC computers are notable examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau, in 1951.
3.2 Second Generation (1956-1963) Transistors
In second-generation computing devices, transistors replaced vacuum tubes. The transistor was invented in 1947 but did not see widespread use until the late 1950s. The transistor was far superior to the vacuum tube: it made computers more reliable than their first-generation predecessors and allowed them to become smaller, cheaper, faster and more energy efficient. Second-generation computers still generated a great deal of heat, though far less than vacuum-tube machines, and they still relied on punched cards for input and printouts for output.
In addition, second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of FORTRAN and COBOL.
During this generation, memory technology moved from magnetic drum to magnetic core, and these were the first computers that stored their instructions in memory. Organizations in the atomic energy industry began to use this type of computer to operate their systems.
3.3 Third Generation (1964-1971) Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which was a decisive prerequisite for increasing the speed and efficiency of computers. Keyboards, monitors and operating systems replaced punched cards and printouts: users interacted with third-generation computers through a keyboard and monitor and an interface to an operating system, which allowed the device to run many different applications at the same time, with a central program monitoring the memory. These computers were also smaller and cheaper than those of the previous generations.
3.4 Fourth Generation (1971-Present) Microprocessors
The microprocessor marked the fourth generation of computers: thousands of integrated circuits were built onto a single silicon chip. Unlike the first generation, a computer of this generation could fit into the palm of the hand.
For example, the Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to the input/output controls, on a single chip. IBM introduced its first computer for the home user in 1981, and Apple introduced the Macintosh in 1984. As these small computers became more powerful and efficient, microprocessors also moved beyond the realm of desktop computers into many products used in daily life, such as handheld devices. This generation also saw the development of GUIs, and fourth-generation computers could easily be linked together to form networks, which eventually led to the development of the internet.
3.5 Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth-generation computing devices are largely based on artificial intelligence. Although this is still in development, some applications, such as voice recognition, are already in use. The use of parallel processing and superconductors is helping to make artificial intelligence a reality, and technologies such as molecular and quantum computation and nanotechnology may radically change the face of computers. The goal of fifth-generation computing is to develop devices that respond properly to natural language input and are capable of learning and self-organization.
4.0 Comparison of the computer generations
Looking back over this assignment, we can see that there have been very big changes across the generations of computers: both the hardware and the software components have kept improving, becoming smaller, faster and more efficient. The table below shows the differences between the computer generations.
Comparison of the generations of computers

| | 1st generation | 2nd generation | 3rd generation | 4th generation | 5th generation |
| --- | --- | --- | --- | --- | --- |
| Period | 1940-1956 | 1956-1963 | 1964-1971 | 1971-present | Today-future |
| Circuitry | Vacuum tube | Transistor | Integrated circuits (IC) | Microprocessor (VLSI) | Artificial intelligence |
| Memory capacity | 20 KB | 128 KB | 1 MB | Semiconductor type & very high | - |
| Processing speed | 300 IPS | 300 IPS | 1 MIPS | Faster than 3rd generation | - |
| Programming languages | Assembly language | High-level languages (FORTRAN, ALGOL) | C, C++ | C, C++, Java | - |
| Power consumed | High | Less compared to 1st gen. | Less | Less | - |
| Size | Very large | Less space compared to 1st generation | Small & can be used in homes | Small & used in homes | - |
| Examples of computers | UNIVAC, EDVAC | IBM 1401, IBM 7094, CDC 3600, UNIVAC 1108 | IBM 360 series, 1900 series | Pentium series, multimedia, simulation | - |
5.0 Definition of question 2
In discussing bus interconnection, a bus can be defined as a communication pathway used to connect two or more devices within a computer system; it is also known as a shared transmission medium. When multiple devices are connected to the bus, a signal transmitted by any one device is available for reception by all the other devices attached to the bus. If two devices transmit during the same time period, their signals overlap and become garbled; therefore, only one device at a time can transmit successfully.
Typically, a bus consists of multiple communication pathways, or lines, within the computer. Each line is capable of transmitting signals representing binary 1 and binary 0. Over time, a sequence of binary digits can be transmitted across a single line; alternatively, several lines of the bus can be used to transmit binary digits simultaneously. For example, an 8-bit unit of data can be transmitted over eight bus lines in parallel.
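To illustrate the difference, here is a minimal sketch in Python (purely illustrative, not part of any real bus protocol) of sending the same 8-bit value serially over one line versus in parallel over eight lines:

```python
# Simplified model (illustrative only): transferring the 8-bit value 0b10110010
# either serially over one bus line or in parallel over eight bus lines.
value = 0b10110010
bits = [(value >> i) & 1 for i in range(8)]   # the individual binary digits

serial_cycles = [[bit] for bit in bits]       # one line: one bit per cycle -> 8 cycles
parallel_cycles = [bits]                      # eight lines: all bits in one cycle

print(len(serial_cycles), "cycles over 1 line")     # 8 cycles over 1 line
print(len(parallel_cycles), "cycle over 8 lines")   # 1 cycle over 8 lines
```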
A computer system contains a number of different buses that provide pathways between components at the various levels of the computer system hierarchy. A bus that connects major computer components such as the processor, memory and I/O is called a system bus. In short, the most common computer interconnection structures are based on the use of one or more system buses.
6.0 Bus Structure
A system bus typically consists of about 50 to 100 separate lines. Each line is assigned a particular meaning or function. Although there are many different bus designs, on any bus the lines can be classified into three functional groups: data, address and control lines. The diagram below shows the bus interconnection scheme.
[Diagram: bus interconnection scheme. The CPU, RAM, ROM and I/O modules are connected by the address lines, data lines and control lines of the bus.]
6.1 Data lines
Data lines provide a path for moving data among the system modules; collectively, these lines are called the data bus. The data bus may consist of 32, 64, 128 or even more separate lines, and the number of lines is referred to as the width of the data bus. Because each line can carry only one bit at a time, the number of lines determines how many bits can be transferred at once. The width of the data bus is therefore a key factor in determining overall system performance. For example, if the data bus is 32 bits wide and each instruction is 64 bits long, then the processor must access the memory module twice during each instruction cycle.
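The relationship between data bus width and the number of memory accesses can be sketched with a simple Python calculation (assuming instructions are fetched whole over the data bus):

```python
import math

def memory_accesses_per_instruction(instruction_bits, data_bus_width):
    """Number of bus transfers needed to fetch one instruction."""
    return math.ceil(instruction_bits / data_bus_width)

# The example from the text: 64-bit instructions over a 32-bit data bus.
print(memory_accesses_per_instruction(64, 32))   # 2 accesses per instruction cycle
print(memory_accesses_per_instruction(64, 64))   # 1 access with a wider data bus
```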
6.2 Address lines
Address lines are used to designate the source or destination of the data on the data bus. For example, if the processor wishes to read a word (8, 16 or 32 bits) of data from memory, it puts the address of the desired word on the address lines. The width of the address bus therefore determines the maximum possible memory capacity of the system. The address lines are generally also used to address I/O ports: the higher-order bits are used to select a particular module on the bus, and the lower-order bits select a memory location or I/O port within that module. For example, on an 8-bit address bus, address 01111111 and below might reference locations in a memory module (module 0) with 128 words of memory, while address 10000000 and above refer to devices attached to an I/O module (module 1).
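The higher-order/lower-order split can be sketched in Python as follows (a hypothetical decoder for the 8-bit example above, not any particular bus standard):

```python
def decode(address):
    """Split an 8-bit address into a module-select bit and a 7-bit offset.

    Hypothetical memory map matching the example above:
    addresses 0-127 -> memory module 0, addresses 128-255 -> I/O module 1.
    """
    module = (address >> 7) & 0b1          # higher-order bit selects the module
    offset = address & 0b0111_1111         # lower-order bits select the word or port
    return ("I/O module 1" if module else "memory module 0"), offset

print(decode(0b01111111))   # ('memory module 0', 127)
print(decode(0b10000000))   # ('I/O module 1', 0)
```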
6.3 Control lines
Control lines are used to control the access to and the use of the data and address lines. Because the data and address lines are shared by all components, there must be a means of controlling their use. Control signals transmit both command and timing information among the system modules: timing signals indicate the validity of data and address information, and command signals specify operations to be performed. Typical control lines include the following (a sketch of how some of these signals cooperate is shown after the list):
Memory write: causes data on the bus to be written into the addressed location.
Memory read: causes data from the addressed location to be placed on the bus.
I/O write: causes data on the bus to be output to the addressed I/O port.
I/O read: causes data from the addressed I/O port to be placed on the bus.
Transfer ACK: indicates that data have been accepted from or placed on the bus.
Bus request: indicates that a module needs to gain control of the bus.
Bus grant: indicates that a requesting module has been granted control of the bus.
Interrupt request: indicates that an interrupt is pending.
Interrupt ACK: acknowledges that the pending interrupt has been recognized.
Clock: used to synchronize operations.
Reset: initializes all modules.
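As a rough, hypothetical sketch (in Python, purely for illustration; real buses implement this in hardware), a memory-read transaction using some of these control signals could be modelled like this:

```python
# Hypothetical sketch (illustrative only) of how some control signals cooperate
# during one memory-read transaction.
memory = {0x10: 0xAB}                      # a tiny stand-in for a memory module

def memory_read(address):
    address_lines = address                # 1. master places the address on the address lines
    memory_read_line = True                # 2. master asserts the "memory read" control line
    data_lines = memory[address_lines] if memory_read_line else None
    transfer_ack = data_lines is not None  # 3. memory places data on the data lines, asserts ACK
    assert transfer_ack                    # 4. master sees the ACK and latches the data
    return data_lines

print(hex(memory_read(0x10)))              # 0xab
```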
7.0 Elements of Bus Design
Although a wide variety of bus implementations exist, a few basic parameters and design elements serve to differentiate buses. These elements of bus design can be classified as bus type, method of arbitration, timing, bus width and data transfer type.
7.1 Bus types
Bus lines can be separated into two generic types: dedicated and multiplexed. A dedicated bus has separate wires for addresses and data, which simplifies the bus protocol: a store operation, for example, can put both the address and the data onto the bus at the same time. A multiplexed bus uses the same lines to hold either an address or data at different times; this limits the number of pins a chip needs in order to attach physically to the bus, so for a given number of pins more data can usually be transferred.
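The trade-off can be sketched in Python as follows (a hypothetical comparison of the bus cycles needed for a single store operation; the cycle counts are illustrative, not those of any specific bus):

```python
# Illustrative comparison (not a real protocol): bus cycles needed for one store
# operation on a dedicated bus versus a multiplexed bus.

def dedicated_store(address, data):
    # Separate address and data wires: both travel in the same cycle.
    return [{"address_lines": address, "data_lines": data}]            # 1 cycle

def multiplexed_store(address, data):
    # Shared address/data wires: an address phase, then a data phase.
    return [{"ad_lines": address}, {"ad_lines": data}]                 # 2 cycles

print(len(dedicated_store(0x20, 0xFF)), "cycle(s) on a dedicated bus")      # 1
print(len(multiplexed_store(0x20, 0xFF)), "cycle(s) on a multiplexed bus")  # 2
```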
7.2 Method of Arbitration
The purpose of arbitration in bus design is to ensure that only one device at a time can put data onto the bus: many devices may sense the data, but only one can assert it. The bus arbitration protocol determines which device gets to use the bus at any given time. Bus arbitration can be centralized or distributed.
[Diagram: centralized arbitration.]
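In centralized arbitration, a single arbiter decides who gets the bus. A minimal Python sketch of a fixed-priority centralized arbiter might look like this (the priority ordering is an assumption made only for illustration):

```python
# Minimal sketch of a centralized, fixed-priority arbiter (hypothetical priorities).

def arbitrate(requests):
    """Grant the bus to the highest-priority master that has raised a bus request."""
    priority_order = ["CPU", "DMA controller", "I/O controller"]   # assumed ordering
    for master in priority_order:
        if requests.get(master, False):
            return master          # this master receives the "bus grant" signal
    return None                    # no requests pending: the bus stays idle

print(arbitrate({"DMA controller": True, "I/O controller": True}))   # 'DMA controller'
print(arbitrate({}))                                                  # None
```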
7.3 Timing
Timing refers to the way in which events are coordinated on the bus. Buses typically use either synchronous or asynchronous timing. On a synchronous bus, a clock signal provides the timing for all operations: a device presents an address on a given clock pulse and expects the data during another, predefined clock pulse. On an asynchronous bus, the device instead waits for a ready signal indicating that the data are available. The diagrams below illustrate synchronous and asynchronous timing.
[Diagram: synchronous timing.]
[Diagram: asynchronous timing.]
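The difference between the two timing schemes can also be sketched in Python (a toy model; the cycle numbers and the "ready" handshake are assumptions made only for illustration):

```python
import itertools

# Illustrative contrast (not real bus logic) between a clocked read and a
# handshake-based read.

def synchronous_read(device_delay_cycles, value):
    SAMPLE_CYCLE = 2    # assumed protocol: address on cycle 0, data sampled on cycle 2
    assert device_delay_cycles <= SAMPLE_CYCLE, "device too slow for this clock"
    return value        # data are sampled blindly on the agreed clock pulse

def asynchronous_read(device_delay_cycles, value):
    for cycle in itertools.count():
        ready = cycle >= device_delay_cycles   # device raises "ready" when data are valid
        if ready:
            return value                       # master samples only after seeing "ready"

print(synchronous_read(2, 0x5A))    # works: the device meets the clocked deadline
print(asynchronous_read(7, 0x5A))   # also works: the handshake tolerates a slow device
```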
7.4 Bus width
The width of a bus is its number of lines. The more data lines there are, the more data can be transferred simultaneously; a 32-bit bus, for example, has 32 data lines. Likewise, the more address lines there are, the larger the maximum amount of memory that can be addressed. The greater the width, however, the more hardware is required to implement the bus.
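The effect of address-bus width on maximum memory capacity can be shown with a short Python calculation (assuming one address per word):

```python
def max_addressable_words(address_lines):
    """Maximum number of distinct addresses available with this many address lines."""
    return 2 ** address_lines

# Each additional address line doubles the amount of addressable memory.
for n in (8, 16, 32):
    print(n, "address lines ->", f"{max_addressable_words(n):,}", "addressable words")
```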
7.5 Data transfer type
Finally, buses support a variety of data transfer types, which can be classified as read, store (write), block and wait-state transfers. Their functions are as follows:
Read: a control line is used to request a fetch operation; the data from the addressed location are returned on the data lines.
Store: an address is placed on the address lines, and the data to be written follow on the data lines.
Block: the I/O controller can communicate with the CPU or the memory using blocks of data of arbitrary size.
Wait: when the CPU requests data from RAM or an I/O device, it may not be able to get the data on the next clock cycle, so wait states are inserted until the data are ready.
8.0 Conclusion
In conclusion, we can see that computer architecture is very important to our lives. The advancement of the computer has contributed much to modern society: more a necessity than a complication, computers make our lives more convenient and make more things possible. Therefore, I believe there are still big improvements to come in computer architecture.
The Fifth Generation Computer Systems (FGCS) was an initiative by Japan's Ministry of International Trade and Industry (MITI), begun in 1982, to create computers using massively parallel computing and logic programming. It was to be the result of a massive government/industry research project in Japan during the 1980s. It aimed to create an 'epoch-making computer' with supercomputer-like performance and to provide a platform for future developments in artificial intelligence. There was also an unrelated Russian project likewise named a fifth-generation computer (see Kronos (computer)).
Prof. Ehud Shapiro, in his 'Trip Report' paper[1] (which focused the FGCS project on concurrent logic programming as the software foundation for the project), captured the rationale and motivations driving this huge project:
'As part of Japan's effort to become a leader in the computer industry, the Institute for New Generation Computer Technology has launched a revolutionary ten-year plan for the development of large computer systems which will be applicable to knowledge information processing systems. These Fifth Generation computers will be built around the concepts of logic programming. In order to refute the accusation that Japan exploits knowledge from abroad without contributing any of its own, this project will stimulate original research and will make its results available to the international research community.'
The term 'fifth generation' was intended to convey the system as being a leap beyond existing machines. In the history of computing hardware, computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance.
The project was to create the computer over a ten-year period, after which it was considered ended and investment in a new 'sixth generation' project would begin. Opinions about its outcome are divided: either it was a failure, or it was ahead of its time.
Information
In the late 1960s to early 1970s, there was much talk about 'generations' of computer hardware, usually 'three generations'.
- First generation: Thermionic vacuum tubes. Mid-1940s. IBM pioneered the arrangement of vacuum tubes in pluggable modules. The IBM 650 was a first-generation computer.
- Second generation: Transistors. 1956. The era of miniaturization begins. Transistors are much smaller than vacuum tubes, draw less power, and generate less heat. Discrete transistors are soldered to circuit boards, with interconnections accomplished by stencil-screened conductive patterns on the reverse side. The IBM 7090 was a second-generation computer.
- Third generation: Integrated circuits (silicon chips containing multiple transistors). 1964. A pioneering example is the ACPX module used in the IBM 360/91, which, by stacking layers of silicon over a ceramic substrate, accommodated over 20 transistors per chip; the chips could be packed together onto a circuit board to achieve unheard-of logic densities. The IBM 360/91 was a hybrid second- and third-generation computer.
Omitted from this taxonomy is the 'zeroth-generation' computer based on metal gears (such as the IBM 407) or mechanical relays (such as the Mark I), and the post-third-generation computers based on Very Large Scale Integrated (VLSI) circuits.
There was also a parallel set of generations for software:
- First generation: Machine language.
- Second generation: Low-level programming languages such as Assembly language.
- Third generation: Structured high-level programming languages such as C, COBOL and FORTRAN.
- Fourth generation: 'Non-procedural' high-level programming languages (such as object-oriented languages)[2]
Throughout these multiple generations up to the 1970s, Japan had largely been a follower in the computing arena, building computers following U.S. and British leads. The Ministry of International Trade and Industry decided to attempt to break out of this follow-the-leader pattern, and in the mid-1970s started looking, on a small scale, into the future of computing. They asked the Japan Information Processing Development Center (JIPDEC) to indicate a number of future directions, and in 1979 offered a three-year contract to carry out more in-depth studies along with industry and academia. It was during this period that the term 'fifth-generation computer' started to be used.
Prior to the 1970s, MITI guidance had successes such as an improved steel industry, the creation of the oil supertanker, the automotive industry, consumer electronics, and computer memory. MITI decided that the future was going to be information technology. However, the Japanese language, in both written and spoken form, presented and still presents major obstacles for computers. These hurdles could not be taken lightly. So MITI held a conference and invited people around the world to help them.
The primary fields for investigation from this initial project were:
- Inference computer technologies for knowledge processing
- Computer technologies to process large-scale data bases and knowledge bases
- High performance workstations
- Distributed functional computer technologies
- Super-computers for scientific calculation
The project imagined an 'epoch-making computer' with supercomputer-like performance using massively parallel computing/processing. The aim was to build parallel computers for artificial intelligence applications using concurrent logic programming. The FGCS project and its vast findings contributed greatly to the development of the concurrent logic programming field.
The target defined by the FGCS project was to develop 'Knowledge Information Processing systems' (roughly meaning, applied artificial intelligence). The chosen tool to implement this goal was logic programming. The logic programming approach was characterized by Maarten Van Emden, one of its founders, as:[3]
- The use of logic to express information in a computer.
- The use of logic to present problems to a computer.
- The use of logical inference to solve these problems.
More technically, it can be summed up in two equations:
- Program = Set of axioms.
- Computation = Proof of a statement from axioms.
The axioms typically used are universal axioms of a restricted form, called Horn clauses or definite clauses. The statement proved in a computation is an existential statement. The proof is constructive and provides values for the existentially quantified variables: these values constitute the output of the computation.
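As a simplified, hypothetical illustration of these two equations (in Python, using ground propositional clauses rather than the first-order clauses and Prolog machinery actually used in the project), a 'program' of axioms and a computation that proves a goal from them might look like this:

```python
# Hypothetical illustration (not FGCS code): a propositional Horn-clause "program"
# and a tiny forward-chaining prover. Each clause means: if every atom in the body
# holds, then the head holds; facts are clauses with an empty body.
program = [
    ("parent(abe,bob)", []),                                           # axiom (fact)
    ("parent(bob,cal)", []),                                           # axiom (fact)
    ("grandparent(abe,cal)", ["parent(abe,bob)", "parent(bob,cal)"]),  # axiom (rule)
]

def prove(goal, clauses):
    """Computation as proof: derive everything that follows from the axioms."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in derived and all(atom in derived for atom in body):
                derived.add(head)
                changed = True
    return goal in derived

print(prove("grandparent(abe,cal)", program))   # True: provable from the axioms
print(prove("grandparent(cal,abe)", program))   # False: not derivable
```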
Logic programming was thought of as something that unified various fields of computer science (software engineering, databases, computer architecture and artificial intelligence). It seemed that logic programming was the 'missing link' between knowledge engineering and parallel computer architectures.
The project imagined a parallel processing computer running on top of massive databases (as opposed to a traditional filesystem) using a logic programming language to define and access the data. They envisioned building a prototype machine with performance between 100M and 1G LIPS, where a LIPS is a Logical Inference Per Second. At the time typical workstation machines were capable of about 100k LIPS. They proposed to build this machine over a ten-year period, 3 years for initial R&D, 4 years for building various subsystems, and a final 3 years to complete a working prototype system. In 1982 the government decided to go ahead with the project, and established the Institute for New Generation Computer Technology (ICOT) through joint investment with various Japanese computer companies.
In the same year, during a visit to the ICOT, Prof. Ehud Shapiro invented Concurrent Prolog, a novel concurrent programming language that integrated logic programming and concurrent programming. Concurrent Prolog is a logic programming language designed for concurrent programming and parallel execution. It is a process oriented language, which embodies dataflow synchronization and guarded-command indeterminacy as its basic control mechanisms. Shapiro described the language in a Report marked as ICOT Technical Report 003,[4] which presented a Concurrent Prolog interpreter written in Prolog. Shapiro's work on Concurrent Prolog inspired a change in the direction of the FGCS from focusing on parallel implementation of Prolog to the focus on concurrent logic programming as the software foundation for the project. It also inspired the concurrent logic programming language Guarded Horn Clauses (GHC) by Ueda, which was the basis of KL1, the programming language that was finally designed and implemented by the FGCS project as its core programming language.
Implementation
So ingrained was the belief that parallel computing was the future of all performance gains that the Fifth-Generation project generated a great deal of apprehension in the computer field. After having seen the Japanese take over the consumer electronics field during the 1970s and apparently doing the same in the automotive world during the 1980s, the Japanese in the 1980s had a reputation for invincibility. Soon parallel projects were set up in the US as the Strategic Computing Initiative and the Microelectronics and Computer Technology Corporation (MCC), in the UK as Alvey, and in Europe as the European Strategic Program on Research in Information Technology (ESPRIT), as well as the European Computer‐Industry Research Centre (ECRC) in Munich, a collaboration between ICL in Britain, Bull in France, and Siemens in Germany.
Five running Parallel Inference Machines (PIM) were eventually produced: PIM/m, PIM/p, PIM/i, PIM/k, PIM/c. The project also produced applications to run on these systems, such as the parallel database management system Kappa, the legal reasoning system HELIC-II, and the automated theorem prover MGTP, as well as applications to bioinformatics.
Failure
The FGCS Project did not meet with commercial success for reasons similar to the Lisp machine companies and Thinking Machines. The highly parallel computer architecture was eventually surpassed in speed by less specialized hardware (for example, Sun workstations and Intel x86 machines). The project did produce a new generation of promising Japanese researchers. But after the FGCS Project, MITI stopped funding large-scale computer research projects, and the research momentum developed by the FGCS Project dissipated. However, MITI/ICOT embarked on a Sixth Generation Project in the 1990s.
A primary problem was the choice of concurrent logic programming as the bridge between the parallel computer architecture and the use of logic as a knowledge representation and problem solving language for AI applications. This never happened cleanly; a number of languages were developed, all with their own limitations. In particular, the committed choice feature of concurrent constraint logic programming interfered with the logical semantics of the languages.[5]
Another problem was that existing CPU performance quickly pushed through the 'obvious' barriers that experts perceived in the 1980s, and the value of parallel computing quickly dropped to the point where it was for some time used only in niche situations. Although a number of workstations of increasing capacity were designed and built over the project's lifespan, they generally found themselves soon outperformed by 'off the shelf' units available commercially.
The project also suffered from being on the wrong side of the technology curve. During its lifespan, GUIs became mainstream in computers; the internet enabled locally stored databases to become distributed; and even simple research projects provided better real-world results in data mining. Moreover, the project found that the promises of logic programming were largely negated by the use of committed choice.
At the end of the ten-year period, the project had spent over ¥50 billion (about US$400 million at 1992 exchange rates) and was terminated without having met its goals. The workstations had no appeal in a market where general purpose systems could now take over their job and even outrun them. This is parallel to the Lisp machine market, where rule-based systems such as CLIPS could run on general-purpose computers, making expensive Lisp machines unnecessary.[6]
Ahead of its time
In spite of the possibility of considering the project a failure, many of the approaches envisioned in the Fifth-Generation project, such as logic programming distributed over massive knowledge-bases, are now being re-interpreted in current technologies. For example, the Web Ontology Language (OWL) employs several layers of logic-based knowledge representation systems. It appears, however, that these new technologies reinvented rather than leveraged approaches investigated under the Fifth-Generation initiative.
In the early 21st century, many flavors of parallel computing began to proliferate, including multi-core architectures at the low-end and massively parallel processing at the high end. When clock speeds of CPUs began to move into the 3–5 GHz range, CPU power dissipation and other problems became more important. The ability of industry to produce ever-faster single CPU systems (linked to Moore's Law about the periodic doubling of transistor counts) began to be threatened. Ordinary consumer machines and game consoles began to have parallel processors like the Intel Core, AMD K10, and Cell. Graphics card companies like Nvidia and AMD began introducing large parallel systems like CUDA and OpenCL. Again, however, it is not clear that these developments were facilitated in any significant way by the Fifth-Generation project.
In summary, a strong case can be made that the Fifth-Generation project was ahead of its time, but it is debatable whether this counters or justifies claims that it was a failure.
References
1. Shapiro, Ehud Y. 'The fifth generation project—a trip report.' Communications of the ACM 26.9 (1983): 637-641.
2. http://www.rogerclarke.com/SOS/SwareGenns.html
3. Van Emden, Maarten H., and Robert A. Kowalski. 'The semantics of predicate logic as a programming language.' Journal of the ACM 23.4 (1976): 733-742.
4. Shapiro, E. A subset of Concurrent Prolog and its interpreter, ICOT Technical Report TR-003, Institute for New Generation Computer Technology, Tokyo, 1983. Also in Concurrent Prolog: Collected Papers, E. Shapiro (ed.), MIT Press, 1987, Chapter 2.
5. Hewitt, Carl. Inconsistency Robustness in Logic Programming. ArXiv, 2009.
6. Hendler, James (1 March 2008). 'Avoiding Another AI Winter' (PDF). IEEE Intelligent Systems. 23 (2): 2-4. doi:10.1109/MIS.2008.20. Archived from the original (PDF) on 12 February 2012.