DATA PROCESSING TECHNIQUES

Processing systems make use of one or more processing techniques, depending on the requirements of the application. These techniques can be categorised according to the ways in which data is controlled, stored and passed through the system. The major categories are: batch processing; on-line processing, which includes real-time and time-sharing; distributed processing; and centralised processing.

Batch Processing: The batch processing method processes batches of data at regular intervals. It closely resembles manual methods of data handling, in that transactions are collected together into batches, sent to the computer centre, sorted into the order of the master file and processed. Such systems are known as traditional data processing systems. Batch processing has the advantage of providing many opportunities for controlling the accuracy of data; conversely, its disadvantage is the delay between the time a transaction is collected and the time the result of processing is received.
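The batch cycle described above (collect transactions, sort them into master-file order, process them in one run) can be sketched in Python. The record layout, account numbers and update rule here are illustrative assumptions, not part of any particular system:

```python
# Minimal sketch of one batch run: transactions are collected during the
# day, sorted into the key order of the master file, then applied in a
# single sequential pass (as a tape-era update would be).

master_file = {101: 500, 102: 250, 103: 75}   # account -> balance

# A day's batch of transactions, in arrival order: (account, amount).
batch = [(103, -25), (101, 40), (102, 10), (101, -60)]

# Sort the batch into master-file (key) order before processing.
batch.sort(key=lambda txn: txn[0])

# One sequential pass applies the whole batch to the master file.
for account, amount in batch:
    master_file[account] += amount

print(master_file)  # {101: 480, 102: 260, 103: 50}
```

The sort step is what makes a single sequential pass over the master file possible; it is also where the delay inherent in batch processing comes from, since nothing is applied until the whole batch is in.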

On-line Processing System: On-line processing systems are those in which all peripherals in use are connected to the CPU of the main computer. Transactions can be keyed in directly. The main advantage of an on-line system is the reduction in the time between the collection and processing of data. The two main methods of on-line processing are real-time processing and time-sharing.
Real-time Processing: This refers to a situation in which an event is monitored and controlled by a computer. For example, a car's engine performance can be monitored and controlled by sensing and immediately responding to changes in factors such as air temperature, ignition timing or engine load. Microprocessors dedicated to particular functions in this way are known as embedded systems.
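A real-time system of this kind is essentially a monitor-and-respond loop. The sketch below simulates one: the sensor readings, threshold and actuator commands are made up for illustration and do not describe any real engine controller:

```python
# Illustrative monitor-and-respond loop for a real-time controller.
# The sensor values, the 100-degree threshold and the command names
# are hypothetical; a real embedded system would read hardware sensors.

def control_step(engine_temp_c):
    """One monitoring cycle: read a value, respond immediately."""
    if engine_temp_c > 100:
        return "open_coolant_valve"
    return "hold"

# A stream of simulated temperature readings, one per cycle.
readings = [90, 97, 104, 99]
commands = [control_step(t) for t in readings]
print(commands)  # ['hold', 'hold', 'open_coolant_valve', 'hold']
```

The defining property is not the logic (which is trivial) but the timing: each `control_step` must complete within the interval between sensor readings.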
Time-Sharing: This refers to the activity of the computer's processor in allocating time-slices to a number of users, who are given access to centralised computer resources through terminals. The aim of the system is to give each user a good response time. The processor time-slices are allocated and controlled by a time-sharing operating system.

Distributed Processing

As the term suggests, a distributed processing system is one which spreads the processing tasks of an organisation across several computer systems; frequently, these systems are connected and share resources (such as common access to files or programs) through a data communication system. Each computer system in the network should be able to process data independently.
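The essential pattern (each site processes its own share of the data independently, and the partial results are then combined) can be sketched without any actual network. Here the "nodes" are plain functions and the round-robin partitioning scheme is an illustrative assumption:

```python
# Toy sketch of distributed processing: partition the data, let each
# "node" process its shard independently, then combine the results.
# In a real system each node would be a separate machine reached over
# a data communication link; here they are simulated in one process.

def node_process(shard):
    """Work done locally at one site: here, summing its records."""
    return sum(shard)

records = list(range(1, 11))   # the organisation's data: 1..10
num_nodes = 3

# Round-robin partition of the records among the nodes.
shards = [records[i::num_nodes] for i in range(num_nodes)]

# Each node computes its partial result independently...
partials = [node_process(shard) for shard in shards]

# ...and a coordinator combines the partial results.
total = sum(partials)
print(partials, total)  # [22, 15, 18] 55
```

The property the prose emphasises, that each node can process on its own, is what distinguishes this from a centralised system: a node failure loses one shard's work, not the whole computation.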

Centralised System 

With this type of system, all processing is carried out centrally, generally by a mainframe computer. Terminals and networked devices are linked to the central computer, where all processing takes place.


Multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly - in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.
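The claim above, that programs waiting on slow input/output cost little, can be demonstrated with Python's `asyncio`, where `asyncio.sleep` stands in for a slow device. This is a sketch of the principle, not of how an operating system implements it: two tasks that each "wait" 0.2 seconds finish together in roughly 0.2 seconds, not 0.4:

```python
import asyncio
import time

# While one task is waiting on (simulated) I/O, the other runs, so the
# elapsed time is close to the longest single wait, not the sum of both.

async def fake_io_task(name, wait_s):
    await asyncio.sleep(wait_s)   # "waiting for a slow device"
    return name

async def main():
    start = time.monotonic()
    results = await asyncio.gather(
        fake_io_task("task_a", 0.2),
        fake_io_task("task_b", 0.2),
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)         # ['task_a', 'task_b']
print(elapsed < 0.35)  # the waits overlapped rather than running back-to-back
```

`asyncio` is cooperative rather than interrupt-driven, but it illustrates the same economics: time a program would spend blocked is handed to another program.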

Multiprocessing

Some computers divide their work between two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was used only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.
Supercomputers in particular often have distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customised high-speed interconnects, and specialised computing hardware. Such designs tend to be useful only for specialised tasks, because of the large-scale program organisation required to utilise most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as in other so-called "embarrassingly parallel" tasks.
