Process Control System

A process is broadly defined as an operation which uses resources to transform inputs into outputs, with the resource providing the energy needed for the transformation to occur. In the context of an industry, the term ‘process’ as used in ‘process control’ refers to the methods of changing and refining raw materials, which can be in a solid, liquid, gaseous, or slurry state, to create end products of specified properties. During the process, the raw materials are transferred, measured, mixed, heated or cooled, filtered, stored, or otherwise handled so that the desired output is obtained, and they undergo physical and chemical changes in their conversion into the end products. Normally an industry operates a number of processes, where each process shows a particular dynamic (time varying) behaviour which governs the transformation. This dynamic behaviour is determined by the physical and chemical properties of the inputs, the resource, and the process itself.

Process control is used in order to maximize production while maintaining a desired level of product quality and safety and making the process more economical. Since these objectives apply to a variety of industries, process control systems are used in facilities for the production of metals, chemicals, pulp and paper, food, and pharmaceuticals. While the methods of production vary from industry to industry, the principles of automatic process control are generic in nature and can be universally applied, regardless of the nature and the size of the plant.

In order to produce a product with consistently high quality, tight process control is necessary. As the processes become larger in scale and / or more complex, the role of process control becomes more and more important. When the processes become more complicated to operate, it is advantageous to make use of some form of automatic control. Automatic control of a process offers several advantages which include (i) improved process safety, (ii) satisfying environmental constraints, (iii) meeting product quality specifications which are becoming stricter with passing time, (iv) more efficient use of raw materials and energy, and (v) increased profitability.

There are two main reasons for process control. The first is to maintain the measured variable at its desired value when disturbances occur. The second is to respond to changes in the desired values of the process parameters. The desired values are those values which are needed for the process to run in a stable manner and to produce the specified product in an economic and safe way without adversely affecting the environment.

The process control system has several objectives. (i) It is central to maintaining product quality, since it keeps the process parameters within close limits so that products of consistent specifications are obtained; without control of the process, product quality can not only vary but the product can also be rejected. (ii) It preserves the safety of personnel, equipment, and the environment. (iii) It helps to maximize equipment productivity and hence production. (iv) It ensures that the consumption of raw materials, energy, and utilities remains within the specified consumption norms for the process. The process control system is also needed to keep the cost of production under control while overcoming the constraints put on the process by the environment. This control of cost is to be achieved without compromising safety or product quality.

A process control system consists of a controller and a plant, which can be a machine, vehicle, or process which is to be controlled. The controller is the system which is needed to produce a satisfactory result from the plant. A ‘manually controlled system’ is one where the controller is a human being. The alternative to this is an ‘automatically controlled system’, where the controller is a device, normally implemented electronically, either using analogue circuits or a digital computer (microprocessors). In manual control, the operator continuously monitors the process parameters which are being measured. When the operator finds that the process parameters are deviating from the norm, the operator takes corrective action to bring the process parameters back within the norm. In the case of automatic control, this manual function is done by a controller.

A simple example of process control is the supply of water to a number of cleaning stations, where the water temperature needs to be kept constant in spite of variations in demand. In the simple control arrangement shown in Fig 1a, steam and cold water are fed into a heat exchanger, where heat from the steam is used to bring the cold water to the required working temperature. A thermometer is used to measure the temperature of the water (the measured variable) leaving the process or exchanger. The temperature is observed by an operator, who adjusts the flow of steam (the manipulated variable) into the heat exchanger to keep the water flowing from the heat exchanger at the constant set temperature. This operation is referred to as process control, and in practice it can be automated as shown in Fig 1b.

Fig 1 Process control of a simple heat exchanger process loop

Process control is the automatic control of an output variable by sensing the amplitude of the output parameter from the process, comparing it to the desired or set level, and feeding an error signal back to control an input variable. In Fig 1b, a temperature sensor attached to the outlet pipe senses the temperature of the flowing water. As the demand for hot water increases or decreases, a change in the water temperature is sensed and converted to an electrical signal, amplified, and sent to a controller which evaluates the signal and sends a correction signal to an actuator. The actuator adjusts the flow of steam to the heat exchanger to keep the temperature of the water at its pre-determined value.
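The automated loop described above can be sketched in a few lines of code. The following is a minimal, illustrative simulation, not the control system of any real plant: a crude first-order model stands in for the heat exchanger, and a proportional-only controller adjusts the steam valve from the measured error. All names and numbers (gains, heat-loss coefficient, valve limits) are assumptions chosen purely for illustration.

```python
# Illustrative sketch of the loop in Fig 1b: a proportional controller
# adjusts steam flow to hold the outlet water temperature near a set point.
# The one-line process model and all constants are assumptions, not plant data.

def simulate_loop(set_point=60.0, t_initial=20.0, kp=2.0, steps=200):
    """Return the outlet temperature trajectory under proportional-only control."""
    temp = t_initial
    history = []
    for _ in range(steps):
        error = set_point - temp                         # measured deviation from set point
        steam_valve = max(0.0, min(100.0, kp * error))   # actuator limited to 0 - 100 %
        # crude process model: heating from steam, heat loss to the surroundings
        temp += 0.05 * steam_valve - 0.02 * (temp - 20.0)
        history.append(temp)
    return history

trajectory = simulate_loop()
```

Note that with proportional action alone the temperature settles somewhat below the set point; this steady-state offset is the reason integral action is added in practice, giving the PI and PID controllers discussed later in this article.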

The role of process control has changed throughout the years and is being continuously shaped by technology. The traditional role of process control in industrial operations was to contribute to safety, minimize environmental impact, and optimize processes by maintaining the process variables near their desired values. Normally, anything which needs continuous monitoring of an operation involves the role of a person in the control of the process. In earlier years, the monitoring of the processes was done at the unit and the process was maintained locally by an operator. These days the processes are fully automated, and the operator is aided by the distributed control system (DCS), which communicates with the instruments in the field. One of the purposes of controlling the process is to ensure the safety of the equipment as well as of the personnel working in the area around the process, since something going wrong can lead to serious accidents. A process control system is needed to perform either one or both of the following tasks.

Maintain the process at the operational conditions and set points – Many processes need to operate at steady state conditions, or in a state in which they satisfy organizational objectives such as budget, yield, safety, and other quality objectives. In several real life situations, a process does not always remain static under the working conditions because of various disturbances which take place in the environment in which the process operates. A disturbed process can cause substantial loss of process productivity and product quality. The process becomes unsteady when the process variables oscillate beyond their physical boundaries over a limited time span. The process can also stray away from steady state conditions because of changes in the environmental conditions, such as the composition of a feed, the temperature conditions, or the flow rate.

Transition the process from one operational condition to another – In real-life situations, a process operator can change the process operational conditions for a variety of reasons. Transitioning a process from one operating condition to another can disturb the process, resulting in process instability.

To give the desired output, a process is to operate within the set parameters. The process is said to be in stable operation if it is operating within the specified range of the set parameters. However, there can be deviations from the set parameters during the process operation, and there are some other effects which are not controllable. These deviations and non-controllable effects are called disturbances. Disturbances are caused by uncontrolled changes in the process inputs, which can be the feed condition, utility supply pressure or temperature, catalyst activity, or heat transfer efficiency. When a process experiences disturbances, the process variables deviate and the process becomes unstable. The disturbances can be both measurable and unmeasurable.

Industrial processes are normally dynamic in nature, and hence a process control system normally has dual tasks. Through the process control system, an operator (i) monitors certain process condition indicators and (ii) induces changes in appropriate variables in order to improve the process conditions. Control is exercised to maintain desired conditions in the process system by adjusting selected variables in the process system. A specific value or range is used as the desired value for the variable to be controlled. It is always better to utilize the simplest control system which can achieve the desired objectives. The design of a process determines how it is going to respond dynamically and how it can be controlled.

A process normally has four types of variables, namely (i) input variables, (ii) output variables, (iii) control variables, and (iv) state variables. Input variables are those variables which independently stimulate the process system and hence induce changes in the internal conditions of the process. An input variable shows the effect of the environment on the process and normally refers to those factors which influence the process. Input variables which are controllable are known as manipulated inputs, while those which are not controllable are disturbance variables.

Output variables are those variables by which an operator obtains information about the internal state of the process. These are the process outputs which affect the environment; an example is the flue gas containing carbon dioxide which results from a combustion reaction. These variables may or may not be measured. Control variables are those variables which the operator manipulates to keep the process under control. State variables are the minimum set of variables which are necessary for completely describing the internal state (or condition) of a process.

Control systems are used to maintain process conditions at their desired values by manipulating certain process variables to adjust the variables of interest. There are three types of process variables. These are (i) the controlled variable, (ii) the manipulated variable, and (iii) the load variable. The controlled variable is that variable which directly or indirectly indicates the form or state of the product, such as the temperature in a process reactor or the outlet temperature of the product or exit gas. The manipulated variable is that variable which is selected for adjustment by the controller so as to maintain the controlled variable at the desired value. All other independent variables except the controlled and manipulated variables are called load variables.

The process needs to be monitored by the operator so as to ensure that it operates in a stable condition and gives the desired output. For monitoring, the conditions (process parameters) of the process system are measured, and for this the operator depends on instruments. Further, each system has a control calculation, which is also called an algorithm, and the results of the calculation are implemented for the process to function correctly. The instruments used for controlling a process can be sensors, transmitters, controllers, and final control elements.

There are several common attributes of control systems which include (i) the ability to maintain the process variable at its desired value in spite of disturbances which can be experienced, and (ii) the ability to move the process variable from one setting to a new desired setting which is also known as set point tracking.

In the control system, the controller compares the measurement signal of the controlled variable to the set point (the desired value of the controlled variable). The difference between the two values is called the error, i.e., error = set point value – measurement signal of the controlled variable. Depending upon the magnitude and sign of the error, the controller takes suitable action by sending a signal to the final control element, which provides an input to the process to return the controlled variable to the set point.

The concept of using information about the deviation of the system from its desired state to control the system is called feed-back control. Information about the state of the system is ‘fed back’ to a controller, which utilizes this information to change the system in some way.

The type of control system shown in Fig 2 is termed a closed loop feed-back control system. Closed loop refers to the fact that the controller automatically acts to return the controlled variable to its desired value. In contrast, an open loop system has the measurement signal disconnected from the controller, and the controller output has to be manually adjusted to change the value of the controlled variable. An open loop system is sometimes said to be in manual mode as opposed to automatic mode (closed loop). Negative feedback is the most common type of signal feedback. Negative refers to the fact that the error signal is computed from the difference between the set point and the measured signal. The negative value of the measured signal is ‘fed back’ to the controller and added to the set point to compute the error.

Fig 2 Generalized process control system

Process control refers to the methods which are employed for controlling the variables of the process or processes used in the manufacturing of a product. It is the act of controlling a final control element to change the manipulated variable so as to maintain the process variable at a desired set point. These variables can be in the process inputs, in the process parameters, and in the process output. Control of the process is done to reduce its variability for getting the aimed product quality, improving the production rates, increasing the efficiency of the process, facilitating protection of the environment, and ensuring the safety of the personnel and equipment employed for the process.

A corollary to process control is that a controllable process behaves in a predictable manner. For a given change in the manipulated variable, the process variable responds in a predictable and consistent manner.

A process control system has two major control structures. These are (i) the single input-single output (SISO) control structure, and (ii) the multiple input-multiple output (MIMO) control structure. In the SISO control structure, for one control (output) variable there exists one manipulated (input) variable which is used to affect the process. In the MIMO control structure, there are several control (output) variables which are affected by several manipulated (input) variables used in the given process. The terminology used in process control systems is described below.

Cascade control – It is a control system with two or more controllers, with a ‘master’ and a ‘slave’ loop. The output of the ‘master’ controller is the set point for the ‘slave’ controller.
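As an illustration, the master / slave arrangement can be sketched as two simple controllers chained together, with the master's output supplying the slave's set point. This is a hypothetical sketch using proportional-only controllers and made-up gains; a real cascade loop runs both controllers continuously against live measurements.

```python
# Illustrative sketch of cascade control: the output of the 'master'
# (temperature) controller becomes the set point of the 'slave' (flow)
# controller. All gains and measurements are made-up example values.

def p_controller(set_point, measurement, gain):
    """Proportional-only controller: output is gain times the error."""
    return gain * (set_point - measurement)

# master loop: keeps the temperature at its set point by asking for a flow
temp_set_point = 150.0
temperature = 140.0
flow_set_point = p_controller(temp_set_point, temperature, gain=0.5)

# slave loop: keeps the flow at the set point supplied by the master
flow_measurement = 3.0
valve_signal = p_controller(flow_set_point, flow_measurement, gain=2.0)
```

The benefit of this arrangement is that the fast slave loop corrects flow upsets before they can disturb the slower temperature loop.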

Controllers – Controllers carry out the function of the brain. Controllers can be pneumatic, electronic, or digital computers.

Dead time – It is the amount of time which the process takes to start changing after a disturbance in the system.
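Dead time can be illustrated with a simple first-in, first-out buffer: a change applied at the input appears at the output only after a fixed number of samples have passed. The sketch below is purely illustrative; the function name and the three-sample delay are arbitrary assumptions.

```python
# Illustrative model of dead time as a FIFO delay line.
from collections import deque

def make_dead_time(steps, initial=0.0):
    """Return a function which delays its input by 'steps' samples."""
    buffer = deque([initial] * steps)
    def delay(value):
        out = buffer.popleft()   # the oldest sample leaves first
        buffer.append(value)     # the newest sample enters the queue
        return out
    return delay

delay = make_dead_time(steps=3)
outputs = [delay(v) for v in [1.0, 1.0, 1.0, 1.0, 1.0]]
# a step change at the input appears at the output only after 3 samples
```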

Derivative control – It is the ‘D’ part of a PID (proportional integral derivative) controller. With derivative action, the controller output is proportional to the rate of change of the process variable or error.

Error – In the process control systems, error is defined as the difference between set point and process variable and is given by the equation ‘Error = set point – process variable’.

Final control elements – Final control elements carry out the function of muscles. These are control valves, pumps, compressors, conveyors, and relay switches etc.

Integral control – It is the ‘I’ part of a PID controller. With integral action, the controller output is proportional to the magnitude and duration of the error signal.

PID controller – PID controllers are designed to eliminate the need for continuous operator attention. They are used to automatically adjust system variables to hold a process variable at a set point.

Proportional control – It is the ‘P’ part of a PID controller. With proportional action, the controller output is proportional to the magnitude of the error signal.
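The three actions defined above can be combined into a discrete-time PID sketch. This is an illustrative, minimal form with made-up tuning constants; industrial PID implementations add refinements such as output limits, anti-windup, and filtering of the derivative term.

```python
# Illustrative discrete-time PID controller combining the P, I, and D
# actions defined above. Tuning constants are arbitrary example values.

class PIDController:
    def __init__(self, kp, ki, kd, set_point, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.dt = dt
        self.integral = 0.0     # accumulates the magnitude and duration of the error
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.set_point - measurement               # error = set point - process variable
        self.integral += error * self.dt                   # I: magnitude and duration
        derivative = (error - self.prev_error) / self.dt   # D: rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=1.0, ki=0.1, kd=0.05, set_point=50.0)
out1 = pid.update(40.0)   # error 10: P term 10.0, I term 1.0, D term 0.5
out2 = pid.update(45.0)   # error shrinks, so the output falls
```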

Sensors – Sensors are those instruments which carry out the functions of vision, hearing, etc. These instruments measure temperature, level, flow, composition, etc. They include thermocouples, differential pressure meters, and gas analyzers.

Set point – The set point is the value at which the operator wants the controlled process variable to be maintained.

Transmitters – Transmitters are like neurons. They transmit signals such as air (pneumatic) signals or electric signals.

A block diagram showing some control elements is at Fig 3.

Fig 3 Block diagram showing some control elements

Design methodology for process control

The design methodology for process control is described below.

Understand the process – Before attempting to control a process, it is necessary to understand how the process works and what it does.

Identify the operating parameters – Once the process is well understood, operating parameters such as temperatures, pressures, flow rates, and other variables specific to the process are to be identified for its control.

Identify the hazardous conditions – In order to maintain a safe and hazard free facility, variables which can cause safety concerns are to be identified; these can need additional control.

Identify the measurables – It is important to identify the measurables which correspond with the operating parameters in order to control the process. Measurables for process systems include (i) temperature, (ii) pressure, (iii) flow rate, (iv) pH, (v) humidity, (vi) level, (vii) concentration, (viii) viscosity, (ix) conductivity, (x) turbidity, (xi) redox potential, (xii) electrical behaviour, and (xiii) flammability.

Identify the points of measurement – Once the measurables are identified, it is important to locate the place where they are to be measured so that the system can be accurately controlled.

Select measurement methods – Selecting the proper type of measurement device specific to the process ensures that the most accurate, stable, and cost effective method is chosen. There are several different signal types which can detect different things. These signal types include (i) electric, (ii) light, (iii) pneumatic, (iv) radio waves, (v) infrared (IR), and (vi) nuclear.

Select control method – In order to control the operating parameters, the proper control method is to be selected so that the process is controlled effectively. On / off is one control method and the other is continuous control. Continuous control involves proportional (P), integral (I), and derivative (D) methods, or some combination of these three.
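The on / off method can be sketched as a simple switching rule. A small dead band (hysteresis) is normally added so that the output does not chatter when the measurement sits near the set point; the band width below is an illustrative assumption.

```python
# Illustrative on / off control with a hysteresis (dead) band around
# the set point. All temperature values are made-up example numbers.

def on_off_control(measurement, set_point, band, heater_is_on):
    """Return the new heater state for an on / off temperature loop."""
    if measurement < set_point - band:
        return True                 # too cold: switch the heater on
    if measurement > set_point + band:
        return False                # too hot: switch the heater off
    return heater_is_on             # inside the band: keep the current state

state = False
state = on_off_control(58.0, set_point=60.0, band=1.0, heater_is_on=state)  # switches on
state = on_off_control(59.5, set_point=60.0, band=1.0, heater_is_on=state)  # stays on inside the band
state = on_off_control(61.5, set_point=60.0, band=1.0, heater_is_on=state)  # switches off
```

Without the band, the heater would switch on and off on every small fluctuation around the set point, wearing out the final control element.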

Select control system – Choosing between a local or a distributed control system which fits the process well affects both the cost and the efficacy of the overall control.

Set control limits – Understanding the operating parameters allows the ability to define the limits of the measurable parameters in the control system.

Define control logic – Choosing between feed forward, feed-back, cascade, ratio, or other control logic is a necessary decision based on the specific design and safety parameters of the system.

Create a redundancy system – Even the best control system has failure points. Hence, it is important to design a redundancy system to avoid catastrophic failures by having back-up controls in place.

Define a fail-safe – Fail-safes allow a system to return to a safe state after a breakdown of the control. This fail-safe allows the process to avoid hazardous conditions which can otherwise occur.

Set lead / lag criteria – Depending on the control logic used in the process, there can be lag times associated with the measurement of the operating parameters. Setting lead / lag times compensates for this effect and allows for accurate control.

Investigate effects of changes before / after – By investigating changes made by implementing the control system, unforeseen problems can be identified and corrected before they create hazardous conditions in the facility.

Integrate and test with other systems – The proper integration of a new control system with existing process systems avoids conflicts between multiple systems.

The design of the plant and equipment greatly influences the process control activities. It is also essential for the process to provide good dynamic performance. It is necessary that all the instruments and testing equipment are periodically and correctly calibrated, and the calibration of the instruments and testing equipment is to be traceable to national standards. Sensors are to be accurate and fast acting, with capacities suiting the equipment and process needs. A process can be controlled either manually or automatically with the help of the necessary instrumentation. Most of the automatic controls are implemented with electronic equipment which uses levels of current or voltage to represent the values to be communicated.

For good process control, it is necessary that the system is responsive and that only a few disturbances occur. In a responsive process control system, the controlled variables respond quickly to adjustments in the manipulated variables, and the frequency of disturbances also gets reduced. One is to remember that an operator can control a parameter only if it is measured; hence the location and selection of the sensors are very important. Sensors are to measure variables rapidly, reliably, and with sufficient accuracy.

Block diagram

A typical block diagram of a process with a single manipulated variable and a single controlled variable is shown in Fig 4. This diagram includes feed forward, feed-back, and supervisory control. The main aim of the feed-back controller of this process is to keep the controlled variable Y which is measured by some instrument as close as possible to the desired set point Ysp. The controlled variable can be the quality of the final product, which can be pressure, temperature, liquid level, composition, or any other inventory, environmental, or quality variable.

Fig 4 Typical block diagram of a process

Set points are frequently determined by a supervisory control system using real-time numerical optimization techniques. Several different types of final control elements exist but the most common one is a control valve for controlling some flow of material. The disturbance variable D, also called the load variable, can cause the controlled variable to deviate from its set point, needing control action in order to bring it back to its desired operating point.

Both feed-back and feed forward control can reduce the effects of disturbances, where each method has its own advantages and drawbacks. Disturbances can result from a variety of sources, including external environmental variables. These can occur randomly or have an underlying pattern to them, but in any case a disturbance variable cannot be influenced by the controller of the process. The error, or deviation, E between the controlled variable Y and its set point Ysp is the input to the feed-back controller, which changes the manipulated variable U in order to decrease the error. In a typical process plant, there can be hundreds or even thousands of control loops such as the one shown in Fig 4.

Feed-back control – The purpose of feed-back control is to keep the controlled variable close to its set point. This task can be achieved by computing the difference between the set point and the controlled variable and passing this as the input to the feed-back controller. By its design, the feed-back controller takes corrective action to reduce the deviation. Very frequently, the manipulated variable moves in a direction opposite in sign to the error, which is called negative feed-back; the manipulated variable moves in the direction which compensates for the error. Feed-back controllers have user specified parameters which can be adjusted to achieve the desirable dynamic performance.

Feed forward control – A feed-back controller can only take action after the controlled variable deviates from its desired set point and generates a non-zero error. However, the response to disturbances can be very sluggish if the process or the measurement responds very slowly. In such a situation, a feed forward controller can improve the performance. The feed forward controller predicts the effect which the disturbances have on the controlled variable and takes control action which counteracts the influence of the disturbances. Since this control action is taken based upon model predictions, it can minimize the effect which the disturbances have on the controlled variable before any unwanted deviations occur.

However, in order to make these predictions, the disturbances are to be measurable and a model for the effect which the disturbances have on the controlled variable is needed. Since it is not possible to predict and measure every disturbance which affects a process, feed forward control is normally combined with feed-back control. In such a configuration, the feed forward controller can quickly counteract the effect of the measurable disturbances, while the feed-back controller compensates for any offset resulting from the unmeasured load disturbances.
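The combined configuration can be sketched as the sum of two corrections: a feed forward term computed from the measured disturbance through an assumed process model gain, and a feed-back term computed from the error. The gains below are arbitrary illustrations, not tuned values from any real loop.

```python
# Illustrative combination of feed forward and feed-back control.
# kp is the feed-back gain; kff is an assumed model gain chosen so that
# the feed forward term cancels the effect of the measured disturbance.

def combined_controller(set_point, measurement, measured_disturbance,
                        kp=1.0, kff=-0.8):
    """Sum of a feed-back correction and a feed forward correction."""
    feedback = kp * (set_point - measurement)    # reacts to an existing error
    feedforward = kff * measured_disturbance     # acts before an error appears
    return feedback + feedforward

# a disturbance of +5 units is measured before it has upset the process,
# so the controller already moves even though the error is still zero
output = combined_controller(set_point=100.0, measurement=100.0,
                             measured_disturbance=5.0)
```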

Regulation and set point changes – The purpose of feed-back control is to keep the controlled variable close to its set point. There are two reasons why the controlled variable can deviate from its set point. Either the set point is changed deliberately in order to achieve better performance, or load disturbances drive the operation away from its desired set point. Controllers designed to reject load disturbances are called regulators, while controllers designed to track set point changes are called servo-mechanisms.

For the majority of continuous processes, set point changes occur infrequently, typically only when the supervisory controller computes a more favourable operating point. Because of this, regulators are the most common form of feed-back controllers in continuous plants. In contrast, controllers for servo problems are common in batch plants, where frequent changes in the set points occur. While any controller is to be designed such that it can be used for servo as well as for regulatory problems without causing the system to become unstable, it cannot show good performance on both types of problems. Hence, it is an important design task to determine which type of problem is dominant in a process and to design the controller accordingly.

Control hardware and software – Process control as practiced in the process related industries has undergone considerable changes since it was first introduced in the 1940s. In the early 1960s, electrical analog control hardware replaced the majority of the pneumatic analog control hardware in several process related industries. Certain control elements, e.g., control valve actuators, have remained pneumatic even today. Electrical analog controllers of the 1960s were single loop controllers in which each input was first brought from the measurement point in the process to the control room, where most of the controllers were located. The output from the controller was then sent from the control room to the final control element. The operator interface consisted of a control panel having a combination of display face-plates and chart recorders for single loop controllers and indicators. Control strategies primarily involved feed-back control, normally with a proportional-integral (PI) controller.

During the late 1950s and early 1960s, a few organizations introduced process control computers to perform direct digital control (DDC) and supervisory process control. In cases where the system made extensive use of DDC, the DDC loops frequently had close to 100 % analog control backup making the systems costly. Other early systems primarily used process control computers for supervisory process control. Regulatory control was provided by analog controllers, which did not need backup, but the operator’s attention was split between the control panel and the computer screens. The terminal displays provided the operator interface when supervisory control was being used, but the control panels were still located in the control room for the times when the analog backup was necessary. Within this environment, some organizations began to broaden the use of advanced control techniques such as feed forward control, multi-variable decoupling control, and cascade control.

The functionalities of the early control systems were designed around the capabilities of the computers rather than the process characteristics. These limitations, coupled with inadequate operator training and an unfriendly user interface, led to designs which were hard to operate, maintain, and expand. In addition, several different systems had customized specifications, making them extremely expensive to assemble. Although valuable experience was gained in systems design and implementation, the lack of financial success hindered the infusion of digital system applications into the process industries until around 1970, when inexpensive micro-processors became commercially available.

Distributed control system – Starting in the mid-1970s, control equipment suppliers introduced micro-processor based distributed control systems (DCSs) and programmable logic controllers (PLCs). A DCS consists of several elements. In a DCS, host computers perform computationally intensive tasks like optimization and advanced control strategies. Data highways, consisting of a digital transmission link, connect all the other components in the system. Redundant data highways reduce the possible loss of data. Operator control stations provide video consoles for operator communication with the system, in order to supervise and control the processes.

Several control stations contain printers for alarm logging, report printing, or hard copying of process graphics. Remote control units implement basic control functions like PID algorithms and sometimes provide data acquisition capability. Programmer consoles are used to develop application programmes for the DCS. Mass storage devices store the process data for control purposes as well as for corporate decisions. The storage devices can be in the form of hard disks or data-bases. Communications and interactions between controllers, inputs, and outputs are realized by software, not by hard wiring. Hence, DCSs have revolutionized several aspects of the process control, from the appearance of the control room to the widespread use of the advanced control strategies.

Since the early 1980s, the capabilities of DCSs have improved dramatically. There has been a general increase in the use of digital communications technology within the process control. Some advanced control strategies are implemented within the DCS. Majority of the local control units perform their own analog-to-digital (A/D) and digital-to-analog (D/A) conversion and can be located in the equipment rooms closer to the process. Digital communications through a coaxial or fibre optic cable send information back to the control room, hence saving on wiring costs. With this trend toward increased use of digital communications technology, smart transmitters and smart actuators are also gaining in popularity. These devices, equipped with their own micro-processor, perform tasks such as auto ranging, auto calibration, characterization, signal conditioning, and self-diagnosis at the device location. Hence, tasks needed from the local control unit or the data acquisition unit are reduced.

The features which have made DCSs popular are (i) reduction in wiring and installation costs through the use of data highways and remotely located local control units, (ii) reduction in the space requirements for panels in the control room, (iii) improved operator interface with customized screens, (iv) ease of expansion because of modularity of the DCSs, (v) increased flexibility in control configuration, allowing control strategies to be modified without any need for rewiring, and (vi) improved reliability and redundancy.

Client server (personal computer) configuration – The majority of DCSs today are open solutions based on component object models in order to interact with other programmes. These technologies allow tying together the best applications of each kind from different suppliers in order to optimize plant wide control. This is in contrast to earlier control systems, where a single supplier had to supply the whole plant automation system since no interaction with other supplier products was possible. The emergence of software technologies such as CORBA (Common Object Request Broker Architecture) and COM (Component Object Model), as well as the increasing use of the programming language Java and the internet / intranets, has driven this development towards open solutions (or ‘plug and play’). Personal computers are replacing panel boards as operator stations, which makes it easier to exchange data from the control system with other applications running on personal computers or in the network.

Programmable logic controllers – Initially, PLCs were dedicated, stand alone, micro-processor based devices executing straightforward binary logic for sequencing and interlocks. These were originally intended for applications which, prior to that time, had been implemented with hard wired electro-mechanical and electrical relays, switches, push-buttons, and timers. PLCs improved considerably the ease with which modifications and changes could be implemented to such logic. Although several of the early applications were in the discrete manufacturing industries, the use of PLCs quickly spread to the process related industries. PLCs have become increasingly more powerful in terms of computational capabilities, e.g., PID algorithms, data highways to connect multiple PLCs, improved operator interfaces, and interfaces with personal computers and DCSs. Batch process control is dominated by logic type controls, and PLCs are a preferred alternative to a DCS.

Because of the availability of relatively smooth integrated interfaces between DCSs and PLCs, present practice is normally to use an integrated combination of a DCS and PLCs. Up to several thousand discrete (binary) inputs and outputs can be accommodated with PLCs, which can also have several hundred analog inputs and outputs for data logging and / or continuous PID control. All PLCs are designed to handle Boolean (binary) logic operations efficiently. Since the logical functions are stored in main memory, one measure of a PLC’s capability is its memory scan rate (typical values range from 10 milliseconds to 50 milliseconds). At the faster speeds, thousands of steps can be processed by a single unit. The majority of PLCs also handle sequential logic and are equipped with internal timing capability to delay an action by a prescribed amount of time, to execute an action for a prescribed time, and so on. Newer PLC models are frequently networked to serve as one component of a DCS control system, with operator I/O (input / output) provided by a separate component in the network.
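The Boolean interlock logic which a PLC evaluates on each memory scan can be sketched in ordinary code. The following is a minimal illustration in Python; the tag names and interlock conditions are hypothetical and are not taken from any particular plant or PLC product.

```python
# A minimal sketch of the Boolean logic a PLC scan cycle evaluates.
# Tag names and interlock conditions are hypothetical examples.

def scan(inputs):
    """One scan cycle: read inputs, evaluate the logic, return outputs."""
    # Interlock: run the pump only when the start command is latched,
    # the tank level is above the low limit, and no emergency stop is active.
    pump_run = (inputs["start_latched"]
                and inputs["level_above_low"]
                and not inputs["emergency_stop"])
    # Alarm output: energize the horn on emergency stop or low level.
    horn = inputs["emergency_stop"] or not inputs["level_above_low"]
    return {"pump_run": pump_run, "horn": horn}

out = scan({"start_latched": True, "level_above_low": True,
            "emergency_stop": False})
```

In an actual PLC this logic is typically entered as a ladder diagram or function block diagram rather than as text, and the scan repeats every 10 milliseconds to 50 milliseconds.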

A distinction is made between configurable and programmable PLCs. The term configurable implies that the logical operations (performed on inputs to yield a desired output) are placed into the PLC memory, normally in the form of ladder diagrams, by selecting from a PLC menu or by direct interrogation of the PLC. In a programmable PLC, on the other hand, the logical operations are put into PLC memory in the form of a higher level programming language. The majority of control engineers prefer the simplicity of configuring the PLC to the alternative of programming it. However, some batch applications, particularly those involving complex sequencing, are best handled by a programmable approach, perhaps through a higher level computer control system.

Safety and shut down systems – Plant safety is an important consideration in operating a plant. Also, increasingly stringent statutory regulations demand special attention to process safety during the design process. Process control plays an important role in the safety considerations of a plant. When automated procedures replace manual procedures for routine operations, the probability of human errors leading to hazardous situations is lowered. Additionally, the improved capability for presenting information to the process operators in a timely manner and in the most meaningful way increases the operator’s awareness of the current plant condition. This reduces the time in which abnormal conditions can be recognized and minimizes the likelihood that the situation progresses to a hazardous state.

A protective system is to be provided for processes where hazardous conditions can develop. One possible solution is to provide logic for the specific purpose of taking the process to a state where the hazardous condition cannot exist; such logic is called a safety interlock system. Since the process control system and the safety interlock system serve different purposes, they are to be physically separated. The process control system needs more modifications because of changing process conditions than the safety interlock system, and having a separate system reduces the risk of unintentionally changing the safety system as well.

Special high reliability systems have been developed for safety shutdowns, e.g., triple modular redundant systems, which are designed to be fault tolerant. This permits the system to have an internal failure and still perform its basic function. Basically a triple modular redundant system consists of three identical sub-systems actively performing identical functions simultaneously. The results of the three sub-systems are compared in a two-of-three voting network prior to sending the signals to the output devices. If any one of the sub-systems experiences a failure, the overall system can still function properly as long as two of the sub-systems are working. This set up allows the identification of components suspected of failure. To further increase reliability, multiple sensors and output devices can be used. When multiple sensors are used, the system is frequently implemented along with a two-of-three voting network.
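The two-of-three voting network described above reduces to a few lines of logic. The sketch below, in Python purely for illustration, also shows how a single dissenting channel can be flagged as the component suspected of failure.

```python
def two_of_three(a, b, c):
    """Two-of-three voting: the output follows the majority of the
    three identical sub-systems, so one internal failure is tolerated."""
    return (a and b) or (b and c) or (a and c)

def suspect_channel(a, b, c):
    """Return the index (0-2) of the one channel disagreeing with the
    majority, or None if all channels agree."""
    votes = [a, b, c]
    majority = two_of_three(a, b, c)
    dissent = [i for i, v in enumerate(votes) if v != majority]
    return dissent[0] if len(dissent) == 1 else None
```

With inputs (True, True, False), for example, the voted output is True and channel 2 is identified as the suspect component; this is the mechanism which lets a triple modular redundant system keep functioning while a failed sub-system is repaired.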

Alarms – The purpose of an alarm is to alert the process operator to a process condition which needs immediate attention. An alarm is activated whenever the abnormal condition is detected and the alert is issued. The alarm returns to normal when the abnormal condition no longer exists. Alarms can be defined on measured variables, calculated variables, and controller outputs. A variety of different classes of alarms exist.

A high alarm is generated when the value is higher than or equal to the value specified for the high alarm limit. A low alarm is generated when the value is less than or equal to the value specified for the low alarm limit. A high deviation alarm is generated when the measured value is higher than or equal to the target plus the deviation alarm limit. A low deviation alarm is generated when the value is less than or equal to the target minus the deviation alarm limit. A trend alarm is generated when the rate of change of the variable is higher than or equal to the value specified for the trend alarm limit.

One operational problem with alarms is that noise in the variable can cause multiple alarms whenever its value approaches a limit. This can be avoided by defining a dead band on the alarm. The high alarm is generated when the process variable is higher than or equal to the value specified for the high alarm limit. The high alarm returns to normal when the process variable is less than or equal to the high alarm limit less the dead band. Since the degree of noise varies from one input to the next, the dead band is to be individually configured for each alarm.
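The dead band behaviour can be illustrated with a small state machine. In the sketch below (Python, with hypothetical limit values), the alarm activates at the high limit and returns to normal only after the variable falls to the limit less the dead band, so noise near the limit cannot generate repeated alarms.

```python
class HighAlarm:
    """High alarm with a dead band to suppress repeated alarms caused
    by noise in the measured variable near the alarm limit."""

    def __init__(self, limit, dead_band):
        self.limit = limit
        self.dead_band = dead_band
        self.active = False

    def update(self, value):
        if not self.active and value >= self.limit:
            self.active = True    # alarm generated
        elif self.active and value <= self.limit - self.dead_band:
            self.active = False   # return to normal
        return self.active

# Hypothetical example: high limit 100, dead band 5.
alarm = HighAlarm(100.0, 5.0)
```

With these values, a reading of 101 activates the alarm, a noisy dip to 98 keeps it active (since 98 is above 100 less 5), and only a reading of 95 or below clears it.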

Smart transmitters, valves, and fieldbus – There is a clearly defined trend in process control technology toward increased use of digital technology. The trend, which started with digital controllers, has increasingly spread from that portion of the overall control system outward toward field elements such as smart transmitters and smart control valves. Digital communication occurs over a fieldbus, i.e., a coaxial or fibre optic cable, to which intelligent devices are directly connected; signals are transmitted to and from the control room or remote equipment rooms in digital form. The fieldbus approach reduces the need for twisted pairs and associated wiring.

Several field network protocols such as ‘Foundation Fieldbus’ and ‘Profibus’ provide the capability of transferring digital information and instructions among field devices, instruments, and control systems. The fieldbus software mediates the flow of information among the components. Multiple digital devices can be connected and communicate with each other through the digital communication line, which greatly reduces the wiring cost for a typical plant. Manufacturers of instruments are focusing on inter-operability among different fieldbus supplier products.

Process control software – The most widely adopted user friendly approach is the fill-in-the-forms or table driven process control languages (PCL). Popular PCLs include function block diagrams, ladder logic, and programmable logic. The core of these languages is a number of basic function blocks or software modules, such as analog in, digital in, analog out, digital out, PID, summer, splitter, etc. Using a module is analogous to calling a sub-routine in conventional ‘Fortran’ or ‘C’ programmes.

In general, each module contains one or more inputs and an output. The programming involves connecting outputs of blocks to inputs of other blocks through the graphical user interface. Users fill in templates to indicate the sources of input values, the destinations of output values, and the parameters for the forms / tables prepared for the modules. The source and destination blanks can specify process I/O channels and tag names when appropriate. To connect modules, some systems need filling in of the tag names of the modules originating or receiving data. User specified fields include special functions, selectors (minimum or maximum), comparators (less than or equal to), and timers (activation delays). The majority of DCSs also allow new function blocks to be created.
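The fill-in-the-forms style of connecting module outputs to module inputs can be mimicked in a few lines of code. The sketch below is an illustrative Python analogue, not the syntax of any actual PCL; the block names and tag names are hypothetical.

```python
# An illustrative analogue of PCL function blocks: each block has named
# inputs wired to the outputs of other blocks, like tag names filled
# into a template. Block and tag names are hypothetical.

class Block:
    def __init__(self, name, func):
        self.name, self.func = name, func
        self.inputs = {}      # input name -> source block
        self.output = None

    def connect(self, input_name, source_block):
        self.inputs[input_name] = source_block

    def execute(self):
        self.output = self.func({k: b.output for k, b in self.inputs.items()})

# Two "analog in" blocks with fixed sample values for illustration.
flow_a = Block("FI-101", lambda _: 40.0); flow_a.execute()
flow_b = Block("FI-102", lambda _: 35.0); flow_b.execute()

# A summer block adds its two inputs, like a SUM module in a PCL.
total = Block("SUM-1", lambda ins: ins["in1"] + ins["in2"])
total.connect("in1", flow_a)
total.connect("in2", flow_b)
total.execute()
```

Executing the summer block after its two source blocks yields their sum, in the same way a DCS executes the configured blocks in order on each control cycle.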

Facility control hierarchy – There are five levels in the manufacturing process where various optimization, control, monitoring, and data acquisition activities are used (Fig 5). The relative position of each block is intended to be conceptual, since there can be overlap in the functions carried out, and frequently several levels can utilize the same computing platform. The relative time scales where each level is active are also shown. Data from the plant (flows, temperatures, pressures, compositions, etc.) as well as so called enterprise data, consisting of commercial and financial information, are used with the methodologies shown in order to make decisions in a timely fashion.

Fig 5 Five levels of process control and optimization in manufacturing

Advances in the capabilities of DCSs have made the incorporation of advanced controls within the DCS feasible. Modern process facilities are frequently designed with a relatively high degree of process integration in order to minimize the theoretical cost of producing the product. However, from an operational standpoint, this integration gives rise to relatively complex interactions between the operating variables, making it difficult to determine the plant adjustments needed to optimize the operation.

Each of the five conceptual control levels has its own requirements and needs in terms of hardware, software, techniques, and customization. Since information flows up in the hierarchy and control decisions flow down, effective control at a particular level occurs only if all the levels beneath the level of concern are working well. The highest level (planning and scheduling) sets production goals to meet supply and logistics constraints and addresses time-varying capacity and manpower utilization decisions. This is called enterprise resource planning (ERP) and the term supply chain in level 5 refers to the links in a web of relationships involving retailing (sales), distribution, transportation, and manufacturing. Planning and scheduling normally operate over relatively long time scales and tend to be decoupled from the rest of the activities in lower levels.

Normally, the various levels of control applications are aimed at one or more of the objectives namely (i) determining and maintaining the plant at a practical optimal operating point given the current conditions and economics, (ii) maintaining safe operation for the protection of personnel and equipment, (iii) minimizing the need for operator attention and intervention, and (iv) minimizing the number, extent, and propagation of upsets and disturbances.

At level 4, the plant wide optimization level, the primary goal is to determine the optimal operating point of the plant’s mass and energy balance and to adjust the relevant set points in an appropriate manner. There are several possible process interactions and combinations of constraints involved, hence these control applications help in identifying the optimal operating point. The appropriate optimization objective is dependent on the market situation, such that, at times, the optimization objective is to maximize production while at other times it is to minimize operating costs for a fixed production rate.

The control applications at this level frequently utilize steady state mathematical models for the process or portions thereof. These models are to be tuned to a specific plant’s operation and constraints in order to ensure that the key aspects of the overall plant operation are characterized. The execution frequency of the control applications at this level is hence from hours to days, depending on the frequency of relevant variations. Standard mathematical techniques and heuristics based on experience are used to determine the operating point which best accomplishes the selected optimization objective. Once the optimum operating point has been determined, the relevant set points are passed down to the lower control levels. Because of the time constants and dynamics associated with the level 4 controls, set points are normally ramped incrementally to their new values in a manner such that the process is not disturbed and the proximity to constraints can be periodically checked before the next increment is made.
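The incremental ramping of set points can be sketched as follows. This is an illustrative Python fragment under stated assumptions: the maximum step size is a hypothetical tuning parameter, and a real implementation checks the proximity to constraints between increments rather than computing all steps in advance.

```python
def ramp_steps(current, target, max_step):
    """Ramp a set point incrementally toward its new optimal value so
    that the process is not disturbed by one large change. Returns the
    sequence of intermediate set points."""
    steps = []
    while abs(target - current) > 1e-9:
        # Never move more than max_step per increment, in either direction.
        step = max(-max_step, min(max_step, target - current))
        current += step
        steps.append(round(current, 6))
    return steps
```

For example, moving a set point from 100.0 to 103.5 with a maximum step of 1.0 produces the sequence 101.0, 102.0, 103.0, 103.5.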

The plant optimization control level applications determine the values of key variables which optimize the overall plant material and energy balance. The control applications at the local optimization and supervisory control level, on the other hand, focus on sub-systems within the overall plant. These sub-systems normally consist of a single or, at most, a few highly interactive pieces of equipment. Majority of the applications at this level are aimed at optimizing the process variables within an operating window defined by hard constraints, e.g., equipment and material limits. Frequently the optimal operating point of the sub-systems resides on one of the constraints of the operating window.

Hence, several of the control applications use a constraint control strategy, i.e., a strategy which pushes the sub-system against the closest active constraint. Typically the closest presently active constraint changes with time and situations, e.g., between day and night, different weather conditions, different operating states of upstream equipment. The constraint control strategies continually make minor adjustments to keep the state of the system along the active constraint near its optimal point. Since such adjustments are made continually, these applications can generate considerable benefits over the course of a year, although these benefits can appear minor when viewed at a single point in time.

In addition to local optimization applications, the control level also includes multi-variable, predictive, and model based control strategies such as model predictive control and inferential control. The control applications at the local optimization and supervisory level typically provide set points for the controls at the advanced regulatory and basic regulatory control levels (Level 3a in Fig 5). The general objective of the advanced regulatory control level applications is to improve the performance of basic regulatory control level controllers (Level 3b in Fig 5).

The execution frequency of applications at this level is typically in the range of seconds to minutes and differs from the controls at the basic regulatory level in that the former are frequently multi-variable and anticipatory in nature. The level of control closest to the process is the basic regulatory control level. Good performance of this level is crucial for realizing the benefits of the higher levels of control. Level 2 in Fig 5 (safety, environment, and equipment protection) includes activities such as alarm management and emergency shutdowns. While software implements the tasks shown, there is also a separate hardwired safety system for the plant.

Level 1 in Fig 5 (process measurement and actuation) provides data acquisition and on line analysis and actuation functions, including some sensor validation. Ideally there is bi-directional communication between levels, with higher levels setting goals for lower levels and the lower levels communicating constraints and performance information to the higher levels. The time scale for the decision making at the highest level (planning and scheduling) can be of the order of months, while at lower levels (e.g., process control), decisions affecting the process can be made frequently, e.g., in fractions of a second.

Instrumentation

Components of a control loop – Instrumentation, which provides the direct interface between the process and the control hierarchy, serves as the fundamental source of information about the process state and the ultimate means by which corrective actions are transmitted to the process. Fig 6 shows the hardware components of a typical modern digital control loop. The function of the process measurement device is to sense the value, or changes in value, of process variables. The choice of a specific device typically needs considerations of the specific application, economics, and reliability requirements.

Fig 6 Components of a computer control loop

The actual sensing device can generate a physical movement, pressure signal, milli-volt signal, etc. A transducer transforms the measurement signal from one physical or chemical quantity to another, e.g., pressure to milliamps. The transduced signal is then transmitted to a control room through the transmission line. The transmitter is hence a signal generator and a line driver. Frequently the transducer and the transmitter are contained in the same device. Most modern control equipment needs a digital signal for displays and control algorithms, hence the analog-to-digital converter (ADC) transforms the transmitter analog signal to a digital format.

Since ADCs can be relatively expensive if adequate digital resolution is needed, the incoming analog signals are normally multiplexed into a shared converter. Prior to sending the desired control action, which is frequently in a digital format, to the final control element in the field, the desired control action is normally transformed by a D/A converter (DAC) to an analog signal for transmission. DACs are relatively inexpensive and are not normally multiplexed. Widespread use of digital control technologies has made ADCs and DACs standard parts of the control system.
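The scaling chain can be illustrated for the common 4 mA to 20 mA transmitter signal. The sketch below is a Python illustration; the 12-bit resolution and the assumption that the ADC full scale spans 0 mA to 20 mA are hypothetical choices for the example, not properties of any specific device.

```python
def ma_to_engineering(ma, lo, hi):
    """Scale a 4-20 mA transmitter signal linearly onto the calibrated
    span [lo, hi] in engineering units: 4 mA -> lo, 20 mA -> hi."""
    return lo + (ma - 4.0) / 16.0 * (hi - lo)

def adc_counts(ma, bits=12):
    """Quantize a current signal with an ADC whose full scale is assumed
    to span 0-20 mA; a 12-bit converter gives counts from 0 to 4095."""
    full_scale = 2 ** bits - 1
    return round(ma / 20.0 * full_scale)
```

For a transmitter calibrated 0 deg C to 200 deg C, a 12 mA signal (mid-scale) corresponds to 100 deg C, and a 20 mA signal saturates the assumed 12-bit converter at 4095 counts.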

Once the desired control action has been transformed to an analog signal, it is transmitted to the final control element over the transmission lines. However, the final control element’s actuator can need a different type of signal and hence another transducer can be necessary. Several control valve actuators utilize a pressure signal so a current-to-pressure (I/P) transducer is used to provide a pressure signal to the actuator.

Process measurements – The most commonly measured process variables are temperatures, flows, pressures, levels, and composition; when appropriate, other physical properties are also measured. The selection of the proper instrumentation for a particular application is dependent on factors such as the type and nature of the fluid or solid involved, relevant process conditions, rangeability, accuracy, and repeatability needed, response time, installed cost, and maintainability and reliability.

General considerations for measurements – In the selection of a measurement device, the needed measurement range for the process variable is to lie entirely within the instrument’s range of performance. Accuracy, repeatability, or some other measure of performance is the appropriate specification, depending on the application. Where closed loop control is to be implemented, speed of response is to be included as a specification. Data available from the manufacturers provide baseline information on reliability. Previous experience with the measurement device is also very important.

Materials of construction are selected so that the instrument withstands the process conditions, such as operating temperatures, operating pressures, corrosion, and abrasion. For some applications, seals or purges are necessary. For the first installation of a specific measurement device at a site, training of maintenance personnel, and purchases of spare parts are necessary. The potential for releasing process materials to the environment is to be evaluated. Exposure to fugitive emissions for maintenance personnel is important when the process fluid is corrosive or toxic. If the measurement device is not inherently compatible with possible exposure to hazards, suitable enclosures are to be purchased and included in the installation costs.

Instrument accuracy refers to the difference between the measured value and the true value of the measured variable. Since the true value is never known, accuracy normally refers to the difference between the measured value and a standard value of the measured variable. For process measurements, accuracy is normally expressed as a percentage of the span of the measured variable. Repeatability refers to the difference between measurements when the process conditions are the same. Repeatability is a very important factor for process control, since the main objective of regulatory control is to maintain uniform process operation.
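Expressing accuracy as a percentage of span, as described above, is a one-line calculation. The sketch below is illustrative Python; the numbers in the usage note are hypothetical.

```python
def accuracy_percent_span(measured, standard, span_lo, span_hi):
    """Express the measurement error (difference between the measured
    value and the standard value) as a percentage of the instrument span."""
    span = span_hi - span_lo
    return abs(measured - standard) / span * 100.0
```

For example, a reading of 101.0 against a standard of 100.0 on an instrument spanning 0 to 200 units is an error of 0.5 % of span.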

For the process measurements, it is important to distinguish between accuracy and repeatability. Some applications depend on the accuracy of the instrument, but other applications depend on repeatability. Excellent accuracy implies excellent repeatability. However, an instrument can have poor accuracy but excellent repeatability. In some applications this is acceptable.

Manufacturers of measurement devices always state the accuracy of the instrument. However, these statements specify reference conditions at which the measurement device performs with the stated accuracy, with temperature and pressure most frequently appearing in the reference conditions. When the measurement device is applied at other conditions, the accuracy is affected. Manufacturers normally provide statements indicating how accuracy is affected when the conditions of use deviate from the reference conditions. Although appropriate calibration procedures can minimize some of these effects, rarely can they be totally eliminated. It is quite possible that a measurement device with a stated accuracy of 0.25 % of span at reference conditions can, in service, only provide measured values with accuracies no better than 1 %. Micro-processor based measurement devices normally provide better accuracy than traditional electronic measurement devices. In practice, most attention is given to accuracy when the measured variable is the basis for billing a customer. Whenever a measurement device provides data for real time optimization, accuracy is also very important.

Temperature – Temperature sensor selection and installation is to be based on the process related requirements of a particular situation, i.e., temperature level and range and process environment. For example, if the average temperature of a flowing fluid is to be measured, mounting the device nearly flush with the internal wall can cause the measured temperature to be affected by the wall temperature and the fluid boundary layer. Thermo-couples are the most widely used means of measuring process temperatures and are based on the Seebeck effect. The EMF (electro motive force) developed by the hot junction is compared to the EMF of a reference or cold junction, which is held at a constant reference temperature or has compensation circuitry. The difference between the hot junction and the reference junction temperature is hence determined.

Depending on the temperature range and temperature level, various combinations of metals are used in the thermo-couple, e.g., Chromel / Alumel (Type K), iron / Constantan (Type J), and platinum–10 % rhodium / platinum (Type S). Since the thermo-couple EMFs are low level signals, it is important to prevent contamination of the signal by stray currents or noise resulting from proximity to other electrical devices and wiring.

Thermo-couples are placed within protecting tubes, called thermo-wells, for protection against mechanical damage, vibration, corrosion, and stresses owing to flowing fluids. These thermo-wells affect the speed of response of the thermo-couple by placing an additional lag in the control loop. Special thermo-well designs do exist which minimize this added lag. In hostile process environments where the reliability of a temperature measurement is a concern, multiple temperature sensors are sometimes used in conjunction with a majority voting system, which can be implemented in software or hardware. Where exceptional accuracy and repeatability are needed, resistance temperature detectors (RTDs) are sometimes used, although these are more expensive than thermo-couples. RTDs are based on the principle that the electrical resistance of a conductor increases as its temperature increases.

RTDs can experience several of the same problems as thermocouples, so considerations such as thermo-wells and protection from electrical noise contamination are also appropriate in the case of RTDs. Pyrometers are mostly used at very high temperatures (higher than 700 deg C) and estimate the temperature by measuring radiation emitted from the object whose temperature is to be determined. This can be done at several ranges of wave-lengths, depending on the pyrometer type in order to achieve a high level of accuracy for the measurement. Pyrometers can measure higher temperatures than thermocouples or resistance thermometers.

Flow – The principal types of flow rate sensors are differential pressure, electro-magnetic, vortex, turbine, and coriolis. Orifice plates and venturi type flow tubes are the most popular differential pressure flow rate sensors, where the pressure differential measured across the sensor is proportional to the square of the volumetric flow rate. Orifice plates are relatively inexpensive and are available in several materials to suit particular applications. This type of sensor is normally preferred for measuring gas and liquid flows. However, orifice plates have a relatively high unrecoverable pressure drop and limited range.
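Since the measured pressure differential is proportional to the square of the volumetric flow rate, the inferred flow follows a square root relationship. The sketch below is illustrative Python; the lumped coefficient k is an assumption standing in for the discharge coefficient, orifice geometry, and fluid density, which in practice come from the meter sizing calculation.

```python
import math

def flow_from_dp(dp, k):
    """Volumetric flow rate inferred from orifice differential pressure:
    dp is proportional to Q squared, so Q = k * sqrt(dp). The constant k
    lumps the discharge coefficient, geometry, and fluid density."""
    return k * math.sqrt(dp)
```

One consequence, visible directly from the formula, is that quadrupling the differential pressure only doubles the indicated flow, which is one reason for the limited range of orifice plates.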

Investment cost for a venturi type flow tube is normally higher than for an orifice plate for the same application, but the accuracy is better. The higher unrecoverable pressure drop of the orifice plate sometimes dictates the use of a venturi type flow tube because of the overall cost. The proper installation of both orifice plates and venturi type flow tubes needs a length of straight pipe upstream and downstream of the sensor. The pressure taps and connections for the differential pressure transmitter are to be located so as to prevent the accumulation of vapour when measuring a liquid and the accumulation of liquid when measuring a vapour.

Magnetic flow meters are sometimes used in corrosive liquid streams or slurries where a low unrecoverable pressure drop and high rangeability are needed. The fluid needs to be electrically conductive. Magnetic flow meters, which use Faraday’s law to measure the velocity of the electrically conductive liquid, are relatively expensive. Their use is hence reserved for special situations where less expensive meters are not appropriate. Installation recommendations normally specify an upstream straight run of five pipe diameters and keeping the electrodes in continuous contact with the liquid.

Vortex meters have gained popularity where the unrecoverable pressure drop of an orifice meter is a concern. The vortex shedding meter is based on the principle that fluid flow about a bluff body causes vortices to be shed from alternating sides of the body at a frequency proportional to the fluid velocity. The vortex shedding meter can be designed to produce either a linear analog or a digital signal. The meter range, depending on process conditions, is relatively large (10:1). Because of the lack of moving parts and the absence of auxiliaries such as additional manifolds or valves, the reliability and safety of this meter are relatively good. The main restrictions on its use are (i) it is not to be used for dirty or very viscous fluids, (ii) the Reynolds number is to be higher than 10,000 but less than a process condition-dependent maximum set by cavitation, compressibility, and unrecoverable pressure drop, and (iii) meters in pipes larger than 200 mm in diameter tend to have limited applicability because of relatively high cost and limited resolution.
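The proportionality between shedding frequency and fluid velocity is normally written f = St * v / d, where St is the Strouhal number and d the width of the bluff body, so v = f * d / St. A hedged sketch follows (Python for illustration; the Strouhal number of around 0.2 is a typical textbook value for bluff bodies over the meter's working Reynolds number range, not a property of any specific meter).

```python
def velocity_from_shedding(freq_hz, bluff_width_m, strouhal=0.2):
    """Fluid velocity inferred from the vortex shedding frequency:
    f = St * v / d, hence v = f * d / St. The Strouhal number St is
    nearly constant over the meter's working Reynolds number range,
    which is what makes the meter linear in velocity."""
    return freq_hz * bluff_width_m / strouhal
```

For example, a shedding frequency of 100 Hz on a 10 mm bluff body with St = 0.2 corresponds to a velocity of 5 m/s; the linear frequency-to-velocity relationship is why the meter can produce either a linear analog or a digital (pulse) output.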

Similar to other flow meters, the vortex shedding meter needs a fully developed flow profile, and hence a run of straight pipe. The trend in the process industry is towards increased usage of mass flow meters, which are independent of changes in pressure, temperature, viscosity, and density. These devices include the thermal mass meter and the coriolis meter. Thermal mass meters are widely used in semi-conductor manufacturing for control of low gas flow rates (called mass flow controllers, or MFCs). MFCs measure the heat loss from a heated element, which varies with flow rate, with an accuracy of +/- 1 %. Coriolis meters use a vibrating flow loop which undergoes a twisting action because of the ‘coriolis effect’. The amplitude of the deflection angle is converted to a voltage which is nearly proportional to the liquid mass flow rate, with an accuracy of +/- 0.5 %. Sufficient space is needed to accommodate the flow loop, and pressure losses of around 10 psi (0.07 MPa) are to be allowable. Capacitance probes measure the dielectric constant of the fluid and are useful for the flow measurements of slurries and other two phase flows.

Pressure – There are three distinct groups of pressure measurement devices. One is based upon the measurement of the height of a liquid column, another is based on the measurement of the distortion of an elastic pressure chamber, and a third encompasses electrical sensing devices. In liquid column pressure measuring devices the pressure is balanced against the pressure exerted by a column of a liquid of known density. The height of the liquid column directly correlates to the pressure to be measured. The majority of the forms of liquid column measuring devices are called manometers.

Elastic element pressure measuring devices are those in which the measured pressure deforms an elastic material. The magnitude of the deformation is approximately proportional to the applied pressure. Different types of elastic element measuring devices include Bourdon tubes, bellows, and diaphragms. Electrical sensing devices are based on the fact that when electrical conductors are stretched elastically, their length increases while the diameter decreases. Both of these dimensional changes result in an increase in electrical resistance of the conductor. Strain gauges and piezoelectric transducers are examples of electrical pressure sensing devices.
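The strain-gauge behaviour described above is commonly characterized by the relation ΔR/R = GF · ε, where GF is the gauge factor. The following sketch assumes a gauge factor of around 2, typical of metallic gauges; the value is an assumption for illustration, not from the text.

```python
# Sketch of the strain-gauge principle: elastic stretching of the conductor
# changes its resistance according to dR/R = GF * strain. A gauge factor
# GF ~2 is an assumed typical value for metallic gauges.
def resistance_of_strained_gauge(r_nominal_ohm, strain, gauge_factor=2.0):
    """Return the resistance of the gauge under the given strain."""
    return r_nominal_ohm * (1.0 + gauge_factor * strain)

# Example: a 350 ohm gauge stretched to 1000 microstrain
r = resistance_of_strained_gauge(350.0, 1000e-6)  # ~350.7 ohm
```

The small resistance change is normally measured with a Wheatstone bridge, which is why strain-gauge transmitters include bridge and amplifier electronics.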

To avoid maintenance issues, the location of pressure measuring devices is to be carefully considered to protect against vibration, freezing, corrosion, temperature, and over-pressure etc. For example, in the case of a hard to handle fluid, an inert gas is sometimes used to isolate the sensing device from direct contact with the fluid. Optical fibre can be used for pressure measurement in high temperature environments.

Liquid Level – The location of a phase interface between two fluids is referred to as level measurement. Very frequently it is applied to liquid-gas interfaces, but interfaces between two liquids are not uncommon. Level measurement devices can be classified as float-actuated devices, displacer devices, head devices, or devices based on fluid characteristics. The widely used devices for measuring liquid levels involve detecting the buoyant force on an object or the pressure differential created by the height of liquid between two taps on the vessel. Hence, care is needed in locating the tap. Other less widely used techniques utilize concepts such as the attenuation of radiation, changes in electrical properties, and ultrasonic wave attenuation.
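The head-type (differential pressure) approach mentioned above follows from the same hydrostatic relation used for pressure measurement: the height of liquid between the two taps is h = ΔP / (ρ·g). A minimal sketch, assuming a clean liquid of known density:

```python
# Sketch of head-type level measurement: the differential pressure between
# two taps on the vessel is proportional to the liquid height between them,
# so h = dP / (rho * g). Assumes a liquid of known, constant density.
G = 9.81  # m/s^2

def level_from_dp(dp_pa, density_kg_m3):
    """Liquid height (m) above the lower tap from the measured dP (Pa)."""
    return dp_pa / (density_kg_m3 * G)

# Example: 14715 Pa of differential pressure with water
level = level_from_dp(14715.0, 1000.0)  # 1.5 m
```

This dependence on density is one reason the tap location and the fluid conditions need the care noted above.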

Chemical composition – On-stream analyzers measure different physical and chemical properties as well as component compositions. Compared to the majority of other instrumentation, analyzers are relatively expensive, more complex and sensitive, and need more regular maintenance by trained personnel. Hence, the expense for on-stream analyzers needs to be justified by the benefits generated through their use. Improvements in analyzer technology, digital control systems, and process control technology have led to increasing use of analyzers in closed-loop automatic control applications. The most common physical and chemical properties measured by analyzers include density, viscosity, vapour pressure, boiling point, flash point, cloud point, moisture, heating value, thermal conductivity, refractive index, and pH. Some of these analyzers are continuous while others are discrete.

In order to get quantitative composition measurements, specific instruments are to be chosen depending on the nature of the species to be analyzed. Measuring a specific concentration needs a unique chemical or physical attribute. In infrared (IR) spectroscopy, the vibrational frequency of specific molecules like CO (carbon monoxide) and CO2 (carbon dioxide) is probed by absorbing electromagnetic radiation. Ultra-violet radiation analyzers operate similarly to infrared analyzers in that the degree of absorption for specific compounds occurs at specific frequencies and can be measured. Magnetic resonance analysis (formerly called nuclear magnetic resonance) uses magnetic moments to discern molecular structure and concentrations.

Considerable advances have been made during the past decade to get lower cost measurements, in some cases miniaturizing the size of the measurement system in order to make on-line analysis feasible and reducing the time delays which frequently are present in analyzers. Recently, chemical sensors have been placed on microchips, even those needing multiple physical, chemical, and bio-chemical steps (such as electrophoresis) in the analysis. This device has been called ‘lab-on-a-chip’. The measurements of chemical composition can be direct or indirect, the latter case referring to applications where some property of the process stream is measured (such as refractive index) and then related to composition of a particular component.

In gas chromatography (GC), normally the thermal conductivity is used to measure concentration. The GC can measure several components in a mixture at the same time, whereas the majority of the other analyzers can only detect one component, hence GC is very popular. A gas sample (or a vapourized liquid sample) is carried through the GC by an inert gas, and components of the sample are separated by a packed bed. Since each component has a different affinity for the column packing, it needs a distinct time to pass through the column, allowing individual concentrations to be measured. Typically all components can be analyzed in a five to ten minute time period (although miniaturized GCs are faster). The GC can measure concentrations ranging from parts per billion (ppb) to tens of percent, depending on the compound.
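The separation step described above means each component announces itself by its retention time through the column. A hypothetical sketch of the identification logic — the component names, retention times, and tolerance below are purely illustrative, not values from the text:

```python
# Hypothetical sketch of interpreting a GC reading: each component is
# identified by its characteristic retention time through the column.
# The names and times below are illustrative assumptions.
KNOWN_RETENTION_TIMES_S = {"methane": 45.0, "ethane": 92.0, "propane": 160.0}

def identify_peak(retention_time_s, tolerance_s=5.0):
    """Match an observed peak to the closest known component, if any."""
    for component, t in KNOWN_RETENTION_TIMES_S.items():
        if abs(retention_time_s - t) <= tolerance_s:
            return component
    return None

print(identify_peak(93.5))  # ethane
```

The peak height or area at each retention time, measured for example by thermal conductivity, then gives the individual concentration.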

Mass spectroscopy (MS) determines the partial pressures of gases in a mixture by directing ionized gases into a detector under a vacuum (10^-6 torr), and the gas phase composition is then monitored more or less continuously based on the molecular weight of the species. Sometimes GC is combined with MS in order to get a higher level of discrimination of the components present. Other on-line analyzer types include UV (ultra violet) photometer, UV fluorescence, chemi-luminescence, infra-red, and paramagnetic. Fibre optic sensors are attractive options (although they have a higher cost) for acquiring measurements in harsh environments such as high temperature or pressure. The transducing technique used by these sensors is optical and does not involve electrical signals, so they are immune to electro-magnetic interference. Raman spectroscopy uses fibre optics and involves pulsed light scattering by molecules. It has a wide variety of applications in process control.
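Since MS reports partial pressures, the gas-phase composition follows directly from normalization: the mole fraction of each species is its partial pressure divided by the total. A minimal sketch with illustrative (assumed) readings:

```python
# Sketch of converting MS partial-pressure readings into gas-phase mole
# fractions: x_i = p_i / sum(p). The readings below are illustrative.
def mole_fractions(partial_pressures_torr):
    """Normalize a dict of partial pressures into mole fractions."""
    total = sum(partial_pressures_torr.values())
    return {gas: p / total for gas, p in partial_pressures_torr.items()}

# Example: trace readings near the 10^-6 torr operating vacuum
x = mole_fractions({"N2": 0.78e-6, "O2": 0.21e-6, "Ar": 0.01e-6})
```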

Key considerations in using an analyzer for closed loop control are repeatable, reliable analyzer measurements and a suitable analyzer system response time. In control applications at the supervisory control level and above, accuracy as well as repeatability is frequently needed. On-line process analyzers can be placed into one of three categories. In the first, in situ, the analysis is continuous and the probe mounted directly in the process stream. In this category, the measurement can be treated similarly to other process measurements, as long as some additional care is taken owing to the reliability issues.

In the second category, the analysis is continuous but the sample is not naturally in a form needed by the analyzer. Hence, the sample is to be conditioned. In the third category, the analyzer takes a period of time to analyze a discrete sample which is normally to be conditioned. Also, a sample-and-hold circuitry is needed to keep an output signal at its last value between analyses. In the latter two categories, the analysis introduces dead time into the control loop. Hence, to achieve good closed-loop control in these instances, the additional dead time introduced by the measurement is to be minimized and the control algorithm is to contain some form of dead-time compensation.

Majority of the analyzers need sample takeoff, sample transport, sample conditioning and preparation, analysis, and sample return or disposal. These sub-systems need to be carefully designed to ensure that the analyzer meets its intended purpose and is reliable. The sample transport sub-system is needed since analyzers are frequently placed in the protected, controlled environment of an analyzer shelter remote from the sample takeoff. The sample location is to be selected such that the sample is representative, the complexity of the sample conditioning sub-system is minimized, and the equipment is accessible. It is hence preferable that the sample is a single phase, relatively clean, and that the takeoff location does not add considerable dead time to the control loop. Also, the pressure at the sample takeoff is to be such that the pressure differential between the sample capture and sample return point is adequate to drive the sample through the fast loop at a sufficient velocity to avoid the need for a sample pump. The purpose of the fast loop is to bring the sample to the vicinity of the analyzer takeoff at a high velocity.

Several factors need to be considered for the proper selection, design, and specification of an analyzer, its sample handling subsystems, and its intended use within a process control strategy and hierarchy, including the interface to the control system. As a rule of thumb, the measurement effective dead time plus lag time is to be no higher than one-sixth of the time constant of the process and other elements in the control loop if possible. Hence, the majority of the analyzers are utilized at the supervisory control level and above. For example, a process gas chromatograph system having a cycle time, including the sample handling sub-systems, of five minutes is suitable for a supervisory or local optimization control loop where the effective time constant is thirty minutes. Hence, only in-situ analyzers are to be considered at the regulatory control level. Also, there is normally a direct relationship between reduced analyzer system cycle time and increased analyzer system cost.
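The one-sixth rule of thumb above can be encoded as a simple screening check, shown here for the worked example of a five-minute GC cycle against a thirty-minute loop time constant:

```python
# Sketch of the stated rule of thumb: the analyzer's effective dead time
# plus lag is to be no higher than one-sixth of the time constant of the
# process and other elements in the control loop.
def analyzer_suitable(dead_plus_lag_min, loop_time_constant_min):
    """True when the measurement delay satisfies the one-sixth guideline."""
    return dead_plus_lag_min <= loop_time_constant_min / 6.0

# Example from the text: a 5-minute GC cycle and a 30-minute time constant
ok = analyzer_suitable(5.0, 30.0)  # True: 5 <= 30 / 6
```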

Signal transmission and conditioning – A wide variety of physical and chemical phenomena are used to measure the several process variables needed to characterize the state of a process. Since the majority of the processes are operated from a control room, these values are to be available there. Hence, the measurements are normally transduced to an electronic form, very often 4 mA (milli ampere) to 20 mA, and then transmitted to a remote terminal unit and then to the control room. Wherever transmission of these signals takes place in twisted pairs, it is especially important that proper care is taken so that these measurement signals are not corrupted owing to ground currents, interference from other electrical equipment and distribution, and other sources of noise. The majority of the instrument and control system suppliers provide manuals giving advice and instructions for installation and engineering practices for the proper grounding, shielding, cable selection, cable routing, and wiring for control systems. The importance of these considerations is not to be under-estimated.
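The 4 mA to 20 mA signal mentioned above is converted back to engineering units in the control room by linear scaling over the instrument's calibrated range; the 4 mA "live zero" also makes a broken loop (0 mA) detectable. A minimal sketch:

```python
# Sketch of converting a 4-20 mA transmitter signal to engineering units
# by linear scaling over the instrument's calibrated range. The live zero
# at 4 mA lets a dead loop (0 mA) be distinguished from a zero reading.
def ma_to_engineering(signal_ma, range_low, range_high):
    """Map 4 mA -> range_low and 20 mA -> range_high, linearly."""
    if not 4.0 <= signal_ma <= 20.0:
        raise ValueError("signal outside the 4-20 mA live-zero range")
    return range_low + (signal_ma - 4.0) / 16.0 * (range_high - range_low)

# Example: a 12 mA reading from a 0-200 deg C temperature transmitter
t = ma_to_engineering(12.0, 0.0, 200.0)  # 100.0 deg C (mid-scale)
```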

Final control elements – Good control at any hierarchical level needs good performance by the final control elements in the next lower level. At the higher control levels, the final control element can be a control application at the next lower control level. However, the control command is to ultimately affect the process through the final control elements at the regulatory control level, e.g., control valves.

Control valves – Material and energy flow rates are the most commonly selected manipulated variables for control schemes. Hence, good control valve performance is a necessary ingredient for achieving good control performance. A control valve consists of two principal assemblies, a valve body and an actuator. Good control valve performance needs a consideration of the process characteristics and requirements such as fluid characteristics, range, shut-off, and safety, as well as control requirements, e.g., installed control valve characteristics and response time. The proper selection and sizing of the control valves and actuators is important. Several of the control valve suppliers provide computer programmes for the sizing and selection of valves.

The valve body, the portion which contains the process fluid, consists of the internal valve trim, packing, and bonnet. The internal trim determines the relationship between the flow area and the stem position, which is normally proportional to the air signal. This relationship, referred to as the inherent valve characteristic, is frequently classified as linear, equal percentage, or quick opening. Actual valve characteristics normally fall within these three classifications. For a globe-type valve, the internal trim consists of plug, seat ring, plug stem, plug guide, and in some instances a cage. In rotary style valves, such as a ball-type or butterfly-type, the internal trim consists of a ball or vane, seal ring, rotary shaft, and bushings. Although the internal trim fixes the relationship between flow area and stem position, the relationship between the control air signal and the flow area can be modified by the use of different cams in the case of rotary style valves, or through software in the case of smart valves or control systems having software capabilities.
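The three inherent characteristics named above are commonly expressed as fractional flow f(x) versus fractional stem travel x. The sketch below uses the standard textbook forms; the rangeability R = 50 assumed for the equal-percentage curve is a typical illustrative value, not from the text.

```python
# Sketch of the three inherent valve characteristics, expressed as
# fractional flow f(x) for fractional stem travel x in [0, 1].
# The rangeability R = 50 for equal percentage is an assumed typical value.
import math

def linear(x):
    return x                      # flow proportional to travel

def equal_percentage(x, rangeability=50.0):
    return rangeability ** (x - 1.0)   # equal % flow change per unit travel

def quick_opening(x):
    return math.sqrt(x)           # large flow change near the seat

# At half travel the three trims pass quite different fractions of full flow
flows = [round(f(0.5), 3) for f in (linear, equal_percentage, quick_opening)]
```

The equal-percentage trim is often chosen because its installed characteristic tends toward linear once the pressure drop across the rest of the piping is accounted for.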

The actuator provides the force to move the stem or rotary shaft in response to the changes in the controller output signal. Actuators are to provide enough motive force to overcome the forces developed by the process fluid and the valve assembly, are to be responsive in quick and accurate positioning corresponding to the changes in the control signal, and are to be responsive in automatically positioning the valve in a safe position when a failure (e.g., instrument air or electrical, a safety interlock, or a shutdown) has occurred. Majority of the actuators are either of the spring-diaphragm type, piston type, or motor type.

The general approach to specifying a control valve involves selecting a valve body type, trim characteristics, size, and material based on the process fluid characteristics, desired installed valve characteristics, process conditions, and process requirements. The actuator is then specified based on the valve selected, process flow conditions, and the needed speed of response. Meeting the requirements of a safe fail-position involves considering both the valve and the actuator. A control valve can be configured as an air-to-open, or fail-closed (F/C), valve. An air-to-close (fail-open) requirement can be met by using a spring-diaphragm actuator with the spring below the diaphragm and providing the air supply above the diaphragm.

Different accessories can be supplied along with the control valves for special situations. Positioners ensure that the valve stem is accurately positioned following small or slowly changing control signals or where unbalanced valve forces exist. The valve positioner can be mounted on the side of the valve actuator, and can reduce the valve dead band from around +/- 5 % to +/- 0.5 %, a considerable enhancement. A digital valve positioner is useful in computer control since the normal sampling interval of one second is not fast enough for the majority of the flow control loops. When a valve positioner is incorporated into a control system, it effectively becomes a cascade control system. Boosters, which are actually pneumatic amplifiers, can increase the speed of response or provide adequate force in high pressure applications. Limit switches are sometimes included to provide remote verification that the valve stem has actually moved to a particular position.

In addition to control valves which regulate the flow of one stream, process facilities sometimes use modulating three-way control valves to adjust simultaneously the flows of two streams, either to divert one stream into two streams or to combine two streams into one. The stable operation of these three-way control valves needs that the flow tends to open the plugs. The most common uses of three-way control valves are in controlling the heat transferred by regulating the flow through and around a piece of heat transfer equipment and in controlling the blending of two streams.

The choice of using a diverting or mixing service in a heat transfer application is determined by pressure and temperature considerations. The upstream valve location (diverting service) is preferred if there is no overriding consideration. If there is a change of phase involved in the heat transfer equipment, then a diverting valve is to be used. Three-way control valves are not to be used in services if the temperature is high (higher than 260 deg C), if there is a high temperature differential (higher than 150 deg C), or if there is a high pressure or pressure differential. If any of these conditions exist, two single seated two-way valves are to be used to implement the bypass control strategy even though it is a more expensive initial investment.
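The service limits above amount to a simple screening rule, sketched here; the function and its Boolean flag for high pressure service are illustrative encodings of the stated guideline, not an established sizing procedure.

```python
# Sketch encoding the selection guideline above: three-way control valves
# are to be avoided at high temperature (> 260 deg C), high temperature
# differential (> 150 deg C), or high pressure / pressure differential.
def three_way_valve_acceptable(temp_c, delta_t_c, high_pressure_service):
    """False means two single-seated two-way valves are to be used instead."""
    if temp_c > 260.0:
        return False
    if delta_t_c > 150.0:
        return False
    if high_pressure_service:
        return False
    return True

ok = three_way_valve_acceptable(temp_c=200.0, delta_t_c=80.0,
                                high_pressure_service=False)  # True
```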

Adjustable speed pumps – Instead of using fixed speed pumps and then throttling the process with a control valve, an adjustable speed pump can be used as the final control element. In these pumps speed can be varied by using variable speed prime movers such as turbines or electric motors. Adjustable speed pumps offer energy savings as well as performance advantages over throttling valves. One of these performance advantages is that, unlike a throttling valve, an adjustable pump does not contain dead time for small amplitude responses. Also, non-linearities associated with friction in the valve are not present in electronic adjustable speed pumps. However, adjustable speed pumps do not offer the shutoff capability of control valves, and extra check valves or automated on/off valves can be needed.

Other final control elements – Devices other than control valves and adjustable speed pumps are also used as final control elements. Dampers are used to control the flow of gases and vapours. Louvers are also used to control the flow of gases, e.g., flow of air in air fin coolers. Feeders such as screw feeders, belt feeders, and gravimetric feeders are frequently used to control the flow of solids. Metering pumps and certain feeders combine the functions of the measurement and final control element in some control loops.
