
Data Analysis and Management of Steel Organization



A steel organization is very complex in nature. In such an organization, a large number of units work in conjunction with each other and a large variety of processes take place simultaneously at all times, generating huge amounts of data. This large quantity of data needs to be collected, coordinated, integrated, and analyzed for decision making in order to ensure the smooth running of the processes and units, as well as the proper functioning of the steel organization. Hence, data plays a very important role in the efficient management of the steel organization. The speed and quality of data analysis ultimately provide the steel organization with efficiency as well as a competitive advantage. Further, while the majority of the data is generated internally, some of it comes from sources which are external to the organization.

The data generated in the steel organization are worthless in a vacuum unless their potential value is unlocked and leveraged to drive decision making in the organization. To enable such evidence-based decision making, the steel organization needs efficient processes to turn high volumes of fast-moving and diverse data into meaningful insights. The overall process of extracting insights from large data can be broken down into five stages (Fig 1). These five stages are (i) acquisition and recording, (ii) extraction, cleaning, and annotation, (iii) integration, aggregation, and representation, (iv) modeling and analysis, and (v) interpretation. These five stages form two main sub-processes, namely (i) data management, and (ii) analysis. Data management involves processes and supporting technologies to acquire and store data and to prepare and retrieve it for analysis. Analysis, on the other hand, refers to the techniques used to analyze and acquire intelligence from the data. Thus, data analysis can be viewed as a sub-process in the overall process of ‘insight extraction’ from large data.
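As a rough illustration of how these five stages chain together, the minimal Python sketch below passes a handful of hypothetical furnace temperature readings through each stage; the function names and the record format are assumptions made only for the example and are not part of any standard.

from statistics import mean

def acquire():
    # Stage 1: acquisition and recording - raw readings as they arrive (hypothetical)
    return ["  1623 C ", "1641 C", "n/a", "1618 C"]

def extract_clean(raw):
    # Stage 2: extraction, cleaning, and annotation - parse values, drop bad records
    values = []
    for item in raw:
        item = item.strip().replace(" C", "")
        if item.replace(".", "", 1).isdigit():
            values.append(float(item))
    return values

def integrate(values):
    # Stage 3: integration, aggregation, and representation
    return {"readings": values, "count": len(values)}

def model(dataset):
    # Stage 4: modeling and analysis - here simply a summary statistic
    dataset["mean_temp"] = mean(dataset["readings"])
    return dataset

def interpret(dataset):
    # Stage 5: interpretation - turn the result into a statement for decision makers
    return "Average temperature over {} valid readings: {:.1f} deg C".format(
        dataset["count"], dataset["mean_temp"])

print(interpret(model(integrate(extract_clean(acquire())))))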


Fig 1 Data analysis process

Data analysis is the process of inspecting, cleaning, transforming, and modeling raw data with the goal of discovering useful information, suggesting conclusions, and supporting decision making. Data analysis has multiple facets and approaches and encompasses diverse techniques under a variety of names.



Analysis refers to breaking a whole into its separate components for individual examination. Data are collected and analyzed both for monitoring the processes and for improving them.

Statistician John Tukey defined data analysis in 1961 as the “procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data”.

In today’s complex competitive environment, the application of data analysis is growing in acceptance and importance. It plays a critical role as a decision-making resource for executives, especially those managing a steel organization. Basically, data analysis helps management take good decisions for the organization. Merely giving reports with numbers does not help the decision-making process; data analysis provides information in a way that best suits the executives who are the decision makers. Data analysis helps the management to overcome several problems faced by the organization and also helps in planning for both the present and the future. In fact, data analysis in the present-day environment has become very much a part of the lifeblood of the steel organization, since it helps the organization to focus and accomplish a lot.

Data analysis enables the rapid extraction, transformation, loading, search, analysis and sharing of massive data generated during the operations of the organization. By analyzing a large, integrated, real-time database rather than smaller, independent, batch-processed data sets, management seeks to quickly identify previously unseen correlations and patterns to improve the decision making process. Data analysis helps the organization in better measuring and managing the most critical functions of the operations.

There are several types of data which get generated continuously in a steel organization. These types of data include (i) production related data, (ii) financials and costs data, (iii) human resource based data including the training and development of the employees, (iv) data related to marketing and customers, (v) data related to purchase and suppliers, (vi) consumption of materials and waste generation data, (vii) stores and inventory data, (viii) data related to logistics, (ix) maintenance and equipment health data, (x) energy, environment, and safety related data, (xi) data related to regulatory issues, (xii) investors related data, (xiii) project related data, and (xiv) other miscellaneous data. These data can be generated from (i) process parameters picked up by measuring and testing instruments, (ii) monitoring of production volumes, (iii) data recorded in the log sheets, (iv) process flow parameters, (v) inspection and testing of the raw materials, store materials, intermediate products, and final products, (vi) monitoring of consumptions, emissions, and waste generation, (vii) financial accounting and the monitoring of financial parameters such as cash flows, financial audits, and product costs, (viii) billing and invoices, (ix) various kinds of reports, (x) minutes of meetings, (xi) e-mails, and (xii) various other kinds of records. In present-day terminology, this data is known as ‘big data’.

Data analysis has been an integral part of the management of a steel organization since historical times. However, data analysis has undergone both evolutionary and revolutionary changes recently with the advent of information technology and digital data gathering and analysis. Analysis of the ‘big data’ provides deep insights to the management into the running of the organization and aids in optimized decision making.

The ‘big data’ of the steel organization includes data generated from both within and outside the organization, including structured and unstructured data, process data captured from instruments, and online and mobile data. It provides the steel organization the basis for historical and forward-looking (statistical and predictive) views.

The idea of data creating organizational value is not new. However, the effective use of data is becoming the basis of competition. The steel organization has always wanted to derive insights from information in order to make better, smarter, real-time, fact-based decisions. Data is normally defined by three dimensions, namely (i) volume, (ii) velocity, and (iii) variety. These three ‘Vs’ (Fig 2) are described below.

  • Volume – Volume refers to the magnitude of the data. The data generated by instruments in the various processes is normally very large in quantity compared with traditional data. Definitions of data volumes are relative and vary by factors such as time and the type of the data.
  • Velocity – Velocity refers to the rate at which data are generated and the speed at which they are analyzed and acted upon. These days there is an unprecedented rate of data creation, which is driving a growing need for real-time data analysis and evidence-based planning. Traditional data management systems are not capable of handling huge data feeds instantaneously. This is where large data analysis technologies come into play, since they enable the organization to create real-time intelligence from high volumes of ‘perishable’ data.
  • Variety – Variety refers to the structural heterogeneity in a dataset. Technological advances now allow the use of various types of structured, semi-structured, and unstructured data. In a steel organization, a large variety of input data is used, which in turn generates a large variety of data as output. The data comes from different sources and is generated by process instruments as well as by people.

In addition to the three V’s, other dimensions of the data have also been mentioned (Fig 2). These include the following.

  • Veracity – It represents the unreliability inherent in some sources of data. Since big data comes from many different sources, there is a need to test the quality / veracity of the data. It is the quality of the data which gives the data its economic value. Thus, the need to deal with imprecise and uncertain data is another facet of data analysis.
  • Variability – Variability refers to the variation in the data flow rates. Often data velocity is not consistent and has periodic peaks and troughs. This too causes complexity and adds a further dimension to the data analysis. Complexity refers to the fact that large data are generated through a myriad of sources. This imposes a critical challenge, namely the need to connect, match, cleanse, and transform data received from different sources.
  • Value – It refers to the worth which can be derived from the data. The data are often characterized by a relatively ‘low value density’, that is, the data received in the original form usually have a low value relative to their volume. However, a high value can be obtained by analyzing large volumes of such data.


Fig 2 Three V’s and other dimensions of the data

Further, the quality of the data is very important for the data analysis. The aspects of data quality which are normally considered are given below (a sketch of how such aspects can be turned into automated checks follows the list).

  • Validity – Validity means that the data measure what they are intended to measure.
  • Reliability – Reliability means that the data are measured and collected consistently according to standard definitions and methodologies and the results are the same when measurements are repeated.
  • Completeness – Completeness means that all the data elements are included as per the specified definitions and methodologies.
  • Precision – Precision means that the data have sufficient detail.
  • Integrity – Integrity means that data are protected from deliberate bias or manipulation for different reasons.
  • Timeliness – Timeliness means that the data are up to date and information is available on time.
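As an illustration, the quality aspects listed above can be applied as automated checks on each incoming record. The Python sketch below is hypothetical: the field names, the permissible temperature range, and the 24-hour timeliness limit are assumptions made for the example and are not plant standards.

from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"heat_id", "tap_temperature_c", "recorded_at"}  # completeness
VALID_TEMP_RANGE = (1500.0, 1750.0)                                # validity (assumed range)
MAX_AGE = timedelta(hours=24)                                      # timeliness (assumed limit)

def check_record(record):
    # Return a list of data quality issues found in one record
    issues = []
    missing = REQUIRED_FIELDS - set(record)
    if missing:
        issues.append("completeness: missing fields {}".format(sorted(missing)))
    temp = record.get("tap_temperature_c")
    if temp is not None and not VALID_TEMP_RANGE[0] <= temp <= VALID_TEMP_RANGE[1]:
        issues.append("validity: temperature {} outside expected range".format(temp))
    recorded_at = record.get("recorded_at")
    if recorded_at and datetime.now(timezone.utc) - recorded_at > MAX_AGE:
        issues.append("timeliness: record older than 24 hours")
    return issues

sample = {"heat_id": "H1234", "tap_temperature_c": 1822.0,
          "recorded_at": datetime.now(timezone.utc) - timedelta(hours=2)}
print(check_record(sample))  # ['validity: temperature 1822.0 outside expected range']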

The rapid use of electronics and information technology in different processes during the last two decades has resulted in the steel organization sitting on an enormous amount of data, whether direct or indirect, past or present. The information technology systems used for data analysis themselves generate a large volume and variety of data. The increasing use of computers and embedded electronics has further resulted in the generation of a huge amount of structured and unstructured data. The emerging challenge for a steel organization is to derive meaningful insights from the available data and re-apply them intelligently.

Data has fundamentally changed the way the steel organization operates these days. The organization which invests in and successfully derives value from data has a distinct advantage, gained by generating more relevant data and by adopting technologies which enable faster, easier data analysis and present the outputs at a very fast rate. While the ability to capture and store vast amounts of data has grown at an unprecedented rate, the technical capacity to aggregate and analyze these disparate data has not kept pace.

Evolving technology has taken data analysis away from manual analysis and extended the potential of using data-driven results into every facet of the steel organization. However, while advances in software and hardware have enabled the age of ‘big data’, technology is not the only consideration. The steel organization is required to take a holistic view which recognizes that success is built upon the integration of people, process, technology, and data, and this means being able to incorporate data into the organizational routines, in the strategy, and in the daily operations.

The steel organization needs to understand what insights it requires in order to make good strategic and operational decisions. The first part of the challenge is sorting through all of the available data to identify trends and correlations which drive beneficial changes in the organizational behaviour. The next step is enriching the organizational information with information from sources outside the organization.

In a competitive environment which changes constantly and rapidly, prediction of the future becomes more important than the simple visualization of historical or current perspectives. For effective prediction, data analysis using statistical and predictive modeling techniques is usually applied to enhance and support the organizational strategy. The collection and aggregation of ‘big data’, and of other information from outside, enables the steel organization to develop its own analysis capacity and capability.
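A minimal sketch of such forward-looking use of data is given below, using scikit-learn’s linear regression to project a trend forward; the monthly output figures are fabricated purely to show the mechanics of fitting and projecting, and they do not represent any actual forecasting model used in a steel plant.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly crude steel output (thousand tonnes) for the past 12 months
months = np.arange(1, 13).reshape(-1, 1)
output = np.array([310, 305, 318, 322, 319, 330, 327, 335, 341, 338, 349, 352])

trend = LinearRegression().fit(months, output)

# Project the fitted trend three months ahead to support planning discussions
future = np.arange(13, 16).reshape(-1, 1)
for month, forecast in zip(future.ravel(), trend.predict(future)):
    print("Month {}: projected output of about {:.0f} kt".format(month, forecast))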

Data life cycle

The following are the steps in the life cycle of the data.

  • Creation – Data are to be collected at the place of their creation. With the help of new instruments, it is now possible to collect data which were not feasible to collect earlier. New technologies such as advanced sensors and customized software can now record real-time data for analysis. Also, changes in the way communication takes place today (e.g. social media vs. telephone vs. text/SMS vs. email vs. letter) have increased the ability of the organization to have access to data instantly.
  • Processing – Before the introduction of information technology, data used to be very difficult to analyze, and hence much of the data was not processed for various reasons; the use of the data remained limited to process control in the area where it was generated. The new technologies available today for data collection and effective data processing allow the steel organization to process non-structured data at high speed, making it easier to perform a more comprehensive analysis of the ‘big data’ (a sketch of such processing of semi-structured records is given after this list).
  • Output – It is now easier to capture, store, and process data. However, the data is not useful unless the information is relevant and is readily available to the right people, who need the appropriate input in order to make insightful decisions leading to successful outcomes. Employees of the steel organization are usually trained to handle the complexities of ‘big data’ and to simplify the output for daily use.
  • Resources and processes – An important factor in being able to achieve success in the application of data analysis is having knowledgeable and competent employees. Today, all the employees of the steel organization need to have basic knowledge of data collection and analysis. However, employees are to be supported by processes and instrumentation which, when properly integrated, help in the data analysis and propel the organization towards data-driven decision making.
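As a rough sketch of the processing step mentioned above, the example below parses hypothetical free-text log-sheet entries into structured records which can then be analyzed; the log line format and the field names are invented for illustration only.

import re

# Hypothetical free-text log-sheet entries as operators might record them
log_lines = [
    "2024-05-01 06:15 BF-2 hot metal tapped 412 t, Si 0.65 %",
    "2024-05-01 07:40 BF-2 hot metal tapped 398 t, Si 0.71 %",
]

PATTERN = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}) (?P<unit>\S+) "
    r"hot metal tapped (?P<tonnes>[\d.]+) t, Si (?P<si>[\d.]+) %"
)

records = []
for line in log_lines:
    match = PATTERN.match(line)
    if match:
        rec = match.groupdict()
        rec["tonnes"] = float(rec["tonnes"])   # convert numeric fields for analysis
        rec["si"] = float(rec["si"])
        records.append(rec)

print(records)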

A large amount of data available from current operations in the steel organization lends itself to analysis and provides insights into the performance of the processes. The analysis of real-time data from the shop floor helps in the optimization of parameters for the improvement of yield, quality, and operational efficiency. The analysis can also help with the end-to-end supply chain planning processes, including capacity planning, order allocation, production scheduling, and inventory planning, and can even be extended to suppliers, customers, and logistics partners. Data analysis can also predict equipment failure to a level of accuracy which allows predictive maintenance to be largely automated with minimal human intervention in decision making. This results in improved equipment efficiencies and lower quality costs.
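The predictive maintenance idea mentioned above can be sketched with a simple anomaly detector. In the example below the bearing readings are fabricated, and scikit-learn’s IsolationForest is used only as one illustrative technique among several that could flag readings for inspection.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical bearing readings: columns are vibration (mm/s) and temperature (deg C)
normal = np.random.default_rng(0).normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(200, 2))
suspect = np.array([[5.8, 78.0], [2.1, 61.0]])   # one clearly abnormal reading, one normal

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(suspect)                # -1 = anomaly, 1 = normal

for reading, flag in zip(suspect, flags):
    status = "schedule inspection" if flag == -1 else "normal"
    print("vibration={:.1f} mm/s, temperature={:.0f} deg C -> {}".format(
        reading[0], reading[1], status))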

The materials movement in a steel organization is very complex, with huge movements of raw materials, intermediate products, by-products, waste products, semi-finished products, and finished products taking place. Such movement can often lead to bottlenecks which result in downtimes due to equipment / processes being starved of inputs. Location-based data of materials are required to be analyzed to find bottlenecks, which can then be targeted to improve operational efficiencies on the shop floor. Data analysis also brings into focus the operating costs, yields, quality, and generation of waste for different processes and helps decision making to identify improvement actions. Data from the shop floor, whether from stationary sources (machines, PLCs, sensors) or mobile sources (materials, material handling equipment, and employees), are important inputs for the data analysis which supports decision making.
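One way of using such location-based movement data for bottleneck detection is to compute the average waiting time at each processing unit from arrival and start timestamps, as in the pandas sketch below; the unit names and the timestamps are illustrative assumptions only.

import pandas as pd

# Hypothetical movement records: when a transfer ladle arrived at and started processing on each unit
moves = pd.DataFrame({
    "unit":    ["BOF-1", "BOF-1", "Ladle furnace", "Ladle furnace", "Caster-2", "Caster-2"],
    "arrived": pd.to_datetime(["2024-05-01 08:00", "2024-05-01 08:50",
                               "2024-05-01 08:40", "2024-05-01 09:20",
                               "2024-05-01 09:10", "2024-05-01 10:05"]),
    "started": pd.to_datetime(["2024-05-01 08:05", "2024-05-01 09:00",
                               "2024-05-01 09:05", "2024-05-01 09:55",
                               "2024-05-01 09:15", "2024-05-01 10:10"]),
})

moves["wait_min"] = (moves["started"] - moves["arrived"]).dt.total_seconds() / 60
avg_wait = moves.groupby("unit")["wait_min"].mean().sort_values(ascending=False)

# The unit with the longest average queue time is the likely bottleneck
print(avg_wait)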

