Towards an ideal steel plant - Online liquid steel temperature prediction using R

by Bengt Maas, Hakan Koç


In this work we present the development of an "Online Temperature Prediction Model" for application in steel works, using the "R" Project for statistical computing. The obtained mathematical model is used for calculating temperature preset values of the steel melt along the steel plant process chain, beginning with tapping temperatures at the converter, continuing with processing temperatures during ladle treatment and ending with tapping temperatures at the casting lines. Special focus lies on meeting the desired casting temperature, which in turn depends strongly on the liquidus temperature of each individual steel grade, to ensure optimal temperature distributions and heat flow during casting. Determination of target temperature values at each processing step allows for optimal throughput, capacity utilization, energy consumption and homogeneity of processing conditions. To take into consideration possible alterations in relevant input parameters, i.e. variations in process parameters or chemical composition, the developed temperature prediction model recalculates the temperature preset values dynamically during processing.

The following results have been produced during a collaborative work of the Salzgitter Mannesmann Forschung GmbH and the Salzgitter Flachstahl GmbH of the Salzgitter Group, the second largest German steel manufacturer, with an annual capacity of around 8 million tonnes of crude steel. The production and casting of 220 tons of liquid steel (corresponding to one production unit) per process step involves complex processing stages and immense energy demands. The transport of 220 tons of steel during production poses a considerable spatial and logistical challenge. Therefore, the desired guidelines and set inputs should be determined before production commences in order to ensure a smooth production flow.

The manufacturing covers primary and secondary metallurgy from the steel converter through ladle treatment (alloying, heating, stirring and vacuum degassing) up to continuous casting. Among the necessary production parameters for each individual steel grade are the desired process temperatures of the liquid steel. Obtaining these temperatures requires consideration of manufacturing process durations, chemical elements, etc. These parameters guide the production steps with regard to the casting temperature.

Since the casting process is the last production process in steel plants, the previous processes and the predicted process temperatures have to be adjusted to the target casting temperature. By means of liquid steel and casting temperature prediction, longer production cycle times and energy losses caused by undercooling or overheating of the liquid steel, and consequently casting interruptions, can be avoided. Through temperature prediction, an optimized process without additional reheating can be ensured by predicting the right target process values.
An online integration of this prediction model into the steel plant automation system will allow for the determination of target steel melt temperatures before and during manufacturing for each steel grade individually, while at the same time being adjustable to variations occurring during processing. This helps to ensure an optimized product and to maintain high production quality.


In steel plants, stable process conditions are essential to ensure high process and material quality /1/, /2/. At the same time, the process boundary conditions are set to push productivity to its possible maximum. To meet both the quality and the quantity goal, knowledge of the current process and material condition at each stage in the process chain is of great importance /3/, /4/, /5/. Online monitoring systems allow for gaining such knowledge, for example by monitoring essential process parameters and hence providing a data basis for further modelling and data mining analysis.

The monitoring ought to cover every main and sub target variable of the production to ensure integrated optimization, determining accurate reactions to unstable or otherwise critical process situations. In steel plant divisions, monitoring can consist, for example, of measuring melt temperature, alloying element concentrations and amounts of further admitted elements in order to be able to trace product specifications during manufacturing. The recorded datasets form the basis for statistical methods, which can be used for creating prediction models to calculate target parameters and help determine ideal process conditions. The developed statistical models must be suited for direct implementation in the process control routines and have to be capable of repeatedly determining the required target parameters online.

Steel work process

In this work we forecast the target temperatures of each process step in the steel plant in order to build a temperature guideline describing how the steel melt should behave in terms of its thermal evolution. Figure 1 shows the three main process stages in the steel plant, consisting of the steel converter (primary metallurgy), the ladle plant (secondary metallurgy) and finally the continuous casting lines. The described process route is in accordance with Salzgitter Flachstahl GmbH processes but can be assumed to be exemplary of most other steel plant processes, too. Prior to steel plant treatment, the crude iron melt has to be produced, e.g. in a blast furnace or an electric arc furnace (EAF), by reducing iron ore (mostly oxides) with the reduction agent carbon or by smelting scrap and other surcharges.

Figure 1: Schematic Sketch of the production stages of a typical steel work plant, from Converter to Continuous Casting line, as it is analyzed in the described study.

In a primary step the liquid pig iron is treated in a desulphurization unit. Afterwards it passes the converter station, which removes accompanying elements (C, Si, Mn, P) through oxygen blowing with the aim of converting pig iron into crude steel. During a second step, ladle metallurgy, the melt's final chemical composition is defined, largely determining which steel grade specifications the melt will fulfil in the end. Besides a steel's chemical composition, the steel grade also specifies its mechanical and microstructural properties, the latter mainly depending on the thermo-mechanical treatment during subsequent processing. Ladle treatment consists of alloying, heating, stirring (homogenization) and vacuum degassing. After the required specifications regarding e.g. oxygen content, carbon content and melt temperature are fulfilled, the liquid steel is cast into slabs for further production, for example in hot and cold strip mills in the case of flat steel products. In total, the analysis described in this work focuses on the thermal condition of the different types of steel produced by SZFG and on the reactions and material properties depending on the melt temperature in the three main steel shop production stages: converter, ladle metallurgy and continuous casting.

The leaving and entering temperatures at the exchange between process stages are very important for meeting crucial process requirements like the desired casting temperature and the thermal conditions during ladle treatment, which are essential as reference data for optimal manufacturing. The melt's thermal conditions strongly influence the melt cleanliness in terms of undesired elements like P, S, N, Si and Ca or oxidic inclusions originating e.g. from slag infiltration or interactions between melt and lining. Oxide inclusions can be detrimental to the steel's mechanical properties during further processing and may lead to local material fatigue, which in turn might result in surface defects or whole component failure /11/, /12/, /13/, /14/. These inclusions are selectively reduced, carried to the melt surface by injected purging gases and dissolved at the steel-slag interface at different temperatures due to their temperature-dependent solubility /15/. Alumina-based oxides, for example, are reduced at 1400-1600°C (for more information see /21/). Cleanliness is therefore one of the key factors in high-quality steel production, and meeting temperature goals may help improve steel cleanliness during ladle treatment; for more information see /16/, /17/. To determine ideal process temperatures, the thermal and chemical energy coming from the injected inert gases, electrical stirring and deoxidation agents like added Al must be taken into account. Slag and casting powder viscosity, lubrication and chemical reactivity are temperature dependent as well; therefore the amount and composition of slag and casting powder are also considered during temperature modelling, in terms of insulation factor and chemical reactions between melt and slag.
Thus, meeting an optimal casting temperature and setting an ideal temperature flow between already solidified and liquid steel in the mould is important for meeting both quality and quantity goals like throughput, casting velocity, process durations, energy loss and cost.

During production, special control stations, shown in figure 2, are responsible for measuring the melt's leaving, entering and process temperatures as well as the concentrations of chemical elements dissolved in the liquid steel. The graph under the process guideline schematically describes a characteristic temperature history of the melt during production. The thermometers represent the monitoring stations where the actual temperature, alloying elements and process durations are measured.

Figure 2: Characteristic Temperature time curve along the steel shop process chain, with linear interpolation between the discrete measurement points, indicated by the thermometer symbols

The idea is to build a temperature guideline for meeting the target casting temperature by reproducing the measured temperatures through statistical calculations using the R project. In that way, optimized temperature preset values at each production step from converter to continuous casting line can be predicted at the beginning of every manufacturing unit. At every leaving, entering and processing stage the mathematical prognosis model recalculates backwards from the target casting temperature to the actual process stage, with the aim of providing optimized target temperatures for each current process stage. Therefore, the requirements in terms of dynamic recalculation after each process stage have to be observed to ensure dynamic control.

The main aim of this work is thereby to reproduce the above-named measured temperatures from input variables such as measured chemical element amounts, the amounts of admitted additions like cooling scrap (metal scrap added to cool the melt) and purging gas, and the cast slab width. The (target) temperature calculation with these variables supplies new observation possibilities, because weightings for each input variable (steel grade specific) must be calculated, bringing more understanding to the manufacturing process. From the weights one can trace key factors influencing the target temperature and determine the desired prognosis functions.

The casting temperature depends on the steel grade and thus largely on the steel's chemical composition. If the degree of influence of the input variables is known, the manufacturing can be ensured through a temperature-controlled rule system. In the next chapter we demonstrate the datasets supplied by measuring the various manufacturing/process parameters and how one can calculate the weights of the input variables (manufacturing/process parameters) to build a prediction function that calculates steel grade specific target temperatures backwards from a desired casting temperature for every process stage back to the converter.
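The backward recalculation from the desired casting temperature can be illustrated with a minimal sketch (the class name and all values are hypothetical; the actual model evaluates grade-specific regression functions rather than fixed per-stage losses):

```java
// Sketch: deriving target temperatures for earlier process stages backwards
// from the desired casting temperature, assuming known expected per-stage losses.
public class BackwardTargets {

    // expectedLoss[i] = expected temperature loss between stage i and stage i+1 (in K)
    public static double[] stageTargets(double castingTarget, double[] expectedLoss) {
        int n = expectedLoss.length;
        double[] targets = new double[n + 1];
        targets[n] = castingTarget;                        // final stage: casting temperature
        for (int i = n - 1; i >= 0; i--) {
            targets[i] = targets[i + 1] + expectedLoss[i]; // earlier stages must be hotter
        }
        return targets;
    }
}
```

For a casting target of 1550°C and expected losses of 30, 20 and 10 K over three stages, the stage targets become 1610, 1580, 1560 and 1550°C.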

Data and Methods (R)

The data basis of the described advanced analytics models consists mostly of temperature and chemical composition measurements from samples taken directly from the steel melt. Complementary data includes time points and durations of transportation and processing steps, as well as amounts of scrap and alloying elements added to the melt, electrical energy exerted on the heat during stirring and purging, gas volumes injected during ladle treatment and casting parameters such as casting speed and format.
From the database, the necessary measurements are assembled into steel grade specific tables for further analysis. For each temperature measurement point a mathematical model is fitted to the measured reference temperature data. Each data set used for modeling undergoes thorough preprocessing to handle missing values, correlated input parameters, non-plausible data (e.g. false measurements) and constant or low-variance attributes. Additionally, a selection of useful or potentially relevant process and material parameters is carried out based on the experience of the SZFG process experts and introductory literature like /17/.

Taking into account the above-mentioned attribute selection and preprocessing operations, the steel grade specific data tables contain:
• Temperatures at each process stage
• The chemical composition of the melt at each process stage
• Processing times and durations per process stage
• Amount of admitted purging gas
• Amount of admitted cooling scrap
• Stirring energy
• Slag amounts
• Casting format dimensions

These parameters are then used to calculate regression models that mathematically describe the target temperatures at each process stage and exchange point, from converter to casting line, as a function of the above-named input data.
First, the order of the functional relationship between the target value and the input parameters must be known to define the polynomial degree of the multiple regression function. The polynomial degree depends mainly on the rate of temperature change with time (and analogously on the sequential influence of the process parameters on the temperature of the melt). As a first approximation, Newton's Law of Cooling, see function (1) below, can be used under the assumption of a constant room temperature to give a general overview of the melt's cooling behavior. In this example the time is given in minutes, so the rate constant k has the dimension 1/minute; it can be calculated from the measured temperatures and time durations.
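In its standard form, with ambient (room) temperature T_room and initial melt temperature T_0, function (1) reads:

```latex
T(t) = T_{\mathrm{room}} + \left(T_0 - T_{\mathrm{room}}\right) e^{-k t} \qquad (1)

k = \frac{1}{t}\,\ln\frac{T_0 - T_{\mathrm{room}}}{T(t) - T_{\mathrm{room}}}
```

The second expression, solved for k, can be evaluated from two measured temperatures and the elapsed time between them.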

The next figure, figure 3 below, shows the melt cooling towards room temperature, with the region above the liquidus temperature indicated by the dotted line. In the left plot, an exponential trend becomes visible the closer the temperature gets to room temperature. In the right plot, above the liquidus temperature, the cooling curve has a steeper slope and decreases approximately linearly. The observed linearity of the temperature evolution with time motivates the assumption that a linear approach could be a suitable means for approximating the functional relationship between melt temperature and the regarded input variables.

Thus, a linear multiple regression approach will be used to statistically determine the melt temperature up to the point of casting. Below the liquidus temperature, Newton's law of cooling loses its validity because phase transformation occurs in the steel.

Through multiple regression, we try to determine the weights (a, b1, …, bn) of the model function for each steel grade and process step individually.
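With input variables x_1, …, x_n (chemical composition, processing durations, purging gas, scrap amounts, etc., as listed above), the model function takes the standard form of a multiple linear regression:

```latex
T_{\mathrm{target}} = a + b_1 x_1 + b_2 x_2 + \dots + b_n x_n
```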

Figure 3: Exemplary sketch of the temperature-time curve of an idealized steel melt under simplified boundary conditions (no insulation, uniform heat transfer in all directions), according to Newton's Law of Cooling. On the right-hand side an enlarged temperature-time curve for the temperature range above the liquidus temperature is shown. In a first-order approximation the temperature decline is linear with progressing time.

The analysis was performed with the statistics software package "R" /10/. In order to limit the influence of outliers, robust linear regression approaches using MM-estimation have been applied. Robust cost functions help identify potential outliers, reduce their influence on the calculated optimum and thus reflect the true attribute range of input and target variables more accurately than regular least-squares error functions. A data point is considered an outlier when its value differs strongly from the mean of the parameter's distribution in the regarded sample. Suitable threshold values for outlier detection could be, for example, the Euclidean distance from predefined quantiles of the parameter distribution, like the 25% and 75% quartiles. The regression MM-estimator (modified M-estimate) is a regression M-estimator with a redescending ψ-function (error function), whose initial values and scale estimate s(r1,…,rn) are calculated with a so-called S-estimator (see below).

The "S" in "S-estimator" indicates that this estimator is based essentially on the minimization of a (robust) scale M-estimate /18/, /19/. The S-estimator is robust with a breakdown point of ½ if the cost function ρ is symmetric and bounded (its derivative ψ is then redescending). To determine the best-fitting straight line as well as for outlier detection, the Huber-k function, see figure 4 below, is used; its derivative is defined as the ψ-function. For more detailed information about robust statistics, refer to /18/, /19/.

Figure 4: Schematic plot of the Huber-k estimator and the corresponding cost function

Measured data lying within the range "-r … +r" estimated by the S-estimator contribute quadratically (to second order) to the cost function. Outside the [-r; +r] range the contribution to the cost function is linear.
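The behavior described above can be sketched as a small helper class (an illustrative implementation, not part of the plant code; the class and method names are hypothetical, and k plays the role of the range limit r):

```java
// Huber-k cost function: quadratic inside [-k, +k], linear outside.
public class HuberLoss {

    // rho: the cost contribution of a residual r
    public static double rho(double r, double k) {
        double a = Math.abs(r);
        return (a <= k) ? 0.5 * r * r            // quadratic region
                        : k * a - 0.5 * k * k;   // linear region, continuous at |r| = k
    }

    // psi: derivative of rho; the identity inside [-k, +k], clipped outside
    // (bounded, so large outliers get a constant, limited influence)
    public static double psi(double r, double k) {
        return Math.max(-k, Math.min(k, r));
    }
}
```

With the common tuning constant k = 1.345, a small residual of 0.5 contributes 0.125 (purely quadratic), while a large residual of 3.0 has its influence clipped to 1.345 instead of growing linearly as in least squares.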

To ensure full process control, the evaluation of the multiple regression formulas must be done for every existing steel grade, by systemizing the R calculations through embedding the R commands in Java. With this functionality one can run the calculations by calling the R commands from Java, building steel grade specific models over several iterations.

The following Java class "RCalculation" is an example of how R can be integrated into the computer-controlled steelwork system by systemizing and controlling it through Java applications. The implementation of R commands in Java projects can be realized through RCaller (for more information see /22/) by including the RCaller library in the Java project and installing the Runiversal package in R, for example directly through the command below:

install.packages("Runiversal")
This package contains functions for converting R objects to Java variables. The installation of this package in R and the integration of the RCaller library into the Java project allow for calling and running the needed robust linear model function rlm() by loading the MASS package in R.

For calling R commands, RCaller runs the Rscript executable (found in the "bin" directory of R) using Java's Runtime. It then executes the R commands with the added arguments and transfers the results using streams. With the Rscript, RCaller can convert R objects to Java double or string arrays using the Runiversal package (see its documentation for more information).

After R is initialized with the needed packages, the framework Java program follows a usual calculation flow consisting of loading the target datasets, calculating the model coefficients and saving the results in a CSV file. With the command settings displayed below, the model is created with the MM-estimator and Huber-k evaluation within a limit of 1000 iterations.

// Imports the RCaller library into the Java project
import rcaller.RCaller;

/*
 * This Java class is an example of how R commands can be called
 * from Java projects.
 * @author SteelWork
 */
public class RCalculation {

    // Creating the RCaller
    private RCaller caller = new RCaller();

    // Declaring the data separator
    private String semicolon = ";";

    // Declaring the method for the multiple regression
    private String method = "\"MM\"";

    // Initializing R
    public void initializeR() {
        // Full path of the Rscript, an executable file shipped with R,
        // for example C:\Program Files\R\bin\... on Windows
        caller.setRscriptExecutable("path/to/Rscript");
        // Adding an R command, for example to initialize the MASS package
        caller.addRCode("require(MASS)");
    }

    // Adding the R commands to execute the multiple robust linear regression
    public void calcModelCoef(String model, String steelGrade, String processStage) {
        caller.addRCode("data<-read.csv(file='filePath/" + steelGrade
                + "',sep=\"" + semicolon + "\")");
        caller.addRCode("model<-rlm(" + model + ",psi=psi.huber,method="
                + method + ",maxit=1000,data)");
        caller.addRCode("write.csv(coefficients(model),file='filePath/"
                + steelGrade + "_" + processStage + "')");
        // Calls execution of the added R commands
        caller.runOnly();
    }

    // Calling the steel grade and process stage specific R calculation
    public void startCalc() {
        this.initializeR();
        String[] steelGrade = Bank.getSteelGrades();
        int processStages = Bank.getProcessStages();
        for (int i = 0; i < Bank.getCountOfSteelGrades(); i++) {
            for (int j = 1; j <= processStages; j++) {
                if (steelGrade[i].equals("not_vakuum_steelGrade")) {
                    noVakuumCalc(j, steelGrade[i]);
                }
                if (steelGrade[i].equals("vakuum_steelGrade")) {
                    vakuumCalc(j, steelGrade[i]);
                }
            }
        }
    }

    // Preparing the steel grade and process stage specific process parameters
    // related to the steel grade specific target temperature and calling the
    // calcModelCoef() method
    public void noVakuumCalc(int i, String steelGrade) {
        switch (i) {
            case 1:
                String processStage1 = "1";
                String PARAMETER1 = "data$TEMP_1~data$Chemie_AL..."
                        + "+data$TIME...+data$PURGING_GAS...+data$Amount_Scrap...+...";
                calcModelCoef(PARAMETER1, steelGrade, processStage1);
                break;

            ...

            case n:
                String processStageN = "n";
                String PARAMETER_n = "data$TEMP_n~data$Chemie_AL..."
                        + "+data$TIME...+data$PURGING_GAS...+data$Amount_Scrap...+...";
                calcModelCoef(PARAMETER_n, steelGrade, processStageN);
                break;
        }
    }

    // Preparing the analogous parameters for vacuum grades, including the
    // additional target temperature after vacuum degassing, and calling the
    // calcModelCoef() method
    public void vakuumCalc(int i, String steelGrade) {
        switch (i) {
            case 1:
                String processStage1 = "1";
                String PARAMETER1 = "data$TEMP_1~data$Chemie_AL..."
                        + "+data$TIME...+data$PURGING_GAS...+data$Amount_Scrap...+...";
                calcModelCoef(PARAMETER1, steelGrade, processStage1);
                break;

            ...

            case n:
                String processStageN = "n";
                String PARAMETER_n = "data$TEMP_n~data$Chemie_AL..."
                        + "+data$TIME...+data$PURGING_GAS...+data$Amount_Scrap...+...";
                calcModelCoef(PARAMETER_n, steelGrade, processStageN);
                break;
        }
    }
}

RCaller uses the package Runiversal, which provides functions for converting R objects to Java. It translates Java arrays to R, sends them and the embedded R commands to the R interpreter and handles the results. The results can be retrieved with getter methods from Java.

These features make RCaller a user-friendly library for external access to R functions and packages. It allows calling R from Java, enables rapid integration into industrial systems controlled by Java applications and offers a graceful solution for R users who are not familiar with the internal technical structure of R.


For each steel grade, a series of regression models for the required target temperatures has been calculated, based on the above-described input data (chemical composition, processing time, purging and stirring energy, amounts of scrap and alloying surcharges). The obtained model coefficients are automatically exported to a database designed firstly to assign the correct model to each individual steel grade and target temperature, and secondly to allow for systematic interpolation in case a new steel grade is processed for the first time. A framework program selects the relevant model coefficients once a new melt is planned, providing a set of input parameters that reflect the mean composition and standard process conditions. Throughout the production cycle, the temperature predictions are recalculated by applying the regression models to a new input variable vector whenever one or more of the dependent variables is updated, e.g. when a previously estimated or set value for one of the chemical elements of the steel is replaced by an actual measurement result.
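This recalculation step can be sketched as follows (an illustrative sketch only; the class name, the parameter names and all values are hypothetical, and the coefficients would in practice be read from the exported CSV files):

```java
import java.util.Map;

// Sketch: re-evaluating a stored linear regression model whenever one of the
// input parameters is updated.
public class TemperaturePredictor {

    private final double intercept;                  // weight a of the model
    private final Map<String, Double> coefficients;  // weight b_i per input parameter
    private final Map<String, Double> inputs;        // current value x_i per parameter

    public TemperaturePredictor(double intercept,
                                Map<String, Double> coefficients,
                                Map<String, Double> inputs) {
        this.intercept = intercept;
        this.coefficients = coefficients;
        this.inputs = inputs;
    }

    // Linear model: T = a + b1*x1 + ... + bn*xn
    public double predict() {
        double t = intercept;
        for (Map.Entry<String, Double> e : coefficients.entrySet()) {
            t += e.getValue() * inputs.getOrDefault(e.getKey(), 0.0);
        }
        return t;
    }

    // Replace an estimated input value by an actual measurement
    // and return the updated temperature prediction
    public double update(String parameter, double measuredValue) {
        inputs.put(parameter, measuredValue);
        return predict();
    }
}
```

Each call to update() replaces one input value, for instance an estimated processing duration by its measured value, and immediately returns the refreshed temperature prediction.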

Figure 5: Flow chart of the R-code functionalities carried out in the temperature model program. Whenever new measurement results for one of the model input parameters are available, the temperature calculation is updated. The dynamic set-up also allows for renewal or first-time calculation of regression coefficients for new steel grades.

The following results have been produced during the analysis and will be discussed exemplarily for one characteristic steel grade, a boron-alloyed quenching and tempering steel grade 26MnB5, from here on referred to as Grade A, with a mean composition of:

Table 1: Chemical composition of the regarded example steel grade 26MnB5, a boron-added quenching and tempering steel grade, taken from /7/. Its manganese, carbon and boron contents make it an especially demanding steel in terms of cracking susceptibility during casting.

The presented results for Grade A are a representative example, since 26MnB5 is a demanding steel grade due to its sensitivity to casting temperature variations in terms of cracking susceptibility during casting, see /8/ and /9/. Meeting the desired casting temperature and making good predictions of the thermal evolution of the melt during prior processing is therefore especially important for Grade A to avoid detrimental process conditions and help prevent material defects. Grade A is also a so-called vacuum grade: deoxidation takes place under vacuum during ladle treatment of such grades, so an additional target temperature to be met after vacuum degassing has to be modeled as well. Hence Grade A is a good candidate for assessing the suitability of the temperature modeling approach used here.

Figure 6 shows the regression results for each of the seven temperature measurement points, modeled as a function of the input parameters known at each of the production steps. The predicted temperatures are plotted against the measured data. The red lines indicate the ±3K tolerance level. Based on the mean temperature range of ca. 1500-1700°C, the expected accuracy of ±3K for the prediction results corresponds to a relative tolerance level of about 0.18-0.2%. Except for only a few outliers, all predicted temperature values lie within a 99% accuracy range in comparison to the measured temperatures. The regression models for Grade A were trained on a data set containing 111 samples randomly drawn from a one-and-a-half-year period, 03/2010 - 09/2011, and tested on an independent, new data set of 100 steel heats selected randomly from the same time period.

Figure 6: Comparison of prediction results and measured temperatures for each required measuring point along the process chain, Grade A, test data N=100 samples, 03/2010 - 09/2011. Nearly all prediction results lie within a ±3K tolerance interval.

Each of the temperature prediction results graphed in figure 6 has been produced with an individual regression model, i.e. seven models for the seven temperature measurement points needed for process control. A summary plot containing the predicted temperature evolution and the root-mean-squared error (RMSE) of the model precision at each of the seven temperature measurement sites is shown in figure 7. The critical scattering limit of ±3K is marked by the red bars in figure 7. Comparing the RMSE of every one of the seven temperature prediction models to the ±3K error tolerance shows that on average the temperature model is well within the required error limits (see fig. 7).

Figure 7: Mean temperatures for each of the seven measuring points, with error bars indicating the root-mean-squared prediction error (RMSE) and ±3K error limits for each prediction model. None of the RMSE bars exceeds the allowed ±3K tolerance level.


As described above, mapping a steel melt's temperature along the process chain in the steel shop as a function of e.g. its chemical composition, processing time, added scrap amounts and surcharges, and the electrical and chemical energy exerted on the melt through stirring and purging is possible to a highly accurate extent. Motivated by Newton's Law of Cooling, the multidimensional linear regression approach with robust optimization routines implemented in R appears to be a suitable means for providing accurate, understandable and automatable models for the desired temperature predictions. The R project has proved most useful for the implementation of the calculated results as well as for the external control of its functionalities in a process automation environment. The presented mathematical approach and the developed R code and framework program enable steel plant production engineers and technical staff to plan, carry out and adjust their tasks on the basis of highly stable and precise temperature preset values. Instead of adding offsets and thresholds to the assumed heat target temperatures, thereby adding extra processing time and extra energy during each processing step in order to be on the safe side and rather deliver the melt above the final casting temperature than below, the new temperature prediction model will allow for the optimization of process stability, throughput and material quality in the steel plant, especially in ladle treatment.
The R code, its interfaces and its connections to the data basis and the process control systems (Level 3) provide dynamic recalculation and an update function for new steel grades.

Figure 8: Photograph of a steel converter during charging, illustrating the dimension of the plant, the amount of liquid steel and thus the size of the thermodynamic system regarded during the analysis described in this article; by courtesy of the SZFG.


References

/1/ B.G. Thomas: "Continuous Casting of Steel - A Review". In: M. Dekker: "Modeling for Casting and Solidification Processing", p. 499-540, New York, 2001
/2/ K. Schwerdtfeger: "Metallurgie des Stranggießens". Verlag Stahl Eisen, Düsseldorf, 2006
/3/ R. Fandrich, H. Kleimt, H. Liebig: "Stand der Pfannenmetallurgie und aktuelle Trends". Stahl und Eisen, Vol. 131, Nr. 6, p. 75-89, 2011
/4/ P. Ramirez-Lopez, P. Lee, K. Mills: “Towards direct defect prediction on continuous casting”. Proceedings METEC-ECCC, Vol. 7, 151-161, 2011
/5/ U. Etzold, F. Friedel, V. Marx, W. Jäger: „Produktionsbegleitende Untersuchungen zum Rein-heitsgrad von Stahlflachprodukten“. Stahl und Eisen, Vol. 80, 67-72, 2004
/6/ M.S. Jenkins, B.G. Thomas, R.B. Mahapatra: “Investigation of strand surface defects using mold instrumentation and modeling”. Ironmaking & Steelmaking, Vol. 71, 121-130, 2004.
/7/ Salzgitter Flachstahl GmbH: "Product Specifications", 2011
/8/ T.W. Clyne, M. Wolf, W. Kurz: "The Effect of Melt Composition on Solidification Cracking of Steel with Particular Reference to Continuous Casting". Metallurgical Transactions B, Vol. 13, p. 259-266, 1982
/9/ K. Schwerdtfeger: "Crack Formation in Continuous Casting". VDEh Stahl Akademie, 2010
/10/ R Development Core Team: "R: A Language and Environment for Statistical Computing". R Foundation for Statistical Computing, Vienna, Austria, 2011
/11/ D.S. Kim, S.K. Kim, K.Y. Lee: "Origin of oxide inclusions observed at surface defects of cold rolled sheet for deep drawing automobile application". Tetsu-to-Hagane, Vol. 2, Nr. 3, p. 25-35, 1988
/12/ P.K. Tripathy, S. Das, B. Singh, A. Kumar: "Migration of slab defects during hot rolling". Ironmaking and Steelmaking, Vol. 33, No. 6, p. 477-483, 2006
/13/ S. Stratemeier, D. Senk, B. Böttger, E. Subasic, K. Göhler: "Simulation and modelling of hot ductility for different steel grades". Berg- und Hüttenmännische Monatshefte, Vol. 152, No. 11, p. 361-366, 2007
/14/ D. Crowther: “The effects of microalloying elements on cracking during continuous casting”. Corus Group Technology Centre, Rotherham, UK, 2005
/15/ K. Wünnenberg, J. Cappel: “Maßnahmen zur Verbesserung des oxidischen Reinheitsgrades beim Stranggießen“. Stahl und Eisen, Vol. 130, No. 1, p. 55-61, 2010
/16/ H.-J. Bargel, G. Schulze: „Werkstoffkunde“ Springer-Verlag, Berlin Heidelberg, 2005
/17/ A.R. Thomas: "Data Mining: Methoden und Algorithmen intelligenter Datenanalyse". Vieweg+Teubner, Wiesbaden, 2010
/18/ A. Ruckstuhl: "Einführung in die robusten Schätzmethoden". Manuscript, ZHAW Zürcher Hochschule für Angewandte Wissenschaften, 2008
/19/ P. J. Huber: “Robust Statistics”, Wiley, New York, 1981
/20/ H. Kuchling: „Taschenbuch der Physik“, 20th Edition, Hanser Fachbuchverlag, 2010
/21/ L. Zhang, B.G. Thomas: "State of the Art in Evaluation and Control of Steel Cleanliness". Proceedings XXIVth National Steelmaking Symposium, Morelia, Mich., Mexico, p. 138-183, 2003

Posted on Oct 31, 2011.