Multi-objective robust design of helical milling hole quality on AISI H13 hardened steel by normalized normal constraint coupled with robust parameter design

Helical milling is a hole-making process that has been applied to hardened materials. Given the difficulty of achieving high-quality boreholes in these materials, the influence of noise factors, and the multiple quality outcomes involved, this work addresses the multi-objective robust design of hole quality on AISI H13 hardened steel. Experiments were carried out through a central composite design considering process and noise factors. The process factors were the axial and tangential feed per tooth of the helix and the cutting velocity. The noise factors were the tool overhang length, the material hardness, and the borehole height of measurement. Response models were obtained through response surface methodology for the roughness and roundness outcomes. The models explained the data variability well and showed good prediction capability. Mean and variance models were derived through robust parameter design for all responses. A similarity analysis through cluster analysis was performed, and average surface roughness and total roundness were selected for multi-objective optimisation. Mean square error optimisation was performed to minimize bias and variance. Multi-objective optimisation through the normalized normal constraint method yielded a robust Pareto set for the hole quality outcomes. The normalized normal constraint results outperformed those of other methods in terms of evenness and number of Pareto optimal solutions. The most compromise solution was selected as the one with the lowest Euclidean distance to the utopia point in the normalized space. Individual and moving range control charts confirmed the robustness achieved with regard to the noise factors at the most compromise Pareto optimal solution. The methodology applied for robust modelling and optimisation of helical milling of AISI H13 hardened steel was validated and may be applied to other manufacturing processes.

Molds and dies are important but complex tools for forming processes. Demand for molds and dies is evolving, requiring products with better quality and higher complexity, as well as reduced time-to-market and costs [1]. In this sense, the mold and die industry has an important impact on the competitiveness of forming processes. In mold manufacturing, it is estimated that 65% of the costs are due to finishing and semi-finishing by machining [2,3]. Among these processes, hole-making accounts for 25% to 50% of the cycle time and 33% of the total number of operations, demanding reliability due to the high value already added to the part being processed [4,5].
Hard milling has captured the attention of manufacturers in the mold and die industry, where materials such as AISI P20, H13 and others are commonly cut. Traditionally, cores and cavities of molds and dies are machined in the hardened state using electrical-discharge machining (EDM). Nowadays, new toolpaths are available to perform hard milling of these materials, whose hardness can range from 45 HRC to as hard as 64 HRC [6]. To enable the adoption of hard milling, computer-aided manufacturing (CAM) software developers have programmed special toolpaths so that tool deflection in milling sculptured surfaces can be compensated for [7]. For hole-making and opening cavities through milling, the helical milling path is an option that retains the advantages of milling while avoiding the difficulties of hard drilling.
In helical milling, the tool follows a helical path while rotating around its own axis. Due to the helical path and the use of an end mill instead of a drill, helical milling presents several advantages over conventional drilling. Holes with different diameters may be obtained with the same tool through adjustment of the helical diameter, enabling tool inventory reduction and setup time savings. Material is removed by the frontal cutting edges continuously and by the peripheral cutting edges discontinuously, whereas in conventional drilling material removal is continuous through the frontal cutting edges [8]. Low cutting forces in the axial direction are developed owing to the helical trajectory, the tool geometry, and the material removal mechanism [9]. The offset between tool and borehole allows easy chip evacuation and the application of modern lubri-cooling techniques, such as minimum quantity lubrication and air cooling [10].
Borehole quality is generally assessed through roughness and roundness outcomes [11].
Improved dimensional, geometrical and microgeometrical borehole quality is obtained through helical milling [9] owing to the helical path, low cutting forces, discontinuous material removal by the peripheral cut, and easy chip evacuation. Dimensional deviations may be corrected through adjustment of the helical diameter [12]. Tool wear may be monitored, since it occurs progressively, allowing tool life prediction and timely tool replacement [9]. High-quality borehole entrances and exits, with low burr levels and little fracture in these regions, are achieved [13]. In carbon fiber reinforced plastic (CFRP), low delamination levels are obtained through hole-making with the helical milling technique [14]. Through helical milling it is possible to obtain finished boreholes in a single operation, avoiding reaming [15]. Due to these advantages, helical milling has become established as a sustainable hole-making process. Some of the advantages of the helical milling process are illustrated in Figure 1. Helical milling has been widely applied for hole-making in difficult-to-cut materials, such as titanium alloys and CFRP [16]. There are still few studies on helical milling of hardened steels.
Important results in this field include the helical milling of AISI D2 steel at 60 HRC [9], AISI D3 steel at 55 HRC [17] and AISI 4340 steel from 34 to 45 HRC [18].
Studying helical milling of hardened materials offers a possibility of increasing the competitiveness of the mold and die industry, given the difficulties of drilling these materials. Efforts have been made to shorten entire process chains by direct hard machining of components, aiming to reduce production time and to replace conventional, time- and resource-intensive, inflexible grinding and electrical discharge machining processes [19]. As Iyer et al. [9] highlight, the advantage of manufacturing molds and dies with the workpiece already hardened, instead of roughing the material in its soft state followed by heat treatment and finishing by grinding or electrical discharge machining (EDM), lies in avoiding finishing operations with intensive labour and logistical needs. A borehole finished in a single operation also guarantees a higher-quality product, eliminating the errors associated with heat treatment and multiple setups. Due to the lack of experimental studies on helical milling of hardened steels, several scientific aspects remain poorly explored. Statistical and computational methods may be applied to achieve high-quality boreholes in these materials, bringing economic, social and environmental benefits to the whole tool and die industry chain.
Process modelling through design of experiments (DOE) aims to find an approximation for the response as a function of the control factors. Response surface methodology (RSM) is a DOE methodology widely applied for modelling and optimization, comprising a set of tools for building and exploring empirical models [20]. To achieve these aims, designed experiments must be conducted. Among the available options, the central composite design is attractive because it enables sequential experimentation, searching of the curvature region, economical experimentation, the fitting of first- and second-order models, and low prediction variance [21].
Robust optimisation deals with uncertainties in the input data [22]. In these problems, the aim is to achieve "solution robustness", where a solution is robust if it remains near-optimal under distinct scenarios of the input data, and "model robustness", where the solution remains nearly feasible in all scenarios [23]. More specifically, Taguchi methods aim at the robust design of a process or product with minimal cost and, especially, low sensitivity to fluctuations of uncontrollable variables [24].
Taguchi considers the problem of finding levels of the controllable factors which optimize the response while accounting for the sensitivity of the obtained optimum to the noise variables that affect the process within the experimental region. Noise factors are those which affect the response but cannot be controlled by the engineer at the production level [25]. Robust parameter design (RPD) aims to find levels of the control factors which make the optimal level of the response insensitive to noise factor variability [26,27]. The motivation for applying RPD is that controlling noise factors is usually more expensive than making the process insensitive to them [28].
Robust optimisation based on Taguchi's RPD concepts may be achieved through different approaches. Taguchi's idea of applying a statistical design to attain a product or process insensitive to environmental disturbances at the lowest possible cost is considered important in the quality engineering field. However, Taguchi's methods have been criticized as inefficient due to the use of saturated designs, the neglect of interactions, the inconsistent use of orthogonal arrays for optimisation, the difficulty of interpreting the signal-to-noise ratio and its use as a universal optimisation criterion, among other weaknesses [29]. Taguchi's crossed array designs and signal-to-noise metrics are still used, but the robustness concepts he introduced are nowadays applied with more rigorous statistical and optimisation methods. RPD allied to RSM through the combined array approach has been applied effectively, since it provides economical experimentation, allows the study of the process×noise interactions, and yields mean and variance models for the response through the propagation of error with regard to the noise factors [30]. The mean square error (MSE) can also be applied together with RSM and RPD to bring the mean of the response close to the target while simultaneously reducing the process variance [31]. Figure 2 highlights the differences between conventional and robust optimization. Essentially, in conventional optimization, Figure 2(b), the major concern is to bring the mean close to the target defined with regard to the specification limits of the product.
However, the level of the mean response closest to the target may present high variability with regard to the noise factors, resulting in a high probability of non-compliance with the specifications. Robust optimization, Figure 2(c), through the RSM, RPD and MSE approaches, deals with the trade-off between mean and variance. Although in this approach the optimal mean level of the response may lie farther from the target, robustness may be achieved by diminishing the variance of the response with regard to the noise factors, ensuring a high probability of meeting the specifications.
Another important aspect in engineering optimization, apart from the achievement of robustness, is the presence of several responses of interest to be optimized simultaneously.
Optimization procedures involving multiple responses are called multi-objective. In this approach, it is impossible to attain the individual optima of all responses simultaneously. Therefore, the optimal solution is not unique; it consists of a set of solutions called Pareto optimal solutions. A solution is Pareto optimal when none of the objective functions can be improved without worsening at least one of the remaining ones. Through a set of Pareto optimal solutions, the region called the Pareto front is explored, and the trade-off among the objective functions may be studied [32]. Several multi-objective optimization approaches are available in the literature. The normalized normal constraint method (NNC) [33] is a good alternative, guaranteeing good exploration of the Pareto frontier and, consequently, of the trade-off between the responses.
Helical milling of the soft alloy Al 7075 has been addressed through robust multi-response optimisation [34] and multivariate robust optimisation [35]. In both cases, the only noise factor considered was the tool overhang length. However, the challenges in the helical milling of AISI H13 hardened steel are unique: it is a difficult-to-cut material with a significant demand for hole-making. It is therefore important to take into consideration the different noise factors which are critical in hole-making of hardened steel.
Helical milling of AISI H13 hardened steel is challenging due to the material's high mechanical and wear resistance, inducing tool deflection and geometrical and microgeometrical deviations. These problems may be related to the high tool wear rates and high cutting forces developed in helical milling of AISI H13 hardened steel. Consequently, it is important to combine different methods to achieve the robust modelling and optimisation of this feasible but challenging operation in a difficult-to-cut material. The combination of the RSM, RPD, MSE and NNC methods is thus justified by the difficulties of hole-making in AISI H13 hardened steel through helical milling. Since conventional hard drilling presents unfavourable conditions [36], helical milling is a practicable hole-making process which has been applied to these materials with good results [9,18].
As helical milling of AISI H13 hardened steel has been insufficiently studied, this study may serve as a reference for manufacturers of moulds and dies. Choosing suitable helical milling cutting conditions to ensure high quality in hole-making in these materials is a difficult task. Given the challenges in machining hardened steels such as AISI H13, considering noise factors in the experimental planning is a consistent proposal to attain variability minimization. As important borehole quality parameters are expressed by different and frequently dissimilar outcomes, such as roughness and roundness, it is important to apply multi-objective optimisation in the helical milling of AISI H13 hardened steel. By conjoining RSM/RPD and multi-objective optimisation, boreholes of AISI H13 hardened steel with robust optimal quality levels for multiple responses may be attained. The methodology proposed in the present study can also be applied to other manufacturing processes and systems which present significant noise factors and multiple outcomes of interest, as a statistical, mathematical and computational procedure.
The purpose of this investigation is the multi-objective robust modelling and optimization of borehole geometry and roughness in the helical milling of AISI H13 hardened steel. Through the results attained in this study, boreholes with high geometrical and microgeometrical quality, robust with regard to noise, may be achieved in the helical milling of AISI H13 hardened steel.
These goals are achieved through the application of the RPD, MSE and NNC methods. This statistical, mathematical and computational approach may also be applied to the robust optimisation of other manufacturing processes.

Helical milling kinematics
The helical milling kinematics is basically defined by a helical toolpath conjugated with tool rotation around its own axis. The kinematics of helical milling may be studied considering the vector decomposition of the helical feed velocity (vf) into its tangential and frontal components [14]. This allows the engineer to study the effect of peripheral and frontal cutting on the borehole quality [16].

Figure 3. Helical milling kinematics [35], with permission from Springer Nature, license number 4467180469015.
The axial depth of cut (ap) varies according to the contact angle (φ), following the approach of Denkena et al. [14]; alternatively, taking an arbitrary inspection ratio, the proportions of ap related to the frontal and peripheral cut can be determined, according to the approach of Brinksmeier et al. [8].
However, the maximum axial depth of cut (ap*) serves as a measure of the geometrical mechanical load, since it is related to the chip geometry. The maximum axial depth of cut (ap*) depends mathematically on the feed velocity components in the axial and tangential directions [14]. Firstly, the helix angle (α) can be expressed through Equation 5 and, subsequently, ap*, which is the helical pitch in [mm/rev], is calculated through Equation 6, considering the helix angle and the length of the circular path (π×Dh). By manipulating this expression considering Equations 1-4, ap* may be defined with regard to fza and fzt to understand the effect of the portions of cut related to the axial and peripheral cut, respectively [16]. According to Iyer et al. [37], in helical milling without a pre-hole the radial depth of cut is related to the helical diameter Dh.
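The kinematic relations above can be sketched numerically. The snippet below is an illustrative Python sketch (not the authors' code), assuming the relations stated in the text: the helix angle follows from the axial and tangential feed velocity components, and ap* is the axial advance over one circular path of length π×Dh. All numerical values are hypothetical.

```python
import math

def helix_angle(v_fa, v_ft):
    """Helix angle alpha (rad) from the axial (v_fa) and tangential (v_ft)
    components of the helical feed velocity (Equation 5)."""
    return math.atan2(v_fa, v_ft)

def max_axial_depth(d_h, alpha):
    """Maximum axial depth of cut ap* (the helical pitch, mm/rev):
    axial advance over one circular path of length pi*Dh (Equation 6)."""
    return math.pi * d_h * math.tan(alpha)

# Hypothetical values: helical diameter Dh = 12 mm, v_fa = 0.5 mm/s, v_ft = 10 mm/s
alpha = helix_angle(0.5, 10.0)
ap_star = max_axial_depth(12.0, alpha)   # = pi * 12 * (0.5 / 10) mm/rev
```

Expressing ap* through the feed ratio v_fa/v_ft makes explicit how the axial and tangential portions of the feed share the mechanical load.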

Noise factors identified in the helical milling of hardened steel
As stated earlier, a noise factor is a variable which affects the process but cannot be controlled at the production level. However, noise factors may be fixed at the experimental stage to achieve process robustness. An important noise factor in mold and die machining is the tool overhang length, which is defined not by the engineer's preference but by the workpiece geometry [35,38]. In cavity machining, a long tool overhang is necessary to avoid collision between the tool holder and the workpiece. The tool overhang length may cause tool deflection and vibration and, consequently, affect roughness, geometrical error, and productivity [39,40]. Hence, it is necessary to find levels of the process parameters which provide robustness with regard to the tool overhang length [38].
Workpiece hardness may vary due to heat treatment, anisotropy, and other metallurgical characteristics. The effect of workpiece hardness was addressed in the helical milling of AISI 4340 steel [18]; nevertheless, it was not studied as a noise factor there. Workpiece hardness has been studied as a noise factor in the context of hard machining [41]. Paiva et al. [41] reported that, in turning of AISI 52100, after some longitudinal turning passes the hardness decreased from nearly 50 HRC to approximately 40 HRC. The authors also reported that after heat treatment the hardness was between 49 and 52 HRC. Hence, due to difficulties in heat treatment, the hardness may present significant variation. Since RPD theory advocates that it is less costly to achieve robustness to difficult-to-control factors than to control them, in hole-making of hardened materials it is important to find control factor levels guaranteeing insensitivity of the quality parameters with regard to hardness variation.
Another specific difficulty of hole-making is to achieve a borehole with consistent quality along the borehole height. In helical milling and other hole-making processes, the quality at the borehole end is usually poorer than at the beginning. This may be related to the larger tool-workpiece contact area, increasing tool deflection and radial force [14] and resulting in dimensional, geometrical and microgeometrical variation [15,42]. Furthermore, in helical milling the tool performs more orbital revolutions at the beginning than at the end of the borehole, smoothing the surface through the peripheral cut. Hence, the roughness is lower the closer the measured borehole height is to the borehole entry [43].

Robust parameter design and mean square error
RPD aims to find levels of the process factors that are robust with regard to the noise factors [25,44]. Due to several statistical inconsistencies of Taguchi's methods [29], his philosophy was adopted by several researchers through more reliable alternatives for RPD. Some of these proposals employ RSM procedures for RPD, whereby mean and variance models are obtained [30]. Welch et al. [25] proposed a design including process and noise factors which requires fewer runs than the crossed arrays. Shoemaker et al. [26] called this RPD approach the combined array, highlighting the importance of the process×noise interactions. Through the study of the process×noise interactions, the propagation of error is feasible and levels of the process factors robust to noise factor variation can be found.
Box and Jones [31] described a response model as a function of process factors, xᵀ = (x1, x2, …, xk), and noise factors, zᵀ = (z1, z2, …, zr), which may be written in matrix form according to Equation 8:

y(x, z) = β0 + xᵀβ + xᵀBx + zᵀγ + xᵀΔz + ε (8)

where β0 is the intercept, β is the vector of linear coefficients of the process factors, B is the matrix of second-order coefficients of the process factors, containing quadratic and interaction terms, γ is the vector of linear coefficients of the noise factors and Δ is the matrix of process×noise interaction terms [38]. Taking into account a coded design with continuous and random noise factors in the vector z, it may be presumed that E(z) = 0 and Var(z) = V, where V (r×r) is the variance-covariance matrix of z, symmetric positive definite, usually assumed diagonal with terms σzj² = σz², j = 1, …, r. V is assumed diagonal since the noise factors are considered uncorrelated among themselves [46,47].
Applying the mean operator with regard to z to Equation 8 [30]:

E_z[y(x, z)] = β0 + xᵀβ + xᵀBx (9)

Analogously, the variance model is obtained through the application of the variance operator with respect to z to Equation 8 [30]:

Var_z[y(x, z)] = (γ + Δᵀx)ᵀ V (γ + Δᵀx) + σ² (10)
The slope in the direction of the noise factors is important to predict the process variance of the response over the experimental region. The slope is the vector of partial derivatives of the response model in Equation 8 with respect to z [30,46,47], according to Equation 11:

∂y(x, z)/∂z = γ + Δᵀx (11)
The variance-covariance matrix V is assumed diagonal and it may be presumed that σzj² = σz² = 1, due to the codification of the noise factor levels in ±1, so that the variance-covariance matrix reduces to the identity matrix, V = I. Consequently, Equation 10 may be rewritten as [48]:

Var_z[y(x, z)] = (γ + Δᵀx)ᵀ(γ + Δᵀx) + σ² (12)

In matrix notation, the model in Equation 8 over the N experimental runs may be written as y = X*β* + ε (Equation 13). The matrix X* has 1 + 2k + k(k − 1)/2 + r + rk columns, corresponding to the intercept, the linear, quadratic and second-order interaction effects of the process factors, the linear effects of the noise factors, and the process×noise interactions, respectively. Similarly, β* comprises the response model coefficients β0, β, B, γ, and Δ. The least squares estimate of β* is given in Equation 14 [47]:

β̂* = (X*ᵀX*)⁻¹X*ᵀy (14)
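As a numerical sketch of the propagation of error described above, the functions below evaluate the mean model (Equation 9) and the variance model with V = I (Equation 12) for a fitted combined-array model. All coefficient values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical fitted coefficients for k = 2 process and r = 2 noise factors
b0 = 2.0                                  # intercept
beta = np.array([0.5, -0.3])              # linear process coefficients
B = np.array([[0.1, 0.05],
              [0.05, 0.2]])               # quadratic/interaction process terms
gamma = np.array([0.4, -0.2])             # linear noise coefficients
Delta = np.array([[0.3, 0.0],
                  [0.1, -0.2]])           # process x noise interaction terms
sigma2 = 0.05                             # residual variance

def mean_model(x):
    """E_z[y] = b0 + x'beta + x'Bx: noise terms vanish since E(z) = 0."""
    x = np.asarray(x)
    return float(b0 + x @ beta + x @ B @ x)

def variance_model(x):
    """Var_z[y] = (gamma + Delta'x)'(gamma + Delta'x) + sigma2, with V = I."""
    slope = gamma + Delta.T @ np.asarray(x)   # slope in the noise directions
    return float(slope @ slope) + sigma2
```

Note that the variance model depends on x only through the process×noise terms Δ: without significant process×noise interactions there is no leverage to make the response robust.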
The matrix C* is composed of important submatrices, as illustrated in Figure 5.
The submatrices C*kl, k, l = 1, 2, 3, are the variance-covariance matrices for the various subsets of factors of the response model, as illustrated in Figure 5. When using the RPD combined array approach, homoscedasticity may not hold due to the presence of the difficult-to-control noise factors. In these cases, the response model fitted by ordinary least squares (OLS) may present prediction problems, and weighted least squares (WLS) may be employed instead.
The matrix C* may be extended to WLS estimates as C* = (X*ᵀW*X*)⁻¹, where W* is a diagonal matrix containing the weights related to the heteroscedastic variance of the observations. The OLS method relies on the assumption Cov(ε) = σ²I, which implies independence of the yi and homoscedasticity, i.e., variance homogeneity, Var(yi) = σ², i = 1, …, N, which is not easy to achieve in RPD. In the presence of heteroscedasticity, the WLS model is recommended [49,50]. Romano and Wolf [51] showed that WLS is more efficient than OLS also with non-normal error terms.
The use of a weighted residual sum of squares is remedial when some observations present larger errors than others. Observations with lower variances receive larger weights in the WLS regression. Then, instead of considering the variance constant across observations, it can be considered that Var(yi) = σ²/wi. The variance function σ² is still constant, but the individual variances σ²/wi, i = 1, …, N, differ among observations [50]. In Appendix A it is demonstrated that OLS results may be applied to WLS problems.
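A minimal sketch of the WLS estimator described above, assuming user-supplied weights wi; the data are hypothetical, and with equal weights the estimate reduces to OLS:

```python
import numpy as np

def wls_fit(X, y, w):
    """Weighted least squares: beta = (X'WX)^-1 X'Wy with W = diag(w).
    Observations with lower variance (larger weight) dominate the fit."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hypothetical data lying exactly on y = 1 + 2x
X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
y = np.array([1.0, 3.0, 5.0, 7.0])
beta_hat = wls_fit(X, y, np.ones(4))   # equal weights: reduces to OLS -> [1, 2]
```

In practice the weights are unknown and are typically estimated from the residual variance structure, e.g., as the reciprocal of a fitted variance function.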
Mean square error optimization is commonly used together with RPD to allow simultaneous bias and variance optimization. The mean square error measures the expectation of the quadratic deviations of an estimator; hence, the MSE measures how close the estimated value is to a target value. Box and Jones [31] proposed the first application of MSE in RPD. Vining and Myers [52] stated that this application is more suitable with the combined array approach. Lin and Tu [53] proposed the same formulation as Box and Jones [31], denoting this approach dual optimization. This formulation, with regard to the mean and variance models in Equations 9 and 12, is defined in Equation 15:

MSE(x) = [E_z(y) − T]² + Var_z(y) (15)

where T is the target value of the response.
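The dual-optimization criterion can be illustrated with a short sketch; the numbers are hypothetical, showing how a slightly biased but low-variance (robust) solution can present a lower MSE than an on-target, high-variance one:

```python
def mse(mean_hat, var_hat, target):
    """Dual-optimization criterion: squared bias plus process variance."""
    return (mean_hat - target) ** 2 + var_hat

# Hypothetical numbers, target T = 10
mse_on_target = mse(10.0, 4.0, 10.0)   # zero bias, high variance  -> 4.0
mse_robust = mse(10.8, 0.5, 10.0)      # small bias, low variance  -> 1.14
```

This is precisely the trade-off of Figure 2: minimizing MSE accepts some bias in exchange for insensitivity to the noise factors.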

Similarity study through cluster analysis
Cluster analysis aims to group similar objects in the same group and dissimilar ones in different groups. The analysis may be applied to group observations or variables. For m variables to be hierarchically grouped, the analysis begins with m groups, each containing one variable, and finishes with a single group containing all the variables. In each step, the variables are grouped according to an adopted similarity measure. Consider the data of the m variables arranged in the matrix Y as follows [54]:

Y = [y1 y2 … ym] (16)

where each column yj, j = 1, …, m, contains the N observations of the j-th variable.
A proximity matrix storing the similarity or dissimilarity among the variables is necessary to start the analysis. For instance, correlation is a similarity measure, while distance is a dissimilarity measure. The proximity matrix P of order m × m contains the proximity coefficients prs between the variables yr and ys, yielding m(m − 1)/2 distinct proximity measures, since P is symmetric, i.e., prs = psr [54].
An example of a dissimilarity measure is the quadratic distance between two variables yr and ys [54]:

drs² = (yr − ys)ᵀ Ψ (yr − ys)

with Ψ as a metric of interest. When Ψ = I, with I an identity matrix, this quadratic distance is the squared Euclidean distance, recommended when the variables present similar scales.
When Ψ = diag(1/sii), with i = 1, …, N and sii the variance of the i-th variable, drs² is the standardized squared Euclidean distance, suitable for uncorrelated variables with distinct scales. To account for correlation, it is recommended to use Ψ = S⁻¹, with S as the sample variance-covariance matrix; in this case drs² is the generalized Mahalanobis distance [54].
Considering the vectors of values of the variables yr and ys in the N-dimensional space of the variables, a similarity measure srs = ssr has domain [0; 1]. If srs = 1, the variables present perfect similarity, that is, yr = ys. A dissimilarity can be obtained from a similarity through drs = 1 − srs. The converse is not possible, since distance measures have domain [0; ∞). The correlation coefficient rrs is a controversial similarity measure, since its domain is [−1; 1] and rrs = 1 does not mean perfect similarity, but a perfect linear relationship between the variables. The absolute correlation |rrs| has been suggested as a similarity measure; however, the problem of perfect similarity persists. When clustering variables, |rrs| is generally used as the similarity measure and drs = 1 − |rrs| as the dissimilarity measure [54].
The hierarchical grouping methods classify the variables in sequential steps, representing the groups in a dendrogram. A general agglomerative algorithm for m variables may be described as follows [54]:
a. To start with m groups, each containing a single variable.
b. To compute the proximity matrix among the current groups.
c. To merge the two most similar (least dissimilar) groups, say r and s, into a new group rs.
d. To update the proximity between the new group rs and each remaining group t, e.g., by averaging the proximities weighted by mrs and mt, the number of variables in the groups rs and t, respectively.
e. To repeat the steps c to d until grouping all the variables hierarchically.
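The steps above can be sketched for variable clustering using drs = 1 − |rrs| as the dissimilarity measure; the three response columns below are synthetic stand-ins (not the experimental data), with two of them deliberately correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic response columns: the first two are strongly related, the third is independent
ra = rng.normal(size=50)
rt = ra + 0.1 * rng.normal(size=50)
ront = rng.normal(size=50)
Y = np.column_stack([ra, rt, ront])

R = np.corrcoef(Y, rowvar=False)   # |r_rs| as the similarity measure
D = 1.0 - np.abs(R)                # d_rs = 1 - |r_rs| as the dissimilarity measure
np.fill_diagonal(D, np.inf)        # ignore self-distances

# First agglomerative step: merge the most similar pair of variables
r_idx, s_idx = np.unravel_index(np.argmin(D), D.shape)  # -> columns 0 and 1
```

Variables merged early (low dissimilarity) are redundant for optimisation purposes, which motivates retaining only one representative per group before the multi-objective stage.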

Normalized normal constraint multi-objective optimization method
A multi-objective optimization task involves the simultaneous optimization of m objective functions. In these problems, the solution is not unique, but a set of solutions called Pareto optimal or non-inferior solutions. Operationally, the performance of one of the m objective functions cannot be improved without worsening at least one of the m − 1 remaining ones [32].
Multi-objective tasks are frequently employed to solve engineering problems. Recently, multi-objective optimization was applied to steel case hardening to solve the trade-off between hardness and residual stresses [55]. The trade-off between minimum weldline temperature and clamping force in plastic injection moulding was also solved through multi-objective optimization [56].
The terminology of multi-objective optimization should be presented. A generic multi-objective optimization problem is defined as follows:

min_x [f1(x), f2(x), …, fm(x)]ᵀ, subject to x ∈ Ω

where Ω is the feasible region. To avoid scale and size effects, it is important to normalize the objective functions and the objective space. In the normalized solution space, the utopia and pseudo-nadir points are defined from the payoff matrix, which measures the trade-off limits among the m objective functions. The main diagonal of the payoff matrix is composed of the individual minima of the objective functions, which compose the utopia vector f^U. The pseudo-nadir vector is obtained by maximizing the values of the payoff matrix rows. Besides, each column of the payoff matrix is an anchor point. The payoff matrix is shown in Equation 22 and the normalized payoff matrix in Equation 23.
The normalized normal constraint method (NNC) [33] may be applied to achieve a set of Pareto optimal points. The NNC formulation minimizes the m-th normalized objective function (Equation 24) subject to the normal constraints (Equation 25), and may be solved to obtain each Pareto optimal solution with regard to a point Q_ij, j = 1, …, nsub, on the utopia line. Through NNC, a set of well-distributed Pareto optimal solutions may be found on the Pareto frontier.
To obtain a set of solutions on the Pareto frontier, the point Q_ij is varied through the weights wij attributed to the objective functions, i = 1, …, m. To solve the NNC method for a set of weight vectors, j = 1, …, nsub, the number of subproblems (nsub) is calculated according to Equation 29, considering the desired spacing δr among the points on the utopia line vector N_r, where ηr = 1 + 1/δr is the number of points on the utopia line vector. For the bi-objective case, nsub = ηr.
The NNC method can be formulated in algebraic notation to facilitate its understanding and implementation [57]. Considering m = 2 objective functions, r = m − 1 = 1. Consequently, the utopia line vector N_1 in the constraint in Equation 25 may be calculated as the vector from the first to the second normalized anchor point (Equation 30). The vector that links the point Q_ij to the sought Pareto optimal solution, f̄ − Q_ij, for m = 2, is derived in Equation 31. Consequently, by using Equations 30 and 31, the constraint in Equation 25 may be expressed as in Equation 32. In this way, the NNC method for m = 2 objective functions can be explicitly formulated in Equations 33 and 34. NNC is an a priori multi-objective method: in a priori approaches the decision maker states a preference before the optimisation, for example by defining weights for each objective function [58]. Other a priori multi-objective optimization methods are the well-known weighted sum (WS) and the normal boundary intersection (NBI) [59]. Das and Dennis [60] showed that the WS method fails to achieve an evenly spread Pareto set: not only in non-convex Pareto fronts, but also in convex ones, an even spread of weights does not produce an even spread of Pareto optimal solutions. The NBI method was then presented aiming at evenly distributed Pareto optimal solutions [59]. The formulation of the bi-objective NBI method is similar to that in Equations 33 and 34; however, in NBI the constraint in Equation 34 is an equality instead of an inequality [57].
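The paper's NNC routine was programmed in Matlab®; the sketch below illustrates the bi-objective formulation in Python on a hypothetical convex toy problem, with anchor points, normalization, utopia line points and the inequality normal constraint, solved by SQP (scipy's SLSQP):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical convex bi-objective toy problem
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

# Anchor points: individual minima of each objective function
x1 = minimize(f1, [0.0, 0.0]).x                  # minimizer of f1
x2 = minimize(f2, [0.0, 0.0]).x                  # minimizer of f2
f_utopia = np.array([f1(x1), f2(x2)])            # utopia point
f_nadir = np.array([max(f1(x1), f1(x2)), max(f2(x1), f2(x2))])  # pseudo-nadir

def f_bar(x):
    """Objective vector mapped to the normalized space."""
    return (np.array([f1(x), f2(x)]) - f_utopia) / (f_nadir - f_utopia)

fa1, fa2 = f_bar(x1), f_bar(x2)   # normalized anchor points
N1 = fa2 - fa1                    # utopia line direction

pareto = []
for w in np.linspace(0.0, 1.0, 11):              # nsub = 11, delta_r = 0.1
    Q = w * fa1 + (1.0 - w) * fa2                # point on the utopia line
    # Normal constraint: N1 . (f_bar(x) - Q) <= 0
    cons = {"type": "ineq", "fun": lambda x, Q=Q: -(N1 @ (f_bar(x) - Q))}
    res = minimize(lambda x: f_bar(x)[1], x1, method="SLSQP", constraints=[cons])
    pareto.append([f1(res.x), f2(res.x)])
```

For this toy problem the Pareto front is the segment between the two anchor points and the eleven solutions are evenly spread along it; in the paper's setting the objective functions would be the MSE models of the selected responses, with nsub = 51.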
The distribution of the points on the Pareto frontier is very important to guarantee that the weights express the desired preference for the objective functions under evaluation. A measure of the distribution's evenness of the Pareto points was proposed and may be useful to quantify the distribution of the Pareto optimal solutions achieved [61]. To deal with this limitation of the NNC method, Messac and Mattson [61] proposed relaxing the weight limits in the constraint exposed in Equation 27. Another approach to deal with this problem was proposed by Sanchis et al. [62], transforming the payoff matrix, which is composed of the anchor points, to achieve its ideal form and guarantee a better exploitation of the Pareto front. The limitation of the anchor points and, consequently, of the payoff matrix, is due to the similarity of the objective functions in the optimisations.
The similarity may be measured considering the significant correlation between the objective functions under consideration and/or the limited distance between the responses in the multi-objective space, always evaluated in pairs. If two objective functions present high correlation or small distance in the multi-objective space, the trade-off between them is irrelevant, and a multi-objective optimisation in this scenario may not achieve interesting results, since the optimisation of one of these functions may reach near-optimal results for the other one.
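As a minimal sketch of such a pairwise check, the absolute Pearson correlation can flag a redundant pair of objectives; the synthetic data and the 0.9 cutoff below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical measured responses over the same experimental runs
# (synthetic data standing in for, e.g., Ra and Rz).
rng = np.random.default_rng(1)
ra = rng.normal(1.0, 0.2, 30)
rz = 4.0 * ra + rng.normal(0.0, 0.05, 30)   # strongly related response
r = np.corrcoef(ra, rz)[0, 1]               # Pearson correlation
similarity = abs(r)                         # similarity as |correlation|
if similarity > 0.9:                        # illustrative cutoff
    print("highly similar: keep only one of the pair for optimisation")
```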
Consequently, in the present work, a similarity analysis is proposed prior to the multi-objective optimisation to deal with redundancy among the objective functions. The objective is mainly related to the optimisation, and not to the modelling, since some functions measure different performance characteristics but lead to similar results in optimisation.
After achieving the Pareto frontier, it is sometimes required to rank the Pareto solutions.

The profilometer is computer aided with the Ultra software from Taylor Hobson®. The cut-off was 0.25 mm. The measurements of roundness and cylindricity were carried out using a Talyrond 131 form measurement system from Taylor Hobson® with a ruby probe, 2 mm range, and high resolution of 6 nm. The roundness measurement system is also computer aided with the Ultra software from Taylor Hobson®.

vc was chosen regarding tool manufacturer recommendations. Important noise factors were adopted to achieve levels of the control factors that are robust with regard to the influence of the noise factors on the outcomes. As argued in section 2, the noise factors (z) were material hardness (hd), tool overhang length (lto) and borehole height (lb). Control and noise factors' levels are in Table 1. Analyses were carried out in Matlab®, Minitab® and spreadsheet software. The NNC routine was programmed in Matlab® considering nsub = 51 solutions, therefore with the increment δr = 0.02. The sequential quadratic programming (SQP) algorithm was used with the maximum number of iterations equal to 1500. The experimental constraint x ∈ Ω was according to the CCD design region, i.e., fza² + fzt² + vc² ≤ ρ², where ρ is the radius of the spherical region, set to satisfy the design rotatability criterion. Considering three control factors, ρ = (2^k)^(1/4) = (2³)^(1/4) = 1.682.

Substep a1: Manufacturing process selection. In the studied case, it was the helical milling of AISI H13 hardened steel to evaluate borehole roughness and roundness.
Substep a2: Selection of control factors and their levels. In the present case, the control factors and their levels are depicted in Table 1.
Substep a3: Selection of noise factors and their levels. In the present case, the noise factors and their levels are also depicted in Table 1.

Substep a4: Combined array definition. The definition of the combined array may be provided considering a response surface design. In the present case, a CCD was used with 32 factorial tests considering a half-fraction factorial with resolution V (nf = 2^((k+r)−p) = 2^(6−1) = 32), 12 tests in centre points (nc) and 6 axial points for the control factors (na = 2k = 2×3 = 6), with 50 tests in total. It is known that by adding more centre points the experimental error is better estimated. Considering the economy obtained by using the fractional design, the cost associated with 12 centre points, instead of 9 as recommended, is negligible.

Substep d6: j < nsub? While j < nsub, solve NNCj by using the SQP algorithm.
Substep d7: j = j + 1, update wij and Q̄ij. In the positive case of substep d6, update these quantities to perform the next optimisation through NNC.

Figure 10 shows the Ward's dendrogram with absolute correlation as the similarity measure. It can be observed that one group is formed by the roughness responses, with similarity equal to 97.12%, while the other group is composed of the geometrical error outcomes, with similarity equal to 82.04%. For the two groups, the correlation was positive. Considering the high similarity level, it is impractical to consider all these responses in a multi-objective optimisation scenario, since highly correlated outcomes should present similar results in the optimisation. Therefore, in each group one response was selected considering its importance and the goodness-of-fit measures, as presented in the next sections, to assure good optimisation results and reproducibility.
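A sketch of this grouping step, assuming synthetic response data in place of the measured outcomes and using 1 − |correlation| as the distance with Ward linkage; the two-cluster cut mirrors the roughness and geometrical-error groups, but the data and labels are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic response matrix: columns stand in for Ra, Rz, Ront, Cylt.
rng = np.random.default_rng(7)
n = 30
base_r = rng.normal(size=n)              # latent roughness behaviour
base_g = rng.normal(size=n)              # latent geometric-error behaviour
data = np.column_stack([
    base_r + 0.1 * rng.normal(size=n),   # "Ra"
    base_r + 0.1 * rng.normal(size=n),   # "Rz"
    base_g + 0.3 * rng.normal(size=n),   # "Ront"
    base_g + 0.3 * rng.normal(size=n),   # "Cylt"
])
corr = np.corrcoef(data, rowvar=False)
dist = 1.0 - np.abs(corr)                # distance: 1 - |correlation|
iu = np.triu_indices(4, k=1)             # condensed form expected by linkage()
Z = linkage(dist[iu], method='ward')
groups = fcluster(Z, t=2, criterion='maxclust')
print(groups)  # roughness pair in one group, form-error pair in the other
```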

Similarity analysis
In the roughness group, Ra was selected, since it is the outcome defined by standards to determine the state of surfaces in technical drawings, being the most important parameter in investigations of finishing manufacturing processes. Also, as will be presented in section 7.3, Ra presented the best goodness of fit in its group.

Figure 11. One of the roundness measurements for test 36

Taking arbitrarily the measurement in position 8, in Figure 11, the presence of protuberances in the quadrants of the roundness profile can be observed. As observed by Costa et al. [49], these geometrical errors are due to the machining centre axes' backlashes. The geometrical error in circular trajectories appears during the change of the movement direction of the machine table during the interpolation of the x and y axes [66]. The backlashes were identified in the tests which presented Ront smaller than 11 μm. This reflects the good geometrical quality of the obtained boreholes, and the machining centre limit in achieving better results, since improved results would be achieved in the absence of backlash.
Besides the total roundness, the total cylindricity was also evaluated. The cylindricity measurement method used, inherent to the measurement system, takes into account not only the cylindrical form error but also the eccentricity error, by calculating a least squares centre line considering the centre points of the roundness measurement planes. Thus, minimizing cylindricity implies minimizing the cylindrical form error and also the eccentricity among the measured planes of the borehole.

Table 3 presents the ANOVA results for Ront and Cylt, which may be analysed together with the main, quadratic, and interaction effects plotted in Figures 13 and 14. For Ront, with regard to the linear process effects, fzt was the only control factor with significance, with a negative effect, as observed graphically in Figure 13. Observing this effect graphically combined with the quadratic effect, also statistically significant, when increasing fzt up to the level 0.11 mm/tooth, Ront slightly increases, while above this value the increase of fzt resulted in the decrease of Ront. The factors fza and vc also presented quadratic effects with convexity upward. With regard to the noise factors, only lto presented a significant linear effect, such that the increase in lto caused an increase in Ront. These effects can also be confirmed through the signs of the regression coefficients presented hereafter.
For Cylt, with regard to the linear process effects, only vc was significant. The quadratic effect of this factor was also the only significant one, with convexity downward and minimum cylindricity achieved near the centre point level vc = 60 m/min. The factors fza and fzt were not significant individually; however, their effects may be evaluated graphically. The significance is related to the experimental error, and Box and Draper [20] claim that process knowledge and graphical analysis may be important to understand the results. The process effects are important to enable the RPD; however, the process×noise interactions are fundamental to achieve robustness. Concerning the interaction effects for Ront, there was no significant interaction between process factors, while among the process×noise interactions, fza×lto and fzt×lto were statistically significant. Tool overhang length variation may cause tool deflection and vibrations, with resulting geometrical error. Thus, levels of fza and fzt that are robust to lto variation may be achieved.
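The role of the process×noise interaction slopes can be illustrated with a minimal single-noise-factor sketch: for an assumed reduced model (the coefficients below are invented, not the fitted values from the paper), the delta-method variance transmitted by the noise factor vanishes at the process level where the slope with regard to the noise factor is zero.

```python
# Illustrative reduced model with a process-by-noise interaction (coded units):
#   y = b0 + b1*fza + b13*fza*lto + c1*lto
# Assumed coefficients, not the fitted values from the paper.
b0, b1, b13, c1 = 10.0, -0.5, 0.8, 1.2
sigma_lto2 = 1.0                       # noise variance in coded units

def mean_model(fza):
    # E_z[y] with the noise factor at its mean, E[lto] = 0
    return b0 + b1 * fza

def var_model(fza):
    # Delta method: Var_z[y] = (dy/d lto)^2 * sigma_lto^2
    slope = c1 + b13 * fza
    return slope**2 * sigma_lto2

# Robust level: set the slope to zero  ->  fza = -c1/b13
fza_robust = -c1 / b13
print(fza_robust, var_model(fza_robust))
```

This is the mechanism exploited by the robust optimisation: the interaction coefficient b13 lets the choice of fza cancel the transmitted noise effect c1.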
These levels will be achieved with robust multi-objective optimization.
About the interaction effects for Cylt, the interaction between the process factors fzt×vc was statistically significant. The effect of fzt is positive when vc is low; however, when vc is high, the effect of fzt on Cylt is negative. With regard to the process×noise interactions, fza×lto, fza×hd, vc×lto, vc×hd and vc×lb were statistically significant. These interactions may be analysed graphically to find levels of the process factors which make the form error responses insensitive to noise variation. With regard to the first significant process×noise interaction, when increasing fza, the response Cylt becomes robust with regard to variation of the noise factor lto, as can be confirmed graphically in Figure 14. However, taking into consideration the interaction fza×hd, low levels of fza make the outcome Cylt insensitive to the noise factor hd. This trade-off between different process×noise interactions involving a fixed process factor is complex to manage but can be solved through optimisation. The process×noise interactions involving the factor vc indicate that lower levels of vc make the outcome Cylt insensitive with regard to lto, hd and lb, as observed graphically in Figure 14. The regression coefficients are presented in Table 3, and the signs of the effects can be confirmed graphically in Figure 13.

As explained in section 7.1, about similarity analysis, the optimisation was performed considering Ront to represent the outcomes with regard to geometrical error, since this output is the most frequently used to represent the geometrical error of boreholes and presented the best fitting in the modelling. The modelling of Cylt is important to present models which could represent the cylindrical error of the borehole; however, its optimisation would result in solutions similar to those of Ront, due to the high correlation between these outcomes.

The optimisation results for MSE[Ront], presented in Table 6, were similar to the dual optimisation of E[Ront] and Var[Ront] with weight w1 = 0.8.

Figure 18. Response surface plots for MSE[Ront], xhold = xCtPt
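The MSE approach mentioned above can be sketched as a single-objective constrained problem, MSE = (bias)² + variance, minimized over the spherical CCD region; the mean and variance models and the target below are assumed for illustration, not the fitted models from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rho = (2**3) ** 0.25                   # spherical CCD radius, about 1.682

# Illustrative quadratic mean and variance models in coded units
# (assumed coefficients, not the fitted models from the paper).
def mean_y(x):
    return 5.0 + 0.8*x[0] - 0.5*x[1] + 0.3*x[0]**2 + 0.2*x[1]**2

def var_y(x):
    return 0.5 + (0.4 + 0.6*x[0])**2   # delta-method style variance term

target = 4.0                           # illustrative target for the mean
mse = lambda x: (mean_y(x) - target)**2 + var_y(x)

# Experimental region constraint: x'x <= rho^2 (scipy 'ineq' means fun >= 0).
cons = {'type': 'ineq', 'fun': lambda x: rho**2 - x @ x}
res = minimize(mse, np.zeros(2), method='SLSQP', constraints=[cons])
print(res.x, mse(res.x))
```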

Roughness on helical milling of AISI H13 hardened steel
Surface roughness is mostly used as an index to determine the surface finish and, consequently, workpiece quality in machining processes [67][68]. Besides, it is an important requirement to evaluate machined surfaces, affecting lubrication, friction, corrosion resistance, fatigue resistance, and other mechanical properties [69][70][71]. The average surface roughness, Ra, and the average maximum height of the profile, Rz, were evaluated on the borehole surfaces of AISI H13 hardened steel obtained through helical milling. While the first measures the average of the absolute sampled values of the filtered profile, the second measures the average of the maximum range obtained in each cut-off, aiming to evaluate the average and dispersion of the measured profiles. Figure 19 shows the filtered profile obtained in one of the measurements realised in the borehole of the workpiece obtained in test 6. In Table 2 it can be observed that lb = −1, i.e., the measurement position was at the beginning of the borehole.

Figure 19. Roughness measurement for test 6

While the edge in action has not yet finished the chip removal, the peak of the surface left by the frontal cutting edge can be removed by the peripheral cutting edge in action at the next turn. Therefore, lower roughness is expected at the beginning of the borehole than near the exit of the borehole.
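The two roughness definitions above can be sketched numerically on a synthetic filtered profile; the data are assumed (the real measurements come from the Taylor Hobson system), but the formulas follow the standard definitions of Ra and Rz.

```python
import numpy as np

# Synthetic filtered profile: deviations from the mean line (um),
# split into five equal cut-off lengths.
rng = np.random.default_rng(3)
n_cutoffs, pts = 5, 200
profile = rng.normal(0.0, 0.4, n_cutoffs * pts)

ra = np.mean(np.abs(profile))          # Ra: average absolute deviation
segments = profile.reshape(n_cutoffs, pts)
# Rz: mean peak-to-valley height over the cut-offs
rz = np.mean(segments.max(axis=1) - segments.min(axis=1))
print(round(ra, 3), round(rz, 3))
```

As the sketch makes explicit, Rz is always at least as large as Ra and is far more sensitive to isolated peaks and valleys, which is why the pair jointly characterises the average and dispersion of the profile.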
Cutting velocity, vc, presented a positive linear effect on Ra.

The NNC multi-objective optimisation method was applied to E[Ra] and Var[Ra]. The results for nsub = 51 sub-problems are plotted in Figure 24. In this case, it can be observed that no dominated solutions were found in the filtering. The results of the NNC bi-objective optimisation for the 11 most representative trade-off points are summarized in Table 9. These results are options to manage the trade-off between the mean and variance of roughness. When the engineer is interested in lower levels of the mean of roughness, a higher level of w1, the weight related to E[Ra], should be assigned, being aware of the loss in variance, since its weight is w2 = 1 − w1. On the contrary, when the engineer wants to achieve a more promising robustness scenario for borehole surface roughness, with regard to tool overhang, workpiece hardness and borehole height variation, it is recommended to select a Pareto optimal solution which prioritises Var[Ra], with w2 higher than w1.

As an approach to minimize bias and variance, MSE[Ra] was also optimized. Figure 25 shows the response surface plot for MSE[Ra] in different perspectives. The optimization results of MSE[Ra] are summarized in Table 10. These results were similar to the ones obtained with w1 = 0.9 and w1 = 1 in the NNC for E[Ra] and Var[Ra], as can be confirmed in Table 9.

The Pareto frontier is presented in Figure 26(a). For the sake of simplicity, only 11 Pareto optimal solutions are summarised in Table 12. As can be observed in Figure 26, there is a constrained Pareto optimal solution, although it is not possible to conclude this graphically, since the fzt* value, uncoded, was equal to 1.576. Taking the sum of squares of the coded levels of the process factors, the result equals the squared spherical radius, ρ² = (2³)^(1/2). The solutions in the design space between the anchor point solutions are, for the most part, unconstrained Pareto optimal solutions.
Only some of the solutions near the MSE[Ra] minima are at the experimental region limit. The non-inferior solutions on the Pareto frontier in Figure 26(a) are possibilities to manage the trade-off between roughness and roundness.

To endorse the procedure, the NNC bi-objective optimisation results were compared with the NBI and WS methods. Figure 27 shows the overlaid Pareto frontiers for these three methods. After confirming that NNC outperformed NBI in terms of the number of Pareto optimal solutions and WS in terms of the evenness of the Pareto optimal solutions, it is important to confirm the optimisation to assure the reduction of the variability with regard to the noise factors. Therefore, the experimental results, together with simulated results considering the most compromise solution, f*[w1=0.5] = [3.759 μm², 1.048 μm²]ᵀ, were plotted in individual and moving range (MR) control charts in Figure 30. The variability expressed by the experimental runs is due to control and noise variables, while the variability expressed in the simulated Pareto optimal points was calculated considering the slopes, i.e., the partial derivatives with regard to the noise factors.
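The I-MR confirmation step can be sketched as follows, with hypothetical confirmation-run values standing in for the measured responses; the 2.66 and 3.267 constants are the standard I-MR chart factors for moving ranges of subgroup size two (3/d2 with d2 = 1.128, and D4).

```python
import numpy as np

# Hypothetical confirmation-run values (um), standing in for measurements
# at the most compromise solution; not the paper's experimental data.
y = np.array([3.1, 3.3, 3.0, 3.2, 3.4, 3.1, 3.2, 3.0, 3.3, 3.2])

mr = np.abs(np.diff(y))              # moving ranges of consecutive runs
mr_bar = mr.mean()

ucl_i = y.mean() + 2.66 * mr_bar     # individuals chart limits
lcl_i = y.mean() - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar              # moving range chart upper limit

in_control = bool(np.all((y >= lcl_i) & (y <= ucl_i)) and np.all(mr <= ucl_mr))
print(in_control)
```

Points inside the limits on both charts, as in this sketch, are the evidence used to confirm that the selected solution is robust to the noise factors.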

Conclusions
This paper presents a methodology for the multi-objective robust optimization of borehole quality obtained by helical milling on AISI H13 hardened steel. Since hole-making in hardened material is a difficult task, the results achieved with the proposed methodology are important to attain competitive advantage in the mould and die industry. The proposed methodology for robust multi-objective modelling and optimisation may be applied to other manufacturing processes.
Experiments were carried out by following a CCD combined array considering process and noise factors of helical milling. Tool overhang length, material hardness, and borehole height were considered as noise factors, aiming to achieve process factor levels insensitive to noise variation.
Response models were obtained for Ront, Cylt, Ra, and Rz as functions of the process and noise factors. The models presented good explanation of data variability, with Radj² equal to 99.97%, 98.51%, 99.45% and 97.91%, respectively. The models also presented good prediction capability for future observations, with 98.78% for Ront, 94.48% for Ra and 84.81% for Rz, except for Cylt, with moderate predictability of 58.29%. These models are useful to study the process and noise effects on the outcomes.
Ward's dendrogram was used to select helical milling outcomes to be optimised without redundancy. Ront and Cylt were grouped in a highly correlated group of geometrical error outcomes, while Ra and Rz were grouped in a highly correlated group of roughness outcomes. Then, in each group, one variable was selected to represent the group, since multi-objective optimisation involving highly correlated outcomes may yield similar solutions, the trade-off in this case being negligible. The outcomes Ront and Ra were selected due to the best adjustment in each group and due to their importance in the characterization of the geometrical error and roughness of boreholes, respectively.
Mean and variance equations were derived through RPD for all responses, since they represent different performance characteristics. Mean square error optimisation was realised only for the selected responses Ront and Ra as an approach to reduce bias and variance. The Pareto optimal solution with the lowest Euclidean distance to f^U was f*[w1=0.5] = [3.759 μm², 1.048 μm²]ᵀ, with the same weight for both functions, i.e., w1 = w2 = 0.5.
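The most compromise selection rule used throughout, the lowest Euclidean distance to the utopia point in the normalized space, can be sketched as follows with a hypothetical Pareto set.

```python
import numpy as np

# Hypothetical Pareto set (columns: objective 1, objective 2),
# not the solutions from the paper.
pareto = np.array([[0.0, 1.0], [0.2, 0.55], [0.45, 0.3],
                   [0.5, 0.25], [0.8, 0.1], [1.0, 0.0]])

# Normalize each objective to [0, 1]; the utopia point maps to the origin.
f_min = pareto.min(axis=0)
f_max = pareto.max(axis=0)
norm = (pareto - f_min) / (f_max - f_min)

dist = np.linalg.norm(norm, axis=1)    # Euclidean distance to utopia
best = int(np.argmin(dist))
print(best, pareto[best])
```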
The NNC multi-objective optimisation method was used to achieve robust Pareto optimal solutions, giving the experimenter several possibilities of process parameter levels to achieve the desired borehole quality levels insensitive to the noise factors. NNC was applied to the optimisation of the mean and variance of Ront and Ra to evaluate the trade-off between mean and variance. The most compromise Pareto optimal solution was selected considering the Euclidean distance to the utopia point in the normalized space.

The proposed methodology was detailed and may be applied to other manufacturing processes as a multi-objective robust modelling and optimization procedure. The obtained results may be employed in hole-making tasks of AISI H13 hardened steel, assuring excellence in microgeometrical and geometrical borehole quality with robustness with regard to the noise factors.
For future work, it is recommended to apply multi-objective evolutionary algorithms, such as the Non-dominated Sorting Genetic Algorithm II, the Multi-objective Evolutionary Algorithm Based on Decomposition, multi-objective evolutionary particle swarm optimisation, and other algorithms for solving multi-objective problems, applied to the robust optimisation of manufacturing processes such as helical milling of AISI H13 hardened steel. Apart from the borehole quality metrics considered in the objective of the present work, it is recommended to address other objectives of the helical milling process, such as productivity, cutting force and energy consumption outcomes.