
Frontiers in Applied Mathematics and Statistics | REVIEW article

Front. Appl. Math. Stat., 16 September 2016 | https://doi.org/10.3389/fams.2016.00014

# Closedness Type Regularity Conditions in Convex Optimization and Beyond

• Faculty of Mathematics, Chemnitz University of Technology, Chemnitz, Germany

The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in convex optimization and in various other areas where they were successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions, formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems, in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint toward other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss further possible applications.

## 1. Introduction and Preliminaries

Regularity conditions play a key role in optimization and nonsmooth analysis, as additional hypotheses that guarantee the fulfillment of strong or converse duality, of necessary and/or sufficient optimality conditions and of various formulae, respectively. In convex optimization a number of regularity conditions (also called constraint qualifications when they involve only the constraints of the considered problem) have been proposed, depending on the initial assumptions. For instance, in the differentiable case one employs the regularity conditions due to Abadie, Guignard, or Mangasarian-Fromovitz, while for nondifferentiable constrained optimization problems one has the Slater constraint qualification. However, the latter is a priori violated for many classes of problems where the constraint cone has an empty interior, and several generalizations were proposed for it, the so-called interiority type regularity conditions, which involve notions of generalized interior of a set. But even these fail for large classes of problems and, inspired by Precupanu's pioneering work , Burachik, Jeyakumar and their coauthors on the one hand, and Boţ, Wanka and their coauthors on the other, proposed in  a new class of regularity conditions, the closedness type ones. These first proved to be sufficient conditions for guaranteeing duality statements in optimization and subdifferential formulae in convex analysis, delivering in the meantime results and formulae in some related research fields as well. They have thus proven to be viable alternatives to their more restrictive interiority type counterparts.
In this review paper, which enhances and completes a similar study provided in the book , we take a general look at the usage of the closedness type regularity conditions in the literature so far, showing that they arise naturally while dealing with optimization problems and pointing toward different assertions from the literature that can be rediscovered as special cases of the mentioned general results. To this end we deconstruct and then reconstruct closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems, showing afterwards how to particularize them for constrained and unconstrained optimization problems.

As mentioned above, the first papers dealing with closedness type regularity conditions for convex optimization problems were , having as a starting point earlier statements from Precupanu . Afterwards, it was noticed that other regularity conditions from the literature such as the Basic Constraint Qualification (see for instance, ) or the Farkas-Minkowski Constraint Qualification (cf. ) can be recovered as special cases of some general closedness type regularity conditions.

Let X and Y be locally convex Hausdorff vector spaces, whose topological dual spaces are denoted by X* and Y*, respectively. The dual spaces can be endowed with different topologies, among which the weak* ones, denoted by ω(X*, X) and ω(Y*, Y) or shortly ω*, will be considered for formulating closedness type regularity conditions. The natural topology on ℝ is denoted by ℛ. By 〈x*, x〉 = x*(x) we denote the value at x ∈ X of the linear continuous functional x* ∈ X*. A cone K ⊆ X is a nonempty subset of X which fulfills αK ⊆ K for all α ≥ 0. A convex cone K ⊆ X induces on X the partial ordering “≦K” defined by x ≦K y whenever y − x ∈ K, where x, y ∈ X. If x ≦K y and x ≠ y we write x ≤K y. To X a greatest element with respect to “≦K”, denoted by ∞K ∉ X, can be attached, and we define ${X}^{•}=X\cup \left\{{\infty }_{K}\right\}$. Then for any x ∈ X• one has x ≦K ∞K and we consider on X• the operations x + ∞K = ∞K + x = ∞K for all x ∈ X• and α · ∞K = ∞K for all α ≥ 0. The dual cone of K is K* = {x* ∈ X* : 〈x*, x〉 ≥ 0 ∀x ∈ K}. By convention, $〈{x}^{*},{\infty }_{K}〉=+\infty$ for all x* ∈ K*.

Given a subset U of X, by cl U, int U and cone U we denote its closure, interior and conical hull, respectively. Moreover, if U is convex, by sqri U we denote its strong quasi-relative interior. When U ⊆ ℝn we denote by ri U its relative interior, which coincides in this case with sqri U. The indicator function of U is ${\delta }_{U}:X\to \overline{ℝ}=ℝ\cup \left\{±\infty \right\}$, defined as δU(x) = 0 if x ∈ U and δU(x) = +∞ otherwise, while its support function ${\sigma }_{U}:{X}^{*}\to \overline{ℝ}$ is given by ${\sigma }_{U}\left({x}^{*}\right)={\mathrm{sup}}_{x\in U}〈{x}^{*},x〉$. The normal cone associated to the set U at x ∈ U is ${N}_{U}\left(x\right)=\left\{{x}^{\ast }\in {X}^{\ast }:〈{x}^{\ast },y-x〉\le 0\text{ }\forall y\in U\right\}$ and, for ε ≥ 0, the ε-normal set of U at x ∈ U is ${N}_{U}^{\epsilon }\left(x\right)=\left\{{x}^{\ast }\in {X}^{\ast }:〈{x}^{\ast },y-x〉\le \epsilon \text{ }\forall y\in U\right\}$. The projection function of X is PrX : X × Y → X, defined by PrX(x, y) = x for (x, y) ∈ X × Y, the identity function of X is id : X → X, id(x) = x for x ∈ X, and, for n ∈ ℕ, we use the notation ${\Delta }_{{X}^{n}}=\left\{\left(x,\dots ,x\right):x\in X\right\}\subseteq {X}^{n}$.

In order to deliver, by means of duality, very general regularity conditions guaranteeing the maximal monotonicity of the sum of two maximal monotone operators defined in reflexive Banach spaces, we introduced in Boţ et al.  the notion of a set closed regarding a subspace (see also ). For the investigations in this study even more general closedness notions are necessary; these were first proposed in Boncea and Grad  (and, when Z = X × ℝ, in ; see also ).

Definition 1.1. Given ε ≥ 0, a set U ⊆ X × ℝ is said to be (0, ε)-vertically closed regarding the set Z ⊆ X × ℝ if (cl U) ∩ Z ⊆ (U ∩ Z) − (0, ε), while when Z = X × ℝ, U is called simply (0, ε)-vertically closed. Moreover, a set U ⊆ X that fulfills (cl U) ∩ W = U ∩ W, where W ⊆ X, is said to be closed regarding the set W.

Remark 1. A set U ⊆ X is closed if and only if it is closed regarding the whole space X. Each closed set U ⊆ X is closed regarding any other set W ⊆ X, but a set U ⊆ X that is closed regarding some W ⊆ X is not necessarily closed, as shown below in Example 1.

Example 1. The interval [0, 1) ⊆ ℝ is closed regarding the set {0}, but not closed.
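The "closed regarding a set" test from Definition 1.1 is easy to probe numerically. Below is a minimal Python sketch for Example 1; the membership helpers and the probe sets are illustrative choices, not from the paper.

```python
# Finite-sample illustration of "closed regarding a set" (Definition 1.1):
# U is closed regarding W iff (cl U) ∩ W = U ∩ W; since U ∩ W ⊆ (cl U) ∩ W
# always holds, only the inclusion (cl U) ∩ W ⊆ U needs checking.

def closed_regarding(in_cl_U, in_U, W):
    return all(in_U(w) for w in W if in_cl_U(w))

# Example 1: U = [0, 1), so cl U = [0, 1].
in_U = lambda x: 0.0 <= x < 1.0
in_cl_U = lambda x: 0.0 <= x <= 1.0

assert closed_regarding(in_cl_U, in_U, [0.0])           # closed regarding {0}
assert not closed_regarding(in_cl_U, in_U, [1.0])       # 1 ∈ cl U but 1 ∉ U
assert not closed_regarding(in_cl_U, in_U, [0.0, 1.0])  # hence U is not closed
```

The same membership-based check applies to any set whose closure is known in closed form.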

Example 2. The set {0} × (0, +∞) ⊆ ℝ2 is (0, ε)-vertically closed regarding the set [−1, 1] × (−1, +∞) for all ε > 0, but there is no ε ≥ 0 for it to be (0, ε)-vertically closed regarding the set [−1, 1] × (−∞, 0]. On the other hand, the set [0, 1] × (0, +∞) ⊆ ℝ2 is (0, ε)-vertically closed for all ε > 0, while the set [0, 1] × (0, 1) ⊆ ℝ2 is not (0, ε)-vertically closed for any ε ≥ 0.

Remark 2. In the literature one can find other definitions of ε-closed (see for instance [23, 24]) and vertically closed (see ) sets, respectively, introduced for purposes that have basically nothing in common with our present investigations.

Let us now present some preliminary notions and results involving functions. Given a function $f:X\to \overline{ℝ}$, its domain is dom f = {x ∈ X : f(x) < +∞}, its epigraph is epi f = {(x, r) ∈ X × ℝ : f(x) ≤ r} and its conjugate function is ${f}^{*}:{X}^{*}\to \overline{ℝ}$, f*(x*) = sup{〈x*, x〉 − f(x) : x ∈ X}. If U ⊆ X, the conjugate function of f regarding U is ${f}_{U}^{*}:{X}^{*}\to \overline{ℝ}$, ${f}_{U}^{*}={\left(f+{\delta }_{U}\right)}^{*}$. We call f proper when f(x) > −∞ for all x ∈ X and dom f ≠ ∅. When f is proper and ε ≥ 0, if f(x) ∈ ℝ the (convex) ε-subdifferential of f at x is ${\partial }_{\epsilon }f\left(x\right)=\left\{{x}^{\ast }\in {X}^{\ast }:f\left(y\right)-f\left(x\right)\ge 〈{x}^{\ast },y-x〉-\epsilon \text{ }\forall y\in X\right\}$, while if f(x) = +∞ we take by convention ∂εf(x) = ∅. The ε-subdifferential of f becomes, in case ε = 0, its classical (convex) subdifferential, denoted by ∂f. When U ⊆ X and ε ≥ 0 one has ${\partial }_{\epsilon }{\delta }_{U}={N}_{U}^{\epsilon }$. The Young-Fenchel inequality says that ${f}_{U}^{*}\left({x}^{*}\right)+f\left(x\right)\ge 〈{x}^{*},x〉$ for all x ∈ U and x* ∈ X*. If U = X, this inequality is fulfilled as an equality if and only if x* ∈ ∂f(x) and, in general, one has f*(x*) + f(x) ≤ 〈x*, x〉 + ε if and only if ${x}^{*}\in {\partial }_{\epsilon }f\left(x\right)$. When $f,g:X\to \overline{ℝ}$ are proper, their infimal convolution is $f\square g:X\to \overline{ℝ}$, $f\square g\left(a\right)={\mathrm{inf}}_{x\in X}\left[f\left(x\right)+g\left(a-x\right)\right]$, which is called exact at a ∈ X when there exists an x ∈ X such that f□g(a) = f(x) + g(a − x). For α ∈ ℝ, defining the function $\alpha f:X\to \overline{ℝ}$, (αf)(x) = αf(x) for x ∈ X, we take 0f = δdom f when α = 0. Given a linear continuous mapping A : X → Y, its adjoint is A* : Y* → X*, 〈A*y*, x〉 = 〈y*, Ax〉 for any (x, y*) ∈ X × Y*, while Im A = {Ax : x ∈ X} denotes its image.
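The conjugate, the Young-Fenchel inequality and its equality case can be illustrated on a grid. Below is a minimal sketch for f(x) = x², for which f*(x*) = (x*)²/4 and ∂f(x) = {2x} in closed form; the grid and the sample points are illustrative choices.

```python
# Grid-based sketch of the conjugate function and the Young-Fenchel inequality
# for f(x) = x^2 on the real line; the grid is an illustrative discretization.

GRID = [i / 1000.0 for i in range(-4000, 4001)]   # sample points in [-4, 4]

def f(x):
    return x * x

def conjugate(x_star):
    # f*(x*) = sup_x { <x*, x> - f(x) }, approximated by a maximum over the grid
    return max(x_star * x - f(x) for x in GRID)

# For f(x) = x^2 the closed form is f*(x*) = (x*)^2 / 4.
assert abs(conjugate(2.0) - 1.0) < 1e-6

# Young-Fenchel: f*(x*) + f(x) >= <x*, x> for all x and x*, ...
for x in [-1.0, 0.5, 2.0]:
    for x_star in [-2.0, 0.0, 1.0]:
        assert conjugate(x_star) + f(x) >= x_star * x - 1e-9
    # ... with equality exactly when x* lies in the subdifferential of f at x,
    # which here is the singleton {2x}.
    assert abs(conjugate(2 * x) + f(x) - 2 * x * x) < 1e-6
```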

Let K ⊆ X be a convex cone. A function $f:X\to \overline{ℝ}$ is said to be K-increasing if f(x) ≤ f(y) for all x, y ∈ X such that x ≦K y. A vector function F : Y → X• is said to be proper if its domain dom F = {y ∈ Y : F(y) ∈ X} is nonempty, and K-convex if F(tx + (1 − t)y) ≦K tF(x) + (1 − t)F(y) for all x, y ∈ Y and all t ∈ [0, 1]. When K is closed, F is called K-epi-closed if its K-epigraph epiK F = {(y, x) ∈ Y × X : x ∈ F(y) + K} is closed. For x* ∈ K* the function $\left({x}^{*}F\right):Y\to \overline{ℝ}$ is defined by (x*F)(y) = 〈x*, F(y)〉, y ∈ Y.

For an attained infimum (supremum) instead of inf (sup) we write min (max), while the optimal objective value of the optimization problem (P) is denoted by v(P).

## 2. General Perturbed Scalar Optimization Problems

The two main classical duality approaches in convex optimization, the Lagrange type for constrained optimization problems and the Fenchel type for unconstrained ones, have been unified in the general theory of conjugate duality by means of perturbations that can be found, for instance, in Rockafellar , Zălinescu , Ekeland and Temam , and Boţ et al. . Note that other duality concepts from the literature can also be reconsidered in the framework of this general theory, as we have done for the Wolfe and Mond-Weir duality concepts in Boţ et al. , Boţ and Grad [30, 31], and Grad and Pop  and for the geometric duality in Boţ et al. . In the following we briefly present this general duality scheme before proceeding with the ε-duality investigations that lead to constructing closedness type regularity conditions guaranteeing strong and stable duality, respectively.

Consider two locally convex Hausdorff vector spaces X and Y and the proper function $F:X\to \overline{ℝ}$, together with the general optimization problem

$(PG)\quad \underset{x\in X}{\mathrm{inf}}\,F\left(x\right).$

Using a proper perturbation function $\Phi :X×Y\to \overline{ℝ}$ fulfilling Φ(x, 0) = F(x) for all x ∈ X, a hypothesis that guarantees that 0 ∈ PrY(dom Φ), the problem (PG) can be rewritten as

$(PG)\quad \underset{x\in X}{\mathrm{inf}}\,\Phi \left(x,0\right).$

We call Y the perturbation space and its elements perturbation variables. To (PG) one attaches the following conjugate dual problem (cf., for instance, [19, 26, 27, 29])

$(DG)\quad \underset{{y}^{*}\in {Y}^{*}}{\mathrm{sup}}\left\{-{\Phi }^{*}\left(0,{y}^{*}\right)\right\},$

and for this primal-dual pair of optimization problems weak duality always holds, i.e., v(DG) ≤ v(PG). In order to investigate further duality properties of these optimization problems, for each x* ∈ X* we consider the following problem

$(P{G}_{{x}^{*}})\quad \underset{x\in X}{\mathrm{inf}}\left[\Phi \left(x,0\right)-〈{x}^{*},x〉\right],$

obtained by linearly perturbing the objective function of (PG). Thus (PG) is embedded in the family of optimization problems $\left\{\left(P{G}_{{x}^{*}}\right):{x}^{*}\in {X}^{*}\right\}$, where it coincides with (PG0). To each problem in the mentioned family one can attach the corresponding conjugate dual problem, namely, for x* ∈ X*,

$(D{G}_{{x}^{*}})\quad \underset{{y}^{*}\in {Y}^{*}}{\mathrm{sup}}\left\{-{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\right\}.$

By construction, whenever x* ∈ X* one has $v\left(D{G}_{{x}^{*}}\right)\le v\left(P{G}_{{x}^{*}}\right)$, but it is more important to find out when the optimal objective values of the primal and its corresponding dual problem coincide or differ by less than a given small ε ≥ 0.
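As a concrete instance of this scheme, one may take the classical Fenchel-type perturbation Φ(x, y) = f(x) + g(x + y) with X = Y = ℝ (an illustrative choice, not the paper's general setting); one can check that then Φ*(0, y*) = f*(−y*) + g*(y*), so (DG) becomes the familiar Fenchel dual. The grid sketch below verifies weak duality numerically.

```python
# Weak duality v(DG) <= v(PG) on a grid for the illustrative Fenchel-type
# perturbation Phi(x, y) = f(x) + g(x + y) with X = Y = R, for which
# Phi*(0, y*) = f*(-y*) + g*(y*); hence (DG) is sup_{y*} { -f*(-y*) - g*(y*) }.

GRID = [i / 20.0 for i in range(-100, 101)]   # sample points in [-5, 5]

f = lambda x: x * x
g = lambda x: abs(x)

def conj(h, s):
    # conjugate of h at s, approximated over the grid
    return max(s * x - h(x) for x in GRID)

v_primal = min(f(x) + g(x) for x in GRID)                  # v(PG) = inf_x Phi(x, 0)
v_dual = max(-conj(f, -ys) - conj(g, ys) for ys in GRID)   # v(DG)

assert v_dual <= v_primal + 1e-9    # weak duality always holds
assert abs(v_primal) < 1e-9 and abs(v_dual) < 1e-9   # here there is no duality gap
```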

Definition 2.1. Let ε ≥ 0. We say that there is ε-duality gap for the problems (PG) and (DG) if v(PG) − v(DG) ≤ ε. If $v\left(P{G}_{{x}^{*}}\right)-v\left(D{G}_{{x}^{*}}\right)\le \epsilon$ for all x* ∈ X*, we say that for (PG) and (DG) one has stable ε-duality gap.

Definition 2.2. We say that there is strong duality for the problems (PG) and (DG) if v(PG) = v(DG) and (DG) has an optimal solution. When there is strong duality for (PG) and (DG) and (PG) has an optimal solution, too, one speaks about total duality. If $v\left(P{G}_{{x}^{*}}\right)=v\left(D{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$ has an optimal solution for all x* ∈ X*, we say that for (PG) and (DG) one has stable strong duality.

Definition 2.3. Let ε ≥ 0. An element x ∈ X is said to be an ε-optimal solution to (PG) if 0 ∈ ∂ε(Φ(·, 0))(x).

In order to ensure strong duality for (PG) and (DG) one usually assumes that Φ is convex and a certain regularity condition is fulfilled. Various such additional hypotheses were considered in the literature (some of which are mentioned later in Remark 8), the most relevant being the interiority type ones (mentioned in [19, 27]) and the closedness type ones (cf. [3, 4, 6, 7, 10, 19, 29]). Moreover, the situation of total duality is closely related to some subdifferential formulae, and the regularity conditions can be used to guarantee these, too. However, in some situations one can only show that the difference between the optimal objective values of the primal and dual problem is less than an ε ≥ 0, a situation coined in Boncea and Grad [21, 22] as ε-duality gap. Using as a basis our investigations in these articles, where the ε-duality gap for composed and constrained optimization problems, respectively, was characterized via epigraph and subdifferential inclusions, we provide in the following section characterizations via epigraph inclusions of stable ε-duality gap (for an ε ≥ 0) for very general optimization problems, with the involved functions not necessarily convex, only proper. Endowing then the functions with the classical convexity and topological properties, we derive new important equivalences as well as sufficient conditions, from which, when ε = 0, closedness type regularity conditions are derived. These can then be employed, for instance, for subdifferential formulae, as done in Boţ et al. , Hiriart-Urruty , and Boţ and Grad  or, like in Boţ and Grad  and Grad and Wanka , for providing formulae for biconjugates of combinations of functions.

After presenting these investigations for general optimization problems, we deal with both constrained and unconstrained optimization problems, showing how the mentioned results can be specialized for them, too, by means of the perturbation theory (cf. [26, 27]). In this way some of our results from Grad , Boţ et al. [18, 20, 29, 38, 39], Boncea and Grad [21, 22], and Boţ and Grad  as well as different others from the literature (e.g., from [24, 6, 7, 19]) can be obtained as special cases of the general statements presented below.

Before proceeding, we recall a statement that will play an important role later in our investigations when additional convexity and topological hypotheses will be considered on the function Φ in order to derive closedness type regularity conditions for various duality situations.

Lemma 2.1. (cf. [40, Theorem 2.2 and Theorem 2.3], see also [19, Theorem 5.1 and Theorem 5.2] and ) Let the function Φ be also convex and lower semicontinuous. Then one has ${\left(\Phi \left(·,0\right)\right)}^{*}={\text{cl}}_{{\omega }^{*}}{\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left(·,{y}^{*}\right)$ and

$\text{epi}\left({\left(\Phi \left(·,0\right)\right)}^{*}\right)={\text{cl}}_{{\omega }^{*}}\bigcup_{{y}^{*}\in {Y}^{*}}\text{epi}\left({\Phi }^{*}\left(·,{y}^{*}\right)\right)={\text{cl}}_{{\omega }^{*}}{\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}.$
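A grid sanity check of this identity, for the illustrative Fenchel-type perturbation Φ(x, y) = f(x) + g(x + y) on the real line, where Φ*(x*, y*) = f*(x* − y*) + g*(y*): for this well-behaved pair the weak* closure is superfluous, so (Φ(·, 0))* and inf_{y*} Φ*(·, y*) should already agree pointwise.

```python
# Grid sanity check of Lemma 2.1 for the illustrative Fenchel-type perturbation
# Phi(x, y) = f(x) + g(x + y), where Phi*(x*, y*) = f*(x* - y*) + g*(y*):
# (Phi(., 0))* = (f + g)* should agree with inf_{y*} Phi*(., y*) up to the
# discretization error, the closure being unnecessary for this nice pair.

GRID = [i / 20.0 for i in range(-100, 101)]   # sample points in [-5, 5]

f = lambda x: x * x
g = lambda x: abs(x)

def conj(h, s):
    return max(s * x - h(x) for x in GRID)

for x_star in [-2.0, 0.0, 1.0, 3.0]:
    lhs = conj(lambda x: f(x) + g(x), x_star)                      # (Phi(., 0))*(x*)
    rhs = min(conj(f, x_star - ys) + conj(g, ys) for ys in GRID)   # inf_{y*} Phi*(x*, y*)
    assert abs(lhs - rhs) < 1e-2
```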

To make this review paper as self-contained as possible, we recall the definition of the Lagrangian function for the pair of primal-dual problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$, where x* ∈ X*, and a nice connection between it and the optimal objective values of the mentioned problems (cf. [26, 27, 29]).

Definition 2.4. Let x* ∈ X*. The function ${L}^{\left(P{G}_{{x}^{*}}\right)}:X×{Y}^{*}\to \overline{ℝ}$ defined by

${L}^{\left(P{G}_{{x}^{*}}\right)}\left(x,{y}^{*}\right)=\underset{y\in Y}{\mathrm{inf}}\left[\Phi \left(x,y\right)-〈{x}^{*},x〉-〈{y}^{*},y〉\right]$

is called the Lagrangian function of the pair of primal-dual problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$ relative to the perturbation function Φ.

Remark 3. Given x* ∈ X*, one can rewrite the primal-dual pair of problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$ by means of the Lagrangian ${L}^{\left(P{G}_{{x}^{*}}\right)}$ as follows. The dual problem $\left(D{G}_{{x}^{*}}\right)$ is equivalent to ${\mathrm{sup}}_{{y}^{*}\in {Y}^{*}}\,{\mathrm{inf}}_{x\in X}\,{L}^{\left(P{G}_{{x}^{*}}\right)}\left(x,{y}^{*}\right)$, while if for any x ∈ X the function Φ(x, ·) is convex, lower semicontinuous and nowhere equal to −∞, $\left(P{G}_{{x}^{*}}\right)$ actually means ${\mathrm{inf}}_{x\in X}\,{\mathrm{sup}}_{{y}^{*}\in {Y}^{*}}\,{L}^{\left(P{G}_{{x}^{*}}\right)}\left(x,{y}^{*}\right)$. Note that even without the additional hypotheses, $v\left(P{G}_{{x}^{*}}\right)\ge {\mathrm{inf}}_{x\in X}\,{\mathrm{sup}}_{{y}^{*}\in {Y}^{*}}\,{L}^{\left(P{G}_{{x}^{*}}\right)}\left(x,{y}^{*}\right)$.
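Remark 3's minimax relations can be probed numerically. For the illustrative perturbation Φ(x, y) = f(x) + g(x + y) and x* = 0, a short computation gives the Lagrangian the closed form L(x, y*) = f(x) + y*x − g*(y*); the sketch below checks that sup inf never exceeds inf sup on a grid.

```python
# Grid probe of Remark 3 for x* = 0 and the illustrative perturbation
# Phi(x, y) = f(x) + g(x + y): the Lagrangian is L(x, y*) = f(x) + y* x - g*(y*),
# and sup_{y*} inf_x L (dual value) must not exceed inf_x sup_{y*} L (primal value).

GRID = [i / 20.0 for i in range(-100, 101)]   # sample points in [-5, 5]

f = lambda x: x * x
g = lambda x: abs(x)

G_CONJ = {ys: max(ys * u - g(u) for u in GRID) for ys in GRID}   # g*(y*) precomputed

def L(x, ys):
    return f(x) + ys * x - G_CONJ[ys]

sup_inf = max(min(L(x, ys) for x in GRID) for ys in GRID)   # = v(DG)
inf_sup = min(max(L(x, ys) for ys in GRID) for x in GRID)   # = v(PG) under Remark 3's hypotheses

assert sup_inf <= inf_sup + 1e-9
assert abs(sup_inf) < 1e-9 and abs(inf_sup) < 1e-9   # here both values equal 0
```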

## 3. Characterizations Involving Epigraphs

Most of the closedness type regularity conditions that can be found in the literature are constructed by means of epigraphs and this section is dedicated to them.

Let ε ≥ 0. We begin our presentation with a characterization, via epigraph inclusions, of a situation of partial stable ε-duality gap for the problems (PG) and (DG) that holds for very general proper functions, no convexity or topological assumptions being necessary in order to derive it. Despite its simple formulation and proof, this statement is a key result for the following presentation, and we shall see that, by means of it, virtually all the statements involving closedness type regularity conditions from the literature on duality for convex optimization problems, some of which have quite involved proofs in the original sources, can be rediscovered.

Theorem 3.1. Given a subset W of X*, one has

$epi(Φ(·,0))* ∩ (W×ℝ) ⊆ PrX*×ℝ(epi Φ*) − (0, ε)   (1)$

if and only if for each x* ∈ W there exists a ȳ* ∈ Y* such that

$(Φ(·,0))*(x*) ≥ Φ*(x*,ȳ*) − ε.   (2)$
PROOF. If W is empty or for x* ∈ W it holds (Φ(·, 0))*(x*) = +∞, there is nothing to prove, since subtracting anything from an empty set gives again the empty set and ${\left(\Phi \left(·,0\right)\right)}^{*}\le {\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left(·,{y}^{*}\right)$, respectively.

Let x* ∈ W such that (Φ(·, 0))*(x*) ∈ ℝ. Then (x*, (Φ(·, 0))*(x*)) ∈ epi(Φ(·, 0))*. For ȳ* ∈ Y* one has (x*, ȳ*, (Φ(·, 0))*(x*)) ∈ epi Φ* − (0, 0, ε) if and only if Φ*(x*, ȳ*) ≤ (Φ(·, 0))*(x*) + ε, hence the desired equivalence.          □

Remark 4. One can rewrite the inequality (Equation 2) as −(Φ(·, 0))*(x*) ≤ −Φ*(x*, ȳ*) + ε, where the left-hand side is actually $v\left(P{G}_{{x}^{*}}\right)$. However, the right-hand side is not necessarily $v\left(D{G}_{{x}^{*}}\right)+\epsilon$, since there is no guarantee that the supremum in $\left(D{G}_{{x}^{*}}\right)$ is attained at ȳ*. Consequently, Equation (2) implies $v\left(P{G}_{{x}^{*}}\right)\le v\left(D{G}_{{x}^{*}}\right)+\epsilon$, i.e., condition (1) implies that there is a partial (because x* ∈ W ⊆ X*) stable ε-duality gap for the problems (PG) and (DG). Note also that, employing (Equation 2), one can characterize the stable ε-duality gap for the primal-dual pair (PG) − (DG) via the epigraph inclusion

$epi(Φ(·,0))* ⊆ epi(infy*∈Y* Φ*(·,y*)) − (0, ε).   (3)$

When ε = 0, the epigraph inclusion and the inequality from Theorem 3.1 both collapse into equalities.

Corollary 3.2. Given a subset W of X*, one has

$epi(Φ(·,0))* ∩ (W×ℝ) = (PrX*×ℝ epi Φ*) ∩ (W×ℝ)   (4)$

if and only if for each x* ∈ W there exists a ȳ* ∈ Y* such that

$(Φ(·,0))*(x*)=Φ*(x*,ȳ*).$

Taking W = X*, one obtains an epigraph inclusion that ensures the stable ε-duality gap for (PG) and (DG), which (taking into consideration Corollary 3.2) becomes stable strong duality when ε = 0.

Corollary 3.3. It holds

$epi(Φ(·,0))* ⊆ PrX*×ℝ(epi Φ*) − (0, ε)   (5)$

if and only if for each x* ∈ X* there exists a ȳ* ∈ Y* such that

$(Φ(·,0))*(x*)≥Φ*(x*,ȳ*)-ε.$

Corollary 3.4. It holds

$\text{epi}{\left(\Phi \left(·,0\right)\right)}^{*}={\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}$

if and only if for each x* ∈ X* there exists a ȳ* ∈ Y* such that

$(Φ(·,0))*(x*)=Φ*(x*,ȳ*)=miny*∈Y*Φ*(x*,y*).$

Using Theorem 3.1, one can derive necessary and sufficient ε-optimality conditions for primal-dual pairs $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$, where x* ∈ X*.

Theorem 3.5. Let W be a subset of X* and x* ∈ W.

(a) If $\overline{x}\in X$ is an optimal solution to $\left(P{G}_{{x}^{*}}\right)$ and Equation (1) is satisfied, then there exists an ε-optimal solution ȳ* ∈ Y* to $\left(D{G}_{{x}^{*}}\right)$, such that

$Φ(x̄,0) + Φ*(x*,ȳ*) ≤ 〈x*,x̄〉 + ε   (6)$

or, equivalently,

$(x*,ȳ*) ∈ ∂εΦ(x̄,0).   (7)$
(b) Assume that $\overline{x}\in X$ and ȳ* ∈ Y* fulfill (Equations 6 or 7). Then $\overline{x}$ is an ε-optimal solution to $\left(P{G}_{{x}^{*}}\right)$, ȳ* is an ε-optimal solution to $\left(D{G}_{{x}^{*}}\right)$ and $v\left(P{G}_{{x}^{*}}\right)\le v\left(D{G}_{{x}^{*}}\right)+\epsilon$.

PROOF. (a) From Theorem 3.1 and Remark 4 one obtains $\Phi \left(\overline{x},0\right)+{\Phi }^{*}\left({x}^{*},{ȳ}^{*}\right)\le 〈{x}^{*},\overline{x}〉+\epsilon$. The weak duality for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$ yields $v\left(D{G}_{{x}^{*}}\right)\le -{\Phi }^{*}\left({x}^{*},{ȳ}^{*}\right)+\epsilon$, i.e., ȳ* is an ε-optimal solution to $\left(D{G}_{{x}^{*}}\right)$.

(b) From Equation (6) one gets $\Phi \left(\overline{x},0\right)-〈{x}^{*},\overline{x}〉\le \epsilon -{\Phi }^{*}\left({x}^{*},{ȳ}^{*}\right)$, which, employing also the weak duality, implies $v\left(D{G}_{{x}^{*}}\right)\le v\left(P{G}_{{x}^{*}}\right)\le \Phi \left(\overline{x},0\right)-〈{x}^{*},\overline{x}〉\le \epsilon -{\Phi }^{*}\left({x}^{*},{ȳ}^{*}\right)\le v\left(D{G}_{{x}^{*}}\right)+\epsilon \le v\left(P{G}_{{x}^{*}}\right)+\epsilon$. Then $\overline{x}$ is an ε-optimal solution to $\left(P{G}_{{x}^{*}}\right)$ and ȳ* one to $\left(D{G}_{{x}^{*}}\right)$.

□

Remark 5. Note that in Theorem 3.5 one does not obtain the kind of equivalence usually delivered in optimality conditions statements as in a) one assumes $\overline{x}$ to be an optimal solution to $\left(P{G}_{{x}^{*}}\right)$, while in b) $\overline{x}$ turns out to be only an ε-optimal solution to $\left(P{G}_{{x}^{*}}\right)$.

When ε = 0, relations (Equations 6 and 7) become optimality conditions for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$.

Corollary 3.6. Let W be a subset of X* and x* ∈ W.

(a) If $\overline{x}\in X$ is an optimal solution to $\left(P{G}_{{x}^{*}}\right)$ and the condition (4) is satisfied, then there exists an optimal solution ȳ* ∈ Y* to $\left(D{G}_{{x}^{*}}\right)$, such that

$Φ(x̄,0) + Φ*(x*,ȳ*) = 〈x*,x̄〉   (8)$

or, equivalently,

$(x*,ȳ*) ∈ ∂Φ(x̄,0).   (9)$
(b) Assume that $\overline{x}\in X$ and ȳ* ∈ Y* fulfill (Equations 8 or 9). Then $\overline{x}$ is an optimal solution to $\left(P{G}_{{x}^{*}}\right)$, ȳ* is an optimal solution to $\left(D{G}_{{x}^{*}}\right)$ and $v\left(P{G}_{{x}^{*}}\right)=v\left(D{G}_{{x}^{*}}\right)$.

Remark 6. When W = X*, Corollary 3.6 delivers stable optimality conditions for (PG) and (DG).

Remark 7. Taking x* = 0 in Theorem 3.5 one obtains ε-optimality conditions for the primal-dual pair of optimization problems (PG) − (DG); moreover, it follows (alternatively, directly from Corollary 3.3) that the fulfillment of condition (5) guarantees that there is ε-duality gap for these problems.

So far we have deconstructed the closedness type regularity conditions formulated by means of epigraph inclusions, showing that such inclusions are intimately connected to ε-duality gap statements. In the following we will reconstruct them and for this we add convexity and topological properties to the function Φ.

We begin with a characterization of Equation (2) by means of the notion of (0, ε)-vertical closedness of the conjugate of Φ regarding a product set, which can be obtained via Lemma 2.1 and Theorem 3.1.

Theorem 3.7. Let W be a subset of X* and take the function Φ also convex and lower semicontinuous. Then the set ${\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}$ is (0, ε)-vertically closed regarding W × ℝ in the topology ω(X*, X) × ℛ if and only if for every x* ∈ W there exists a ȳ* ∈ Y* such that Equation (2) holds.

When ε = 0 Theorem 3.7 collapses to Boţ [19, Theorem 9.1].

Corollary 3.8. Let W be a subset of X* and the function Φ be also convex and lower semicontinuous. Then the set ${\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}$ is closed regarding the set W × ℝ in the topology ω(X*, X) × ℛ if and only if for each x* ∈ W there exists a ȳ* ∈ Y* such that

$(Φ(·,0))*(x*)=Φ*(x*,ȳ*)=miny*∈Y*Φ*(x*,y*).$

Taking in Theorem 3.7 W = X* one gets the following statement.

Corollary 3.9. Let the function Φ be also convex and lower semicontinuous. Then the set ${\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}$ is (0, ε)-vertically closed in the topology ω(X*, X) × ℛ if and only if for each x* ∈ X* there exists a ȳ* ∈ Y* such that

$(Φ(·,0))*(x*)≥Φ*(x*,ȳ*)-ε.$

Taking in Corollary 3.8 moreover W = X* (or ε = 0 in Corollary 3.9, noticing also the comment preceding Corollary 3.2), one obtains a characterization of the stable strong duality for (PG) and (DG), rediscovering thus [29, Theorem 3.2.2] and [4, Theorem 3.1].

Corollary 3.10. Let the function Φ be also convex and lower semicontinuous. Then the set ${\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}$ is closed in the topology ω(X*, X) × ℛ if and only if for each x* ∈ X* there exists a ȳ* ∈ Y* such that

$(Φ(·,0))*(x*)=Φ*(x*,ȳ*)=miny*∈Y*Φ*(x*,y*).$

A crucial consequence of Theorem 3.7, via Corollary 3.10, is the strong duality statement for (PG) and (DG) that follows (see also [1, 3, 4, 19, 29]) and contains the weakest hypotheses that guarantee this outcome.

Corollary 3.11. Assume that Φ is convex and lower semicontinuous. If ${\text{Pr}}_{{X}^{*}×ℝ}$ epi Φ* is a closed set in the topology ω(X*, X) × ℛ, then v(PG) = v(DG) and the dual problem (DG) has an optimal solution ȳ* ∈ Y*.

Remark 8. Several regularity conditions were proposed in the literature in order to achieve strong duality for (PG) and (DG). We list in the following the most important of those considered when the function Φ is convex (cf. [19, 27, 29]), namely the one involving continuity,

$(RC1G) | ∃x′ ∈ X such that (x′, 0) ∈ dom Φ and Φ(x′, ·) is continuous at 0,$

a weak generalized interiority type one,

$(RC2G) | X and Y are Fréchet spaces, Φ is lower semicontinuous and 0 ∈ sqri PrY(dom Φ),$

another one applicable when the dimension of the linear hull of PrY(dom Φ) is finite,

$(RC3G) | 0 ∈ ri PrY(dom Φ),$

and finally the closedness type regularity condition already mentioned in Corollary 3.11,

$(RC4G) | Φ is lower semicontinuous and PrX*×ℝ epi Φ* is closed in the topology ω(X*, X) × ℛ.$
Worth noticing is that all these four regularity conditions actually ensure stable strong duality for the primal-dual pair of optimization problems (PG) − (DG). One can thus notice that $\left(R{C}_{i}^{G}\right)$, i = 1, 2, 3, imply $\left(R{C}_{4}^{G}\right)$, which is equivalent to the stable strong duality for (PG) and (DG). An example showing that the closedness type regularity condition $\left(R{C}_{4}^{G}\right)$ is indeed weaker than its counterparts of continuity or interiority type follows, others being available for instance in Boţ et al. [18, 20].

Example 3. (cf. ) Let X = Y = ℝ and $\Phi :ℝ×ℝ\to \overline{ℝ}$, $\Phi \left(x,y\right)={\delta }_{{ℝ}_{+}}\left(x\right)+{\delta }_{{ℝ}_{-}}\left(x+y\right)$. Then ${\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}=ℝ×{ℝ}_{+}$ is closed, thus $\left(R{C}_{4}^{G}\right)$ is satisfied, while neither is Φ(0, ·) continuous at 0 nor is 0 ∈ ri PrY(dom Φ) = (−∞, 0) fulfilled, hence $\left(R{C}_{i}^{G}\right)$, i = 1, 2, 3, fail in this case.
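Reading the two indicators in Example 3 as those of ℝ+ and ℝ− (an interpretation consistent with all the stated conclusions), the claims can be sanity-checked numerically; the truncation radius and grids below are illustrative choices.

```python
# Numerical sanity check of Example 3 with Phi(x, y) = delta_{R+}(x) + delta_{R-}(x + y):
# on a truncated feasible region {x >= 0, x + y <= 0}, the conjugate
# Phi*(x*, y*) = sup { x* x + y* y } stays 0 exactly when x* <= y* and y* >= 0,
# and grows with the truncation radius R otherwise.

R = 10.0
XS = [i / 10.0 for i in range(0, 101)]     # x in [0, R]
US = [-i / 10.0 for i in range(0, 101)]    # u = x + y in [-R, 0]

def phi_conj(xs, ys):
    # with y = u - x the objective reads xs * x + ys * (u - x)
    return max(xs * x + ys * (u - x) for x in XS for u in US)

assert abs(phi_conj(-1.0, 0.0)) < 1e-9   # x* <= y*, y* >= 0: value 0, so (x*, 0) lies in epi Phi*
assert abs(phi_conj(0.0, 1.0)) < 1e-9
assert phi_conj(1.0, 0.0) >= R - 1e-9    # x* > y*: grows without bound as R increases
assert phi_conj(0.0, -1.0) >= R - 1e-9   # y* < 0: likewise unbounded
```

This matches the claim that the projection of epi Φ* onto X* × ℝ is ℝ × ℝ+, a closed set.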

Necessary and sufficient optimality conditions for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$, where x* ∈ W ⊆ X*, follow by means of Theorem 3.5 via Corollary 3.10 (see also [19, 27, 29]).

Corollary 3.12. Let W be a subset of X* and x* ∈ W.

(a) Assume that Φ is convex. Let $\overline{x}\in X$ be an optimal solution to $\left(P{G}_{{x}^{*}}\right)$ and assume that one of the regularity conditions $\left(R{C}_{i}^{G}\right)$, i ∈ {1, 2, 3, 4}, is fulfilled. Then there exists a ȳ* ∈ Y*, an optimal solution to $\left(D{G}_{{x}^{*}}\right)$, such that one has

$Φ(x̄,0) + Φ*(x*,ȳ*) = 〈x*,x̄〉   (10)$

or, equivalently,

$(x*,ȳ*) ∈ ∂Φ(x̄,0).   (11)$
(b) Assume that $\overline{x}\in X$ and ȳ* ∈ Y* fulfill (Equation 10) or (Equation 11). Then $\overline{x}$ is an optimal solution to $\left(P{G}_{{x}^{*}}\right)$, ȳ* is an optimal solution to $\left(D{G}_{{x}^{*}}\right)$ and $v\left(P{G}_{{x}^{*}}\right)=v\left(D{G}_{{x}^{*}}\right)$.

Remark 9. When W = X*, Corollary 3.12 delivers what may be called stable optimality conditions for (PG) and (DG). Taking there x* = 0 one obtains necessary and sufficient optimality conditions for (PG) and (DG) (see also [19, 27, 29]).

As byproducts of the duality investigations presented in this section, one can also derive ε-Farkas statements and results involving (η, ε)-saddle points, inspired for instance by Boţ and Wanka , as follows. We begin with the ε-Farkas type results for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$, where x* ∈ W ⊆ X*. They extend some recent Farkas type statements from the literature that generalize the classical Farkas lemma. For more on the latest developments in the literature on Farkas type statements we refer the reader to the survey  together with the references therein and the additional discussion from the same issue of the journal.

Theorem 3.13. Let W be a subset of X*.

(a) Suppose the validity of Equation (1). For x* ∈ W, if one has Φ(x, 0) − 〈x*, x〉 ≥ ε/2 for all xX, then there exists a ȳ* ∈ Y* such that Φ*(x*, ȳ*) ≤ ε/2.

(b) For x* ∈ W, if there exists a ȳ* ∈ Y* such that Φ*(x*, ȳ*) ≤ − ε/2, then Φ(x, 0) − 〈x*, x〉 ≥ ε/2 for all xX.

PROOF. (a) The existence of ȳ* ∈ Y* such that −(Φ(·, 0))* (x*) ≤ ε − Φ*(x*, ȳ*) is guaranteed by Theorem 3.1. Then ε/2 ≤ ε − Φ*(x*, ȳ*) and the conclusion follows.

(b) The weak duality for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$ yields Φ(x, 0) − 〈x*, x〉 ≥ −Φ*(x*, ȳ*) ≥ ε/2.

□

Using Equation (3) as a regularity condition, other ε-Farkas type results for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$, where x* ∈ W ⊆ X*, can be formulated and proven analogously to the ones in Theorem 3.13.

Theorem 3.14. Let W be a subset of X*.

(a) Suppose that Equation (3) holds. For x* ∈ W, if Φ(x, 0) − 〈x*, x〉 ≥ ε/2 for all xX then ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\le \epsilon /2$.

(b) Given x* ∈ W, if ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\le -\epsilon /2$, then Φ(x, 0) − 〈x*, x〉 ≥ ε/2 for all xX.

If ε = 0, the ε-Farkas type results become equivalences, as follows.

Corollary 3.15. Let W be a subset of X* and suppose that Equation (4) holds. Given x* ∈ W, one has Φ(x, 0) − 〈x*, x〉 ≥ 0 for all xX if and only if there exists a ȳ* ∈ Y* such that Φ*(x*, ȳ*) ≤ 0.
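Corollary 3.15 can be illustrated numerically for the Fenchel-type perturbation Φ(x, y) = f(x) + g(x + y) (an illustrative choice), where Φ*(x*, y*) = f*(x* − y*) + g*(y*): the pointwise nonnegativity of Φ(·, 0) − 〈x*, ·〉 and the existence of a dual certificate ȳ* stand or fall together.

```python
# Grid illustration of the Farkas-type equivalence in Corollary 3.15 for the
# illustrative Fenchel-type perturbation Phi(x, y) = f(x) + g(x + y), with
# Phi*(x*, y*) = f*(x* - y*) + g*(y*).

GRID = [i / 20.0 for i in range(-100, 101)]   # sample points in [-5, 5]

f = lambda x: x * x
g = lambda x: abs(x)

def conj(h, s):
    return max(s * x - h(x) for x in GRID)

def primal_nonneg(xs):
    # does Phi(x, 0) - <x*, x> >= 0 hold for all sampled x?
    return all(f(x) + g(x) - xs * x >= -1e-9 for x in GRID)

def dual_certificate(xs):
    # does some y* on the grid satisfy Phi*(x*, y*) <= 0?
    return any(conj(f, xs - ys) + conj(g, ys) <= 1e-9 for ys in GRID)

assert primal_nonneg(1.0) and dual_certificate(1.0)           # both hold for this x*
assert not primal_nonneg(3.0) and not dual_certificate(3.0)   # both fail for this x*
```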

In the following statement we assume that Equation (3) holds as an equality for ε = 0.

Corollary 3.16. Let W be a subset of X* and suppose that $\text{epi}{\left(\Phi \left(·,0\right)\right)}^{*}=\text{epi}{\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left(·,{y}^{*}\right)$. Given x* ∈ W, one has Φ(x, 0) − 〈x*, x〉 ≥ 0 for all xX if and only if ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\le 0$.

In order to deal with statements involving (η, ε)-saddle points, we generalize the classical notion of a saddle point (cf. [10, 21]).

Definition 3.1. Let η ≥ 0 and x* ∈ X*. We say that $\left(\overline{x},{ȳ}^{*}\right)\in X×{Y}^{*}$ is an (η, ε)-saddle point of the Lagrangian ${L}^{\left(P{G}_{{x}^{*}}\right)}$ if

$L(PGx*)(x¯,y*)-η≤L(PGx*)(x¯,ȳ*)≤L(PGx*)(x,ȳ*)+ε for all x∈X and y*∈Y*.$
Remark 10. The notion of an ε-saddle point of a function with two variables was already considered in the literature, see for instance [43, 44].

Slightly weakening the properness hypothesis imposed on Φ and adding to it convexity and topological assumptions, one obtains the following statement connecting the (η, ε)-saddle points of ${L}^{\left(P{G}_{{x}^{*}}\right)}$ with the (ε+η)-duality gap for the problems $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$, and the existence of some (ε+η)-optimal solutions to them.

Theorem 3.17. Let η ≥ 0 and x* ∈ X*.

(a) If $\left(\overline{x},{ȳ}^{*}\right)\in X×{Y}^{*}$ is an (η, ε)-saddle point of ${L}^{\left(P{G}_{{x}^{*}}\right)}$ and $\Phi \left(\overline{x},·\right)$ is convex, lower semicontinuous and nowhere equal to −∞, then $\overline{x}$ is an (ε + η)-optimal solution to $\left(P{G}_{{x}^{*}}\right)$, ȳ* is an (ε + η)-optimal solution to $\left(D{G}_{{x}^{*}}\right)$ and there is (ε+η)-duality gap for the primal-dual pair of problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$.

(b) If ν ≥ 0, $\overline{x}\in X$ is an ε-optimal solution to $\left(P{G}_{{x}^{*}}\right)$, ȳ* ∈ Y* is an η-optimal solution to $\left(D{G}_{{x}^{*}}\right)$ and $v\left(P{G}_{{x}^{*}}\right)\le v\left(D{G}_{{x}^{*}}\right)+\nu$, then $\left(\overline{x},{ȳ}^{*}\right)\in X×{Y}^{*}$ is an (η+ε+ν, η+ε+ν)-saddle point of ${L}^{\left(P{G}_{{x}^{*}}\right)}$.

PROOF. (a) From Definition 3.1 one gets via Remark 3 that

$Φ(x¯,0)-〈x*,x¯〉-η≤L(PGx*)(x¯,ȳ*)≤ε-Φ*(x*,ȳ*).    (12)$
Using the weak duality for the problems $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$, Equation (12) yields $v\left(D{G}_{{x}^{*}}\right)-\eta \le \epsilon -{\Phi }^{*}\left({x}^{*},{ȳ}^{*}\right)$ and $\Phi \left(\overline{x},0\right)-〈{x}^{*},\overline{x}〉-\eta \le \epsilon +v\left(P{G}_{{x}^{*}}\right)$, hence $\overline{x}$ is an (ε + η)-optimal solution to $\left(P{G}_{{x}^{*}}\right)$ and ȳ* is an (ε + η)-optimal solution to $\left(D{G}_{{x}^{*}}\right)$. Relation (12) implies also that $\Phi \left(\overline{x},0\right)-〈{x}^{*},\overline{x}〉-\eta \le \epsilon -{\Phi }^{*}\left({x}^{*},{ȳ}^{*}\right)$, consequently $v\left(P{G}_{{x}^{*}}\right)\le v\left(D{G}_{{x}^{*}}\right)+\eta +\epsilon$.

(b) Using again Remark 3, one obtains that $\Phi \left(\overline{x},0\right)-〈{x}^{\ast },\overline{x}〉\ge {\mathrm{sup}}_{{y}^{\ast }\in {Y}^{\ast }}{L}^{\left(P{G}_{{x}^{\ast }}\right)}\left(\overline{x},{y}^{\ast }\right)\ge {L}^{\left(P{G}_{{x}^{\ast }}\right)}\left(\overline{x},{\overline{y}}^{\ast }\right)$ and $-{\Phi }^{\ast }\left({x}^{\ast },{\overline{y}}^{\ast }\right)={\mathrm{inf}}_{x\in X}{L}^{\left(P{G}_{{x}^{\ast }}\right)}\left(x,{\overline{y}}^{\ast }\right)\le {L}^{\left(P{G}_{{x}^{\ast }}\right)}\left(\overline{x},{\overline{y}}^{\ast }\right)$. But $\overline{x}$ is an ε-optimal solution to $\left(P{G}_{{x}^{*}}\right)$ and ȳ* is an η-optimal solution to $\left(D{G}_{{x}^{*}}\right)$, consequently

$v(DGx*)-η≤L(PGx*)(x¯,ȳ*)≤v(PGx*)+ε.$
Recalling that $v\left(P{G}_{{x}^{*}}\right)\le v\left(D{G}_{{x}^{*}}\right)+\nu$, one obtains from here

$v(PGx*)-η-ν≤L(PGx*)(x¯,ȳ*)≤v(DGx*)+ε+ν,$

followed by

$Φ(x¯,0)-〈x*,x¯〉-(ε+η+ν)≤L(PGx*)(x¯,ȳ*)≤-Φ*(x*,ȳ*)+(ε+η+ν).$
Employing again the formulae derived above via Remark 3 one obtains that $\left(\overline{x},{ȳ}^{*}\right)\in X×{Y}^{*}$ is an (η + ε + ν, η + ε + ν)-saddle point of ${L}^{\left(P{G}_{{x}^{*}}\right)}$.

□

If one takes in Theorem 3.17 η = ε = ν = 0, the two assertions become equivalent, rediscovering [29, Theorem 3.3.2].

Corollary 3.18. Let x* ∈ X* and assume that Φ is a convex and lower semicontinuous function taking nowhere the value −∞. Then $\left(\overline{x},{ȳ}^{*}\right)\in X×{Y}^{*}$ is a saddle point of ${L}^{\left(P{G}_{{x}^{*}}\right)}$ if and only if $\overline{x}$ is an optimal solution to $\left(P{G}_{{x}^{*}}\right)$, ȳ* is an optimal solution to $\left(D{G}_{{x}^{*}}\right)$ and $v\left(P{G}_{{x}^{*}}\right)=v\left(D{G}_{{x}^{*}}\right)$.
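The classical saddle point statement of Corollary 3.18 can be verified numerically on a toy instance of our own (not from the paper), with x* = 0: for min x² subject to 1 − x ≤ 0, the Lagrangian L(x, z*) = x² + z*(1 − x) has the saddle point (x̄, z̄*) = (1, 2), and the saddle value 1 equals both optimal values.

```python
import numpy as np

# Hypothetical toy instance: min x^2 subject to 1 - x <= 0 (so x >= 1),
# with Lagrangian L(x, z) = x^2 + z*(1 - x) over x in R, z >= 0.
def L(x, z):
    return x * x + z * (1.0 - x)

x_bar, z_bar = 1.0, 2.0
xs = np.linspace(-10.0, 10.0, 2001)
zs = np.linspace(0.0, 10.0, 1001)

# saddle point inequalities: L(x_bar, z) <= L(x_bar, z_bar) <= L(x, z_bar)
assert all(L(x_bar, z) <= L(x_bar, z_bar) + 1e-9 for z in zs)
assert all(L(x, z_bar) >= L(x_bar, z_bar) - 1e-9 for x in xs)

# the saddle value is the common optimal value of the primal and its dual
assert abs(L(x_bar, z_bar) - 1.0) < 1e-12
```

Here L(1, z) = 1 for every z ≥ 0 and L(x, 2) = (x − 1)² + 1 ≥ 1, so both saddle inequalities hold with equality patterns typical of strong duality.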

The general scalar optimization problem (PG) encompasses as special cases different classes of scalar optimization problems. In the next subsections we write constrained and unconstrained optimization problems as special cases of (PG) and assign dual problems to them by employing carefully chosen perturbation functions (see [19, 29] for more details).

### 3.1. Constrained Scalar Optimization Problems

Consider the nonempty set S ⊆ X and let the convex cone C ⊆ Y induce a partial ordering on Y. Take the proper functions $f:X\to \overline{ℝ}$ and h : X → Y, fulfilling the feasibility condition dom f ∩ S ∩ h−1(−C) ≠ ∅. The general primal constrained scalar optimization problem is

$(PC) infx∈Af(x),$

whose feasible set is

$A={x∈S:h(x)∈-C}.$

One can find several perturbation functions for which (PC) turns out to be a special case of (PG). We consider in our presentation two of them, each assigning a different dual problem to (PC) that arises from (DG) (cf. [19, 29]).

The classical Lagrange dual problem to (PC),

$(DCL) supz∗∈C∗infx∈S [f(x)+(z∗h)(x)],$

can be seen as a special case of (DG) via the perturbation function

$ΦL:X×Y→ℝ¯,ΦL(x,z)={f(x), if x∈S,h(x)∈z−C,+∞, otherwise,$

that is proper, as f and h are proper and the feasibility condition is satisfied, and whose conjugate is

$(ΦL)*(x*,z*)=(f-(z*h)+δS)*(x*)+δ-C*(z*).$

We begin with a characterization via epigraph inclusions of a situation of stable ε-duality gap for the problems (PC) and (DCL) that is a special case of Theorem 3.1.

Theorem 3.19. Let W be a subset of X*. Then it holds

$epi(f+δA)*∩(W×ℝ)⊆⋃z*∈C*epi(f+(z*h))S*∩(W×ℝ)-(0,ε)$

if and only if for each x* ∈ W there exists a ${\overline{z}}^{*}\in {C}^{*}$ such that

$(f+δA)*(x*)≥(f+(z¯*h))S*(x*)-ε.$
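As a sanity check of the inequality in Theorem 3.19 with ε = 0, on a toy instance of our own (not from the paper): take f(x) = x², S = ℝ, h(x) = 1 − x and C = [0, ∞), so that A = [1, ∞); for x* ≤ 2 the multiplier z̄* = 2 − x* even yields equality.

```python
import numpy as np

# Hypothetical toy instance: f(x) = x^2, S = R, h(x) = 1 - x, C = [0, inf),
# hence A = {x : h(x) in -C} = [1, inf).
xs_A = np.linspace(1.0, 50.0, 50001)    # grid for the feasible set A

def conj_constrained(xstar):
    # (f + delta_A)*(x*) = sup_{x >= 1} (x* x - x^2)
    return max(xstar * x - x * x for x in xs_A)

def conj_lagrange(xstar, zstar):
    # (f + z*(1 - x))*(x*) = (x* + z*)^2 / 4 - z*  (computed analytically)
    return (xstar + zstar) ** 2 / 4.0 - zstar

for xstar in (-3.0, 0.0, 1.0, 2.0):
    zbar = 2.0 - xstar                  # minimizing multiplier when x* <= 2
    lhs = conj_constrained(xstar)
    rhs = conj_lagrange(xstar, zbar)
    assert lhs >= rhs - 1e-6            # inequality of Theorem 3.19, eps = 0
    assert abs(lhs - rhs) < 1e-3        # here it even holds with equality
```

The equality for every tested x* reflects the stable strong duality available in this convex instance.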

Remark 11. Analogously, one can particularize the other statements regarding pairs of primal-dual problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$, x* ∈ X*, for constrained optimization problems and their Lagrange duals, rediscovering or improving different statements from Boţ et al. [20, 39], Boncea and Grad , and Jeyakumar and Li [45, 46]. Under additional assumptions which guarantee the convexity of the perturbation function ΦL (e.g., take S and f convex and h C-convex), the strong duality statement for the problems (PC) and (DCL) can be derived directly from Corollary 3.11 or Remark 8 by particularizing $\left(R{C}_{i}^{G}\right)$, i ∈ {1, 2, 3, 4} to (cf. [19, 29])

$(RC1L)|∃x′∈dom f∩S such that h(x′)∈−intC,$

that is actually the classical Slater constraint qualification,

then, when the linear hull of h(dom f ∩ S ∩ dom h) + C is finite dimensional,

and

Another perturbation function employed to assign a conjugate dual problem to (PC) as a special case of (DG) is (cf. [19, 29])

$ΦFL:X×X×Y→ℝ¯, ΦFL(x,y,z)={f(x+y), if x∈S,h(x)∈z−C,+∞, otherwise,$

which is proper as well, since f and h are proper and the mentioned feasibility condition is fulfilled, and has as conjugate the function ${\left({\Phi }^{FL}\right)}^{*}:{X}^{*}×{X}^{*}×{Y}^{*}\to \overline{ℝ}$,

$(ΦFL)*(x*,y*,z*)=f*(y*)+(-(z*h)+δS)*(x*-y*)+δ-C*(z*).$

The dual problem it assigns to (PC) is the Fenchel-Lagrange dual problem

$(DCFL) supy∗∈X∗,z∗∈C∗{−f∗(y∗)−((z∗h)+δS)∗(−y∗)}.$

For the reader's convenience we give the characterization via epigraph inclusions of a situation of stable ε-duality gap for the problems (PC) and (DCFL), which is a special case of Theorem 3.1.

Theorem 3.20. Let W be a subset of X*. Then it holds

$epi(f+δA)*∩(W×ℝ)⊆⋃z*∈C*(epi f*+epi(z*h)S*)∩(W×ℝ)-(0,ε)$
if and only if for each x* ∈ W there exist ȳ* ∈ X* and ${\overline{z}}^{*}\in {C}^{*}$ such that

$(f+δA)*(x*)≥f*(ȳ*)+(z¯*h)S*(x*-ȳ*)-ε.$

Remark 12. Analogously, one can particularize the other statements regarding pairs of primal-dual problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$, x* ∈ X*, for constrained optimization problems and their Fenchel-Lagrange duals, rediscovering or improving different statements from Grad , Boţ et al. [20, 38], and Boncea and Grad . Under additional assumptions which guarantee the convexity of the perturbation function ΦFL (e.g., take S and f convex and h C-convex), the strong duality statement for the problems (PC) and (DCFL) can be derived directly from Corollary 3.11 or Remark 8 by particularizing $\left(R{C}_{i}^{G}\right)$, i ∈ {1, 2, 3, 4} to (cf. [19, 29])

$(RC1FL)|∃x′∈dom f∩S such that f is continuous at x′ andh(x′)∈−int C,$

then, when the linear hull of dom f × C − epi−C(−h) ∩ (S × Z) is finite dimensional,

$(RC3FL)|0∈ri (dom f×C− epi−C(−h)∩(S×Z)),$

and

One can employ other perturbation functions for proposing conjugate dual problems to (PC) as special cases of (DG), too. For instance, using the perturbation function ΦEFL : X × X × X × Y → ℝ̄,

which is proper as well, since f and h are proper and the mentioned feasibility condition is fulfilled, and has as conjugate the function ${\left({\Phi }^{EFL}\right)}^{*}:{X}^{*}×{X}^{*}×{X}^{*}×{Y}^{*}\to \overline{ℝ}$,

one can attach to (PC) the extended Fenchel-Lagrange dual problem (cf. [22, 38])

$(DCEFL) supy∗,t∗∈X∗,z∗∈C∗{−f∗(y∗)−(z∗h)∗(t∗)−σS(−y∗−t∗)},$

which will not be mentioned further (see [22, 38] for similar statements regarding this dual problem to (PC) that can be rediscovered as special cases of the main ones from this paper).

Remark 13. To give stable ε-duality statements for (PC) and the dual problems we assigned to it within this subsection one can introduce, like in Boncea and Grad  and Jeyakumar and Li [45, 46], the functions h◇, ${h}_{S}^{◇}:{X}^{*}\to \overline{ℝ}$, defined by ${h}^{◇}={\text{inf}}_{{z}^{*}\in {C}^{*}}{\left({z}^{*}h\right)}^{*}$ and ${h}_{S}^{◇}={\text{inf}}_{{z}^{*}\in {C}^{*}}{\left({z}^{*}h\right)}_{S}^{*}$, respectively. Then, for instance, the stable ε-duality gap for the problems (PC) and (DCFL) is characterized by the epigraph inclusion

$epi(f+δA)*⊆epi(f*□hS◇)-(0,ε).$

Remark 14. One can obtain other significant results from the statements presented in this subsection by taking the function f to be identically zero; then characterizations via epigraph inclusions of relations involving the (indicator function of the) feasible set A and, on the other hand, the constraint function h and the constraint set S can be derived, as done in Jeyakumar et al. [8, 9], Grad , Boncea and Grad , and Boţ et al. [38, 39].

### 3.2. Unconstrained Scalar Optimization Problems

Consider the primal unconstrained optimization problem

$(PU) infx∈X[f(x)+g(Ax)],$

where A : X → Y is a linear continuous mapping and $f:X\to \overline{ℝ}$ and $g:Y\to \overline{ℝ}$ are proper functions fulfilling the feasibility condition dom f ∩ A−1(dom g) ≠ ∅. The perturbation function considered for assigning to (PU) the classical Fenchel dual problem

$(DU) supy∗∈Y∗{−f∗(A∗y∗)−g∗(−y∗)},$

is (cf. [19, 27])

$ΦU:X×Y→ℝ¯, ΦU(x,y)=f(x)+g(Ax+y),$

which is proper because f and g are proper and due to the fulfillment of the mentioned feasibility condition, and has as conjugate the function

$(ΦU)*:X*×Y*→ℝ¯,(ΦU)*(x*,y*)=f*(x*-A*y*)+g*(y*).$
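A quick numerical illustration of Fenchel duality for (PU) and (DU), on a toy instance of our own (not from the paper): X = Y = ℝ, A the identity, f(x) = x² and g(y) = (y − 1)².

```python
import numpy as np

# Hypothetical toy instance: f(x) = x^2, g(y) = (y - 1)^2, A = id on R.
# (PU): inf_x [x^2 + (x - 1)^2] has value 1/2, attained at x = 1/2.
xs = np.linspace(-20.0, 20.0, 400001)
primal = min(x * x + (x - 1.0) ** 2 for x in xs)

# Conjugates in closed form: f*(u) = u^2/4 and g*(v) = v^2/4 + v, so the
# Fenchel dual (DU) objective is -f*(A* y*) - g*(-y*) = y* - y*^2/2,
# maximized at y* = 1 with value 1/2.
fstar = lambda u: u * u / 4.0
gstar = lambda v: v * v / 4.0 + v
dual = max(-fstar(y) - gstar(-y) for y in np.linspace(-20.0, 20.0, 4001))

assert abs(primal - 0.5) < 1e-6
assert abs(dual - 0.5) < 1e-9           # strong duality: v(PU) = v(DU)
```

Since f and g are convex and finite everywhere, any of the interiority or closedness type conditions mentioned below is satisfied, so the zero duality gap observed numerically is the expected one.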

Like in the previous subsection, we give only a characterization via epigraph inclusions of a situation of stable ε-duality gap for the problems (PU) and (DU), that is a special case of Theorem 3.1, where the notation (A* × idℝ)(epi g*) = {(x*, r) ∈ X* × ℝ : ∃y* ∈ Y* such that A*y* = x* and (y*, r) ∈ epi g*} is used.

Theorem 3.21. Let W be a subset of X*. Then it holds

$epi(f+g∘A)*∩(W×ℝ)⊆(epi f*+(A*×idℝ)(epi g*))∩(W×ℝ)-(0,ε)$
if and only if for each x* ∈ W there exists a ȳ* ∈ Y* such that

$(f+g∘A)*(x*)≥f*(x*-A*ȳ*)+g*(ȳ*)-ε.$

Remark 15. Analogously, one can particularize the other statements regarding pairs of primal-dual problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$, x* ∈ X*, for unconstrained optimization problems and their Fenchel duals, rediscovering or improving different statements from Boţ and Wanka [6, 7], Grad , Boţ et al. [18, 20], and Boncea and Grad . Under additional assumptions which guarantee the convexity of the perturbation function ΦU (e.g., take f and g convex), the strong duality statement for the problems (PU) and (DU) can be derived directly from Corollary 3.11 or Remark 8 by particularizing $\left(R{C}_{i}^{G}\right)$, i ∈ {1, 2, 3, 4} to (cf. [19, 29])

then, when the linear hull of dom g − A(dom f) is finite dimensional,

$(RC3U)|ri A(dom f)∩ri dom g≠∅,$

a condition employed also in the framework of generalized convex optimization, for instance in [47, 48], and

Remark 16. Significant particular instances of (PU) can be derived by taking X = Y and A to be the identity mapping on X or f to be identically zero, respectively. The dual problem (DU) and the corresponding duality and optimality conditions statements can be then specialized for these problems, too.

Remark 17. One can alternatively write (PC) as an unconstrained optimization problem

$(PC) infx∈X[f(x)+δA(x)],$

where the notations are consistent with the ones in the previous subsection. Taking A := idX, f := f and g := δA, a Fenchel dual problem can be attached to (PC) and different duality statements can be obtained for this primal-dual pair of optimization problems as special cases of the ones regarding pairs of primal-dual problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$, x* ∈ X*. When necessary, the convexity of the feasible set A is ensured by taking S convex and h C-convex, while when S is a closed set and h a C-epi-closed vector function, the set A is closed, too.

Remark 18. The investigations on ε-duality regarding unconstrained problems can be extended to problems consisting of the minimization of the sum of a function with a cone-increasing function composed with a vector function. Considering a convex cone C ⊆ Y, a proper function $f:X\to \overline{ℝ}$, a proper and C-increasing function $g:Y\to \overline{ℝ}$ and a proper vector function h : X → Y fulfilling the feasibility condition dom g ∩ (h(dom f) + C) ≠ ∅, to the unconstrained composed optimization problem

$(PO) infx∈X[f(x)+g(h(x))],$

different dual problems that are special cases of (DG) can be attached via perturbation theory. The statements regarding the pairs of primal-dual problems $\left(P{G}_{{x}^{*}}\right)-\left(D{G}_{{x}^{*}}\right)$, x* ∈ X*, can be adapted for (PO) and its duals, too, rediscovering in some instances results from Grad , Boţ et al. , Boţ , and Boncea and Grad . Alternatively, the assertions for (PU) and (DU) can be used for the same purpose by carefully constructing two functions of two variables, say F and G, such that (F + G)*(·, 0) = (f + g∘h)*, as done in Boţ et al.  and Boţ .

## 4. Characterizations Involving Subdifferentials

A second large class of closedness type regularity conditions makes use of (ε-)subdifferential inclusions instead of epigraphs. They were initially developed in connection with the notion of total duality for convex optimization problems (see, for instance, [8, 38, 39] and the references therein) and in the following we present the most important results concerning them available to date.

Like in the previous section, we take an ε ≥ 0. The first statement we give presents connections between situations of stable ε-duality gap for the problems (PG) and (DG) and ε-subdifferential inclusions. Note that in this case there is no equivalence.

Theorem 4.1. Let x ∈ X and ν ≥ 0. If

$(Φ(·,0))*(x*)≥infy*∈Y*Φ*(x*,y*)-ν-ε    (13)$

holds for all ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$, one has

$∂νΦ(·,0)(x)⊆⋂η>0PrX*∂ε+η+νΦ(x,0).    (14)$

Viceversa, Equation (14) yields for any ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$ that

$infy*∈Y*Φ*(x*,y*)-ε-ν≤(Φ(·,0))*(x*).    (15)$
PROOF. When ∂νΦ(·, 0)(x) = ∅ there is nothing to prove.

Assume Equation (13) valid for an ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$. This means that for each η > 0 there exists a ${y}_{\eta }^{*}\in {Y}^{*}$ such that ${\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)\ge {\Phi }^{*}\left({x}^{*},{y}_{\eta }^{*}\right)-\eta -\nu -\epsilon$, which, using that ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$, yields $〈{x}^{*},x〉+\eta +\epsilon +\nu \ge \Phi \left(x,0\right)+{\Phi }^{*}\left({x}^{*},{y}_{\eta }^{*}\right)$, i.e., $\left({x}^{*},{y}_{\eta }^{*}\right)\in {\partial }_{\epsilon +\eta +\nu }\Phi \left(x,0\right)$. Therefore ${x}^{*}\in {\bigcap }_{\eta >0}{\text{Pr}}_{{X}^{*}}{\partial }_{\epsilon +\eta +\nu }\Phi \left(x,0\right)$, and because x* was arbitrarily chosen in ${\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$, Equation (14) follows.

Assume Equation (14) true and let ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$. Then for each η > 0 there exists a ${y}_{\eta }^{*}\in {Y}^{*}$ such that $\left({x}^{*},{y}_{\eta }^{*}\right)\in {\partial }_{\epsilon +\eta +\nu }\Phi \left(x,0\right)$. Fixing an ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$ and η > 0, it follows that $\Phi \left(x,0\right)+{\Phi }^{*}\left({x}^{*},{y}_{\eta }^{*}\right)\le 〈{x}^{*},x〉+\epsilon +\eta +\nu$, that is equivalent to ${\Phi }^{*}\left({x}^{*},{y}_{\eta }^{*}\right)-\epsilon -\nu \le 〈{x}^{*},x〉-\Phi \left(x,0\right)+\eta$, which yields ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)-\epsilon -\nu \le {\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)+\eta$. The latter inequality holds for any η > 0, so letting η tend toward 0 we obtain Equation (15) because x* was arbitrarily chosen in ∂νΦ(·, 0)(x).

In case ν = 0, Theorem 4.1 turns into an equivalence, providing a characterization via subdifferential inclusions of a situation of stable ε-duality gap for the problems (PG) and (DG).

Corollary 4.2. Let x ∈ X. Then

$∂Φ(·,0)(x)⊆⋂η>0PrX*∂ε+ηΦ(x,0)    (16)$

holds if and only if Equation (13) is valid for all x* ∈ ∂ Φ(·, 0)(x).
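For intuition about the ε-subdifferentials appearing in these inclusions, here is a toy computation of our own (not from the paper): for f(x) = x² on ℝ one has ∂εf(0) = [−2√ε, 2√ε], directly from the definition x* ∈ ∂εf(0) ⇔ x² ≥ x*x − ε for all x.

```python
import numpy as np

# Hypothetical toy computation: for f(x) = x^2 on R one has
# d_eps f(0) = [-2 sqrt(eps), 2 sqrt(eps)], since x* lies in d_eps f(0)
# iff x^2 >= x* x - eps for all x, i.e. iff x*^2 / 4 <= eps.
eps = 0.25
xs = np.linspace(-100.0, 100.0, 200001)

def in_eps_subdiff(xstar):
    return bool(np.all(xs * xs >= xstar * xs - eps))

bound = 2.0 * np.sqrt(eps)              # = 1.0 here
assert in_eps_subdiff(bound)            # the endpoints belong to d_eps f(0)
assert in_eps_subdiff(-bound)
assert in_eps_subdiff(0.0)
assert not in_eps_subdiff(bound + 0.1)  # 1.1 lies outside
```

The interval shrinks to the usual subdifferential ∂f(0) = {0} as ε → 0, mirroring the intersections over η > 0 used above.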

Remark 19. Let x ∈ X. Employing the weak duality statements for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$, x* ∈ X*, one can show that for any ν ≥ 0 it holds ∂ε+νΦ(·, 0)(x) ⊇ ⋂η>0 PrX* ∂ν+ε+ηΦ(x, 0). Thus, when ε = 0 Equations (14) and (16) turn into equalities and so does also Equation (13), while Theorem 4.1 yields that if for all ν > 0 one has

$∂νΦ(·,0)(x)=⋂η>0PrX*∂η+νΦ(x,0),    (17)$

then whenever μ > 0 one has ${\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)\ge {\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)-\mu$ for all ${x}^{*}\in {\partial }_{\mu }\Phi \left(·,0\right)\left(x\right)$. Consequently, for all ${x}^{*}\in {\cap }_{\mu >0}{\partial }_{\mu }\Phi \left(·,0\right)\left(x\right)=\partial \Phi \left(·,0\right)\left(x\right)$ (here we have used the fact that the intersection over μ > 0 of the (ε + μ)-subdifferentials of a function at a point coincides with its ε-subdifferential at that point) it holds ${\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)\ge {\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)$, which actually turns into an equality since the opposite inequality holds in general and, via Theorem 4.1, yields Equation (17) for all ν > 0. Note also that in Theorem 4.1 and Corollary 4.2 we correct [10, Theorem 2.11 and Theorem 2.12], where Equations (14) and (16) were given as equalities, respectively.

Remark 20. The inequality (13) can be rewritten as $v\left(P{G}_{{x}^{*}}\right)\le v\left(D{G}_{{x}^{*}}\right)+\epsilon$. Thus, Corollary 4.2 provides an equivalent characterization via subdifferential inclusions of the ε-duality gap for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$ when x* ∈ ∂ Φ(·, 0)(x), i.e., when x is an optimal solution to the problem $\left(P{G}_{{x}^{*}}\right)$.

One can develop Remark 19 even further as follows.

Theorem 4.3. One has

$∂νΦ(·,0)(x)=⋂η>0PrX*∂η+νΦ(x,0),$

for all x ∈ X and all ν > 0 if and only if for all x* ∈ X* it holds ${\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)={\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)$.

PROOF. Let x ∈ X. If (x, 0) ∉ dom Φ, there is nothing to prove, so we consider the case Φ(x, 0) ∈ ℝ. Take now x* ∈ X*. If (Φ(·, 0))*(x*) = +∞ there is nothing to prove, as x* ∉ ∂ Φ(·, 0)(x), otherwise ${x}^{*}\in {\partial }_{\mu }\Phi \left(·,0\right)\left(x\right)$ for all μ ≥ Φ(x, 0)+(Φ(·, 0))*(x*) − 〈x*, x〉.

The validity of Equation (17) for ν = Φ(x, 0) + (Φ(·, 0))*(x*) − 〈x*, x〉 implies like in the proof of Theorem 4.1 that ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)-\nu \le 〈{x}^{*},x〉-\Phi \left(x,0\right)+\eta$ for all η > 0. Letting η tend toward 0 and replacing ν with its value, it follows ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\le {\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)$, which, due to the general validity of the opposite inequality proves the sufficiency.

To show the necessity, let ν > 0 and ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$. Then the hypothesis yields $\Phi \left(x,0\right)+{\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\le 〈{x}^{*},x〉+\nu$. If η > 0, there exists a ${y}_{\eta }^{*}\in {Y}^{*}$ such that $\Phi \left(x,0\right)+{\Phi }^{*}\left({x}^{*},{y}_{\eta }^{*}\right)\le 〈{x}^{*},x〉+\nu +\eta$, i.e., ${x}^{*}\in {\text{Pr}}_{{X}^{*}}{\partial }_{\eta +\nu }\Phi \left(x,0\right)$. As η, x and ν were arbitrarily chosen, the conclusion follows via Remark 19.           □

A characterization via subdifferential inclusions of a situation of ε-duality gap for (PG) and (DG) follows.

Theorem 4.4. Let x ∈ X. Then

$∂εΦ(·,0)(x)=⋂η>0PrX*∂ε+ηΦ(x,0)$

holds if and only if for each ${x}^{*}\in {\partial }_{\epsilon }\Phi \left(·,0\right)\left(x\right)$ one has

$Φ(x,0)-〈x*,x〉≤-infy*∈Y*Φ*(x*,y*)+ε.    (18)$
PROOF. Let x ∈ X. If (x, 0) ∉ dom Φ, there is nothing to prove, so we consider the case Φ(x, 0) ∈ ℝ.

To show the necessity, let ${x}^{*}\in {\partial }_{\epsilon }\Phi \left(·,0\right)\left(x\right)$. Then for each η > 0 there exists a ${y}_{\eta }^{*}\in {Y}^{*}$ such that $\left({x}^{*},{y}_{\eta }^{*}\right)\in {\partial }_{\epsilon +\eta }\Phi \left(x,0\right)$, i.e., $\Phi \left(x,0\right)+{\Phi }^{*}\left({x}^{*},{y}_{\eta }^{*}\right)\le 〈{x}^{*},x〉+\epsilon +\eta$. This yields $\Phi \left(x,0\right)+{\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\le 〈{x}^{*},x〉+\epsilon +\eta$ for any η > 0. Letting η tend toward 0, Equation (18) follows.

To show the opposite implication let ${x}^{*}\in {\partial }_{\epsilon }\Phi \left(·,0\right)\left(x\right)$. By Equation (18), for each η > 0 there exists a ${y}_{\eta }^{*}\in {Y}^{*}$ such that $\Phi \left(x,0\right)+{\Phi }^{*}\left({x}^{*},{y}_{\eta }^{*}\right)\le 〈{x}^{*},x〉+\epsilon +\eta$, i.e., $\left({x}^{*},{y}_{\eta }^{*}\right)\in {\partial }_{\epsilon +\eta }\Phi \left(x,0\right)$. Therefore ${x}^{*}\in {\bigcap }_{\eta >0}{\text{Pr}}_{{X}^{*}}{\partial }_{\epsilon +\eta }\Phi \left(x,0\right)$. The reverse inclusion follows by Remark 19.          □

Remark 21. For an ${x}^{*}\in {\partial }_{\epsilon }\Phi \left(·,0\right)\left(x\right)$, the right-hand side of Equation (18) is actually $v\left(D{G}_{{x}^{*}}\right)+\epsilon$, while on the left-hand side we have a quantity that is larger than or equal to $v\left(P{G}_{{x}^{*}}\right)$. Thus, Equation (18) guarantees ε-duality gap for $\left(P{G}_{{x}^{*}}\right)$ and $\left(D{G}_{{x}^{*}}\right)$ and Theorem 4.4 provides a sufficient condition based on ε-subdifferential inclusions that guarantees it.

Other characterizations via subdifferential inclusions of ε-duality gap situations for (PG) and (DG) follow.

Theorem 4.5. Let x ∈ X. Then

$∂Φ(·,0)(x)⊆PrX*∂εΦ(x,0)    (19)$

holds if and only if for each x* ∈ ∂ Φ(·, 0)(x) there exists a y* ∈ Y* such that

$Φ(x,0)-〈x*,x〉≤-Φ*(x*,y*)+ε.    (20)$
PROOF. Inclusion Equation (19) holds if and only if for each x* ∈ ∂ Φ(·, 0)(x) there exists a y* ∈ Y* such that $\left({x}^{*},{y}^{*}\right)\in {\partial }_{\epsilon }\Phi \left(x,0\right)$, i.e., Φ(x, 0) + Φ*(x*, y*) ≤ 〈x*, x〉 + ε. But x* ∈ ∂ Φ(·, 0)(x) if and only if (Φ(·, 0))*(x*) = 〈x*, x〉 − Φ(x, 0) and the desired equivalence follows.

Theorem 4.6. One has

$∂νΦ(·,0)(x)=PrX*∂νΦ(x,0)    (21)$
for all x ∈ X and all ν > 0 if and only if for all x* ∈ X* it holds ${\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)=\underset{{y}^{*}\in {Y}^{*}}{\text{min}}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)$.

PROOF. Let x ∈ X. If (x, 0) ∉ dom Φ, there is nothing to prove, so we consider the case Φ(x, 0) ∈ ℝ. Take now x* ∈ X*. If (Φ(·, 0))*(x*) = +∞ there is nothing to prove, so we consider further that (Φ(·, 0))*(x*) ∈ ℝ.

The validity of Equation (21) for ν = Φ(x, 0) + (Φ(·, 0))*(x*) − 〈x*, x〉 implies the existence of a ȳ* ∈ Y* such that Φ*(x*, ȳ*) ≤ 〈x*, x〉 − Φ(x, 0) + ν, which actually means Φ*(x*, ȳ*) ≤ 〈x*, x〉 − Φ(x, 0) + Φ(x, 0) + (Φ(·, 0))*(x*) − 〈x*, x〉. Consequently, Φ*(x*, ȳ*) ≤ (Φ(·, 0))*(x*), which, combined with the always valid inequality ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left({x}^{*},{y}^{*}\right)\ge {\left(\Phi \left(·,0\right)\right)}^{*}\left({x}^{*}\right)$, yields the sufficiency.

To show the necessity, let ν > 0 and ${x}^{*}\in {\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)$. This yields Φ(x, 0) + (Φ(·, 0))*(x*) ≤ 〈x*, x〉 + ν. The hypothesis guarantees the existence of a ȳ* ∈ Y* such that Φ*(x*, ȳ*) = (Φ(·, 0))*(x*), consequently Φ(x, 0) + Φ*(x*, ȳ*) ≤ 〈x*, x〉 + ν, i.e., ${x}^{*}\in {\text{Pr}}_{{X}^{*}}{\partial }_{\nu }\Phi \left(x,0\right)$. Since one always has ${\partial }_{\nu }\Phi \left(·,0\right)\left(x\right)\supseteq {\text{Pr}}_{{X}^{*}}{\partial }_{\nu }\Phi \left(x,0\right)$, Equation (21) follows.                             □

The following statement can be shown in a similar manner.

Theorem 4.7. Let x ∈ X. Then

$∂εΦ(·,0)(x)=PrX*∂εΦ(x,0)$

holds if and only if for each ${x}^{*}\in {\partial }_{\epsilon }\Phi \left(·,0\right)\left(x\right)$ there exists a ȳ* ∈ Y* such that

$Φ(x,0)-〈x*,x〉≤-Φ*(x*,ȳ*)+ε.$

Adding to the function Φ also the classical convexity and topological properties, another characterization of Equation (17) (for an alternative proof by means of epigraph inclusions consult [35, Theorem 3.1]) as well as a consequence of Corollary 4.2 can be delivered.

Theorem 4.8. Let the function Φ be also convex and lower semicontinuous. The formula (17) is valid for all x ∈ X and all ν > 0 if and only if the function ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left(·,{y}^{*}\right)$ is ω(X*, X)-lower semicontinuous.

PROOF. By Lemma 2.1, the hypotheses yield that the function (Φ(·, 0))* is actually the ω(X*, X)-lower semicontinuous hull of ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left(·,{y}^{*}\right)$. The conclusion is then a consequence of Theorem 4.3.                    □

Corollary 4.9. If the function Φ is also convex and lower semicontinuous and the function ${\mathrm{inf}}_{y*\in Y*}{\Phi }^{*}\left(·,{y}^{*}\right)$ is ω(X*, X)-lower semicontinuous, then for all x ∈ X it holds

$∂Φ(·,0)(x)=⋂η>0PrX*∂ηΦ(x,0).$

Under the same additional assumptions for the function Φ, one can deliver another characterization of Equation (21) that follows via Corollary 3.11 (see also ), by means of a closedness type regularity condition this time.

Theorem 4.10. Let the function Φ be also convex and lower semicontinuous. The formula (21) is valid for all x ∈ X and all ν > 0 if and only if the set ${\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}$ is closed in the topology ω(X*, X) × ℝ.

Using Theorem 4.5 and Corollary 3.11 one can provide the following statement (see also ).

Corollary 4.11. If Φ is also convex and lower semicontinuous and the set ${\text{Pr}}_{{X}^{*}×ℝ}\text{epi}{\Phi }^{*}$ is closed in the topology ω(X*, X) × ℝ, then for all x ∈ X one has

$∂Φ(·,0)(x)=PrX*∂Φ(x,0).$

Remark 22. The differences between the closedness type regularity conditions considered in Corollary 4.9 and Corollary 4.11 can be noticed more clearly when comparing the way these can be equivalently written as formulae for the conjugate of Φ(·, 0). The first of them consists of an infimum, thus it characterizes the stable zero duality gap for (PG) and (DG), while the other one means that the same infimum is also attained, i.e., there is stable strong duality for (PG) and (DG), being thus obviously stronger than its counterpart. An example to underline this fact can be found in Boţ and Wanka . The difference between these two conditions can be seen also when we equivalently characterize them as formulae for the ε-subdifferential of Φ(·, 0) in Theorem 4.8 and Theorem 4.10, respectively.
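In formulas, the two regularity conditions compared in this remark read (restating Theorems 4.3 and 4.6):

```latex
% stable zero duality gap (cf. Theorem 4.3 / Corollary 4.9): for all x* in X*,
(\Phi(\cdot,0))^{*}(x^{*}) \;=\; \inf_{y^{*}\in Y^{*}} \Phi^{*}(x^{*},y^{*}),
% stable strong duality (cf. Theorem 4.6 / Corollary 4.11): the infimum is
% moreover attained, i.e., for all x* in X*,
(\Phi(\cdot,0))^{*}(x^{*}) \;=\; \min_{y^{*}\in Y^{*}} \Phi^{*}(x^{*},y^{*}).
```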

One can employ the results from this subsection for providing ε-optimality conditions for the primal-dual pair (PG) − (DG), too. We begin with a consequence of Theorem 4.1.

Theorem 4.12. (a) If Equation (3) holds and $\overline{x}\in X$ is an ε-optimal solution to (PG), for each η > 0 there exists a ${ȳ}_{\eta }^{*}\in {Y}^{*}$ such that $\left(0,{ȳ}_{\eta }^{*}\right)\in {\partial }_{\eta +\epsilon }\Phi \left(\overline{x},0\right)$, i.e., $\Phi \left(\overline{x},0\right)+{\Phi }^{*}\left(0,{ȳ}_{\eta }^{*}\right)\le \eta +\epsilon$. Moreover, ${ȳ}_{\eta }^{*}$ is an (η + ε)-optimal solution to (DG).

(b) If $\overline{x}\in X$ and for each η > 0 there exists a ${ȳ}_{\eta }^{*}\in {Y}^{*}$ such that $\left(0,{ȳ}_{\eta }^{*}\right)\in {\partial }_{\eta +\epsilon }\Phi \left(\overline{x},0\right)$, then $\overline{x}$ is an ε-optimal solution to (PG), each ${ȳ}_{\eta }^{*}$ is an (η+ε)-optimal solution to (DG) and there is (η+ε)-duality gap for (PG) and (DG).

Analogously one can employ Theorem 4.7 in order to achieve ε-optimality conditions for (PG) and (DG), as follows.

Theorem 4.13. (a) Assuming that the regularity condition (21) is fulfilled and that $\overline{x}\in X$ is an ε-optimal solution to (PG), there exists a ȳ* ∈ Y* such that $\left(0,{ȳ}^{*}\right)\in {\partial }_{\epsilon }\Phi \left(\overline{x},0\right)$, i.e., $\Phi \left(\overline{x},0\right)+{\Phi }^{*}\left(0,{ȳ}^{*}\right)\le \epsilon$. Moreover, ȳ* is an ε-optimal solution to (DG).

(b) If $\overline{x}\in X$ and ȳ* ∈ Y* fulfill $\left(0,{ȳ}^{*}\right)\in {\partial }_{\epsilon }\Phi \left(\overline{x},0\right)$, then $\overline{x}$ is an ε-optimal solution to (PG), ȳ* an ε-optimal solution to (DG) and there is ε-duality gap for (PG) and (DG).

Remark 23. The other statements given in this subsection can be employed for delivering ε-optimality conditions for (PG) and (DG), too. Taking ε = 0 in Theorem 4.13 or in the corresponding statements following from Theorem 4.5, Theorem 4.6 or Theorem 4.10 one rediscovers the optimality condition given in Corollary 3.6.

In the following we particularize the primal problem to be constrained and unconstrained, respectively, as done in the previous section, too.

### 4.1. Constrained Scalar Optimization Problems

Consider again the framework of Section 3.1. Using first the Lagrange perturbation function ΦL, one obtains from Theorem 4.4 the following statement where a subdifferential inclusion characterizes a situation of ε-duality gap for (PC) and (DCL).

Theorem 4.14. Let x ∈ X. Then

$∂ε(f+δA)(x)=⋂η>0⋃z*∈C*∂ε+η+(z*h)(x)(f+δS+(z*h))(x)$

if and only if for each ${x}^{*}\in {\partial }_{\epsilon }\left(f+{\delta }_{{A}}\right)\left(x\right)$ one has

$f(x)-〈x*,x〉≤supz*∈C*[-(f+(z*h))S*(x*)]+ε.$

Analogously one can particularize the other statements involving the (ε-)subdifferential of Φ(·, 0)(x) to the present framework, too. For instance, Theorem 4.7 turns into the following assertion.

Theorem 4.15. Let x ∈ X. Then

$∂ε(f+δA)(x)=⋃z*∈C*∂ε+(z*h)(x)(f+δS+(z*h))(x)$

if and only if for each ${x}^{*}\in {\partial }_{\epsilon }\left(f+{\delta }_{{A}}\right)\left(x\right)$ there exists a ${\overline{z}}^{*}\in {C}^{*}$ such that

$f(x)-〈x*,x〉≤-(f+(z¯*h))S*(x*)+ε.$
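The formula in Theorem 4.15 can be checked by hand in the toy instance f(x) = x², S = ℝ, h(x) = 1 − x, C = [0, ∞) (our own example, not from the paper) at x = 1 with ε = 0: ∂(f + δA)(1) = (−∞, 2], while ∂(f + (z*h))(1) = {2 − z*} for z* ≥ 0 (note (z*h)(1) = 0), whose union over z* ≥ 0 is again (−∞, 2].

```python
import numpy as np

# Hypothetical toy check of the subdifferential formula at x = 1, eps = 0:
# f(x) = x^2, A = [1, inf), so x* lies in d(f + delta_A)(1) iff
# x^2 >= 1 + x*(x - 1) for all x >= 1, which holds iff x* <= 2.
xs_A = np.linspace(1.0, 100.0, 100001)

def in_subdiff_constrained(xstar):
    return bool(np.all(xs_A * xs_A >= 1.0 + xstar * (xs_A - 1.0)))

for xstar in (-5.0, 0.0, 2.0):
    assert in_subdiff_constrained(xstar)
    zstar = 2.0 - xstar                 # multiplier realizing x* = 2 - z*
    assert zstar >= 0.0                 # so x* also lies in the union side
assert not in_subdiff_constrained(2.5)  # values above 2 are excluded
```

Both sides of the formula thus coincide in this instance, as predicted by the convexity and closedness hypotheses of the subsequent theorems.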

Adding convexity and topological hypotheses to the functions and sets involved, one obtains the following consequences of Theorem 4.8 and Theorem 4.10, respectively.

Theorem 4.16. Let S be a closed and convex set, f a convex and lower semicontinuous function and h a C-convex and C-epi-closed vector function. The formula

$\partial_{\nu}(f+\delta_{A})(x)=\bigcap_{\eta>0}\bigcup_{z^{*}\in C^{*}}\partial_{\nu+\eta+(z^{*}h)(x)}\left(f+\delta_{S}+(z^{*}h)\right)(x)$

is valid for all $x \in X$ and all $\nu > 0$ if and only if the function $\inf_{z^{*}\in C^{*}}\left(f+(z^{*}h)\right)_{S}^{*}$ is $\omega(X^{*},X)$-lower semicontinuous.

Theorem 4.17. Let S be a closed and convex set, f a convex and lower semicontinuous function and h a C-convex and C-epi-closed vector function. The formula

$\partial_{\nu}(f+\delta_{A})(x)=\bigcup_{z^{*}\in C^{*}}\partial_{\nu+(z^{*}h)(x)}\left(f+\delta_{S}+(z^{*}h)\right)(x)$

is valid for all $x \in X$ and all $\nu > 0$ if and only if the set $\bigcup_{z^{*}\in C^{*}}\operatorname{epi}\left(f+(z^{*}h)+\delta_{S}\right)^{*}$ is closed in the topology $\omega(X^{*},X)\times\mathbb{R}$.

The other perturbation function we employed to assign a conjugate dual problem to (PC) as a special case of (DG) is ΦFL. The statements particularized above for ΦL become in this case the following ones.

Theorem 4.18. Let $x \in X$. Then

$\partial_{\epsilon}(f+\delta_{A})(x)=\bigcap_{\eta>0}\bigcup_{\substack{z^{*}\in C^{*},\,\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\epsilon+\eta+(z^{*}h)(x)}}\left(\partial_{\epsilon_{1}}f(x)+\partial_{\epsilon_{2}}\left((z^{*}h)+\delta_{S}\right)(x)\right)$

if and only if for each $x^{*}\in \partial_{\epsilon}(f+\delta_{A})(x)$ one has

$f(x)-\langle x^{*},x\rangle \le \sup_{z^{*}\in C^{*},\,y^{*}\in X^{*}}\left\{-f^{*}(y^{*})-(z^{*}h)_{S}^{*}(x^{*}-y^{*})\right\}+\epsilon.$

Theorem 4.19. Let $x \in X$. Then

$\partial_{\epsilon}(f+\delta_{A})(x)=\bigcup_{\substack{z^{*}\in C^{*},\,\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\epsilon+(z^{*}h)(x)}}\left(\partial_{\epsilon_{1}}f(x)+\partial_{\epsilon_{2}}\left((z^{*}h)+\delta_{S}\right)(x)\right)$

if and only if for each $x^{*}\in \partial_{\epsilon}(f+\delta_{A})(x)$ there exist $z^{*}\in C^{*}$ and $y^{*}\in X^{*}$ such that

$f(x)-\langle x^{*},x\rangle \le -f^{*}(y^{*})-(z^{*}h)_{S}^{*}(x^{*}-y^{*})+\epsilon.$
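The Fenchel-Lagrange bound of Theorem 4.18 can also be checked numerically. In the following minimal Python sketch (our own toy illustration, not from the paper) we reuse the instance $f(x)=x^{2}$, $h(x)=x$, $S=\mathbb{R}$, $C=\mathbb{R}_{+}$; since $S=\mathbb{R}$, the conjugate $(z^{*}h)_{S}^{*}(u)=\sup_{x}(u-z^{*})x$ equals 0 if $u=z^{*}$ and $+\infty$ otherwise, so the only useful choice of $y^{*}$ is $y^{*}=x^{*}-z^{*}$ and the bound reduces to $\sup_{z^{*}\ge 0}\{-f^{*}(x^{*}-z^{*})\}$:

```python
import numpy as np

# Same toy (PC): f(x) = x^2, h(x) = x, S = R, C = R_+, so A = {x <= 0}.
# With S = R, (z* h)_S^*(u) is 0 if u = z* and +infinity otherwise,
# hence the supremum over y* is realized at y* = x* - z*.
fstar = lambda u: u**2 / 4.0          # conjugate of f(x) = x^2

x, eps = 0.0, 0.0                     # the constrained minimizer of (PC)
zgrid = np.linspace(0.0, 20.0, 2001)  # discretization of C* = R_+
for xstar in [0.0, 1.0, 3.0]:         # subgradients of f + delta_A at 0
    lhs = x**2 - xstar * x            # f(x) - <x*, x>
    rhs = np.max(-fstar(xstar - zgrid))
    assert lhs <= rhs + eps + 1e-9    # the eps-duality-gap inequality
```

The maximum on the right is again 0, attained at $z^{*}=x^{*}$, so the inequality holds as an equality on this instance.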

Theorem 4.20. Let S be a closed and convex set, f a convex and lower semicontinuous function and h a C-convex and C-epi-closed vector function. The formula

$\partial_{\nu}(f+\delta_{A})(x)=\bigcap_{\eta>0}\bigcup_{\substack{z^{*}\in C^{*},\,\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\nu+\eta+(z^{*}h)(x)}}\left(\partial_{\epsilon_{1}}f(x)+\partial_{\epsilon_{2}}\left((z^{*}h)+\delta_{S}\right)(x)\right)$

is valid for all $x \in X$ and all $\nu > 0$ if and only if the function $\inf_{z^{*}\in C^{*}}\left(f^{*}\,\Box\,(z^{*}h)_{S}^{*}\right)$ is $\omega(X^{*},X)$-lower semicontinuous.

Theorem 4.21. Let S be a closed and convex set, f a convex and lower semicontinuous function and h a C-convex and C-epi-closed vector function. The formula

$\partial_{\nu}(f+\delta_{A})(x)=\bigcup_{\substack{z^{*}\in C^{*},\,\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\nu+(z^{*}h)(x)}}\left(\partial_{\epsilon_{1}}f(x)+\partial_{\epsilon_{2}}\left((z^{*}h)+\delta_{S}\right)(x)\right)$

is valid for all $x \in X$ and all $\nu > 0$ if and only if the set $\operatorname{epi}f^{*}+\bigcup_{z^{*}\in C^{*}}\operatorname{epi}(z^{*}h)_{S}^{*}$ is closed in the topology $\omega(X^{*},X)\times\mathbb{R}$.

Analogously one can particularize the other statements involving the (ε-)subdifferential of Φ(·, 0)(x) for ΦFL, too.

### 4.2. Unconstrained Scalar Optimization Problems

Consider now the framework of Section 3.2. From Theorem 4.4 one obtains the following statement where a subdifferential inclusion characterizes a situation of ε-duality gap for (PU) and (DU).

Theorem 4.22. Let $x \in X$. Then

$\partial_{\epsilon}(f+g\circ A)(x)=\bigcap_{\eta>0}\bigcup_{\substack{\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\epsilon+\eta}}\left(\partial_{\epsilon_{1}}f(x)+A^{*}\partial_{\epsilon_{2}}g(Ax)\right)$

if and only if for each $x^{*}\in \partial_{\epsilon}(f+g\circ A)(x)$ one has

$f(x)+g(Ax)-\langle x^{*},x\rangle \le \sup_{y^{*}\in Y^{*}}\left\{-f^{*}(x^{*}-A^{*}y^{*})-g^{*}(y^{*})\right\}+\epsilon.$

Further, Theorem 4.7 turns into the following assertion.

Theorem 4.23. Let $x \in X$. Then

$\partial_{\epsilon}(f+g\circ A)(x)=\bigcup_{\substack{\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\epsilon}}\left(\partial_{\epsilon_{1}}f(x)+A^{*}\partial_{\epsilon_{2}}g(Ax)\right)$

if and only if for each $x^{*}\in \partial_{\epsilon}(f+g\circ A)(x)$ there exists a $y^{*}\in Y^{*}$ such that

$f(x)+g(Ax)-\langle x^{*},x\rangle \le -f^{*}(x^{*}-A^{*}y^{*})-g^{*}(y^{*})+\epsilon.$
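As a sanity check, the following minimal Python sketch (our own toy illustration, not from the paper) verifies the classical Fenchel dual bound $f(x)+g(Ax)-\langle x^{*},x\rangle \le \sup_{y^{*}\in Y^{*}}\{-f^{*}(x^{*}-A^{*}y^{*})-g^{*}(y^{*})\}+\epsilon$ on a one-dimensional instance where the conjugates are available in closed form:

```python
import numpy as np

# Toy instance of (PU), our own illustration: X = Y = R, f(x) = x^2,
# g(y) = y^2 and Ax = 2x, so f + g o A = 5 x^2.  The classical Fenchel
# conjugates are f*(u) = u^2/4 and g*(v) = v^2/4.
fstar = lambda u: u**2 / 4.0
gstar = lambda v: v**2 / 4.0
A = Astar = 2.0                       # A is self-adjoint here

x, eps = 1.0, 0.0
xstar = 10.0                          # the unique subgradient of 5x^2 at x = 1
lhs = x**2 + (A * x)**2 - xstar * x   # f(x) + g(Ax) - <x*, x> = -5

ygrid = np.linspace(-50.0, 50.0, 100001)   # discretization of Y* = R
rhs = np.max(-fstar(xstar - Astar * ygrid) - gstar(ygrid))
assert lhs <= rhs + eps + 1e-9        # attained with equality at y* = 4
```

Both sides equal −5 here (the supremum is attained at $y^{*}=4$), illustrating the exact case ε = 0.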

Adding convexity and topological hypotheses to the functions and sets involved, one obtains the following consequences of Theorem 4.8 and Theorem 4.10, respectively.

Theorem 4.24. Let the functions f and g be also convex and lower semicontinuous. The formula

$\partial_{\nu}(f+g\circ A)(x)=\bigcap_{\eta>0}\bigcup_{\substack{\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\nu+\eta}}\left(\partial_{\epsilon_{1}}f(x)+A^{*}\partial_{\epsilon_{2}}g(Ax)\right)$

is valid for all $x \in X$ and all $\nu > 0$ if and only if the function $\inf_{y^{*}\in Y^{*}}\left[f^{*}(\cdot-A^{*}y^{*})+g^{*}(y^{*})\right]$ is $\omega(X^{*},X)$-lower semicontinuous.

Theorem 4.25. Let the functions f and g be also convex and lower semicontinuous. The formula

$\partial_{\nu}(f+g\circ A)(x)=\bigcup_{\substack{\epsilon_{1},\epsilon_{2}\ge 0,\\ \epsilon_{1}+\epsilon_{2}=\nu}}\left(\partial_{\epsilon_{1}}f(x)+A^{*}\partial_{\epsilon_{2}}g(Ax)\right)$

is valid for all $x \in X$ and all $\nu > 0$ if and only if the set $\operatorname{epi}f^{*}+(A^{*}\times\operatorname{id}_{\mathbb{R}})(\operatorname{epi}g^{*})$ is closed in the topology $\omega(X^{*},X)\times\mathbb{R}$.

Analogously one can particularize the other statements involving the (ε-)subdifferential of Φ(·, 0)(x) to the present framework, too. Moreover, one can see (PC) as an unconstrained optimization problem like in Remark 17 and the corresponding counterparts of the statements given above can be formulated for it, too.

Remark 24. Particularizing the results provided in this section for constrained and unconstrained convex optimization problems (and adding where necessary additional hypotheses) one can rediscover various statements from Jeyakumar et al. , Boţ et al. [38, 39]. Moreover, the classical Basic Constraint Qualification (see for instance, ) and the Farkas-Minkowski Constraint Qualification (cf. ) prove to be special instances of the regularity conditions presented in this section.

Remark 25. From the statements provided in Sections 3.1 and 3.2 one can derive ε-optimality conditions for (PC) and (PU) and their dual problems, as done in the general case in Theorem 4.12 and Theorem 4.13.

Remark 26. More characterizations of (stable) ε-duality gap and strong/total duality statements via epigraph and/or subdifferential inclusions similar to the ones provided within this chapter for constrained optimization problems can be found in Boncea and Grad , while in Boncea and Grad  the same kind of assertions are delivered for unconstrained composed optimization problems (see also ). Moreover, in Boţ and Grad  we have provided equivalent characterizations of zero duality gap and stable strong duality via epigraph inclusions for constrained and unconstrained, as well as for composed optimization problems with the involved functions taken convex.

## 5. Conclusions, Remarks and Further Directions of Research

The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and other areas where they were successfully applied. In this survey paper we have deconstructed and reconstructed some closedness type regularity conditions formulated by means of epigraphs and (ε-)subdifferentials, respectively, for general optimization problems, showing thus that they arise naturally when dealing with such problems. Some of the general results were particularized for constrained and unconstrained convex optimization problems, respectively.

Closedness type regularity conditions were employed by different authors in other related research fields, too, like subdifferential calculus (e.g., by [10, 18, 29, 50–52]), DC programming (e.g., by [51, 53–57]), generalized convex optimization (e.g., by ), semi-infinite programming (e.g., by [15, 16, 50, 53, 61–64]), semidefinite programming (e.g., by [43, 45, 46, 65]), robust optimization (e.g., by [63, 66, 67]), location optimization (e.g., by ), vector optimization (e.g., by [10, 29, 63, 64, 69]), monotone operators ([17, 70–78]), machine learning () or variational inequalities (), and the list is far from complete.

Other possible immediate research fields where we believe that the closedness type regularity conditions may prove useful are bilevel optimization (possibly via the Fenchel-Lagrange approach of ), error bounds (maybe via the approach of ), equilibrium problems (possibly via variational inequalities, inspired by [80, 82, 83]) and even numerical optimization (e.g., for primal-dual algorithms, by guaranteeing strong duality).

## Author Contributions

The author S-MG worked alone on this review paper, which also contains results obtained with or by other authors; these have been properly cited.

## Funding

Research partially supported by DFG (German Research Foundation), projects WA 922/8-1 and GR3367/4-1.

## Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Acknowledgments

The author is thankful to two referees for comments and suggestions that have contributed to an improved presentation of the paper.

## References

1. Precupanu T. Closedness conditions for the optimality of a family of nonconvex optimization problems. Math Operationsforsch Stat Ser Optim. (1984) 15:339–46.

2. Burachik RS, Jeyakumar V. A simple closure condition for the normal cone intersection formula. Proc Am Math Soc. (2005) 133:1741–8. doi: 10.1090/S0002-9939-04-07844-X

3. Burachik RS, Jeyakumar V. A dual condition for the convex subdifferential sum formula with applications. J Convex Anal. (2005) 12:279–90.

4. Burachik RS, Jeyakumar V, Wu ZY. Necessary and sufficient conditions for stable conjugate duality. Nonlinear Anal Theory Methods Appl. (2006) 64:1998–2006. doi: 10.1016/j.na.2005.07.034

5. Burachik RS, Jeyakumar V. A new geometric condition for Fenchel's duality in infinite dimensional spaces. Math Progr. (2005) 104:229–33. doi: 10.1007/s10107-005-0614-3

6. Boţ RI, Wanka G. A weaker regularity condition for subdifferential calculus and Fenchel duality in infinite dimensional spaces. Nonlinear Anal Theory Methods Appl. (2006) 64:2787–804. doi: 10.1016/j.na.2005.09.017

7. Boţ RI, Wanka G. An alternative formulation for a new closed cone constraint qualification. Nonlinear Anal. Theory Methods Appl. (2006) 64:1367–81. doi: 10.1016/j.na.2005.06.041

8. Jeyakumar V, Dinh N, Lee GM. A New Closed Cone Constraint Qualification for Convex Optimization. Applied Mathematics Report 04/8, University of New South Wales (2004).

9. Jeyakumar V, Song W, Dinh N, Lee GM. Stable Strong Duality in Convex Optimization. Applied Mathematics Report 05/22, University of New South Wales (2005).

10. Grad SM. Vector Optimization and Monotone Operators via Convex Duality. Berlin; Heidelberg: Springer-Verlag (2015).

11. Hiriart-Urruty JB, Lemaréchal C. Convex Analysis and Minimization Algorithms I: Fundamentals. Grundlehren Math. Wiss. Vol. 305. Berlin: Springer (1993).

12. Tiba D, Zălinescu C. On the necessity of some constraint qualification conditions in convex programming. J Convex Anal. (2004) 11:95–110.

13. Hu H. Characterizations of the strong basic constraint qualifications. Math Oper Res. (2005) 30:956–65. doi: 10.1287/moor.1050.0154

14. Goberna MA, López MA, Pastor J. Farkas-Minkowski systems in semi-infinite programming. Appl Math Optim. (1981) 7:295–308.

15. Dinh N, Goberna MA, López MA, Son TQ. New Farkas-type constraint qualifications in convex infinite programming. ESAIM Control Optim Calc Var. (2007) 13:580–97. doi: 10.1051/cocv:2007027

16. Goberna MA, Jeyakumar V, López MA. Necessary and sufficient constraint qualifications for solvability of systems of infinite convex inequalities. Nonlinear Anal Theory Methods Appl. (2008) 68:1184–94. doi: 10.1016/j.na.2006.12.014

17. Boţ RI, Grad SM, Wanka G. Maximal monotonicity for the precomposition with a linear operator. SIAM J Optim. (2007) 17:1239–52. doi: 10.1137/050641491

18. Boţ RI, Grad SM, Wanka G. A new constraint qualification for the formula of the subdifferential of composed convex functions in infinite dimensional spaces. Math Nachr. (2008) 281:1088–107. doi: 10.1002/mana.200510662

19. Boţ RI. Conjugate Duality in Convex Optimization. Lecture Notes in Economics and Mathematical Systems, Vol. 637. Berlin; Heidelberg: Springer (2010). doi: 10.1007/978-3-642-04900-2

20. Boţ RI, Grad SM, Wanka G. New regularity conditions for Lagrange and Fenchel-Lagrange duality in infinite dimensional spaces. Math Inequal Appl. (2009) 12:171–89.

21. Boncea HV, Grad SM. Characterizations of ε-duality gap statements for composed optimization problems. Nonlinear Anal. Theory Methods Appl. (2013) 92:96–107. doi: 10.1016/j.na.2013.07.004

22. Boncea HV, Grad SM. Characterizations of ε-duality gap statements for constrained optimization problems. Cent Eur J Math. (2013) 11:2020–33. doi: 10.2478/s11533-013-0294-9

23. Anbo Y. Nonstandard arguments and the characterization of independence in generic structures. RIMS Kôkyûroku (2009) 1646:4–17. Available online at: http://hdl.handle.net/2433/140691

24. Friedman HM. A way out. In: Link G, editor. One Hundred Years of Russell's Paradox. Berlin; New York, NY: Walter de Gruyter (2004). p. 49–86.

25. Rubinov AM, Glover BM. Quasiconvexity via two step functions. In: Crouzeix JP, Martínez Legaz JE, Volle M, editors. Generalized Convexity, Generalized Monotonicity: Recent Results, Nonconvex Optimization and Its Applications. Vol. 27. Dordrecht: Kluwer (1998). p. 159–183.

26. Rockafellar RT. Convex Analysis. Princeton, NJ: Princeton University Press (1970).

27. Zălinescu C. Convex Analysis in General Vector Spaces. Singapore: World Scientific (2002).

28. Ekeland I, Temam RM. Convex Analysis and Variational Problems. Amsterdam: North-Holland Publishing Company (1976).

29. Boţ RI, Grad SM, Wanka G. Duality in Vector Optimization. Berlin; Heidelberg: Springer (2009).

30. Boţ RI, Grad SM. Wolfe duality and Mond-Weir duality via perturbations. Nonlinear Anal Theory Methods Appl. (2010) 73:374–84. doi: 10.1016/j.na.2010.03.026

31. Grad SM, Pop EL. Alternative generalized Wolfe type and Mond-Weir type vector duality. J Nonlinear Convex Anal. (2014) 15:867–84.

32. Boţ RI, Grad SM. Extending the classical vector Wolfe and Mond-Weir duality concepts via perturbations. J Nonlinear Convex Anal. (2011) 12:81–101.

33. Boţ RI, Grad SM, Wanka G. Fenchel-Lagrange versus geometric programming in convex optimization. J Optim Theory Appl. (2006) 129:33–54. doi: 10.1007/s10957-006-9047-2

34. Hiriart-Urruty JB. ε-subdifferential calculus. In: Aubin JP, Vinter RB, editors. Convex Analysis and Optimization, Pitman Research Notes in Mathematics Series, Vol. 57. Boston, MA: Pitman (1982). p. 43–92.

35. Boţ RI, Grad SM. Lower semicontinuous type regularity conditions for subdifferential calculus. Optim Methods Softw. (2010) 25:37–48. doi: 10.1080/10556780903208977

36. Boţ RI, Grad SM. Regularity conditions for formulae of biconjugate functions. Taiwanese J Math. (2008) 12:1921–42.

37. Grad SM, Wanka G. On biconjugates of infimal functions. Optimization (2015) 64:1759–75. doi: 10.1080/02331934.2015.1046873

38. Boţ RI, Grad SM, Wanka G. New regularity conditions for strong and total Fenchel-Lagrange duality in infinite dimensional spaces. Nonlinear Anal Theory Methods Appl. (2008) 69:323–36. doi: 10.1016/j.na.2007.05.021

39. Boţ RI, Grad SM, Wanka G. On strong and total Lagrange duality for convex optimization problems. J Math Anal Appl. (2008) 337:1315–25. doi: 10.1016/j.jmaa.2007.04.071

40. Boţ RI, Grad SM, Wanka G. Generalized Moreau-Rockafellar results for composed convex functions. Optimization (2009) 58:917–33. doi: 10.1080/02331930902945082

41. Boţ RI, Wanka G. Farkas-type results with conjugate functions. SIAM J Optim. (2005) 15:540–54. doi: 10.1137/030602332

42. Dinh N, Jeyakumar V. Farkas' lemma: three decades of generalizations for mathematical optimization. TOP (2014) 22:1–22. doi: 10.1007/s11750-014-0319-y

43. Kim GS, Lee GM. On ε-approximate solutions for convex semidefinite optimization problems. Taiwanese J Math. (2007) 11:765–84.

44. Tidball MM, Pourtallier O, Altman E. Approximations in dynamic zero-sum games. SIAM J Optim. (2006) 35:2101–17. doi: 10.1137/S0363012994272460

45. Jeyakumar V, Li GY. New dual constraint qualifications characterizing zero duality gaps of convex programs and semidefinite programs. Nonlinear Anal Theory Methods Appl. (2009) 71:2239–49. doi: 10.1016/j.na.2009.05.009

46. Jeyakumar V, Li GY. Stable zero duality gaps in convex programming: complete dual characterizations with applications to semidefinite programs. J Math Anal Appl. (2009) 360:156–67. doi: 10.1016/j.jmaa.2009.06.043

47. Boţ RI, Grad SM, Wanka G. Fenchel's duality theorem for nearly convex functions. J Optim Theory Appl. (2007) 132:509–15. doi: 10.1007/s10957-007-9234-9

48. Boţ RI, Grad SM, Wanka G. Almost convex functions: conjugacy and duality. In: Konnov IV, Luc DT, Rubinov AM, editors. Generalized Convexity and Related Topics, Lecture Notes in Economics and Mathematical Systems, Vol. 583. Berlin: Springer (2007). p. 101–114. doi: 10.1007/978-3-540-37007-9_5

49. Boţ RI, Grad SM, Wanka G. New constraint qualification and conjugate duality for composed convex optimization problems. J Optim Theory Appl. (2007) 135:241–55. doi: 10.1007/s10957-007-9247-4

50. Mordukhovich BS, Nghia TTA. Constraint qualifications and optimality conditions for nonconvex semi-infinite and infinite programs. Math Program (2013) 139:271–300. doi: 10.1007/s10107-013-0672-x

51. Dinh N, Mordukhovich BS, Nghia TTA. Subdifferentials of value functions and optimality conditions for DC and bilevel infinite and semi-infinite programs. Math Program (2010) 123:101–38. doi: 10.1007/s10107-009-0323-4

52. Correa R, Hantoute A, Jourani A. Characterizations of convex approximate subdifferential calculus in Banach spaces. Trans Am Math Soc. (2016) 368:4831–54. doi: 10.1090/tran/6589

53. Dinh N, Mordukhovich BS, Nghia TTA. Qualification and optimality conditions for DC programs with infinite constraints. Acta Math Vietnam (2009) 34:125–55.

54. Dinh N, Nghia TTA, Vallet G. A closedness condition and its applications to DC programs with convex constraints. Optimization (2010) 59:541–60. doi: 10.1080/02331930801951348

55. Dinh N, Nghia TTA, Vallet G. Farkas-type results and duality for DC programs with convex constraints. J Convex Anal. (2008) 15:235–62.

56. Sun XK, Guo XL, Zeng J. Necessary optimality conditions for DC infinite programs with inequality constraints. J Nonlinear Sci Appl. (2016) 9:617–26.

57. Fang DH, Li C, Yang XQ. Stable and total Fenchel duality for DC optimization problems in locally convex spaces. SIAM J Optim. (2011) 21:730–60. doi: 10.1137/100789749

58. Fajardo MD, Vidal J. Stable strong Fenchel and Lagrange duality for evenly convex optimization problems. Optimization (2016) 65:1675–91. doi: 10.1080/02331934.2016.1167207

59. Volle M, Martínez-Legaz JE, Vicente-Pérez J. Duality for closed convex functions and evenly convex functions. J Optim Theory Appl. (2015) 167:985–97. doi: 10.1007/s10957-013-0395-4

60. Martínez-Legaz JE, Vicente-Pérez J. The e-support function of an e-convex set and conjugacy for e-convex functions. J Math Anal Appl. (2011) 376:602–12. doi: 10.1016/j.jmaa.2010.10.058

61. Fang DH, Li C, Ng KF. Constraint qualifications for extended Farkas's lemmas and Lagrangian dualities in convex infinite programming. SIAM J Optim. (2009) 20:1311–32. doi: 10.1137/080739124

62. Fang DH, Li C, Ng KF. Constraint qualifications for optimality conditions and total Lagrange dualities in convex infinite programming. Nonlin Anal. (2010) 73:1143–59. doi: 10.1016/j.na.2010.04.020

63. Goberna MA, Jeyakumar V, Li G, Vicente-Pérez J. Robust solutions of multiobjective linear semi-infinite programs under constraint data uncertainty. SIAM J Optim. (2014) 24:1402–19. doi: 10.1137/130939596

64. Goberna MA, Guerra-Vazquez F, Todorov MI. Constraint qualifications in linear vector semi-infinite optimization. Eur J Oper Res. (2013) 227:12–21. doi: 10.1016/j.ejor.2012.09.006

65. Jeyakumar V. A note on strong duality in convex semidefinite optimization: necessary and sufficient conditions. Optim Lett. (2008) 2:15–25. doi: 10.1007/s11590-006-0038-x

66. Boţ RI, Jeyakumar V, Li G. Robust duality in parametric convex optimization. Set Valued Var Anal. (2013) 21:177–89. doi: 10.1007/s11228-012-0219-y

67. Wang M, Fang D, Chen Z. Strong and total Fenchel dualities for robust convex optimization problems. J Inequal Appl. (2015) 2015:70. doi: 10.1186/s13660-015-0592-9

68. Wanka G, Wilfer O. Duality Results for Extended Multifacility Location Problems. Preprint, 2016–05. Chemnitz: Chemnitz University of Technology (2016).

69. Goberna MA, Guerra-Vazquez F, Todorov MI. Constraint qualifications in convex vector semi-infinite optimization. Eur J Oper Res. (2016) 249:32–40. doi: 10.1016/j.ejor.2015.08.062

70. Boţ RI, Grad SM, Wanka G. Weaker constraint qualifications in maximal monotonicity. Numer Funct Anal Optim. (2007) 28:27–41. doi: 10.1080/01630560701190224

71. Boţ RI, Grad SM, Wanka G. A new regularity condition for subdifferential calculus and Fenchel duality in infinite dimensional spaces. Applications for maximal monotone operators. In: Castellani G, editor. Seminario Mario Volpato, 3. Venice: Ca'Foscari University of Venice (2007). p. 16–30.

72. Boţ RI, Csetnek ER, Wanka G. A new condition for maximal monotonicity via representative functions. Nonlinear Anal Theory Methods Appl. (2007) 67:2390–402. doi: 10.1016/j.na.2006.09.006

73. Boţ RI, László SC. On the generalized parallel sum of two maximal monotone operators of Gossez type (D). J Math Anal Appl. (2012) 391:82–98. doi: 10.1016/j.jmaa.2012.02.030

74. Jeyakumar V, Wu ZY. A dual criterion for maximal monotonicity of composition operators. Set Valued Var Anal. (2007) 15:265–73. doi: 10.1007/s11228-006-0025-5

75. Csetnek ER. Overcoming the Failure of the Classical Generalized Interior-point Regularity Conditions in Convex Optimization. Applications of the Duality Theory to Enlargements of Maximal Monotone Operators. Berlin: Logos-Verlag (2010).

76. Boţ RI, Grad SM. Closedness type regularity conditions for surjectivity results involving the sum of two maximal monotone operators. Cent Eur J Math. (2011) 9:162–72. doi: 10.2478/s11533-010-0083-7

77. Boţ RI, Grad SM, Wanka G. Brézis-Haraux-type approximation in nonreflexive Banach spaces. In: Allevi E, Bertocchi M, Gnudi A, Konnov IV, editors. Nonlinear Analysis with Applications in Economics, Energy and Transportation Bergamo: Bergamo University Press (2007). p. 155–170.

78. Boţ RI, Grad SM, Wanka G. Brézis-Haraux-type approximation of the range of a monotone operator composed with a linear mapping. In: Kása Z, Kassay G, Kolumbán J, editors. Proceedings of the International Conference In Memoriam Gyula Farkas. Cluj-Napoca: Cluj University Press (2006). p. 36–49.

79. Boţ RI, Heinrich A. Regression tasks in machine learning via Fenchel duality. Ann Oper Res. (2014) 222:197–211. doi: 10.1007/s10479-012-1304-1

80. Altangerel L, Boţ RI, Wanka G. On gap functions for equilibrium problems via Fenchel duality. Pac J Optim. (2006) 2:667–78.

81. Altangerel L, Battur G. Perturbation approach to generalized Nash equilibrium problems with shared constraints. Optim Lett. (2012) 6:1379–91. doi: 10.1007/s11590-012-0510-8

82. Cioban L, Csetnek ER. Duality for ε-variational inequalities via the subdifferential calculus. Nonlinear Anal. (2012) 75:3142–56. doi: 10.1016/j.na.2011.12.012

83. Cioban L, Csetnek ER. Revisiting the construction of gap functions for variational inequalities and equilibrium problems via conjugate duality. Cent Eur J Math. (2013) 11:829–50. doi: 10.2478/s11533-012-0151-2

84. Aboussoror A, Adly S. A Fenchel-Lagrange duality approach for a bilevel programming problem with extremal-value function. J Optim Theory Appl. (2011) 149:254–68. doi: 10.1007/s10957-011-9831-5

85. Boţ RI, Csetnek ER. Error bound results for convex inequality systems via conjugate duality. Top (2012) 20:296–309. doi: 10.1007/s11750-011-0187-7

Keywords: convex optimization, duality, closedness type regularity conditions, conjugate functions, epigraphs, subdifferentials

AMS mathematics subject classification. 90C25, 26A51, 90C46.

Citation: Grad S-M (2016) Closedness Type Regularity Conditions in Convex Optimization and Beyond. Front. Appl. Math. Stat. 2:14. doi: 10.3389/fams.2016.00014

Received: 25 June 2016; Accepted: 31 August 2016;
Published: 16 September 2016.

Edited by:

Daniel Toader Onofrei, University of Houston, USA

Reviewed by:

Vladimir Shikhman, Université Catholique de Louvain, Belgium
Wim Van Ackooij, EDF, France

Copyright © 2016 Grad. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.