1. Speaker: Prof. Radu Ioan Bot, University of Vienna, Austria
Title: A General Double-Proximal Gradient Algorithm for D.C. Programming
The possibilities for exploiting the special structure of d.c. programs, which
consist in minimizing the difference of two convex functions, are currently
more or less limited to variants of the DCA method. These methods assume that the convex part, the concave part, or both are evaluated via one of their subgradients.
In this talk we propose an algorithm that allows both the convex and the
concave part to be evaluated via their proximal points. Additionally, we
allow a smooth part, which is evaluated via its gradient.
For this algorithm we show that every cluster point is a solution of the
optimization problem. Furthermore, we show the connection to the Toland
dual problem and prove a descent property for the objective function values
of a primal-dual formulation of the problem. Convergence of the iterates is
shown if this objective function satisfies the Kurdyka-Łojasiewicz property.
In the last part, we apply the algorithm to an image processing model.
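To make the primal-dual structure concrete, the following is a minimal numerical sketch of a double-proximal scheme of this flavor on a toy d.c. problem. The concrete functions, step sizes, and update rule are illustrative assumptions, not necessarily the exact method of the talk:

```python
import numpy as np

# Toy d.c. problem:  min_x  g(x) + f(x) - h(x)  with
#   g(x) = lam * ||x||_1        (nonsmooth convex; prox = soft-thresholding)
#   f(x) = 0.5 * ||x - b||^2    (smooth part, evaluated via its gradient)
#   h(x) = 0.5 * rho * ||x||^2  (the "concave part", evaluated via a prox)
lam, rho = 0.1, 0.5
b = np.array([1.0, -2.0, 0.3])

def soft(z, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_h_conj(z, mu):
    """Prox of mu*h^* for h = 0.5*rho*||.||^2, where h^*(y) = ||y||^2/(2*rho)."""
    return rho * z / (rho + mu)

gamma, mu = 0.5, 1.0          # step sizes (assumed)
x = b.copy()
y = np.zeros_like(x)
for _ in range(500):
    # primal step: prox of g applied to a gradient step on f, shifted by the dual y
    x = soft(x - gamma * (x - b) + gamma * y, gamma * lam)
    # dual step: prox of h^*, i.e. the concave part is handled by a proximal map
    y = prox_h_conj(y + mu * x, mu)

# At a fixed point, y is a subgradient of h at x and x is critical for the
# d.c. problem.  For this toy instance the critical point is known in closed
# form: x_i = 2*(b_i - lam*sign(b_i)) whenever |b_i| > lam.
print(x)   # ≈ [ 1.8  -3.8   0.4]
```

A fixed point (x, y) of these two updates satisfies y ∈ ∂h(x) and 0 ∈ ∂g(x) + ∇f(x) − y, i.e. x is a critical point of the d.c. objective, which is the kind of cluster-point guarantee the abstract refers to.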
2. Speaker: Prof. Panos M. Pardalos, University of Florida, USA
Title: Data Uncertainty and Robust Machine Learning Optimization Models
This talk presents robust chance-constrained support vector machines (SVMs) with second-order moment information and obtains equivalent semidefinite programming (SDP) and second-order cone programming (SOCP) reformulations. Three types of estimation errors for the mean and covariance matrix are considered, and the corresponding formulations and techniques for handling each type of error are presented. A method based on stochastic gradient descent is proposed to solve robust chance-constrained SVMs with large-scale data.
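As a rough illustration of the stochastic-gradient angle: a Chebyshev-style chance constraint with known mean and covariance inflates the classification margin by a term proportional to sqrt(w' Σ w). The sketch below folds such a term into a hinge-type loss and minimizes it by stochastic subgradient descent. The data, covariance Σ, confidence level, and loss are all illustrative assumptions, not the talk's exact SDP/SOCP models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (assumed for illustration): two Gaussian clouds.
n = 200
X_pos = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(n, 2))
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(n), -np.ones(n)])

# Chance constraint P(y*(w'x + b) >= 1) >= eta, enforced via a Chebyshev
# bound, inflates the margin by kappa * sqrt(w' Sigma w) with
# kappa = sqrt(eta / (1 - eta)).
Sigma = np.eye(2)            # assumed covariance of the feature uncertainty
eta = 0.8
kappa = np.sqrt(eta / (1.0 - eta))
lam = 0.01                   # ridge regularization strength (assumed)

w = np.array([0.1, 0.1])
b = 0.0
step = 0.01
for t in range(20000):
    i = rng.integers(0, 2 * n)
    robust_margin = kappa * np.sqrt(w @ Sigma @ w + 1e-12)
    # robustified hinge loss: max(0, 1 + kappa*sqrt(w'Sigma w) - y_i(w'x_i + b))
    loss_active = 1.0 + robust_margin - y[i] * (w @ X[i] + b) > 0.0
    gw = lam * w
    gb = 0.0
    if loss_active:
        gw = gw + kappa * (Sigma @ w) / np.sqrt(w @ Sigma @ w + 1e-12) - y[i] * X[i]
        gb = -y[i]
    w -= step * gw
    b -= step * gb

acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.3f}")
```

The per-sample subgradient cost is what makes this kind of scheme attractive for large-scale data, in contrast to solving the full SDP or SOCP reformulation directly.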
3. Speaker: Prof. Suresh P. Sethi, The University of Texas at Dallas, USA
Title: Feedback Stackelberg Games for Dynamic Supply Chains with Cost Learning
We consider a decentralized two-period supply chain in which a manufacturer produces a product that benefits from cost learning and sells it through a retailer facing price-dependent demand. The manufacturer’s second-period production cost declines linearly in the first-period production quantity, but with a random learning rate. The manufacturer may or may not have the option of inventory carryover. We formulate the resulting problems as two-period Stackelberg games and obtain their feedback equilibrium solutions explicitly. We then examine the impact of the mean learning rate and learning rate variability on the pricing strategies of the channel members, on the manufacturer’s production decisions, and on the retailer’s procurement decisions. We show that as the mean learning rate or the learning rate variability increases, the traditional double marginalization problem becomes more severe, leading to greater efficiency loss in the channel. We obtain revenue-sharing contracts that can coordinate the dynamic supply chain. In particular, when the manufacturer may hold inventory, we identify two major drivers for inventory carryover: market growth and learning rate variability. Finally, we demonstrate the robustness of our results by examining a model in which cost learning takes place continuously.
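The feedback (backward-induction) logic of such games can be sketched numerically. The toy below uses linear demand q = a − p and linear cost learning c2 = c1 − β·q1 with a random learning rate β; all parameter values and the grid search are illustrative assumptions, not the paper's model or its closed-form equilibrium:

```python
import numpy as np

# Two-period manufacturer-retailer Stackelberg game, solved backward.
a, c1 = 10.0, 4.0              # demand intercept and first-period unit cost
mu_beta, var_beta = 0.5, 0.1   # mean and variance of the learning rate beta

# Period 2 (static subgame): with demand q = a - p, the retailer's best
# response to wholesale price w2 is p2 = (a + w2)/2, so q2 = (a - w2)/2;
# the manufacturer then optimally sets w2 = (a + c2)/2 and earns (a - c2)^2/8.
def expected_period2_profit(q1):
    # E[(a - c2)^2] / 8 with c2 = c1 - beta * q1:
    m = a - c1 + mu_beta * q1
    return (m**2 + var_beta * q1**2) / 8.0

# Period 1: the retailer's response to w1 gives q1 = (a - w1)/2; the
# manufacturer chooses w1 anticipating its effect on second-period cost.
def manufacturer_total_profit(w1):
    q1 = (a - w1) / 2.0
    return (w1 - c1) * q1 + expected_period2_profit(q1)

grid = np.linspace(c1, a, 100001)
w1_opt = grid[np.argmax(manufacturer_total_profit(grid))]

w_static = (a + c1) / 2.0      # single-period wholesale price, no learning
print(f"two-period w1* = {w1_opt:.3f}  vs  static w* = {w_static:.3f}")
# Cost learning gives the manufacturer an incentive to price below the
# static wholesale level in period 1 to stimulate learning-by-doing.
```

In this toy instance the first-period wholesale price falls below the static double-marginalization price, since extra first-period volume lowers the expected second-period cost.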