Optimization

Optimization Topics

Number partitioning problem

Given a set of nonnegative integers, the number partitioning problem asks for a division of the set into two subsets such that the sums of the numbers in each subset are as close as possible. This problem is known to be NP-complete, but is perhaps "the easiest hard problem" (Hayes 2002; Borwein and Bailey 2003, pp. 38-39).
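Despite its NP-completeness, small instances can be solved exactly by dynamic programming over achievable subset sums. A minimal sketch in Python (the function name is my own, not from the literature):

```python
def min_partition_diff(nums):
    """Smallest possible |sum(A) - sum(B)| over all two-way partitions,
    found by dynamic programming over the achievable subset sums."""
    total = sum(nums)
    reachable = {0}                      # sums attainable by some subset
    for x in nums:
        reachable |= {s + x for s in reachable}
    # a subset of sum s on one side leaves total - s on the other
    return min(abs(total - 2 * s) for s in reachable)
```

For example, min_partition_diff([8, 7, 6, 5, 4]) returns 0, since {8, 7} and {6, 5, 4} both sum to 15. The running time is pseudo-polynomial (it grows with the total of the inputs), which is consistent with NP-completeness.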

Optimization theory

A branch of mathematics which encompasses many diverse areas of minimization and optimization. Optimization theory is the more modern term for operations research. It includes the calculus of variations, control theory, convex optimization theory, decision theory, game theory, linear programming, Markov chains, network analysis, queuing systems, and more.

Global optimization

The objective of global optimization is to find the globally best solution of (possibly nonlinear) models, in the (possible or known) presence of multiple local optima. Formally, global optimization seeks the global solution(s) of a constrained optimization model. Nonlinear models are ubiquitous in many applications, e.g., in advanced engineering design, biotechnology, data analysis, environmental management, financial planning, process control, risk management, and scientific modeling; their solution often requires a global search approach. A few application examples include acoustics equipment design, cancer therapy planning, chemical process modeling, data analysis, classification and visualization, economic and financial forecasting, environmental risk assessment and management, industrial product design, laser equipment design, model fitting to data (calibration), and optimization in numerical mathematics.

Genetic algorithm

A genetic algorithm is a class of adaptive stochastic optimization algorithms involving search and optimization. Genetic algorithms were first used by Holland (1975). The basic idea is to mimic a simple picture of natural selection in order to find a good algorithm. The first step is to mutate, or randomly vary, a given collection of sample programs. The second step is selection, which is often done by measuring each candidate against a fitness function. The process is repeated until a suitable solution is found. There are a large number of different types of genetic algorithms. The mutation step depends on how the sample programs are represented, as well as on whether the programmer includes various crossover techniques; the test for fitness is also up to the programmer. As with gradient flow optimization, it is possible for the process to get stuck in a local maximum of the fitness function.
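The mutate-select loop described above can be sketched for bitstrings. Everything here (the OneMax fitness, tournament selection, one-point crossover, the parameter values) is an illustrative choice, not Holland's original formulation:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=80,
                      p_mut=0.05, seed=0):
    """Maximize `fitness` over bitstrings with selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():                       # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            children.append(child)
        pop = children
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):         # keep the best ever seen
            best = gen_best
    return best
```

Running genetic_algorithm(sum) maximizes the number of 1 bits (the classic OneMax toy problem) and typically finds the all-ones string.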

Traveling salesman constants

Let L(n, d) be the smallest tour length for n points in a d-dimensional hypercube. Then there exists a smallest constant alpha(d) such that L(n, d) <= alpha(d) n^((d-1)/d) asymptotically for all optimal tours in the hypercube, and a constant beta(d) such that L(n, d) ~ beta(d) n^((d-1)/d) for almost all optimal tours in the hypercube. These constants satisfy a chain of inequalities (Fejes Tóth 1940, Verblunsky 1951, Few 1955, Beardwood et al. 1959) whose explicit forms involve the gamma function, an expression involving Struve functions and Bessel functions of the second kind, and two numerical constants (OEIS A086306; Karloff 1989, and OEIS A086307; Goddyn 1990). In the limit d -> infinity, the asymptotic behavior of these constants can be expressed in terms of the best sphere packing density in d-dimensional space (Goddyn 1990, Moran 1984, Kabatyanskii and Levenshtein 1978). Steele and Snyder (1989) proved that the limit defining beta exists. Nonrigorous numerical estimates of the planar constant have been given by Johnson et al. (1996) and Percus and Martin (1996).

Evolution strategies

An evolutionary optimization method used to minimize functions of real variables. Evolution strategies are significantly faster at numerical optimization than traditional genetic algorithms and are also more likely to find a function's true global extremum. These methods heuristically "mimic" biological evolution: namely, the process of natural selection and the "survival of the fittest" principle. An adaptive search procedure based on a "population" of candidate solution points is used. Iterations involve a competitive selection that drops the poorer solutions. The remaining pool of candidates with higher "fitness values" are then "recombined" with other solutions by swapping components with one another; they can also be "mutated" by making some smaller-scale change to a candidate. The recombination and mutation moves are applied sequentially; their aim is to generate new candidate solutions.
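A minimal (1+1) evolution strategy, with a single parent mutated by Gaussian noise and a simple step-size adaptation rule, illustrates the select-and-mutate loop; the adaptation constants 1.1 and 0.95 are ad hoc choices standing in for the classical 1/5-success rule:

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=500, seed=1):
    """Minimize f with a (1+1) evolution strategy: mutate one parent with
    Gaussian noise, keep the better of parent and child, adapt step size."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + rng.gauss(0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:              # "survival of the fittest"
            x, fx = child, fc
            sigma *= 1.1          # success: widen the search
        else:
            sigma *= 0.95         # failure: narrow the search
    return x, fx
```

Applied to the sphere function f(v) = sum(v_i^2), the strategy converges geometrically toward the minimum at the origin.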

Differential evolution

Differential evolution is a stochastic parallel direct search evolution strategy optimization method that is fairly fast and reasonably robust. Differential evolution is implemented in the Wolfram Language as NMinimize[f, vars, Method -> "DifferentialEvolution"] and NMaximize[f, vars, Method -> "DifferentialEvolution"]. It is capable of handling nondifferentiable, nonlinear, and multimodal objective functions, and has been used to train neural networks having real and constrained integer weights. In a population of potential solutions within an n-dimensional search space, a fixed number of vectors are randomly initialized, then evolved over time to explore the search space and locate the minima of the objective function. At each iteration, called a generation, new vectors are generated by the combination of vectors randomly chosen from the current population (mutation).
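A classic DE/rand/1/bin generation loop, sketched in plain Python; the parameter values and the sphere test function are illustrative, and this is not the Wolfram Language implementation:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=2):
    """Minimize f over a box. Each generation, every vector is challenged by a
    trial built from three others: a + F*(b - c), crossed over coordinatewise."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            mutant = [a[k] + F * (b[k] - c[k]) for k in range(dim)]
            jrand = rng.randrange(dim)       # at least one mutant coordinate
            trial = [mutant[k] if (rng.random() < CR or k == jrand)
                     else pop[i][k] for k in range(dim)]
            fc = f(trial)
            if fc <= cost[i]:                # greedy one-to-one replacement
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

On the 3-dimensional sphere function the population contracts rapidly onto the global minimum at the origin.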

Linear programming

Linear programming, sometimes known as linear optimization, is the problem of maximizing or minimizing a linear function over a convex polyhedron specified by linear and non-negativity constraints. Simplistically, linear programming is the optimization of an outcome based on some set of constraints using a linear mathematical model. Linear programming is implemented in the Wolfram Language as LinearProgramming[c, m, b], which finds a vector x that minimizes the quantity c.x subject to the constraints m.x >= b and x_i >= 0 for all i. Linear programming theory falls within convex optimization theory and is also considered an important part of operations research. Linear programming is extensively used in business and economics, but may also be used to solve certain engineering problems. Examples from economics include Leontief's input-output model and the determination of shadow prices; an example of a business application would be maximizing profit subject to limited resources.
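Because the optimum of a linear program lies at a vertex of the feasible polyhedron, tiny two-variable problems can be solved by brute-force enumeration of constraint intersections. This sketch is illustrative only (not the Wolfram Language routine), and the toy problem in the usage note is a made-up example:

```python
from itertools import combinations

def lp_max_2d(c, A, b):
    """Maximize c.x over {x >= 0, A x <= b} in 2-D by checking every vertex:
    each vertex is the intersection of two active constraint lines."""
    rows = [list(r) for r in A] + [[-1, 0], [0, -1]]   # add x >= 0, y >= 0
    rhs = list(b) + [0, 0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a1, b1), (a2, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                       # parallel lines: no vertex here
        x = (rhs[i] * b2 - rhs[j] * b1) / det
        y = (a1 * rhs[j] - a2 * rhs[i]) / det
        # keep the vertex only if it satisfies every constraint
        if all(r[0] * x + r[1] * y <= rb + 1e-9 for r, rb in zip(rows, rhs)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best
```

For instance, maximizing 3x + 5y subject to x <= 4, 2y <= 12, and 3x + 2y <= 18 gives the value 36 at the vertex (2, 6). Real solvers (simplex, interior point) avoid this exponential enumeration.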

Convex optimization theory

The problem of maximizing a linear function over a convex polyhedron is one instance of this field, which overlaps with operations research and optimization theory. The general problem of convex optimization is to find the minimum of a convex (or quasiconvex) function f on a finite-dimensional convex body A. Methods of solution include Levin's algorithm and the method of circumscribed ellipsoids, also called the Nemirovsky-Yudin-Shor method.

Simulated annealing

There are certain optimization problems that become unmanageable using combinatorial methods as the number of objects becomes large. A typical example is the traveling salesman problem, which belongs to the NP-complete class of problems. For these problems, there is a very effective practical algorithm called simulated annealing (so named because it mimics the process undergone by misplaced atoms in a metal when it is heated and then slowly cooled). While this technique is unlikely to find the optimum solution, it can often find a very good solution, even in the presence of noisy data. The traveling salesman problem can be used as an example application of simulated annealing. In this problem, a salesman must visit some large number of cities while minimizing the total mileage traveled. If the salesman starts with a random itinerary, he can then pairwise trade the order of visits to cities, hoping to reduce the mileage with each exchange.
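The pairwise-trade scheme above can be sketched directly; the geometric cooling schedule and all parameter values are illustrative choices:

```python
import math, random

def tour_length(tour, pts):
    """Total length of a closed tour through the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(pts, T0=1.0, cooling=0.999, iters=20000, seed=3):
    """Simulated annealing for TSP using pairwise swaps of two visits."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    cur = tour_length(tour, pts)
    best, best_len = tour[:], cur
    T = T0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)           # pairwise trade of two cities
        tour[i], tour[j] = tour[j], tour[i]
        new = tour_length(tour, pts)
        # always accept improvements; accept worsenings with Boltzmann probability
        if new <= cur or rng.random() < math.exp((cur - new) / T):
            cur = new
            if cur < best_len:
                best, best_len = tour[:], cur
        else:
            tour[i], tour[j] = tour[j], tour[i]  # undo the rejected swap
        T *= cooling                             # slowly "cool" the system
    return best, best_len
```

On eight cities placed on a circle (whose optimal tour just follows the circle), the annealer reliably finds a tour close to the optimum.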

Simplex method

The simplex method is a method for solving problems in linear programming. This method, invented by George Dantzig in 1947, tests adjacent vertices of the feasible set (which is a polytope) in sequence so that at each new vertex the objective function improves or is unchanged. The simplex method is very efficient in practice, generally taking 2m to 3m iterations at most (where m is the number of equality constraints), and converging in expected polynomial time for certain distributions of random inputs (Nocedal and Wright 1999, Forsgren 2002). However, its worst-case complexity is exponential, as can be demonstrated with carefully constructed examples (Klee and Minty 1972). A different class of methods for linear programming problems are the interior point methods, whose complexity is polynomial for both the average and worst case. These methods construct a sequence of strictly feasible points (i.e., lying in the interior of the polytope but never on its boundary).

Building problem

A hypothetical building design problem in optimization with constraints, proposed by Bhatti (2000, pp. 3-5). To save energy costs for heating and cooling, an architect wishes to design a cuboidal building that is partially underground. Let n be the number of stories (which therefore must be a positive integer), d the depth of the building below ground, h the height of the building above ground, l the length of the building, and w its width. A minimum total floor space is required, the lot size bounds the footprint, the building shape is specified so that l/w equals the golden ratio (approximately 1.618), each story is 3.5 m high, heating and cooling costs are estimated per unit area of exposed building surface, and it is specified that annual climate-control costs must not exceed a given budget. The problem then asks to minimize the volume that must be excavated to build the building, i.e., to minimize the excavated volume f = d l w subject to the resulting constraints.

Branch and bound algorithm

Branch and bound algorithms are adaptive partition strategies that have been proposed to solve global optimization models. They are based upon partition, sampling, and subsequent lower and upper bounding procedures; these operations are applied iteratively to the collection of active ("candidate") subsets within the feasible set. Their exhaustive search feature is guaranteed in a spirit similar to that of the analogous integer linear programming methodology. Branch and bound subsumes many specific approaches and allows for a variety of implementations. Branch and bound methods typically rely on some a priori structural knowledge about the problem. This information may relate, for instance, to how rapidly each function can vary (e.g., the knowledge of a suitable "overall" Lipschitz constant for each function), or to the availability of an analytic formulation and guaranteed smoothness of all functions.
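The role of a Lipschitz constant in the bounding step can be illustrated on one-dimensional minimization: the midpoint value of an interval minus L times half its width is a valid lower bound, and any interval whose bound cannot beat the incumbent is pruned. A sketch, assuming a valid Lipschitz constant L is supplied by the caller:

```python
import heapq

def lipschitz_minimize(f, a, b, L, tol=1e-4):
    """Branch and bound for min f on [a, b], assuming |f(x)-f(y)| <= L|x-y|.
    Intervals are split in two (branch); an interval is kept only while its
    Lipschitz lower bound might still beat the best value found (bound)."""
    mid = (a + b) / 2
    best_x, best_f = mid, f(mid)
    heap = [(best_f - L * (b - a) / 2, a, b)]      # (lower bound, interval)
    while heap:
        bound, lo, hi = heapq.heappop(heap)
        if bound > best_f - tol:                   # smallest bound can't improve:
            break                                  # every other interval is worse
        for l, h in ((lo, (lo + hi) / 2), ((lo + hi) / 2, hi)):
            m = (l + h) / 2
            fm = f(m)
            if fm < best_f:
                best_x, best_f = m, fm             # new incumbent
            lb = fm - L * (h - l) / 2
            if lb < best_f - tol:                  # branch only if promising
                heapq.heappush(heap, (lb, l, h))
    return best_x, best_f
```

For f(x) = (x - 1.3)^2 on [-4, 4], any L >= 10.6 is valid, and the routine returns the minimizer to within the tolerance while discarding most of the interval without dense sampling.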

Set covering deployment

Set covering deployment (sometimes written "set-covering deployment" and abbreviated SCDP for "set covering deployment problem") seeks an optimal stationing of troops in a set of regions so that a relatively small number of troop units can control a large geographic region. ReVelle and Rosing (2000) first described this in a study of Emperor Constantine the Great's mobile field army placements to secure the Roman Empire. Set covering deployment can be mathematically formulated as a (0,1)-integer programming problem. To formulate the Roman domination problem, consider the eight provinces of the Constantinian Roman Empire as the nodes of a graph whose edges indicate connections between regions. Call a region secured if one or more field armies are stationed in that region, and call a region securable if a field army can be deployed to that area from an adjacent area.
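For a small map, the (0,1)-integer program can be replaced by brute force over all army placements. The sketch below computes the Roman domination number of a toy graph; the adjacency list in the usage note is a made-up example, not Constantine's provinces:

```python
from itertools import product

def roman_domination(adj):
    """Brute-force the Roman domination number: assign 0, 1, or 2 armies to
    each region so that every 0-region is adjacent to some 2-region (i.e.,
    every unsecured region is securable); minimize the total army count."""
    n = len(adj)
    best = 2 * n                          # trivial upper bound: 2 armies each
    for labels in product((0, 1, 2), repeat=n):
        if sum(labels) >= best:
            continue                      # cannot improve; skip the check
        if all(l > 0 or any(labels[u] == 2 for u in adj[v])
               for v, l in enumerate(labels)):
            best = sum(labels)
    return best
```

For a path of four regions (adjacency list [[1], [0, 2], [1, 3], [2]]), the answer is 3, e.g., two armies in one interior region plus one at the far end. The 3^n enumeration is only for illustration; real instances call for integer programming.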

Jeep problem

Maximize the distance a Jeep can penetrate into the desert using a given quantity of fuel. The Jeep is allowed to go forward, unload some fuel, and then return to its base using the fuel remaining in its tank. At its base, it may refuel and set out again. When it reaches fuel it has previously stored, it may use that fuel to partially fill its tank. This problem is also called the exploration problem (Ball and Coxeter 1987). Given n drums of fuel at the edge of the desert and a Jeep capable of holding one drum (and storing fuel in containers along the way), the maximum one-way distance which can be traveled (assuming the Jeep travels one unit of distance per drum of fuel expended) is d(n) = 1 + 1/3 + 1/5 + ... + 1/(2n - 1) = (1/2)[psi_0(n + 1/2) + gamma] + ln 2, where gamma is the Euler-Mascheroni constant and psi_0 is the polygamma (digamma) function. For example, the farthest a Jeep with n = 1 drum can travel is obviously 1 unit. With n = 2 drums, the maximum distance 4/3 is achieved by filling up the Jeep's tank with the first drum, traveling 1/3 of a unit, caching 1/3 drum of fuel there, returning to base on the remaining 1/3, and then setting out with the second drum and topping up the tank at the cache so that a full additional unit can be covered.
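The distance is just a sum of odd harmonic terms, easy to check with exact arithmetic (a small sketch):

```python
from fractions import Fraction

def jeep_distance(n):
    """Maximum one-way distance reachable with n drums of fuel:
    d(n) = 1 + 1/3 + 1/5 + ... + 1/(2n - 1)."""
    return sum(Fraction(1, 2 * k - 1) for k in range(1, n + 1))
```

Here jeep_distance(1) == 1 and jeep_distance(2) == 4/3, matching the two cases discussed above. Since the partial sums grow like (ln n)/2, the penetration depth is unbounded but increases extremely slowly with the fuel supply.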

Interior point method

An interior point method is a linear or nonlinear programming method (Forsgren et al. 2002) that achieves optimization by going through the middle of the solid defined by the problem rather than around its surface. A polynomial-time linear programming algorithm using an interior point method was found by Karmarkar (1984). Arguably, interior point methods were known as early as the 1960s in the form of barrier function methods, but the media hype accompanying Karmarkar's announcement led to these methods receiving a great deal of attention. While Karmarkar claimed that his implementation was much more efficient than the simplex method, the potential of interior point methods was established only later. By 1994, there were more than 1300 published papers on interior point methods. Current efficient implementations are mostly based on a predictor-corrector technique (Mehrotra 1992).

Ant colony algorithm

The ant colony algorithm is an algorithm for finding optimal paths that is based on the behavior of ants searching for food. At first, the ants wander randomly. When an ant finds a source of food, it walks back to the colony leaving "markers" (pheromones) indicating that the path leads to food. When other ants come across the markers, they are likely to follow the path with some probability. If they do, they populate the path with their own markers as they bring the food back. As more ants find the path, its marking strengthens until there are several streams of ants traveling to various food sources near the colony. Because the ants drop pheromones every time they bring food, shorter paths are more likely to be stronger, hence optimizing the "solution." In the meantime, some ants are still randomly scouting for closer food sources. A similar approach can be used to find near-optimal solutions to the traveling salesman problem.
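The pheromone loop described above can be sketched for a small traveling salesman instance; all parameter values (evaporation rate rho, pheromone and distance weightings alpha and beta, deposit amount Q) are illustrative choices:

```python
import math, random

def ant_colony_tsp(pts, n_ants=10, n_iter=50, alpha=1.0, beta=3.0,
                   rho=0.5, Q=1.0, seed=4):
    """Ant colony optimization for TSP: ants build tours edge by edge, biased
    toward short edges with strong pheromone; short tours deposit more."""
    rng = random.Random(seed)
    n = len(pts)
    dist = [[math.dist(pts[i], pts[j]) for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]              # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # prefer edges with strong pheromone and short length
                w = [tau[i][j] ** alpha / dist[i][j] ** beta for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_tour, best_len = tour, length
        for row in tau:                              # evaporation
            for j in range(n):
                row[j] *= 1 - rho
        for length, tour in tours:                   # deposit: shorter is stronger
            for k in range(n):
                u, v = tour[k], tour[(k + 1) % n]
                tau[u][v] += Q / length
                tau[v][u] += Q / length
    return best_tour, best_len
```

On six cities placed on a unit circle, whose optimal tour is the hexagon of length 6, the colony converges on the optimal ordering within a few iterations.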

Operations research

Operations research is a vast branch of mathematics which encompasses many diverse areas of minimization and optimization. Thousands of books have been written worldwide on the subject. The central objective of operations research is optimization, i.e., "to do things best under the given circumstances." This general concept has a great many applications, for instance, in agricultural planning, biotechnology, data analysis, distribution of goods and resources, emergency and rescue operations, engineering systems design, environmental management, financial planning, health care management, inventory control, manpower and resource allocation, manufacturing of goods, military operations, production process control, risk management, sequencing and scheduling of tasks, telecommunications, and traffic control. Closely related disciplines (with significant overlaps among them) include optimization theory, decision theory, and game theory.

Griewank function

The Griewank function is widely used to test the convergence of optimization routines. The Griewank function of order n is defined by G_n(x_1, ..., x_n) = 1 + sum_{i=1}^n x_i^2/4000 - prod_{i=1}^n cos(x_i/sqrt(i)) for -600 <= x_i <= 600 (Griewank 1981). It has a global minimum of 0 at the point x_1 = ... = x_n = 0. On its domain, the order-1 function has 191 minima, with the global minimum at x = 0 and local minima at approximately ±6.28005 (OEIS A177889), ±12.5601, ±18.8401, ±25.1202, .... Restricting the domain of the function to |x| <= n, the numbers of local minima for n = 1, 2, ... are therefore given by 1, 1, 1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 5, 5, 5, 5, 5, 5, 7, ... (OEIS A178832).
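The definition translates directly into code (a small sketch):

```python
import math

def griewank(x):
    """Griewank function: 1 + sum(x_i^2 / 4000) - prod(cos(x_i / sqrt(i)))."""
    s = sum(xi * xi for xi in x) / 4000
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, 1))
    return 1 + s - p
```

At the origin the product of cosines is exactly 1, so griewank([0.0] * n) returns exactly 0, the global minimum; nearby local minima such as the one close to x = 6.28 in one dimension have small but positive values.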
