Gekko (optimization software)


The GEKKO Python package[1] solves large-scale mixed-integer and differential algebraic equations with nonlinear programming solvers (IPOPT, APOPT, BPOPT, SNOPT, MINOS). Modes of operation include machine learning, data reconciliation, real-time optimization, dynamic simulation, and nonlinear model predictive control. In addition, the package solves linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), nonlinear programming (NLP), mixed-integer programming (MIP), and mixed-integer linear programming (MILP) problems. GEKKO is available in Python and is installed with pip from PyPI.

pip install gekko
GEKKO
Developer(s): Logan Beal and John D. Hedengren
Stable release: 1.2.1 (July 3, 2024)
Operating system: Cross-platform
Type: Technical computing
License: MIT
Website: gekko.readthedocs.io/en/latest/

GEKKO works on all platforms and with Python 2.7 and 3+. By default, the problem is sent to a public server where the solution is computed and returned to Python. There are Windows, macOS, Linux, and ARM (Raspberry Pi) processor options to solve without an Internet connection. GEKKO is an extension of the APMonitor Optimization Suite but has integrated the modeling and solution visualization directly within Python. A mathematical model is expressed in terms of variables and equations such as the Hock & Schittkowski Benchmark Problem #71,[2] which is used to test the performance of nonlinear programming solvers. This particular optimization problem has the objective function x1*x4*(x1 + x2 + x3) + x3, subject to the inequality constraint x1*x2*x3*x4 ≥ 25 and the equality constraint x1² + x2² + x3² + x4² = 40. The four variables must be between a lower bound of 1 and an upper bound of 5. The initial guess values are x1 = 1, x2 = 5, x3 = 5, x4 = 1. This optimization problem is solved with GEKKO as shown below.

from gekko import GEKKO

m = GEKKO()  # Initialize gekko
# Initialize variables
x1 = m.Var(value=1, lb=1, ub=5)
x2 = m.Var(value=5, lb=1, ub=5)
x3 = m.Var(value=5, lb=1, ub=5)
x4 = m.Var(value=1, lb=1, ub=5)
# Equations
m.Equation(x1 * x2 * x3 * x4 >= 25)
m.Equation(x1 ** 2 + x2 ** 2 + x3 ** 2 + x4 ** 2 == 40)
m.Minimize(x1 * x4 * (x1 + x2 + x3) + x3)
m.solve(disp=False)  # Solve
print("x1: " + str(x1.value))
print("x2: " + str(x2.value))
print("x3: " + str(x3.value))
print("x4: " + str(x4.value))
print("Objective: " + str(m.options.objfcnval))

Applications of GEKKO

Applications include cogeneration (power and heat),[3] drilling automation,[4] severe slugging control,[5] solar thermal energy production,[6] solid oxide fuel cells,[7][8] flow assurance,[9] enhanced oil recovery,[10] essential oil extraction,[11] and unmanned aerial vehicles (UAVs).[12] There are many other references to APMonitor and GEKKO as a sample of the types of applications that can be solved. GEKKO was developed under National Science Foundation (NSF) research grant #1547110[13][14][15][16] and is detailed in a Special Issue collection on combined scheduling and control.[17] Other notable mentions of GEKKO include its listing in the Decision Tree for Optimization Software,[18] added support for the APOPT and BPOPT solvers,[19] and project reports from international participants in the online Dynamic Optimization course.[20] GEKKO is a topic in online forums where users solve optimization and optimal control problems.[21][22] GEKKO is used for advanced control in the Temperature Control Lab (TCLab)[23] for process control education at 20 universities.[24][25][26][27]

Machine learning

[Figure: Artificial neural network]

One application of machine learning is to perform regression from training data to build a correlation. In this example, deep learning generates a model from training data that is generated with the function y = 1 − cos(x). An artificial neural network with three layers is used for this example. The first layer is linear, the second layer has a hyperbolic tangent activation function, and the third layer is linear. The program produces parameter weights that minimize the sum of squared errors between the measured data points and the neural network predictions at those points. GEKKO uses gradient-based optimizers to determine the optimal weight values instead of standard methods such as backpropagation. The gradients are determined by automatic differentiation, similar to other popular packages. The problem is solved as a constrained optimization problem and is converged when the solver satisfies Karush–Kuhn–Tucker conditions. Using a gradient-based optimizer allows additional constraints that may be imposed with domain knowledge of the data or system.

from gekko import brain
import numpy as np

b = brain.Brain()  # neural network model built on GEKKO
b.input_layer(1)  # one input (x)
b.layer(linear=3)  # layer 1: linear, 3 nodes
b.layer(tanh=3)  # layer 2: hyperbolic tangent, 3 nodes
b.layer(linear=3)  # layer 3: linear, 3 nodes
b.output_layer(1)  # one output (y)
x = np.linspace(-np.pi, 3 * np.pi, 20)  # training inputs
y = 1 - np.cos(x)  # training outputs
b.learn(x, y)  # fit the weights

The neural network model is tested across the range of training data as well as for extrapolation to demonstrate poor predictions outside of the training data. Predictions outside the training data set are improved with hybrid machine learning that uses fundamental principles (if available) to impose a structure that is valid over a wider range of conditions. In the example above, the hyperbolic tangent activation function (hidden layer 2) could be replaced with a sine or cosine function to improve extrapolation. The final part of the script displays the neural network model, the original function, and the sampled data points used for fitting.

import matplotlib.pyplot as plt

xp = np.linspace(-2 * np.pi, 4 * np.pi, 100)  # extends beyond the training range
yp = b.think(xp)  # neural network predictions

plt.figure()
plt.plot(x, y, "bo")  # sampled training data
plt.plot(xp, yp[0], "r-")  # neural network model
plt.show()
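
The replacement suggested above (a periodic basis in place of the hyperbolic tangent layer) can also be taken further by fitting a structured model directly. The following is a minimal sketch, not taken from the source, that assumes the functional form a − b·cos(c·x) is known from first principles and fits its parameters with GEKKO's regression mode (IMODE=2):

from gekko import GEKKO
import numpy as np

# same training data as above
xm = np.linspace(-np.pi, 3 * np.pi, 20)
ym = 1 - np.cos(xm)

m = GEKKO(remote=False)  # assumes local solver binaries are available
x = m.Param(value=xm)  # independent variable data
a, b, c = [m.FV(value=1.0) for _ in range(3)]  # adjustable parameters
for f in (a, b, c):
    f.STATUS = 1  # allow the solver to adjust each parameter
y = m.CV(value=ym)  # dependent variable with measurements
y.FSTATUS = 1  # fit the model to the measured values
m.Equation(y == a - b * m.cos(c * x))  # structured (hybrid) model form
m.options.IMODE = 2  # steady-state parameter regression
m.solve(disp=False)
print(a.value[0], b.value[0], c.value[0])  # expected near 1, 1, 1

Because the assumed structure matches the underlying function, this fit extrapolates well beyond the training interval, unlike the unconstrained neural network above.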

Optimal control

[Figure: Optimal control problem benchmark (Luus) with an integral objective, inequality, and differential constraint.]

Optimal control is the use of mathematical optimization to obtain a policy that is constrained by differential equations (dx/dt = f(x, u)), equality constraints (g(x, u) = 0), or inequality constraints (h(x, u) ≤ 0) and minimizes an objective/reward function J(x, u). The benchmark problem shown here (Luus) minimizes the integral objective J = ∫₀² ½x₁² dt, subject to dx₁/dt = u, x₁(0) = 1, and −1 ≤ u ≤ 1. The basic optimal control problem is solved with GEKKO by integrating the objective with an additional state x₂ and transcribing the differential equations into algebraic form with orthogonal collocation on finite elements.

from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()  # initialize gekko
nt = 101
m.time = np.linspace(0, 2, nt)
# Variables
x1 = m.Var(value=1)
x2 = m.Var(value=0)
u = m.Var(value=0, lb=-1, ub=1)
p = np.zeros(nt)  # mark final time point
p[-1] = 1.0
final = m.Param(value=p)
# Equations
m.Equation(x1.dt() == u)  # differential constraint
m.Equation(x2.dt() == 0.5 * x1 ** 2)  # integrate the objective as a state
m.Minimize(x2 * final)  # minimize x2 at the final time only
m.options.IMODE = 6  # optimal control mode
m.solve()  # solve
plt.figure(1)  # plot results
plt.plot(m.time, x1.value, "k-", label=r"$x_1$")
plt.plot(m.time, x2.value, "b-", label=r"$x_2$")
plt.plot(m.time, u.value, "r--", label=r"$u$")
plt.legend(loc="best")
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
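
For this benchmark the optimal policy can be worked out by hand (not stated in the source): drive x1 to zero as quickly as possible with u = −1 on [0, 1] and then hold u = 0, which gives an objective of 1/6 ≈ 0.167. A quick check against the numerical solution, reusing x2 from the script above as the accumulated objective:

# hedged check: x2(2) holds the accumulated integral objective from the script above
print("Objective: " + str(x2.value[-1]))  # expected to be close to 1/6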
