Optimization

By Bryan Bergeron


Whether you refer to it as AI, Linear Programming, or Inverse Kinematics, much of the computational work in robotics is some form of optimization.

Think determining the best path for a quadcopter to deliver two packages and then return to base, with a given battery capacity and a schedule to meet. Or, whether to characterize another combat robot as foe or friend.

Or, determining the optimum speed for a carpet rover to traverse a given distance over loose gravel, and then a grassy lawn. Or, determining the best angles for a robot arm to lift and place a particular object, given joint limits, time, and battery capacity.

If you recall your basic linear algebra, an optimization problem of this sort consists of multiple equations with multiple unknowns, plus one or more additional constraints.

If you want to review this form of optimization without resorting to manually manipulating matrices, check out the Wolfram Alpha site (www.wolframalpha.com) and search for the interactive linear programming calculator.
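If you'd rather experiment in code than in a calculator, here's a minimal sketch of the same idea using SciPy's linprog routine (assuming Python and SciPy are installed). The rover speeds, cost coefficients, and time budget below are made up purely for illustration:

# Minimal linear programming sketch using SciPy.
# Hypothetical rover problem: choose speeds x1 (gravel) and x2 (grass)
# to minimize energy use, subject to a time budget and motor limits.
from scipy.optimize import linprog

# Energy cost per unit of speed on each surface (made-up coefficients)
c = [3.0, 2.0]

# Schedule constraint: linprog expects "less than or equal" form, so the
# requirement "combined speed must be at least 5 units" is written negated.
A_ub = [[-1.0, -1.0]]
b_ub = [-5.0]

# Motor limits: each speed must stay between 1 and 4 units
bounds = [(1.0, 4.0), (1.0, 4.0)]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x, result.fun)   # best speeds and the minimized energy cost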

Other computational techniques of optimization include artificial neural networks, genetic algorithms, and simulated annealing. The goal is to minimize some cost function — whether the cost is expressed in dollars, time, distance, or some other metric.

I’ve had good results with artificial neural network optimization routines because for some optimization problems — such as pattern recognition — they’re plug-and-play.

Simply train an artificial neural network of the proper architecture with a data set until the recognition error is in an acceptable range and you’re done.
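As a rough illustration rather than a recipe, here's what that train-until-acceptable loop can look like with scikit-learn's MLPClassifier (assuming scikit-learn is installed); the built-in digits data set is just a stand-in for whatever sensor patterns your robot needs to recognize:

# Sketch of "train until the recognition error is acceptable"
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One hidden layer of 64 units; picking the architecture is part of the "art"
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

error = 1.0 - net.score(X_test, y_test)
print("recognition error:", error)   # stop (or retrain) once this is acceptable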

The downside is that artificial neural networks — which are roughly modeled after biological neural networks — are optimization black boxes.

There’s no sure way to extract the actual optimization parameters, so there’s no way to determine when the optimization routine will fail. There’s also an art to choosing the best architecture and most appropriate training set.

Genetic algorithms — loosely modeled after the same genetic manipulations that occur within living cells — handle the optimization problem less opaquely. It may take several thousand iterations or mutations to come up with the best solution, but the algorithms are straightforward.

Furthermore, the optimization parameters resulting from a run of a genetic algorithm are easily stored and saved for future use, or as a starting point for additional optimization runs.
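Here's a bare-bones sketch of that loop in plain Python. The cost function and mutation rate are placeholders; the point is that selection, crossover, and mutation take only a few lines, and the best parameter set falls out at the end ready to be saved:

# Bare-bones genetic algorithm sketch (standard library only).
# Swap in whatever cost metric your robot needs to minimize.
import random

def cost(params):
    # Hypothetical cost: distance of the parameters from an unknown target
    target = [1.5, -0.7, 3.2]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def crossover(a, b):
    # Single-point crossover between two parent parameter sets
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(params, rate=0.1):
    # Small random tweak to every parameter
    return [p + random.gauss(0, rate) for p in params]

# Random initial population of candidate parameter sets
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(50)]

for generation in range(1000):
    # Keep the fittest half, then refill with mutated offspring of survivors
    population.sort(key=cost)
    survivors = population[:25]
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(25)]

best = min(population, key=cost)
print(best, cost(best))   # save 'best' for future runs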

Simulated annealing — which roughly models the annealing of a metal — is also relatively transparent. For some optimization problems, it’s the most computationally efficient approach.
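A minimal simulated annealing sketch (again with a placeholder cost function) shows the cooling idea: accept worse solutions freely while the "temperature" is high, then become increasingly greedy as it cools:

# Simulated annealing sketch, loosely following the metal-annealing analogy
import math
import random

def cost(x):
    # Hypothetical one-dimensional cost with several local minima
    return x * x + 10 * math.sin(x)

x = random.uniform(-10, 10)          # current solution
best_x, best_cost = x, cost(x)
temperature = 10.0

while temperature > 0.001:
    candidate = x + random.gauss(0, 1)   # small random move
    delta = cost(candidate) - cost(x)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature drops
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if cost(x) < best_cost:
            best_x, best_cost = x, cost(x)
    temperature *= 0.999                 # cooling schedule

print(best_x, best_cost)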

The types of problems best suited for this or any other form of optimization are beyond this editorial, but it’s important to know that there isn’t a one-size-fits-all optimization approach.

Optimization is akin to magic — especially inverse optimization. An inverse problem starts with the answer and works backwards to identify the value of the parameters involved in the optimization.

This may seem a trivial distinction from forward optimization, where the variables are manipulated and the output is monitored for the best answer. However, nothing could be further from reality.

Think about the expansive search space involved in identifying parameters that interact with each other and, together, determine the final solution.

As with much of robotics, optimization is both an art and a science. Anyone can look up the science on the Web with a few keystrokes. Developing the art, however, takes time and hands-on experience.

Good luck with your optimization adventures.  SV


