r,optimization,simulated-annealing

A possible approach would be to make use of so-called Lagrange multipliers (cf. http://en.wikipedia.org/wiki/Lagrange_multiplier). For example, set efficientFunction <- function(v) { lambda <- 100; t(v) %*% Cov_Mat %*% v + lambda * abs( sum(v) - 1 ) } , so that in order to minimize the objective function efficientFunction the...
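The penalty idea above (minimize v' C v plus a large-lambda penalty on |sum(v) - 1|) can be sketched in Java with a simple simulated-annealing loop, matching the thread's tags. The covariance matrix, penalty weight, and cooling schedule here are all illustrative assumptions, not values from the question:

```java
import java.util.Random;

public class PenaltyAnneal {
    // Hypothetical 2-asset covariance matrix (an assumption for illustration).
    static final double[][] COV = {{0.04, 0.01}, {0.01, 0.09}};
    static final double LAMBDA = 100.0; // penalty weight, as in the R snippet

    // Penalized objective: v' C v + lambda * |sum(v) - 1|
    static double objective(double[] v) {
        double quad = 0.0, sum = 0.0;
        for (int i = 0; i < v.length; i++) {
            sum += v[i];
            for (int j = 0; j < v.length; j++) quad += v[i] * COV[i][j] * v[j];
        }
        return quad + LAMBDA * Math.abs(sum - 1.0);
    }

    // Minimize the penalized objective by simulated annealing.
    static double[] anneal(Random r) {
        double[] v = {0.5, 0.5};
        double best = objective(v);
        for (double t = 1.0; t > 1e-4; t *= 0.999) {
            double[] cand = v.clone();
            cand[r.nextInt(cand.length)] += (r.nextDouble() - 0.5) * t;
            double f = objective(cand);
            // Accept improvements always; accept worse moves with Boltzmann probability.
            if (f < best || r.nextDouble() < Math.exp((best - f) / t)) {
                v = cand;
                best = f;
            }
        }
        return v;
    }

    public static void main(String[] args) {
        double[] w = anneal(new Random(42));
        System.out.printf("weights: %.3f %.3f, objective: %.4f%n", w[0], w[1], objective(w));
    }
}
```

Because lambda dominates the quadratic term, any solution that drifts away from sum(v) = 1 is heavily penalized, so the minimizer is pulled back onto the constraint.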

java,algorithm,search,optimization,simulated-annealing

So you are trying to find an n-dimensional point P' that is "randomly" near another n-dimensional point P; for example, at distance T. (Since this is simulated annealing, I assume that you will be decrementing T once in a while). This could work: double[] displacement(double t, int dimension, Random r)...
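One standard way to fill in a `displacement` body with that signature is to sample a Gaussian in each coordinate, normalize, and scale by t; normalized Gaussians give a uniformly random direction on the n-sphere. This is a sketch of that technique, not necessarily the answer's original implementation:

```java
import java.util.Random;

public class NeighborStep {
    // Random n-dimensional displacement of exact length t: draw each
    // coordinate from a standard Gaussian, normalize the vector to unit
    // length, then scale by t. The direction is uniform on the n-sphere.
    static double[] displacement(double t, int dimension, Random r) {
        double[] d = new double[dimension];
        double norm = 0.0;
        for (int i = 0; i < dimension; i++) {
            d[i] = r.nextGaussian();
            norm += d[i] * d[i];
        }
        norm = Math.sqrt(norm);
        for (int i = 0; i < dimension; i++) d[i] = d[i] / norm * t;
        return d;
    }

    public static void main(String[] args) {
        // P' = P + displacement(T, n, r) is then "randomly at distance T" from P.
        double[] d = displacement(0.5, 3, new Random(1));
        double len = 0.0;
        for (double x : d) len += x * x;
        System.out.printf("length = %.6f%n", Math.sqrt(len)); // always ≈ t = 0.5
    }
}
```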

java,algorithm,simulated-annealing,stochastic,hill-climbing

The left-hand side of the equation, p, will be a double between 0 and 1 inclusive. oldFitness, newFitness and T can also be doubles. You will have something similar to this in your code: double p = 1 / (1 + Math.exp((oldFitness - newFitness) / T)); if (Math.random() <...
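A self-contained version of that logistic acceptance rule might look like the following sketch; the helper names are mine, only the formula comes from the snippet:

```java
import java.util.Random;

public class Acceptance {
    // Logistic acceptance rule from the snippet:
    // p = 1 / (1 + exp((oldFitness - newFitness) / T))
    // For a maximization problem: a much better candidate pushes p toward 1,
    // a much worse one pushes p toward 0, and equal fitness gives exactly 0.5.
    static double acceptanceProbability(double oldFitness, double newFitness, double t) {
        return 1.0 / (1.0 + Math.exp((oldFitness - newFitness) / t));
    }

    // Accept the candidate when a uniform draw falls below p.
    static boolean accept(double oldFitness, double newFitness, double t, Random r) {
        return r.nextDouble() < acceptanceProbability(oldFitness, newFitness, t);
    }

    public static void main(String[] args) {
        System.out.println(acceptanceProbability(10.0, 10.0, 1.0)); // 0.5
        System.out.println(acceptanceProbability(10.0, 20.0, 1.0)); // ~0.99995
    }
}
```

As T shrinks over the run, the slope of the logistic steepens, so worse moves are accepted less and less often.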

discrete-mathematics,solver,simulated-annealing,rostering,tabu-search

If you go with OptaPlanner and don't want to follow the Employee Rostering design of assigning 8-hour Shifts (planning entities) to Employees (planning values) because of your second constraint, then you could try to follow the Cheap Time Example design, something like this: @PlanningEntity public class WorkAssignment { Employee...
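A plain-Java sketch of the shape such a domain class might take follows; in a real OptaPlanner model, employee and startPeriod would be @PlanningVariable fields on the @PlanningEntity class, and the overlap check would live in score rules. All names and fields here are hypothetical, not the answer's actual design:

```java
public class WorkAssignment {
    String employee;            // planning variable: who works this assignment
    int startPeriod;            // planning variable: first time slot worked
    final int durationPeriods;  // assignment length in time slots, not a fixed 8h shift

    WorkAssignment(int durationPeriods) {
        this.durationPeriods = durationPeriods;
    }

    int endPeriod() {
        return startPeriod + durationPeriods;
    }

    // Constraint helper: two assignments for the same employee must not overlap.
    static boolean conflicts(WorkAssignment a, WorkAssignment b) {
        return a.employee != null && a.employee.equals(b.employee)
            && a.startPeriod < b.endPeriod() && b.startPeriod < a.endPeriod();
    }

    public static void main(String[] args) {
        WorkAssignment x = new WorkAssignment(4);
        WorkAssignment y = new WorkAssignment(4);
        x.employee = "Ann"; x.startPeriod = 0;
        y.employee = "Ann"; y.startPeriod = 2;
        System.out.println(conflicts(x, y)); // periods [0,4) and [2,6) overlap
    }
}
```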

neural-network,genetic-algorithm,encog,simulated-annealing,particle-swarm

It seems logical; however, it will not work. With the default RPROP parameters, this sequence is unlikely to work. The reason is that after your previous training the weights of the neural network will be near a local optimum. Because of this nearness to a local optimum...
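The effect described above can be illustrated with a toy 1-D fitness surface (a made-up function, not an Encog network): a search seeded at a local optimum and mutated with small steps never crosses the valley to the better optimum, which is why stacking annealing or a GA on top of already-converged RPROP weights gains little:

```java
import java.util.Random;

public class LocalOptimumDemo {
    // Toy fitness surface: local maximum at x = -1 (fitness 1) and a better
    // global maximum at x = 3 (fitness 2), standing in for the error surface
    // of an already-trained network. Purely illustrative.
    static double fitness(double x) {
        return Math.max(1.0 - (x + 1) * (x + 1), 2.0 - 0.5 * (x - 3) * (x - 3));
    }

    // Greedy hill climbing with small Gaussian mutations, started exactly at
    // the local optimum -- the situation after previous RPROP training.
    static double climb(long seed) {
        Random r = new Random(seed);
        double x = -1.0;
        for (int i = 0; i < 1000; i++) {
            double cand = x + r.nextGaussian() * 0.1; // small mutation
            if (fitness(cand) > fitness(x)) x = cand; // keep only improvements
        }
        return x;
    }

    public static void main(String[] args) {
        // Small mutations never cross the valley between the two peaks.
        System.out.printf("final x = %.2f (global optimum is at 3.0)%n", climb(7));
    }
}
```

Escaping would require much larger perturbations (or a high temperature), which is effectively a restart rather than a refinement of the trained weights.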