java,mathematical-optimization,apache-commons-math

Polynomials are special functions (in the general sense of "special") and have lots of distinctive, useful properties. My advice is to exploit those properties instead of trying to use a method for more general functions. Specifically, the extreme values of a polynomial are the roots of its derivative (where the...
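As an illustration of that idea (in Python with NumPy rather than Java; the polynomial here is made up):

```python
import numpy as np

# Hypothetical polynomial p(x) = x^3 - 3x, coefficients highest degree first
p = np.poly1d([1, 0, -3, 0])
dp = p.deriv()                               # p'(x) = 3x^2 - 3
critical = dp.roots                          # candidate extrema: roots of p'
real_critical = critical[np.isreal(critical)].real
extreme_values = p(real_critical)            # evaluate p at each critical point
```

Here the critical points are x = ±1, with p(-1) = 2 a local maximum and p(1) = -2 a local minimum; for a global answer you would also consider interval endpoints or the behavior at infinity.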

python,scipy,mathematical-optimization

fmin has a full_output=True parameter:

```
xopt, fopt, iter, funcalls, warnflag = fmin(T, 0, full_output=True, disp=False)
print(xopt, fopt)
# (array([ 301.9498125]), 0.71004171552448025)
```

...

Maybe two problems. First you haven't input the parameter meq which tells the solver how many of your constraint equations are equality constraints vs. how many are inequality so it defaults to meq=0 which makes them all equality constraints so you've overdetermined your solution. Looking at your problem, I might...

algorithm,mathematical-optimization

This problem is strongly NP-hard by reduction from 3-partition (prepare several duplicate rows consisting of the 3-partition input, one for each desired partition). A mixed-integer programming (MIP) solver is likely no better than brute force in the worst case, but it's easy enough to be worth a try. Suppose that...

python,vector,linear-algebra,mathematical-optimization,approximate

I agree that in general this is a pretty tough optimization problem, especially at the scale you're describing. Each objective function evaluation requires O(nm + n^2) work for n points of dimension m -- O(nm) to compute distances from each point to the new point and O(n^2) to compute the...

algorithm,math,mathematical-optimization

You are probably looking for ternary search. Ternary search will help you find the desired f(m) in O(log N) time, where N is the number of points on the curve. It basically takes two points m1 and m2 in the range (l,r) and then recursively searches in the 1/3rd...
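A minimal sketch of the idea in Python (assuming a unimodal function on a real interval; for a discrete curve you would shrink an index range instead):

```python
def ternary_search_min(f, lo, hi, iters=200):
    """Locate the minimum of a unimodal f on [lo, hi] by discarding a
    third of the interval at each step."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2           # the minimum cannot lie in (m2, hi]
        else:
            lo = m1           # the minimum cannot lie in [lo, m1)
    return (lo + hi) / 2

print(ternary_search_min(lambda x: (x - 2) ** 2, 0, 10))   # ≈ 2.0
```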

matrix,mathematical-optimization,genetic-algorithm

Any structure of fixed size and shape that is made of numbers (or any other elements) can be rewritten to a 1-D vector and back. You can then use any operator you like which works on vectors. If you wanted to work with matrices (or any other structures) directly you...
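In NumPy terms, the round trip looks like this (a sketch; the shape is made up):

```python
import numpy as np

m = np.arange(12).reshape(3, 4)   # any fixed-size matrix individual
v = m.ravel()                     # flatten to a 1-D vector for the GA operators
# ... apply vector crossover/mutation to v here ...
m2 = v.reshape(3, 4)              # rebuild the matrix from the vector
```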

algorithm,mathematical-optimization,traveling-salesman,ant-colony

1) You'd usually store the best-so-far path across all iterations (as in Elitist Ant System etc.), which is then also the final result. 2) By "cycle" do you mean iteration? Anyway, as for "path found on this level or not", I'd only allow successful (complete) paths in the first place, or...

algorithm,graph,dynamic-programming,mathematical-optimization

EDIT: Under the newly added restriction that every node can be visited only once, the problem is most definitely NP-hard, via reduction from Hamiltonian path: for a general undirected, unweighted graph, set all edge weights to zero and every vertex weight to 1. Then the maximum reachable score is n...

I think I worked up some kind of a solution. You must order your list from higher to lower, which gives us [20, 10, 5, 2]. Then we're going to work with some kind of a stack. We begin with n=0. a. You put the n'th element of your list at...

ruby,algorithm,primes,mathematical-optimization

Your efficiency problem is that you are attempting to find target prime numbers by construction of a very large set of possibilities and filtering by results. The set of all building blocks and their combinations is much larger than the available targets, so you spend a huge amount of time...

My approach was wrong. I just have to use these rules: t1 hours/cake = 1/t1 cakes/hour. All three together eat 1/t1 + 1/t2 + 1/t3 cakes in one hour. In h hours, they eat h(1/t1 + 1/t2 + 1/t3) cakes. The time to eat a whole cake must satisfy h(1/t1 + 1/t2 + 1/t3) = 1, or h = 1/(1/t1 + 1/t2 + 1/t3). And the code will be: double...
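The rates-add argument fits in a one-liner (Python here for illustration; the original answer sketches the same formula with a double):

```python
def time_to_eat_one_cake(t1, t2, t3):
    """t_i hours/cake means a rate of 1/t_i cakes/hour; rates add,
    so the combined time is 1 / (sum of the rates)."""
    return 1.0 / (1.0 / t1 + 1.0 / t2 + 1.0 / t3)

print(time_to_eat_one_cake(2, 3, 6))   # 1/2 + 1/3 + 1/6 = 1 cake/hour, so ≈ 1.0
```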

Change your function to this:

```
suma <- function(par) {
  sume <- 0
  for (p in seq(from=1, to=10, by=1)) {
    sume <- sume + (log(vpk$V1[p]) / log(n^(-1)) - outer(par[1], par[2], Rab))^2
  }
  return(sume)
}
```

Package optimx might be of interest to you....

r,mathematical-optimization,mosek

Your last constraint defines one sheet of the hyperbola defined by 4000x1 + 6400x2 - 384x1^2 - 1280x1x2 - 999x2^2 = 10000. This is not convex. You cannot directly formulate it as a convex conic constraint.

math,3d,mathematical-optimization,plane

Your code looks good. You can shorten it somewhat to this:

```
void position_from_plane(float r_co[3], const float p[4])
{
    const float d = -p[3] / (p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
    r_co[0] = p[0]*d;
    r_co[1] = p[1]*d;
    r_co[2] = p[2]*d;
}
```

You could get slightly shorter code if you were to intersect...

algorithm,matrix,mathematical-optimization

I ended up realizing that this problem can be reformulated on a bipartite graph: the maximum-size submatrix is equivalent to a maximum edge biclique in a bipartite graph, which is an NP-complete problem. I found the BiMax implementation in the R package biclust very useful!

python,optimization,order,mathematical-optimization

```
import itertools

elements = ['a', 'b', 'c', 'd']
scores = [['d-a', 0], ['a-b', 0], ['b-a', 0], ['a-c', 2], ['c-a', 0], ['a-d', 2],
          ['d-b', 1], ['b-c', 2], ['c-b', 0], ['b-d', 2], ['d-c', 2], ['c-d', 2]]
s = dict(scores)
print(min(itertools.permutations(elements),
          key=lambda p: sum(s[a+'-'+b] for a, b in zip(p, p[1:]))))
```

Prints ('c', 'a', 'b', 'd')....

algorithm,max,mathematical-optimization,genetic-algorithm,evolutionary-algorithm

"So let's say for some function f(x,y,z), I want to find what would give the highest value; that would be maximization, right? And if I wanted to find the lowest value, that would be minimization?" Yes, that's by definition true. So if a genetic algorithm is a search algorithm...

r,mathematical-optimization,quadprog,quadratic-programming

You can do this with the solve.QP function from quadprog. From ?solve.QP, we read that solve.QP solves systems of the form min_b {-d'b + 0.5 b'Db | A'b >= b0}. You are solving a problem of the form min_w {-A'w + pw'Cw | w >= 0, 1'w = 1}. Thus,...

java,optimization,mathematical-optimization

This might help you:

```
long weight = 1;
long finalSum = 0;
while (number > 0) {
    long a = number % 10;
    finalSum += (a * weight);
    weight++;
    number /= 10;
}
if ((finalSum % 11) == 10) {
    System.out.println("Final sum when divided by 11 gives remainder 10");
}
```

...

math,powershell,mathematical-optimization,money

One solution would be to use the [Math]::Max() function like this:

```
$remainamount = [Math]::Max($currentamount[0] - $checkamount, 0)
```

That will give you the higher of the two numbers: if they still owe something it gives that, otherwise it gives 0....

java,math,complexity-theory,mathematical-optimization,algebra

Mathematically, this corresponds to finding all linear combinations of the kernel vectors using all possible sets of coefficients that are n-tuples mod p. It amounts to a matrix multiplication mod p between a p^n x n coefficient matrix and a n x m kernel matrix. The p^n x n matrix...
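The enumeration the answer describes can be sketched like this (Python/NumPy for illustration; the modulus and kernel here are made up):

```python
import itertools
import numpy as np

p = 3                                   # modulus (assumed prime)
kernel = np.array([[1, 2, 0],           # n = 2 kernel vectors of length m = 3
                   [0, 1, 1]])
n = kernel.shape[0]

# All p^n coefficient n-tuples as the rows of a (p^n x n) matrix...
coeffs = np.array(list(itertools.product(range(p), repeat=n)))
# ...then one matrix multiplication mod p yields every linear combination.
combos = coeffs.dot(kernel) % p
```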

matlab,math,mathematical-optimization,indoor-positioning-system

This quote is somewhat poorly worded, but the way I read it is: for the first system, the reported location is 2.3 meters or less away from the true location 90% of the time. Also for the first system, the reported location is 3.5 meters or less away from the true...

python,mathematical-optimization,gurobi

You are creating your arrays of variables wrong; it should be

```
r_plus = []
for i in range(row):
    r_plus.append(m.addVar(name="r_plus%d" % i))
r_minus = []
for i in range(row):
    r_minus.append(m.addVar(name="r_minu%d" % i))
beta = []
for j in range(col):
    beta.append(m.addVar(name="beta%d" % j))
```

or more briefly r_plus =...

python,scipy,mathematical-optimization

First, change the signature of func_1 and func_2 to something like:

```
def func_1(t):
    x, y = t
    return (x*x) + (y*y)

def func_2(t, a):
    x, y = t
    return ((x-a)*(x-a)) + ((y-a)*(y-a))
```

In the first case, use

```
from scipy.optimize import fmin
from numpy import array

fmin(func_1, array([1, 1]))
```

In the second case, you must pass to fmin...

python,algorithm,mathematical-optimization,knapsack-problem,greedy

This is an instance of the Knapsack problem. It is NP-hard, so with 45 items, you'll have to use some heuristic algorithm (Hill Climbing, for example) to find an acceptable estimate. To find the optimal solution, you have no other option than to try all possibilities (which is infeasible). Knowledge...
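A hill-climbing sketch for the 0/1 knapsack, along the lines the answer suggests (a heuristic of my own construction, not guaranteed optimal):

```python
import random

def knapsack_hill_climb(values, weights, capacity, restarts=20, seed=0):
    """Hill climbing with random flip order and restarts."""
    rng = random.Random(seed)
    n = len(values)

    def score(sol):
        total_w = sum(w for w, s in zip(weights, sol) if s)
        if total_w > capacity:
            return -1                       # infeasible solutions score worst
        return sum(v for v, s in zip(values, sol) if s)

    best, best_val = [0] * n, 0
    for _ in range(restarts):
        sol = [0] * n                       # start from the empty knapsack
        cur = 0
        improved = True
        while improved:
            improved = False
            for i in rng.sample(range(n), n):
                sol[i] ^= 1                 # tentatively flip item i
                new = score(sol)
                if new > cur:
                    cur, improved = new, True
                else:
                    sol[i] ^= 1             # undo a non-improving flip
        if cur > best_val:
            best, best_val = sol[:], cur
    return best, best_val
```

Each restart climbs from the empty knapsack with a random flip order, so different restarts can end in different local optima.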

In this case your objective function has infinitely many optimal points (not necessarily just different local maxima). Any point where one of the parameters is zero is just as good as any other point where a parameter is near 0. I'm not sure if you were expecting (0,0), but (0,34) has...

Here's the workaround I got from the author of DEoptim():

```
library(DEoptim)

# From help(DEoptim)
Rosenbrock <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}

lower <- c(-10, -10)
upper <- -lower

## run DEoptim and set a seed first for replicability
```

...

algorithm,geometry,dynamic-programming,mathematical-optimization

You can for example model your problem as an assignment problem (Wikipedia link), where you define the cost C_ij of assigning point A_i (from set A) to point B_j (from set B) to be equal to the distance between them. This assignment problem can then be solved using the Hungarian...
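For illustration, SciPy ships a Hungarian-style solver, scipy.optimize.linear_sum_assignment (the point sets below are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([[0.0, 0.0], [5.0, 0.0]])       # set A
B = np.array([[4.0, 0.0], [1.0, 0.0]])       # set B
C = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # C[i,j] = |A_i - B_j|
rows, cols = linear_sum_assignment(C)        # minimum-cost perfect matching
total = C[rows, cols].sum()                  # A_0 -> B_1 and A_1 -> B_0, cost 2.0
```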

php,performance,mathematical-optimization,file-writing

First and best advice: benchmark it! Do not take any advice you get as a definitive fact (not even mine). Performance will vary based on your operating system, hardware and PHP version. The following should be a fast approach and directly includes a micro-benchmark for you. Please test it and...

math,mathematical-optimization,genetic-algorithm

≺ is read "precedes", ≻ "succeeds". According to the well-known book Evolutionary Optimization Algorithms, the standard notation for "A dominates B" is A ≻ B: "20.1 Pareto Optimality [...] Domination: a point x* is said to dominate x if the following two conditions hold: fi(x*) <= fi(x) for all i...

r,statistics,mathematical-optimization,minimization

optim will obviously not work on this occasion because you have equality constraints. constrOptim will not work either, for the same reason (I tried converting the equality into two inequalities, i.e. greater and less than 15, but this didn't work with constrOptim). However, there is a package dedicated to this...

optimization,mathematical-optimization,linear-programming,scientific-computing,integer-programming

Edit: setting Variable2 equal to the min or max of Variable1 and Parameter.

min(Parameter, Variable1): If you are sure that Variable2 "wants" to be small in the objective function, then you just need to require Variable2 to be less than or equal to both Parameter and Variable1:

```
Variable2 <= Variable1
Variable2 <=...
```

math,mathematical-optimization,curve-fitting

Two curves need to intersect for there to be a unique solution where both curve equations hold true. If they intersect at more than one point, there exist as many solutions as there are intersections. When the two curves do not meet, the system does not have any...

matlab,mathematical-optimization

I am not familiar with your particular example but most minimizers are reasonably similar and you just have to follow a bit of a framework to get the optimizer working. This is some function minimization code which I tweaked to put up here but have used in the past. fmincon...

r,visual-studio-2012,mathematical-optimization

Here is the answer, with its source: "2.9 Can I use Rcpp with Visual Studio? Not a chance. And that is not because we are meanies but because R and Visual Studio simply do not get along. As Rcpp is all about extending R with C++ interfaces, we are bound by...

r,mathematical-optimization,portfolio,quadratic

There were two issues with the code you posted:

1. The posted Dmat is not actually symmetric; you had accidentally included the value 212.31581 instead of 12.31581.
2. The meq=2 option means your first two constraints are held at equality, meaning your weights sum to 1 and your return is exactly 5.2%.

The...

python,algorithm,mathematical-optimization,genetic-algorithm

A solution x is pareto dominated if there exists a solution y where y is no worse than x in any dimension and is strictly better in some dimension. I'll assume you're trying to minimize in all dimensions for your problem. Then you can take the following approach (the code...
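The definition translates almost directly into code; a minimal sketch (minimizing in every dimension, as assumed above):

```python
def dominates(y, x):
    """True if y is no worse than x in every dimension and strictly
    better in at least one (minimization)."""
    return (all(a <= b for a, b in zip(y, x))
            and any(a < b for a, b in zip(y, x)))

def pareto_front(points):
    """Keep only the points no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(pareto_front([(1, 2), (2, 1), (2, 2), (3, 3)]))   # [(1, 2), (2, 1)]
```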

python,numpy,scipy,mathematical-optimization

As @pv. pointed out as a comment, I made a mistake in computing the gradient. First of all, the correct (mathematical) expression for the gradient of my objective function is: (notice the minus sign.) Furthermore, my Python implementation was completely wrong, beyond the sign mistake. Here's my updated gradient: def...

matlab,mathematical-optimization,ellipse,data-fitting

This answer is not a direct fit in 3D, it instead involves first a rotation of the data so that the plane of the points coincides with the xy plane, then a fit to the data in 2D. % input: data, a N x 3 array with one set of...

r,mathematical-optimization,solver,nonlinear-optimization

Try starting values (initScal) other than zero; sum(w * 0^2) = 0 for all w.

mathematical-optimization,julia-lang

Okay, so I found the following by delving into how White created the benchmarks for the package.

```
> a.minimum
2-element Array{Float64,1}:
 1.00001
 1.00001
```

I have also found

```
> a.iterations
60
```

However, reading over Composite Types again I am still not sure how to find the list of elements of composite types. ...

r,excel,mathematical-optimization

You need to construct a vector for the objective function and a constraint matrix, finally solving with one of the R LP solvers:

```
library(lpSolve)
costs <- matrix(c(9, 10, 11, 4, 5, 10, 1, 3, 5, 7, 5, 4), nrow=3)
nr <- nrow(costs)
nc <- ncol(costs)
columns <- t(sapply(1:nc, function(x) rep(c(0, ...
```

r,function,optimization,mathematical-optimization

I think you want to minimize the square of a-fptotal...

```
> ff <- function(x) myfun(x)^2
> optimize(ff, lower=0, upper=30000)
$minimum
[1] 28356.39
$objective
[1] 1.323489e-23
```

Or find the root (i.e. where myfun(x) == 0):

```
> uniroot(myfun, interval=c(0, 30000))
$root
[1] 28356.39
$f.root
[1] 1.482476e-08
$iter
[1] 4
$init.it
[1] NA
$estim.prec
[1] 6.103517e-05
```

r,mathematical-optimization,lpsolve,lpsolveapi

I don't think you can mix these two packages (lpSolveAPI doesn't import or depend on lpSolve). Consider a simple LP in lpSolve:

```
library(lpSolve)
costs <- c(1, 2)
mat <- diag(2)
dirs <- rep(">=", 2)
rhs <- c(1, 1)
x1 <- lp("min", costs, mat, dirs, rhs)
x1
# Success: the...
```

python,python-2.7,scipy,mathematical-optimization

You could try the scipy.optimize.fmin_cobyla function. I don't know the numerical details of your problem, so you should check it on values for which you know the expected answer and see if it works for your needs; play with the tolerance arguments rhoend and rhobeg and see if you get an expected...
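A sanity check of the kind the answer recommends might look like this (the problem and numbers are made up; the unconstrained minimum (1, 2) already satisfies the constraint, so COBYLA should land there):

```python
from scipy.optimize import fmin_cobyla

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def constraint(x):
    return x[0] + x[1] - 1.0          # COBYLA expects constraints >= 0

xopt = fmin_cobyla(objective, [0.0, 0.0], [constraint],
                   rhobeg=1.0, rhoend=1e-7)
print(xopt)                            # ≈ [1.0, 2.0]
```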

python,image-processing,optimization,numpy,mathematical-optimization

Compute the distance from each non-masked point to (r,c). The maximal radius is the minimum of the distances.

```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches

h, w = 600, 700
r, c = 250, 350

def make_mask():
    x, y = np.ogrid[:h, :w]
    y -= ...
```

optimization,mathematical-optimization,linear-programming,scientific-computing,integer-programming

First create a binary variable that equals 1 if Variable1 > 0 and 0 if Variable1 < 0:

```
Variable1 <= UB * BinaryVar
LB * (1 - BinaryVar) <= Variable1
```

(If Variable1 > 0, then BinaryVar must equal 1. If Variable1 < 0, then BinaryVar must equal 0. Note that...

python,numpy,mathematical-optimization,polynomials

The floating point arithmetic is leading you astray. If you look at the determinant of that matrix a, it's something incredibly small, like 1.551864434916621e-51. If you compute the determinant with the entries as integers (and avoid floating point arithmetic weirdness) you'll see it's actually 0, and the rank of your...
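The effect is easy to reproduce with a classic rank-deficient matrix (a made-up example, not the asker's matrix): floating point gives a tiny determinant, while exact integer arithmetic gives exactly 0.

```python
import numpy as np
from fractions import Fraction

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])             # row3 = 2*row2 - row1, so rank 2

def exact_det3(m):
    """3x3 determinant by cofactor expansion in exact rational arithmetic."""
    m = [[Fraction(int(x)) for x in row] for row in m]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(np.linalg.det(a))               # something tiny, an artifact of floats
print(exact_det3(a))                  # exactly 0
```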

r,mathematical-optimization,linear

You can use constrOptim with a least-squares cost function and constraints defined such that ui %*% a >= ci. Suppose n=3. You want constraints such as:

```
a1 >= 0
a2 >= 0
a3 >= 0
-a1 - a2 - a3 >= -1
```

Thus you have to provide constrOptim the following parameters: ui...

r,apply,mathematical-optimization,sapply

list_dat is not a list, it is an array of lists. Your definition of min.RSS declares data as its argument but then refers to list.

```
# You don't really need to preallocate the list, but if you insist
list_dat <- vector(length=2, mode='list')
list_dat[[1]] <- data.frame(x=c(1,2,3,4,5,6,7,8), y=c(1,3,5,6,8,12,15,19))
list_dat[[2]] <- data.frame(x=c(1,2,3,4,5,6), y=c(1,3,5,6,8,12))
```

min.RSS...

c#,double,resharper,mathematical-optimization,epsilon

No, it is not a sensible value for your epsilon. The code you have is no different than a straight equality check. double.Epsilon is the smallest possible difference that there can be between any two doubles. There is no way for any two doubles to be closer to each other...
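The same pitfall exists in any IEEE-754 language; here is a Python sketch of why the smallest representable double is useless as a tolerance (5e-324 plays the role of C#'s double.Epsilon):

```python
import sys

denorm_min = 5e-324                     # smallest positive double (like double.Epsilon)
machine_eps = sys.float_info.epsilon    # ~2.22e-16, a sane basis for tolerances

a = 0.1 + 0.2                           # 0.30000000000000004
b = 0.3

# A tolerance of denorm_min behaves exactly like ==:
print(abs(a - b) < denorm_min)          # False
# A tolerance scaled to the operands' magnitude does what you want:
print(abs(a - b) < 4 * machine_eps * max(abs(a), abs(b)))   # True
```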

python,mathematical-optimization,linear-programming,gurobi,integer-programming

To obtain the best feasible answer so far, you should first verify that there is a feasible solution by checking the Status attribute on your model object, then query the X attribute on your variable objects. If you have named your variables (with the name parameter), a neat way...

r,loops,intervals,mathematical-optimization,maximize

This optimization problem is essentially going to be impossible for any optimizer (such as optimize()) that assumes the objective function is smooth and has a single minimum. You didn't give a reproducible example, but here's an example of an objective function that's just about as ugly as yours: set.seed(101) r...

r,mathematical-optimization,minimization,quadratic-programming

Given the similarities of the optimization problem here to your previous question, I will borrow some language directly from my answer to that question. However they are quite a bit different (the previous problem was a linear programming problem and this is a quadratic programming problem, and the constraints differ),...

algorithm,geometry,distribution,computational-geometry,mathematical-optimization

This is a linear program that can be solved by general LP solvers. It can also be modelled more specifically as a min-cost max-flow problem: Put the attractors on the left side of a bipartite graph and the points on the right side. Connect every attractor i on the left...

matlab,optimization,image-processing,photoshop,mathematical-optimization

This is my MATLAB implementation of the article:

```
function [ vBoxBlurKernel ] = GenerateBoxBlurKernel( boxBlurVar, numIterations )
% ----------------------------------------------------------------------------------------------- %
% [ boxBlurKernel ] = GenerateBoxBlurKernel( boxBlurVar, numIterations )
%   Approximates a 1D Gaussian kernel by iterative convolutions of an "Extended Box Filter".
% Input:
%   - boxBlurVar - BoxFilter Variance....
```

algorithm,artificial-intelligence,mathematical-optimization,genetic-algorithm

Since memory is hardly a problem nowadays, I'd choose a representation which is easy to work with from a programmer's point of view (readability) and which supports your algorithm. Using bit arrays will save space, but you'll end up with a lot of macros or function calls to separate the information over...

matlab,matrix,binary,integer,mathematical-optimization

The correct syntax for bintprog is X = bintprog(f,A,b,Aeq,beq). If you don't have f (which means you just want any feasible point), you can set it to []. However, for the others, your syntax is slightly wrong. I am assuming that the + in your constraints is actually a *...

python,math,while-loop,mathematical-optimization,python-3.4

The fundamental theorem of arithmetic states that every integer greater than 1 can be represented as the product of prime numbers. E.g., the number 2100 can be represented like so: 2 x 2 x 3 x 5 x 5 x 7 I've arranged it so the largest prime factor is...
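A trial-division sketch of that factorization (using the 2100 example above):

```python
def prime_factors(n):
    """Return the prime factorization of n in nondecreasing order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)       # whatever remains is itself prime
    return factors

print(prime_factors(2100))      # [2, 2, 3, 5, 5, 7]; the largest factor is last
```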

python,scipy,nlp,mathematical-optimization

The error message means that numpy is getting a vector someplace that it expects a scalar value. Your objective function is returning the parameter vector v. Do you mean instead for it to return the scalar v_model?

matlab,constraints,mathematical-optimization

During the minimization process fmincon assumes that the size of the constraints vector doesn't change. This is because in order to do the minimization it has to compute the gradient of the cost function, as well as the gradient of the constraints, and it can't see some of the elements...

java,constraints,mathematical-optimization,cplex

It is far more likely to be a flaw in your code's logic. The code snippet you give us looks OK to me - I can't see where the problem would be without more context, knowing what the values are in the m matrix etc. First, try outputting your model...

mathematical-optimization,modeling,linear-programming,gurobi

Gurobi isn't stalling. It has found a solution within 0.11% of optimal and is continuing to try to improve the bound to 0.01%. If you want to stop it sooner, you should change the parameter MIPGap. To actually make Gurobi faster, you can try changing other parameters with the Tuning...

algorithm,mathematical-optimization,particle-swarm

Your terminology is a bit off. Simple PSO is a search for a vector x that minimizes some scalar objective function E(x). It does this by creating many candidate vectors. Call them x_i. These are the "particles". They are initialized randomly in both position and rate of change, also called...
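A minimal PSO sketch matching that terminology (the coefficients are made-up but conventional; no velocity clamping or other refinements):

```python
import random

def pso(E, dim, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm: each particle x_i keeps its personal best,
    the swarm keeps a global best, and velocities blend inertia with
    cognitive (personal) and social (global) attraction."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [E(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = E(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

On a smooth test function such as E(x) = sum(x_i^2) this reliably pulls the swarm toward the origin.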

I am not sure this is necessarily a hard optimization problem, given that your search space is completely determined. Essentially, given a finite number of dimensions for w_i, you have a finite number of points in the R^w space that you want to search over. These are: c(1, 0, 0,...

algorithm,optimization,matrix,mathematical-optimization

For the sizes given, this looks like it will be very hard even to approximate. Anyway, here are a couple of ideas. When a cell needs to change from 0 to 1, I'll write +, when it needs to change in the other direction I'll write -, and when it...

mathematical-optimization,linear-programming,integer-programming,quadratic-programming

Models in this form are actually called bilinear optimization problems. The typical approach to linearizing bilinear terms is through something called the McCormick envelope. Consider variables x and y, where you want x*y in the objective of your maximization problem. If we assume x and y are bounded by xL...
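The four envelope inequalities are easy to sanity-check numerically (the bounds below are made up). Each comes from expanding one of (x-xL)(y-yL) >= 0, (x-xU)(y-yU) >= 0, (x-xU)(y-yL) <= 0, (x-xL)(y-yU) <= 0:

```python
import random

def mccormick_holds(x, y, xL, xU, yL, yU, tol=1e-9):
    """Check the four McCormick inequalities bounding w = x*y on a box."""
    w = x * y
    return (w >= xL * y + x * yL - xL * yL - tol and   # from (x-xL)(y-yL) >= 0
            w >= xU * y + x * yU - xU * yU - tol and   # from (x-xU)(y-yU) >= 0
            w <= xU * y + x * yL - xU * yL + tol and   # from (x-xU)(y-yL) <= 0
            w <= xL * y + x * yU - xL * yU + tol)      # from (x-xL)(y-yU) <= 0

rng = random.Random(1)
xL, xU, yL, yU = -2.0, 3.0, 1.0, 4.0
ok = all(mccormick_holds(rng.uniform(xL, xU), rng.uniform(yL, yU),
                         xL, xU, yL, yU)
         for _ in range(10000))
```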

r,regression,mathematical-optimization,linear-programming,minimization

Because Quw and ruw are both data, all constraints as well as the objective are linear in the yuw decision variables. As a result, this is a linear programming problem that can be approached by the lpSolve package. To abstract this out a bit, let's assume R=20 and C=10 describe...

r,ggplot2,mathematical-optimization,minimize,maximize

This is called the diet problem and it is a popular introduction to linear programming (see, e.g., the first Google hit I found for the diet problem). Linear programming solvers through packages such as lpSolve can be used to solve many variants of the diet problem. For instance, consider the...
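A toy diet problem for illustration (made-up foods and nutrient requirements; scipy.optimize.linprog is used here in place of R's lpSolve):

```python
from scipy.optimize import linprog

cost = [2.0, 3.0]              # price per unit of food 1 and food 2
# Nutrient minimums 4x1 + 2x2 >= 8 and x1 + 3x2 >= 6, flipped to <= form:
A_ub = [[-4.0, -2.0],
        [-1.0, -3.0]]
b_ub = [-8.0, -6.0]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)          # optimum at x = (1.2, 1.6), total cost 7.2
```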

python,mathematical-optimization,gurobi

I just tried adding the following line to the assignment.py example file, and it seemed to print out the runtime just fine:

```
print m.Runtime
```

Are you sure you're calling it after m.optimize(), but before calling m.update() or anything else that resets the model runtime? Try printing the runtime...